\section{Introduction}
In 1958 Dirac published his famous Hamiltonian formulation of metric General Relativity (or metric gravity, for short) \cite{Dir58}.
For a very long time thereafter, Dirac's formulation was known as the only correct Hamiltonian approach ever developed for the
metric gravity. In particular, only by using this Hamiltonian formulation, i.e., the primary and secondary constraints derived in
Dirac's approach, was one able to restore the complete and correct gauge symmetry (diffeomorphism) of the free (metric) gravitational field(s).
A different Hamiltonian formulation of the metric GR published earlier in \cite{PirSS} was burdened with numerous mistakes, which can
easily be found, e.g., in all secondary constraints derived in \cite{PirSS}. Moreover, some important steps of the Hamiltonian procedure,
developed earlier by Dirac in \cite{Dir50}, were missing in \cite{PirSS}. For instance, the closure of the Dirac procedure \cite{Dir50} was
not demonstrated at all. In reality, it is impossible to show such a closure with wrong secondary constraints, but after reading \cite{PirSS}
one can get the impression that the authors did not understand why this step is needed at all. The complete and correct version of the
Hamiltonian formulation for the metric gravity, originally proposed in \cite{PirSS}, was re-developed and corrected only in 2008 \cite{K&K}
by Kiriushcheva and Kuzmin. Below, to acknowledge this fact, we shall call the Hamiltonian formulation of the metric GR developed in \cite{K&K}
the K$\&$K approach. This approach also allows one to restore the complete diffeomorphism as the correct gauge symmetry of the free gravitational
field.
Note that after the publication of \cite{K&K} there were two different and non-contradictory Hamiltonian formulations of the metric gravity.
Therefore, it was very interesting to investigate the relations between these two approaches. In \cite{FK&K} we have shown that the Dirac
formulation of the metric GR and the `alternative' K$\&$K-formulation are related to each other by a canonical transformation of the dynamical
variables of the problem, i.e., by a transformation of the generalized `coordinates' $g_{\alpha\beta}$ and corresponding `momenta'
$\pi^{\mu\nu}$. Furthermore, such a canonical transformation has a special and relatively simple form (more details can be found in
\cite{FK&K}). After the obvious success of our analysis in \cite{FK&K} the following question immediately arose: is it possible to
derive another canonical transformation of the dynamical variables in the metric gravity which reduces the canonical Hamiltonian $H_C$
of the metric GR derived in \cite{K&K} to some relatively simple form well known in classical mechanics? If the answer is
`Yes', then it opens access to a large number of analytical and numerical methods developed for classical dynamical systems with such
Hamiltonians. Furthermore, for many similar systems the corresponding solutions and their properties are also known, and we can use
these solutions to solve `new' gravitational problems, etc. Below, to answer this question we present a new canonical transformation
of the dynamical variables, i.e., generalized coordinates and momenta, in the metric General Relativity. This new canonical transformation
is also very special and unique, since it reduces the canonical Hamiltonian $H_C$ of metric GR to the natural form which is almost
identical to the natural form of many `regular' Hamiltonians already known in the analytical mechanics of potential (dynamical) systems.
For instance, similar Hamiltonians describe a non-relativistic system of $N$ interacting point particles, where all inter-particle
forces are generated by some regular potential(s).
This paper has the following structure. In the next two Sections we introduce the $\Gamma - \Gamma$ Lagrangian ${\cal L}$ of the
metric General Relativity. By using this Lagrangian ${\cal L}$ we define the corresponding momenta $\pi^{\alpha\beta}$. At the next
stage of our method we apply the Legendre transformation to exclude velocities and construct the canonical $H_C$ and total $H_t$
Hamiltonians of the metric General Relativity. All derived formulas, equations and even the logic used in the next two Sections are
quite standard for any Hamiltonian formulation of the metric GR. Moreover, they were derived and discussed in a number of earlier studies
(see, e.g., \cite{K&K} and \cite{Fro1}). Nevertheless, these two Sections are important to keep this study
completely self-contained and united by the central idea of illustrating the power of canonical transformations for
Hamiltonian systems. The fundamental and secondary Poisson brackets are defined and calculated in Section IV. These brackets are the
main working tools needed to perform research and obtain solutions for any Hamiltonian dynamical system, including our Hamiltonian system of
the gravitational field(s) defined in the metric General Relativity. In particular, our Poisson brackets are used to investigate a
few fundamental problems currently known in metric GR. Section VI is the central part of this study, since here the canonical
Hamiltonian $H_C$ of the metric GR is reduced to its natural form. Here we also illustrate a number of advantages of the natural form
of the canonical Hamiltonian $H_C$ for numerous problems known in the metric GR. A few directions for the future development of metric GR
are also discussed there. Concluding remarks can be found in the last Section.
Now, let us introduce a few principal notations which are extensively used below. Everywhere in this study we assume that our readers
are familiar with tensor calculus, tensor notations and tensor analysis at least at the level of the excellent book by Kochin
\cite{Kochin}. Notations from that book, the rules of tensor transformations, etc., are used below without any additional reference. In
particular, in this study the notation $g_{\alpha\beta}$ stands for the covariant components of the metric tensor, which are dimensionless
quantities. Note that all components of the metric tensor $g_{\alpha\beta}$ can be considered either as the actual gravitational fields,
or as the tensor components of one (united) gravitational field. Each of the $g_{\alpha\beta}$ components is a function of the spatial and
temporal coordinates, i.e., $x^{\alpha} = ( x^{0}, x^{1}, \ldots, x^{d-1})$ in our current notations. In this study all components of the
metric tensor $g_{\alpha\beta}$ are considered as the generalized coordinates of the problem. The analogous notations $\pi^{\alpha\beta}$
designate the corresponding contravariant components of the momenta which are conjugate to the covariant components $g_{\alpha\beta}$ of the
metric tensor (see below and references \cite{K&K} and \cite{FK&K}).
The determinant of the metric tensor $g_{\alpha\beta}$ is denoted by its traditional notation $- g$, where $- g > 0$. The Latin alphabet
is used for the spatial components of vectors/tensors, while the index 0 denotes their temporal components. In this study the notation $d$
(where $d \ge 3$ \cite{X}) designates the total dimension of our space-time manifold. This means that an arbitrary Greek index $\alpha$
varies between 0 and $d - 1$, while an arbitrary Latin index varies between 1 and $d - 1$. The quantities and tensors such as
$B^{((\alpha \beta) \gamma | \mu \nu \lambda)}, I_{mnpq}$, etc., applied below, have been defined in the earlier papers \cite{Dir58},
\cite{K&K}, \cite{FK&K} and \cite{Fro1}. In this study the definitions of all these quantities and tensors are exactly the same as
in \cite{K&K} and \cite{FK&K}, and there is no need to repeat them. The short notations $g_{\alpha\beta,k}$ and $g_{\gamma\rho,0}$
are used below for the spatial and temporal derivatives, respectively, of the corresponding components of the metric tensor. Any
expression which contains a pair of identical (or repeated) indexes, where one index is covariant and the other is contravariant,
implies summation over this `dummy' index. This convention is very convenient and drastically simplifies many formulas derived in
metric GR.
\section{$\Gamma - \Gamma$ Lagrangian of the metric General Relativity}
In this Section we introduce the Lagrangian of the metric General Relativity. Formally, such a Lagrangian (or Lagrangian density)
should coincide with the integrand in the Einstein-Hilbert action integral (see, e.g., \cite{LLTF} and \cite{Carm}). However, that
Lagrangian, which is often called the Einstein-Hilbert Lagrangian, contains a number of second-order derivatives and cannot
be used directly in the principle of least action. By applying a standard procedure (see, e.g., \cite{LLTF}) one can transform
the `singular' Einstein-Hilbert Lagrangian into the `regular' $\Gamma - \Gamma$ Lagrangian, which contains no second-order derivatives
and is written in the form
\begin{eqnarray}
{\cal L} &=& \frac14 \sqrt{-g} B^{\alpha\beta\gamma\mu\nu\rho} \Bigl(\frac{\partial g_{\alpha\beta}}{\partial x^{\gamma}}\Bigr)
\Bigl(\frac{\partial g_{\mu\nu}}{\partial x^{\rho}}\Bigr) = \frac14 \sqrt{-g} B^{\alpha\beta\gamma\mu\nu\rho} g_{\alpha\beta,\gamma}
g_{\mu\nu,\rho} \label{eq05}
\end{eqnarray}
where
\begin{eqnarray}
B^{\alpha\beta\gamma\mu\nu\rho} &=& g^{\alpha\beta} g^{\gamma\rho} g^{\mu\nu} - g^{\alpha\mu} g^{\beta\nu} g^{\gamma\rho} + 2
g^{\alpha\rho} g^{\beta\nu} g^{\gamma\mu} - 2 g^{\alpha\beta} g^{\gamma\mu} g^{\nu\rho} \; \; \label{Bcoef}
\end{eqnarray}
is a homogeneous cubic function of the contravariant components of the metric tensor $g^{\alpha\beta}$. This formula can also be written
as a cubic function of the inverse powers of the covariant components of the metric tensor $g_{\alpha\beta}$. Both forms of the
$B^{\alpha\beta\gamma\mu\nu\rho}$ tensor are equivalent, since the equality $g_{\alpha\gamma} g^{\gamma\beta} = g_{\alpha}^{\beta} =
\delta_{\alpha}^{\beta}$ is always obeyed \cite{Kochin}. In this study the covariant components of the metric tensor $g_{\alpha\beta}$
are chosen as the straight set of coordinates for the Hamiltonian formulation(s) of the metric GR. In this case, the contravariant
components of the metric tensor $g^{\alpha\beta}$ form the corresponding set of dual coordinates. For tensor Hamiltonian fields these
two sets of coordinates (in fact, the two sets of canonical variables which include these coordinates) are very closely related to
each other by the Poisson brackets (see discussion below). Note also that in the right-hand side of this formula, Eq.(\ref{eq05}), the
short notation $g_{\alpha\beta,\gamma}$ designates the partial derivatives $\frac{\partial g_{\alpha\beta}}{\partial x^{\gamma}}$ with
respect to the spatial/temporal coordinates. Note that the $\Gamma - \Gamma$ Lagrangian ${\cal L}$, Eq.(\ref{eq05}), contains the partial
temporal derivatives $g_{0 \sigma,0} (= g_{\sigma 0,0})$ of first order only, and it is used below to derive the total Hamiltonian of
the metric GR. In some papers the temporal derivatives $g_{0 \sigma,0}$ were called the $\sigma$-velocities.
In reality, to derive the closed formula for the Hamiltonian of metric GR we need a slightly different form of the $\Gamma - \Gamma$
Lagrangian where all temporal derivatives (or time-derivatives) are explicitly separated from other derivatives (see,
e.g., \cite{K&K})
\begin{eqnarray}
{\cal L} = \frac14 \sqrt{-g} B^{\alpha\beta 0\mu\nu 0} g_{\alpha\beta,0} g_{\mu\nu,0} + \frac12 \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)}
g_{\alpha\beta,0} g_{\mu\nu,k} + \frac14 \sqrt{-g} B^{\alpha\beta k \mu\nu l} g_{\alpha\beta,k} g_{\mu\nu,l} \label{eq51}
\end{eqnarray}
where the notation $B^{(\alpha\beta\gamma|\mu\nu\rho)}$ means a `symmetrical' $B^{\alpha\beta\gamma\mu\nu\rho}$ quantity which is
symmetrized with respect to the permutation of the two groups of indexes, i.e.,
\begin{eqnarray}
B^{(\alpha\beta\gamma|\mu\nu\rho)} &=& \frac12 \Bigl( B^{\alpha\beta\gamma\mu\nu\rho} + B^{\mu\nu\rho\alpha\beta\gamma} \Bigr)
= g^{\alpha\beta} g^{\gamma\rho} g^{\mu\nu} - g^{\alpha\mu} g^{\beta\nu} g^{\gamma\rho} \nonumber \\
&+& 2 g^{\alpha\rho} g^{\beta\nu} g^{\gamma\mu} - g^{\alpha\beta} g^{\nu\rho} g^{\gamma\mu} - g^{\alpha\rho}
g^{\beta\gamma} g^{\mu\nu} \; \label{eq52}
\end{eqnarray}
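The symmetrization identity, Eq.(\ref{eq52}), is purely algebraic and can be checked mechanically. The following numerical sketch is our own sanity check, not part of the derivation: any symmetric matrix can stand in for the contravariant metric $g^{\alpha\beta}$, and the index contractions of Eq.(\ref{Bcoef}) are encoded with `numpy.einsum`.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4  # any d >= 3 works the same way

# a random symmetric matrix plays the role of the contravariant metric g^{ab}
A = rng.normal(size=(d, d))
g = A + A.T

# B^{abc mnr}, Eq.(Bcoef); letters a,b,c,m,n,r stand for alpha...rho
B = (np.einsum('ab,cr,mn->abcmnr', g, g, g)
     - np.einsum('am,bn,cr->abcmnr', g, g, g)
     + 2 * np.einsum('ar,bn,cm->abcmnr', g, g, g)
     - 2 * np.einsum('ab,cm,nr->abcmnr', g, g, g))

# left-hand side of Eq.(eq52): average over the swap of the two index groups
B_sym = 0.5 * (B + B.transpose(3, 4, 5, 0, 1, 2))

# right-hand side of Eq.(eq52), written term by term
rhs = (np.einsum('ab,cr,mn->abcmnr', g, g, g)
       - np.einsum('am,bn,cr->abcmnr', g, g, g)
       + 2 * np.einsum('ar,bn,cm->abcmnr', g, g, g)
       - np.einsum('ab,nr,cm->abcmnr', g, g, g)
       - np.einsum('ar,bc,mn->abcmnr', g, g, g))

assert np.allclose(B_sym, rhs)
```

The same check passes for any dimension $d \ge 3$ and any invertible symmetric matrix, which is what one expects from a term-by-term tensor identity.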
By using the Lagrangian ${\cal L}$, Eq.(\ref{eq51}), and the standard definition of a momentum as the partial derivative of the Lagrangian
with respect to the corresponding velocity (see, e.g., \cite{Dir64}), we obtain the explicit formulas for all components of the momentum
tensor $\pi^{\gamma\sigma}$
\begin{eqnarray}
\pi^{\gamma\sigma} = \frac{\partial {\cal L}}{\partial g_{\gamma\sigma,0}} = \frac{1}{2} \sqrt{-g} B^{((\gamma\sigma) 0|\mu\nu 0)}
g_{\mu\nu, 0} + \frac{1}{2} \sqrt{-g} B^{((\gamma\sigma) 0|\mu\nu k)} g_{\mu\nu, k} \; \; \; \label{mom}
\end{eqnarray}
The first term in the right-hand side of the last equation can be written in the form
\begin{eqnarray}
\frac{1}{2} \sqrt{-g} B^{((\gamma\sigma)0|\mu\nu 0)} g_{\mu\nu, 0} = \frac{1}{2} \sqrt{-g} g^{00} E^{\mu\nu\gamma\sigma} g_{\mu\nu, 0}
\; \; \label{B}
\end{eqnarray}
where the Dirac tensors $E^{\mu\nu\gamma\sigma}$ and $e^{\mu \nu}$ are
\begin{eqnarray}
E^{\mu \nu \gamma \rho} = e^{\mu \nu} e^{\gamma \rho} - e^{\mu \gamma} e^{\nu \rho} \; \; , \; \; {\rm and} \; \; \; e^{\mu \nu} =
g^{\mu \nu} - \frac{g^{0 \mu} g^{0 \nu}}{g^{00}} \; \; \; \label{E}
\end{eqnarray}
and it is easy to check that $E^{\mu\nu\gamma\sigma} = E^{\gamma\sigma\mu\nu}$ and $e^{\mu \nu} = e^{\nu \mu}$. Also, as follows
directly from the formula, Eq.(\ref{E}), the tensor $e^{\mu \nu}$ equals zero if either index $\mu$, or index $\nu$ (or both) equals
zero. The same statement is true for the Dirac $E^{\mu\nu\gamma\sigma}$ tensor, i.e., $E^{0\nu\gamma\sigma} = 0, E^{\mu 0\gamma\sigma}
= 0, E^{\mu\nu 0\sigma} = 0$ and $E^{\mu\nu\gamma 0} = 0$. The $E^{pqkl}$ quantity is called the space-like Dirac tensor of the fourth
rank. Note that the components of this space-like tensor $E^{p q k l}$ do not, in general, vanish. Furthermore, the space-like tensor
$E^{p q k l}$ is a positive-definite and invertible tensor. Its inverse space-like tensor $I_{m n p q}$ is also a positive-definite
and invertible space-like tensor of the fourth rank, which is written in the form
\begin{equation}
I_{m n p q} = \frac{1}{d - 2} g_{m n} g_{p q} - g_{m p} g_{n q} \label{I}
\end{equation}
This tensor plays a very important role in our Hamiltonian analysis (see below). From here we can write $I_{m n p q} E^{p q k l} =
g^{k}_{m} g^{l}_{n} = \delta^{k}_{m} \delta^{l}_{n}$, where $g^{\alpha}_{\beta} = \delta^{\alpha}_{\beta}$ is the substitution
tensor \cite{Kochin}, while the symbol $\delta^{\alpha}_{\beta}$ denotes the Kronecker delta (it equals unity for $\alpha = \beta$
and zero otherwise).
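All of the objects entering Eqs.(\ref{E}) and (\ref{I}) are explicit in the metric, so these properties are easy to confirm numerically. The sketch below is our own check, with a perturbed Minkowski-type test metric chosen only so that $g^{00} \neq 0$ and $\det g < 0$: it verifies that $e^{\mu\nu}$ vanishes whenever a temporal index appears, that $g_{mn} e^{mn} = d - 1$ (the trace relation derived later, Eq.(\ref{d-1})), and that $I_{mnpq}$ inverts $E^{pqkl}$ exactly as stated.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
# Lorentzian-like test metric: eta plus a small symmetric perturbation
eta = np.diag([-1.0] + [1.0] * (d - 1))
P = 0.1 * rng.normal(size=(d, d))
g_lo = eta + P + P.T          # covariant g_{ab}
g_up = np.linalg.inv(g_lo)    # contravariant g^{ab}

# Dirac tensor e^{mu nu}, Eq.(E)
e = g_up - np.outer(g_up[0], g_up[0]) / g_up[0, 0]
assert np.allclose(e[0], 0) and np.allclose(e[:, 0], 0)  # temporal rows/columns vanish

# trace relation: g_{mn} e^{mn} = d - 1, Eq.(d-1)
assert np.isclose(np.einsum('mn,mn->', g_lo, e), d - 1)

# space-like blocks (indexes 1..d-1)
e_s, g_s = e[1:, 1:], g_lo[1:, 1:]

# E^{pqkl}, Eq.(E), and its inverse I_{mnpq}, Eq.(I)
E = np.einsum('pq,kl->pqkl', e_s, e_s) - np.einsum('pk,ql->pqkl', e_s, e_s)
I = np.einsum('mn,pq->mnpq', g_s, g_s) / (d - 2) - np.einsum('mp,nq->mnpq', g_s, g_s)

# I_{mnpq} E^{pqkl} = delta^k_m delta^l_n
prod = np.einsum('mnpq,pqkl->mnkl', I, E)
target = np.einsum('mk,nl->mnkl', np.eye(d - 1), np.eye(d - 1))
assert np.allclose(prod, target)
```

The key algebraic ingredients are that the spatial block $g_{mn}$ of the covariant metric is the inverse of $e^{pq}$ (since $e^{0\nu} = 0$), and that $g_{pq} e^{pq} = d - 1$; together they collapse $I_{mnpq} E^{pqkl}$ to the product of Kronecker deltas.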
First, let us consider the `regular' case when in Eq.(\ref{mom}) $\gamma = p$ and $\sigma = q$. In this case one finds the following
formulas for the doubly space-like components of the momentum tensor
\begin{eqnarray}
\pi^{pq} = \frac{\partial {\cal L}}{\partial g_{p q,0}} = \frac{1}{2} \sqrt{-g} B^{((p q) 0|\mu\nu 0)} g_{\mu\nu,0} + \frac{1}{2}
\sqrt{-g} B^{((p q) 0|\mu\nu k)} g_{\mu\nu, k} \; \; \label{momenta}
\end{eqnarray}
which hold for each pair of $(pq)-$indexes (or $(mn)-$indexes). The tensor in the right-hand side of this equation is invertible, and the
velocity $g_{m n, 0}$ is explicitly expressed as a linear function (or linear combination) of the space-like components $\pi^{pq}$ of the
momentum tensor:
\begin{eqnarray}
g_{mn, 0} &=& \frac{1}{g^{00}} \Bigl( \frac{2}{\sqrt{-g}} I_{m n p q} \pi^{pq} - I_{m n p q} B^{((pq) 0|\mu\nu k)} g_{\mu\nu, k} \Bigr)
\nonumber \\
&=& \frac{1}{g^{00}} I_{m n p q} \Bigl( \frac{2}{\sqrt{-g}} \pi^{pq} - B^{((pq) 0|\mu\nu k)} g_{\mu\nu, k} \Bigr) \label{veloc}
\end{eqnarray}
where the Dirac tensor $I_{m n p q}$ is defined by Eq.(\ref{I}). As follows from Eqs.(\ref{momenta}) and (\ref{veloc}), for the space-like
components of the metric tensor $g_{pq}$ and the corresponding momenta $\pi^{mn}$ one finds no principal difference from the Hamiltonian
dynamical systems which are routinely studied in classical mechanics. Indeed, these space-like components of momenta and the corresponding
velocities are related to each other by a very simple (linear) equation. However, even these components of momenta $\pi^{pq}$ are not related
to the corresponding velocities $g_{pq,0}$ directly, i.e., by one equation and/or by one scalar parameter, e.g., by some `effective' mass.
Instead, for gravitational field(s) the corresponding relation, Eq.(\ref{veloc}), has a matrix form, and one space-like component of the
momenta $\pi^{mn}$ depends upon a quasi-linear combination \cite{QL} of different velocities $g_{pq,0}$ (and vice versa). Nevertheless, even
such a `non-traditional' matrix definition of momenta works very well in actual applications and, in particular, allows one to develop a
complete and non-contradictory Hamiltonian approach for the metric GR.
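The inversion chain Eq.(\ref{momenta}) $\rightarrow$ Eq.(\ref{veloc}) can be tested end-to-end. The sketch below is our own numerical check with a perturbed Minkowski-type test metric; it assumes nothing beyond Eqs.(\ref{Bcoef}), (\ref{eq52}), (\ref{I}) and (\ref{momenta}): random symmetric velocities and spatial derivatives are generated, the space-like momenta $\pi^{pq}$ are computed, and the space-like velocities are then recovered exactly via Eq.(\ref{veloc}).

```python
import numpy as np

rng = np.random.default_rng(3)
d = 4
eta = np.diag([-1.0] + [1.0] * (d - 1))
P = 0.1 * rng.normal(size=(d, d))
g_lo = eta + P + P.T
g_up = np.linalg.inv(g_lo)
sg = np.sqrt(-np.linalg.det(g_lo))   # sqrt(-g), det g < 0 for this test metric

# B^{abc mnr}, Eq.(Bcoef), then the double symmetrization B^{((ab)c|mnr)}
B = (np.einsum('ab,cr,mn->abcmnr', g_up, g_up, g_up)
     - np.einsum('am,bn,cr->abcmnr', g_up, g_up, g_up)
     + 2 * np.einsum('ar,bn,cm->abcmnr', g_up, g_up, g_up)
     - 2 * np.einsum('ab,cm,nr->abcmnr', g_up, g_up, g_up))
Bs = 0.5 * (B + B.transpose(3, 4, 5, 0, 1, 2))        # group swap (abc|mnr)
Bss = 0.5 * (Bs + Bs.transpose(1, 0, 2, 3, 4, 5))     # ((ab)c|mnr)

# random symmetric velocities g_{mu nu,0} and spatial derivatives g_{mu nu,k}
V = rng.normal(size=(d, d)); V = V + V.T
D = rng.normal(size=(d, d, d - 1)); D = D + D.transpose(1, 0, 2)

# space-like momenta pi^{pq}, Eq.(momenta)
pi = (0.5 * sg * np.einsum('pqmn,mn->pq', Bss[1:, 1:, 0, :, :, 0], V)
      + 0.5 * sg * np.einsum('pqmnk,mnk->pq', Bss[1:, 1:, 0, :, :, 1:], D))

# inversion, Eq.(veloc), with I_{mnpq} from Eq.(I)
g_s = g_lo[1:, 1:]
I = np.einsum('mn,pq->mnpq', g_s, g_s) / (d - 2) - np.einsum('mp,nq->mnpq', g_s, g_s)
rhs = 2.0 / sg * pi - np.einsum('pqmnk,mnk->pq', Bss[1:, 1:, 0, :, :, 1:], D)
V_rec = np.einsum('mnpq,pq->mn', I, rhs) / g_up[0, 0]

# only the space-like velocities are recoverable, and they come back exactly
assert np.allclose(V_rec, V[1:, 1:])
```

Note that $V$ contains random temporal components $g_{0\sigma,0}$ as well; they drop out of $\pi^{pq}$ automatically, which is a numerical preview of the singular case discussed next.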
In the second, `non-regular' (or singular) case, when $\gamma = 0$, the first term in the right-hand side of Eq.(\ref{mom}) equals zero
and this equation takes the form
\begin{eqnarray}
\pi^{0\sigma} = \frac{\partial {\cal L}}{\partial g_{0\sigma,0}} = \frac{1}{2} \sqrt{-g} B^{((0\sigma) 0|\mu\nu k)} g_{\mu\nu, k}
\; \; \; \label{constr}
\end{eqnarray}
which contains no velocities at all. Furthermore, this equation, Eq.(\ref{constr}), determines the momentum $\pi^{0\sigma}$ as a polynomial
(cubic) function of the contravariant components of the metric tensor $g^{\alpha\beta}$ and a linear function of both the $\sqrt{- g}$
value and the spatial derivatives of the covariant components $g_{\mu\nu, k}$ of the metric tensor. It is clear that such a situation can be
found neither in classical mechanics, nor in quantum mechanics of arbitrary systems of particles. However, for actual physical fields
similar situations arise quite often. The physical meaning of Eq.(\ref{constr}) is simple and can be expressed in the following words.
The function
\begin{eqnarray}
\phi^{0\sigma} = \pi^{0\sigma} - \frac{1}{2} \sqrt{-g} B^{((0\sigma) 0|\mu\nu k)} g_{\mu\nu, k} \; \label{primary}
\end{eqnarray}
must equal zero at all times, i.e., it does not change during actual physical motions (or the time-evolution) of the gravitational field.
Dirac in \cite{Dir50} proposed to write such equalities in the symbolic form $\phi^{0\sigma} \approx 0$ and called these $d$ functions
$\phi^{0\sigma}$ (for $\sigma = 0, 1, \ldots, d - 1$), Eq.(\ref{primary}), the primary constraints (see also \cite{Dir64}).
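The velocity-independence behind Eq.(\ref{constr}) can also be seen numerically. In the sketch below (our own check, with the same kind of perturbed Minkowski-type test metric as before) the would-be velocity coefficient of $\pi^{0\sigma}$, i.e., $B^{((0\sigma) 0|\mu\nu 0)} g_{\mu\nu,0}$, vanishes identically for arbitrary symmetric $g_{\mu\nu,0}$; this is why the $d$ functions $\phi^{0\sigma}$ are constraints rather than equations for velocities.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
eta = np.diag([-1.0] + [1.0] * (d - 1))
P = 0.1 * rng.normal(size=(d, d))
g_up = np.linalg.inv(eta + P + P.T)   # contravariant g^{ab}

# B^{abc mnr}, Eq.(Bcoef), then the double symmetrization B^{((ab)c|mnr)}
B = (np.einsum('ab,cr,mn->abcmnr', g_up, g_up, g_up)
     - np.einsum('am,bn,cr->abcmnr', g_up, g_up, g_up)
     + 2 * np.einsum('ar,bn,cm->abcmnr', g_up, g_up, g_up)
     - 2 * np.einsum('ab,cm,nr->abcmnr', g_up, g_up, g_up))
Bs = 0.5 * (B + B.transpose(3, 4, 5, 0, 1, 2))
Bss = 0.5 * (Bs + Bs.transpose(1, 0, 2, 3, 4, 5))

# arbitrary symmetric velocities g_{mu nu,0}
V = rng.normal(size=(d, d))
V = V + V.T

# velocity part of pi^{0 sigma}: it vanishes for every sigma
coeff = np.einsum('smn,mn->s', Bss[0, :, 0, :, :, 0], V)
assert np.allclose(coeff, 0)
```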
\section{Total and canonical Hamiltonians of metric General Relativity}
Now, by applying the Legendre transformation to the known $\Gamma - \Gamma$ Lagrangian ${\cal L}$ of the metric GR, Eq.(\ref{eq51}),
and excluding all space-like field-velocities $g_{mn,0}$, we can derive the following formulas for the total and canonical Hamiltonians
of the metric GR. In particular, the total Hamiltonian $H_t$ of the gravitational field in metric GR derived from the $\Gamma - \Gamma$
Lagrangian ${\cal L}$, Eq.(\ref{eq05}), is written in the form
\begin{eqnarray}
H_t = \pi^{\alpha\beta} g_{\alpha\beta,0} - {\cal L} = H_C + g_{0\sigma,0} \phi^{0\sigma} \label{eq1}
\end{eqnarray}
where $\phi^{0\sigma} = \pi^{0\sigma} - \frac{1}{2}\sqrt{-g} B^{\left( \left(0\sigma\right) 0\mid\mu\nu k\right)} g_{\mu\nu,k}$ are
the primary constraints, $g_{0\sigma,0}$ are the corresponding $\sigma$-velocities, and $H_C$ is the canonical Hamiltonian of
metric GR
\begin{eqnarray}
& &H_C = \frac{1}{\sqrt{-g} g^{00}} I_{mnpq} \pi^{mn} \pi^{pq} - \frac{1}{g^{00}} I_{mnpq} \pi^{mn} B^{(p q 0|\mu \nu k)}
g_{\mu\nu,k} \label{eq5} \\
&+& \frac14 \sqrt{-g} \Bigl[ \frac{1}{g^{00}} I_{mnpq} B^{((mn)0|\mu\nu k)} B^{(pq0|\alpha\beta l)} -
B^{\mu\nu k \alpha\beta l}\Bigr] g_{\mu\nu,k} g_{\alpha\beta,l} \nonumber
\end{eqnarray}
which does not contain any primary constraint $\phi^{0\sigma}$. All $d$ primary constraints $\phi^{0\sigma}$, where $\sigma = 0, 1, \ldots,
d - 1$, are included in the total Hamiltonian $H_t$, Eq.(\ref{eq1}). It should be emphasized again that these primary constraints arise during
our transition from the $\Gamma - \Gamma$ Lagrangian ${\cal L}$, Eq.(\ref{eq05}), to the Hamiltonians $H_t$ and $H_C$, since the $\Gamma -
\Gamma$ Lagrangian ${\cal L}$ is a linear (not quadratic!) function of the $d$ velocities $g_{0\sigma,0}$, so that each of the $d$ momenta
$\pi^{0\sigma} = \frac{\partial {\cal L}}{\partial g_{0\sigma,0}}$, which include at least one temporal index, contains no velocities
\cite{K&K}. The total and canonical Hamiltonians $H_t$ and $H_C$ are scalar functions defined in the $d (d + 1)$-dimensional phase space
$\Bigl\{ g_{\alpha\beta}, \pi^{\mu\nu} \Bigr\}$, where the components of the metric tensor $g_{\alpha\beta}$ and of the momentum tensor
$\pi^{\mu\nu}$ have been chosen as the basic dynamical variables. Such a phase space
is, in fact, a symplectic space, and the corresponding symplectic structure is determined by the Poisson brackets between its basic dynamical
variables, i.e., the coordinates $g_{\alpha\beta}$ and momenta $\pi^{\mu\nu}$. Now we need to define the Poisson brackets (or commutators)
which play a great role in any Hamiltonian formulation developed for the metric GR. These Poisson brackets are introduced in the next Section.
\section{Poisson brackets}
Let us define the Poisson brackets (or PB, for short) which are absolutely crucial for the creation, development and applications of any
Hamiltonian-based approach in the metric General Relativity. From now on we shall consider only Hamiltonian approaches (in metric GR) which
are canonically related either to the K$\&$K-approach \cite{K&K}, or to the Dirac approach \cite{Dir58}. Note again that these two
Hamiltonian formulations are canonically related to each other (for more details, see \cite{FK&K}). Therefore, it is possible to obtain and
present the basic (or fundamental) set of Poisson brackets only for one of these two Hamiltonian formulations, e.g., for the K$\&$K-approach.
Analogous Poisson brackets for other Hamiltonian formulations of metric GR can be derived from these `fundamental' values known in the
K$\&$K-approach. The basic Poisson brackets between $\frac{d(d + 1)}{2}$ components of the momentum tensor $\pi^{\mu\nu}$ and $\frac{d(d +
1)}{2}$ `coordinates' $g_{\alpha\beta}$ in the K$\&$K-approach are \cite{K&K}
\begin{eqnarray}
[ g_{\alpha\beta}, \pi^{\mu\nu}] = - [ \pi^{\mu\nu}, g_{\alpha\beta}] = g_{\alpha\beta} \pi^{\mu\nu} - \pi^{\mu\nu} g_{\alpha\beta}
= \frac12 \Bigl(g^{\mu}_{\alpha} g^{\nu}_{\beta} + g^{\nu}_{\alpha} g^{\mu}_{\beta}\Bigr) = \frac12 \Bigl(\delta^{\mu}_{\alpha}
\delta^{\nu}_{\beta} + \delta^{\nu}_{\alpha} \delta^{\mu}_{\beta}\Bigr) = \Delta^{\mu\nu}_{\alpha\beta} \; \; \; , \label{eq15}
\end{eqnarray}
where $g^{\mu}_{\alpha} = \delta^{\mu}_{\alpha}$ is the substitution tensor \cite{Kochin} and the symbol $\delta^{\mu}_{\beta}$ is the Kronecker
delta, while the notation $\Delta^{\mu\nu}_{\alpha\beta}$ stands for the gravitational (or tensor) delta-function. All other fundamental
Poisson brackets between the basic dynamical variables of the metric GR equal zero identically, i.e., $[ g_{\alpha\beta}, g_{\mu\nu}] = 0$ and
$[ \pi^{\alpha\beta}, \pi^{\mu\nu}] = 0$. This set of $\frac{d^{2}(d^{2} - 1)}{4}$ Poisson brackets is of fundamental value, since these
PB define the unique symplectic structure directly related to the Riemannian structure of the original $d (d + 1)$-dimensional tensor phase
space and to the metric tensor $g_{\alpha\beta}$. We hope that readers are familiar with the general properties of Poisson brackets (see,
e.g., \cite{Gant} - \cite{GF}).
In general, the $\frac{d^{2}(d^{2} - 1)}{4}$ Poisson brackets mentioned above are sufficient to operate successfully in any correct Hamiltonian
approach developed for the metric GR. However, in many applications it is crucially important to determine other Poisson brackets, which are also
called the secondary PB. The secondary PB are calculated between different analytical functions of the basic dynamical variables, i.e., the
coordinates and momenta, and they arise quite often in actual calculations. In general, it is difficult and time-consuming to derive the explicit
formulas for the secondary PB every time they are needed. Furthermore, in actual applications one usually needs to determine a few hundred
different Poisson brackets. Here we present a number of additional (or secondary) Poisson brackets which are sufficient for our purposes in this
study. The first additional group of secondary Poisson brackets is
\begin{eqnarray}
[ g^{\alpha\beta}, \pi^{\mu\nu}] = - \frac12 \Bigl( g^{\alpha\mu} g^{\beta\nu} + g^{\alpha\nu} g^{\beta\mu} \Bigr) \; \; {\rm and} \;
\; [ g^{\alpha\beta}, g_{\mu\nu}] = 0 \; . \label{eq151}
\end{eqnarray}
which include the contravariant components of the metric tensor $g^{\alpha\beta}$. Note that the $g^{\alpha\beta}$ tensor is the inverse of the
$g_{\alpha\beta}$ tensor, since the equations $g_{\alpha\gamma} g^{\gamma\beta} = g_{\alpha}^{\beta} = \delta_{\alpha}^{\beta} =
g^{\beta\gamma} g_{\gamma\alpha}$ are always obeyed between the components of the metric tensor. Therefore, we need to check the correctness of
Eq.(\ref{eq151}) in the case of the direct replacement $g^{\alpha\beta} \rightarrow \frac{1}{g^{\alpha\beta}}$. The second sub-equation in
Eq.(\ref{eq151}), i.e., $[ \frac{1}{g^{\alpha\beta}}, g_{\mu\nu}] = 0$, does not change its form, while for the first sub-equation one finds
\begin{eqnarray}
[ g^{\alpha\beta}, \pi^{\mu\nu}] = [ \frac{1}{g_{\alpha\beta}}, \pi^{\mu\nu}] = - \Bigl(\frac{1}{g_{\alpha\beta}}\Bigr)^{2}
[ g_{\alpha\beta}, \pi^{\mu\nu}] = - (g^{\alpha\beta})^{2} \Delta^{\mu\nu}_{\alpha\beta} = - \frac12 \Bigl( g^{\alpha\mu} g^{\beta\nu} +
g^{\alpha\nu} g^{\beta\mu} \Bigr) \; \; , \label{eq151a}
\end{eqnarray}
which coincides with the first equality in Eq.(\ref{eq151}) and we do not have any contradiction here.
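Eq.(\ref{eq151}) can also be verified directly from the fundamental brackets, Eq.(\ref{eq15}), without any replacement argument. In the symbolic sketch below (our own check, done for $d = 3$, the smallest dimension allowed here; the computation itself is dimension-independent) the bracket $[F, \pi^{\mu\nu}]$ is realized as a symmetrized partial derivative over the independent components of $g_{\alpha\beta}$, which reproduces $[g_{\alpha\beta}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta}$ exactly.

```python
import sympy as sp

d = 3
sym = {}
for a in range(d):
    for b in range(a, d):
        sym[(a, b)] = sym[(b, a)] = sp.Symbol(f'g{a}{b}')
g = sp.Matrix(d, d, lambda a, b: sym[(a, b)])
ginv = g.inv()

def bracket_pi(F, m, n):
    # [F, pi^{mn}] from the fundamental PB, Eq.(eq15): on the independent
    # symmetric components this is dF/dg_{mn}, halved off the diagonal
    # because the single symbol g_{mn} fills the two slots (m,n) and (n,m)
    dF = sp.diff(F, sym[(m, n)])
    return dF if m == n else dF / 2

# verify [g^{ab}, pi^{mn}] = -1/2 (g^{am} g^{bn} + g^{an} g^{bm}), Eq.(eq151)
for a in range(d):
    for b in range(d):
        for m in range(d):
            for n in range(d):
                lhs = bracket_pi(ginv[a, b], m, n)
                rhs = -(ginv[a, m] * ginv[b, n] + ginv[a, n] * ginv[b, m]) / 2
                assert sp.simplify(lhs - rhs) == 0
```

The same realization of the bracket reproduces every secondary PB quoted in this Section, so it is a convenient way to double-check any new bracket before using it.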
The second set of additional Poisson brackets arises if one explicitly introduces the dual system of dynamical variables $\{ g^{\alpha\beta},
\pi_{\mu\nu}\}$, which always exists for any tensor Hamiltonian system. When I started to write this paper, one of my goals was to avoid the
use of the components of the `dual momentum' $\pi_{\mu\nu}$ as dynamical variables. However, after a number of attempts I gave up and arrived at
the following conclusion: to create a truly correct and non-contradictory Hamiltonian formulation for a dynamical tensor system we have to
deal with the two different $d (d + 1)-$dimensional sets of dynamical variables: (a) the straight set $\{ g_{\alpha\beta}, \pi^{\mu\nu}\}$,
and (b) the dual set $\{ g^{\alpha\beta}, \pi_{\mu\nu}\}$. The Poisson brackets between all dynamical variables from these two sets must be
derived and carefully checked for consistency. In those cases when all these Poisson brackets (for the dynamical variables from the straight
and dual sets) do not contradict each other, we can say that our newly created Hamiltonian formulation is truly covariant, self-sustained and
correct. Otherwise, one needs to re-define all momenta and repeat the whole Hamiltonian procedure from the very beginning. The necessity
to deal with the two sets of dynamical variables simultaneously is an important difference between the Hamiltonian procedures developed for
affine vector spaces and for Riemannian tensor spaces. In other words, the simultaneous presence of two sets of dynamical variables (the straight
and dual sets) is a common feature of all Hamiltonian formulations for tensor fields. It can be shown that only by dealing with both the straight
and dual sets of dynamical variables can we guarantee the internal covariance and self-sustainability of our Hamiltonian approach developed
for the metric GR.
The fact that we need to operate with both the straight and dual systems of dynamical variables in any Hamiltonian formulation developed for
tensor dynamical systems can be illustrated by the following example. Let us suppose that we have defined the momentum as above, i.e., we have
introduced the contravariant momentum tensor $\pi^{\rho\sigma}$. Then, by using the metric tensor $g_{\alpha\beta}$ we can introduce the
new momentum tensor $\pi_{\mu\nu} = g_{\mu\rho} g_{\nu\sigma} \pi^{\rho\sigma} = g_{\mu\rho} \pi^{\rho\sigma} g_{\nu\sigma} =
\pi^{\rho\sigma} g_{\mu\rho} g_{\nu\sigma}$, which is a covariant tensor of second rank. The same transition ($\pi^{\rho\sigma} \rightarrow
\pi_{\rho\sigma}$) changes the corresponding Poisson brackets. Some terms in the `new' PB are transformed easily, while the analogous
transformations for other terms are hard to find. Nevertheless, all these new PB must be determined correctly. Arguments such as `we do not
want to introduce this new momentum tensor' cannot be considered as serious, since, if the momentum $\pi^{\rho\sigma}$ is a true contravariant
tensor, then it should be transformed as a tensor. In reality, one can take the covariant components of this new momentum $\pi_{\mu\nu}$ as the
new $\frac{d(d + 1)}{2}$ dynamical variables. The corresponding coordinates in this `new' Hamiltonian formulation are chosen as the components
of the contravariant metric tensor $g^{\alpha\beta}$. Briefly, these dynamical variables $\{g^{\alpha\beta}, \pi_{\rho\sigma} \}$ lead to
another `new' Hamiltonian formulation of the metric GR. It is clear that both Hamiltonian formulations developed with these two sets of basic
dynamical variables must essentially be the same, or at least, they must be related to each other by a canonical transformation (otherwise,
both of them are wrong). Let us present the Poisson brackets for the dual set of dynamical variables $\{ g^{\alpha\beta}, \pi_{\mu\nu}\}$
\begin{eqnarray}
[ g_{\alpha\beta}, \pi_{\mu\nu}] = \frac12 \Bigl( g_{\alpha\mu} g_{\beta\nu} + g_{\alpha\nu} g_{\beta\mu} \Bigr) \; \; {\rm and} \; \;
[ g^{\alpha\beta}, \pi_{\mu\nu}] = - \frac12 \Bigl( g^{\alpha}_{\mu} g^{\beta}_{\nu} + g^{\alpha}_{\nu} g^{\beta}_{\mu} \Bigr) =
- \Delta^{\alpha\beta}_{\mu\nu} \; \; . \label{eq153}
\end{eqnarray}
and also $[ g^{\alpha\beta}, g^{\mu\nu}] = 0 , [ \pi_{\alpha\beta}, \pi_{\mu\nu}] = 0$ and $[ g_{\alpha\beta}, g^{\mu\nu}] = 0$. The last
Poisson bracket which we want to present here is
\begin{equation}
[ \pi_{\alpha\beta}, \pi^{\mu\nu}] = \pi_{\alpha}^{\mu} \delta_{\beta}^{\nu} + \delta_{\alpha}^{\mu} \pi_{\beta}^{\nu} \; \; \; , \; \;
\label{pipi}
\end{equation}
This means that the covariant and contravariant components of the momentum tensor do not commute with each other. By using these Poisson
brackets one can show that both the straight and dual sets of dynamical variables produce almost identical Hamiltonian formulations of metric
gravity. This means that each of these two Hamiltonian formulations of the metric GR (in the straight and dual spaces) is correct.
Now, let us present a few Poisson brackets which are very useful in actual calculations. Let $g$ (where $g < 0$) be the determinant of the
metric tensor $g_{\alpha\beta}$ and let $F(g)$ be an arbitrary analytical function of $g$. In these notations one finds
\begin{eqnarray}
[ F(g), \pi^{\alpha\beta}] = \Bigl( \frac{\partial F}{\partial g} \Bigr) g g^{\alpha\beta} \; \; \; {\rm and} \; \; \;
[ \sqrt{- g}, \pi^{\alpha\beta}] = - \frac{1}{2 \sqrt{- g}} g g^{\alpha\beta} = \frac12 \sqrt{- g} g^{\alpha\beta} \; , \; \label{eq154}
\end{eqnarray}
for $F(g) = \sqrt{- g}$, since the determinant $g$ is negative. Analogously, for the $\pi_{\alpha\beta}$ momentum we obtain
\begin{eqnarray}
[ F(g), \pi_{\alpha\beta}] = \Bigl( \frac{\partial F}{\partial g} \Bigr) g g_{\alpha\beta} \; \; \; {\rm and} \; \; \;
[ \sqrt{- g}, \pi_{\alpha\beta}] = - \frac{1}{2 \sqrt{- g}} g g_{\alpha\beta} = \frac12 \sqrt{- g} g_{\alpha\beta} \; \label{eq155}
\end{eqnarray}
These formulas lead to the following expressions
\begin{eqnarray}
[ \frac{1}{\sqrt{- g}}, \pi^{\alpha\beta}] = - \frac{1}{2 \sqrt{- g}} g^{\alpha\beta} \; \; \; {\rm and} \; \; \;
[ \frac{1}{\sqrt{- g}}, \pi_{\alpha\beta}] = - \frac{1}{2 \sqrt{- g}} g_{\alpha\beta} \; \label{eq155a}
\end{eqnarray}
which are important for our calculations performed in the next Sections. All other Poisson brackets needed in calculations can be determined
with the use of our PB presented in Eqs.(\ref{eq15}) - (\ref{eq155a}). A large number of Poisson brackets which are often needed in various
problems of metric GR can be found in our paper \cite{FroUnp}.
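Brackets of this type reduce, at each space point, to symmetrized derivatives with respect to the metric components, $[ F(g), \pi^{\alpha\beta}] = \bigl(\partial F/\partial g_{\mu\nu}\bigr) \Delta^{\alpha\beta}_{\mu\nu}$, and can therefore be tested numerically by finite differences. The following sketch (our own illustration, assuming only numpy) checks the bracket $[\sqrt{-g}, \pi^{\alpha\beta}]$ from Eq.(\ref{eq154}) on a randomly perturbed Minkowski metric:

```python
import numpy as np

rng = np.random.default_rng(1)
d, h = 4, 1e-6
sym = lambda A: 0.5 * (A + A.T)
# random symmetric metric close to Minkowski, so that det(g) < 0
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * sym(rng.normal(size=(d, d)))
det, ginv = np.linalg.det(g), np.linalg.inv(g)
assert det < 0

F = lambda m: np.sqrt(-np.linalg.det(m))  # F(g) = sqrt(-g)

# [F(g), pi^{ab}] = (dF/dg_{mu nu}) Delta^{ab}_{mu nu}: a symmetrized derivative
pb_numeric = np.zeros((d, d))
for a in range(d):
    for b in range(d):
        S = np.zeros((d, d)); S[a, b] += 0.5; S[b, a] += 0.5  # Delta^{ab} as a matrix
        pb_numeric[a, b] = (F(g + h * S) - F(g - h * S)) / (2 * h)

pb_closed = 0.5 * np.sqrt(-det) * ginv  # (1/2) sqrt(-g) g^{ab}
assert np.allclose(pb_numeric, pb_closed, atol=1e-5)
print("verified: [sqrt(-g), pi^{ab}] = (1/2) sqrt(-g) g^{ab}")
```

The same loop, with $F$ replaced by $1/\sqrt{-g}$, reproduces Eq.(\ref{eq155a}) as well.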
Another example is slightly more complicated and includes the tensor $e^{\mu \nu}$ defined above. From the explicit formulas for the
components of the $e^{\mu \nu}$ tensor, Eq.(\ref{E}), one finds that the only non-zero elements of this tensor are located in the space-like
corner of the total $e^{\mu \nu}$ tensor. These non-zero elements form the space-like $e^{pq}$ tensor (or space-like part of the total
$e^{\mu \nu}$ tensor), which is often called the space-like Dirac tensor (or space-like tensor of the second rank). For this tensor one
easily finds the following useful relation
\begin{eqnarray}
g_{\alpha\beta} e^{\alpha\beta} = g_{\alpha\beta} g^{\alpha\beta} - g_{\alpha\beta} \Bigl(\frac{g^{\alpha 0} g^{\beta 0}}{g^{00}}\Bigr)
= d - g_{\beta}^{0} \; \frac{g^{\beta 0}}{g^{00}} = d - \frac{g^{00}}{g^{00}} = d - 1 = g_{mn} e^{mn} \; \; \label{d-1}
\end{eqnarray}
where $g_{\alpha\beta} g^{\alpha\beta} = d$ and $d$ is the total dimension of our space-time continuum. By using our formulas for the Poisson
brackets obtained above we derive the following formulas
\begin{eqnarray}
[ e^{pq}, \pi^{\alpha\beta}] &=& - \frac12 \Bigl( g^{p\alpha} g^{q\beta} + g^{p\beta} g^{q\alpha} \Bigr) + \frac12 \Bigl( g^{0\alpha}
g^{p\beta} + g^{0\beta} g^{p\alpha} \Bigr) \Bigl(\frac{g^{0q}}{g^{00}}\Bigr) \nonumber \\
&+& \frac12 \Bigl(\frac{g^{0p}}{g^{00}}\Bigr) \Bigl( g^{0\alpha} g^{q\beta} + g^{0\beta} g^{q\alpha} \Bigr) - \frac{g^{0p} g^{0q}
g^{0\alpha} g^{0\beta}}{(g^{00})^2} \; \label{e-tens}
\end{eqnarray}
and
\begin{eqnarray}
[ e^{pq}, \pi_{\alpha\beta}] = - \Delta^{pq}_{\alpha\beta} + \Delta^{0 p}_{\alpha\beta} \Bigl(\frac{g^{0q}}{g^{00}}\Bigr)
+ \Bigl(\frac{g^{0p}}{g^{00}}\Bigr) \Delta^{0 q}_{\alpha\beta} - \Delta^{0 0}_{\alpha\beta} \frac{g^{0p} g^{0q}}{(g^{00})^2}
\; \label{e-tensA}
\end{eqnarray}
Analytical formulas for these PB are important, since there were some ideas to use the components of this space-like tensor $e^{pq}$ as the new
$\frac{d(d - 1)}{2}$ canonical variables (new coordinates) for another `advanced' Hamiltonian formulation of the metric GR. As follows from
Eqs.(\ref{e-tens}) and (\ref{e-tensA}), the complexity of the arising Poisson brackets makes this idea unworkable.
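Both the trace identity, Eq.(\ref{d-1}), and the bracket $[ e^{pq}, \pi^{\alpha\beta}]$ can be confirmed numerically in the same finite-difference manner, since the latter is just the symmetrized derivative of $e^{pq}$ with respect to $g_{\alpha\beta}$. The sketch below is our own illustration (assuming numpy); the space-like indexes $p, q$ run over the values $1, \ldots, d - 1$:

```python
import numpy as np

rng = np.random.default_rng(2)
d, h = 4, 1e-6
sym = lambda A: 0.5 * (A + A.T)
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * sym(rng.normal(size=(d, d)))
G = np.linalg.inv(g)  # contravariant components g^{mu nu}

def e_tensor(gcov):
    """Dirac tensor e^{pq} = g^{pq} - g^{0p} g^{0q} / g^{00} (non-zero space-like corner)."""
    Gi = np.linalg.inv(gcov)
    return Gi - np.outer(Gi[0], Gi[0]) / Gi[0, 0]

# trace identity: g_{mn} e^{mn} = d - 1
assert np.isclose(np.einsum('mn,mn->', g, e_tensor(g)), d - 1)

# [e^{pq}, pi^{ab}] as the symmetrized derivative of e^{pq} w.r.t. g_{ab}
max_err = 0.0
for p in range(1, d):
    for q in range(1, d):
        for a in range(d):
            for b in range(d):
                S = np.zeros((d, d)); S[a, b] += 0.5; S[b, a] += 0.5
                num = (e_tensor(g + h * S)[p, q] - e_tensor(g - h * S)[p, q]) / (2 * h)
                closed = (-0.5 * (G[p, a] * G[q, b] + G[p, b] * G[q, a])
                          + 0.5 * (G[0, a] * G[p, b] + G[0, b] * G[p, a]) * G[0, q] / G[0, 0]
                          + 0.5 * (G[0, p] / G[0, 0]) * (G[0, a] * G[q, b] + G[0, b] * G[q, a])
                          - G[0, p] * G[0, q] * G[0, a] * G[0, b] / G[0, 0] ** 2)
                max_err = max(max_err, abs(num - closed))
assert max_err < 1e-4
print("verified: trace identity and [e^{pq}, pi^{ab}]; max error", max_err)
```

The closed-form expression in the code is obtained by the chain rule from $e^{pq} = g^{pq} - g^{0p} g^{0q}/g^{00}$ and agrees with the finite-difference derivative to high accuracy.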
To conclude this Section, let us present the following formula for the fundamental Poisson brackets written in a unified form for both the
straight and dual sets of dynamical variables
\begin{eqnarray}
[ g_{\alpha\beta}, \pi^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta} = [ \pi_{\alpha\beta}, g^{\mu\nu}] \; \; \; . \label{eq1551}
\end{eqnarray}
This beautiful formula includes the two fundamental Poisson brackets and clearly shows the differences which arise in the transition from the
straight set of canonical variables to the analogous dual set. As follows from this formula, Eq.(\ref{eq1551}), the truly dual system of dynamical
variables (for the original $\{ g_{\alpha\beta}, \pi^{\mu\nu}\}$ system) must be the $\{ -g^{\alpha\beta}, \pi_{\mu\nu}\}$ system rather than our
dual $\{ g^{\alpha\beta}, \pi_{\mu\nu}\}$ system of variables introduced above. Below, we shall ignore this comment and consider the $\{
g_{\alpha\beta}, \pi^{\mu\nu}\} \rightarrow \{ g^{\alpha\beta}, \pi_{\mu\nu}\}$ transition as a canonical transformation of dynamical
variables for our Hamiltonian formulation of the metric GR. Therefore, based on the general theory described in \cite{Gant} we can write the
following equality
\begin{eqnarray}
\pi^{\mu\nu} \delta g_{\mu\nu} - H_t \delta t + \delta F = v \Bigl( \pi_{\mu\nu} \delta g^{\mu\nu} - \overline{H}_t \delta t \Bigr)
\; \; , \; \; \label{eq1553}
\end{eqnarray}
where $v$ is a real, non-zero number which is called the valence of this canonical transformation, while $F(t, g_{\alpha\beta},
\pi^{\gamma\sigma})$ is its generating function. The notations $H_t$ and $\overline{H}_t$ denote the total Hamiltonians
written in both systems of dynamical variables, i.e., in the straight $\{ g_{\alpha\beta}, \pi^{\mu\nu}\}$ and dual $\{ g^{\alpha\beta},
\pi_{\mu\nu}\}$ systems of variables, respectively. It is clear that for such a canonical transformation we can use the same time $t$ (for
both systems) and this transformation is univalent which means that $| v | = 1$ (in reality, we have found that $v = - 1$). Furthermore, it
is possible to show that for the $\{ g_{\alpha\beta}, \pi^{\mu\nu}\} \rightarrow \{ g^{\alpha\beta}, \pi_{\mu\nu}\}$ canonical transformation
the generating function $F$ can be chosen in a very special form $F = S(t, g_{\mu\nu}, g^{\alpha\beta})$ which corresponds to the free
canonical transformation(s). In this case the previous equation takes the form
\begin{eqnarray}
\pi^{\mu\nu} \delta g_{\mu\nu} - H_t \delta t + \delta S(t, g_{\mu\nu}, g^{\alpha\beta}) = v \Bigl( \pi_{\mu\nu} \delta g^{\mu\nu}
- \overline{H}_t \delta t \Bigr) \; \; \; \label{eq1555}
\end{eqnarray}
and the following three equations are also obeyed (for $v = - 1$)
\begin{eqnarray}
\pi^{\mu\nu} = \frac{\partial S}{\partial g_{\mu\nu}} \; \; , \; \; \pi_{\mu\nu} = - \frac{\partial S}{\partial g^{\mu\nu}} \; \; {\rm
and} \; \; \overline{H}_t = - H_t + \frac{\partial S}{\partial t} \; \; . \; \; \label{eq1557}
\end{eqnarray}
The last equation in Eq.(\ref{eq1557}) opens a short way to the Jacobi equation for the gravitational field in metric GR, but here we cannot
discuss this interesting problem (more details can be found in \cite{Fro1}), since it lies outside of the main stream of our current
analysis.
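A one-degree-of-freedom analogue makes the valence $v = -1$ of this transformation transparent: the map $Q = 1/q, P = q^{2} p$ mimics the inversion of the `coordinate' and the lowering of the momentum indexes in the $\{ g_{\alpha\beta}, \pi^{\mu\nu}\} \rightarrow \{ g^{\alpha\beta}, \pi_{\mu\nu}\}$ transition. The following sketch is our own illustration and assumes sympy:

```python
import sympy as sp

q, p = sp.symbols('q p', positive=True)
Q, P = 1 / q, q**2 * p  # toy analogue of Q = g^{ab} (inverse) and P = pi_{ab} (lowered)

# Poisson bracket [Q, P] evaluated in the original canonical variables (q, p)
pb = sp.diff(Q, q) * sp.diff(P, p) - sp.diff(P, q) * sp.diff(Q, p)
assert sp.simplify(pb) == -1  # valence v = -1, in full analogy with [g^{ab}, pi_{mn}] = -Delta
print("[Q, P] =", sp.simplify(pb))
```

The bracket equals $-1$ identically, i.e., the transformation is canonical with valence $v = -1$, exactly as for the metric-gravity pair discussed above.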
\section{Applications of Poisson brackets to actual problems of metric GR}
The knowledge of all Poisson brackets derived above allows one to achieve a number of goals in the Hamiltonian formulation(s) of metric General
Relativity. In particular, by using these Poisson brackets we can complete the actual Hamiltonian formulation of the metric GR. Another problem
which can be solved with the use of our Poisson brackets is explicit derivation of the Hamilton equations of motion for actual gravitational
field(s) which are often called the time-evolution equations. Also, with these Poisson brackets we can find the new canonical transformations
which simplify either the canonical Hamiltonian $H_C$, or the secondary constraints $\chi^{0\sigma}$ (they are defined below). In particular,
below, we consider the reduction of the canonical Hamiltonian $H_C$ to its natural form. The first two of the mentioned problems are briefly
considered in the next two subsections. These two problems were extensively discussed in earlier studies \cite{K&K}, \cite{FK&K} and \cite{Fro1}.
Therefore, there is no need for us here to move into deep analysis of these problems and repeat all formulas derived in those works. Here we just
want to illustrate how our formulas for Poisson brackets allow one to simplify analytical calculations of many difficult expressions. In contrast
with this, the third problem (i.e., reduction of $H_C$ to its natural form) is the central part of this study and we have to disclose all details
of our computations. These details can be found in the next Section. In general, the analytical computation of a large number of Poisson brackets
is a very good exercise in tensor calculus.
\subsection{Constraints and Dirac closure of the Hamiltonian procedure}
Let us complete the Hamiltonian formulation of the metric GR, described above, by using the momenta $\pi^{mn}$, primary constraints $\phi^{0\sigma}$
and canonical Hamiltonian $H_C$ defined in Eq.(\ref{momenta}), Eq.(\ref{constr}) and Eq.(\ref{eq5}), respectively. First, we need to determine
commutators between the canonical Hamiltonian $H_C$, Eq.(\ref{eq5}), and primary constraints $\phi^{0\sigma}$, Eq.(\ref{primary}). This directly
leads (see discussion in \cite{K&K}) to the secondary constraints $\chi^{0\sigma} = [ H_C, \phi^{0\sigma} ]$, where $\sigma = 0, 1, \ldots, d - 1$.
This means that we have to add these $d$ non-zero secondary constraints $\chi^{0\sigma}$ to this Hamiltonian formulation \cite{constr}. The explicit
formulas for the secondary constraints $\chi^{0\sigma}$ are very cumbersome and can be found in \cite{K&K} (see also \cite{Fro1}). Here we do not
describe derivation of these and other similar formulas, since they were derived earlier in \cite{K&K}, and they are not original for this study. Our
formulas for Poisson brackets substantially simplify the whole process of derivation of these formulas for the primary and secondary constraints and
for their commutators. In particular, by using our Poisson brackets one can show that all Poisson brackets between primary constraints equal zero
identically, i.e., $[ \phi^{0\lambda}, \phi^{0\sigma} ] = 0$, while $[ \phi^{0\lambda}, \chi^{0\sigma} ] = \frac12 g^{\lambda\sigma}$. The Poisson
brackets between canonical Hamiltonian $H_C$ and secondary constraints $\chi^{0\sigma}$ are expressed as `quasi-linear' \cite{QL} combinations of the
same secondary constraints $\chi^{0\sigma}$, i.e., we obtain
\begin{eqnarray}
[ \chi^{0\sigma}, H_{c} ] &=& -\frac{2}{\sqrt{-g}} I_{mnpq} \pi^{mn} \Bigl(\frac{g^{\sigma q}}{g^{00}}\Bigr) \chi^{0p} + \frac12 g^{\sigma k}
g_{00,k} \chi^{00} + \delta_{0}^{\sigma} \chi_{,k}^{0k} \label{close} \\
&+& \Bigl( -2 \frac{1}{\sqrt{-g}} I_{mnpk} \pi^{mn} \frac{g^{\sigma p}}{g^{00}} + I_{mkpq} g_{\mu\nu,l} \frac{g^{\sigma m}}{g^{00}}
A^{(pq) 0 \mu\nu l} \Bigr)\chi^{0k} \nonumber \\
&-& \Bigl( g^{0\sigma} g_{00,k} + 2 g^{n\sigma} g_{0n,k} + \frac{g^{n\sigma} g^{0m}}{g^{00}} (g_{mn,k} + g_{km,n} - g_{kn,m}) \Bigr) \chi^{0k}
\nonumber
\end{eqnarray}
where $A^{(pq) 0 \mu\nu k}$ is the symmetrized form (upon all $p \leftrightarrow q$ permutations) of the following expression
\begin{eqnarray}
A^{pq 0 \mu\nu k}= B^{(pq 0 \mid \mu \nu k)} - g^{0k} E^{pq \mu \nu} + 2 g^{0\mu} E^{pq k \nu}.
\end{eqnarray}
The Poisson bracket, Eq.(\ref{close}), indicates that the Hamilton procedure developed for the metric GR in \cite{K&K} and \cite{FK&K} is closed
(Dirac closure), i.e., the Poisson bracket $[ \chi^{0\sigma}, H_{c} ]$ does not lead to any tertiary, or other constraints of higher order(s).
Analogously, the Poisson bracket between the secondary constraints, $[ \chi^{0\sigma}, \chi^{0\gamma}]$, where $\sigma \ne \gamma$ (when $\sigma =
\gamma$ this PB equals zero identically), is
\begin{eqnarray}
[ \chi^{0\sigma}, \chi^{0\gamma} ] &=& [ \chi^{0\sigma}, [ \phi^{0\gamma}, H_{c} ]] = - [ \phi^{0\gamma}, [ H_C, \chi^{0\sigma} ]] -
[ H_C, [ \chi^{0\sigma}, \phi^{0\gamma} ]] \nonumber \\
&=& [ \phi^{0\gamma}, [ \chi^{0\sigma}, H_C ]] - \frac12 [ g^{\sigma\gamma}, H_C ] \; \; , \; \label{chichi}
\end{eqnarray}
where the Poisson bracket $[ \chi^{0\sigma}, H_C ]$ is given by the formula, Eq.(\ref{close}). This formula also does not lead to any constraint
of higher order and/or to any other expression which is not a function of the dynamical variables only (see discussion in \cite{Dir64}). This proves
that the Hamiltonian system which includes the canonical Hamiltonian $H_C$ and all primary $\phi^{0\lambda}$ and secondary $\chi^{0\sigma}$
constraints \cite{constr} is closed (here $\lambda = 0, 1, \ldots, d - 1$ and $\sigma = 0, 1, \ldots, d - 1$). The actual closure of the Dirac
procedure \cite{Dir50} for the Hamiltonian formulation of the metric GR considered above was shown for the first time in \cite{K&K}. Formally, the
explicit demonstration of closure of the whole Dirac procedure \cite{Dir50} is the last and most important step for any Hamiltonian formulation of
the metric GR \cite{Dir64}. However, in reality one needs to check one more condition which appears to be crucial for separation of the actual
Hamiltonian formulations of the metric GR from numerous quasi-Hamiltonian constructions developed in this area since the mid-1950's.
This additional condition is the rigorous conservation of both the true (or algebraic) and gauge symmetries of the metric GR, which coincide with
the symmetries of the original Einstein equation(s) for the free gravitational field. In general, by performing a chain of transformations from the
original $\Gamma - \Gamma$ Lagrangian to the Hamiltonian formulation of the metric GR we have to be sure that all regular and gauge symmetries (or
invariances) are conserved. Disappearance (or reduction) of the original gauge symmetry of the problem simply means that our transformations to the
Hamiltonian formulation are fundamentally wrong, or simply that `they are not canonical'. Our formulas for the Hamiltonians $H_t, H_C$ presented
above and explicit expressions for all primary and secondary constraints \cite{K&K}, \cite{Fro1} allow one to derive (with the use of Castellani
procedure \cite{Cast}) the correct generators of gauge transformations, which directly and unambiguously lead to the diffeomorphism invariance
\cite{K&K}. This diffeomorphism invariance has been the well-known gauge symmetry (or gauge, for short) of the free gravitational field(s) since the early years
of the metric GR (see, e.g., \cite{Carm}). Currently, there are only two known Hamiltonian formulations developed for the metric GR (\cite{Dir58}
and \cite{K&K}) which are able to reproduce the actual diffeomorphism invariance directly and transparently. Note that for all approaches, which
are based on the $\Gamma - \Gamma$ Lagrangian of the metric GR, such a reconstruction of the diffeomorphism invariance (or gauge) is a relatively
simple problem (see, e.g., \cite{Saman}). In contrast with this, for any Hamiltonian-based formulation the complete solution of a similar problem
requires substantial work. However, it is clear that analytical derivation of the diffeomorphism invariance is a very good test for the total
$H_t$ and canonical $H_C$ Hamiltonians as well as for all primary $\phi^{0\sigma}$ and secondary $\chi^{0\sigma}$ constraints derived in any new
Hamiltonian formulation of the metric GR. Any mistake either in the $H_t$ and $H_C$ Hamiltonians, or in the $\phi^{0\lambda}$ and $\chi^{0\sigma}$
constraints leads to the loss of true diffeomorphism invariance.
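Note also that the manipulation performed in Eq.(\ref{chichi}) rests only on the antisymmetry of the Poisson brackets and on the Jacobi identity. The latter can be verified symbolically for arbitrary phase-space functions; the sketch below is our own illustration for one degree of freedom and assumes sympy:

```python
import sympy as sp

q, p = sp.symbols('q p')
pb = lambda A, B: sp.diff(A, q) * sp.diff(B, p) - sp.diff(B, q) * sp.diff(A, p)

# three arbitrary phase-space functions
A, B, C = q**2 * p, sp.sin(q) + p**3, q * p
jacobi = pb(A, pb(B, C)) + pb(B, pb(C, A)) + pb(C, pb(A, B))
assert sp.simplify(jacobi) == 0  # [A,[B,C]] + [B,[C,A]] + [C,[A,B]] = 0
print("Jacobi identity holds:", sp.simplify(jacobi))
```

The same identity, applied to $\chi^{0\sigma}$, $\phi^{0\gamma}$ and $H_C$, produces Eq.(\ref{chichi}) directly.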
\subsection{Hamilton equations of motion for the free gravitational field}
In general, if we know the total $H_t$ and canonical $H_C$ Hamiltonians, Eqs.(\ref{eq1}) and (\ref{eq5}), respectively, then we can derive the
Hamilton equations of motion (or system of Hamilton equations) which describe the time-evolution of all dynamical variables in the metric GR,
i.e., time-evolution of each component of the metric tensor $g_{\alpha\beta}$ and momentum tensor $\pi^{\gamma\rho}$. These equations are
\cite{Fro1}
\begin{eqnarray}
\frac{d g_{\alpha\beta}}{d x_0} = [ g_{\alpha\beta}, H_{t} ] \; \; \; {\rm and} \; \; \; \frac{d \pi^{\gamma\rho}}{d x_0} = [ \pi^{\gamma\rho},
H_{t} ] \label{eq20}
\end{eqnarray}
where the notation $x_0$ denotes the temporal variable. In particular, for the spatial components $g_{ij}$ of the metric tensor $g_{\alpha\beta}$ one
finds the following equations
\begin{eqnarray}
\frac{d g_{ij}}{d x_0} &=& [ g_{ij}, H_{t} ] = [ g_{ij}, H_{c} ] = \frac{2}{\sqrt{-g} g^{00}} I_{(ij)pq} \pi^{pq} - \frac{1}{g^{00}} I_{(ij)pq}
B^{(p q 0|\mu \nu k)} g_{\mu\nu,k} \; \label{eq25} \\
&=& \frac{2}{\sqrt{-g} g^{00}} I_{(ij)pq} \Bigl[ \pi^{pq} - \frac12 \sqrt{-g} B^{(p q 0|\mu \nu k)} g_{\mu\nu,k} \Bigr] \nonumber
\end{eqnarray}
where the notation $I_{(ij)pq}$ stands for the $(ij)-$symmetrized values of the $I_{ijpq}$ tensor defined in Eq.(\ref{I}), i.e.,
\begin{equation}
I_{(ij)pq} = \frac12 \Bigl( I_{ijpq} + I_{jipq} \Bigr) = \frac{1}{d - 2} g_{ij} g_{pq} - \frac12 ( g_{ip} g_{jq} + g_{iq} g_{jp} ) \; \; \; .
\end{equation}
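This symmetrization is easy to confirm numerically, if one takes the explicit form $I_{ijpq} = \frac{1}{d - 2} g_{ij} g_{pq} - g_{ip} g_{jq}$ (this form is quoted here as an assumption, since Eq.(\ref{I}) itself lies outside of the present fragment). The following sketch assumes numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
d = 4
sym = lambda A: 0.5 * (A + A.T)
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * sym(rng.normal(size=(d, d)))

# assumed explicit form: I_{ijpq} = g_{ij} g_{pq}/(d - 2) - g_{ip} g_{jq}
I = np.einsum('ij,pq->ijpq', g, g) / (d - 2) - np.einsum('ip,jq->ijpq', g, g)
Isym = 0.5 * (I + I.transpose(1, 0, 2, 3))  # (ij)-symmetrization

# closed form quoted in the text
Iref = (np.einsum('ij,pq->ijpq', g, g) / (d - 2)
        - 0.5 * (np.einsum('ip,jq->ijpq', g, g) + np.einsum('iq,jp->ijpq', g, g)))
assert np.allclose(Isym, Iref)
print("verified: (ij)-symmetrization of I_{ijpq}")
```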
Analogously, for the $g_{0\sigma}$ components of the metric tensor one finds the following equations of time-evolution
\begin{eqnarray}
\frac{d g_{0\sigma}}{d x_0} = [ g_{0\sigma}, H_{t} ] = g_{0\sigma,0} \; , \; \; \label{eq253}
\end{eqnarray}
since all $g_{0\sigma}$ components commute with the canonical Hamiltonian $H_C$, Eq.(\ref{eq5}), while all $g_{ij}$ commute with the primary
constraints $\phi^{0\sigma}$. This result could be expected, since the equation, Eq.(\ref{eq253}), is, in fact, a definition of the
$\sigma-$velocities (or $g_{0\sigma,0}$-velocities), where $\sigma = 0, 1, \ldots, d - 1$.
The Hamilton equations for tensor components of the momentum $\pi^{\alpha\beta}$, Eq.(\ref{eq20}), are substantially more complicated. They
are derived by calculating the Poisson brackets between each term in $H_{t}$ and $\pi^{\gamma\rho}$. This general formula takes the form
\begin{eqnarray}
\frac{d \pi^{\alpha\beta}}{d x_0} &=& - [ H_{t}, \pi^{\alpha\beta} ] = - \Bigl[ \frac{I_{mnpq}}{\sqrt{-g} g^{00}}, \pi^{\alpha\beta} \Bigr]
\pi^{mn} \pi^{pq} \nonumber \\
&+& \Bigl[ \frac{I_{mnpq}}{g^{00}}, \pi^{\alpha\beta} \Bigr] \pi^{mn} B^{(p q 0|\mu \nu k)} g_{\mu\nu,k}
+ \frac{1}{g^{00}} I_{mnpq} \pi^{mn}\Bigl[ B^{(p q 0|\mu \nu k)}, \pi^{\alpha\beta} \Bigr] g_{\mu\nu,k} + \ldots \; \label{eq255}
\end{eqnarray}
Let us determine the first Poisson bracket in this formula (other terms are considered analogously, i.e., term-by-term). The explicit expression
for this term is
\begin{eqnarray}
&-& \Bigl[ \frac{I_{mnpq}}{\sqrt{-g} g^{00}}, \pi^{\alpha\beta} \Bigr] \pi^{mn} \pi^{pq} = - \frac{[ I_{mnpq}, \pi^{\alpha\beta}]}{\sqrt{-g}
g^{00}} \pi^{mn} \pi^{pq} - \Bigl[ \frac{1}{\sqrt{-g} g^{00}}, \pi^{\alpha\beta} \Bigr] I_{mnpq} \pi^{mn} \pi^{pq} \; \; \label{eq256}
\end{eqnarray}
There are three following cases: (1) for a pair of space-like indexes, i.e., for $(\alpha\beta) = (a b)$, we have
\begin{eqnarray}
\Bigl( \frac{d \pi^{a b}}{d x_0}\Bigr)_1 = -\frac{2}{d - 2} g_{m n} \pi^{m n} \pi^{a b} + 2 g_{m p} \pi^{m a} \pi^{p b} +
\frac{I_{mnpq}}{2 \sqrt{-g} g^{00}} g^{a b} \pi^{m n} \pi^{p q} \; \; \; \label{eq257}
\end{eqnarray}
while for the $(\alpha\beta) = (0 a)$ indexes the expression is
\begin{eqnarray}
\Bigl( \frac{d \pi^{0 a}}{d x_0}\Bigr)_1 = \frac{I_{mnpq}}{2 \sqrt{-g} g^{00}} g^{0 a} \pi^{m n} \pi^{p q} \; \; \label{eq2561}
\end{eqnarray}
Finally, for the $(\alpha\beta) = (0 0)$ pair of indexes one finds
\begin{eqnarray}
\Bigl( \frac{d \pi^{0 0}}{d x_0}\Bigr)_1 = \frac{I_{mnpq}}{2 \sqrt{-g}} \Bigl( 1 + \frac{2}{(g^{00})^{2}} \Bigr) \pi^{mn} \pi^{pq} \; \;
\label{eq2562}
\end{eqnarray}
In general, analytical calculation of the other Poisson brackets in the formula, Eq.(\ref{eq255}), is a straightforward task, but the final formula
contains more than 150 terms. This drastically complicates all operations with the formula, Eq.(\ref{eq255}), for the $\frac{d \pi^{\gamma\rho}}{d
x_0}$ (temporal) derivative. Nevertheless, the complete set of Hamilton equations for the free gravitational field in metric GR has been produced
in closed and explicit form \cite{FroUnp}.
\subsection{Truly canonical transformations in the metric GR}
As is well known, all canonical transformations of an arbitrary Hamilton system form a closed algebraic group. This means that in any Hamilton
system: (1) the composition of two canonical transformations is again a canonical transformation, (2) the identical transformation of dynamical
variables is a canonical transformation, and (3) any canonical transformation has an inverse transformation which is also canonical and unique.
In general, there are quite a few canonical transformations in the metric General Relativity, and some of them can be used to simplify either
Hamiltonian(s), or secondary constraints, or some other crucial quantities, including a few important Poisson brackets. As is well known (see,
e.g., \cite{LLTF}, \cite{Carm}) the metric General Relativity is a non-linear theory which cannot rigorously be linearized even in lower-order
approximations. Therefore, linear canonical transformations of dynamical variables are of no interest for the Hamiltonian formulations which
have been developed for the metric GR. Furthermore, it can be shown that among all possible non-linear canonical transformations the following
`special' transformations play a great role in derivation of the new Hamiltonian formulations of the metric GR. These special canonical
transformations can be written in the form: $\{ g_{\alpha\beta}, \pi^{\mu\nu}\} \rightarrow \{ g_{\alpha\beta}, \Pi^{\rho\sigma}\}$, where the
new momenta $\Pi^{\rho\sigma}$ are linear functions (or linear combinations) of the old momenta $\pi^{\mu\nu}$ to which cubic functions (or
cubic polynomials) of the contravariant components of the metric tensor $g^{\alpha\beta}$ are added. The coefficient(s) in front of this cubic
function can also contain factors such as $\sqrt{- g}$ and/or $g^{00}$, or their product. As follows from our experience, only such canonical
transformations can be used for equivalent transformation of the two different sets of dynamical variables in the metric GR. In particular,
the canonical transformation of dynamical variables constructed in \cite{FK&K} has exactly this form. This canonical transformation relates
the two correct Hamiltonian formulations known to this moment in metric GR, i.e., formulation by Dirac \cite{Dir58} and K$\&$K \cite{K&K}
formulation. Our new canonical transformation of dynamical variables described below also has this form. Furthermore, if some `new' set of
dynamical variables (in metric GR) is related to another `old' set of dynamical variables by a canonical transformation which has the mentioned
form, then it can be shown that this transformation of variables will preserve the complete diffeomorphism as a gauge symmetry of the free
gravitational field. Very likely, the explicit form of such `special' canonical transformations and all possible consequences of this fact are
substantially determined by the $\Gamma - \Gamma$ Lagrangian presented in Section II. Indeed, the $\Gamma - \Gamma$ Lagrangian, Eq.(\ref{eq05}),
is a polynomial of degree six in the $g^{\alpha\beta}$ components and a quadratic function of the space-like velocities $g_{mn,0}$.
\section{Canonical Hamiltonian reduced to its natural form}
In this Section we reduce the canonical Hamiltonian $H_C$ to its natural form, which will play a significant role in numerous applications to
the metric gravity. We perform such a reduction of $H_C$ by using some canonical transformation of the dynamical variables $g_{\alpha\beta}$
and $\pi^{\rho\sigma}$ defined above. First, let us write the canonical Hamiltonian, Eq.(\ref{eq5}), in the form
\begin{eqnarray}
H_C &=& \frac{I_{mnpq}}{\sqrt{-g} g^{00}} \Bigl[ \pi^{mn} \pi^{pq} - \sqrt{-g} \pi^{mn} B^{(p q 0|\mu \nu k)} g_{\mu\nu,k} +
\frac14 (- g) B^{(m n 0|\mu \nu k)} B^{(p q 0|\alpha \beta l)} g_{\mu\nu,k} g_{\alpha\beta,l} \Bigr] \nonumber \\
&+& \frac14 \sqrt{-g} \Bigl \{ \frac{1}{g^{00}} I_{mnpq} B^{([mn] 0|\mu\nu k)} B^{(p q 0|\alpha \beta l)} -
B^{\mu\nu k \alpha\beta l}\Bigr \} g_{\mu\nu,k} g_{\alpha\beta,l} \; \; \label{eq5a}
\end{eqnarray}
which is more appropriate for our purposes in this study. In Eq.(\ref{eq5a}) the notation $B^{([mn] 0|\mu\nu k)}$ stands for the part of the
$B^{(m n 0 \mid \mu\nu k)}$ cubic function of the contravariant components of the metric tensor which is completely anti-symmetric with respect
to the $m$ and $n$ indexes. The explicit formula for the $B^{([mn] 0|\mu\nu k)}$ function is
\begin{eqnarray}
B^{([mn] 0|\mu\nu k)} &=& g^{m k} g^{n \nu} g^{\mu 0} - g^{n k} g^{m \nu} g^{\mu 0} + \frac12 \Bigl( g^{n \mu} g^{m \nu} g^{k 0} +
g^{n k} g^{\mu \nu} g^{m 0} - g^{m \mu} g^{n \nu} g^{k 0} \nonumber \\
&-& g^{m k} g^{\mu \nu} g^{n 0} \Bigr) \; \; \label{AsBcoef}
\end{eqnarray}
Now, we can see that the first term in $\Bigl[ \ldots \Bigr]$ brackets in Eq.(\ref{eq5a}) can be written as a pure quadratic function of the new
$P^{mn} = \pi^{mn} - \frac12 \sqrt{-g} B^{(m n 0|\mu\nu k)} g_{\mu\nu, k}$ variables, i.e.,
\begin{eqnarray}
H_C &=& \frac{I_{mnpq}}{\sqrt{-g} g^{00}} \Bigl( \pi^{mn} - \frac12 \sqrt{-g} B^{(m n 0|\mu\nu k)} g_{\mu\nu, k} \Bigr)
\Bigl( \pi^{pq} - \frac12 \sqrt{-g} B^{(p q 0|\alpha\beta l)} g_{\alpha\beta, l} \Bigr) \nonumber \\
&+& \frac14 \sqrt{-g} \Bigl\{ \frac{1}{g^{00}} I_{mnpq} B^{([mn] 0|\mu\nu k)} B^{(p q 0|\alpha \beta l)} - B^{\mu\nu k \alpha\beta l}\Bigr\}
g_{\mu\nu,k} g_{\alpha\beta,l} + T_1 + T_2 \; \; , \; \label{H_Cnew}
\end{eqnarray}
where the two additional terms $T_1$ and $T_2$ take the following form
\begin{eqnarray}
T_1 = \frac{I_{mnpq}}{2 \sqrt{-g} g^{00}} [ \pi^{mn}, \sqrt{- g}] B^{(p q 0|\alpha \beta l)} g_{\alpha\beta,l} = - \frac{I_{mnpq} g^{mn}}{4
g^{00}} B^{(p q 0|\alpha \beta l)} g_{\alpha\beta,l} \; \; \; \label{eq5b}
\end{eqnarray}
and
\begin{eqnarray}
T_2 &=& - \frac{I_{mnpq}}{2 g^{00}} [ B^{(m n 0|\mu \nu k)}, \pi^{pq} ] g_{\mu\nu,k} = - \frac{I_{mnpq}}{2 g^{00}} \Bigl[ \frac12 \Bigl(
g^{\mu p} g^{m q} + g^{\mu q} g^{m p} \Bigr) g^{n \nu} g^{k 0} \nonumber \\
&+& \frac12 g^{\mu m} \Bigl( g^{n p} g^{\nu q} + g^{n q} g^{\nu p} \Bigr) g^{k 0} + \frac12 g^{\mu m} g^{n \nu} \Bigl( g^{k p} g^{0 q}
+ g^{k q} g^{0 p} \Bigr) \nonumber \\
&-& \frac12 \Bigl( g^{m p} g^{n q} + g^{m q} g^{n p} \Bigr) g^{k 0} g^{\mu\nu} - \frac12 g^{m n} g^{\mu \nu} \Bigl( g^{p k} g^{q 0} + g^{p 0}
g^{q k} \Bigr) - \frac12 g^{m n} g^{k 0} \Bigl( g^{\mu p} g^{\nu q} + g^{\mu q} g^{\nu p} \Bigr) \nonumber \\
&-& \Bigl( g^{m p} g^{k q} + g^{m q} g^{k p} \Bigr) g^{n \nu} g^{\mu 0} - g^{m k} \Bigl( g^{n p} g^{\nu q} + g^{n q} g^{\nu p} \Bigr)
g^{\mu 0} - \frac12 g^{m k} g^{n \nu} \Bigl( g^{\mu p} g^{0 q} + g^{\mu q} g^{0 p} \Bigr) \nonumber \\
&+& \frac12 \Bigl( g^{m p} g^{n q} + g^{m q} g^{n p} \Bigr) g^{\nu k} g^{0 \mu} + \frac12 g^{m n} \Bigl( g^{\nu p} g^{k q} + g^{\nu q}
g^{k p} \Bigr) g^{\mu 0} + \frac12 g^{m n} g^{\nu k} \Bigl( g^{p 0} g^{\mu q} + g^{0 q} g^{\mu p} \Bigr) \nonumber \\
&+& \frac12 \Bigl( g^{k p} g^{m q} + g^{k q} g^{m p} \Bigr) g^{\nu n} g^{0 \mu} + \frac12 g^{k m} \Bigl( g^{\mu p} g^{\nu q} + g^{p \nu}
g^{\mu q} \Bigr) g^{n 0} + \frac12 g^{k m} g^{\mu \nu} \Bigl( g^{n p} g^{0 q} \nonumber \\
&+& g^{n q} g^{0 p} \Bigr) \Bigr] g_{\mu\nu,k} \; \; . \; \label{eq5c}
\end{eqnarray}
Now, we can introduce the new momenta $P^{\gamma\rho}$ which are written in the form
\begin{eqnarray}
P^{\gamma\rho} = \pi^{\gamma\rho} - \frac12 \sqrt{-g} B^{(\gamma\rho 0|\mu\nu k)} g_{\mu\nu, k} \; \; \; \label{canvar}
\end{eqnarray}
where $\pi^{\gamma\rho}$ are the `old' momenta used in \cite{K&K}. These new momenta can be considered as the contravariant components of the
`united' momentum tensor, whose trace is $P = g_{\alpha\beta} P^{\alpha\beta}$. Note that the explicit expressions for the old velocities written
in terms of the new momenta $P^{ab}$ are even simpler: $g_{mn, 0} = \frac{2}{\sqrt{-g} g^{00}} I_{(mn)pq} P^{pq}$ (compare with Eq.(\ref{veloc}) from above).
The explicit formulas for the primary constraints are also simpler: $P^{0\gamma} \approx 0$ for $\gamma = 0, 1, \ldots, d - 1$. The generalized
coordinates are chosen in the old (or traditional) form, i.e., they coincide with the covariant components of the metric tensor $g_{\alpha\beta}$.
It is clear that such a choice of the generalized coordinates provides a number of additional advantages in applications to the metric GR. For
instance, by using the metric tensor one can raise and lower indexes of arbitrary vectors and tensors. Also, all covariant and contravariant
derivatives of the metric tensor always equal zero, i.e., this tensor behaves as a constant during these operations. More unique and remarkable
properties of the metric tensor are discussed, e.g., in \cite{Kochin}. For the purposes of this study it is important to note only that our new
system of dynamical variables contains the same `coordinates' $g_{\alpha\beta}$ and new momenta $P^{\gamma\rho}$. The Poisson brackets between
our new dynamical variables can easily be determined by using the known values of Poisson brackets written in the old dynamical variables
$\Bigl\{ g_{\alpha\beta}, \pi^{\gamma\rho} \Bigr\}$ defined above. We have $[ g_{\alpha\beta}, P^{\gamma\rho} ] = [ g_{\alpha\beta},
\pi^{\gamma\rho} ] = \Delta^{\gamma\rho}_{\alpha\beta} = \frac12 \Bigl( \delta^{\gamma}_{\alpha} \delta^{\rho}_{\beta} +
\delta^{\rho}_{\alpha} \delta^{\gamma}_{\beta} \Bigr), [ g_{\alpha\beta}, g_{\gamma\rho} ] = 0$ (these two basic variables coincide with
the original (or traditional) `coordinates' used in \cite{Dir58}, \cite{K&K}, \cite{FK&K}) and $[ P^{\alpha\beta}, P^{\gamma\rho} ] = 0$. The
last equality we consider in detail
\begin{eqnarray}
& &[ P^{\alpha\beta}, P^{\gamma\rho} ] = [ \pi^{\alpha\beta}, \pi^{\gamma\rho} ] - \frac12 [ \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)},
\pi^{\gamma\rho} ] g_{\mu\nu, k} + \frac12 [ \sqrt{-g} B^{(\gamma\rho 0|\lambda\sigma l)}, \pi^{\alpha\beta} ] g_{\lambda\sigma, l}
\nonumber \\
&+& \frac14 [ \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)} g_{\mu\nu, k}, \sqrt{-g} B^{(\gamma\rho 0|\lambda\sigma l)} g_{\lambda\sigma, l} ] \label{PBV}
\end{eqnarray}
where the first and last terms equal zero identically, since the variables $g_{\alpha\beta}$ and $\pi^{\mu\nu}$ are canonical (the last
bracket is taken between two functions of the coordinates only). This directly leads to the formula
\begin{eqnarray}
[ P^{\alpha\beta}, P^{\gamma\rho} ] = - \frac12 [ \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)}, \pi^{\gamma\rho} ] g_{\mu\nu, k} +
\frac12 [ \sqrt{-g} B^{(\gamma\rho 0|\lambda\sigma l)}, \pi^{\alpha\beta} ] g_{\lambda\sigma, l} \label{PBV1}
\end{eqnarray}
Now, we can replace the dummy indexes in the second term of this equation by the values which coincide with the corresponding dummy indexes
in the first term, i.e., $\lambda \rightarrow \mu, \sigma \rightarrow \nu$ and $l \rightarrow k$. This substitution reduces Eq.(\ref{PBV1})
to the form
\begin{eqnarray}
[ P^{\alpha\beta}, P^{\gamma\rho} ] = - \frac12 \Bigl( [ \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)}, \pi^{\gamma\rho} ] -
[ \sqrt{-g} B^{(\gamma\rho 0|\mu\nu k)}, \pi^{\alpha\beta} ] \Bigr) g_{\mu\nu, k} = 0 \label{PBV2}
\end{eqnarray}
which vanishes, since a direct calculation shows that the bracket $[ \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)}, \pi^{\gamma\rho} ]$, contracted
with $g_{\mu\nu, k}$, is symmetric under the exchange of the index pairs $(\alpha\beta) \leftrightarrow (\gamma\rho)$. This shows that the new
dynamical variables $\{ g_{\alpha\beta}, P^{\mu\nu}\}$ are
also canonical, and they can be used in the metric gravity, since they are canonically related to the old set of such variables
$\{ g_{\alpha\beta}, \pi^{\mu\nu}\}$ \cite{K&K}.
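The underlying mechanism is that the shift of the momenta acts, at each space point, as a `gradient' in the coordinates: momenta shifted by the derivative of any scalar function $W(g)$ remain mutually commuting, because the mixed second derivatives of $W$ coincide. The following finite-dimensional sketch is our own illustration (assuming numpy), with $W = \sqrt{-g}$, so that the shift equals $\frac12 \sqrt{-g} g^{ab}$, see Eq.(\ref{eq154}):

```python
import numpy as np

rng = np.random.default_rng(5)
d, h = 4, 1e-5
sym = lambda A: 0.5 * (A + A.T)
g = np.diag([-1.0, 1.0, 1.0, 1.0]) + 0.05 * sym(rng.normal(size=(d, d)))
pi = sym(rng.normal(size=(d, d)))

def shift(G):
    """Gradient shift f^{ab} = dW/dg_{ab} for W = sqrt(-det g), i.e. (1/2) sqrt(-g) g^{ab}."""
    return 0.5 * np.sqrt(-np.linalg.det(G)) * np.linalg.inv(G)

def poisson(f1, f2):
    """Canonical bracket in (g_{ab}, pi^{ab}) with symmetrized finite differences."""
    out = 0.0
    for a in range(d):
        for b in range(d):
            S = np.zeros((d, d)); S[a, b] += 0.5; S[b, a] += 0.5
            d1g = (f1(g + h * S, pi) - f1(g - h * S, pi)) / (2 * h)
            d2g = (f2(g + h * S, pi) - f2(g - h * S, pi)) / (2 * h)
            d1p = (f1(g, pi + h * S) - f1(g, pi - h * S)) / (2 * h)
            d2p = (f2(g, pi + h * S) - f2(g, pi - h * S)) / (2 * h)
            out += d1g * d2p - d2g * d1p
    return out

max_err = 0.0
for a in range(d):
    for b in range(d):
        for c in range(d):
            for e in range(d):
                val = poisson(lambda G, Pi, i=a, j=b: Pi[i, j] - shift(G)[i, j],
                              lambda G, Pi, i=c, j=e: Pi[i, j] - shift(G)[i, j])
                max_err = max(max_err, abs(val))
assert max_err < 1e-6
print("verified: shifted momenta P^{ab} = pi^{ab} - dW/dg_{ab} remain canonical")
```

Of course, in the full field theory the shift $\frac12 \sqrt{-g} B^{(\alpha\beta 0|\mu\nu k)} g_{\mu\nu, k}$ also contains the spatial derivatives of the metric, which this finite-dimensional sketch does not capture; it only illustrates the gradient mechanism itself.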
As follows from the formulas derived above, the canonical Hamiltonian $H_C$ is reduced to the following form
\begin{eqnarray}
H_C &=& \frac{I_{mnpq}}{\sqrt{-g} g^{00}} P^{mn} P^{pq} + \frac14 \sqrt{-g} \Bigl[ \frac{I_{mnpq}}{g^{00}} B^{([mn] 0|\mu\nu k)}
B^{(p q 0|\alpha \beta l)} - B^{\mu\nu k \alpha\beta l}\Bigr] g_{\mu\nu,k} g_{\alpha\beta,l} \nonumber \\
&-& \frac{I_{mnpq}}{4 g^{00}} g^{mn} B^{(pq 0| \alpha\beta l)} g_{\alpha\beta,l} + T_2 \; \; \label{eq5d}
\end{eqnarray}
which can be re-written in the following symbolic form
\begin{eqnarray}
H_C = \frac12 \sum^{n}_{i,j=1} \hat{M}_{ij}(q_1, q_2, \ldots, q_n) p_i p_j + \sum^{n}_{i,j=1} \hat{V}_{ij}(q_1, q_2, \ldots, q_n)
\; \; \label{ClassH}
\end{eqnarray}
where $\hat{M}$ is a positive-definite $n \times n$ matrix which is often called the inverse mass matrix (or matrix of inverse masses),
while the $\hat{V}$ matrix is an arbitrary, in principle, symmetric $n \times n$ matrix which is called the potential matrix (or matrix
of the potential energy). Here $n$ is the total number of generalized coordinates $q_1, q_2, \ldots, q_n$. Each matrix element of the
potential matrix $\hat{V}$ in Eq.(\ref{ClassH}) is a polynomial of these generalized coordinates. Also, in Eq.(\ref{ClassH}) the notations
$p_i$ and $p_j$ designate the momenta conjugate to the corresponding generalized coordinates $q_i$ and $q_j$, respectively, i.e., $[ q_k,
p_l] = \delta_{kl}$. In classical mechanics the phase space is flat, and, therefore, the both covariant and contravariant components of
any vector coincide with each other. The form of the Hamiltonian $H_C$, Eq.(\ref{ClassH}), is called normal, and it is well known in
classical mechanics of Hamiltonian systems. Furthermore, more than 90 \% of all problems ever solved in classical Hamiltonian mechanics
with the use of Hamilton methods either have Hamiltonians which are already written in the normal form, or their Hamiltonians can be
reduced to such a form by some canonical transformation(s) of variables.
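This normal form is easy to experiment with numerically. The following minimal sketch (plain Python; the two-degree-of-freedom inverse `mass' matrix and the potential below are invented purely for illustration and have nothing to do with the gravitational $H_C$ itself) integrates Hamilton's equations $\dot q_i = \partial H/\partial p_i$, $\dot p_i = -\partial H/\partial q_i$ for a Hamiltonian of the form Eq.(\ref{ClassH}) with a fourth-order Runge--Kutta step and verifies that the energy is conserved:

```python
def minv(q):
    # toy positive-definite, coordinate-dependent inverse "mass" matrix M(q)
    return [[1.0 + q[1] ** 2, 0.2],
            [0.2, 1.0 + q[0] ** 2]]

def potential(q):
    return 0.5 * (q[0] ** 2 + q[1] ** 2)

def hamiltonian(q, p):
    # normal form: H = (1/2) sum_ij M_ij(q) p_i p_j + V(q)
    m = minv(q)
    kin = 0.5 * sum(m[i][j] * p[i] * p[j] for i in range(2) for j in range(2))
    return kin + potential(q)

def grad(f, x, h=1e-6):
    # central finite-difference gradient of a scalar function
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def rhs(q, p):
    dq = grad(lambda pp: hamiltonian(q, pp), p)            # dq/dt =  dH/dp
    dp = [-c for c in grad(lambda qq: hamiltonian(qq, p), q)]  # dp/dt = -dH/dq
    return dq, dp

def rk4_step(q, p, dt):
    k1q, k1p = rhs(q, p)
    k2q, k2p = rhs([q[i] + 0.5 * dt * k1q[i] for i in range(2)],
                   [p[i] + 0.5 * dt * k1p[i] for i in range(2)])
    k3q, k3p = rhs([q[i] + 0.5 * dt * k2q[i] for i in range(2)],
                   [p[i] + 0.5 * dt * k2p[i] for i in range(2)])
    k4q, k4p = rhs([q[i] + dt * k3q[i] for i in range(2)],
                   [p[i] + dt * k3p[i] for i in range(2)])
    q = [q[i] + dt / 6.0 * (k1q[i] + 2 * k2q[i] + 2 * k3q[i] + k4q[i]) for i in range(2)]
    p = [p[i] + dt / 6.0 * (k1p[i] + 2 * k2p[i] + 2 * k3p[i] + k4p[i]) for i in range(2)]
    return q, p

q, p = [0.3, -0.2], [0.5, 0.1]
e0 = hamiltonian(q, p)
for _ in range(1000):
    q, p = rk4_step(q, p, 1e-3)
drift = abs(hamiltonian(q, p) - e0)  # energy drift over the trajectory
```

For any Hamiltonian in the normal form the same code works unchanged once `minv` and `potential` are replaced, which is precisely the practical advantage of reducing $H_C$ to this form.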
To improve the overall quality of our analogy between metric GR and classical Hamiltonian mechanics one can introduce a new set of
dynamical variables which includes the total momentum of the free gravitational field $P = g_{\alpha\beta} P^{\alpha\beta}$ (a tensor
invariant) and its tensor `projections' $P_{\alpha}^{\beta} = g_{\alpha\gamma} P^{\gamma\beta}$. The corresponding space-like quantities
$P = g_{mn} P^{mn}$ and $P_{m}^{n} = g_{m p} P^{p n}$ are already included in our canonical Hamiltonian $H_C$. By using the formulas
presented above one easily finds the following Poisson brackets:
\begin{eqnarray}
&[& P, P^{ab} ] = [ g_{mn}, P^{ab} ] P^{mn} = \Delta^{ab}_{mn} P^{mn} = P^{ab} \; , \; [ g_{cd}, P ] = g_{mn} [ g_{cd}, P^{mn} ]
= g_{cd} \nonumber \\
&[& g_{\alpha\beta}, P^{\gamma}_{\sigma} ] = \frac12 ( g_{\beta\sigma} \delta^{\gamma}_{\alpha} + g_{\alpha\sigma}
\delta^{\gamma}_{\beta} ) \; \; , \; [ g^{\alpha\beta}, P ] = g^{\alpha\beta} \; \; \;
\nonumber
\end{eqnarray}
and many others; we cannot present all of them explicitly here. Note only that with the total momentum $P$ and its tensor projections
(i.e., $P^{\alpha\beta}, P^{\gamma}_{\sigma}$, etc.) one can write the Hamilton equations in a form which almost coincides with the
analogous equations known for Hamiltonian systems in classical mechanics. This is another interesting direction for future development
of the Hamiltonian formulation(s) of metric GR. Applications of our new canonical variables $\{ g_{\lambda\kappa}, P^{\alpha\beta} \}$
to some interesting problems in metric GR will be considered elsewhere. Relations between our dynamical variables $\{ g_{\lambda\kappa},
P^{\alpha\beta} \}$ and analogous variables used in Dirac formulation of the metric General Relativity $\{ g_{\lambda\kappa},
\pi^{\alpha\beta} \}$ are discussed in the Appendix A.
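The bracket algebra quoted above can be checked numerically in a finite-dimensional toy model: we suppress the spatial dependence (and the delta functions) of the field variables, keep a symmetric pair $(g_{mn}, P^{mn})$ in an illustrative dimension $n = 3$, and realize the Poisson bracket through symmetric finite-difference derivatives, which reproduces the fundamental bracket $[g_{mn}, P^{ab}] = \Delta^{ab}_{mn}$. This is only a sketch of the index algebra, not a field-theoretic computation:

```python
import random

n = 3  # toy space-like dimension

def sym_dir(mu, nu):
    # symmetric perturbation direction for the (mu, nu) component
    e = [[0.0] * n for _ in range(n)]
    if mu == nu:
        e[mu][nu] = 1.0
    else:
        e[mu][nu] = e[nu][mu] = 0.5
    return e

def shift(a, e, s):
    return [[a[i][j] + s * e[i][j] for j in range(n)] for i in range(n)]

def deriv(f, g, p, slot, mu, nu, h=1e-6):
    # symmetric derivative of f(g, p) with respect to g_{mu nu} or P^{mu nu}
    e = sym_dir(mu, nu)
    if slot == 'g':
        return (f(shift(g, e, h), p) - f(shift(g, e, -h), p)) / (2 * h)
    return (f(g, shift(p, e, h)) - f(g, shift(p, e, -h))) / (2 * h)

def bracket(f1, f2, g, p):
    # [F1, F2] = sum_{mu,nu} (dF1/dg_{mu nu})(dF2/dP^{mu nu}) - (F1 <-> F2);
    # with symmetric derivatives this yields [g_{mn}, P^{ab}] = Delta^{ab}_{mn}
    tot = 0.0
    for mu in range(n):
        for nu in range(n):
            tot += (deriv(f1, g, p, 'g', mu, nu) * deriv(f2, g, p, 'p', mu, nu)
                    - deriv(f2, g, p, 'g', mu, nu) * deriv(f1, g, p, 'p', mu, nu))
    return tot

random.seed(2)
g = [[0.0] * n for _ in range(n)]
p = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        g[i][j] = g[j][i] = random.uniform(-1.0, 1.0)
        p[i][j] = p[j][i] = random.uniform(-1.0, 1.0)

P_tot = lambda g, p: sum(g[m][k] * p[m][k] for m in range(n) for k in range(n))
b1 = bracket(P_tot, lambda g, p: p[0][1], g, p)  # [P, P^{01}], expect P^{01}
b2 = bracket(lambda g, p: g[0][2], P_tot, g, p)  # [g_{02}, P], expect g_{02}
```

Running this confirms, to finite-difference accuracy, the brackets $[P, P^{ab}] = P^{ab}$ and $[g_{cd}, P] = g_{cd}$ quoted above.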
\section{Discussions and Conclusion}
Thus, we have shown that the canonical Hamiltonian $H_C$ of the free gravitational field(s), Eq.(\ref{eq5a}), can be reduced to the natural
form which includes a pure quadratic function of the space-like momenta $P^{mn}$ with a positive coefficient in front of it. Indeed, the
factor in front of the $P^{mn} P^{pq}$ product in the $H_C$ Hamiltonian is the positive-definite space-like tensor of
the fourth rank $I_{mn pq}$ (or $\frac{1}{\sqrt{-g}} I_{mn pq}$). This factor can be considered as an effective inverse `quasi-mass' of the
free gravitational field in metric GR. Also, as directly follows from the explicit form of the canonical Hamiltonian $H_C$, Eq.(\ref{eq5a}),
each of the remaining terms in canonical Hamiltonian $H_C$, Eq.(\ref{eq5a}), is a polynomial function of contravariant components
$g^{\alpha\beta}$ of the metric tensor. The maximal power of such polynomials upon $g^{\alpha\beta}$ does not exceed eight. Some terms in
the $H_C$ also include the factors $\sqrt{-g}$ (or $\frac{1}{\sqrt{-g}})$ and/or $g^{00}$.
The new canonical variables $\{ g_{\alpha\beta}, P^{\gamma\rho} \}$ have been constructed for the metric GR. The total number of canonical
variables does not change and always equals $2 d$. The Poisson brackets between these variables are: $[ g_{\alpha\beta},
P^{\gamma\rho} ] = \Delta^{\gamma\rho}_{\alpha\beta} = \frac12 \Bigl( \delta^{\gamma}_{\alpha} \delta^{\rho}_{\beta} + \delta^{\rho}_{\alpha}
\delta^{\gamma}_{\beta} \Bigr) = [ P_{\gamma\rho}, g^{\alpha\beta} ]$, $[ g_{\alpha\beta}, g_{\gamma\sigma} ] = 0$ and $[ P^{\alpha\beta},
P_{\gamma\rho} ] = 0$. This indicates clearly that these new dynamical variables are truly canonical and can be used in the new Hamiltonian
formulation of General Relativity. The analogous set of dynamical variables $\{ g^{\alpha\beta}, P_{\gamma\rho} \}$ is the dual set of
canonical variables which can also be used to develop a different (but equivalent!) Hamiltonian formulation of the metric GR.
Thus, in this study we have completed the development of the complete and correct Hamiltonian formulation of the metric General Relativity. Also,
we have determined all essential (fundamental and secondary) Poisson brackets which can now be used to perform a large amount of analytical
and numerical calculations. The fundamental Poisson brackets are defined between all components of the gravitational field and the corresponding
momenta (or components of the momentum tensor). The secondary Poisson brackets define commutation relations between arbitrary, in principle,
analytical functions of coordinates (components of the gravitational field) and momenta. These Poisson brackets become the main working tools
of the metric General Relativity, which can now be considered as a Hamiltonian system. In addition to this, our Poisson brackets can be used
to solve various problems in metric GR, e.g., to obtain trajectories, derive conservation laws, find integrals of motion, and derive and
investigate the laws of time-evolution for different quantities, vectors and tensors. A remarkable result obtained in this study should be
emphasized again: the canonical Hamiltonian $H_C$, which describes the time-evolution of relativistic gravitational fields, can be reduced to
its natural form, and this form essentially coincides with the Hamiltonian of a non-relativistic system of $N (= d)$ interacting particles. The
physical sense of the dynamical variables is obviously very different in these two cases, but the almost identical coincidence of their
Hamiltonians was absolutely unexpected and shocking.
In conclusion, it should be emphasized again that the first non-contradictory Hamiltonian formulation of metric GR was presented by P.A.M.
Dirac in 1958 \cite{Dir58}. The second `alternative' formulation was developed in \cite{K&K}. Both these correct Hamiltonian
formulations of metric GR preserve the complete diffeomorphism as the gauge symmetry of this theory. In our earlier paper \cite{FK&K} we
have shown that these two Hamiltonian formulations are related by a true canonical transformation $\{ g^{\alpha\beta}, \pi^{\gamma\rho} \}
\rightarrow \{ g^{\alpha\beta}, p^{\gamma\rho} \}$. In this study we have solved a number of remaining problems which were never discussed
in earlier papers. In particular, we obtained formulas for various Poisson brackets which are needed in different Hamiltonian formulation(s)
of the metric GR. This also includes the Poisson brackets for the two sets of basic dynamical variables: (a) the set of straight (or Dirac)
dynamical variables, e.g., $\{ g_{\alpha\beta}, \pi^{\gamma\rho} \}$ (or $\{ g_{\alpha\beta}, P^{\gamma\rho} \}$), and (b) the dual set of basic
dynamical variables $\{ g^{\alpha\beta}, \pi_{\gamma\rho} \}$ (or $\{ g^{\alpha\beta}, P_{\gamma\rho} \}$). The fundamental relation between
these two sets of dynamical variables is given by the Poisson bracket, Eq.(\ref{eq1551}). In our new dynamical variables the same relation
takes the form $[ g_{\alpha\beta}, P^{\mu\nu}] = \Delta^{\mu\nu}_{\alpha\beta} = [ P_{\alpha\beta}, g^{\mu\nu}]$. Applications of our
Hamiltonian formulation of the metric GR to some interesting problems will be considered in subsequent studies.
Finally, as we all know, many physicists have considered the General Relativity (or metric GR, in our words) as ``the most beautiful
of all existing physical theories'' (see, e.g., \cite{LLTF}, page 228). Here I wish to note that the correct Hamiltonian formulation
of the metric General Relativity (or Gravity, for short) is also a very beautiful physical theory. Furthermore, the truly covariant, very
powerful and explicitly beautiful apparatus of this theory corrects everybody (even its authors), if they step away from the unique, truly
covariant and correct road of the actual theory. No comparison can be made with the ugly form of the original geometro-dynamics (see Appendix B)
and similar Hamiltonian-like creations, which were declared to be `canonically related' to the geometro-dynamics.
I am grateful to my friends N. Kiriushcheva, S.V. Kuzmin and D.G.C. (Gerry) McKeon (all from the University of Western Ontario, London,
Ontario, Canada) for helpful discussions and inspiration.
{\bf Appendix A}
In this Appendix we discuss relations between the dynamical variables which are used in our and Dirac's formulations of the metric General Relativity.
In our earlier papers \cite{FK&K} we have shown that the dynamical variables $\{ g_{\lambda\kappa}, \pi^{\alpha\beta} \}$, which are used in the $K\&K$
formulation of the metric GR, and the analogous Dirac dynamical variables $\{ g_{\lambda\kappa}, p^{\alpha\beta} \}$ of the metric GR \cite{Dir58} are
related to each other by a canonical transformation. This canonical transformation can be written in the form \cite{FK&K} (from Dirac to $K\&K$)
\begin{eqnarray}
g_{\lambda\kappa} \rightarrow g_{\lambda\kappa} \; \; \; {\rm and} \; \; \; p^{\alpha\beta} \rightarrow \pi^{\alpha\beta} - \frac12 \sqrt{- g}
A^{(\alpha\beta) 0 \mu \nu k} g_{\mu\nu,k} \; \; \label{p_mom}
\end{eqnarray}
where the quantity $A^{(\alpha\beta) 0 \mu \nu k}$ is
\begin{eqnarray}
A^{(\alpha\beta) 0 \mu \nu k} = B^{((\alpha\beta) 0 \mid \mu \nu k)} - g^{0 k} E^{(\alpha\beta) \mu\nu} + 2 g^{0 \mu} E^{(\alpha\beta) k\nu}
\end{eqnarray}
where $B^{((\alpha\beta) 0 \mid \mu \nu k)}$ is the $B^{(\alpha\beta 0 \mid \mu \nu k)}$ quantity (see, Eq.(\ref{Bcoef})) symmetrized in terms of
all $\alpha \leftrightarrow \beta$ permutations. Analogously, the $E^{(\alpha\beta) \mu\nu}$ and $E^{(\alpha\beta) k\nu}$ are the two symmetrized
quantities (in respect to the $\alpha \leftrightarrow \beta$ permutations), i.e.,
\begin{eqnarray}
E^{(\alpha\beta) \mu\nu} = e^{\alpha\beta} e^{\mu\nu} - \frac12 ( e^{\alpha\mu} e^{\beta\nu} + e^{\alpha\nu} e^{\beta\mu} ) \; \; {\rm and} \;
E^{(\alpha\beta) k\nu} = e^{\alpha\beta} e^{k\nu} - \frac12 ( e^{\alpha k} e^{\beta\nu} + e^{\alpha\nu} e^{\beta k} ) \nonumber
\end{eqnarray}
respectively.
As is shown in the main text, the relation between our dynamical variables and the dynamical variables introduced in \cite{K&K} is $g_{\lambda\kappa}
\rightarrow g_{\lambda\kappa}$ and $P^{\alpha\beta} \rightarrow \pi^{\alpha\beta}$, where
\begin{eqnarray}
P^{\alpha\beta} \rightarrow \pi^{\alpha\beta} - \frac12 \sqrt{- g} B^{(\alpha\beta 0 \mid \mu \nu k)} g_{\mu\nu,k} \; \; \;
\end{eqnarray}
From the last equation it is easy to obtain the following expression for our momenta $P^{\alpha\beta}$ written in terms of the Dirac momenta
$p^{\alpha\beta}$
\begin{eqnarray}
P^{\alpha\beta} = p^{\alpha\beta} - \frac12 \sqrt{- g} \Bigl[ B^{([\alpha\beta] 0 \mid \mu \nu k)} + g^{0 k} E^{(\alpha\beta)
\mu\nu} - 2 g^{0 \mu} E^{(\alpha\beta) k\nu} \Bigr] g_{\mu\nu,k} \; \; \label{P_p}
\end{eqnarray}
where the quantity $B^{([\alpha\beta] 0 \mid \mu \nu k)}$ is the $B^{(\alpha\beta 0 \mid \mu \nu k)}$ coefficient, Eq.(\ref{Bcoef}),
anti-symmetrized with respect to all permutations of the $\alpha$ and $\beta$ indexes. The transformation of dynamical variables $g_{\lambda\kappa}
\rightarrow g_{\lambda\kappa}$ and $P^{\alpha\beta} \rightarrow p^{\alpha\beta}$, Eq.(\ref{P_p}), is a canonical transformation (this can be
shown in the same way as in the main text; see also \cite{K&K}). Its inverse transformation is also canonical. This means that
currently we have three different sets of dynamical variables which can be applied for the known and new Hamiltonian formulations of the metric
GR: (a) Dirac variables, (b) $K\&K$ variables \cite{K&K}, and (c) our variables defined in this study. These three sets of dynamical variables
are related to each other by simple canonical transformations. \\
{\bf Appendix B}
In this Appendix we want to show that the dynamical variables which are used in geometro-dynamics \cite{ADM} are not canonical. Therefore, this theory
has nothing to do with the regular Hamiltonian formulation(s) of the metric GR. Furthermore, this theory (geometro-dynamics) cannot be canonically
related to any of the correct Hamiltonian formulations known for the metric GR. On the other hand, all similar `theories' which are canonically
related to the geometro-dynamics are equally wrong quasi-Hamiltonian constructions which cannot help anybody to solve the problems currently known and
constantly arising in the metric GR.
The history of the creation of geometro-dynamics, which is also often called the ADM gravity, is straightforward. After the obvious success of Dirac's
paper \cite{Dir58}, a small group of young authors, including Arnowitt, Deser and Misner \cite{ADM} (under the general supervision of J.A. Wheeler),
decided to create some alternative (but Dirac-like!) formulation of the metric GR. The dynamical variables in this ADM approach were chosen as follows.
The six generalized coordinates coincide with the corresponding space-space components $g_{pq}$ of the metric tensor $g_{\alpha\beta}$ defined
in the four-dimensional space-time (or (3+1)-dimensional space-time, if we want to be historically precise). The four remaining coordinates were chosen
in the form: the ``lapse'' $N = \frac{1}{\sqrt{- g^{00}}}$ and three ``shifts'' $N^{k} = - \frac{g^{0k}}{g^{00}}$, where $k = 1, 2, 3$ (very likely, the
idea to use these four coordinates was proposed by Wheeler). The corresponding momenta $\Pi^{mn}$ were simply taken from Dirac's paper \cite{Dir58}
(see also our Appendix A), i.e., they coincide with the $p^{mn}$ momenta introduced by Dirac (see Appendix A). The four remaining momenta were not
defined in the original ADM papers. Probably, this was done, since these four momenta lead to the (primary) constraints anyway. In general, it is
very hard to describe and discuss the internal logic of this quasi-theory, but we have to note that geometro-dynamics was carefully analyzed earlier
in \cite{KK2011} with a large number of details and references.
In fact, we do not need to bother ourselves with a deep discussion of the ADM formulation, since we already have their ten generalized coordinates (one
lapse $N$, three shifts $N^{k}$ and six components of the metric tensor $g_{pq}$) and six momenta $\Pi^{mn}$ which coincide with the momenta $p^{mn}$
defined in Dirac's paper. By using only these dynamical variables of ADM gravity we can prove that these variables are not canonical. To prove this
statement we need to calculate the two following Poisson brackets: (1) between the ``lapse'' $N$ and the $\Pi^{mn}$ (or $p^{mn}$) momenta, and (2) between
the ``shifts'' and the same $\Pi^{mn}$ (or $p^{mn}$) momenta. If this theory is truly Hamiltonian, then all these Poisson brackets must equal zero
identically. Now we want to check this fact. The first Poisson bracket is
\begin{eqnarray}
&[&N, \Pi^{mn} ] = [ \frac{1}{\sqrt{- g^{00}}}, p^{mn} ] = - \frac{1}{\sqrt{(- g^{00})^{3}}} [ g^{00}, p^{mn} ] \nonumber \\
&=&
\frac{1}{\sqrt{(- g^{00})^{3}}} \frac12 ( g^{0 m} g^{0 n} + g^{0 n} g^{0 m} ) = \frac{1}{\sqrt{(- g^{00})^{3}}} g^{0 m} g^{0 n} \ne 0
\; , \; \label{N}
\end{eqnarray}
while for the second bracket one finds
\begin{eqnarray}
&[&N^{k}, \Pi^{mn} ] = [ -\frac{g^{0k}}{g^{00}}, p^{mn} ] = \frac{1}{2 g^{00}} ( g^{0 m} g^{0 n} + g^{0 n} g^{0 m} ) -
\frac{1}{( g^{00} )^{2}} g^{0 k} g^{0 m} g^{0 n} \nonumber \\
&=& \frac{1}{2 ( g^{00} )^{2}} ( g^{0 0} g^{0 m} g^{k n} + g^{0 0} g^{0 n} g^{k m}
- 2 g^{0 k} g^{0 m} g^{0 n} ) \ne 0 \; , \; \label{N-k}
\end{eqnarray}
where $k = 1, 2, 3$. So, I am sorry to say, but none of these four Poisson brackets equals zero identically. Therefore, these dynamical variables are
not canonical, and a theory which uses these variables is not a Hamiltonian theory. Furthermore, it cannot be transformed into such a theory by any
correct procedure and/or by applying any canonical transformation. Now, we can only guess that P.A.M. Dirac calculated these four Poisson brackets
at the end of the 1950's. Very likely, he was trying to say something to that ``enthusiastic group of young fellows'' (he worked in Florida at that time),
but those fellows simply ignored all his comments and doubts about their new and `far-advanced' Hamiltonian formulation of the metric GR. Finally,
these young authors created the new `super-advanced' geometro-dynamics, which was later called (and considered) by Hawking \cite{Hawk} a theory
which ``contradicts the whole spirit of General Relativity''. However, such a contradiction is only a small problem for geometro-dynamics, which
proved to be incorrect and incomplete in its applications to the real problems of metric gravity (more details can be found in \cite{KK2011}).
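The non-vanishing of these brackets can also be checked numerically. In the following pointwise sketch (spatial dependence and delta functions suppressed; the sample metric is invented for illustration; the bracket convention $[g_{mn}, p^{ab}] = \Delta^{ab}_{mn}$ of the main text is assumed) the bracket of any function $F(g)$ with $p^{mn}$ reduces to the symmetric derivative $\partial F/\partial g_{mn}$, which we evaluate by finite differences for the lapse and one shift:

```python
def matinv(a):
    # Gauss-Jordan matrix inverse with partial pivoting
    m = len(a)
    aug = [row[:] + [1.0 if i == j else 0.0 for j in range(m)]
           for i, row in enumerate(a)]
    for c in range(m):
        piv = max(range(c, m), key=lambda r: abs(aug[r][c]))
        aug[c], aug[piv] = aug[piv], aug[c]
        d = aug[c][c]
        aug[c] = [x / d for x in aug[c]]
        for r in range(m):
            if r != c:
                f = aug[r][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [row[m:] for row in aug]

def dsym(f, g, mu, nu, h=1e-6):
    # symmetric finite-difference derivative of f(g) with respect to g_{mu nu};
    # for the assumed bracket convention this equals [f, p^{mu nu}]
    e = [[0.0] * 4 for _ in range(4)]
    if mu == nu:
        e[mu][nu] = 1.0
    else:
        e[mu][nu] = e[nu][mu] = 0.5
    gp = [[g[i][j] + h * e[i][j] for j in range(4)] for i in range(4)]
    gm = [[g[i][j] - h * e[i][j] for j in range(4)] for i in range(4)]
    return (f(gp) - f(gm)) / (2 * h)

def lapse(g):          # N = 1 / sqrt(-g^{00})
    return (-matinv(g)[0][0]) ** -0.5

def shift1(g):         # N^1 = -g^{01} / g^{00}
    gi = matinv(g)
    return -gi[0][1] / gi[0][0]

# a sample metric with non-zero g^{0k} components
g = [[-1.0, 0.2, 0.1, 0.0],
     [0.2, 1.0, 0.0, 0.0],
     [0.1, 0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0, 1.0]]
bN = dsym(lapse, g, 1, 1)   # [N,   p^{11}]
bS = dsym(shift1, g, 1, 1)  # [N^1, p^{11}]
```

Both numbers come out non-zero, in agreement with the analytical conclusion that the ADM variables fail to be canonical.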
\section{Introduction}
Deploying multiple-antenna nodes in communication networks offers several significant benefits including a multiplicative increase in transmission rates~\cite{MIMO}. This makes multiple-antenna nodes an integral part of data communication networks.
Broadcast channels~\cite{BC}, as a building block of such networks, model the scenario where a transmitter wants to send a number of messages to multiple receivers through a shared medium. The capacity region of the Gaussian multiple-input multiple-output (MIMO) broadcast channel with two receivers is known when the receivers have both common- and private-message requests~\cite{MIMOBCwithCommon}; the capacity region of the channel with more than two receivers is also known when the receivers have only private-message requests~\cite{MIMOBCwithoutCommon}. These capacity results quantify the increase in transmission rates achieved by increasing the number of antennas at the nodes.
In communication networks, receivers may know a priori some of the messages requested by other receivers as receiver messages side information (RMSI). This form of side information
appears in, for example, multimedia broadcasting with packet loss, and the downlink phase of applications
modeled by multi-way relay channels~\cite{MWRCFullExchange}. It is known that using RMSI in code design can increase transmission rates over the Gaussian broadcast channel with single-antenna nodes~\cite{BCwithSI2UsersGeneral,ThreeReceiverAWGNwithMSI}. The capacity region of the Gaussian MIMO broadcast channel with two receivers is known when each receiver knows the private message requested by the other receiver as RMSI, i.e., complementary RMSI~\cite{MIMOandRMSI}. For the considered setting (which is equivalent to multicasting a common message to the receivers), this result quantifies the further possible increase in transmission rates due to RMSI. However, the capacity region of the Gaussian MIMO broadcast channel with non-complementary RMSI is not known. It is particularly difficult to characterize the capacity region where there are more than two receivers. This is because non-complementary RMSI leads to a scenario that implicitly involves transmitting both common and private messages, and as mentioned earlier, the capacity region of the Gaussian MIMO broadcast channel with more than two receivers has resisted solution when the receivers have both common- and private-message requests.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{FiguresPerfectCSITCSIR/MIMOSystemModel.pdf}
\vskip-0pt
\caption{The three-receiver Gaussian MIMO broadcast channel where $\mathbf{X}_{0}\in\mathbb{C}^{N_0\times 1}$, $\mathbf{H}_i\in\mathbb{C}^{N_i\times N_0}$, $\mathbf{Y}_i\in\mathbb{C}^{N_i\times 1}$, $\mathbf{Z}_i\in\mathbb{C}^{N_i\times 1}$, and $\mathbf{Z}_i\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_i)$, $i\in\{1,2,3\}$. $M_i$ is the message requested by receiver~$i$, $K_i\subseteq \{M_1,M_2,M_3\}\setminus M_i$ is the set of messages known a priori to receiver~$i$, and $\hat{M}_i$ is the decoded message at receiver~$i$.}
\vskip-0pt
\label{Fig:SystemModel}
\end{figure}
When the capacity region of a Gaussian channel is difficult to determine, it has been of great interest to derive the degrees-of-freedom (DoF) region of the channel, such as for the MIMO interference channel~\cite{DoF2PairIC}, and the MIMO X channel~\cite{DoF2PairXJafar,DoF2PairXMoh}. The DoF region characterizes the limit of the capacity region normalized by the logarithm of the transmission power as the power goes to infinity. So establishing the DoF region of the Gaussian MIMO broadcast channel with RMSI quantifies the further possible increase in transmission rates (due to RMSI) in the high signal-to-noise ratio regime. Concerning DoF results for MIMO broadcast channels with RMSI, Jafar and Shamai~\cite{DoF2PairXJafar} characterized the sum-DoF of the Gaussian MIMO X channel (which includes the two-receiver Gaussian MIMO broadcast channel as a special case) where all the nodes have the same number of antennas, and one of the receivers knows a priori one of the messages requested by the other receiver. Zhang and Elia~\cite{MISOBCReceiverCache} considered the fading broadcast channel with an $N$-antenna transmitter, and $N$ single-antenna receivers equipped with a cache. They assumed that the transmitter knows partially the current channel state, and perfectly a delayed version of it. They characterized the sum-DoF within a multiplicative factor of four.
\subsection{Contributions}\label{Sec:mainresults}
In this work, we consider the three-receiver Gaussian MIMO broadcast channel with an arbitrary number of antennas at the transmitter and the receivers. We assume that (i) channel matrices are known to the transmitter and all the receivers, (ii) the receivers have private-message requests, and (iii) each receiver may know some of the messages requested by the other receivers as RMSI. This results in 16 possible non-isomorphic RMSI configurations in the sense that we cannot transform one to another by re-labeling the receivers and their requested messages. We derive tight inner and outer bounds on the DoF region of the channel for all 16 possible RMSI configurations, thereby establishing their DoF region. We construct our proposed schemes by utilizing both the null space and the side information of the receivers. We derive our outer bounds by upper bounding the DoF region for enhanced versions of the channel. In addition, in the case where all the nodes have the same number of antennas, we draw an analogy between the DoF region, and the capacity region of the index coding problem~\cite{CapacityRegionIndexCoding1}.
\section{System Model}\label{Section:SystemModel}
We consider the three-receiver Gaussian MIMO broadcast channel, depicted in Fig.~\ref{Fig:SystemModel}, where the transmitter is equipped with $N_0$ antennas, and receiver~$i$, $i\in\{1,2,3\}$, is equipped with $N_i$ antennas. In this channel, at time instant~$j$, we have
\begin{align*}
\mathbf{Y}_{i,j}=\mathbf{H}_{i}\mathbf{X}_{0,j}+\mathbf{Z}_{i,j}.
\end{align*}
$\mathbf{X}_{0,j}\in\mathbb{C}^{N_0\times 1}$ is the transmitted vector where $\mathbb{C}$ represents the set of complex numbers, $\mathbf{Y}_{i,j}\in\mathbb{C}^{N_i\times 1}$ is the channel-output vector at receiver~$i$, $\mathbf{H}_i\in\mathbb{C}^{N_i\times N_0}$ is the channel matrix between the transmitter and receiver~$i$, and $\mathbf{Z}_{i,j}\in\mathbb{C}^{N_i\times 1}$ is a white circularly symmetric complex Gaussian noise with zero mean and an $N_i\times N_i$ identity covariance matrix, i.e., $\mathbf{Z}_{i,j}\sim\mathcal{CN}(\mathbf{0},\mathbf{I}_i)$. We represent random variables using upper-case letters, and their realizations using the corresponding lower-case letters. We denote the $k$-th entry of a column vector $\mathbf{A}$ as $A_{[k]}$, the entry in the $k$-th row and the $\ell$-th column of a matrix $\mathbf{B}$ as $B_{[k\ell]}$, the $k$-th row of a matrix $\mathbf{B}$ as $\mathbf{B}_{[k:]}$, and the $\ell$-th column of a matrix $\mathbf{B}$ as $\mathbf{B}_{[:\ell]}$. Then we have
\begin{align*}
Y_{i,j[k]}=\mathbf{H}_{i[k:]}\mathbf{X}_{0,j}+Z_{i,j[k]},\;\; k\in\{1,2,\ldots,N_i\}.
\end{align*}
We assume that the channel coefficients, $H_{i[k\ell]}$, are generated independently according to a continuous distribution. This yields the rank of the matrix $\mathbf{H}_i,\;i\in\{1,2,3\},$ to be almost surely $\min\{N_i,N_0\}$, i.e., full rank. This also yields the rank of the matrix
\begin{align*}
\begin{bmatrix}
\mathbf{H}_1^T&\mathbf{H}_2^T&\mathbf{H}_3^T
\end{bmatrix}
\end{align*}
to be almost surely $\min\{N_0,\sum_{i=1}^{3}N_i\}$ where $[\cdot]^T$ denotes the transpose operation.
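These almost-sure rank statements are easy to verify numerically. The sketch below (plain Python with real Gaussian entries instead of complex ones for simplicity, and illustrative antenna numbers $N_0 = 5$, $(N_1,N_2,N_3) = (2,3,4)$) computes matrix ranks by Gaussian elimination:

```python
import random

def rank(mat, tol=1e-9):
    # numerical rank via row reduction with partial pivoting
    m = [row[:] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(m[i][c]))
        if abs(m[piv][c]) < tol:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

random.seed(1)
N0, Ns = 5, [2, 3, 4]  # illustrative antenna numbers
H = [[[random.gauss(0, 1) for _ in range(N0)] for _ in range(Ni)] for Ni in Ns]
individual = [rank(Hi) for Hi in H]                   # min(Ni, N0) for each i
stacked_rank = rank([row for Hi in H for row in Hi])  # min(N0, sum_i Ni)
```

Each $\mathbf{H}_i$ comes out with rank $\min\{N_i,N_0\}$, and the stacked matrix with rank $\min\{N_0,\sum_i N_i\} = 5$, as the continuous-distribution assumption predicts.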
Considering $n$ uses of the channel, the intended message for receiver~$i$, $M_i,\;i\in\{1,2,3\},$ is an $nR_i$-bit message, and is uniformly distributed over the set $\mathcal{M}_i=\{0,1,\ldots,2^{nR_i}-1\}$. The transmitted codeword $\mathbf{X}_{0}^n=\left(\mathbf{X}_{0,1},\mathbf{X}_{0,2},\ldots,\mathbf{X}_{0,n}\right)$, which is a function of source messages, $\{M_i\}_{i=1}^{3}$, has the power constraint of
\begin{equation}\label{powerconstraint}
\sum_{j=1}^{n}\text{tr}\left(\mathbf{X}_{0,j}(m_1,m_2,m_3)\mathbf{X}_{0,j}^*(m_1,m_2,m_3)\right)\leq nP,
\end{equation}
for any $(m_1,m_2,m_3)$ where $[\cdot]^*$ denotes the conjugate transpose operation, and $\text{tr}(\cdot)$ the trace of a square matrix.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{FiguresPerfectCSITCSIR/NonisomorphicGraphs.pdf}
\vskip-0pt
\caption{Non-isomorphic side information graphs modeling all possible side information configurations.}
\vskip-0pt
\label{Fig:Graphs}
\end{figure}
Receiver~$i$, $i\in\{1,2,3\}$, knows a priori an ordered set of messages $K_i\subseteq\{M_1,M_2,M_3\}\setminus M_i$ as RMSI. We model the side information configuration of the channel by a side information graph $\mathcal{G}=(\mathcal{V}_\mathcal{G}, \mathcal{A}_\mathcal{G})$ where $\mathcal{V}_\mathcal{G}=\{1,2,3\}$ is the set of \textit{vertices}, and $\mathcal{A}_\mathcal{G}$ is the set of \textit{arcs}. An arc from vertex~$i$ to vertex~$i'$ exists if and only if receiver~$i$ knows $M_{i'}$ as side information. Then the set of outneighbors of vertex~$i$ is $\mathcal{O}_i=\{i'\mid M_{i'}\in K_i\}$. As an example, the graph
\begin{center}
\vskip-3pt
\includegraphics[width=0.11\textwidth]{FiguresPerfectCSITCSIR/AnExampleGraph.pdf}
\end{center}
\vskip-7pt
represents the case where $K_1=\emptyset$, $K_2=\{M_1,M_3\}$, and $K_3=\{M_2\}$. Any side information configuration can be modeled by one of the 16 graphs shown in Fig.~\ref{Fig:Graphs}. These are all possible non-isomorphic RMSI configurations in the sense that we cannot transform one to another by re-labeling the receivers and their requested messages. For instance, the graph
\begin{center}
\vskip-3pt
\includegraphics[width=0.11\textwidth]{FiguresPerfectCSITCSIR/IsomorphicExample.pdf}
\end{center}
\vskip-7pt
is transformed to $\mathcal{G}_{11}$ by re-labeling vertex~1 as vertex~3, vertex~2 as vertex~1, and vertex~3 as vertex~2.
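The notion of isomorphism used here, re-labeling receivers together with their requested messages, amounts to a brute-force check over vertex permutations. In the sketch below, the first graph encodes the example from the text ($K_1=\emptyset$, $K_2=\{M_1,M_3\}$, $K_3=\{M_2\}$), the second is its re-labeling under $1\to 3$, $2\to 1$, $3\to 2$, and the third (a directed 3-cycle) is genuinely different:

```python
from itertools import permutations

def isomorphic(g1, g2):
    # g1, g2: dict vertex -> set of out-neighbours (the receiver's side information)
    verts = sorted(g1)
    if sorted(g2) != verts:
        return False
    for perm in permutations(verts):
        relabel = dict(zip(verts, perm))
        # check that every arc u -> v of g1 maps onto an arc of g2, and vice versa
        if all({relabel[v] for v in g1[u]} == g2[relabel[u]] for u in verts):
            return True
    return False

g_a = {1: set(), 2: {1, 3}, 3: {2}}   # K_1 = {}, K_2 = {M_1, M_3}, K_3 = {M_2}
g_b = {1: {2, 3}, 2: {1}, 3: set()}   # g_a re-labeled by 1->3, 2->1, 3->2
g_c = {1: {2}, 2: {3}, 3: {1}}        # a directed 3-cycle, not isomorphic to g_a
```

A check of this kind over all $3^6$ possible arc sets reproduces the 16 non-isomorphic configurations of Fig.~\ref{Fig:Graphs}.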
A $(2^{nR_1},2^{nR_2},2^{nR_3},n)$ code for the channel consists of an encoding function
\begin{align*}
f: \mathcal{M}_1\times\mathcal{M}_2\times\mathcal{M}_3\rightarrow \mathbb{C}^{N_0\times n},
\end{align*}
with the power constraint in \eqref{powerconstraint} where $\times$ denotes the Cartesian product when it is used for sets. Then the transmitted codeword is $\mathbf{X}_0^n=f(M_1,M_2,M_3)$. This code also consists of decoding functions
\begin{align*}
g_i:\mathbb{C}^{N_i\times n}\times\mathcal{K}_i\rightarrow \mathcal{M}_i,\;i\in\mathcal{V}_\mathcal{G},
\end{align*}
where
\begin{align*}
\mathcal{K}_i=\bigotimes_{\ell\in\mathcal{O}_i}\mathcal{M}_\ell.
\end{align*}
For instance, if $K_1=\{M_2,M_3\}$, we have $\mathcal{K}_1=\mathcal{M}_2\times\mathcal{M}_3$. Then the decoded message at receiver~$i$ is $\hat{M}_i=g_i\left(\mathbf{Y}_i^{n},K_i\right)$. The average probability of error for this code is defined as
\begin{align*}
P_e^{(n)}=P\left((\hat{M}_1,\hat{M}_2,\hat{M}_3)\neq({M}_1,{M}_2,{M}_3)\right).
\end{align*}
\begin{definition}
A rate triple $(R_1(P),R_2(P),R_3(P))$ is said to be achievable if there exists a sequence of $(2^{nR_1},2^{nR_2},2^{nR_3},n)$ codes with $P_e^{(n)}\rightarrow 0$ as $n\rightarrow \infty$.
\end{definition}
\begin{definition}
The capacity region of the channel, $\mathcal{C}(P)$, is the closure of the set of all achievable rate triples $(R_1(P),R_2(P),R_3(P))$.
\end{definition}
\begin{definition}
A DoF triple $(d_1,d_2,d_3)$ is said to be achievable if, for any $(w_1,w_2,w_3)\in\mathbb{R}^3_{+}$, there exists an achievable rate triple $(R_1(P),R_2(P),R_3(P))$ such that
\begin{align*}
\sum_{i=1}^3 w_id_i\leq\underset{P\rightarrow \infty}{\lim\sup}\left[\sum_{i=1}^{3}\frac{w_iR_i(P)}{\log{P}}\right],
\end{align*}
where $\mathbb{R}_{+}$ represents the set of positive real numbers.
\end{definition}
\begin{definition}
The DoF region of the channel is the set $\mathcal{D}$ which is defined as~\cite{DoF2PairXJafar}
\begin{align*}
&\mathcal{D}\hskip-3pt=\hskip-3pt\Bigg{\{}\hskip-2pt(d_1,d_2,d_3)\in\mathbb{R}^3_+\mid \forall (w_1,w_2,w_3)\in\mathbb{R}^3_+ \\
&\hskip33pt\sum_{i=1}^3 w_id_i\leq\underset{P\rightarrow \infty}{\lim\sup}\left[\left[\underset{\mathcal{C}(P)}{\sup}{\sum_{i=1}^{3}w_iR_i(P)}\right]\frac{1}{\log{P}}\right]\Bigg{\}}.
\end{align*}
\end{definition}
\section{DoF Region with RMSI}
In this section, we characterize the DoF region of the channel for all 16 possible non-isomorphic side information configurations, stated as Theorem~\ref{Theorem:DoF}.
\begin{theorem}\label{Theorem:DoF}
The DoF region of the three-receiver Gaussian MIMO broadcast channel with the side information graph $\mathcal{G}_k$ is $\mathcal{D}_k$ where
\begin{align*}
\mathcal{D}_k=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid\\
d_1+d_2+d_3&\leq N_0, \\
d_i&\leq N_i,\;i\in\mathcal{V}_\mathcal{G}\Big{\}},\;k\in\{1,2,\ldots, 6\},\\
\mathcal{D}_7=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid\\
d_1+d_2&\leq N_0,\\
d_1+d_3&\leq N_0,\\
d_2+d_3&\leq N_0,\\
d_i&\leq N_i,\;i\in\mathcal{V}_\mathcal{G}\Big{\}},
\end{align*}
\begin{align*}
\mathcal{D}_k=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid\\
d_1+d_2&\leq N_0,\\
d_1+d_3&\leq N_0,\\
d_1+d_2+d_3&\leq\max\{N_0,N_2+N_3\},\\
d_i&\leq N_i,\;i\in\mathcal{V}_\mathcal{G}\Big{\}},\;k\in\{8,9,10\},\\
\mathcal{D}_k=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid\\
d_1+d_2&\leq N_0,\\
d_1+d_3&\leq N_0,\\
d_i&\leq N_i,\;i\in\mathcal{V}_\mathcal{G}\Big{\}},\;k\in\{11,12,13\},\\
\mathcal{D}_k=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid\\
d_1+d_3&\leq N_0,\\
d_2&\leq N_0,\\
d_i&\leq N_i,\;i\in\mathcal{V}_\mathcal{G}\Big{\}},\;k\in\{14,15\},\\
\text{and}\hskip90pt&\\
\mathcal{D}_{16}=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid
d_i\leq\min\{N_0,N_i\},\;i\in\mathcal{V}_\mathcal{G}\Big{\}}.
\end{align*}
\end{theorem}
\begin{IEEEproof}
We prove the achievability of the integer DoF points within $\mathcal{D}_k,\;k\in\{1,2,\ldots,16\}$, in Section~\ref{Sec:IntegerDoF}, the achievability of the fractional DoF points in Section~\ref{Sec:FractionalDoF}, and the converse in Section~\ref{Sec:ConverseDoF}.
\end{IEEEproof}
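A membership test for some of the regions of Theorem~\ref{Theorem:DoF} can be coded directly from the inequalities. The sketch below covers only $\mathcal{D}_1$--$\mathcal{D}_7$ and $\mathcal{D}_{16}$, with illustrative antenna numbers, and shows how RMSI enlarges the DoF region:

```python
def in_region(k, d, N0, N):
    # membership test for D_1..D_6, D_7 and D_16 of Theorem 1 (sketch only)
    d1, d2, d3 = d
    if any(d[i] < 0 or d[i] > N[i] for i in range(3)):
        return False
    if 1 <= k <= 6:
        return d1 + d2 + d3 <= N0
    if k == 7:
        return d1 + d2 <= N0 and d1 + d3 <= N0 and d2 + d3 <= N0
    if k == 16:
        return all(d[i] <= min(N0, N[i]) for i in range(3))
    raise ValueError("only k in {1..7, 16} implemented in this sketch")

# N0 = 4 transmit antennas, 2 antennas at each receiver (illustrative numbers)
N0, N = 4, (2, 2, 2)
no_rmsi = in_region(1, (2, 2, 2), N0, N)    # d1+d2+d3 = 6 > 4, so outside D_1
with_rmsi = in_region(7, (2, 2, 2), N0, N)  # all pairwise sums equal 4 <= 4
```

For these numbers the triple $(2,2,2)$ lies outside $\mathcal{D}_1$ (no side information) but inside $\mathcal{D}_7$, illustrating the gain that RMSI provides.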
\section{Achieving Integer DoF Points}\label{Sec:IntegerDoF}
In this section, we prove the achievability of all the integer points $(d_1,d_2,d_3)\in\mathbb{Z}_{+}^3\cap \mathcal{D}_k$, $k\in\{1,2,\ldots,16\}$, for the channel with $\mathcal{G}=\mathcal{G}_k$, where $\mathbb{Z}_+$ represents the set of positive integers.
It is well-known that all the integer points within the region $\mathcal{D}_1$ are achievable for the three-receiver Gaussian MIMO broadcast channel without RMSI~\cite{DoFNoCSIT2}. Then these points are also achievable for the channel with $\mathcal{G}=\mathcal{G}_k,\;k\in\{2,\ldots,6\}$ as the receivers have some extra side information. Hence, we just need to present the proof for the channel with $\mathcal{G}=\mathcal{G}_k,\;k\in\{7,8,\ldots,16\}$.
To construct the transmission scheme, we first construct three subcodebooks. Subcodebook~$i$, $i\in\mathcal{V}_\mathcal{G}$, consists of $2^{nR_i}$ i.i.d. codewords
\begin{align*}
\mathbf{X}^n_i(m_i)=(\mathbf{X}_{i,1}(m_i),\mathbf{X}_{i,2}(m_i),\ldots,\mathbf{X}_{i,n}(m_i)),
\end{align*}
generated according to $\prod_{j=1}^{n}p_{{\mathbf{X}_i}}(\mathbf{x}_{i,j})$ where $\mathbf{X}_{i,j}\in\mathbb{C}^{d_i\times 1}$, $\mathbf{X}_i\sim\mathcal{CN}(\mathbf{0},\mathbf{\Sigma}_i)$, $\mathbf{\Sigma}_i$ is a $d_i\times d_i$ diagonal matrix, $\text{tr}(\mathbf{\Sigma}_i)=P_i$, and $\sum_{i=1}^{3}P_i=P$. We then construct the transmitted codeword as
\begin{align*}
\mathbf{X}_{0,j}(m_1,m_2,m_3)=\sum_{i=1}^{3}\mathbf{V}_{i}\mathbf{X}_{i,j}(m_i),\;j\in\{1,2,\ldots,n\},
\end{align*}
where each column of $\mathbf{V}_{i}\in\mathbb{C}^{N_0\times d_i}$ has unit Euclidean norm. Using this scheme, $(d_1,d_2,d_3)$ is achievable if we can choose the matrices $\{\mathbf{V}_{i}\}_{i=1}^{3}$ such that, at receiver~$i$, $i\in\mathcal{V}_\mathcal{G}$, we have
\begin{align}
\mathbf{H}_i\mathbf{V}_{i[:\ell]}\notin\text{span}\left(\mathcal{F}_{i}\setminus \mathbf{H}_i\mathbf{V}_{i[:\ell]}\right),\;\ell\in\{1,2,\ldots,d_i\},\label{MainAchievCondition}
\end{align}
where $\mathcal{F}_i$ is the set of column vectors
\begin{align*}
\mathcal{F}_{i}=\Big{\{}\mathbf{H}_{i}\mathbf{V}_{i'[:\ell']}\mid i'\in\mathcal{V}_\mathcal{G}\hskip-3pt\setminus\hskip-3pt\mathcal{O}_i,\; \ell'\in\{1,2,\ldots,d_{i'}\}\Big{\}}.
\end{align*}
This is because, at receiver~$i$, we can then find a vector $\mathbf{\Phi}_{i\ell}\in\mathbb{C}^{N_i\times 1}$, $\ell\in\{1,2,\ldots,d_i\}$, which is orthogonal to all the vectors in $\mathcal{F}_{i}\setminus \mathbf{H}_i\mathbf{V}_{i[:\ell]}$ but not to $\mathbf{H}_i\mathbf{V}_{i[:\ell]}$. The projection $\mathbf{\Phi}^T_{i\ell}\mathbf{Y}_i^n$ for each $\ell$ then provides an interference-free space dimension at receiver~$i$, which is equivalent to the output of a Gaussian single-antenna point-to-point channel.
The columns of the matrices $\{\mathbf{V}_{i}\}_{i=1}^{3}$ are generated either randomly according to an isotropic distribution, or using zero forcing~\cite{MIMOBook}. In this work, it suffices to consider zero forcing at individual receivers, and simultaneously at receivers~2 and~3. Then zero-forcing columns are selected from the columns of the matrices $\mathbf{S}_i\in\mathbb{C}^{N_0\times r_i},\;i\in\mathcal{V}_\mathcal{G}$, and $\mathbf{S}_{23}\in\mathbb{C}^{N_0\times r_{23}}$ where $r_i=(N_0-N_i)^+$, $r_{23}=(N_0-N_2-N_3)^+$, and $(a)^+=\max\{0,a\}$. The matrix $\mathbf{S}_i,\;i\in\mathcal{V}_\mathcal{G}$, is randomly generated in the null space of $\mathbf{H}_i$, i.e., $\mathbf{H}_i\mathbf{S}_{i}=\mathbf{0}$. The matrix $\mathbf{S}_{23}$ is randomly generated in the intersection of the null spaces of $\mathbf{H}_2$ and $\mathbf{H}_3$, i.e.,
\begin{align*}
\begin{bmatrix}
\mathbf{H}_2\\
\mathbf{H}_3
\end{bmatrix}\mathbf{S}_{23}=\mathbf{0}.
\end{align*}
Note that the rank of the matrix $[\mathbf{S}_{i},\mathbf{S}_{23}]\in\mathbb{C}^{N_0\times (r_i+r_{23})},\;i\in\{2,3\}$, is then almost surely $r_i$.
In the rest of this section, we show how the matrices $\{\mathbf{V}_{i}\}_{i=1}^{3}$ are chosen for the channel with $\mathcal{G}=\mathcal{G}_k,\;k\in\{7,8,\ldots,16\}$ in order to achieve all the integer points within the region $\mathcal{D}_k$.
\underbar{$\mathcal{G}=\mathcal{G}_7$:}
We choose the first $\min\{r_q,d_i\}$ columns of $\mathbf{V}_{i}$, $i\in\mathcal{V}_\mathcal{G}$, from the columns of $\mathbf{S}_q$ where $q=(i\hskip-4pt\mod 3)+1$, and we randomly generate the remaining $(d_i-r_q)^+$ columns.
Using these matrices, we can almost surely have $d_1$ interference-free dimensions at receiver~1 if
\begin{align*}
d_1+(d_3-r_1)^+&\leq \min\{N_0,N_1\}.
\end{align*}
This is because this condition ensures that the non-zero columns of the matrix $\left[\mathbf{H}_1\mathbf{V}_1,\mathbf{H}_1\mathbf{V}_3\right]$ are almost surely linearly independent. Consequently, as receiver~1 knows $M_2$ as RMSI, condition~\eqref{MainAchievCondition} is satisfied at this receiver.
Similarly, we can almost surely have $d_2$ interference-free dimensions at receiver~2, and $d_3$ interference-free dimensions at receiver~3 if
\begin{align*}
d_2+(d_1-r_2)^+&\leq \min\{N_0,N_2\},\\
d_3+(d_2-r_3)^+&\leq \min\{N_0,N_3\}.
\end{align*}
Considering that $r_i+\min\{N_0,N_i\}=N_0,\;i\in\mathcal{V}_\mathcal{G}$, this completes the achievability proof of all the positive integer points within $\mathcal{D}_7$.
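The $\mathcal{G}_7$ construction can be sanity-checked numerically. The sketch below uses the hypothetical configuration $N_0=4$, $N_1=N_2=N_3=3$ (so $r_i=1$) and the integer DoF point $(2,2,2)\in\mathcal{D}_7$, and verifies the rank condition at receiver~1; the other receivers are symmetric.

```python
import numpy as np

rng = np.random.default_rng(2)
N0 = 4
N = {1: 3, 2: 3, 3: 3}   # hypothetical antenna numbers, r_i = 1
d = {1: 2, 2: 2, 3: 2}   # target integer DoF point in D_7
H = {i: rng.standard_normal((N[i], N0)) + 1j * rng.standard_normal((N[i], N0))
     for i in (1, 2, 3)}

def null_space(A):
    _, s, Vh = np.linalg.svd(A)
    return Vh[np.sum(s > 1e-10):].conj().T

S = {i: null_space(H[i]) for i in (1, 2, 3)}   # zero-forcing bases, r_i = 1

# V_i: first min{r_q, d_i} columns from S_q with q = (i mod 3) + 1, rest random
V = {}
for i in (1, 2, 3):
    q = (i % 3) + 1
    extra = rng.standard_normal((N0, d[i] - S[q].shape[1])) \
        + 1j * rng.standard_normal((N0, d[i] - S[q].shape[1]))
    V[i] = np.hstack([S[q], extra])

# receiver 1 knows M_2 as RMSI, so its interference is only V_3; the
# non-zero columns of [H_1 V_1, H_1 V_3] must be linearly independent
cols = np.hstack([H[1] @ V[1], H[1] @ V[3]])
nonzero = cols[:, np.linalg.norm(cols, axis=0) > 1e-10]
assert nonzero.shape[1] == d[1] + (d[3] - 1)   # d_1 + (d_3 - r_1)^+ = 3
assert np.linalg.matrix_rank(nonzero) == nonzero.shape[1]
```

Here $d_1+(d_3-r_1)^+=3\leq\min\{N_0,N_1\}=3$, so the condition of the text is met with equality.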
\underbar{$\mathcal{G}=\mathcal{G}_k,\;\;k\in\{8,9,10\}$:}
We just need to prove achievability for the channel with $\mathcal{G}=\mathcal{G}_8$ as each $K_i$ for $\mathcal{G}_8$ is a subset of the corresponding one for $\mathcal{G}_k,\;k\in\{9,10\}$.
To construct $\mathbf{V}_1$, we select the first $\min\{r_{23},d_1\}$ columns of $\mathbf{V}_1$ from the columns of $\mathbf{S}_{23}$. For the remaining $(d_1-r_{23})^+$ columns, we select $r'_2$ columns from $\mathbf{S}_2$, $r'_3$ columns from $\mathbf{S}_3$, and randomly generate $(d_1-r_{23})^+-r'_2-r'_3$ columns where
\begin{align}
r'_2&\leq r_2-r_{23},\label{G8Cond1}\\
r'_3&\leq r_3-r_{23},\label{G8Cond2}\\
r'_2+r'_3&\leq (d_1-r_{23})^+.\label{G8Cond3}
\end{align}
Conditions~\eqref{G8Cond1} and~\eqref{G8Cond2} are imposed to ensure that $\mathbf{V}_1$ is almost surely full column rank.
To construct $\mathbf{V}_2$, and $\mathbf{V}_3$, we define $i_\text{max}$ as an arbitrary element of the set $\{2,3\}$ at which $d_{i_\text{max}}=\max\{d_2,d_3\}$, and $i_\text{min}$ as the other element of this set. We choose the first $\min\{r_1,d_{i_\text{max}}\}$ columns of $\mathbf{V}_{i_\text{max}}$ from the columns of $\mathbf{S}_1$, and randomly generate the remaining $(d_{i_\text{max}}-r_1)^+$ columns. We then choose the columns of $\mathbf{V}_{i_\text{min}}$ to be the same as the first $d_{i_\text{min}}$ columns of $\mathbf{V}_{i_\text{max}}$.
Using these matrices, we can almost surely have $d_1$ interference-free dimensions at receiver~1 if
\begin{align}
d_1+(\max\{d_2,d_3\}-r_1)^+&\leq \min\{N_0,N_1\}.\label{G8Cond4}
\end{align}
This is because, if condition~\eqref{G8Cond4} holds, the non-zero columns of the matrix $[\mathbf{H}_1\mathbf{V}_1,\mathbf{H}_1\mathbf{V}_{i_\text{max}}]$ are almost surely linearly independent. Consequently, condition~\eqref{MainAchievCondition} is satisfied at this receiver.
As receiver~2 knows $M_3$ as RMSI, also, we can almost surely have $d_2$ interference-free dimensions at receiver~2 if
\begin{align}
d_2+(d_1-r_{23})^+-r'_2&\leq \min\{N_0,N_2\},\label{G8Cond5}
\end{align}
and, as receiver~3 knows $M_2$ as RMSI, we can almost surely have $d_3$ interference-free dimensions at receiver~3 if
\begin{align}
d_3+(d_1-r_{23})^+-r'_3&\leq \min\{N_0,N_3\}.\label{G8Cond6}
\end{align}
Applying the Fourier--Motzkin elimination method to conditions~\eqref{G8Cond1}--\eqref{G8Cond6} in order to eliminate $r'_2$ and $r'_3$ completes the proof for these configurations.
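For concreteness, one step of the elimination can be sketched as follows (this recovers only the sum constraint of $\mathcal{D}_8$; the remaining bounds follow similarly from the other combinations). Summing \eqref{G8Cond5} and \eqref{G8Cond6} gives
\begin{align*}
d_2+d_3+2(d_1-r_{23})^+-r'_2-r'_3\leq \min\{N_0,N_2\}+\min\{N_0,N_3\},
\end{align*}
and since $r'_2+r'_3\leq (d_1-r_{23})^+$ by \eqref{G8Cond3},
\begin{align*}
d_2+d_3+(d_1-r_{23})^+\leq \min\{N_0,N_2\}+\min\{N_0,N_3\}.
\end{align*}
In particular, when $N_2,N_3<N_0\leq N_2+N_3$, we have $r_{23}=0$, and this reduces to $d_1+d_2+d_3\leq N_2+N_3$, which matches the constraint $d_1+d_2+d_3\leq\max\{N_0,N_2+N_3\}$ in $\mathcal{D}_8$.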
\underbar{$\mathcal{G}=\mathcal{G}_{k},\;\;k\in\{11,12,13\}$:} For the channel with $\mathcal{G}=\mathcal{G}_{11}$, we choose the first $\min\{r_3,d_1\}$ columns of $\mathbf{V}_1$ from the columns of $\mathbf{S}_3$, and randomly generate the remaining $(d_1-r_3)^+$ columns; we construct $\mathbf{V}_2$ and $\mathbf{V}_3$ the same as the ones for the channel with $\mathcal{G}=\mathcal{G}_8$.
For the channel with $\mathcal{G}=\mathcal{G}_{12}$, we choose the first $\min\{r_2,d_1\}$ columns of $\mathbf{V}_1$ from the columns of $\mathbf{S}_2$, and randomly generate the remaining $(d_1-r_2)^+$ columns; we randomly generate the $d_2$ columns of $\mathbf{V}_2$; we choose the first $\min\{r_1,d_3\}$ columns of $\mathbf{V}_3$ from the columns of $\mathbf{S}_1$, and randomly generate the remaining $(d_3-r_1)^+$ columns.
For the channel with $\mathcal{G}=\mathcal{G}_{13}$, we randomly generate $d_1$ columns of $\mathbf{V}_1$, and we construct $\mathbf{V}_2$ and $\mathbf{V}_3$ the same as the ones for the channel with $\mathcal{G}=\mathcal{G}_8$.
Using these matrices, we now prove achievability for the channel with $\mathcal{G}=\mathcal{G}_{11}$. We can similarly prove achievability for the channel with $\mathcal{G}=\mathcal{G}_{k},\;k\in\{12,13\}$.
For the channel with $\mathcal{G}=\mathcal{G}_{11}$, we can almost surely have $d_1$ interference-free dimensions at receiver~1 if
\begin{align*}
d_1+(\max\{d_2,d_3\}-r_1)^+&\leq \min\{N_0,N_1\};
\end{align*}
$d_2$ interference-free dimensions at receiver~2 if
\begin{align*}
d_2&\leq \min\{N_0,N_2\};
\end{align*}
$d_3$ interference-free dimensions at receiver~3 if
\begin{align*}
d_3+(d_1-r_3)^+&\leq \min\{N_0,N_3\}.
\end{align*}
\underbar{$\mathcal{G}=\mathcal{G}_{k},\;\;k\in\{14,15\}$:}
We just need to prove achievability for the channel with
$\mathcal{G}=\mathcal{G}_{14}$ as each $K_i$ for $\mathcal{G}_{14}$ is a subset of the corresponding one for $\mathcal{G}_{15}$.
We construct $\mathbf{V}_1$ by choosing its first $\min\{r_3,d_1\}$ columns from the columns of $\mathbf{S}_3$, and randomly generating the remaining $(d_1-r_3)^+$ columns. We randomly generate the $d_2$ columns of $\mathbf{V}_2$. We construct $\mathbf{V}_3$ by choosing its first $\min\{r_1,d_3\}$ columns from the columns of $\mathbf{S}_1$, and randomly generating the remaining $(d_3-r_1)^+$ columns.
Using these matrices, we can almost surely have $d_1$ interference-free dimensions at receiver~1 if
\begin{align*}
d_1+(d_3-r_1)^+&\leq \min\{N_0,N_1\};
\end{align*}
$d_2$ interference-free dimensions at receiver~2 if
\begin{align*}
d_2&\leq \min\{N_0,N_2\};
\end{align*}
$d_3$ interference-free dimensions at receiver~3 if
\begin{align*}
d_3+(d_1-r_3)^+&\leq \min\{N_0,N_3\}.
\end{align*}
\underbar{$\mathcal{G}=\mathcal{G}_{16}$:} We randomly generate the matrices $\{\mathbf{V}_{i}\}_{i=1}^{3}$. Since each receiver knows a priori all the messages requested by the other receivers, the channel is equivalent to three Gaussian MIMO point-to-point channels. Therefore, $(d_1,d_2,d_3)$ is achievable if it satisfies
\begin{align*}
d_i\leq\min\{N_0,N_i\},\;i\in\mathcal{V}_\mathcal{G}.
\end{align*}
\section{Achieving Fractional DoF Points}\label{Sec:FractionalDoF}
In this section, we prove the achievability of all the fractional points within the region $\mathcal{D}_k$, $k\in\{1,2,\ldots,16\}$, for the channel with $\mathcal{G}=\mathcal{G}_k$.
We here show that all the corner points of the polyhedron $\mathcal{D}_k,\;k\neq 7$, are integer points. Consequently, we can achieve the whole region using time sharing among the integer points.
Any corner point of the polyhedron $\mathcal{D}_k,\;k\in\{1,2,\ldots, 6\}$, is the intersection of three of the seven planes $d_1+d_2+d_3=N_0$, $d_i=N_i,\;i\in\mathcal{V}_\mathcal{G}$, and $d_i=0,\;i\in\mathcal{V}_\mathcal{G}$.
If a corner point lies on three of the last six planes, it clearly has three integer elements. If a corner point lies on $d_1+d_2+d_3=N_0$ and two of the last six planes, it also has three integer elements, as having two integer elements forces the third to be an integer as well. This shows that all the corner points are integer points.
Considering the region $\mathcal{D}_k,\;\;k\in\{8,9,10\}$, if the inequality $N_2+N_3\leq N_0$ holds, the conditions $d_1+d_2\leq N_0$ and $d_1+d_3\leq N_0$ are redundant, and the resulting polyhedron is the same as $\mathcal{D}_1$, whose corner points we have already shown to be integer points. If $N_0<N_2+N_3$, any corner point is the intersection of three of the nine planes $d_1+d_2=N_0$, $d_1+d_3=N_0$, $d_1+d_2+d_3=N_2+N_3$, $d_i=N_i,\;i\in\mathcal{V}_\mathcal{G}$, and $d_i=0,\;i\in\mathcal{V}_\mathcal{G}$. If a corner point lies on at least one of the last six planes, then one of its elements is an integer, which forces the other two to be integers as well. The intersection of the first three planes is
\begin{align*}
(d_1,d_2,d_3)\hskip-2pt=\hskip-2pt(2N_0\hskip-2pt-\hskip-2ptN_2\hskip-2pt-\hskip-2ptN_3,N_2+N_3\hskip-2pt-\hskip-2ptN_0,N_2+N_3\hskip-2pt-\hskip-2ptN_0)
\end{align*}
which is an integer point as well. This shows that all the corner points are integer points.
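This plane intersection can be verified numerically; the antenna numbers below ($N_0=9$, $N_2=8$, $N_3=5$, so $N_0<N_2+N_3$) are hypothetical values chosen for illustration.

```python
import numpy as np

# hypothetical antenna numbers with N0 < N2 + N3
N0, N2, N3 = 9, 8, 5

# intersection of d1+d2 = N0, d1+d3 = N0, d1+d2+d3 = N2+N3
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)
b = np.array([N0, N0, N2 + N3], dtype=float)
d = np.linalg.solve(A, b)

# matches (2*N0 - N2 - N3, N2 + N3 - N0, N2 + N3 - N0) = (5, 4, 4)
assert np.allclose(d, [2 * N0 - N2 - N3, N2 + N3 - N0, N2 + N3 - N0])
assert np.allclose(d, np.round(d))   # the corner point is integer
```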
A similar discussion shows that all the corner points of the polyhedron $\mathcal{D}_k,\;\;k\in\{11,12,\ldots,16\}$ are integer points.
In the rest of this section, we first show that if $N_0$ is odd, and $\frac{N_0}{2}\leq N_i,\;i\in\mathcal{V}_\mathcal{G}$, the polyhedron $\mathcal{D}_7$ has one fractional corner point. Otherwise all the corner points are integer points. We then prove the achievability of the fractional corner point using two-symbol extension of our scheme in Section~\ref{Sec:IntegerDoF}. Consequently, we can achieve the whole region using time sharing.
Any corner point of the polyhedron $\mathcal{D}_7$ is the intersection of three of the nine planes $d_1+d_2=N_0$, $d_1+d_3=N_0$, $d_2+d_3=N_0$, $d_i=N_i,\;i\in\mathcal{V}_\mathcal{G}$, and $d_i=0,\;i\in\mathcal{V}_\mathcal{G}$. If a corner point lies on at least one of the last six planes, then one of its elements is an integer, which forces the other two to be integers as well. The intersection of the first three planes is $(d_1,d_2,d_3)=(\frac{N_0}{2},\frac{N_0}{2},\frac{N_0}{2})$, which is a fractional corner point when $N_0$ is odd and $\frac{N_0}{2}\leq N_i,\;i\in\mathcal{V}_\mathcal{G}$. Fig.~\ref{Fig:OuterBoundDoFG7} shows the region $\mathcal{D}_7$ and its corner points for a specific antenna configuration.
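The vertex analysis of $\mathcal{D}_7$ can be checked by brute force. The sketch below enumerates all triples of the nine planes for the configuration $(N_0,N_1,N_2,N_3)=(9,7,8,5)$ of Fig.~\ref{Fig:OuterBoundDoFG7}, keeps the feasible intersections, and confirms that $(4.5,4.5,4.5)$ is the only fractional corner point.

```python
import itertools
import numpy as np

N0, N1, N2, N3 = 9, 7, 8, 5   # the antenna configuration of the figure

# the nine planes defining D_7, each as (coefficient row a, constant c)
planes = [([1, 1, 0], N0), ([1, 0, 1], N0), ([0, 1, 1], N0),
          ([1, 0, 0], N1), ([0, 1, 0], N2), ([0, 0, 1], N3),
          ([1, 0, 0], 0), ([0, 1, 0], 0), ([0, 0, 1], 0)]

def feasible(d):
    d1, d2, d3 = d
    return (min(d) >= -1e-9 and d1 + d2 <= N0 + 1e-9
            and d1 + d3 <= N0 + 1e-9 and d2 + d3 <= N0 + 1e-9
            and d1 <= N1 + 1e-9 and d2 <= N2 + 1e-9 and d3 <= N3 + 1e-9)

vertices = set()
for trio in itertools.combinations(planes, 3):
    A = np.array([p[0] for p in trio], dtype=float)
    c = np.array([p[1] for p in trio], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        continue                      # the three planes do not meet in a point
    d = np.linalg.solve(A, c)
    if feasible(d):
        vertices.add(tuple(np.round(d, 6)))

fractional = [v for v in vertices
              if not all(abs(x - round(x)) < 1e-6 for x in v)]
assert fractional == [(4.5, 4.5, 4.5)]   # the only fractional corner point
```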
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{FiguresPerfectCSITCSIR/D7Fig2.pdf}
\vskip-0pt
\caption{The region $\mathcal{D}_7$ for the antenna configuration $(N_0,N_1,N_2,N_3)=(9,7,8,5)$. The corner points of $\mathcal{D}_7$ are $\{A_k\}_{k=0}^{13}$ where $A_{13}=(4.5,4.5,4.5)$ is the only fractional corner point. The corner points of the DoF region without RMSI, i.e., $\mathcal{D}_1$, are $\{A_k\}_{k=0}^{9}$. This shows the improvement in the DoF region using RMSI.}
\vskip-0pt
\label{Fig:OuterBoundDoFG7}
\end{figure}
\subsection{Two-Symbol Extension}
We here prove the achievability of the DoF point $(\frac{N_0}{2},\frac{N_0}{2},\frac{N_0}{2})$ when $N_0$ is odd and $\frac{N_0}{2}\leq N_i,\;i\in\mathcal{V}_\mathcal{G}$. To achieve this point, we use our proposed scheme in Section~\ref{Sec:IntegerDoF} in conjunction with two-symbol extension over time~\cite{DoF2PairXJafar} where the channel matrices are
\begin{align*}
\mathbf{\Theta}_i=\begin{bmatrix}
\mathbf{H}_i&\mathbf{0} \\
\mathbf{0}& \mathbf{H}_i
\end{bmatrix},\;i\in\mathcal{V}_\mathcal{G},
\end{align*}
with $\mathbf{\Theta}_i\in\mathbb{C}^{2N_i\times2N_0}$. We again construct three subcodebooks. Subcodebook~$i$, $i\in\mathcal{V}_\mathcal{G}$, consists of $2^{nR_i}$ i.i.d. codewords
\begin{align*}
\mathbf{X}'^{\frac{n}{2}}_i(m_i)=(\mathbf{X}'_{i,1}(m_i),\mathbf{X}'_{i,2}(m_i),\ldots,\mathbf{X}'_{i,\frac{n}{2}}(m_i)),
\end{align*}
generated according to $\prod_{j=1}^{\frac{n}{2}}p_{{\mathbf{X}^{'}_i}}(\mathbf{x}'_{i,j})$ where $\mathbf{X}'_i\in\mathbb{C}^{N_0\times 1}$, $\mathbf{X}'_i\sim\mathcal{CN}(\mathbf{0},\mathbf{\Sigma}'_i)$, $\mathbf{\Sigma}'_i$ is an $N_0\times N_0$ diagonal matrix, $\text{tr}(\mathbf{\Sigma}'_i)=2P_i$, and $\sum_{i=1}^{3}P_i=P$. We then construct the transmitted codeword as
\begin{align*}
\begin{bmatrix}
\mathbf{X}_{0,2j-1}(m_1,m_2,m_3)\\
\mathbf{X}_{0,2j}(m_1,m_2,m_3)
\end{bmatrix}\hskip-4pt=\hskip-3pt\sum_{i=1}^{3}\mathbf{U}_{i}\mathbf{X}^{'}_{i,j}(m_i),\;j\hskip-3pt\in\hskip-3pt\{1,2,\ldots,\frac{n}{2}\},
\end{align*}
where each column of $\mathbf{U}_{i}\in\mathbb{C}^{2N_0\times N_0}$ has unit Euclidean norm. We choose the first $\min\{2r_q,N_0\}$ columns of $\mathbf{U}_{i}$, $i\in\mathcal{V}_\mathcal{G}$, from the columns of $\mathbf{T}_q$, $q=(i\hskip-4pt\mod 3)+1$, which is randomly generated in the null space of $\mathbf{\Theta}_q$, i.e., $\mathbf{\Theta}_q\mathbf{T}_q=\mathbf{0}$. We randomly generate the remaining $(N_0-2r_q)^+$ columns of $\mathbf{U}_i$ according to an isotropic distribution. Using this construction, the matrices $[\mathbf{U}_1,\mathbf{U}_3]$, $[\mathbf{U}_2,\mathbf{U}_1]$, and $[\mathbf{U}_3,\mathbf{U}_2]$ are almost surely full rank.
We perform decoding every two symbols, which provides us with $2N_i$ space dimensions at receiver~$i$. Since $\frac{N_0}{2}\leq N_i$, the inequality
\begin{align*}
N_0+(N_0-2r_i)^+&\leq \min\{2N_0,2N_i\},\;i\in\mathcal{V}_\mathcal{G},
\end{align*}
holds, which ensures that the non-zero columns of the matrix $\left[\mathbf{\Theta}_i\mathbf{U}_i,\mathbf{\Theta}_i\mathbf{U}_q\right],\;i\in\mathcal{V}_\mathcal{G},\,q=((i+1)\hskip-4pt\mod 3)+1$, are almost surely linearly independent. Then, at each receiver, the number of interference-free dimensions over two symbols is $N_0$, and the DoF point $(\frac{N_0}{2},\frac{N_0}{2},\frac{N_0}{2})$ is achieved.
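The two-symbol-extension rank condition can be sanity-checked numerically. The sketch below uses the hypothetical odd $N_0=3$ with $N_1=N_2=N_3=2$ (so $\frac{N_0}{2}\leq N_i$ and $r_i=1$), builds the block-diagonal matrices $\mathbf{\Theta}_i$ and the precoders $\mathbf{U}_i$ as described above, and verifies the linear-independence condition at every receiver.

```python
import numpy as np

rng = np.random.default_rng(3)
N0 = 3                    # odd number of transmit antennas (hypothetical)
N = {1: 2, 2: 2, 3: 2}    # N0/2 = 1.5 <= N_i, so r_i = 1
H = {i: rng.standard_normal((N[i], N0)) + 1j * rng.standard_normal((N[i], N0))
     for i in (1, 2, 3)}
# block-diagonal two-symbol channel matrices Theta_i = diag(H_i, H_i)
Theta = {i: np.kron(np.eye(2), H[i]) for i in (1, 2, 3)}

def null_space(A):
    _, s, Vh = np.linalg.svd(A)
    return Vh[np.sum(s > 1e-10):].conj().T

T = {i: null_space(Theta[i]) for i in (1, 2, 3)}   # 2*r_i = 2 columns each

# U_i: first min{2 r_q, N0} = 2 columns from T_q, q = (i mod 3)+1, one random
U = {}
for i in (1, 2, 3):
    q = (i % 3) + 1
    extra = rng.standard_normal((2 * N0, 1)) + 1j * rng.standard_normal((2 * N0, 1))
    U[i] = np.hstack([T[q], extra])

# at receiver i the interference comes from U_q with q = ((i+1) mod 3)+1;
# the non-zero columns of [Theta_i U_i, Theta_i U_q] must be independent
for i in (1, 2, 3):
    q = ((i + 1) % 3) + 1
    cols = np.hstack([Theta[i] @ U[i], Theta[i] @ U[q]])
    nz = cols[:, np.linalg.norm(cols, axis=0) > 1e-10]
    assert nz.shape[1] == N0 + (N0 - 2)   # N0 + (N0 - 2 r_i)^+ = 4
    assert np.linalg.matrix_rank(nz) == nz.shape[1]
```

With $N_0=3$ and $r_i=1$, the bound reads $N_0+(N_0-2r_i)^+=4\leq\min\{2N_0,2N_i\}=4$, so each receiver recovers $N_0$ interference-free dimensions per two symbols.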
\section{Outer Bound on the DoF Region}\label{Sec:ConverseDoF}
In this section, we first prove two lemmas. We then present the converse proof for Theorem~\ref{Theorem:DoF} using these two lemmas.
\begin{lemma}\label{Lemma:OuterusingIAS}
If $(d_1,d_2,d_3)$ is achievable for the three-receiver Gaussian MIMO broadcast channel with RMSI, then it must satisfy
\begin{align*}
\sum_{k\in\mathcal{V}_\mathcal{Q}}d_k&\leq \min\{N_0,\sum_{k\in\mathcal{V}_\mathcal{Q}}N_k\},
\end{align*}
for every acyclic induced subgraph $\mathcal{Q}$ of the side information graph ($\mathcal{V}_\mathcal{Q}$ is the vertex set of $\mathcal{Q}$).
\end{lemma}
\begin{IEEEproof}
The proof is presented in Appendix~A.
\end{IEEEproof}
\begin{lemma}\label{Lemma:3MCCapacity}
Considering the three-receiver memoryless broadcast channel with RMSI where channel input is $X_0$, channel outputs are $Y_i,\;i\in\mathcal{V}_\mathcal{G}$, $K_1\subseteq\{M_2,M_3\}$, $K_2=\{M_3\}$, $K_3=\{M_2\}$, and $X_0\rightarrow Y_1\rightarrow (Y_2,Y_3)$ form a Markov chain, the capacity region is the closure of the set of all rate triples $(R_1,R_2,R_3)$, each satisfying
\begin{align*}
R_1&<I(X_0;Y_1\mid U_0),\\
R_2&<I(U_0;Y_2),\\
R_3&<I(U_0;Y_3),
\end{align*}
for some distribution $p(u_0,x_0)$.
\end{lemma}
\begin{IEEEproof}
The proof is presented in Appendix B.
\end{IEEEproof}
We here present the converse proof for Theorem~\ref{Theorem:DoF}.
\underbar{$\mathcal{G}=\mathcal{G}_k,\;k\in\{1,2,\ldots, 16\}\setminus\{8,9,10\}$:} Lemma~\ref{Lemma:OuterusingIAS} provides a tight outer bound for all these side information configurations.
\underbar{$\mathcal{G}=\mathcal{G}_k,\;k\in\{8,9,10\}$:} Using Lemma~\ref{Lemma:OuterusingIAS}, we obtain the necessary conditions
\begin{align*}
d_1+d_2&\leq N_0,\\
d_1+d_3&\leq N_0,\\
d_i&\leq N_i,\;i\in\mathcal{V}_\mathcal{G}.
\end{align*}
If at least one of the three inequalities $N_2\geq N_0$, $N_3\geq N_0$, or $N_0\geq N_1+N_2+N_3$ holds, the condition
\begin{align}
d_1+d_2+d_3\leq\max\{N_0,N_2+N_3\}\label{specialcondition},
\end{align}
in the achievable region is redundant, and the converse proof for these side information configurations is complete. Otherwise (i.e., when $N_2<N_0$, $N_3<N_0$, and $N_0<N_1+N_2+N_3$), to show that the condition in \eqref{specialcondition} is also a necessary condition, we construct an enhanced channel by providing the channel outputs at receivers~2 and~3 to receiver~1. In the enhanced channel, the channel output at receiver~1 is $(\mathbf{Y}_1,\mathbf{Y}_2,\mathbf{Y}_3)$. Since $\mathbf{X}_0\rightarrow(\mathbf{Y}_1,\mathbf{Y}_2,\mathbf{Y}_3)\rightarrow(\mathbf{Y}_2,\mathbf{Y}_3)$ form a Markov chain, we use Lemma~\ref{Lemma:3MCCapacity} to bound the sum-rate as follows.
\begin{align}
\hskip-0.5ptR_1\hskip-2pt+\hskip-2ptR_2\hskip-2pt+\hskip-2ptR_3\hskip-2pt&\leq \hskip-2ptI(\mathbf{X}_0;\hskip-2pt\mathbf{Y}_1,\mathbf{Y}_2,\mathbf{Y}_3\hskip-2pt\mid\hskip-2pt U_0)\hskip-2pt+\hskip-2ptI(U_0;\hskip-2pt\mathbf{Y}_2)\hskip-2pt+\hskip-2ptI(U_0;\hskip-2pt\mathbf{Y}_3)\nonumber\\
&=h(\mathbf{Y}_1,\mathbf{Y}_2,\mathbf{Y}_3\mid U_0)-h(\mathbf{Z}_1,\mathbf{Z}_2,\mathbf{Z}_3)\nonumber\\
&\hskip9pt+h(\mathbf{Y}_2)-h(\mathbf{Y}_2\mid U_0)+h(\mathbf{Y}_3)\hskip-2pt-\hskip-2pth(\mathbf{Y}_3\mid U_0)\nonumber\\
&\leq h(\mathbf{Y}_1\mid \mathbf{Y}_2,\mathbf{Y}_3, U_0)
\hskip-2pt+h(\mathbf{Y}_2)\hskip-2pt+h(\mathbf{Y}_3)\nonumber\\
&\hskip130pt-h(\mathbf{Z}_1,\mathbf{Z}_2,\mathbf{Z}_3)\nonumber\\
&= h(\mathbf{Y}_1\mid \mathbf{Y}_2,\mathbf{Y}_3, U_0)
+h(\mathbf{Y}_2)+h(\mathbf{Y}_3)\nonumber\\
&\hskip95pt-h(\mathbf{Z}_1)\hskip-2pt-\hskip-2pth(\mathbf{Z}_2)\hskip-2pt-\hskip-2pth(\mathbf{Z}_3)\nonumber\\
&=I(\mathbf{X}_0;\mathbf{Y}_2)+I(\mathbf{X}_0;\mathbf{Y}_3)\nonumber\\
&\hskip57pt+h(\mathbf{Y}_1\mid \mathbf{Y}_2,\mathbf{Y}_3, U_0)-h(\mathbf{Z}_1)\label{DoFOuterG81}.
\end{align}
From the Gaussian MIMO point-to-point channel, the mutual information terms in~\eqref{DoFOuterG81} are upper bounded as~\cite{MIMOBook}
\begin{align}
I(\mathbf{X}_0;\mathbf{Y}_2) \leq N_2\log P+o(\log P),\label{DoFOuterG82}\\
I(\mathbf{X}_0;\mathbf{Y}_3) \leq N_3\log P+o(\log P),\label{DoFOuterG83}
\end{align}
where $\lim_{P\rightarrow\infty}\frac{o(\log P)}{\log{P}}=0$. In order to upper bound the remaining terms in~\eqref{DoFOuterG81}, i.e.,
\begin{align*}
h(\mathbf{Y}_1\mid \mathbf{Y}_2,\mathbf{Y}_3, U_0)-h(\mathbf{Z}_1),
\end{align*}
we generalize the technique used by Weingarten et al. for a two-receiver compound broadcast channel~\cite[Theorem 4]{DoFCompoundBC}. We consider two cases: \textit{Case}~I where $N_0\leqslant N_2+N_3$, and \textit{Case}~II where $N_2+N_3<N_0$.
For \textit{Case}~I ($N_2,N_3<N_0\leqslant N_2+N_3$), we define $\mathbf{Y}'$, $\mathbf{H}'$, and $\mathbf{Z}'$ as
\begin{align*}
\underset{\mathbf{Y}'}{\underbrace{\begin{bmatrix}
\mathbf{Y}_{2}\\
Y_{3[1]}\\
\vdots\\
Y_{3[N_0-N_2]}
\end{bmatrix}}}=\underset{\mathbf{H}'}{\underbrace{\begin{bmatrix}
\mathbf{H}_{2}\\
\mathbf{H}_{3[1:]}\\
\vdots\\
\mathbf{H}_{3[N_0-N_2:]}
\end{bmatrix}}}\mathbf{X}_0+\underset{\mathbf{Z}'}{\underbrace{\begin{bmatrix}
\mathbf{Z}_{2}\\
Z_{3[1]}\\
\vdots\\
Z_{3[N_0-N_2]}
\end{bmatrix}}}.
\end{align*}
Since $\mathbf{H}'\in\mathbb{C}^{N_0\times N_0}$ is almost surely full rank, $\mathbf{H}_1$ can be written as
$\mathbf{H}_{1}=\mathbf{\Lambda}'\mathbf{H}'$ where $\mathbf{\Lambda}'\in\mathbb{C}^{N_1\times N_0}$. Then we have
\begin{align*}
\mathbf{Y}_1&=\mathbf{\Lambda}'\mathbf{H}'\mathbf{X}_0+\mathbf{Z}_1\\
&=\mathbf{\Lambda}'(\mathbf{Y}'-\mathbf{Z}')+\mathbf{Z}_{1},
\end{align*}
which results in
\begin{align}
&h(\mathbf{Y}_1\mid \mathbf{Y}_2,\mathbf{Y}_3, U_0)-h(\mathbf{Z}_1)\nonumber\\
&\hskip20pt=h(-\mathbf{\Lambda}'\mathbf{Z}'+\mathbf{Z}_{1}\mid \mathbf{Y}_2,\mathbf{Y}_3,U_0)-h(\mathbf{Z}_1)\nonumber\\
&\hskip20pt\leq h(-\mathbf{\Lambda}'\mathbf{Z}'+\mathbf{Z}_{1})-h(\mathbf{Z}_1)\nonumber\\
&\hskip20pt=o(\log P)\label{DoFOuterG84}.
\end{align}
Using \eqref{DoFOuterG81}--\eqref{DoFOuterG84}, the converse proof for \textit{Case}~I is complete.
For \textit{Case}~II ($N_2+N_3<N_0<N_1+N_2+N_3$), we define $\mathbf{Y}''$, $\mathbf{H}''$, and $\mathbf{Z}''$ as
\begin{align*}
\underset{\mathbf{Y}''}{\underbrace{\begin{bmatrix}
Y_{1[1]}\\
\vdots\\
Y_{1[N_0-N_2-N_3]}\\
\mathbf{Y}_{2}\\
\mathbf{Y}_{3}
\end{bmatrix}}}\hskip-2pt=\hskip-2pt\underset{\mathbf{H}''}{\underbrace{\begin{bmatrix}
\mathbf{H}_{1[1:]}\\
\vdots\\
\mathbf{H}_{1[N_0-N_2-N_3:]}\\
\mathbf{H}_{2}\\
\mathbf{H}_{3}
\end{bmatrix}}}\mathbf{X}_0\hskip-2pt+\hskip-4pt\underset{\mathbf{Z}''}{\underbrace{\begin{bmatrix}
Z_{1[1]}\\
\vdots\\
Z_{1[N_0-N_2-N_3]}\\
\mathbf{Z}_{2}\\
\mathbf{Z}_{3}
\end{bmatrix}}}.
\end{align*}
Since $\mathbf{H}''\in\mathbb{C}^{N_0\times N_0}$ is almost surely full-rank, we can write
\begin{align*}
\begin{bmatrix}
\mathbf{H}_{1[N_0-N_2-N_3+1:]}\\
\vdots\\
\mathbf{H}_{1[N_1:]}
\end{bmatrix}=\mathbf{\Lambda}''\mathbf{H}'',
\end{align*}
where $\mathbf{\Lambda}''\in\mathbb{C}^{(N_1+N_2+N_3-N_0)\times N_0}$. Then we have
\begin{align*}
\begin{bmatrix}
Y_{1[N_0-N_2-N_3+1]}\\
\vdots\\
Y_{1[N_1]}
\end{bmatrix}=
\mathbf{\Lambda}''\mathbf{H}''\mathbf{X}_0+
\begin{bmatrix}
Z_{1[N_0-N_2-N_3+1]}\\
\vdots\\
Z_{1[N_1]}
\end{bmatrix},
\end{align*}
where $\mathbf{H}''\mathbf{X}_0=\mathbf{Y}''-\mathbf{Z}''$. This results in
\begin{align}
& h(\mathbf{Y}_1\mid \mathbf{Y}_2,\mathbf{Y}_3, U_0)-h(\mathbf{Z}_1)\nonumber\\
&\hskip5pt=h([Y_{1[1]},\ldots,Y_{1[N_0-N_2-N_3]}]^T\mid \mathbf{Y}_2,\mathbf{Y}_3, U_0)\nonumber\\
&\hskip15pt+h(-\mathbf{\Lambda}''\mathbf{Z}''+[Z_{1[N_0-N_2-N_3+1]},\ldots,Z_{1[N_1]}]^T\mid \mathbf{Y}'', U_0)\nonumber\\
&\hskip15pt-h(\mathbf{Z}_1)\nonumber\\
&\hskip5pt\leq h([Y_{1[1]},\ldots,Y_{1[N_0-N_2-N_3]}]^T)\nonumber\\
&\hskip15pt+h(-\mathbf{\Lambda}''\mathbf{Z}''+[Z_{1[N_0-N_2-N_3+1]},\ldots,Z_{1[N_1]}]^T)\nonumber\\
&\hskip15pt-h(\mathbf{Z}_1)\nonumber\\
&\hskip5pt=I(\mathbf{X}_0;[Y_{1[1]},\ldots,Y_{1[N_0-N_2-N_3]}]^T)\nonumber\\
&\hskip15pt+h(-\mathbf{\Lambda}''\mathbf{Z}''+[Z_{1[N_0-N_2-N_3+1]},\ldots,Z_{1[N_1]}]^T)\nonumber\\
&\hskip15pt-h([Z_{1[N_0-N_2-N_3+1]},\ldots,Z_{1[N_1]}]^T)\nonumber\\
&\hskip5pt\leq(N_0-N_2-N_3)\log P+o(\log P).\label{DoFOuterG85}
\end{align}
Using \eqref{DoFOuterG81}, \eqref{DoFOuterG82}, \eqref{DoFOuterG83}, and \eqref{DoFOuterG85}, the converse proof for \textit{Case}~II is complete.
This completes the converse proof for Theorem~\ref{Theorem:DoF}.
\section{Remarks on the Schemes and the DoF Region}
In this section, we provide some remarks on the proposed transmission schemes, and the DoF region for the three-receiver Gaussian MIMO broadcast channel with RMSI. These remarks provide some hints about the DoF region for the channel when there are four or more receivers.
\subsection{On the Transmission Schemes}
In this subsection, we make the observation that we can achieve the DoF region for all 16 possible RMSI configurations using only three transmission schemes: one for the side information configurations $\{\mathcal{G}_1,\mathcal{G}_2,\ldots,\mathcal{G}_6\}$ (the scheme for the channel without RMSI, i.e., $\mathcal{G}_1$, can be used for all these configurations as they have the same DoF region), one for $\mathcal{G}_7$, and one for the rest, i.e., $\{\mathcal{G}_8,\mathcal{G}_{9},\ldots,\mathcal{G}_{16}\}$. For $\mathcal{G}_9$ and $\mathcal{G}_{10}$, the scheme for $\mathcal{G}_8$ can be used as they have the same DoF region. For $\mathcal{G}_k$, $k\in\{11,12,\ldots,16\}$, based on the following two points, the region achieved by the scheme for $\mathcal{G}_8$ is at least as large as the region achieved by the scheme for this configuration. First, the construction of $\mathbf{V}_1$ for $\mathcal{G}_k$ is a special case of the construction of $\mathbf{V}_1$ for $\mathcal{G}_8$. Second, the construction of $\mathbf{V}_2$ and $\mathbf{V}_3$ for $\mathcal{G}_k$ can be viewed as a modification of those for $\mathcal{G}_8$ in the following way: we replace some columns of these matrices that are constructed using i) the side information of the receivers, and/or ii) zero forcing in the scheme for $\mathcal{G}_8$, with the columns that are constructed independently and randomly in the scheme for $\mathcal{G}_k$. The replacement is done, depending on the extra side information in $\mathcal{G}_k$ compared to $\mathcal{G}_8$, in a way that we still have the same number of interference-free dimensions at the receivers. Consequently, the scheme for $\mathcal{G}_8$ can also achieve the DoF region of the channel with $\mathcal{G}=\mathcal{G}_k$, $k\in\{11,12,\ldots,16\}$.
\subsection{On the DoF Region with RMSI}
In this subsection, we introduce some properties of the DoF region of the Gaussian MIMO broadcast channel with RMSI under two specific antenna configurations where i) the number of antennas at the transmitter is greater than or equal to the sum of the numbers of antennas at the receivers, and ii) all the nodes have the same number of antennas.
\subsubsection{$N_0\geq N_1+N_2+N_3$} Under this configuration, according to Theorem~\ref{Theorem:DoF}, the DoF region of the channel is
\begin{align*}
\mathcal{D}_{k}=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid
d_i\leq N_i,\;i\in\mathcal{V}_\mathcal{G}\Big{\}},\;\forall k.
\end{align*}
This shows that the side information available at the receivers cannot enlarge the DoF region of the channel when $N_0\geq N_1+N_2+N_3$. This is because zero forcing can cancel the interference at all the receivers, and the transmitter can simultaneously create three independent virtual MIMO point-to-point channels, one to each receiver~$i$ with DoF $N_i$. Then the side information which is used to alleviate the interference at the receivers is no longer useful as far as the DoF region is concerned.
\subsubsection{$N_0\hskip-2pt=\hskip-2ptN_1\hskip-2pt=\hskip-2ptN_2\hskip-2pt=\hskip-2ptN_3$} Under this configuration, according to Theorem~\ref{Theorem:DoF}, the DoF region of the channel is
\begin{align}
\mathcal{D}_{k}=\Big{\{}(d_1,d_2,d_3)\in&\mathbb{R}_+^3 \mid
\sum_{k\in\mathcal{V}_\mathcal{Q}}d_k\leq N_0,\;\forall\mathcal{Q}\Big{\}},\;\forall k,\label{DoFEqualAntenna}
\end{align}
where $\mathcal{Q}$ is an acyclic induced subgraph of the side information graph of the channel.
In this configuration, as opposed to the previous configuration, we cannot perform zero forcing, and the side information plays a key role. Based on this, and as the DoF region is defined in the high signal-to-noise ratio region, we show that there are some common properties between the DoF region and the capacity region of the index coding problem~\cite{IndexCoding}. The three-receiver index coding problem considers a noiseless broadcast channel with RMSI where there are three messages, $M_i\in\mathcal{M}_i$, $i\in\mathcal{V}_\mathcal{G}$, each requested by one receiver, and a common noiseless link that carries $n$ bits. Arbabjolfaei et al.~\cite{CapacityRegionIndexCoding1} established the capacity region of the index coding problem for up to five receivers. We present their result for the three-receiver case as Proposition~\ref{Proposition:CapIndexCoding}. The capacity region of the three-receiver index coding problem is achieved using flat coding and time sharing~{\cite[Section III]{CapacityRegionIndexCoding1}}.
\begin{proposition}\label{Proposition:CapIndexCoding}
The capacity region of the three-receiver index coding problem is the set of all rate triples $(R_1,R_2,R_3)$, each satisfying
\begin{align}
\sum_{k\in\mathcal{V}_\mathcal{Q}}R_k\leq 1,\; \forall \mathcal{Q},\label{CapIndexCoding}
\end{align}
where $\mathcal{Q}$ is an acyclic induced subgraph of the side information graph of the channel.
\end{proposition}
The first property that we can see from~\eqref{DoFEqualAntenna} and~\eqref{CapIndexCoding} is that the DoF region normalized by the number of antennas at each node is the same as the capacity region of index coding.
Another property for the capacity region of index coding is that removing the arcs that are not part of a directed cycle does not decrease the capacity region~\cite{CapacityRegionIndexCoding2}. According to~\eqref{DoFEqualAntenna}, this property is also valid for the DoF region of the three-receiver Gaussian MIMO broadcast channel with the same number of antennas at all the nodes.
However, this property is not valid for the DoF region in general. For instance, the DoF region of the channel with $\mathcal{G}=\mathcal{G}_{11}$ is strictly larger than that of the channel with $\mathcal{G}=\mathcal{G}_{8}$ when $N_2<N_0$, $N_3<N_0$, and $N_0<N_1+N_2+N_3$. This property is not valid for the capacity region of the Gaussian MIMO broadcast channel either, even when all the nodes have the same number of antennas. For example, in a previous study~\cite{ThreeReceiverAWGNwithMSI}, we established the capacity region of the Gaussian channel with the side information graphs $\mathcal{G}_{8}$ and $\mathcal{G}_{11}$ where all the nodes have one antenna. The results show that the capacity region of the channel with $\mathcal{G}=\mathcal{G}_{11}$ is strictly larger than that of the channel with $\mathcal{G}=\mathcal{G}_{8}$ when the absolute value of the channel gain for receiver~1 is the largest and that for receiver~3 is the smallest.
\section{Conclusion}
We considered the three-receiver Gaussian multiple-input multiple-output (MIMO) broadcast channel with an arbitrary number of antennas at each node. We assumed that (i) channel matrices are known to all the nodes, (ii) the receivers have private-message requests, and (iii) each receiver may know some of the messages requested by the other receivers as receiver message side information (RMSI). We established the degrees-of-freedom (DoF) region of the channel for all 16 possible non-isomorphic RMSI configurations. To this end, we first proposed a scheme for each RMSI configuration which utilizes both the null space and the side information of the receivers. We used our schemes in conjunction with time sharing for 15 RMSI configurations, and with time sharing and two-symbol extension for the remaining one. We then derived a tight outer bound for each RMSI configuration by constructing enhanced versions of the channel, and upper bounding their DoF region. Furthermore, we showed that some properties for the capacity region of the index coding problem also hold for the DoF region where all the nodes have the same number of antennas.
\section*{Appendix A}\label{AppendixA}
In this section, we present the proof of Lemma~\ref{Lemma:OuterusingIAS}.
\begin{IEEEproof}
Any acyclic induced subgraph, $\mathcal{Q}$, of a side information graph represents a channel with RMSI having three or fewer receivers. We construct an enhanced channel for this channel. To this end, we first choose a receiver of this channel with outdegree zero, say receiver~$\ell$, $\ell\in\mathcal{V}_\mathcal{Q}$ (the outdegree of a vertex (receiver) is the number of its outgoing arcs). We then provide the channel outputs at the other receivers of this channel to receiver~$\ell$. In the enhanced channel, receiver~$\ell$ can first decode its own message and the messages of the other receivers with outdegree zero (if any), as it has all the information that they use to decode their messages. Receiver~$\ell$ can then decode the messages of the receivers whose side information has already been decoded at this receiver. Continuing this approach, since the subgraph is acyclic, receiver~$\ell$ can decode all the messages $\{M_k\}$, $k\in\mathcal{V}_\mathcal{Q}$. Consequently, from the Gaussian MIMO point-to-point channel in which a transmitter with $N_0$ antennas transmits the messages $\{M_k\}$, $k\in\mathcal{V}_\mathcal{Q}$, to a receiver with $\sum_{k\in\mathcal{V}_\mathcal{Q}}N_k$ antennas, we obtain the necessary condition
\begin{align*}
\sum_{k\in\mathcal{V}_\mathcal{Q}}d_k&\leq \min\{N_0,\sum_{k\in\mathcal{V}_\mathcal{Q}}N_k\}.
\end{align*}
\vskip-35pt
\end{IEEEproof}
\vskip30pt
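The counting argument above can be checked mechanically on small instances: enumerate the vertex subsets of a side information graph, keep those whose induced subgraph is acyclic, and emit the corresponding sum-DoF constraint. A minimal Python sketch follows; the arc-list and antenna-dictionary encodings are our own illustrative conventions, not notation from the paper.

```python
from itertools import combinations

def is_acyclic(vertices, arcs):
    """Kahn's algorithm: the subgraph of `arcs` induced by `vertices`
    is acyclic iff every vertex can be removed in topological order."""
    vs = set(vertices)
    succ = {v: [w for (u, w) in arcs if u == v and w in vs] for v in vs}
    indeg = {v: 0 for v in vs}
    for v in vs:
        for w in succ[v]:
            indeg[w] += 1
    queue = [v for v in vs if indeg[v] == 0]
    removed = 0
    while queue:
        v = queue.pop()
        removed += 1
        for w in succ[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return removed == len(vs)

def dof_outer_bounds(arcs, n_antennas, n0):
    """For every acyclic induced subgraph Q of the side information
    graph, emit the constraint
        sum_{k in V_Q} d_k <= min{N0, sum_{k in V_Q} N_k}."""
    bounds = []
    receivers = sorted(n_antennas)
    for r in range(1, len(receivers) + 1):
        for subset in combinations(receivers, r):
            if is_acyclic(subset, arcs):
                cap = min(n0, sum(n_antennas[k] for k in subset))
                bounds.append((subset, cap))
    return bounds
```

For instance, with a two-arc cycle between receivers 1 and 2, the subset $\{1,2\}$ yields no constraint (it is not acyclic), while every acyclic subset does.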
\section*{Appendix B}\label{AppendixB}
In this section, we present the proof of Lemma~\ref{Lemma:3MCCapacity}.
\begin{IEEEproof}
(\textit{Achievability}) To construct the codebook, we first convert the messages $M_2$ and $M_3$ into binary vectors and XOR them, i.e., $M_\text{x}=M_2\oplus M_3$, where $\oplus$ denotes the bitwise XOR operation with zero padding for messages of unequal length ($M_\text{x}$ is an $n\max\{R_2,R_3\}$-bit message). We then choose a distribution $p_{_{U_0,X_0}}(u_0,x_0)$ and generate $2^{n\max\{R_2,R_3\}}$ codewords
\begin{align*}
U_0^n(m_\text{x})=(U_{0,1}(m_\text{x}),U_{0,2}(m_\text{x}),\ldots,U_{0,n}(m_\text{x})),
\end{align*}
according to $\prod_{j=1}^np_{_{U_0}}(u_{0,j})$. Finally, using superposition coding, we generate $2^{nR_1}$ codewords $X_0^n(m_\text{x}, m_1)$ for each $U_0^n(m_\text{x})$ according to $\prod_{j=1}^np_{_{X_0\mid U_0}}(x_{0,j}\mid u_{0,j}(m_\text{x}))$.
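The XOR-with-zero-padding step can be made concrete in a few lines; representing messages as integers is an assumption made here purely for illustration.

```python
def xor_messages(m2: int, r2_bits: int, m3: int, r3_bits: int) -> int:
    """Bitwise XOR of two messages of possibly unequal length.
    The shorter message is implicitly zero-padded, so the result
    M_x = M2 (+) M3 is a max(r2_bits, r3_bits)-bit message."""
    assert 0 <= m2 < (1 << r2_bits) and 0 <= m3 < (1 << r3_bits)
    return m2 ^ m3

def recover(m_x: int, known: int) -> int:
    """A receiver that knows one constituent message recovers the
    other by XOR-ing it out: (M2 (+) M3) (+) M3 = M2."""
    return m_x ^ known
```

The `recover` step is exactly how receivers 2 and 3 use their side information in the decoding argument below.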
Receiver~1 decodes $\hat{m}_1$ if there exists a unique $\hat{m}_1$ such that $\left(U_0^n(m_\text{x}),X_0^n(m_\text{x},\hat{m}_1),Y_1^n\right)\in\mathcal{T}_{\epsilon}^{n}$ for some $m_\text{x}$, where $\mathcal{T}_\epsilon^n$ is the set of jointly $\epsilon$-typical $n$-sequences with respect to the considered distribution~\cite[p. 30]{NITBook}; otherwise an error is declared. We assume without loss of generality that the transmitted messages are equal to zero by the symmetry of the codebook construction. Then receiver~1 makes a decoding error only if one or more of the following events occur.
\begin{align*}
\mathcal{E}_{11}&:\left(U_0^n(0),X_0^n(0,0),Y_1^n\right)\notin\mathcal{T}_{\epsilon}^{n},\\
\mathcal{E}_{12}&:\left(U_0^n(0),X_0^n(0,m_1),Y_1^n\right)\in\mathcal{T}_{\epsilon}^{n}\;\;\text{for some }m_1\neq 0,\\
\mathcal{E}_{13}&:\left(U_0^n(m_\text{x}),X_0^n(m_\text{x},m_1),Y_1^n\right)\in\mathcal{T}_{\epsilon}^{n}\\
&\hskip120pt\text{for some }m_\text{x}\neq 0, m_1\neq 0.
\end{align*}
According to these error events, and using the packing lemma~\cite[p. 45]{NITBook}, receiver~1 can reliably decode $M_1$ if
\begin{align}
R_1&<I(X_0;Y_1\mid U_0),\label{achiev11}\\
R_1+\max\{R_2,R_3\}&<I(X_0;Y_1).\label{achiev12}
\end{align}
Since receiver~2 knows $m_3$ a priori, it decodes $\hat{m}_2$ if there exists a unique $\hat{m}_2$ such that $\left(U_0^n(\hat{m}_2\oplus 0),Y_2^n\right)\in\mathcal{T}_{\epsilon}^{n}$; otherwise an error is declared. Then receiver~2 makes a decoding error only if one or more of the following events occur.
\begin{align*}
\mathcal{E}_{21}&:\left(U_0^n({0}),Y_2^n\right)\notin\mathcal{T}_{\epsilon}^{n},\\
\mathcal{E}_{22}&:\left(U_0^n(m_2\oplus {0}),Y_2^n\right)\in\mathcal{T}_{\epsilon}^{n}\text{ for some }m_2\neq{0}.
\end{align*}
Hence, using the packing lemma, receiver~2 can reliably decode $M_2$ if $R_2<I(U_0;Y_2)$.
Since receiver~3 knows $m_2$ a priori, it decodes $\hat{m}_3$ if there exists a unique $\hat{m}_3$ such that $\left(U_0^n(0\oplus \hat{m}_3),Y_3^n\right)\in\mathcal{T}_{\epsilon}^{n}$; otherwise an error is declared. Then receiver~3 makes a decoding error only if one or more of the following events occur.
\begin{align*}
\mathcal{E}_{31}&:\left(U_0^n({0}),Y_3^n\right)\notin\mathcal{T}_{\epsilon}^{n},\\
\mathcal{E}_{32}&:\left(U_0^n({0}\oplus m_3 ),Y_3^n\right)\in\mathcal{T}_{\epsilon}^{n}\text{ for some }m_3\neq{0}.
\end{align*}
Hence, using the packing lemma, receiver~3 can reliably decode $M_3$ if $R_3<I(U_0;Y_3)$.
Since $U_0\rightarrow X_0\rightarrow Y_1\rightarrow (Y_2,Y_3)$ form a Markov chain, we have $I(U_0;Y_i)\leq I(U_0;Y_1),\;i=2,3$. Hence, the conditions $R_1<I(X_0;Y_1\mid U_0)$ and $\max\{R_2,R_3\}<\max_i I(U_0;Y_i)\leq I(U_0;Y_1)$ together imply $R_1+\max\{R_2,R_3\}<I(X_0;Y_1\mid U_0)+I(U_0;Y_1)=I(X_0;Y_1)$ by the chain rule. This makes the condition in \eqref{achiev12} redundant, and completes the achievability proof. Note that receiver~1 does not use its side information during the decoding process; hence the achievability proof holds irrespective of $K_1$.
(\textit{Converse}) By Fano's inequality~\cite[p. 19]{NITBook}, we have
\begin{align}
H(M_1\mid Y_1^n,M_2,M_3)&\leq n\epsilon_n,\label{fano1111}\\
H(M_2\mid Y_2^n,M_3)&\leq n\epsilon_n,\label{fano1112}\\
H(M_3\mid Y_3^n,M_2)&\leq n\epsilon_n,\label{fano1113}
\end{align}
where $\epsilon_n\rightarrow 0$ as $n\rightarrow \infty$. Using \eqref{fano1111}--\eqref{fano1113}, if a rate triple is achievable, then it must satisfy
\begin{align}
nR_1&\leq I(M_1;Y_1^n\mid M_2,M_3)+n\epsilon_n,\label{conv1111}\\
nR_2&\leq I(M_2;Y_2^n\mid M_3)+n\epsilon_n,\label{conv1112}\\
nR_3&\leq I(M_3;Y_3^n\mid M_2)+n\epsilon_n.\label{conv1113}
\end{align}
We define the auxiliary random variable $U_{0,j}=(M_2,M_3,Y_1^{j-1})$, where $Y_1^{j-1}=(Y_{1,1},Y_{1,2},\ldots,Y_{1,j-1})$, and expand the mutual information terms in \eqref{conv1111}--\eqref{conv1113} respectively as follows.
\begin{align*}
nR_1&\leq I(M_1;Y_1^n\mid M_2,M_3)+n\epsilon_n\\
&=\sum_{j=1}^{n}I(M_1;Y_{1,j}\mid Y_1^{j-1},M_2,M_3)+n\epsilon_n\\
&\overset{(a)}{=}\sum_{j=1}^{n}I\left(X_{0,j};Y_{1,j}\mid Y_1^{j-1},M_2,M_3\right)+n\epsilon_n\\
&=\sum_{j=1}^{n}I\left(X_{0,j};Y_{1,j}\mid U_{0,j}\right)+n\epsilon_n,
\end{align*}
\begin{align*}
nR_2&\leq I(M_2;Y_2^n\mid M_3)+n\epsilon_n\\
&=\sum_{j=1}^{n}I(M_2;Y_{2,j}\mid Y_2^{j-1},M_3)+n\epsilon_n\\
&\leq\sum_{j=1}^{n}I(M_2,M_3,Y_2^{j-1};Y_{2,j})+n\epsilon_n\\
&\leq\sum_{j=1}^{n}I(M_2,M_3,Y_2^{j-1},Y_1^{j-1};Y_{2,j})+n\epsilon_n\\
&\overset{(b)}{=}\sum_{j=1}^{n}I(M_2,M_3,Y_1^{j-1};Y_{2,j})+n\epsilon_n\\
&=\sum_{j=1}^{n}I(U_{0,j};Y_{2,j})+n\epsilon_n,
\end{align*}
and
\begin{align*}
nR_3&\leq I(M_3;Y_3^n\mid M_2)+n\epsilon_n\\
&=\sum_{j=1}^{n}I(M_3;Y_{3,j}\mid Y_3^{j-1},M_2)+n\epsilon_n\\
&\leq\sum_{j=1}^{n}I(M_2,M_3,Y_3^{j-1};Y_{3,j})+n\epsilon_n\\
&\leq\sum_{j=1}^{n}I(M_2,M_3,Y_3^{j-1},Y_1^{j-1};Y_{3,j})+n\epsilon_n\\
&\overset{(b)}{=}\sum_{j=1}^{n}I(M_2,M_3,Y_1^{j-1};Y_{3,j})+n\epsilon_n\\
&=\sum_{j=1}^{n}I(U_{0,j};Y_{3,j})+n\epsilon_n,
\end{align*}
where $(a)$ follows since $X_{0,j}$ is a function of the messages $\{M_i\}_{i=1}^{3}$, and $(M_1,M_2,M_3,Y_1^{j-1})\rightarrow X_{0,j}\rightarrow Y_{1,j}$ form a Markov chain; $(b)$ follows from the Markov chain $X_0\rightarrow Y_1\rightarrow (Y_2,Y_3)$, which implies $Y_2^{j-1}\rightarrow (M_2,M_3,Y_1^{j-1})\rightarrow Y_{2,j}$ and $Y_3^{j-1}\rightarrow (M_2,M_3,Y_1^{j-1})\rightarrow Y_{3,j}$. Since $\epsilon_n\rightarrow 0$ as $n\rightarrow\infty$, using the standard time-sharing argument~\cite[p. 114]{NITBook} completes the converse proof.
\end{IEEEproof}
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Many online platforms present algorithmic suggestions to help users explore the enormous content space.
The recommender systems, which produce such suggestions, are central to modern online platforms.
They have been employed in many applications, such as finding new friends on Twitter~\cite{su2016effect}, discovering interesting communities on LinkedIn~\cite{sharma2013pairwise}, and recommending similar goods on Amazon~\cite{oestreicher2012recommendation,dhar2014prediction}.
In the domain of multimedia, service providers (e.g., YouTube, Netflix, and Spotify) use recommender systems to suggest related videos or songs~\cite{davidson2010youtube,covington2016deep,gomez2016netflix,zhang2012auralist,celma2008hits}.
Much effort has gone into generating more accurate recommendations, but relatively little is known about the effects of recommender systems on overall attention, such as their effects on item popularity ranking, the estimated strength of item-to-item links, and global patterns in the attention gained from being recommended.
This work aims to answer such questions for online videos, using publicly available recommendation networks and attention time series.
We use the term \textit{attention} to refer to a broad range of user activities with respect to an online item, such as clicks, views, likes, comments, shares, or time spent watching.
The term \textit{popularity}, however, is used to denote observed attention statistics that are often used to rank online items against each other.
In this work, our measurement and estimation are carried out on the largest online video platform YouTube (as of 2019), and we specifically quantify popularity using the number of daily views for each video.
The outlined methods may well apply to other deeper forms of user engagement such as watch time.
Due to data availability constraints, the validation in this work is limited to popularity.
\input{images/fig-teaser.tex}
We illustrate the goals of this work through an example.
\cref{fig:teasers}(a) shows the recommendation network for six videos from the artist Adele.
It is a directed network and the directions imply how users can navigate between videos by following the recommendation links.
Some videos are not directly connected but reachable within a few hops.
For example, ``Skyfall'' is not on the recommended list of ``Hello'', but a user can visit ``Skyfall'' from ``Hello'' by first visiting ``Rolling in the deep''.
\cref{fig:teasers}(b) plots the daily view series since the upload of each of the six videos.
When ``Hello'' was released, it broke the YouTube debut records by attracting 28M views in the first 24 hours~\cite{billboard2015adele}.
Simultaneously, we observe a traffic spike in all of her other videos, even in three videos that were not directly pointed by ``Hello''.
This example illustrates that the viewing dynamics of videos connected directly or indirectly through recommendation links may correlate, and it prompts us to investigate the patterns of attention flowing between them.
This work bridges two gaps in the current literature.
The first gap concerns measuring and estimating the effects of recommender systems in complex social systems.
The main goals of recommender systems are maximizing the chance that a user clicks on an item in the next step~\cite{davidson2010youtube,covington2016deep,bendersky2014up,yi2014beyond} or in a longer time horizon~\cite{beutel2018latent,chen2019top,ie2019slateq}.
However, recommendation in social systems remains an open problem for two reasons:
(1) a limited conceptual understanding of how finite human attention is allocated over the network of content, in which some items gain popularity at the expense of, or with the assistance of others;
(2) the computational challenge of jointly recommending a large collection of items.
The second gap comes from a lack of fine-grained measurements on the attention captured by items structured as a network.
There are recent measurements on the YouTube recommendation networks~\cite{airoldi2016follow,cheng2008statistics}, but their measurements are not connected to the attention patterns on content.
Similarly, measurement studies on YouTube attention~\cite{zhou2010impact} quantify the overall volume of views directed from recommended links.
However, no measurement that accounts for both the network structure and the attention flow is available for online videos.
This paper tackles three research questions:
\begin{enumerate}[label=\textbf{RQ\arabic*:}]
\item How to measure video recommendation network from publicly available information?
\item What are the characteristics of the video recommendation network?
\item Can we estimate the attention flow in the video recommendation network?
\end{enumerate}
We address the first question by curating a new YouTube dataset consisting of a large set of \textsc{Vevo}\xspace artists.
This is the first dataset that records both the temporal network snapshots of a recommender system, and the attention dynamics for items in it.
Our observation window lasts 9 weeks.
We present two means to construct the non-personalized recommendation network, and we discuss the relation between them in detail (\cref{sec:data}).
Addressing the second question, we conceptualize the global structure of the network as a bow-tie~\cite{broder2000graph} and we find that the largest strongly connected component accounts for $23.11\%$ of the videos while occupying $82.6\%$ of the attention.
Surprisingly, videos with high indegree are mostly songs with sustained interests, but not the latest released songs with high view counts.
We further find that the network structure is temporally consistent on the macroscopic level; however, there is significant link turnover on the microscopic level.
For example, $50\%$ of the videos with an indegree of 100 on a particular day will gain or lose at least 10 links on the next day, and 25\% of the links appear only once during our 9-week observation window (\cref{sec:measures}).
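Both microscopic statistics can be computed directly from daily edge-set snapshots; a minimal sketch (the snapshot encoding as Python sets of link tuples is our own illustrative convention):

```python
from collections import Counter

def link_turnover(snapshots):
    """Given daily edge-set snapshots of the recommendation network,
    compute (i) the fraction of day-t links that disappear by day t+1
    and (ii) the fraction of distinct links that appear in only a
    single snapshot over the whole observation window."""
    churn = [len(a - b) / len(a) for a, b in zip(snapshots, snapshots[1:])]
    appearances = Counter(e for snap in snapshots for e in snap)
    once = sum(1 for c in appearances.values() if c == 1) / len(appearances)
    return churn, once
```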
Answering the third question, we build a model which employs both the temporal and network features to predict video popularity, and we estimate the amount of views flowing over each link.
Our networked model consistently outperforms the autoregressive and neural network baseline methods.
For an average video in our dataset, we estimate that $31.4\%$ of its views are contributed by the recommendation network.
We also find the evidence of YouTube recommender system boosting the popularity of some niche artists (\cref{sec:models}).
The new methods and observations in this work can be used by content owners, hosting sites, and online users alike.
For content owners, the understanding of how much traffic is driven among their own content or from/to other content can lead to better production and promotion strategies.
For hosting sites, such understanding can help avoid social optimization, and shed light on building a fair and transparent content recommender systems.
For online users, understanding how human attention is shaped by the algorithmic recommendation can help them be conscious of the relevance, novelty and diversity trade-offs in the content they are recommended to.
The main contributions of this work include:
\begin{itemize}[leftmargin=*]
\item We curate a new YouTube dataset, called \textsc{Vevo Music Graph}\xspace dataset\footnote{The code and datasets are publicly available at \url{https://github.com/avalanchesiqi/networked-popularity}}, which contains the daily snapshots of the video recommendation network over a span of 9 weeks, and the associated daily view series for each video since upload.
\item We perform, to our knowledge, the first large-scale measurement study that connects the structure of the recommendation network with video attention dynamics.
\item We propose an effective model that accounts for the network structure to predict video popularity and to estimate the attention flow over each recommendation link.
\end{itemize}
\section{Conclusion}
\label{sec:conclusion}
This work presents a large-scale study for online videos on YouTube.
We collect a new dataset that consists of 60,740 \textsc{Vevo}\xspace music videos, representing some of the most popular music clips and artists.
We construct the YouTube recommendation network.
We present measurements on the global component structure and temporal persistence of links.
A model that leverages the network information for predicting video popularity is proposed, which achieves superior results over other baselines.
It also allows us to estimate the amount of attention flow over each recommendation link.
We derive a metric --- estimated network contribution ratio, and we quantify this ratio at both the entire \textsc{Vevo}\xspace network level and individual artist level.
To the best of our knowledge, this is the first work that links the video recommendation network structure to the attention consumption for the videos in it.
\header{Discussion.}
Much progress has been made to algorithmically optimize or increase the attention for individual digital items (from videos to products to connections in social networks), whereas the theory of attention flow among different items is still fairly nascent.
Our data includes a series of network snapshots that are constructed by the platform's recommender systems, and visible to both content producers and consumers.
We believe that the area of understanding the implications of content recommendation networks has many worthy problems and fruitful applications.
However, definitions and properties of a recommendation network that is fair and transparent to the content hosting site, producers, and consumers remain open issues.
\header{Limitation and future work.}
The limitations of this work include: interpretations of importance are based directly on regression weights; some observations may not generalize to digital items other than the most popular music videos; and the prediction does not explore the full space of deep learning architectures and parameter tuning.
Future work includes modeling attention flow that takes into account item rank on the relevant list; connecting aggregate attention with individual click streams; and improving deep neural network models, for which we see three directions to explore.
First, extracting additional features, such as audio-visual, artist, and network features.
Second, measuring the relation between estimated link strength and link properties, such as the diversity and/or novelty of the target video relative to the source video~\cite{ziegler2005improving}.
Lastly, training a shared RNN model on videos with similar dynamics to increase the volume of training data~\cite{figueiredo2016trendlearner}.
\section{Related work}
\label{sec:related}
In this section, we discuss three lines of research: design of (video) recommender systems, measurements on recommender systems, and studies on user attention towards online items.
\subsection{Recommender systems and video recommendation}
The goals of recommender systems can be summarized as two related yet distinct tasks.
The first task is user-centric, i.e., given users' profiles and past activities, finding a collection of items that might interest them~\cite{konstan2012recommender,covington2016deep}.
The resulting recommendations, often shown in user homepage feed, can be regarded as the entry point for the user action sequence.
The second task is item-centric, i.e., given the currently visited item, finding a ranked list of relevant items~\cite{davidson2010youtube,zhang2012auralist,gomez2016netflix}.
This can be regarded as recommending the next item in a sequence of actions.
In the same vein, we conceptualize and explain the behaviors on YouTube --- users start their action sequences from latent interests, and their subsequent actions are driven by network effects (see \cref{ssec:setting}).
\header{Recommender systems on YouTube.}
Recommender systems, along with YouTube search, have been shown as the two dominant factors driving user attention on YouTube~\cite{zhou2010impact}.
In 2010, \citet{davidson2010youtube} reported the usage of a collaborative filtering method in the YouTube recommender systems, i.e., videos are recommended by counting the number of co-watches.
This approach works well for videos with many views, however, it is less applicable for newly uploaded videos or least watched videos.
\citet{bendersky2014up} proposed two methods to enhance the collaborative filtering approach by embedding the video topic representation into the recommender.
\citet{covington2016deep} applied deep neural networks and indicated that the final recommendation is a top-K sample from a large candidate set generated by taking into account content relevance, past watch and search activities, etc.
Other enhancements include incorporating contextual data \cite{beutel2018latent}.
Most recently, \citet{chen2019top} and \citet{ie2019slateq} showed success in applying reinforcement learning techniques in YouTube recommender systems.
Our work does not deal with designing a recommender system, nor does it attempt to reverse engineer the YouTube recommender.
Instead, we concentrate our analysis on the impacts of the recommender systems by presenting large-scale measurements.
\subsection{Measuring the effects of recommender systems}
Contrasting the extensive literature on evaluating the accuracy of recommendation~\cite{zhang2012auralist,beutel2018latent,chen2019top,li2018offline}, we focus on prior work that connects network structure with content consumption.
\citet{carmi2017oprah} reported how the book sales on Amazon react to exogenous demand shocks --- not only did the sales increase for the featured item, but the increase also
propagated a few hops away by following the links created by the recommender systems.
This is akin to our observation in \cref{fig:teasers} that attention ripples happen for videos too.
\citet{dhar2014prediction} further showed the effectiveness of using the recommendation network in predicting item demands.
\citet{su2016effect} linked the aggregate effects of recommendations and network structure, and found that popular items profit substantially more than the average ones.
However, \citet{sharma2015estimating} stressed the difficulty of inferring causal relations based on observational data in recommender systems.
\citet{cheng2008statistics} are among the first to study the statistics of YouTube recommender systems.
They scraped video webpages to construct the video network at a weekly interval.
\citet{airoldi2016follow} followed the video suggestions on YouTube to construct one static network snapshot for a random collection of music videos.
Note that both studies adopt a snowball sampling technique to construct the network, whereas in our work, we have the complete trace of an easily identifiable group of \textsc{Vevo}\xspace artists, and we capture the dynamics of network snapshots at a much finer daily granularity (see \cref{ssec:crawling}).
Most importantly, our work links the network with the item attention dynamics.
\subsection{Measuring and predicting online attention}
Attention is a scarce resource in online platforms.
While users have an unprecedented volume of information to choose from, online content competes for our limited attention~\cite{weng2012competition,zarezade2017correlated}.
\citet{salganik2006experimental} designed the ``MusicLab'' experiment, in which they explored how social influence and inherent quality affect a product's market share.
In a follow-up study, \citet{krumme2012quantifying} conceptualized user behaviors as two steps for characterizing how users consume digital items.
The first step is based on the appeal of the product, measured by the number of clicks; the second step is based on the quality of the product, measured by post-clicking metrics, e.g., dwell time, comments or shares.
A similar two-step process is employed in the web search community to differentiate between page views and dwell time on webpages~\cite{yue2010beyond,yi2014beyond}.
Following a similar idea, we categorize online attention into \textit{popularity} and \textit{engagement}.
On YouTube, popularity refers to the number of views that a video receives and engagement refers to the time spent on watching the video.
Predicting content popularity is an active field.
For online videos, future popularity has been shown to correlate with popularity in the past~\cite{pinto2013using,szabo2010predicting}, rendering autoregressive method a strong baseline.
Other works integrate additional information.
External sharing on social media has been linked to the popularity of online videos~\cite{li2013popularity,abisheva2014watches}, which is later developed by \citet{rizoiu2017expecting} to model popularity as an interplay of exogenous stimuli and endogenous responses.
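As a point of reference for the autoregressive baseline mentioned above, such a forecaster can be as simple as a least-squares AR(1) fit; the no-intercept, single-lag form below is a deliberate simplification for illustration, not the model used in prior work.

```python
def ar1_fit(series):
    """Least-squares AR(1) coefficient phi minimizing
    sum_t (x_t - phi * x_{t-1})^2  (no intercept, for brevity)."""
    num = sum(a * b for a, b in zip(series[1:], series[:-1]))
    den = sum(a * a for a in series[:-1])
    return num / den

def ar1_forecast(series, horizon):
    """Roll the fitted model forward `horizon` steps from the last
    observed value to obtain a popularity forecast."""
    phi = ar1_fit(series)
    preds, last = [], series[-1]
    for _ in range(horizon):
        last = phi * last
        preds.append(last)
    return preds
```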
Another line of work measures the temporal characteristics of content popularity.
\citet{yu2015lifecyle} revealed that the lifecycles of online videos exhibit a multi-phase pattern.
\citet{figueiredo2016trendlearner} stressed the necessity of predicting content popularity before the user interests exhaust.
For engagement studies on YouTube, we refer to our previous paper~\cite{wu2018beyond}, in which we found that the most engaging videos do not always have the highest view counts.
In contrast to the general understanding that content popularity is unpredictable in social systems~\cite{martin2016exploring,cheng2014can,rizoiu2018sir}, we also found that engagement metrics appear to be much more predictable.
This work focuses on the popularity measures.
To our knowledge, no prior work has attempted to predict video popularity with fine-grained recommendation network information due to the difficulty in constructing such network.
This is the first study that shows how to construct a persistent network by following the recommended links, and how to employ network features to improve the popularity prediction task (see \cref{sec:models}).
It is worth differentiating our work from the studies that collect individual user data from customized browser plugins.
Instead of measuring proactive user behaviors, we are interested in understanding how the platform-generated recommendation network guides the aggregate user attention.
\section{Measuring YouTube video network}
\label{sec:measures}
In this section, we present the macroscopic (\cref{ssec:macroscopic}), microscopic (\cref{ssec:microscopic}), and temporal (\cref{ssec:temporal}) profiling of the \textsc{Vevo}\xspace network.
\subsection{Macroscopic profiling of the \textsc{Vevo}\xspace network}
\label{ssec:macroscopic}
We first compute several basic statistics such as indegree distribution, view count distribution, and \textsc{Vevo}\xspace videos uploading trend (\cref{sssec:basic-statistics}).
Next, we study the connection between the network structure and video popularity (\cref{sssec:structure-viewcounts}).
Lastly, we use the bow-tie structure to characterize the \textsc{Vevo}\xspace network and we discuss the impact of different cutoff values (\cref{sssec:bowtie}).
\input{images/fig-statistics.tex}
\subsubsection{Basic statistics}
\label{sssec:basic-statistics}
\header{Over-represented medium-size indegree videos.}
Here we study the indegree distribution of the \textsc{Vevo}\xspace network.
Note that the outdegree of all nodes is bounded by the cutoff value on the relevant list and therefore not presented.
We remove all links pointing to non-\textsc{Vevo}\xspace videos, resulting in an average of 363,965 edges each day, and an average degree of 6.
Note that the average degree of 10 mentioned in \cref{sec:data} is obtained with a cutoff of 25, whereas here we study the relevant network constructed with a cutoff of 15, since the probability that a video below position 15 is displayed on the recommended list is less than 0.32.
\cref{fig:measure-statistics}(a) shows the complementary cumulative density function (CCDF) of the indegree distribution for four different snapshots of the network, taken 15 days apart.
We notice that the indegree distribution does not resemble a straight line in the log-log plot, meaning it is not power-law, unlike for other online networks, e.g., the World Wide Web \cite{broder2000graph,meusel2014graph}, the network of interaction in online communities \cite{zhang2007expertise}, and the follower/following network on social media \cite{kwak2010twitter}.
Videos with medium-sized indegree are over-represented relative to the best-fitted power-law model ($\alpha=2.02$, fitted with the \textsc{powerlaw} package~\cite{alstott2014powerlaw}, yielding $x^{-1.02}$ in the CCDF~\cite{clauset2009power}).
This result holds for all four snapshots.
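For readers who want to reproduce this kind of fit without the \textsc{powerlaw} package, the empirical CCDF and the continuous maximum-likelihood exponent estimator of Clauset et al.\ can be sketched in self-contained Python; fixing $x_{\min}=1$ is a simplifying assumption here (the package also selects $x_{\min}$):

```python
import math

def ccdf(samples):
    """Empirical complementary CDF: P(X >= x) at each distinct x."""
    xs = sorted(samples)
    n = len(xs)
    out = []
    for i, x in enumerate(xs):
        if i == 0 or x != xs[i - 1]:
            out.append((x, (n - i) / n))
    return out

def powerlaw_alpha(samples, xmin=1.0):
    """Continuous maximum-likelihood exponent estimate
    alpha_hat = 1 + n / sum(ln(x_i / xmin)) over x_i >= xmin
    (Clauset, Shalizi & Newman 2009).  A power law with exponent
    alpha falls as x^{-(alpha - 1)} in the CCDF on log-log axes."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)
```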
\header{Attention is unequally allocated.}
\cref{fig:measure-statistics}(b) plots the average daily views against the view count percentile.
The daily view count at median is 81, but it is 4,575 at the 90th percentile.
These observations, together with a Gini coefficient of $0.946$, indicate that the attention allocation in the \textsc{Vevo}\xspace network is highly unequal --- the top 10\% most viewed videos occupy 93.1\% of the views.
We also find a moderate correlation between view count and indegree value (details in \cref{ssec:microscopic}).
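The inequality statistics reported here follow from standard formulas; a minimal pure-Python sketch using the sorted-values form of the Gini coefficient:

```python
def gini(values):
    """Gini coefficient of an attention (view-count) distribution:
    0 = perfectly equal, approaching 1 = one item takes everything.
    Sorted-values formula:
    G = 2 * sum_i i * x_(i) / (n * sum_i x_(i)) - (n + 1) / n."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return 2.0 * weighted / (n * total) - (n + 1.0) / n

def top_share(values, fraction=0.1):
    """Share of total attention captured by the top `fraction` of items."""
    xs = sorted(values, reverse=True)
    k = max(1, int(round(fraction * len(xs))))
    return sum(xs[:k]) / sum(xs)
```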
\header{Uploading trend by music genre.}
To date, our dataset is the largest digital trace of \textsc{Vevo}\xspace artists on YouTube, allowing us to study the production dynamics of the \textsc{Vevo}\xspace platform.
\cref{fig:measure-statistics}(c) shows the number of \textsc{Vevo}\xspace videos that are uploaded each year from 2009 to 2017, broken down by their genres.
We omit year 2018 as we only observed 8 months for it (until August).
There is a significantly higher number of uploads (9,277) in 2009 as it is the year when \textsc{Vevo}\xspace was launched, and when many all-time favorite songs were syndicated to the YouTube platform.
Pop, Rock, and Hip hop music are the top 3 genres, accounting for 62.85\% of all uploads.
The \textsc{Vevo}\xspace video upload rate has remained roughly constant at around 7,000 per year since 2013.
This flattening of production is somewhat surprising given the overall growth of YouTube~\cite{youtube2017billion}.
\subsubsection{Linking network structure and popularity}
\label{sssec:structure-viewcounts}
\input{images/fig-videos-connect.tex}
Here, we investigate the connection between the relevant network structure and video view counts.
Specifically, we divide the videos in the \textsc{Vevo Music Graph}\xspace dataset into four equal groups by computing the view count quartiles.
Each group contains 15,185 videos.
Next, we count the number of edges that originate and end in each pair of groups.
\cref{fig:measure-connect} represents the four groups together with the number of links between them.
The ``top $25\%$'' group contains the top $25\%$ most viewed videos, while the ``bottom $25\%$'' contains the $25\%$ least viewed videos.
The width of the arrows is scaled by the number of the edges between the videos placed in the two groups.
One can conceptualize the edges as conduits through which attention flows between different groups; their thickness indicates the probability that a random user jumps from one group to the other.
We observe that all four groups have the most links pointing to the ``top $25\%$'' group.
In fact, every group disproportionately points towards more popular groups than towards the less popular ones.
This means the recommendation network built by the platform is likely to take a random viewer towards more popular videos and keep them there, therefore reinforcing the ``rich get richer'' phenomenon.
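The group-flow computation behind \cref{fig:measure-connect} can be sketched as follows; encoding view counts as a dict and edges as pairs is our own illustrative convention, not the paper's data format.

```python
def quartile_flow_matrix(views, edges):
    """Split videos into four equal-size groups by view count and
    count the recommendation links between each pair of groups.
    Returns a 4x4 matrix: flow[i][j] = number of edges from group i
    to group j (group 0 = bottom 25%, group 3 = top 25%)."""
    order = sorted(views, key=views.get)
    n = len(order)
    group = {v: min(4 * i // n, 3) for i, v in enumerate(order)}
    flow = [[0] * 4 for _ in range(4)]
    for src, dst in edges:
        flow[group[src]][group[dst]] += 1
    return flow
```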
\subsubsection{The bow-tie structure of video networks}
\label{sssec:bowtie}
The bow-tie structure was first proposed by~\citet{broder2000graph} to visualize the structure of the whole web.
It classifies the complex web graph into five components:
(a) the largest strongly connected component (LSCC) as the core;
(b) the IN component which can reach the LSCC, but not the other way around;
(c) the OUT component which can be reached from the LSCC, but not the other way around;
(d) the Tendrils component which connect to either the IN or the OUT, bypassing the LSCC;
(e) the Disconnected components which are disconnected from the rest of the components.
The strongly connected component (SCC) can be easily computed in linear time by using Tarjan's algorithm \cite{tarjan1972depth}.
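The bow-tie decomposition itself follows from any SCC routine plus two reachability passes from the LSCC. The sketch below uses Kosaraju's two-pass algorithm instead of Tarjan's (the resulting components are identical) and, for brevity, lumps Tendrils and Disconnected into a single "Other" bucket:

```python
from collections import defaultdict

def bowtie(nodes, edges):
    """Classify nodes of a directed graph into bow-tie components:
    LSCC, IN, OUT, and Other (Tendrils + Disconnected)."""
    fwd, rev = defaultdict(set), defaultdict(set)
    for s, d in edges:
        fwd[s].add(d)
        rev[d].add(s)

    # Pass 1: record DFS finishing order on the forward graph.
    seen, order = set(), []
    for n in nodes:
        if n in seen:
            continue
        seen.add(n)
        stack = [(n, iter(sorted(fwd[n])))]
        while stack:
            v, it = stack[-1]
            nxt = next((u for u in it if u not in seen), None)
            if nxt is None:
                order.append(v)
                stack.pop()
            else:
                seen.add(nxt)
                stack.append((nxt, iter(sorted(fwd[nxt]))))

    # Pass 2: peel SCCs off the reverse graph in reverse finishing order.
    comp, sccs = {}, []
    for n in reversed(order):
        if n in comp:
            continue
        scc, work = [], [n]
        comp[n] = len(sccs)
        while work:
            v = work.pop()
            scc.append(v)
            for u in rev[v]:
                if u not in comp:
                    comp[u] = len(sccs)
                    work.append(u)
        sccs.append(scc)

    lscc = set(max(sccs, key=len))

    def reachable(adj, start):
        out, work = set(start), list(start)
        while work:
            v = work.pop()
            for u in adj[v]:
                if u not in out:
                    out.add(u)
                    work.append(u)
        return out

    out_c = reachable(fwd, lscc) - lscc          # reachable from the LSCC
    in_c = reachable(rev, lscc) - lscc           # can reach the LSCC
    other = set(nodes) - lscc - in_c - out_c     # Tendrils + Disconnected
    return {"LSCC": lscc, "IN": in_c, "OUT": out_c, "Other": other}
```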
For the \textsc{Vevo}\xspace network, we quantify the sizes of different components in the bow-tie structure using both the number of nodes (videos) and the amount of attention (views).
Unlike the Web graph~\cite{broder2000graph}, we know the amount of views garnered by each video, which allows us to comparatively analyze the total attention attracted by each component.
The bow-tie structure is a good conceptual description because directed edges exist only from IN to LSCC, from LSCC to OUT, and from IN to OUT, but not the other way around; attention in the network can thus only flow in a single direction along these paths.
\input{tables/tab-bowtie.tex}
\input{images/fig-bowtie.tex}
\cref{tab:compare-bowtie} compares the relative sizes of each component in prior literature and in our \textsc{Vevo}\xspace network.
The \textsc{Vevo}\xspace network is quite different from other previously studied online networks, e.g., the Web graph \cite{broder2000graph,meusel2014graph} and user activity networks in online communities~\cite{zhang2007expertise,kim2012event}.
It has a much larger IN component, encompassing 68.54\% of all the videos.
The OUT, Tendrils, and Disconnected components are all very small, accounting for a total of 8.35\% of the videos.
\cref{fig:measure-bowtie}(a) visualizes the bow-tie structure of the \textsc{Vevo}\xspace network.
Unlike other graphs, our \textsc{Vevo}\xspace graph is the by-product of the recommender systems, and thus subject to the proprietary algorithm and its update cycle.
This suggests there may exist considerable temporal variation in the composition of the bow-tie components, see \cref{ssec:temporal} for observations over time.
\input{images/fig-bowtie-cutoff.tex}
\cref{fig:measure-bowtie}(b) resizes each component of the \textsc{Vevo}\xspace bow-tie by the total view counts in it.
Visibly, the roles of the LSCC and IN are reversed: the LSCC now captures $82.6\%$ of the attention (while accounting for only $23.11\%$ of the videos), while the large IN component ($68.54\%$ of the videos) attracts only $12.74\%$ of the attention.
This is consistent with the observation in \cref{sssec:structure-viewcounts} that the attention is unequally allocated in the \textsc{Vevo}\xspace network.
Given the definition of the IN component, its $68.54\%$ of videos contribute attention towards the LSCC, but not the other way around (there is no link from LSCC towards IN).
As a result, the LSCC accumulates a large proportion of all attention.
The OUT, Tendrils, and Disconnected components account for almost negligible attention ($4.67\%$ of the views altogether).
\header{Impact of different cutoff values on the bow-tie structure.}
The \textsc{Vevo}\xspace network changes as we change the cutoff on the relevant list, as taking more edges into account densifies the network.
\cref{fig:measure-bowtie-cutoff} shows how the relative size of the bow-tie component changes with varying cutoff values.
As the cutoff increases, more edges are added to the network, especially for the videos in the Disconnected component.
Backward links are formed between videos in the LSCC and IN, and as a result, the LSCC absorbs parts of the IN component.
Therefore, the LSCC grows and the IN shrinks, while the other three components (OUT, Tendrils, and Disconnected) become negligible.
At a cutoff of 50, the \textsc{Vevo}\xspace network structures into two distinct components: an LSCC containing $77\%$ of the videos and $99\%$ of the attention, and an IN component containing the remaining $23\%$ of the videos and accounting for only $1\%$ of the attention.
\subsection{Microscopic profiling of the \textsc{Vevo}\xspace network}
\label{ssec:microscopic}
In this section, we jointly analyze the relation between video age, indegree, and popularity by examining overall correlation, as well as among top-ranked videos.
\input{images/fig-spearmanr.tex}
\header{The disconnect between network indegree and video view count.}
We measure the correlation between video indegree and view count using Spearman's $\rho$ --- a measure of the strength of the association between two ranked variables, which takes values between $-1$ and $+1$.
A positive $\rho$ implies the ranks of the two variables move together in the same direction.
At the level of the entire dataset, we detect a moderate correlation between video indegree and view count (Spearman's $\rho = 0.421^{***}$, $p < 0.001$).
\cref{fig:measure-spearmanr} shows the Spearman's $\rho$ when we further break down the videos in the \textsc{Vevo Music Graph}\xspace based on their uploaded year.
We observe that the strength of the correlation decreases for fresher videos.
Videos uploaded in 2009 have a much stronger correlation ($\rho = 0.638^{***}$) than videos uploaded in 2018 ($\rho = 0.265^{***}$).
This suggests that video age is an important confounding factor when one tries to estimate the effects of the recommendation network.
Empirically, this may indicate the shift in what drives attention towards video consumption.
\citet{zhou2010impact} have measured that the two main drivers for video views are YouTube search and recommender.
One explanation of our observation above is that as videos get older, the effects of recommendation become more pronounced.
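Spearman's $\rho$ is simply the Pearson correlation computed on ranks, with average ranks assigned to ties. A self-contained sketch is below; the paper presumably used a library routine such as \texttt{scipy.stats.spearmanr}, so this is for illustration only:

```python
def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average 1-based ranks assigned to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1                        # extend over the tie run
            avg = (i + j) / 2 + 1             # average rank for the tie run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```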
\header{A closer look at the top videos.}
\cref{tab:top20} presents the top 20 videos with highest average daily indegree (top panel) and top 20 videos with highest average daily views (bottom panel).
We observe a modest amount of discrepancy between these two dimensions, with only 5 videos being on both lists (shown in bold font).
Most of the top-viewed videos are relatively new to the platform --- 10 out of 20 were published within one year, and the top 5 were all published within the past 7 months (relative to November 2018).
In contrast, the videos with high indegree are mostly songs with sustained interest, some dating back 10 years, such as ``The Cranberries - Zombie'' and ``Bon Jovi - It's My Life''.
These two songs were respectively released in 1993 and 2000, having existed for a long time before being uploaded to YouTube.
Currently, they still attract half a million views every day after nearly 20 years, ranking 3rd and 17th on the most-linked video list, respectively.
This may shed light onto why video popularity lifecycle exhibits a multi-phase pattern~\cite{yu2015lifecyle}.
Our observations do not conflict with the design of YouTube recommender systems, which promote ``reasonably recent and fresh'' content \cite{davidson2010youtube,covington2016deep,beutel2018latent}.
Fresh videos can be recommended due to the relevance, novelty and diversity trade-offs~\cite{konstan2012recommender,ziegler2005improving}.
Instead, our observed video relations are based on the content recommendation network~\cite{carmi2017oprah,dhar2014prediction}.
\input{tables/tab-indegree-views.tex}
Another group of interest is the videos that are highly viewed yet with low indegree.
We find this pattern appears at the level of the artist.
For instance, ``Becky G'' has 3 videos on the top 20 most-viewed list, ranking 2, 4, and 14.
However, the indegrees for her videos are extremely low (rank 2411, 40040, and 958 respectively).
Particularly, the video ``Cuando Te Bese'' attracts an average of 2.4M views every day for 9 consecutive weeks.
However, it has only one video pointing to it from the rest of the 60,739 \textsc{Vevo}\xspace videos.
A closer look reveals that ``Becky G'' is an American singer who often releases Spanish songs.
The above observation suggests that her videos are either recommended from non-English and/or non-\textsc{Vevo}\xspace videos, e.g., the Spanish songs community, or that the recommendation network is not the main traffic driver for her videos.
\subsection{Temporal evolution of \textsc{Vevo}\xspace network}
\label{ssec:temporal}
Here, we study the dynamics of the \textsc{Vevo}\xspace network over 9 weeks, namely the appearance and disappearance of recommendation links between videos.
We show that pairs of videos can be connected by either ephemeral or frequent links.
\input{images/fig-temporal-macro.tex}
\header{Macroscopic dynamics.}
\cref{fig:measure-statistics}(a) and (b) show that both the indegree distribution and the view count distribution are temporally consistent.
However, when we plot the size variation of the different components in the bow-tie structure, we obtain a more nuanced story.
\cref{fig:measure-temporal-macro} shows that the size of the LSCC ranges from 11.49\% to 30.13\%, and that of the IN component from 60.37\% to 77.9\%, over the 9 weeks.
Similarly, the percentage of total views in the LSCC ranges from 80.46\% to 90.36\%, and that of the IN component from 9.11\% to 18.07\%.
Given that the same set of videos is tracked throughout the observation period and no new video is added, the above observations imply a significant turnover in the recommendation links between videos.
For example, the appearance of a link will allow a node to transition from the IN to the LSCC component; the disappearance of the same link would make it drop back into IN component.
\header{Incoming ego-network dynamics.}
We study the link turnover using the incoming ego-network for each video.
An ego network consists of an individual focal node and the edges pointing towards it.
We only consider incoming edges, as the number of outgoing edges is capped by the relevant list cutoff (here the cutoff is 15).
For each video, we first extract the days with at least 20 incoming links.
Then, for each day $t$, we compute the indegree change ratio between day $t$ and day $t+1$ by dividing the indegree delta (positive or negative) by the value on day $t$.
We obtain a number between -1 and 1, where -1 means that the video loses all of its incoming edges, and 1 signifies that the video doubles its number of incoming edges.
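The two steps above can be sketched directly; the function name is ours, and the 20-link threshold follows the description in the text:

```python
def indegree_change_ratios(indegree_series, min_indegree=20):
    """For one video's daily indegree series, compute the day-over-day
    change ratio: (indegree[t+1] - indegree[t]) / indegree[t], keeping
    only days t with at least `min_indegree` incoming links.
    Returns a list of (day_index, ratio) pairs: -1 means all in-links
    were lost, +1 means the indegree doubled."""
    out = []
    for t in range(len(indegree_series) - 1):
        d = indegree_series[t]
        if d >= min_indegree:
            out.append((t, (indegree_series[t + 1] - d) / d))
    return out
```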
\cref{fig:measure-temporal-micro}(a) shows the indegree change ratio summarized as quantiles, broken down by the value of indegree.
We highlight the 10th, 25th, median, 75th, and 90th percentile for the videos with an indegree of $100$.
$25\%$ of the videos with an indegree of $100$ will gain at least 8 in-links on the next day, while another $25\%$ will lose at least 11 in-links.
The median is around zero, meaning that as many videos gain links as lose them.
Overall, this suggests that videos have very dynamic incoming ego-networks, with a non-trivial number of edges prone to appear and disappear from one day to another.
\input{images/fig-temporal-micro.tex}
\header{Ephemeral links and frequent links.}
Given the rate at which links appear and disappear, here we ask whether there exist videos that are frequently connected.
For each pair of connected videos, we count the number of times that a link appears between them over the 63 daily snapshots.
\cref{fig:measure-temporal-micro}(b) plots the link frequency (taking values between 1 and 63) on the x-axis and the number of video-to-video pairs with that link frequency on the y-axis.
We find that many links are ephemeral --- they appear only a few times, scattered across the 63-day time window.
We count 434K ($25.2\%$) video-to-video links that appear only once.
On the other hand, there are links that appear in every snapshot --- we count 54K ($3.1\%$) such links.
Ephemeral links may contribute to bursty popularity dynamics of YouTube videos, and to the generally perceived unpredictability in complex social systems~\cite{martin2016exploring,rizoiu2017expecting,rizoiu2018sir}.
Frequent links may hold the answer to understanding and predicting the attention flow in a network of content.
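Counting link frequencies over the snapshots reduces to a single pass with a counter; a minimal sketch with hypothetical toy input:

```python
from collections import Counter

def link_frequencies(daily_snapshots):
    """Count in how many daily snapshots each directed video-to-video
    link appears. `daily_snapshots` is a list of edge sets; returns a
    Counter mapping (src, dst) -> number of snapshots (1..63 in the
    paper). Frequency 1 marks the most ephemeral links; a frequency
    equal to the number of snapshots marks the 'frequent' links."""
    freq = Counter()
    for snapshot in daily_snapshots:
        for edge in snapshot:
            freq[edge] += 1
    return freq
```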
\section{Constructing YouTube video network}
\label{sec:data}
In this section, we first introduce our newly curated \textsc{Vevo Music Graph}\xspace dataset (\cref{ssec:vmg}).
Next, we detail the data collection strategy (\cref{ssec:crawling}) and analyze the relation between two types of non-personalized video recommendation lists (\cref{ssec:nonpersonal}).
\subsection{\textsc{Vevo Music Graph}\xspace dataset}
\label{ssec:vmg}
The \textsc{Vevo Music Graph}\xspace dataset consists of the verified \textsc{Vevo}\xspace artists who are active in six English-speaking countries (United States, United Kingdom, Canada, Australia, New Zealand, and Ireland), together with their complete record of videos uploaded on YouTube from the launch of \textsc{Vevo}\xspace (Dec 8, 2009) until Aug 31, 2018.
Our dataset contains 4,435 \textsc{Vevo}\xspace artists and 60,740 music videos.
For each video, we collect its metadata (e.g., title, description, uploader), its view count time series, and its recommendation relations with other videos.
The videos and their recommendation relations form a dynamic directed network, which we capture daily between Sep 1, 2018 and Nov 2, 2018 (63 days, 9 weeks).
\header{Why \textsc{Vevo}\xspace?}
\textsc{Vevo}\xspace\footnote{The \textsc{Vevo}\xspace website was shut down on May 24, 2018; however, videos syndicated on YouTube before are still embedded with a ``VEVO'' watermark on their thumbnails. See screenshot in \cref{fig:data-layout} for illustration.} is the largest syndication hub that provides licensed music videos from major record companies to YouTube \cite{wikipediavevo}.
We choose to study the networked attention flow on \textsc{Vevo}\xspace for several reasons.
First, \textsc{Vevo}\xspace is an ecosystem of its own that attracts tremendous attention --- 94 of the all-time top 100 most viewed videos on YouTube are music videos, and 64 of those are distributed via \textsc{Vevo}\xspace \cite{wikipediatop}.
On average, our dataset accounts for 310 million views and 9.1 million watch hours every day.
Second, many users utilize YouTube as their music streaming player, listening to non-stop playlists generated by the recommender systems.
After the completion of the current video, YouTube automatically plays the ``Up next'' video --- the video in the first position of the recommended list, as illustrated in \cref{fig:data-layout}.
This usage pattern for music videos makes the network effects of YouTube recommender systems more significant for directing user attention from one video to another.
Third, \textsc{Vevo}\xspace artists and their videos form a tightly connected network.
The average degree in the \textsc{Vevo}\xspace video network is 10, compared to 3.2 in the YouTube video network collected by \citet{airoldi2016follow} via snowball sampling (see \cref{ssec:nonpersonal}).
The nodes are homogeneous in terms of content --- they are all music videos from artists based in English-speaking countries.
Lastly, the \textsc{Vevo}\xspace artists are easily identifiable --- they include the keyword ``VEVO'' in the channel title, they possess a verification badge on the channel page, and they publish licensed videos with a ``VEVO'' watermark.
\subsection{Data collection strategy}
\label{ssec:crawling}
We identify \textsc{Vevo}\xspace artists starting from Twitter.
We capture every tweet that mentions YouTube videos by feeding the rule \texttt{"youtube" OR ("youtu" AND "be")} into the Twitter Streaming API\footnote{\url{https://developer.twitter.com/en/docs/tweets/filter-realtime/api-reference/post-statuses-filter.html}}.
Our Twitter crawler has been running continuously since Jun 2014.
From the ``\texttt{extended\_urls}'' field of each tweet, we extract the associated YouTube video ID, and we use our open-source tool \textsc{youtube-insight}~\cite{wu2018beyond} to retrieve the video's metadata, daily view count series and the ranked list of relevant videos.
Next, we select the \textsc{Vevo}\xspace artists by keeping only the channels that have the keyword ``VEVO'' in the channel title and a ``verified'' status badge on the channel homepage.
Note that a channel refers to a user who uploads videos on YouTube.
We query an open music database MusicBrainz\footnote{\url{https://musicbrainz.org}} to retrieve more features about each artist, such as the music genres and the geographical area of activities.
We retain the artists who are active in the six aforementioned English-speaking countries, and the videos that are classified into the ``Music'' category.
For completeness, we also implement a snowball-like procedure to retrieve further artists and their videos by following the recommendation relations from the tweeted videos.
However, this procedure only adds 2 more artists (out of the 4,435 \textsc{Vevo}\xspace artists in our dataset) and 5 more videos (out of the 60,740 music videos).
This is not surprising, considering most artists would promote their works on social media platforms.
One data limitation is that artists who are not affiliated with \textsc{Vevo}\xspace will not appear in our collection, such as Ed Sheeran and Christina Perry.
\subsection{The network of YouTube videos}
\label{ssec:nonpersonal}
For any YouTube video, there are two publicly accessible sources of recommendation relations.
The first is the right-hand panel of videos that YouTube displays on its interface.
We denote this as the \textit{recommended list} (visualized in \cref{fig:data-layout}).
The second is from the YouTube Data API\footnote{\url{https://developers.google.com/youtube/v3/docs/search/list}}, which retrieves a list of videos that are relevant to the query video, ranked by the relevance.
We denote this as the \textit{relevant list}.
We retrieve both the recommended and the relevant lists for every video in our dataset.
We construct the recommended list by simulating a browser to access the video webpage and scraping the list on the right-hand panel.
We retrieve the first 20 videos from the panel, which is the default number of videos shown to viewers on YouTube.
Note that typically, YouTube customizes the viewers' recommendation panel based on their personal interests and prior interaction history.
Here, we retrieve the non-personalized recommended list by sending all requests from a non-logged in client and by clearing the cookies before each request.
We denote the networks of videos constructed using the recommended and the relevant lists as the \textit{recommended network} and the \textit{relevant network}, respectively.
From Sep 1, 2018 to Nov 2, 2018, we crawled both the recommended and the relevant lists for each of the 60,740 \textsc{Vevo}\xspace videos on a daily basis.
The crawling jobs were distributed across 20 virtual machines, and took about 2 hours to finish.
In this way, we obtain successive snapshots for both the recommended and the relevant networks over 9 weeks.
\input{images/fig-layout.tex}
\header{An illustrative example.}
\cref{fig:data-layout} illustrates the YouTube webpage layout for the video ``Hello'' by Adele, together with its recommended and relevant lists.
Videos belonging to the \textsc{Vevo}\xspace artists are colored in blue (e.g., Adele and The Cranberries), while others are colored in grey (e.g., Ed Sheeran and Christina Perry).
Visibly, not all videos on the recommended and relevant lists belong to the \textsc{Vevo}\xspace artists (e.g., ``Ed Sheeran - Perfect'').
Notice that for Music videos, a platform-generated playlist is always shown at the second position of the recommended list (here, ``Mix - Adele - Hello''), effectively capping the size of this list at 19.
The length of the relevant list often exceeds 100.
We observe that not all relevant videos appear in the recommended list (e.g., ``The Cranberries - Zombie''), nor all recommended videos originate from the relevant list (e.g., ``Adele - Skyfall'').
Also, the relative positions of two videos can appear flipped between the two lists (e.g., ``Ed Sheeran - Perfect'' and ``Christina Perry - A Thousand Years'').
\header{Display probabilities from the relevant to the recommended list.}
We study the relation between the positions of videos on the relevant and on the recommended lists.
We construct four bins based on the video position on the recommended list (position 1, position 2-5, position 6-10, and position 11-15).
\cref{fig:data-rel2rec}(a) shows as stacked bars the probability that a video ends up in each of the bins, as a function of its position on the relevant list.
The total height of the stacked bars gives the overall probability that a video originating from the relevant list appears at the top 15 positions on the recommended list.
We observe that videos appearing at a higher position on the relevant list are more likely to appear on the recommended list, and at a higher position.
For example, the video at position 1 on the relevant list has 0.34 probability to appear at the first position and 0.84 probability to appear at the top 15 positions on the recommended list.
The probability decays for videos that appear at lower positions.
A relevant video appearing in position 41 to 50 has less than 0.05 probability to appear on the recommended list.
We compute the probabilities of appearance between each pair of positions in the relevant and the recommended lists --- denoted as \textit{display probabilities} --- using the 9-week dynamic network snapshots.
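A sketch of how such display probabilities can be estimated from (relevant position, recommended position) observations; the function and its toy inputs are illustrative, not the paper's exact code:

```python
from collections import defaultdict

def display_probabilities(observations, max_rel=50, max_rec=15):
    """Estimate P(recommended position = j | relevant position = i).
    `observations` is an iterable of (relevant_pos, recommended_pos)
    pairs, with recommended_pos = None when the video did not appear in
    the top recommended positions. Returns {i: {j: probability}} for
    the relevant positions that appeared at least once on the
    recommended list."""
    totals = defaultdict(int)
    joint = defaultdict(lambda: defaultdict(int))
    for rel, rec in observations:
        if rel > max_rel:
            continue
        totals[rel] += 1
        if rec is not None and rec <= max_rec:
            joint[rel][rec] += 1
    return {i: {j: c / totals[i] for j, c in joint[i].items()}
            for i in joint}
```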
\input{images/fig-rel2rec.tex}
In \cref{fig:data-rel2rec}(b), we plot the probability that a video at a given position on the recommended list originates from the relevant list.
We observe that videos appearing at a higher position on the recommended list are more likely to originate from a higher position on the relevant list.
The other notable observation is that the overall recall for recommended videos is high, at over 0.8, meaning that for any video on the recommended list, we are likely to also find it on the relevant list.
\header{YouTube video network density.}
\citet{airoldi2016follow} used the first 25 videos on the relevant list to construct the relevant network, which had an average degree of $3.2$.
By comparison, our \textsc{Vevo}\xspace video network is much denser at the same cutoff, with an average degree of 10.
One could expect the relevant network to become even denser when videos at lower positions are included; however, the display probabilities also need to be considered.
In this paper and unless otherwise specified, we use the first 15 positions on the relevant list ($0.35 \leq P_{\text{display}} \leq 0.84$) to construct the relevant network.
We denote this threshold as the \textit{cutoff} and we study the impact of different cutoff values on the network structure in \cref{sssec:bowtie}.
Measurements with other cutoff values yield similar results and thus are omitted.
\header{Discussion on the recommended and relevant lists.}
The notions of recommended and relevant lists have been previously adopted in the field
of recommender systems \cite{herlocker2004evaluating}.
The relevant list is usually hidden from the user-interface and ranked according to the semantic relevance between the query and the items.
In contrast, the recommended list reflects the final recommendations in the user interface, i.e., displaying on the right-hand panel of the video webpage.
On YouTube, the recommended list is a top-K sample from the concatenation of the relevant list, user demographics, watch history, search history, and spatial-temporal information \cite{covington2016deep}.
All features, apart from the relevant list, are user-, time- and location-dependent.
Hence, the displayed recommended list of the same video can be very different for two viewers, depending on their logged-in state, location, and viewing time.
On the other hand, the relevant list is consistent for all requests, from any client during any period of time.
We also observe the relevant list changes less frequently than the recommended list, which suggests it is more robust to the update of YouTube recommender systems.
For these reasons, we use the relevant list to construct and measure YouTube video network in \cref{sec:measures}.
\section{Estimating attention flow in the YouTube video network}
\label{sec:models}
The goal of this section is to estimate how well the view count of a video $v$ on day $t$ (denoted by $\mathbf{y}_v[t]$) can be predicted, given (1) the view series of $v$ over the past $w$ days, $\mathbf{y}_{v}[t-w], \ldots, \mathbf{y}_{v}[t-1]$; and (2) the view series $\mathbf{y}_{u}[t-w], \ldots, \mathbf{y}_{u}[t]$ for the set of videos $\{u \mid (u \rightarrow v) \in G \}$ pointing to $v$.
To this end, we first define and extract a persistent network that contains links appearing throughout all the snapshots (\cref{ssec:persistent}).
Next, we detail the setup of predicting video popularity with recommendation network information (\cref{ssec:setting}).
We analyze the prediction results and examine the strength of each link (\cref{subsec:prediction-results}).
Finally, we introduce a new metric --- estimated network contribution ratio.
We use it to identify the types of content that benefit most from being recommended in the network (\cref{subsec:result-interpretation}).
\subsection{Constructing a network with persistent links}
\label{ssec:persistent}
In order to reliably estimate the effects of the recommendation network on the viewing behaviors, we apply two filters:
(a) target videos should have at least 100 daily views on average;
(b) the average daily views of the source videos should be at least 1\% of those of the target videos as such videos cannot substantially influence their far more popular neighbors.
In the resulting network, we further remove the \textit{ephemeral links} that appear sporadically over time and correct for the \textit{missing links} that appear frequently, but with scattered gaps in between their appearances.
We assume that the missing links are likely to exist in the scattered gaps, and we use a majority smoothing method to find them (detailed next).
Links appearing in all the 63 daily snapshots and the corrected missing links, both dubbed \textit{persistent links}, make up the \textit{persistent network}.
\header{Finding persistent links.}
We use a moving window of length 7, same as the weekly seasonality, to extract the persistent structure of the \textsc{Vevo}\xspace network over the 63-day observation window.
A link from video $u$ to video $v$, $(u \rightarrow v)$, is maintained on day $t$ if $(u \rightarrow v)$ appears in a majority ($\geq 4$) of the days in time window $[t-3, t+3]$.
Likewise, if a link is missing on day $t$ but appears in the majority of the surrounding 7-day window, we consider it a missing link and add it back to the network.
When $t-3$ is earlier than the first day of data collection, or $t+3$ later than the last day, we still apply the majority rule on the available days.
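The majority rule, including the boundary handling, can be written compactly. A sketch, assuming a 0/1 daily presence sequence for one link (the function name is ours):

```python
def smooth_presence(presence, window=7):
    """Apply the 7-day majority rule to a daily link presence sequence
    (list of 0/1). The link is kept on day t if it is present in a
    strict majority (>= 4 of 7) of the days in [t-3, t+3]; at the
    boundaries, the majority is taken over the available days only,
    as described in the text."""
    half = window // 2
    out = []
    for t in range(len(presence)):
        lo, hi = max(0, t - half), min(len(presence), t + half + 1)
        days = presence[lo:hi]
        out.append(1 if 2 * sum(days) > len(days) else 0)
    return out
```

Note how a single missing day surrounded by presences is filled back in, while isolated appearances are suppressed.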
The resulting graph has 52,758 directed links, pointing from 28,657 source videos to 13,710 target videos.
Among them, 2,696 links are reciprocal, meaning two videos mutually recommend each other.
We find significant homophily in the persistent network:
33,908 (64.3\%) links have both the source and the target videos belonging to the same \textsc{Vevo}\xspace artist, and 44,154 (83.7\%) links are between videos of the same music genre.
\input{images/fig-persistent.tex}
\header{Validating persistent links via simulation.}
We illustrate the probability of persistent links by simulating a simple link presence/absence model.
We assume that a link is independently present on each day with probability $p_l \in [0, 1]$, and absent with probability $1 - p_l$.
We first simulate the link presence over the 63 days, then apply our 7-day majority smoothing to determine whether the link is persistent.
We repeat the simulation 100,000 times, and compute the probability of a link being persistent, denoted by $\xi$.
In \cref{fig:model-persistent}(a), we plot the obtained $\xi$ against varying $p_l$.
For $p_l=0.5$ the edge is never persistent ($\xi = 0$), whereas for $p_l=0.9$ the edge is very likely to be persistent ($\xi = 0.92$).
From the simulation results, we can see that our 7-day majority smoothing rule favors links that appear much more frequently than chance, and suppresses links that appear at or below chance level.
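This simulation takes only a few lines of Monte Carlo; the sketch below uses fewer runs than the paper's 100,000, and the function name is ours:

```python
import random

def persistence_probability(p_l, n_days=63, n_sims=20000, seed=7):
    """Monte Carlo estimate of xi: the probability that a link whose
    daily presence is i.i.d. Bernoulli(p_l) survives the 7-day majority
    smoothing on every one of the n_days (i.e., is persistent)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        presence = [rng.random() < p_l for _ in range(n_days)]
        persistent = True
        for t in range(n_days):
            lo, hi = max(0, t - 3), min(n_days, t + 4)
            window = presence[lo:hi]
            if 2 * sum(window) <= len(window):   # no strict majority
                persistent = False
                break
        if persistent:
            hits += 1
    return hits / n_sims
```

Sweeping $p_l$ over $[0, 1]$ with this function reproduces the shape of the curve in \cref{fig:model-persistent}(a).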
\header{Videos connected by persistent links have correlated popularity dynamics.}
We use Pearson's $r$ to measure the correlation between the popularity dynamics of two videos connected by a persistent link.
It is known that the cross-correlation of time series data is affected by the within-series dependence.
Therefore, we deseasonalize, detrend, and normalize the view count series by following the benchmark steps in the M4 forecasting competition~\cite{m4forecasting}.
This is to ensure that the residual time series data is stationary and to avoid spurious correlations.
We compute the Pearson's $r$ on the obtained residual data, and we perform a paired correlation test which we consider statistically significant for $p < 0.05$.
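As a simplified stand-in for the full M4 preprocessing pipeline, one can seasonally difference both series at lag 7 before correlating them; the sketch below is illustrative and not the exact benchmark steps used in the paper:

```python
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def residual_correlation(series_u, series_v, season=7):
    """Correlate two daily view-count series after lag-7 seasonal
    differencing, a simplified substitute for the M4 deseasonalize/
    detrend/normalize steps cited in the text."""
    du = [series_u[t] - series_u[t - season]
          for t in range(season, len(series_u))]
    dv = [series_v[t] - series_v[t - season]
          for t in range(season, len(series_v))]
    return pearson_r(du, dv)
```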
\cref{fig:model-persistent}(b) shows the fraction of links for which the correlation test is statistically significant over four groups of links.
The $persistent^{-}$ group contains all the 52,758 persistent links we identified but excluding the 2,696 pairs of \textit{reciprocal} links --- resulting in 47,366 persistent yet non-reciprocal links.
The \textit{ephemeral} group consists of all links which have been deemed as non-persistent after applying the 7-day majority smoothing.
The \textit{random} group is constructed by randomly selecting pairs of unconnected videos and pretending that they have a link.
All groups are filtered based on the same two criteria mentioned before.
There are a total of 694,617 links in the ephemeral group and we sample 700,000 links in the random group.
We find that $75.4\%$ of the reciprocal links connect videos with statistically correlated popularity series.
We include both positive and negative correlations as two user attention series may cooperate or compete with each other~\cite{zarezade2017correlated}.
Combining the reciprocal and persistent$^{-}$ groups, 26,460 (50.2\%) links in our persistent network have correlated dynamics.
This is much higher than the percentage for ephemeral links (40.9\%) and that for unconnected random video pairs (22.1\%).
We further examine the content similarity in the persistent links by grouping links that connect videos from the same artist or with the same music genre (described in~\cref{fig:measure-statistics}(c)).
\cref{fig:model-persistent}(c) top shows that most reciprocal links (93.1\%) connect videos from the same artist, while 71.1\% of them have statistically correlated popularity dynamics.
The percentages are slightly lower for the persistent$^{-}$ group (61\% from the same artist, and 32.6\% with correlated popularity) and it drops even lower for ephemeral group (28.2\% and 12.2\%, respectively).
The situation is slightly different when we study the links that connect videos of the same genre, as shown in \cref{fig:model-persistent}(c) bottom.
We find that more than 80\% of the links connect videos of the same genre, irrespective of whether they are sporadically or persistently connected.
The percentages of statistically correlated links with the same genre follow the same trend as those from the same artist, i.e., highest for reciprocal (65\%), followed by persistent$^{-}$ (39.8\%), ephemeral (33.6\%) and lowest for random (6.6\%).
The above observations indicate that not all persistent links have the same effect on video popularity, and motivate us to build a prediction model for each of the links.
\subsection{Prediction setup and models}
\label{ssec:setting}
\header{Prediction setting.}
One important observation is that viewing dynamics exhibit a 7-day seasonality~\cite{huang2018user,cheng2008statistics}.
In our temporal hold-out setting, we use the first 8 weeks (2018-09-01 to 2018-10-26) to train the model and we predict the daily view counts in the last week (2018-10-27 to 2018-11-02).
This chronological split ensures that the training data temporally precedes the testing data.
If at any point we are required to use the day $t+1$ to predict the day $t+2$ (when both $t+1$ and $t+2$ are in the testing period), we use the predicted value $\hat{\mathbf y}[t+1]$ instead of observed value $\mathbf{y}[t+1]$.
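This recursive scheme can be sketched as follows, where \texttt{one\_step} is a hypothetical placeholder for any fitted one-step-ahead model:

```python
def recursive_forecast(history, one_step, horizon):
    """Multi-step forecast that feeds predictions back as inputs.

    history: observed values up to the end of the training period.
    one_step: callable mapping a list of past values to the next value.
    horizon: number of future days to predict.
    """
    past = list(history)        # never mutate the caller's data
    preds = []
    for _ in range(horizon):
        y_hat = one_step(past)  # uses predicted values once observations run out
        preds.append(y_hat)
        past.append(y_hat)
    return preds

# Toy one-step model: tomorrow equals the mean of the last 3 days.
def mean3(past):
    return sum(past[-3:]) / 3

preds = recursive_forecast([9, 10, 11], mean3, horizon=2)
```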
\header{Evaluation metric.}
The prediction performance is quantified using the symmetric mean absolute percentage error (SMAPE).
SMAPE is an alternative to the mean absolute percentage error (MAPE) that can handle the case when the true value or the predicted value is zero.
It is a scale-independent metric and thus suitable for our task, in which the volumes of views for different videos vary considerably.
Formally, SMAPE can be defined as
\input{equations/eq-smape.tex}
where $\mathbf{y}_v[t]$ is the true value for video $v$ on day $t$, $\hat{\mathbf{y}}_v[t]$ is the predicted value, $\mathrm{T}$ is the maximal forecast horizon, and $G$ is the persistent network.
SMAPE$(v)$ averages the forecast errors over different horizons for an individual video $v$, while SMAPE$(t)$ averages over different videos for a certain forecast horizon $t$.
The overall SMAPE for each model is computed by taking the arithmetic mean of SMAPEs over different horizons and over all videos.
SMAPE ranges from 0 to 200, where 0 indicates perfect prediction and 200 the largest error, which occurs when exactly one of the true and predicted values is 0.
When the true and predicted values are both 0, we define SMAPE to be 0.
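A direct implementation of this metric, including the zero-handling convention above, can be sketched as:

```python
def smape(y_true, y_pred):
    """Symmetric mean absolute percentage error, bounded in [0, 200].

    Per-step error: 200 * |y - y_hat| / (|y| + |y_hat|); a step in which
    both values are 0 contributes 0 by convention.
    """
    total = 0.0
    for y, y_hat in zip(y_true, y_pred):
        denom = abs(y) + abs(y_hat)
        if denom > 0:
            total += 200.0 * abs(y - y_hat) / denom
    return total / len(y_true)

print(smape([100, 100], [100, 100]))  # perfect prediction -> 0.0
print(smape([100, 0], [100, 50]))     # one zero-valued step -> (0 + 200) / 2 = 100.0
```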
\header{Baseline models.}
We use several off-the-shelf time series forecasting methods, ranging from a naive forecast to a recurrent neural network.
The baseline models are estimated on a per-video basis.
\begin{itemize}[leftmargin=*]
\item {Naive}: The forecast at all future times is the last known observation.
\input{equations/eq-naive.tex}
where $\mathrm{T}^*$ is the last day in the training phase.
\item {Seasonal naive (SN)}: The forecast is the corresponding observation in the last seasonal cycle.
This method often works well for seasonal data.
We observe that many videos in the \textsc{Vevo Music Graph}\xspace dataset exhibit a 7-day seasonality.
Therefore we set the periodicity length $\mathrm{m}^*$ to be 7.
\input{equations/eq-snaive.tex}
\item {Autoregressive (AR)}: AR is one of the most commonly used models in time series forecasting.
An AR model of order $p$ describes the relation between each of the past $p$ days and the current day, formally defined as:
\input{equations/eq-ar.tex}
We choose the order $p$ to be 7.
$\alpha_{v, \tau}$ represents the relation between the current day and $\tau$ days before.
\item {Recurrent neural network (RNN)}: RNN is a deep learning architecture that models temporal sequences.
We implement RNN with long short-term memory (LSTM) units.
LSTM-based approaches have been competitive in time series forecast tasks, mainly in a sequence-to-sequence (seq2seq) setup, see \cite{kuznetsov2019foundations} for detailed discussions.
\end{itemize}
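The first three baselines can be illustrated in a few lines; the AR coefficients below are placeholder values, whereas in practice they are fitted per video (e.g., with \textsc{statsmodels}):

```python
def naive(history, horizon):
    """Forecast = last known observation, repeated."""
    return [history[-1]] * horizon

def seasonal_naive(history, horizon, m=7):
    """Forecast = the observation one seasonal cycle (m days) earlier."""
    return [history[-m + (t % m)] for t in range(horizon)]

def ar_one_step(history, alphas):
    """One-step AR(p) forecast given coefficients (alphas[0] = yesterday)."""
    p = len(alphas)
    return sum(a * y for a, y in zip(alphas, reversed(history[-p:])))

week = [10, 12, 11, 13, 15, 20, 18]   # one week of daily views
print(naive(week, 3))                 # [18, 18, 18]
print(seasonal_naive(week, 3))        # [10, 12, 11]
print(ar_one_step(week, [0.5, 0.3, 0.2]))
```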
\header{Networked popularity model.}
Building on the AR model, we capture the network effects by assigning a weight $\beta_{u, v}$ to each link $(u \rightarrow v)$ in the persistent graph $G$; the weight modulates the inbound traffic received via that link, defined as:
\input{equations/eq-network.tex}
$\beta_{u, v}$ can be interpreted as the probability that a generic user clicks on video $v$ from video $u$; therefore, we impose the constraint $0 \leq \beta_{u, v} \leq 1$.
We refer to this model as ARNet.
One way to interpret ARNet is to conceptualize a YouTube watching session as a sequence of video clicks.
We therefore categorize views on YouTube into two classes: \textit{initial} views and \textit{subsequent} views.
The initial views start the clicking sequences.
Some possible entry points include homepage feed, search results, or YouTube URLs on other social media.
The subsequent views model the behaviors of users clicking by following the recommendation links.
The session ends when the user navigates back to YouTube homepage, or quits the browser.
Although in the dataset we cannot differentiate initial views from subsequent views, we consider that initial views are driven by the latent interest of users, modelled as autoregression over the past $p$ days; in contrast, subsequent views are directed by the recommendation network, modelled as contributions from the incoming neighbours $\{u | (u \rightarrow v) \in G \}$ mediated by the estimated link strengths $\beta_{u, v}$.
We use the \textsc{statsmodels.tsa} package for the AR model, \textsc{keras} package for the RNN, and build a customized optimization task with constrained \textsc{L-BFGS} for the ARNet.
We use the SMAPE as objective function in both RNN and ARNet.
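Putting the pieces together, a one-step ARNet forecast combines the autoregressive term with the weighted inbound traffic. The sketch below assumes already-fitted coefficients and toy view histories; in the actual pipeline the coefficients are estimated with constrained \textsc{L-BFGS}:

```python
def arnet_one_step(target, histories, alphas, betas, in_links):
    """One-step ARNet forecast for a single target video.

    histories: dict video -> list of past daily views.
    alphas: AR coefficients of the target (alphas[0] weighs yesterday).
    betas: dict (u, target) -> link strength, constrained to [0, 1].
    in_links: incoming neighbours u with a link (u -> target).
    """
    p = len(alphas)
    own = sum(a * y for a, y in zip(alphas, reversed(histories[target][-p:])))
    inbound = sum(betas[(u, target)] * histories[u][-1] for u in in_links)
    return own + inbound

# Toy histories and coefficients (illustrative values only).
histories = {"v": [100, 110], "u1": [50, 60], "u2": [200, 180]}
pred = arnet_one_step(
    "v", histories,
    alphas=[0.6, 0.3],
    betas={("u1", "v"): 0.1, ("u2", "v"): 0.05},
    in_links=["u1", "u2"],
)
```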
\subsection{Popularity prediction results}
\label{subsec:prediction-results}
\cref{fig:model-results}(a) summarizes the prediction errors achieved by the five methods defined in \cref{ssec:setting}.
The Naive model alone is a weak predictor; however, accounting for the seasonal effects (SN model) yields a significant error decrease.
It is worth noticing that the AR model yields similar performance as the advanced RNN model --- due to the known result that future popularity of online videos correlates with their past popularity \cite{pinto2013using}.
We observe that using recommendation network information further improves the prediction performance:
the ARNet model achieves a 9.66\% relative error reduction compared to the RNN model.
This prediction task shows that one can better predict the view series for a video if the list of videos pointing to it is known.
Next we study the prediction performance with respect to the forecast horizon, i.e., how many days in advance do we predict.
We average the SMAPEs over all videos for a given forecast horizon $t$, computed as $\mathrm{SMAPE}(t)$ in \cref{eq:smape}.
\cref{fig:model-results}(b) shows a nuanced story: the prediction performances decrease for all models as the forecast horizon extends.
Nevertheless, the ARNet model consistently outperforms other baselines across all forecast horizons, especially for larger horizons.
\input{images/fig-prediction.tex}
We posit two factors in preventing the models from obtaining even better results.
Firstly, it is well known that the attention dynamics tend to be bursty when items are first uploaded~\cite{rizoiu2017online,cheng2016cascades,martin2016exploring}, and the interest dissipates with time~\cite{figueiredo2016trendlearner}.
Given that 56,845 (93.6\%) videos in our dataset have been uploaded for more than one year and 9,277 (15.3\%) videos for almost ten years, most of the videos have passed the phases of the initial attention burst.
As a result, a large part of popularity variation comes from the weekly seasonality, rendering the simple seasonal naive model particularly competitive when compared to the more advanced RNN method.
The second is data sparsity when we build the models on a per-video basis.
RNN works best when it has ample volumes of data to train.
However, we use a sliding 7-day window to predict the views in the next 7 days as suggested in~\cite{kuznetsov2019foundations}; therefore our data size limits the effective training of the RNN model.
In our ARNet model, the estimated link strength $\beta_{u, v}$ can be used to quantify the influence from a video to its neighbours.
In \cref{fig:model-results}(c), we plot the distribution of $\beta_{u, v}$ against the ratio of views of source video to that of target video.
We split the x-axis into 40 equally wide bins in log scale.
Within each bin, we compute the values at each percentile, and then connect the same percentile across all bins.
The median line is highlighted in black.
The lighter the color shades are, the further the corresponding percentiles are away from the median.
We observe that the distribution has a bi-modal shape, with the first mode at 0.01 and the second at 0.40 (for the median), meaning users are more likely to click through to a much more popular video (100 times more popular) or to a moderately more popular one (2.5 times).
In contrast, the estimated link strength towards a less popular video is very low.
This observation, together with the measurement that videos disproportionately point to more popular videos (\cref{sssec:structure-viewcounts}), further reinforces the ``rich get richer'' phenomenon.
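The binning procedure behind this plot can be sketched as follows (log-spaced bins over the view ratio, then a per-bin percentile, here the median; shown with toy values):

```python
import math

def log_bin_medians(ratios, betas, n_bins=40):
    """Median link strength per log-spaced bin of the view ratio."""
    lo, hi = math.log10(min(ratios)), math.log10(max(ratios))
    width = (hi - lo) / n_bins
    bins = [[] for _ in range(n_bins)]
    for r, b in zip(ratios, betas):
        idx = min(int((math.log10(r) - lo) / width), n_bins - 1)
        bins[idx].append(b)

    def median(xs):
        xs = sorted(xs)
        mid = len(xs) // 2
        return xs[mid] if len(xs) % 2 else (xs[mid - 1] + xs[mid]) / 2

    return [median(b) if b else None for b in bins]

# Toy links: ratio = views(source) / views(target), beta = link strength.
ratios = [0.015, 0.15, 1.5, 15, 95]
betas = [0.1, 0.2, 0.3, 0.4, 0.5]
medians = log_bin_medians(ratios, betas, n_bins=4)
```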
\subsection{The impacts of network on video popularity prediction}
\label{subsec:result-interpretation}
From the ARNet model, we derive a metric called the estimated network contribution ratio $\eta_v$, which is defined as
\input{equations/eq-network-ratio.tex}
$\eta_v$ is the fraction of estimated inbound traffic from video $v$'s neighbours against its own predicted popularity.
As we constrain all coefficients in \cref{eq:network} to be non-negative, $\eta_v$ is bounded in $[0, 1]$.
In our dataset, the mean $\eta_v$ is $0.314$.
In other words, for an average video in the \textsc{Vevo Music Graph}\xspace dataset, 31.4\% of its views are estimated from the recommendation network.
This value is slightly higher than the YouTube network contribution measured by \citet{zhou2010impact} in 2010 (reported below 30\%).
We posit two potential reasons: (1) the \textsc{Vevo}\xspace network is more tightly connected than a random YouTube video network~\cite{airoldi2016follow}; (2) traffic on recommendation links may have increased since then, signifying the advances of modern recommender systems.
Furthermore, among the 31.4\% networked views, 85.9\% are estimated from the same artist, echoing the network homogeneity found by \citet{airoldi2016follow}.
On average, the 13,710 target videos in the persistent network attract 245.3M views every day.
Our ARNet model estimates that 78.6M (32\%) of these views are contributed via the recommendation network.
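Given fitted coefficients, the network contribution ratio reduces to a division between the estimated inbound traffic and the total predicted views; a toy sketch:

```python
def network_contribution_ratio(inbound, own):
    """eta_v: estimated inbound views over total predicted views.

    inbound: views attributed to incoming recommendation links.
    own: views attributed to the video's own autoregressive term.
    Both terms are non-negative, so eta_v is bounded in [0, 1].
    """
    total = inbound + own
    return inbound / total if total > 0 else 0.0

# Toy numbers: 15 of 111 predicted daily views arrive via links.
eta = network_contribution_ratio(inbound=15.0, own=96.0)
```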
\input{images/fig-error.tex}
Firstly, we explore the relation between prediction performance and content similarity concerning the artist and music genre.
In \cref{fig:model-error}(a), we compute $\eta_v$ conditioned on that ${(u, v) \in G}$ and that $u$ and $v$ are from the same artist (top) or with the same genre (bottom).
We then slice the x-axis into 20 bins, 5 percentiles apart, based on the artist/genre network contribution ratio.
We compute the mean SMAPEs for the videos in each bin.
Videos that are connected solely by videos from other artists/genres will be placed in the leftmost bin ($\eta_v = 0$).
The plot shows that the SMAPE error decreases with the increasing percentage of views from videos with the same artist or genre.
Secondly, we ask which artists would be affected most \textit{if} the recommender systems were turned off.
\cref{fig:model-error}(b) shows the popularity percentile \textit{change} at the level of artist.
We first compute the network-subtracted views, i.e., we subtract the network contribution $\sum_{t=1}^{\mathrm{T}} \sum_{(u, v) \in G} \beta_{u, v} \mathbf{y}_{u}[t]$ from the observed views $\sum_{t=1}^{\mathrm{T}} \mathbf{y}_{v}[t]$.
We then aggregate and compute the popularity percentiles for both observed views and network-subtracted views at the level of artist.
The x-axis plots the artists' popularity percentiles without recommendation network, and y-axis plots the percentile changes when turning on the network.
The range of percentiles stays constant between $[0, 100\%]$, reflecting the concept of finite attention --- one video gains popularity at the expense of others.
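The percentile-change computation can be sketched with toy artist-level totals; \texttt{observed} and \texttt{subtracted} below are illustrative numbers, not values from our dataset:

```python
def percentile_ranks(values):
    """Percentile rank (0--100) of each value within the list."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    for pos, i in enumerate(order):
        ranks[i] = 100.0 * pos / (len(values) - 1)
    return ranks

# Illustrative artist-level totals: observed views, and views with the
# estimated network contribution subtracted (hypothetical numbers).
observed = {"A": 1000, "B": 800, "C": 600, "D": 400, "E": 200}
subtracted = {"A": 900, "B": 500, "C": 550, "D": 380, "E": 190}
artists = list(observed)
with_net = dict(zip(artists, percentile_ranks([observed[a] for a in artists])))
without_net = dict(zip(artists, percentile_ranks([subtracted[a] for a in artists])))
# Positive change: the artist gains relative popularity from the network.
change = {a: with_net[a] - without_net[a] for a in artists}
```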
The top outliers identify artists who gain much more popularity than their peers with similar popularity due to the recommendation network; whereas the bottom outliers represent artists who lose popularity.
There are 2,340 artists having target videos in the persistent network.
We observe that 1,378 (58.89\%) artists lose a small amount of popularity (less than 5\%) while 948 (40.51\%) gain.
We notice that there are no bottom outliers.
In contrast, the top outliers show that the network can help some artists massively increase their relative popularity (by as much as 26\%, e.g., J-Kwon (American rapper) in the 4th bin).
We take a closer look at the outliers by scattering them in \cref{fig:model-error}(c).
70 artists gain significant popularity from the recommendation network, implying a better utilization of network effects.
We retrieve the artist genres from the music database MusicBrainz, and we notice two notable groups.
One is the Indie group by matching genre keywords ``indie'', ``alternative'', or ``new wave''.
The top 3 most popular Indie artists are 4 Non Blondes, Hoobastank, and The Police.
The other is the Hip hop group by matching genre keywords ``hip hop'', ``rap'', ``reggae'', or ``r\&b''.
The top 3 most popular Hip hop artists are Mark Ronson, French Montana, and Pharrell Williams.
This finding reveals that the recommender systems can lead users to find niche artists.
\section{Findings on the prediction task}
\label{sec:findings}
\subsection{Linking parameters with item quality}
\label{ssec:quality}
Are items deemed to be of higher quality more likely to be clicked?
Or are users more likely to continue watching the next video from them?
\subsection{Linking parameters with a ``what-if'' situation}
\label{ssec:marketing}
Given a fixed budget, which video should one promote?
One plausible strategy is to promote the videos with high reciprocity, so that attention flowing out can also flow back; this reciprocity can equally be examined from the perspective of users.
\section{Introduction}
\label{sec:intro}
Many online platforms present algorithmic suggestions to help users explore the enormous content space.
The recommender systems, which produce such suggestions, are central to modern online platforms.
They have been employed in many applications, such as finding new friends on Twitter~\cite{su2016effect}, discovering interesting communities on LinkedIn~\cite{sharma2013pairwise}, and recommending similar goods on Amazon~\cite{oestreicher2012recommendation,dhar2014prediction}.
In the domain of multimedia, service providers (e.g., YouTube, Netflix, and Spotify) use recommender systems to suggest related videos or songs~\cite{davidson2010youtube,covington2016deep,gomez2016netflix,zhang2012auralist,celma2008hits}.
Much effort has gone into generating more accurate recommendations, but relatively little has been said about the effects of recommender systems on overall attention, such as their effects on item popularity ranking, the estimated strength of item-to-item links, and global patterns in the attention gained due to being recommended.
This work aims to answer such questions for online videos, using publicly available recommendation networks and attention time series.
We use the term \textit{attention} to refer to a broad range of user activities with respect to an online item, such as clicks, views, likes, comments, shares, or time spent watching.
The term \textit{popularity}, however, is used to denote observed attention statistics that are often used to rank online items against each other.
In this work, our measurement and estimation are carried out on the largest online video platform YouTube (as of 2019), and we specifically quantify popularity using the number of daily views for each video.
The outlined methods may well apply to other deeper forms of user engagement such as watch time.
Due to data availability constraints, the validation in this work is limited to popularity.
\input{images/fig-teaser.tex}
We illustrate the goals of this work through an example.
\cref{fig:teasers}(a) shows the recommendation network for six videos from the artist Adele.
It is a directed network and the directions imply how users can navigate between videos by following the recommendation links.
Some videos are not directly connected but reachable within a few hops.
For example, ``Skyfall'' is not on the recommended list of ``Hello'', but a user can visit ``Skyfall'' from ``Hello'' by first visiting ``Rolling in the deep''.
\cref{fig:teasers}(b) plots the daily view series since the upload of each of the six videos.
When ``Hello'' was released, it broke the YouTube debut records by attracting 28M views in the first 24 hours~\cite{billboard2015adele}.
Simultaneously, we observe a traffic spike in all of her other videos, even in the three videos that ``Hello'' does not directly point to.
This example illustrates that the viewing dynamics of videos connected directly or indirectly through recommendation links may correlate, and it prompts us to investigate the patterns of attention flowing between them.
This work bridges two gaps in the current literature.
The first gap measures and estimates the effects of recommender systems in complex social systems.
The main goals of recommender systems are maximizing the chance that a user clicks on an item in the next step~\cite{davidson2010youtube,covington2016deep,bendersky2014up,yi2014beyond} or in a longer time horizon~\cite{beutel2018latent,chen2019top,ie2019slateq}.
However, recommendation in social systems remains an open problem for two reasons:
(1) a limited conceptual understanding of how finite human attention is allocated over the network of content, in which some items gain popularity at the expense of, or with the assistance of others;
(2) the computational challenge of jointly recommending a large collection of items.
The second gap comes from a lack of fine-grained measurements on the attention captured by items structured as a network.
There are recent measurements on the YouTube recommendation networks~\cite{airoldi2016follow,cheng2008statistics}, but their measurements are not connected to the attention patterns on content.
Similarly, measurement studies on YouTube attention~\cite{zhou2010impact} quantify the overall volume of views directed from recommended links.
However, no measurement that accounts for both the network structure and the attention flow is available for online videos.
This paper tackles three research questions:
\begin{enumerate}[label=\textbf{RQ\arabic*:}]
\item How can we measure the video recommendation network from publicly available information?
\item What are the characteristics of the video recommendation network?
\item Can we estimate the attention flow in the video recommendation network?
\end{enumerate}
We address the first question by curating a new YouTube dataset consisting of a large set of \textsc{Vevo}\xspace artists.
This is the first dataset that records both the temporal network snapshots of a recommender system, and the attention dynamics for items in it.
Our observation window lasts 9 weeks.
We present two means to construct the non-personalized recommendation network, and we discuss the relation between them in detail (\cref{sec:data}).
Addressing the second question, we conceptualize the global structure of the network as a bow-tie~\cite{broder2000graph} and we find that the largest strongly connected component accounts for $23.11\%$ of the videos while occupying $82.6\%$ of the attention.
Surprisingly, videos with high indegree are mostly songs with sustained interest, rather than the latest releases with high view counts.
We further find that the network structure is temporally consistent at the macroscopic level; however, there is significant link turnover at the microscopic level.
For example, $50\%$ of the videos with an indegree of 100 on a particular day will gain or lose at least 10 links on the next day, and 25\% links appear only once during our 9-week observation window (\cref{sec:measures}).
Answering the third question, we build a model which employs both the temporal and network features to predict video popularity, and we estimate the amount of views flowing over each link.
Our networked model consistently outperforms the autoregressive and neural network baseline methods.
For an average video in our dataset, we estimate that $31.4\%$ of its views are contributed by the recommendation network.
We also find the evidence of YouTube recommender system boosting the popularity of some niche artists (\cref{sec:models}).
The new methods and observations in this work can be used by content owners, hosting sites, and online users alike.
For content owners, the understanding of how much traffic is driven among their own content or from/to other content can lead to better production and promotion strategies.
For hosting sites, such understanding can help avoid social optimization, and shed light on building fair and transparent content recommender systems.
For online users, understanding how human attention is shaped by algorithmic recommendation can help them be conscious of the relevance, novelty, and diversity trade-offs in the content recommended to them.
The main contributions of this work include:
\begin{itemize}[leftmargin=*]
\item We curate a new YouTube dataset, called \textsc{Vevo Music Graph}\xspace dataset\footnote{The code and datasets are publicly available at \url{https://github.com/avalanchesiqi/networked-popularity}}, which contains the daily snapshots of the video recommendation network over a span of 9 weeks, and the associated daily view series for each video since upload.
\item We perform, to our knowledge, the first large-scale measurement study that connects the structure of the recommendation network with video attention dynamics.
\item We propose an effective model that accounts for the network structure to predict video popularity and to estimate the attention flow over each recommendation link.
\end{itemize}
\section{Related work}
\label{sec:related}
In this section, we discuss three lines of research: design of (video) recommender systems, measurements on recommender systems, and studies on user attention towards online items.
\subsection{Recommender systems and video recommendation}
The goals of recommender systems can be summarized as two related yet distinct tasks.
The first task is user-centric, i.e., given users' profiles and past activities, finding a collection of items that might interest them~\cite{konstan2012recommender,covington2016deep}.
The resulting recommendations, often shown in user homepage feed, can be regarded as the entry point for the user action sequence.
The second task is item-centric, i.e., given the currently visited item, finding a ranked list of relevant items~\cite{davidson2010youtube,zhang2012auralist,gomez2016netflix}.
This can be regarded as recommending the next item in a sequence of actions.
In the same vein, we conceptualize and explain the behaviors on YouTube --- users start the action sequences by latent interests, and their subsequent actions are driven by network effects (see \cref{ssec:setting}).
\header{Recommender systems on YouTube.}
Recommender systems, along with YouTube search, have been shown as the two dominant factors driving user attention on YouTube~\cite{zhou2010impact}.
In 2010, \citet{davidson2010youtube} reported the usage of a collaborative filtering method in the YouTube recommender systems, i.e., videos are recommended by counting the number of co-watches.
This approach works well for videos with many views, however, it is less applicable for newly uploaded videos or least watched videos.
\citet{bendersky2014up} proposed two methods to enhance the collaborative filtering approach by embedding the video topic representation into the recommender.
\citet{covington2016deep} applied deep neural networks and indicated that the final recommendation is a top-K sample from a large candidate set generated by taking into account content relevance, past watch and search activities, etc.
Other enhancements include incorporating contextual data \cite{beutel2018latent}.
Most recently, \citet{chen2019top} and \citet{ie2019slateq} showed success in applying reinforcement learning techniques in YouTube recommender systems.
Our work does not deal with designing a recommender system, nor does it attempt to reverse engineer the YouTube recommender.
Instead, we concentrate our analysis on the impacts of the recommender systems by presenting large-scale measurements.
\subsection{Measuring the effects of recommender systems}
Contrasting the extensive literature on evaluating the accuracy of recommendation~\cite{zhang2012auralist,beutel2018latent,chen2019top,li2018offline}, we focus on prior work that connects network structure with content consumption.
\citet{carmi2017oprah} reported how the book sales on Amazon react to exogenous demand shocks --- not only did the sales increase for the featured item, but the increase also
propagated a few hops away by following the links created by the recommender systems.
This is akin to our observation in \cref{fig:teasers} that attention ripples happen for videos too.
\citet{dhar2014prediction} further showed the effectiveness of using the recommendation network in predicting item demands.
\citet{su2016effect} linked the aggregate effects of recommendations and network structure, and found that popular items profit substantially more than the average ones.
However, \citet{sharma2015estimating} stressed the difficulty of inferring causal relations based on observational data in recommender systems.
\citet{cheng2008statistics} are among the first to study the statistics of YouTube recommender systems.
They scraped video webpages to construct the video network at a weekly interval.
\citet{airoldi2016follow} followed the video suggestions on YouTube to construct one static network snapshot for a random collection of music videos.
Note that both studies adopt a snowball sampling technique to construct the network, whereas in our work, we have the complete trace of an easily identifiable group of \textsc{Vevo}\xspace artists, and we capture the dynamics of network snapshots at a much finer daily granularity (see \cref{ssec:crawling}).
Most importantly, our work links the network with the item attention dynamics.
\subsection{Measuring and predicting online attention}
Attention is a scarce resource in online platforms.
While users have an unprecedented volume of information to choose from, online content competes for our limited attention~\cite{weng2012competition,zarezade2017correlated}.
\citet{salganik2006experimental} designed the ``MusicLab'' experiment, in which they explored how social influence and inherent quality affect a product's market share.
In a follow-up study, \citet{krumme2012quantifying} conceptualized user behaviors as two steps for characterizing how users consume digital items.
The first step is based on the appeal of the product, measured by the number of clicks; the second step is based on the quality of the product, measured by post-clicking metrics, e.g., dwell time, comments or shares.
A similar two-step process is employed in the web search community to differentiate between page views and dwell time on webpages~\cite{yue2010beyond,yi2014beyond}.
Following a similar idea, we categorize online attention into \textit{popularity} and \textit{engagement}.
On YouTube, popularity refers to the number of views that a video receives and engagement refers to the time spent on watching the video.
Predicting content popularity is an active field.
For online videos, future popularity has been shown to correlate with past popularity~\cite{pinto2013using,szabo2010predicting}, rendering the autoregressive method a strong baseline.
Other works integrate additional information.
External sharing on social media has been linked to the popularity of online videos~\cite{li2013popularity,abisheva2014watches}, which is later developed by \citet{rizoiu2017expecting} to model popularity as an interplay of exogenous stimuli and endogenous responses.
Another line of work measures the temporal characteristics of content popularity.
\citet{yu2015lifecyle} revealed that the lifecycles of online videos exhibit a multi-phase pattern.
\citet{figueiredo2016trendlearner} stressed the necessity of predicting content popularity before the user interests exhaust.
For engagement studies on YouTube, we refer to our previous paper~\cite{wu2018beyond}, in which we found that the most engaging videos do not always have the highest view counts.
In contrast to the general understanding that content popularity is unpredictable in social systems~\cite{martin2016exploring,cheng2014can,rizoiu2018sir}, we also found that engagement metrics appear to be much more predictable.
This work focuses on the popularity measures.
To our knowledge, no prior work has attempted to predict video popularity with fine-grained recommendation network information, due to the difficulty of constructing such a network.
This is the first study that shows how to construct a persistent network by following the recommended links, and how to employ network features to improve the popularity prediction task (see \cref{sec:models}).
It is worth differentiating our work from the studies that collect individual user data from customized browser plugins.
Instead of measuring proactive user behaviors, we are interested in understanding how the platform-generated recommendation network guides the aggregate user attention.
\section{Constructing YouTube video network}
\label{sec:data}
In this section, we first introduce our newly curated \textsc{Vevo Music Graph}\xspace dataset (\cref{ssec:vmg}).
Next, we detail the data collection strategy (\cref{ssec:crawling}) and analyze the relation between two types of non-personalized video recommendation lists (\cref{ssec:nonpersonal}).
\subsection{\textsc{Vevo Music Graph}\xspace dataset}
\label{ssec:vmg}
The \textsc{Vevo Music Graph}\xspace dataset consists of the verified \textsc{Vevo}\xspace artists who are active in six English-speaking countries (United States, United Kingdom, Canada, Australia, New Zealand, and Ireland), together with their complete record of videos uploaded on YouTube from the launch of \textsc{Vevo}\xspace (Dec 8, 2009) until Aug 31, 2018.
Our dataset contains 4,435 \textsc{Vevo}\xspace artists and 60,740 music videos.
For each video, we collect its metadata (e.g., title, description, uploader), its view count time series, and its recommendation relations with other videos.
The videos and their recommendation relations form a dynamic directed network, which we capture daily between Sep 1, 2018 and Nov 2, 2018 (63 days, 9 weeks).
\header{Why \textsc{Vevo}\xspace?}
\textsc{Vevo}\xspace\footnote{The \textsc{Vevo}\xspace website was shut down on May 24, 2018; however, videos syndicated on YouTube before then are still embedded with a ``VEVO'' watermark on their thumbnails. See the screenshot in \cref{fig:data-layout} for illustration.} is the largest syndication hub that provides licensed music videos from major record companies to YouTube \cite{wikipediavevo}.
We choose to study the networked attention flow on \textsc{Vevo}\xspace for several reasons.
First, \textsc{Vevo}\xspace is an ecosystem of its own that attracts tremendous attention --- 94 of the all-time top 100 most viewed videos on YouTube are music videos, 64 of which are distributed via \textsc{Vevo}\xspace \cite{wikipediatop}.
On average, our dataset accounts for 310 million views and 9.1 million watch hours every day.
Second, many users utilize YouTube as their music streaming player, listening to non-stop playlists generated by the recommender systems.
After the completion of the current video, YouTube automatically plays the ``Up next'' video --- the video in the first position of the recommended list, as illustrated in \cref{fig:data-layout}.
This usage pattern for music videos makes the network effects of YouTube recommender systems more significant for directing user attention from one video to another.
Third, \textsc{Vevo}\xspace artists and their videos form a tightly connected network.
The average degree in the \textsc{Vevo}\xspace video network is 10, compared to 3.2 in the YouTube video network collected by \citet{airoldi2016follow} via snowball sampling (see \cref{ssec:nonpersonal}).
The nodes are homogeneous in terms of content --- they are all music videos from artists based in English-speaking countries.
Lastly, the \textsc{Vevo}\xspace artists are easily identifiable --- they include the keyword ``VEVO'' in the channel title, they possess a verification badge on the channel page, and they publish licensed videos with a ``VEVO'' watermark.
\subsection{Data collection strategy}
\label{ssec:crawling}
We identify \textsc{Vevo}\xspace artists starting from Twitter.
We capture every tweet that mentions YouTube videos by feeding the rule \texttt{"youtube" OR ("youtu" AND "be")} into the Twitter Streaming API\footnote{\url{https://developer.twitter.com/en/docs/tweets/filter-realtime/api-reference/post-statuses-filter.html}}.
Our Twitter crawler has been running continuously since Jun 2014.
From the ``\texttt{extended\_urls}'' field of each tweet, we extract the associated YouTube video ID, and we use our open-source tool \textsc{youtube-insight}~\cite{wu2018beyond} to retrieve the video's metadata, daily view count series and the ranked list of relevant videos.
Next, we select the \textsc{Vevo}\xspace artists by keeping only the channels that have the keyword ``VEVO'' in the channel title and a ``verified'' status badge on the channel homepage.
Note that a channel refers to a user who uploads videos on YouTube.
We query an open music database MusicBrainz\footnote{\url{https://musicbrainz.org}} to retrieve more features about each artist, such as the music genres and the geographical area of activities.
We retain the artists who are active in the six aforementioned English-speaking countries, and the videos that are classified into the ``Music'' category.
For completeness, we also implement a snowball-like procedure to retrieve further artists and their videos by following the recommendation relations from the tweeted videos.
However, this procedure only adds 2 more artists (out of the 4,435 \textsc{Vevo}\xspace artists in our dataset) and 5 more videos (out of the 60,740 music videos).
This is not surprising, considering most artists would promote their works on social media platforms.
One data limitation is that artists who are not affiliated with \textsc{Vevo}\xspace will not appear in our collection, such as Ed Sheeran and Christina Perry.
\subsection{The network of YouTube videos}
\label{ssec:nonpersonal}
For any YouTube video, there are two publicly accessible sources of recommendation relations.
The first is the right-hand panel of videos that YouTube displays on its interface.
We denote this as the \textit{recommended list} (visualized in \cref{fig:data-layout}).
The second is from the YouTube Data API\footnote{\url{https://developers.google.com/youtube/v3/docs/search/list}}, which retrieves a list of videos that are relevant to the query video, ranked by the relevance.
We denote this as the \textit{relevant list}.
We retrieve both the recommended and the relevant lists for every video in our dataset.
We construct the recommended list by simulating a browser to access the video webpage and scraping the list on the right-hand panel.
We retrieve the first 20 videos from the panel, which is the default number of videos shown to viewers on YouTube.
Note that typically, YouTube customizes the viewers' recommendation panel based on their personal interests and prior interaction history.
Here, we retrieve the non-personalized recommended list by sending all requests from a non-logged in client and by clearing the cookies before each request.
We denote the networks of videos constructed using the recommended and the relevant lists as the \textit{recommended network} and the \textit{relevant network}, respectively.
From Sep 1, 2018 to Nov 2, 2018, we crawled both the recommended and the relevant lists for each of the 60,740 \textsc{Vevo}\xspace videos on a daily basis.
The crawling jobs were distributed across 20 virtual machines, and took about 2 hours to finish.
In this way, we obtain successive snapshots for both the recommended and the relevant networks over 9 weeks.
\input{images/fig-layout.tex}
\header{An illustrative example.}
\cref{fig:data-layout} illustrates the YouTube webpage layout for the video ``Hello'' by Adele, together with its recommended and relevant lists.
Videos belonging to the \textsc{Vevo}\xspace artists are colored in blue (e.g., Adele and The Cranberries), while others are colored in grey (e.g., Ed Sheeran and Christina Perry).
Visibly, not all videos on the recommended and relevant lists belong to the \textsc{Vevo}\xspace artists (e.g., ``Ed Sheeran - Perfect'').
Notice that for Music videos, a platform-generated playlist is always shown at the second position of the recommended list (here, ``Mix - Adele - Hello''), effectively capping the size of this list at 19.
The length of the relevant list often exceeds 100.
We observe that not all relevant videos appear in the recommended list (e.g., ``The Cranberries - Zombie''), nor all recommended videos originate from the relevant list (e.g., ``Adele - Skyfall'').
Also, the relative positions of two videos can appear flipped between the two lists (e.g., ``Ed Sheeran - Perfect'' and ``Christina Perry - A Thousand Years'').
\header{Display probabilities from the relevant to the recommended list.}
We study the relation between the positions of videos on the relevant and on the recommended lists.
We construct four bins based on the video position on the recommended list (position 1, position 2-5, position 6-10, and position 11-15).
\cref{fig:data-rel2rec}(a) shows as stacked bars the probability that a video ends up in each of the bins, as a function of its position on the relevant list.
The total height of the stacked bars gives the overall probability that a video originating from the relevant list appears at the top 15 positions on the recommended list.
We observe that videos appearing at a higher position on the relevant list are more likely to appear on the recommended list, and at a higher position.
For example, the video at position 1 on the relevant list has 0.34 probability to appear at the first position and 0.84 probability to appear at the top 15 positions on the recommended list.
The probability decays for videos that appear at lower positions.
A relevant video appearing in position 41 to 50 has less than 0.05 probability to appear on the recommended list.
We compute the probabilities of appearance between each pair of positions in the relevant and the recommended lists --- denoted as \textit{display probabilities} --- using the 9-week dynamic network snapshots.
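The display probabilities can be estimated by pooling position co-occurrences over the daily snapshots. The sketch below is illustrative only: it assumes a hypothetical input format of (relevant list, recommended list) pairs of ranked video IDs, not the paper's actual data structures.

```python
from collections import defaultdict

def display_probabilities(snapshots, max_rel=50, max_rec=15):
    """Estimate P(a video at relevant-list position i appears at
    recommended-list position j), pooled over all snapshots.

    `snapshots` is an iterable of (relevant_list, recommended_list)
    pairs, each a list of video IDs in ranked order (hypothetical
    format for illustration)."""
    seen = defaultdict(int)   # times each relevant position was observed
    hits = defaultdict(int)   # (rel_pos, rec_pos) co-occurrences
    for rel, rec in snapshots:
        rec_pos = {vid: j + 1 for j, vid in enumerate(rec[:max_rec])}
        for i, vid in enumerate(rel[:max_rel], start=1):
            seen[i] += 1
            if vid in rec_pos:
                hits[(i, rec_pos[vid])] += 1
    return {(i, j): hits[(i, j)] / seen[i] for (i, j) in hits}
```

Summing the returned probabilities over $j$ for a fixed $i$ gives the overall probability that a video at relevant position $i$ appears anywhere in the top positions of the recommended list, as plotted in the stacked bars.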
\input{images/fig-rel2rec.tex}
In \cref{fig:data-rel2rec}(b), we plot the probability that a video at a given position on the recommended list originates from the relevant list.
We observe that videos appearing at a higher position on the recommended list are more likely to originate from a higher position on the relevant list.
Another notable observation is that the overall recall for recommended videos is high, at over 0.8, meaning that for any video on the recommended list, we are likely to also find it on the relevant list.
\header{YouTube video network density.}
\citet{airoldi2016follow} used the first 25 videos on the relevant list to construct the relevant network, which had an average degree of $3.2$.
By comparison, our \textsc{Vevo}\xspace video network is much denser at the same cutoff, with an average degree of 10.
One could expect the relevant network to become even denser when videos at lower positions are included; however, the display probabilities also need to be considered.
In this paper and unless otherwise specified, we use the first 15 positions on the relevant list ($0.35 \leq P_{\text{display}} \leq 0.84$) to construct the relevant network.
We denote this threshold as the \textit{cutoff} and we study the impact of different cutoff values on the network structure in \cref{sssec:bowtie}.
Measurements with other cutoff values yield similar results and thus are omitted.
\header{Discussion on the recommended and relevant lists.}
The notions of recommended and relevant lists have been previously adopted in the field
of recommender systems \cite{herlocker2004evaluating}.
The relevant list is usually hidden from the user interface, and it is ranked according to the semantic relevance between the query and the items.
In contrast, the recommended list reflects the final recommendations in the user interface, i.e., displaying on the right-hand panel of the video webpage.
On YouTube, the recommended list is a top-K sample from the concatenation of the relevant list, user demographics, watch history, search history, and spatial-temporal information \cite{covington2016deep}.
All features, apart from the relevant list, are user-, time- and location-dependent.
Hence, the displayed recommended list of the same video can be very different for two viewers, regardless of their logged-in state, location or viewing time.
On the other hand, the relevant list is consistent for all requests, from any client during any period of time.
We also observe that the relevant list changes less frequently than the recommended list, which suggests it is more robust to updates of the YouTube recommender systems.
For these reasons, we use the relevant list to construct and measure the YouTube video network in \cref{sec:measures}.
\section{Measuring YouTube video network}
\label{sec:measures}
In this section, we present the macroscopic (\cref{ssec:macroscopic}), microscopic (\cref{ssec:microscopic}), and temporal (\cref{ssec:temporal}) profiling of the \textsc{Vevo}\xspace network.
\subsection{Macroscopic profiling of the \textsc{Vevo}\xspace network}
\label{ssec:macroscopic}
We first compute several basic statistics such as indegree distribution, view count distribution, and \textsc{Vevo}\xspace videos uploading trend (\cref{sssec:basic-statistics}).
Next, we study the connection between the network structure and video popularity (\cref{sssec:structure-viewcounts}).
Lastly, we use the bow-tie structure to characterize the \textsc{Vevo}\xspace network and we discuss the impact of different cutoff values (\cref{sssec:bowtie}).
\input{images/fig-statistics.tex}
\subsubsection{Basic statistics}
\label{sssec:basic-statistics}
\header{Over-represented medium-size indegree videos.}
Here we study the indegree distribution of the \textsc{Vevo}\xspace network.
Note that the outdegree of all nodes is bounded by the cutoff value on the relevant list and therefore not presented.
We remove all links pointing to non-\textsc{Vevo}\xspace videos, resulting in an average of 363,965 edges each day, and an average degree of 6.
Note that the average degree of 10 mentioned in \cref{sec:data} is obtained with a cutoff of 25, whereas here we study the relevant network constructed with a cutoff of 15, since the display probability of videos below position 15 appearing on the recommended list is less than 0.32.
\cref{fig:measure-statistics}(a) shows the complementary cumulative density function (CCDF) of the indegree distribution for four different snapshots of the network, taken 15 days apart.
We notice that the indegree distribution does not resemble a straight line in the log-log plot, meaning it is not power-law, unlike other online networks, e.g., the World Wide Web \cite{broder2000graph,meusel2014graph}, the network of interaction in online communities \cite{zhang2007expertise}, and the follower/following network on social media \cite{kwak2010twitter}.
Medium-indegree videos are over-represented compared to the best-fitted power-law model ($\alpha=2.02$, fitted with the \textsc{powerlaw} package \cite{alstott2014powerlaw}, yielding $x^{-1.02}$ in the CCDF~\cite{clauset2009power}).
This result holds for all four snapshots.
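The empirical CCDF underlying this plot, $P(X \geq x)$ evaluated at each distinct indegree, can be computed directly from the data. Below is a minimal pure-Python sketch; the power-law fit itself relies on the \textsc{powerlaw} package cited above.

```python
def ccdf(values):
    """Empirical complementary CDF P(X >= x) at each distinct value.

    Returns (value, probability) pairs sorted by value; plotting
    them on log-log axes reproduces the style of the figure."""
    xs = sorted(values)
    n = len(xs)
    out = []
    i = 0
    while i < n:
        x = xs[i]
        out.append((x, (n - i) / n))  # fraction of samples >= x
        while i < n and xs[i] == x:   # skip duplicates of x
            i += 1
    return out
```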
\header{Attention is unequally allocated.}
\cref{fig:measure-statistics}(b) plots the average daily views against the view count percentile.
The daily view count at median is 81, but it is 4,575 at the 90th percentile.
These observations, together with a Gini coefficient of $0.946$, indicate that the attention allocation in the \textsc{Vevo}\xspace network is highly unequal --- the top 10\% most viewed videos attract 93.1\% of all views.
We also find a moderate correlation between view count and indegree value (details in \cref{ssec:microscopic}).
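The Gini coefficient quoted above can be computed from the daily view counts with the standard rank-weighted formula; a minimal pure-Python sketch:

```python
def gini(values):
    """Gini coefficient of a non-negative sample: 0 means perfect
    equality, values approaching 1 mean all mass on one item.
    Uses the rank-weighted rearrangement of the mean-absolute-
    difference formula."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # sum of rank_i * x_i with 1-based ranks over the sorted sample
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * cum) / (n * total) - (n + 1.0) / n
```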
\header{Uploading trend by music genre.}
To date, our dataset is the largest digital trace of \textsc{Vevo}\xspace artists on YouTube, allowing us to study the production dynamics of the \textsc{Vevo}\xspace platform.
\cref{fig:measure-statistics}(c) shows the number of \textsc{Vevo}\xspace videos that are uploaded each year from 2009 to 2017, broken down by their genres.
We omit year 2018 as we only observed 8 months for it (until August).
There is a significantly higher number of uploads (9,277) in 2009 as it is the year when \textsc{Vevo}\xspace was launched, and when many all-time favorite songs were syndicated to the YouTube platform.
Pop, Rock, and Hip hop music are the top 3 genres, accounting for 62.85\% of all uploads.
The upload rate of \textsc{Vevo}\xspace videos has remained roughly constant at around 7,000 per year since 2013.
The flattening production dynamics is somewhat surprising given the overall growth of YouTube~\cite{youtube2017billion}.
\subsubsection{Linking network structure and popularity}
\label{sssec:structure-viewcounts}
\input{images/fig-videos-connect.tex}
Here, we investigate the connection between the relevant network structure and video view counts.
Specifically, we divide the videos in the \textsc{Vevo Music Graph}\xspace dataset into four equal groups by computing the view count quartiles.
Each group contains 15,185 videos.
Next, we count the number of edges that originate and end in each pair of groups.
\cref{fig:measure-connect} represents the four groups together with the number of links between them.
The ``top $25\%$'' group contains the top $25\%$ most viewed videos, while the ``bottom $25\%$'' contains the $25\%$ least viewed videos.
The width of the arrows is scaled by the number of the edges between the videos placed in the two groups.
One can conceptualize the edges as conduits for attention to flow between the groups, with their thickness indicating the probability that a random user jumps from one group to the other.
We observe that all four groups have the most links pointing to the ``top $25\%$'' group.
In fact, every group disproportionately points towards more popular groups than towards the less popular ones.
This means the recommendation network built by the platform is likely to take a random viewer towards more popular videos and keep them there, therefore reinforcing the ``rich get richer'' phenomenon.
\subsubsection{The bow-tie structure of video networks}
\label{sssec:bowtie}
The bow-tie structure was first proposed by~\citet{broder2000graph} to visualize the structure of the whole web.
It classifies the complex web graph into five components:
(a) the largest strongly connected component (LSCC) as the core;
(b) the IN component which can reach the LSCC, but not the other way around;
(c) the OUT component which can be reached from the LSCC, but not the other way around;
(d) the Tendrils component which connects to either the IN or the OUT component, bypassing the LSCC;
(e) the Disconnected components which are disconnected from the rest of the components.
The strongly connected component (SCC) can be easily computed in linear time by using Tarjan's algorithm \cite{tarjan1972depth}.
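Given the LSCC, the remaining bow-tie components follow from forward and backward reachability. The sketch below is a small, self-contained illustration; for clarity it finds the LSCC by brute-force pairwise reachability ($O(nm)$), whereas production code would use Tarjan's linear-time algorithm mentioned above.

```python
def bowtie(nodes, edges):
    """Classify nodes of a directed graph into the five bow-tie
    components (LSCC, IN, OUT, Tendrils, Disconnected)."""
    fwd = {n: set() for n in nodes}
    bwd = {n: set() for n in nodes}
    for u, v in edges:
        fwd[u].add(v)
        bwd[v].add(u)

    def reach(starts, adj):
        seen, stack = set(starts), list(starts)
        while stack:
            for m in adj[stack.pop()]:
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return seen

    # Largest strongly connected component: mutually reachable nodes.
    lscc = set()
    for n in nodes:
        scc = reach({n}, fwd) & reach({n}, bwd)
        if len(scc) > len(lscc):
            lscc = scc

    out_side = reach(lscc, fwd)                    # LSCC + OUT
    in_side = reach(lscc, bwd)                     # LSCC + IN
    undirected = {n: fwd[n] | bwd[n] for n in nodes}
    weak = reach(lscc, undirected)                 # weakly connected part
    comp = {}
    for n in nodes:
        if n in lscc:
            comp[n] = 'LSCC'
        elif n in in_side:
            comp[n] = 'IN'
        elif n in out_side:
            comp[n] = 'OUT'
        elif n in weak:
            comp[n] = 'Tendrils'
        else:
            comp[n] = 'Disconnected'
    return comp
```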
For the \textsc{Vevo}\xspace network, we quantify the sizes of different components in the bow-tie structure using both the number of nodes (videos) and the amount of attention (views).
Unlike in the Web graph~\cite{broder2000graph}, we know the amount of views garnered by each video, which allows us to comparatively analyze the total attention attracted by each component.
The bow-tie structure is a good conceptual description because directed edges run only from IN to LSCC, from LSCC to OUT, and from IN to OUT, but not in the reverse directions, indicating that attention in the network can only flow one way along these paths.
\input{tables/tab-bowtie.tex}
\input{images/fig-bowtie.tex}
\cref{tab:compare-bowtie} compares the relative sizes of each component in prior literature and in our \textsc{Vevo}\xspace network.
The \textsc{Vevo}\xspace network is quite different from other previously studied online networks, e.g., the Web graph \cite{broder2000graph,meusel2014graph} and user activity networks in online communities~\cite{zhang2007expertise,kim2012event}.
It has a much larger IN component, encompassing 68.54\% of all the videos.
The OUT, Tendrils, and Disconnected components are all very small, accounting for a total of 8.35\% videos.
\cref{fig:measure-bowtie}(a) visualizes the bow-tie structure of the \textsc{Vevo}\xspace network.
Unlike other graphs, our \textsc{Vevo}\xspace graph is a by-product of the recommender systems, and is thus subject to the proprietary algorithm and its update cycle.
This suggests there may exist considerable temporal variation in the composition of the bow-tie components, see \cref{ssec:temporal} for observations over time.
\input{images/fig-bowtie-cutoff.tex}
\cref{fig:measure-bowtie}(b) resizes each component of the \textsc{Vevo}\xspace bow-tie by the total view counts in it.
Visibly, the roles of the LSCC and IN are reversed: the LSCC now occupies $82.6\%$ of the attention (while accounting for only $23.11\%$ of the videos), while the big IN component ($68.54\%$ of the videos) attracts only $12.74\%$ of the attention.
This is consistent with the observation in \cref{sssec:structure-viewcounts} that the attention is unequally allocated in the \textsc{Vevo}\xspace network.
Given the definition of the IN component, its $68.54\%$ of videos contribute attention towards the LSCC, but not the other way around (there is no link from LSCC towards IN).
As a result, the LSCC accumulates a large proportion of all attention.
The OUT, Tendrils, and Disconnected components account for almost negligible attention ($4.67\%$ of the views altogether).
\header{Impact of different cutoff values on the bow-tie structure.}
The \textsc{Vevo}\xspace network changes as we change the cutoff on the relevant list, as taking more edges into account densifies the network.
\cref{fig:measure-bowtie-cutoff} shows how the relative size of the bow-tie component changes with varying cutoff values.
As the cutoff increases, more edges are added to the network, especially for the videos in the Disconnected component.
Backward links are formed between videos in the LSCC and IN, and as a result, the LSCC absorbs parts of the IN component.
Therefore, the LSCC increases, the IN decreases, while the other three components (OUT, Tendrils, and Disconnected) become negligible.
At a cutoff of 50, the \textsc{Vevo}\xspace network structures into 2 distinct components: an LSCC consisting of $77\%$ of the videos and $99\%$ of the attention, and an IN component consisting of the remaining $23\%$ of the videos and accounting for only $1\%$ of the attention.
\subsection{Microscopic profiling of the \textsc{Vevo}\xspace network}
\label{ssec:microscopic}
In this section, we jointly analyze the relation between video age, indegree, and popularity by examining overall correlation, as well as among top-ranked videos.
\input{images/fig-spearmanr.tex}
\header{The disconnect between network indegree and video view count.}
We measure the correlation between video indegree and view count using Spearman's $\rho$ --- a measure of the strength of the association between two ranked variables, and which takes values between -1 and +1.
A positive $\rho$ implies the ranks of the two variables move together in the same direction.
At the level of the entire dataset, we detect a moderate correlation between video indegree and view count (Spearman's $\rho = 0.421^{***}$, $p < 0.001$).
\cref{fig:measure-spearmanr} shows the Spearman's $\rho$ when we further break down the videos in the \textsc{Vevo Music Graph}\xspace based on their uploaded year.
We observe that the strength of the correlation decreases for fresher videos.
Videos uploaded in 2009 have a much stronger correlation ($\rho = 0.638^{***}$) than videos uploaded in 2018 ($\rho = 0.265^{***}$).
This suggests that video age is an important confounding factor when one tries to estimate the effects of the recommendation network.
Empirically, this may indicate the shift in what drives attention towards video consumption.
\citet{zhou2010impact} have measured that the two main drivers of video views are the YouTube search and recommender systems.
One explanation of our observation above is that as videos get older, the effects of recommendation become more pronounced.
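Spearman's $\rho$ used above is simply Pearson's correlation computed on the ranks of the two variables, with average ranks assigned to ties. In practice one would call \texttt{scipy.stats.spearmanr}; a self-contained pure-Python sketch:

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation: Pearson's r on the rank-
    transformed samples, using average (fractional) ranks for ties."""
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        i = 0
        while i < len(order):
            j = i
            # group tied values and assign them their average rank
            while j + 1 < len(order) and vs[order[j + 1]] == vs[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)
```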
\header{A closer look at the top videos.}
\cref{tab:top20} presents the top 20 videos with highest average daily indegree (top panel) and top 20 videos with highest average daily views (bottom panel).
We observe a modest amount of discrepancy between these two dimensions, with only 5 videos being on both lists (shown in bold font).
Most of the top-viewed videos are relatively new to the platform --- 10 out of 20 were published within the last year, and the top 5 all within the past 7 months (relative to November 2018).
In contrast, the videos with high indegree are mostly songs with sustained interests, some dating back to 10 years ago, such as ``The Cranberries - Zombie'' and ``Bon Jovi - It's My Life''.
These two songs were respectively released in 1993 and 2000, having existed for a long time before being uploaded to YouTube.
Currently, they still attract half a million views everyday after nearly 20 years, ranking 3rd and 17th on the most-linked video list, respectively.
This may shed light onto why video popularity lifecycle exhibits a multi-phase pattern~\cite{yu2015lifecyle}.
Our observations do not conflict with the design of YouTube recommender systems, which promote ``reasonably recent and fresh'' content \cite{davidson2010youtube,covington2016deep,beutel2018latent}.
Fresh videos can be recommended due to the relevance, novelty and diversity trade-offs~\cite{konstan2012recommender,ziegler2005improving}.
Instead, our observed video relations are based on the content recommendation network~\cite{carmi2017oprah,dhar2014prediction}.
\input{tables/tab-indegree-views.tex}
Another group of interest is the videos that are highly viewed yet with low indegree.
We find this pattern appears at the level of the artist.
For instance, ``Becky G'' has 3 videos on the top 20 most-viewed list, ranking 2, 4, and 14.
However, the indegrees of her videos are extremely low (ranked 2,411, 40,040, and 958, respectively).
Particularly, the video ``Cuando Te Bese'' attracts an average of 2.4M views every day for 9 consecutive weeks.
However, it has only one video pointing to it from the rest of the 60,739 \textsc{Vevo}\xspace videos.
A closer look reveals that ``Becky G'' is an American singer who often releases Spanish songs.
The above observation shows that her videos are either recommended from non-English and/or non-\textsc{Vevo}\xspace videos, e.g., the Spanish songs community, or that the recommendation network is not the main traffic driver for her videos.
\subsection{Temporal evolution of \textsc{Vevo}\xspace network}
\label{ssec:temporal}
Here, we study the dynamics of the \textsc{Vevo}\xspace network over 9 weeks, namely the appearance and disappearance of recommendation links between videos.
We show that pairs of videos can have either ephemeral or frequent links between them.
\input{images/fig-temporal-macro.tex}
\header{Macroscopic dynamics.}
\cref{fig:measure-statistics}(a) and (b) show that both the indegree distribution and the view count distribution are temporally consistent.
However, when we plot the size variation of the different components in the bow-tie structure, we obtain a more nuanced story.
\cref{fig:measure-temporal-macro} shows that the size of the LSCC ranges from 11.49\% to 30.13\% over the 9 weeks, while that of the IN component ranges from 60.37\% to 77.9\%.
Similarly, the percentage of total views in the LSCC ranges from 80.46\% to 90.36\%, while that in the IN component ranges from 9.11\% to 18.07\%.
Given that the same set of videos is tracked throughout the observation period and no new video is added, the above observations imply a significant turnover in the recommendation links between videos.
For example, the appearance of a link will allow a node to transition from the IN to the LSCC component; the disappearance of the same link would make it drop back into the IN component.
\header{Incoming ego-network dynamics.}
We study the link turnover using the incoming ego-network for each video.
An ego network consists of an individual focal node and the edges pointing towards it.
We only consider incoming edges, as the number of outgoing edges is capped by the relevant list cutoff (here the cutoff is 15).
For each video, we first extract the days with at least 20 incoming links.
Then for each day $t$, we compute the indegree change ratio between day $t$ and day $t+1$ by dividing the indegree delta (positive or negative) by the value in day $t$.
We obtain a number between -1 and 1, where -1 means that the video loses all of its incoming edges, and a value of 1 signifies that the video doubles the number of incoming edges.
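The change-ratio computation can be sketched as follows; the list-of-daily-indegrees input for a single video is a hypothetical format, not the paper's actual data structure.

```python
def indegree_change_ratios(indegree_series, min_links=20):
    """Day-over-day indegree change ratios for one video, computed
    only for days with at least `min_links` incoming links (as in
    the text). Returns (indegree_t, ratio) pairs, where
    ratio = (deg_{t+1} - deg_t) / deg_t lies in [-1, +inf):
    -1 means all in-links are lost, +1 means the count doubles."""
    out = []
    for t in range(len(indegree_series) - 1):
        d = indegree_series[t]
        if d >= min_links:
            out.append((d, (indegree_series[t + 1] - d) / d))
    return out
```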
\cref{fig:measure-temporal-micro}(a) shows the indegree change ratio summarized as quantiles, broken down by the value of indegree.
We highlight the 10th, 25th, median, 75th, and 90th percentile for the videos with an indegree of $100$.
$25\%$ of videos with an indegree of $100$ gain at least 8 in-links on the next day, while another $25\%$ lose at least 11 in-links.
The median is around zero, meaning that there are as many videos that gain links as those that lose links.
Overall, this suggests that videos have very dynamic incoming ego-networks, with a non-trivial number of edges prone to appear and disappear from one day to another.
\input{images/fig-temporal-micro.tex}
\header{Ephemeral links and frequent links.}
Given the rate at which links appear and disappear, here we ask whether there exist pairs of videos that are frequently connected.
For each pair of connected videos, we count the number of times that a link appears between them over the 63 daily snapshots.
\cref{fig:measure-temporal-micro}(b) plots the link frequency (taking values between 1 and 63) on the x-axis and the number of video-to-video pairs with that link frequency on the y-axis.
We find that many links are ephemeral --- they appear only a few times, scattered across the 63-day time window.
We count that 434K ($25.2\%$) video-to-video links only appear once.
On the other hand, there are links that appear in every snapshot --- we count 54K ($3.1\%$) such links.
Ephemeral links may contribute to bursty popularity dynamics of YouTube videos, and to the generally perceived unpredictability in complex social systems~\cite{martin2016exploring,rizoiu2017expecting,rizoiu2018sir}.
Frequent links may hold the answer to understanding and predicting the attention flow in a network of content.
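Counting link frequencies over the snapshots is straightforward; the sketch below assumes each daily snapshot is given as a set of directed (source, target) video-ID pairs, a hypothetical input format.

```python
from collections import Counter

def link_frequencies(snapshots):
    """Count, for each directed video pair, on how many daily
    snapshots the link appears. A histogram of the returned counts
    (values between 1 and the number of snapshots) separates
    ephemeral links (frequency 1) from always-present links."""
    freq = Counter()
    for edges in snapshots:
        freq.update(edges)  # each edge counted once per snapshot
    return freq
```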
\section{Estimating attention flow in the YouTube video network}
\label{sec:models}
The goal of this section is to estimate how well the view counts of a video $v$ on day $t$ (denoted by $\mathbf{y}_v[t]$) can be predicted, given (1) the view series of $v$ over the past $w$ days, $\mathbf{y}_{v}[t-w],\ldots,\mathbf{y}_{v}[t-1]$; and (2) the view series $\mathbf{y}_{u}[t-w],\ldots,\mathbf{y}_{u}[t]$ for the set of videos $\{u \mid (u \rightarrow v) \in G \}$ pointing to $v$.
To this end, we first define and extract a persistent network that contains links appearing throughout all the snapshots (\cref{ssec:persistent}).
Next, we detail the setup of predicting video popularity with recommendation network information (\cref{ssec:setting}).
We analyze the prediction results and examine the strength of each link (\cref{subsec:prediction-results}).
Finally, we introduce a new metric --- estimated network contribution ratio.
We use it to identify the types of content that benefit most from being recommended in the network (\cref{subsec:result-interpretation}).
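As a concrete illustration of the task definition above, the sketch below assembles the predictor vector for one (video, day) pair. The dictionary-of-series input format and the mapping from each target video to its in-neighbors are hypothetical conveniences; the actual forecasting model is specified in \cref{ssec:setting}.

```python
def build_features(y, in_neighbors, v, t, w):
    """Predictor vector for video v on day t: its own views over the
    past w days (t-w .. t-1), followed by the views of each video u
    with a link u -> v over the window t-w .. t, matching the task
    definition in the text.

    y: dict mapping video ID -> list of daily view counts.
    in_neighbors: dict mapping video ID -> list of source video IDs."""
    x = [y[v][t - k] for k in range(w, 0, -1)]          # own history
    for u in sorted(in_neighbors.get(v, [])):           # deterministic order
        x.extend(y[u][t - k] for k in range(w, -1, -1))  # includes day t
    return x
```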
\subsection{Constructing a network with persistent links}
\label{ssec:persistent}
In order to reliably estimate the effects of the recommendation network on the viewing behaviors, we apply two filters:
(a) target videos should have at least 100 daily views on average;
(b) the average daily views of the source videos should be at least 1\% of those of the target videos as such videos cannot substantially influence their far more popular neighbors.
In the resulting network, we further remove the \textit{ephemeral links} that appear sporadically over time and correct for the \textit{missing links} that appear frequently, but with scattered gaps in between their appearances.
We assume that the missing links are likely to exist in the scattered gaps, and we use a majority smoothing method to find them (detailed next).
Links appearing in all the 63 daily snapshots and the corrected missing links, both dubbed \textit{persistent links}, make up the \textit{persistent network}.
\header{Finding persistent links.}
We use a moving window of length 7, same as the weekly seasonality, to extract the persistent structure of the \textsc{Vevo}\xspace network over the 63-day observation window.
A link from video $u$ to video $v$, $(u \rightarrow v)$, is maintained on day $t$ if $(u \rightarrow v)$ appears in a majority ($\geq 4$) of the days in time window $[t-3, t+3]$.
Likewise, if a link is missing on the current day $t$ but appears in the majority of the surrounding 7-day window, we consider it a missing link and add it back to the network.
When $t-3$ is earlier than the first day of data collection, or $t+3$ later than the last day, we still apply the majority rule on the available days.
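The majority-smoothing rule can be sketched as follows, operating on a 0/1 presence vector for a single link over the observation window; a link is then persistent if its smoothed vector is all ones.

```python
def smooth_presence(presence, window=7):
    """Apply the majority smoothing described in the text: a link is
    kept on day t if it is present on a majority of the available
    days in [t-3, t+3] (the window is truncated at the boundaries).
    `presence` is a list of 0/1 flags, one per day."""
    half = window // 2
    n = len(presence)
    smoothed = []
    for t in range(n):
        lo, hi = max(0, t - half), min(n, t + half + 1)
        win = presence[lo:hi]
        # strict majority: sum > len/2, i.e. >= 4 for a full 7-day window
        smoothed.append(1 if 2 * sum(win) >= len(win) + 1 else 0)
    return smoothed

def is_persistent(presence):
    """A link is persistent if it survives smoothing on every day."""
    return all(smooth_presence(presence))
```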
The resulting graph has 52,758 directed links, pointing from 28,657 source videos to 13,710 target videos.
Among them, 2,696 links are reciprocal, meaning two videos mutually recommend each other.
We find significant homophily in the persistent network:
33,908 (64.3\%) links have both the source and the target videos belonging to the same \textsc{Vevo}\xspace artist, and 44,154 (83.7\%) links are between videos of the same music genre.
\input{images/fig-persistent.tex}
\header{Validating persistent links via simulation.}
We illustrate the probability of persistent links by simulating a simple link presence/absence model.
We assume a link is independently present on each day with probability $p_l \in [0, 1]$, and absent with probability $1 - p_l$.
We first simulate the link's daily presence over the 63 days, then apply our 7-day majority smoothing to determine whether it is persistent.
We repeat the simulation 100,000 times, and compute the probability of a link being persistent, denoted by $\xi$.
In \cref{fig:model-persistent}(a), we plot the obtained $\xi$ against varying $p_l$.
For $p_l=0.5$ the edge is never persistent ($\xi = 0$), whereas for $p_l=0.9$ the edge is very likely to be persistent ($\xi = 0.92$).
From the simulation results, we can see that our 7-day majority smoothing rule favors links that appear much more frequently than chance, and suppresses links that appear at or near chance levels.
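This simulation can be sketched as follows. The function re-implements the 7-day majority rule described earlier so the snippet is self-contained, and the default sample size is reduced from the paper's 100,000 runs for speed.

```python
import random

def simulate_xi(p_l, n_days=63, n_sims=20000, seed=42):
    """Monte Carlo estimate of xi: the probability that a link,
    independently present each day with probability p_l, survives
    the 7-day majority smoothing on every one of n_days."""
    rng = random.Random(seed)

    def survives(pres):
        for t in range(n_days):
            lo, hi = max(0, t - 3), min(n_days, t + 4)
            win = pres[lo:hi]
            if 2 * sum(win) < len(win) + 1:   # no majority in window
                return False
        return True

    hits = sum(survives([rng.random() < p_l for _ in range(n_days)])
               for _ in range(n_sims))
    return hits / n_sims
```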
\header{Videos connected by persistent links have correlated popularity dynamics.}
We use Pearson's $r$ to measure the correlation between the popularity dynamics of two videos connected by a persistent link.
It is known that the cross-correlation of time series data is affected by the within-series dependence.
Therefore, we deseasonalize, detrend, and normalize the view count series by following the benchmark steps in the M4 forecasting competition~\cite{m4forecasting}.
This is to ensure that the residual time series data is stationary and to avoid spurious correlations.
We compute the Pearson's $r$ on the obtained residual data, and we perform a paired correlation test which we consider statistically significant for $p < 0.05$.
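The preprocessing and correlation step can be illustrated with a simplified stand-in: the M4 benchmark pipeline uses a full seasonal decomposition, whereas the sketch below removes weekly seasonality and trend with plain lag-7 seasonal differencing before computing Pearson's $r$ on the residuals.

```python
def residual_correlation(xs, ys, season=7):
    """Pearson's r between two daily view series after lag-`season`
    differencing (a simplified substitute for the cited M4
    deseasonalize/detrend/normalize steps)."""
    def resid(vs):
        return [vs[i] - vs[i - season] for i in range(season, len(vs))]

    a, b = resid(xs), resid(ys)
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a) ** 0.5
    vb = sum((q - mb) ** 2 for q in b) ** 0.5
    return cov / (va * vb)
```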
\cref{fig:model-persistent}(b) shows the fraction of links for which the correlation test is statistically significant over four groups of links.
The persistent$^{-}$ group contains all 52,758 persistent links we identified, excluding the 2,696 pairs of \textit{reciprocal} links (5,392 links in total) --- resulting in 47,366 persistent yet non-reciprocal links.
The \textit{ephemeral} group consists of all links which have been deemed as non-persistent after applying the 7-day majority smoothing.
The \textit{random} group is constructed by randomly selecting pairs of unconnected videos and pretending that they have a link.
All groups are filtered based on the same two criteria mentioned before.
There are a total of 694,617 links in the ephemeral group and we sample 700,000 links in the random group.
We find that $75.4\%$ of the reciprocal links connect videos with statistically correlated popularity series.
We include both positive and negative correlations as two user attention series may cooperate or compete with each other~\cite{zarezade2017correlated}.
Combining the reciprocal and persistent$^{-}$ groups, 26,460 (50.2\%) links in our persistent network have correlated dynamics.
This is much higher than the percentage for ephemeral links (40.9\%) and that for unconnected random video pairs (22.1\%).
We further examine the content similarity in the persistent links by grouping links that connect videos from the same artist or with the same music genre (described in~\cref{fig:measure-statistics}(c)).
\cref{fig:model-persistent}(c) top shows that most reciprocal links (93.1\%) connect videos from the same artist, while 71.1\% of them have statistically correlated popularity dynamics.
The percentages are slightly lower for the persistent$^{-}$ group (61\% from the same artist, and 32.6\% with correlated popularity), and they drop even lower for the ephemeral group (28.2\% and 12.2\%, respectively).
The situation is slightly different when we study the links that connect videos of the same genre, as shown in \cref{fig:model-persistent}(c) bottom.
We find that more than 80\% of the links connect videos of the same genre, irrespective of whether they are sporadically or persistently connected.
The percentages of statistically correlated links with the same genre follow the same trend as those from the same artist, i.e., highest for reciprocal (65\%), followed by persistent$^{-}$ (39.8\%), ephemeral (33.6\%) and lowest for random (6.6\%).
The above observations indicate that not all persistent links have the same effect on video popularity, and motivate us to build a prediction model for each of the links.
\subsection{Prediction setup and models}
\label{ssec:setting}
\header{Prediction setting.}
One important observation is that viewing dynamics exhibit a 7-day seasonality~\cite{huang2018user,cheng2008statistics}.
In our temporal hold-out setting, we use the first 8 weeks (2018-09-01 to 2018-10-26) to train the model and we predict the daily view counts in the last week (2018-10-27 to 2018-11-02).
This chronological split ensures that the training data temporally precedes the testing data.
If at any point we are required to use the day $t+1$ to predict the day $t+2$ (when both $t+1$ and $t+2$ are in the testing period), we use the predicted value $\hat{\mathbf y}[t+1]$ instead of observed value $\mathbf{y}[t+1]$.
\header{Evaluation metric.}
The prediction performance is quantified using the symmetric mean absolute percentage error (SMAPE).
SMAPE is an alternative to the mean absolute percentage error (MAPE) that can handle the case when the true value or the predicted value is zero.
It is a scale-independent metric and suitable for our task, in which the volume of views varies considerably across videos.
Formally, SMAPE can be defined as
\input{equations/eq-smape.tex}
where $\mathbf{y}_v[t]$ is the true value for video $v$ on day $t$, $\hat{\mathbf{y}}_v[t]$ is the predicted value, $\mathrm{T}$ is the maximal forecast horizon, and $G$ is the persistent network.
SMAPE$(v)$ averages the forecast errors over different horizons for an individual video $v$, while SMAPE$(t)$ averages over different videos for a certain forecast horizon $t$.
The overall SMAPE for each model is computed by taking the arithmetic mean of SMAPEs over different horizons and over all videos.
SMAPE ranges from 0 to 200, where 0 indicates perfect prediction and 200 the largest error, attained when exactly one of the true and the predicted values is 0.
When the true and the predicted values are both 0, we define SMAPE to be 0.
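Concretely, the metric with the zero-handling convention above can be implemented as follows (a sketch consistent with \cref{eq:smape}):

```python
import numpy as np

def smape(y_true, y_pred):
    # Each term is 200*|y - yhat| / (|y| + |yhat|), defined as 0 when
    # both the true and the predicted values are 0; range is [0, 200].
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = np.abs(y_true) + np.abs(y_pred)
    safe = np.where(denom == 0, 1.0, denom)
    terms = np.where(denom == 0, 0.0, 200.0 * np.abs(y_true - y_pred) / safe)
    return terms.mean()
```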
\header{Baseline models.}
We use a few off-the-shelf time series forecasting methods, ranging from a naive forecast to a recurrent neural network.
The baseline models are estimated on a per-video basis.
\begin{itemize}[leftmargin=*]
\item {Naive}: The forecast at all future times is the last known observation.
\input{equations/eq-naive.tex}
where $\mathrm{T}^*$ is the last day in the training phase.
\item {Seasonal naive (SN)}: The forecast is the corresponding observation in the last seasonal cycle.
This method often works well for seasonal data.
We observe that many videos in the \textsc{Vevo Music Graph}\xspace dataset exhibit a 7-day seasonality.
Therefore we set the periodicity length $\mathrm{m}^*$ to be 7.
\input{equations/eq-snaive.tex}
\item {Autoregressive (AR)}: AR is one of the most commonly used models in time series forecasting.
An AR model of order $p$ describes the relation between each of the past $p$ days and the current day, formally defined as:
\input{equations/eq-ar.tex}
We choose the order $p$ to be 7.
$\alpha_{v, \tau}$ captures the dependence of the current day on the value $\tau$ days earlier.
\item {Recurrent neural network (RNN)}: RNN is a deep learning architecture that models temporal sequences.
We implement RNN with long short-term memory (LSTM) units.
LSTM-based approaches have been competitive in time series forecast tasks, mainly in a sequence-to-sequence (seq2seq) setup, see \cite{kuznetsov2019foundations} for detailed discussions.
\end{itemize}
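For illustration, the three non-neural baselines can be sketched as follows. This is an illustrative re-implementation, not the \textsc{statsmodels.tsa} code used in our experiments; the AR fit below is a plain least-squares version with an intercept.

```python
import numpy as np

def naive_forecast(train, horizon):
    # Repeat the last observation.
    return np.full(horizon, train[-1], dtype=float)

def seasonal_naive_forecast(train, horizon, m=7):
    # Repeat the last full seasonal cycle (m = 7 for weekly seasonality).
    last_cycle = np.asarray(train[-m:], dtype=float)
    return np.array([last_cycle[t % m] for t in range(horizon)])

def ar_forecast(train, horizon, p=7):
    # Fit y[t] ~ c + sum_tau alpha_tau * y[t - tau] by least squares,
    # then roll the model forward, feeding predictions back in.
    y = np.asarray(train, dtype=float)
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - tau:len(y) - tau] for tau in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    hist = list(y)
    out = []
    for _ in range(horizon):
        lags = [hist[-tau] for tau in range(1, p + 1)]
        out.append(coef[0] + float(np.dot(coef[1:], lags)))
        hist.append(out[-1])
    return np.array(out)
```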
\header{Networked popularity model.}
Built on top of the AR model, we model the network effects by assigning a weight $\beta_{u, v}$ to each link $(u \rightarrow v)$ existing in the persistent graph $G$, which modulates the inbound traffic received via that link, defined as:
\input{equations/eq-network.tex}
$\beta_{u, v}$ can be interpreted as the probability that a generic user clicks through to video $v$ from video $u$; therefore, we impose the constraint $0 \leq \beta_{u, v} \leq 1$.
We refer to this model as ARNet.
One way to interpret the ARNet model is to conceptualize a YouTube watching session as a sequence of video clicks.
We therefore categorize views on YouTube into two classes: \textit{initial} views and \textit{subsequent} views.
The initial views start the clicking sequences.
Some possible entry points include homepage feed, search results, or YouTube URLs on other social media.
The subsequent views model the behaviors of users clicking by following the recommendation links.
The session ends when the user navigates back to YouTube homepage, or quits the browser.
Although in the dataset we cannot differentiate initial views from subsequent views, we consider that initial views are driven by the latent interest of users, modelled as an autoregression over the past $p$ days; in contrast, subsequent views are directed by the recommendation network, modelled as a contribution from the incoming neighbours $\{u | (u \rightarrow v) \in G \}$, mediated by the estimated link strength $\beta_{u, v}$.
We use the \textsc{statsmodels.tsa} package for the AR model, \textsc{keras} package for the RNN, and build a customized optimization task with constrained \textsc{L-BFGS} for the ARNet.
We use the SMAPE as objective function in both RNN and ARNet.
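A per-video fit can be sketched with \textsc{scipy}'s bounded \textsc{L-BFGS-B} as a stand-in for our customized optimizer. The lag used for the network term (the neighbours' same-day views) is an assumption for illustration, as is the simplified loop-based predictor; the non-negativity bounds follow the constraints on the coefficients stated in the text.

```python
import numpy as np
from scipy.optimize import minimize

def smape_loss(y_true, y_pred):
    denom = np.abs(y_true) + np.abs(y_pred)
    denom = np.where(denom == 0, 1.0, denom)
    return np.mean(200.0 * np.abs(y_true - y_pred) / denom)

def fit_arnet(y_v, neighbours, p=7):
    # y_v: view series of the target video; neighbours: list of view
    # series of videos with a persistent link into v.
    T = len(y_v)
    def predict(theta):
        alpha, beta = theta[:p], theta[p:]
        preds = []
        for t in range(p, T):
            ar = np.dot(alpha, y_v[t - p:t][::-1])        # own past views
            net = sum(b * u[t] for b, u in zip(beta, neighbours))  # inbound traffic
            preds.append(ar + net)
        return np.array(preds)
    loss = lambda th: smape_loss(y_v[p:], predict(th))
    theta0 = np.concatenate([np.full(p, 1.0 / p),
                             np.full(len(neighbours), 0.1)])
    # all coefficients non-negative; each beta additionally bounded by 1
    bounds = [(0.0, None)] * p + [(0.0, 1.0)] * len(neighbours)
    res = minimize(loss, theta0, method="L-BFGS-B", bounds=bounds)
    return res.x[:p], res.x[p:], res.fun
```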
\subsection{Popularity prediction results}
\label{subsec:prediction-results}
\cref{fig:model-results}(a) summarizes the prediction errors achieved by the five methods defined in \cref{ssec:setting}.
The Naive model alone is a weak predictor; however, accounting for the seasonal effects (the SN model) yields a significant error decrease.
It is worth noticing that the AR model yields similar performance as the advanced RNN model --- due to the known result that future popularity of online videos correlates with their past popularity \cite{pinto2013using}.
We observe that using recommendation network information further improves the prediction performance:
the ARNet model achieves a 9.66\% relative error reduction compared to the RNN model.
This prediction task shows that one can better predict the view series for a video if the list of videos pointing to it is known.
Next we study the prediction performance with respect to the forecast horizon, i.e., how many days in advance do we predict.
We average the SMAPEs over all videos for a given forecast horizon $t$, computed as $\mathrm{SMAPE}(t)$ in \cref{eq:smape}.
\cref{fig:model-results}(b) shows a nuanced story: the prediction performance degrades for all models as the forecast horizon extends.
Nevertheless, the ARNet model consistently outperforms other baselines across all forecast horizons, especially for larger horizons.
\input{images/fig-prediction.tex}
We posit two factors that prevent the models from obtaining even better results.
Firstly, it is well known that the attention dynamics tend to be bursty when items are first uploaded~\cite{rizoiu2017online,cheng2016cascades,martin2016exploring}, and the interest dissipates with time~\cite{figueiredo2016trendlearner}.
Given that 56,845 (93.6\%) videos in our dataset have been uploaded for more than one year and 9,277 (15.3\%) videos for almost ten years, most of the videos have passed the phases of the initial attention burst.
As a result, a large part of popularity variation comes from the weekly seasonality, rendering the simple seasonal naive model particularly competitive when compared to the more advanced RNN method.
The second is data sparsity when we build the models on a per-video basis.
RNN works best when it has ample volumes of data to train.
However, we use a sliding 7-day window to predict the views in the next 7 days, as suggested in~\cite{kuznetsov2019foundations}; therefore, our data size limits the effective training of the RNN model.
In our ARNet model, the estimated link strength $\beta_{u, v}$ can be used to quantify the influence from a video to its neighbours.
In \cref{fig:model-results}(c), we plot the distribution of $\beta_{u, v}$ against the ratio of the views of the source video to those of the target video.
We split the x-axis into 40 equally wide bins in log scale.
Within each bin, we compute the values at each percentile, and then connect the same percentile across all bins.
The median line is highlighted in black.
The lighter the color shades are, the further the corresponding percentiles are away from the median.
We observe that the distribution has a bi-modal shape, with the first mode at 0.01 and the second at 0.40 (for the median), meaning users are more likely to click through to a much more popular video (100 times more popular) or a moderately more popular video (2.5 times).
In contrast, the estimated link strength towards a less popular video is very low.
This observation, together with the measurement that videos disproportionately point to more popular videos (\cref{sssec:structure-viewcounts}), further reinforces the ``rich get richer'' phenomenon.
\subsection{The impacts of network on video popularity prediction}
\label{subsec:result-interpretation}
From the ARNet model, we derive a metric called the estimated network contribution ratio $\eta_v$, which is defined as
\input{equations/eq-network-ratio.tex}
$\eta_v$ is the fraction of estimated inbound traffic from video $v$'s neighbours relative to its own predicted popularity.
As we constrain all coefficients in \cref{eq:network} to be non-negative, $\eta_v$ is bounded in $[0, 1]$.
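Given the fitted parameters, $\eta_v$ can be computed as sketched below. The exact expression is given by the equation above; this sketch follows the verbal definition, again with an assumed same-day network term.

```python
import numpy as np

def network_contribution_ratio(alpha, beta, y_v, neighbours, p=7):
    # eta_v: share of the predicted views attributed to the network term,
    # accumulated over all predicted days.
    T = len(y_v)
    net = sum(sum(b * u[t] for b, u in zip(beta, neighbours))
              for t in range(p, T))
    ar = sum(np.dot(alpha, y_v[t - p:t][::-1]) for t in range(p, T))
    total = ar + net
    return net / total if total > 0 else 0.0
```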
In our dataset, the mean $\eta_v$ is $0.314$.
In other words, for an average video in the \textsc{Vevo Music Graph}\xspace dataset, 31.4\% of its views are estimated from the recommendation network.
This value is slightly higher than the YouTube network contribution measured by \citet{zhou2010impact} in 2010 (reported below 30\%).
We posit two potential reasons: (1) the \textsc{Vevo}\xspace network is more tightly connected than a random YouTube video network~\cite{airoldi2016follow}; (2) traffic on recommendation links may have increased since then, signifying the advances of modern recommender systems.
Furthermore, among the 31.4\% networked views, 85.9\% are estimated from the same artist, echoing the network homogeneity found by \citet{airoldi2016follow}.
On average, the 13,710 target videos in the persistent network attract 245.3M views every day.
Our ARNet model estimates that 78.6M (32\%) of these views are contributed via the recommendation network.
\input{images/fig-error.tex}
Firstly, we explore the relation between prediction performance and content similarity concerning the artist and music genre.
In \cref{fig:model-error}(a), we compute $\eta_v$ conditioned on ${(u \rightarrow v) \in G}$ with $u$ and $v$ being from the same artist (top) or of the same genre (bottom).
We then slice the x-axis into 20 bins, 5 percentiles apart, based on the artist/genre network contribution ratio.
We compute the mean SMAPEs for the videos in each bin.
Videos that are connected solely by videos from other artists/genres will be placed in the leftmost bin ($\eta_v = 0$).
The plot shows that the SMAPE error decreases with the increasing percentage of views from videos with the same artist or genre.
Secondly, we study which artists would be affected most \textit{if} the recommender systems were to be turned off.
\cref{fig:model-error}(b) shows the popularity percentile \textit{change} at the level of artist.
We first compute the network-subtracted views, i.e., we subtract the network contribution $\sum_{t=1}^{\mathrm{T}} \sum_{(u, v) \in G} \beta_{u, v} \mathbf{y}_{u}[t]$ from the observed views $\sum_{t=1}^{\mathrm{T}} \mathbf{y}_{v}[t]$.
We then aggregate and compute the popularity percentiles for both observed views and network-subtracted views at the level of artist.
The x-axis plots the artists' popularity percentiles without recommendation network, and y-axis plots the percentile changes when turning on the network.
The range of percentiles stays constant between $[0, 100\%]$, reflecting the concept of finite attention --- one video gains popularity at the expense of others.
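The percentile-change computation above can be sketched as follows, with per-artist view totals as hypothetical inputs:

```python
import numpy as np
from scipy.stats import rankdata

def percentile_change(observed, network_part):
    # observed[a]: total observed views of artist a's videos;
    # network_part[a]: the summed beta-weighted inbound traffic.
    # Returns, per artist, the popularity-percentile change when the
    # recommendation network is "turned on".
    observed = np.asarray(observed, dtype=float)
    subtracted = observed - np.asarray(network_part, dtype=float)
    pct = lambda x: 100.0 * (rankdata(x) - 1) / (len(x) - 1)
    return pct(observed) - pct(subtracted)
```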
The top outliers identify artists who gain much more popularity than their peers with similar popularity due to the recommendation network; whereas the bottom outliers represent artists who lose popularity.
There are 2,340 artists having target videos in the persistent network.
We observe that 1,378 (58.89\%) artists lose a small amount of popularity (less than 5\%), while 948 (40.51\%) gain.
We notice that there is no bottom outlier.
On the contrary, the top outliers show that the network can help some artists massively increase their relative popularity (as high as 26\%, J-Kwon (American rapper) in 4th bin).
We take a closer look at the outliers by scattering them in \cref{fig:model-error}(c).
70 artists gain significant popularity from the recommendation network, implying a better utilization of network effects.
We retrieve the artist genres from the music database MusicBrainz, and we notice two notable groups.
One is the Indie group by matching genre keywords ``indie'', ``alternative'', or ``new wave''.
The top 3 most popular Indie artists are 4 Non Blondes, Hoobastank, and The Police.
The other is the Hip hop group by matching genre keywords ``hip hop'', ``rap'', ``reggae'', or ``r\&b''.
The top 3 most popular Hip hop artists are Mark Ronson, French Montana, and Pharrell Williams.
This finding reveals that the recommender systems can lead users to find niche artists.
\section{Conclusion}
\label{sec:conclusion}
This work presents a large-scale study of online videos on YouTube.
We collect a new dataset that consists of 60,740 \textsc{Vevo}\xspace music videos, representing some of the most popular music clips and artists.
We construct the YouTube recommendation network.
We present measurements on the global component structure and temporal persistence of links.
A model that leverages the network information for predicting video popularity is proposed, which achieves superior results over other baselines.
It also allows us to estimate the amount of attention flow over each recommendation link.
We derive a metric --- estimated network contribution ratio, and we quantify this ratio at both the entire \textsc{Vevo}\xspace network level and individual artist level.
To the best of our knowledge, this is the first work that links the video recommendation network structure to the attention consumption for the videos in it.
\header{Discussion.}
Much progress has been made to algorithmically optimize or increase the attention for individual digital items (from videos to products to connections in social networks), whereas the theory of attention flow among different items is still fairly nascent.
Our data includes a series of network snapshots that are constructed by the platform's recommender systems, and visible to both content producers and consumers.
We believe that the area of understanding the implications of content recommendation networks has many worthy problems and fruitful applications.
However, the definitions and properties of a recommendation network that is fair and transparent to the content hosting site, producers, and consumers remain open issues.
\header{Limitation and future work.}
The limitations of this work include: interpretations of importance are based directly on regression weights; some observations may not generalize to digital items other than the most popular music videos; and the prediction does not explore all potential deep learning architectures and parameter tunings.
Future work includes modeling attention flow that takes into account item rank on the relevant list; connecting aggregate attention with individual click streams; and improving deep neural network models, specifically along three directions.
Firstly, extract additional features, such as audio-visual, artist, and network features.
Secondly, measure the relations between estimated link strength and link properties, such as the diversity and/or novelty of the target video relative to the source video~\cite{ziegler2005improving}.
Lastly, train a shared RNN model on videos with similar dynamics for increasing the volume of training data~\cite{figueiredo2016trendlearner}.
\section{Introduction}
Density functional theory (DFT)\cite{dft1,dft2} is by far the most widely used method in solid state physics, owing to its immense success in predicting solid state properties such as crystal structures, ionization energies, and electrical, magnetic and vibrational properties. However, treating electron correlations within an effectively single-particle framework makes it inadequate, even with the best available exchange-correlation potentials, for an important class of materials: strongly correlated electron systems. This is the realm of dynamical mean field theory (DMFT) \cite{Metzner89a,Georges92a,Georges96a}, which incorporates local, dynamic correlations and has been merged with DFT for the calculation of realistic correlated materials \cite{Anisimov97a,Lichtenstein98a,dmft0,dmft1, dmft2}. In DMFT, the electrons can stay at a lattice site or dynamically hop between lattice sites in order to suppress double occupation and hence the cost of the Coulomb interaction, without any symmetry breaking, unlike in the static DFT+U approach\cite{ldau}. The method has been successfully applied to transition metals \cite{Lichtenstein98a} and their oxides \cite{Held01a}, molecules\cite{sbmoldftp}, adatoms\cite{panda} and f-electron systems\cite{SAVRASOV,Held01b}, thus proving its versatility.
The early developments in this direction are ``one-shot'' DFT+DMFT\cite{Savrasov04,Minar05,millis_csc,Frank,Pouroskii07, Aichhorn2011,haule2010,csc_sb} calculations. In a ``one-shot'' calculation, first a DFT calculation is converged for a given material. Subsequently the DFT Hamiltonian is supplemented with a local Coulomb interaction for the correlated orbitals and this problem is subsequently solved within the DMFT framework. The physical properties such as the spectral function, susceptibility or magnetization are calculated from this ``one-shot'' DMFT solution of a DFT derived Hamiltonian.
Subsequently charge self-consistent (CSC) DFT+DMFT calculations have been implemented and applied. Here, the total electronic charge density is updated after the DMFT
calculation, now including effects of electronic correlations. With this updated charge density the Kohn-Sham equations of DFT are solved, a new Hamiltonian is derived which is again solved by DMFT etc. Both cycles, DFT and DMFT, are converged simultaneously. The physical properties are calculated from the converged solution. The correlation-induced change in the charge density can be significant. Hence for some materials using CSC leads to a major correction; for other materials the corrections are minute. Incorporation of this CSC correction in a site-to-site charge transfer has been studied extensively \cite{Savrasov04,Minar05,millis_csc,Frank,Pouroskii07,Aichhorn2011,haule2010}. More recently, also the effect of the inter-orbital and momentum-dependent charge redistribution has been studied \cite{csc_sb}.
While DFT provides a reasonable starting point for both ``one-shot'' and CSC DFT+DMFT, the incompatibility of the DFT and DMFT approach is seen in many occasions, e.\,g., in so-called ``$d+p$" DMFT calculation for transition metal oxides
\cite{Held02,Wang07,haule2010,Parragh2013,Haule2014,Dang2014,Hansmann2014}. The reason behind this is that in DFT the $p$ bands are too close to the Fermi level. Hence, there is a too strong intermixture of $d$ and $p$ bands, and the $d$-orbitals are not strongly correlated. Within the framework of DFT+DMFT, one consequently needs to introduce an adjustment to the $d$-$p$ splitting, adjusting it either to the experimental oxygen $p$ position\cite{Dang2014,zhong}, adding a $d$-$p$ interaction parameter\cite{Hansmann2014}, or modifying the double counting \cite{haule2010,Haule2014} or exchange-correlation potentials \cite{Nekrasov2012,Nekrasov2013}. For example, in SrVO$_3$, the proper renormalization of the t$_{2g}$~ band has been obtained with an additional shift applied to the $O$ $p$-bands, which is as large as 5$\,$eV relative to the t$_{2g}$~ bands \cite{zhong}, correcting the position of the $O$-$p$ bands to that observed in experiment.
There have been considerable efforts to improve on the exchange part of the
exchange-correlation potential. Approaches in this direction include GW\cite{Hedin}+DMFT\cite{GWDMFT1,GWDMFT2,Tomczak12,GWDMFT3} and quasiparticle self-consistent GW (QSGW)\cite{QSGW1,QSGW2}+DMFT \cite{QSGWDMFT1,QSGWDMFT2}; also hybrid functionals \cite{hydmft} instead of the more widespread local density approximation (LDA) or generalized gradient approximation (GGA) exchange-correlation potential can be employed.
None of these approaches, however, solves the problem of the wrong position of the oxygen $p$ band. In this paper, we propose an alternative self-energy self-consistent (\sig) DFT+DMFT scheme. For the correlated orbitals, i.e., those that acquire a Coulomb interaction in DMFT, \sig{} DFT+DMFT takes the (linearized) DMFT self-energy as the exchange-correlation potential, in a similar way as proposed by van Schilfgaarde and Kotani \cite{QSGW1,QSGW2} for QSGW. That is, when solving the Kohn-Sham equation, these correlated orbitals sense the (linearized) DMFT self-energy instead of the conventional LDA or GGA exchange-correlation potential. For the less correlated orbitals, which do not acquire an interaction in DMFT, the GGA is still employed. The method is self-consistent in both the electronic charge density and the self-energy, and free from the double-counting ambiguity. We apply the approach to SrVO$_3${} and find that it renders the correct position of the oxygen $p$-orbitals.
The outline of the paper is as follows:
In Section \ref{sec:method}, we introduce the \sig{} DFT+DMFT. In this section, we first recapitulate the conventional steps of DFT in Section \ref{Sec:DFT}, the projection onto Wannier functions in
Section \ref{Sec:WF}, and DMFT in
Section \ref{Sec:DMFT}. Carrying out these three steps constitutes a so-called ``one-shot'' DFT+DMFT calculation, whereas, as discussed in
Section \ref{Sec:CSC},
in a CSC scheme the charge recalculated after the DMFT step is fed back into the Kohn-Sham equation to obtain a new one-particle Kohn-Sham Hamiltonian, until self-consistency is reached. The decisive step of the present paper, described in Section \ref{Sec:SCS}, is to take not only the charge but also the DMFT self-energy as the exchange-correlation potential of the correlated orbitals
when going back to the Kohn-Sham equation after the DMFT step.
The proper subtraction of the Hartree term to avoid a double counting is discussed in Section \ref{Sec:dc}.
An overview of the method in form of a flow diagram of the individual steps as well as of the full \sig{} DFT+DMFT scheme is provided in
Section \ref{Sec:flow} and Fig.~\ref{cycle}.
Section \ref{sec:results} presents the results for SrVO$_3$, and Section \ref{sec:summary} a summary and outlook.
\section{Methodology}
\label{sec:method}
In this section, we present the formalism and implementation of self-energy self-consistency (\sig). The actual implementation is
based on the maximally localized Wannier functions (MLWF) and extends
our previous CSC DFT+DMFT \cite{csc_sb} implementation.
Let us emphasize that the \sig{} is a major improvement on the CSC: not only the charge but---based on the DMFT self-energy---also the exchange-correlation potential of the Kohn-Sham equations is changed.
Specifically, our starting point is a DFT calculation within Wien2k\cite{w2k}, followed by a DMFT calculation which is performed with w2dynamics\cite{w2d} using continuous-time quantum Monte Carlo (CTQMC) \cite{CTQMC} as an impurity solver. The identification of localized orbitals in DMFT is done with Wien2wannier\cite{wien2wannier}, an interface between Wien2k\cite{w2k} and wannier90\cite{wanrev}. In \sig, the self-consistency does not only include an update of the charge in the Kohn-Sham equation but further modifies the exchange-correlation potential on the basis of the linearized DMFT self-energy. This step, distinguishing our work from previous DFT+DMFT implementations, is presented in Section \ref{Sec:SCS}. This way, genuine effects of electronic correlations are included in the exchange correlation-potential and a double counting is avoided.
\subsection{DFT cycle}
\label{Sec:DFT}
Let us start by defining the central quantities of the \sig{} DFT+DMFT: the electronic charge density as the key quantity in DFT and the Greens function (or the related self-energy) as the central component of DMFT.
The charge density at a given spatial position \br{} is given by the equal-time Green's function, i.e., as a sum over all Matsubara frequencies:
\begin{equation}
\label{Eq:charge}
\rho({\bf r}) = \frac{1}{\beta}\sum_n G({\bf r},{\bf r};i\omega_n)e^{i\omega_n0^+},
\end{equation}
while the local DMFT Green's function in the basis of the localized Wannier orbitals $\chi_m$ is given by
\begin{equation}
G_{mm'}(i\omega_n)=\! \int\! d{\bf r}d{\bf r'} \chi^*_m({\bf r})
\chi_{m'}({\bf r'})G({\bf r},\!{\bf r'}\!;\!i\omega_n).
\end{equation}
Here $m$, $m'$ denote orbitals on the same site, $\beta$ is the inverse temperature, and the factor $e^{i\omega_n0^+}$ ensures the convergence of the summation over the Matsubara frequencies $\omega_n=(2n+1)\pi /\beta$. The full Green's function of the solid appears in both equations and can be written as
\begin{equation}
\begin{aligned}
\label{Eq:GFKS}
G({\bf r},{\bf r'};i\omega)=\bra{{\bf r}}[i\omega_n+\mu-\hat{H}_{KS}-\Delta\hat{\Sigma}]^{-1}\ket{{\bf r'}}.
\end{aligned}
\end{equation}
Here, $\mu$ is the chemical potential and $\hat{H}_{KS}$ the one-particle Hamiltonian of the Kohn-Sham equation consisting of the kinetic energy operator $\hat{T}$ and the effective Kohn-Sham (KS) potential $\hat{V}_{KS}$.
In a DFT calculation, the
KS potential $\hat{V}_{KS}$ has an explicit dependence on the total electronic charge $\rho({\bf r})$ and consists of an external potential $\hat{V}_{ext}$ due to the nuclei (ions), a Hartree potential $\hat{V}_{H}$ describing the electron-electron Coulomb repulsion, and an exchange-correlation potential $\hat{V}_{xc}$, i.e., $\hat{V}_{KS}=\hat{V}_{ext}+\hat{V}_{H}+\hat{V}_{xc}$. Altogether this yields
\begin{equation}
\label{Eq:KS}
\hat{H}_{KS}=\hat{T}+\hat{V}_{ext}+\hat{V}_{H}+\hat{V}_{xc}\;.
\end{equation}
There are several existing formulations of the latter term, such as the LDA \cite{lda}, the GGA \cite{gga} or hybrid functionals\cite{mbj,b3lyp}. For our \sig{} calculations, we have employed the GGA, but this is of little importance, as this potential will later be replaced by a newly formulated one obtained from the self-energy $\hat{\Sigma}$.
The DFT self-consistency cycle (``DFT cycle'') hence consists of the following two steps:
(i) The calculation of the exchange correlation potential from the electronic charge distribution
$\rho({\bf r}) \rightarrow V_{KS}({\bf r})$.
(ii) The solution of the Kohn-Sham equation [written in Eq.~(\ref{Eq:GFKS}) in the form of a Green's function] and the recalculation of the charge [through Eq.~(\ref{Eq:charge})] together provide the second step $ V_{KS}({\bf r})\rightarrow \rho({\bf r})$.
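As a minimal numerical illustration of the Matsubara sum in Eq.~(\ref{Eq:charge}): for a single level, $G(i\omega_n)=1/(i\omega_n-\varepsilon)$, the regularized sum reproduces the Fermi function $f(\varepsilon)=1/(e^{\beta\varepsilon}+1)$. The treatment of the convergence factor $e^{i\omega_n 0^+}$ below (summing $\mathrm{Re}\,G$ symmetrically and adding the analytic $1/2$ from the $1/i\omega_n$ tail) is a standard textbook regularization, not specific to our implementation.

```python
import numpy as np

def density_from_matsubara(eps, beta, n_max=20000):
    # Evaluate (1/beta) sum_n G(i w_n) e^{i w_n 0+} for G = 1/(i w_n - eps).
    # Pairing +w_n with -w_n leaves Re G = -eps/(w_n^2 + eps^2), which
    # decays as 1/w_n^2 and is summable; the convergence factor
    # contributes the analytically known 1/2 from the 1/(i w_n) tail.
    n = np.arange(n_max)
    wn = (2 * n + 1) * np.pi / beta          # fermionic Matsubara frequencies
    return 0.5 + (2.0 / beta) * np.sum(-eps / (wn**2 + eps**2))
```

For $\beta=10$ and $\varepsilon=0.3$ this converges to $f(0.3)\approx 0.0474$, and particle-hole symmetry, $n(-\varepsilon)=1-n(\varepsilon)$, is fulfilled by construction.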
\subsection{Wannier projection}
\label{Sec:WF}
Our starting point is a self-consistent DFT calculation with a converged electronic charge density. At this point $V_{xc}$ is calculated with GGA. The next step is to construct a localized orbital basis, which is required in DMFT that treats local correlations. To this end, we employ MLWFs, which are constructed by a Fourier transform of the DFT Bloch waves $\ket{\psi_{\nu {\bf k}}}$:
\begin{equation}\label{wan0}
\begin{aligned}
\ket{w_{\alpha\bf R}}=\frac{\Omega}{(2\pi)^3}\int_{BZ} d{\bf k}~e^{-i{\bf kR}}\sum_{\nu=1}^{ {\mathcal C}}U_{\nu\alpha}({\bf k})\ket{\psi_{\nu {\bf k}}}.
\end{aligned}
\end{equation}
Here, $\hat{U}({\bf k})$ is the unitary transformation matrix, $\Omega$ the volume of the unit cell, $\nu$ ($\alpha$) denotes the band indices of the Bloch waves (Wannier functions). Here and in the following hats denote matrices (operators) in the
orbital indices. In Eq.~(\ref{wan0}), we restrict ourselves to an isolated band window with ${\mathcal C}$ Bloch waves. This window may, e.g., include the $d$- or $t_{2g}$-orbitals of a transition metal oxide or, as in our example below, the $t_{2g}$ plus oxygen $p$-orbitals. In the scheme of maximally localized Wannier functions\cite{wanrev}, the spread (spatial extension) of the Wannier functions describing the DFT band structure in the given energy window is minimized, and $\hat{U}({\bf k})$ is obtained from this minimization.
In general, the target bands are ``entangled'' with other, less important bands---at least at a few \tk-points. These bands are projected out by a so-called ``disentanglement'' procedure. That is, at each \tk-point, there is a set of ${\mathcal C}^o(\tk)$ Bloch functions which is larger than or equal to the number of target bands, i.e., ${\mathcal C}^o({\bf k}) \ge {\mathcal C}$. The disentanglement transformation takes the form
\begin{equation}\label{wan1}
\begin{aligned}
\ket{w_{\alpha\bf R}}\! =\! \frac{\Omega}{(2\pi)^3}\! \! \int\limits_{BZ} \! \! d{\bf k}~e^{-i{\bf kR}}\! \sum_{\nu'=1}^{ {\mathcal C}}\! \sum_{\nu=1 }^{ {\mathcal C^o({\mathbf k})}}V_{\nu\nu'}({\bf k}) U_{\nu'\alpha}({\bf k})\ket{\psi_{\nu {\bf k}}}.
\end{aligned}
\end{equation}
Here, the band index $\nu$ belongs to the ``outer window'' with ${\mathcal C}^{o}({\bf k})$ Bloch wave functions, while $\nu',\alpha$ label the $\mathcal C$ target bands. Hence, the disentanglement matrix $\hat{V}({\bf k})$ is a rectangular ${\mathcal C}^{o}({\bf k}) \times {\mathcal C}$ matrix. A Fourier transformation of $\ket{w_{\alpha\bf R}}$ leads to the Wannier orbitals in \tk-space
\begin{equation}\label{wan2}
\begin{aligned}
\ket{w_{\alpha{\bf k}}}=\sum_{\bf R}e^{i{\bf kR}}\ket{w_{\alpha\bf R}}=\sum_{\nu'\nu}V_{\nu\nu'}({\bf k})U_{\nu'\alpha}({\bf k})\ket{\psi_{\nu {\bf k}}}
\end{aligned}
\end{equation}
and the corresponding Wannier Hamiltonian
\begin{eqnarray}\label{ham}
\hat{H}^{\mathcal{W}}_{KS}({\bf k})&=&\hat{U}^{\dagger}({\bf k})\hat{H}_{KS}({\bf k})\hat{U}({\bf k}),\\
\hat{H}^{\mathcal{W}}_{KS}({\bf k})&=&\hat{U}^{\dagger}({\bf k})\hat{V}^{\dagger}({\bf k})\hat{H}_{KS}({\bf k})\hat{V}({\bf k})\hat{U}({\bf k}).
\end{eqnarray}
The two equations correspond to the case without and with disentanglement.
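As a concrete illustration of these basis rotations, the following NumPy sketch applies Eq.~(\ref{ham}) at a single \tk-point, without and with disentanglement. This is an illustration only, not our actual implementation (which uses wien2wannier and Wannier90); the matrix shapes are our assumption.

```python
import numpy as np

def wannier_hamiltonian(H_k, U_k, V_k=None):
    """Rotate the Kohn-Sham Hamiltonian at one k-point into the Wannier basis.

    H_k: Bloch Hamiltonian; U_k: unitary (C x C) Wannier rotation;
    V_k: optional rectangular (C_o x C) disentanglement matrix with
    orthonormal columns.
    """
    if V_k is not None:                  # entangled case: project C_o -> C bands
        H_k = V_k.conj().T @ H_k @ V_k
    return U_k.conj().T @ H_k @ U_k      # H^W = U^+ (V^+ H V) U
```

Since both transformations are (semi-)unitary, Hermiticity of the Hamiltonian is preserved; without disentanglement the band energies are unchanged.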
\subsection{DMFT cycle}
\label{Sec:DMFT}
The Hamiltonian is supplemented with a local Coulomb interaction, and the resulting lattice problem is solved in DMFT by mapping it onto an auxiliary impurity problem that is solved self-consistently\cite{Georges92a,Georges96a}. Here, either the non-interacting Green's function $\hat{\mathcal{G}}(i\omega_n)$ of the impurity problem or the local self-energy can be considered as the dynamical mean field. The DMFT formalism consists of the following four steps:
(i) The \tk-integrated lattice Dyson equation yields the local interacting Green's function $\hat{G}(i\omega_n)$
\begin{eqnarray}\label{locg}
\begin{aligned}
\!\!\!\!\!\hat{G}(i\omega_n)=\!\!\frac{1}{n_{\bf k}}\!\!\sum_{\bf k}[i\omega_n\! + \!\mu\!-\!\hat{H}^{\mathcal{W}}_{KS}({\bf k})\!-\!\hat{\Sigma}(i\omega_n)\!+\!\hat{\Sigma}_{dc}]^{-1}
\end{aligned}
\end{eqnarray}
from the local self-energy $\hat{\Sigma}$, the double-counting correction $\hat{\Sigma}_{dc}$, and the one-particle Kohn-Sham Hamiltonian $\hat{H}^{\mathcal{W}}_{KS}$; $n_{\bf k}$ \tk-points are considered in the reducible Brillouin zone.
(ii) The impurity Dyson equation provides the non-interacting impurity Green's function
\begin{eqnarray}
\hat{\mathcal{G}}(i\omega_n)^{-1}=\hat{\Sigma}(i\omega_n)+[\hat{G}(i\omega_n)]^{-1}.
\end{eqnarray}
(iii) Solving the Anderson impurity model (AIM) defined by $\hat{\mathcal{G}}$ and $U$ gives the interacting Green's function
\begin{equation}
\hat{\mathcal{G}}(i\omega_n), U \stackrel{AIM}{\longrightarrow} \hat{{G}}_{imp}(i\omega_n).
\end{equation}
This is numerically the most involved step; we employ the continuous-time quantum Monte Carlo method \cite{CTQMC} in the w2dynamics implementation\cite{w2d} to this end.
(iv) Applying the impurity Dyson equation a second time gives the self-energy
\begin{eqnarray}
\hat{\Sigma}(i\omega_n) = \hat{\mathcal{G}}^{-1}(i\omega_n)-\hat{G}_{imp}^{-1}(i\omega_n).
\end{eqnarray}
In the DMFT self-consistency cycle (``DMFT cycle''), the obtained self-energy is used again in step (i) to recalculate the local Green's function, until convergence is achieved. A ``one-shot'' DFT+DMFT calculation ends after a full ``DFT cycle'' and one subsequent ``DMFT cycle''; physical quantities, e.g., the spectral function or susceptibilities, are extracted at this point. In both charge self-consistent (CSC) and \sig{} DFT+DMFT one instead goes back to the DFT part, as discussed in the following.
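Steps (i)--(iv) can be condensed into the following schematic loop. This is a toy illustration, not the w2dynamics interface; the impurity solver is passed in as a hypothetical callable (CT-QMC in our actual calculations), and a mixing (under-relaxation) of the self-energy is included.

```python
import numpy as np

def dmft_cycle(Hk, mu, iw, Sigma0, solve_impurity, Sigma_dc=0.0,
               n_iter=20, mix=0.5):
    """Schematic DMFT loop.  Hk: (n_k, n_orb, n_orb) Wannier Hamiltonian;
    iw: complex Matsubara frequencies i*omega_n; Sigma0: initial self-energy
    of shape (n_w, n_orb, n_orb); solve_impurity: G0 -> G_imp (placeholder
    for the CT-QMC solver)."""
    n_k, n_orb = Hk.shape[0], Hk.shape[1]
    eye = np.eye(n_orb)
    Sigma = np.array(Sigma0, dtype=complex)
    for _ in range(n_iter):
        # (i) k-integrated lattice Dyson equation -> local Green's function
        G_loc = np.zeros_like(Sigma)
        for k in range(n_k):
            for n, w in enumerate(iw):
                G_loc[n] += np.linalg.inv((w + mu)*eye - Hk[k]
                                          - Sigma[n] + Sigma_dc)
        G_loc /= n_k
        # (ii) impurity Dyson equation -> non-interacting impurity G
        G0 = np.linalg.inv(Sigma + np.linalg.inv(G_loc))
        # (iii) solve the Anderson impurity model
        G_imp = solve_impurity(G0)
        # (iv) impurity Dyson equation again -> new self-energy, with mixing
        Sigma = mix*(np.linalg.inv(G0) - np.linalg.inv(G_imp)) + (1 - mix)*Sigma
    return Sigma, G_loc
```

With the trivial stand-in solver `G_imp = G0`, the loop converges to the non-interacting limit, which provides a simple sanity check of the two Dyson equations.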
\subsection{Recalculation of the charge density}
\label{Sec:CSC}
For the \sig{} approach, we now go one step further: we construct not only a new electronic charge density (as has been done before in CSC calculations) but also a new exchange-correlation potential for the correlated subspace. The total charge density is separable into two parts: (i) the correlated part, $\rho^c(\br)$, formed by the correlated orbitals (typically the $d$- or $f$-orbitals), and (ii) the non-interacting part, $\rho^{\text{rest}}(\br)$, formed by the rest of the system, i.e.:
\begin{eqnarray}
\rho(\br)=\rho^c(\br)+\rho^{\text{rest}}(\br).
\end{eqnarray}
Including the DMFT correlations, $\rho^c(\br)$ can be calculated from the local DMFT Green's function as follows:
\begin{eqnarray}\label{dn}
\rho^c({\bf r})&=&\frac{1}{n_{\bf k}}\sum_{{\bf k},\alpha\alpha'} \braket{{\bf r}|w_{\alpha{\bf k}}} N^{\mathcal{W}}_{\alpha\alpha'}({\bf k})\braket{w_{\alpha'{\bf k}}|{\bf r}}\;.
\end{eqnarray}
Here, $N^{\mathcal{W}}_{\alpha\alpha'}({\bf k})=\langle c^\dagger_{\alpha{\mathbf k}}c^{\phantom{\dagger}}_{\alpha'{\mathbf k}} \rangle$ is the expectation value of the occupation operator in the basis of the localized Wannier orbitals $\alpha$, $\alpha'$. It can be calculated directly from the equal-time value (i.e., the Matsubara sum) of the corresponding DMFT Green's function $\hat{G}$, which is again a matrix with respect to the orbitals. For a faster convergence of the Matsubara sum, it is advisable to express $\hat{N}^{\mathcal{W}}$ as
\begin{eqnarray}\label{nw}
\hat{N}^{\mathcal W}({\bf k})&=&\frac{1}{\beta}\sum_n[\hat{G}({\bf k},i\omega_n)-\hat{G}^{*}({\bf k},i\omega_n)]+ \hat{f}(\tk).
\end{eqnarray}
Here, the high-frequency behavior of $\hat{G}$ is captured by a model Green's function $\hat{G}^{*}$, and $\hat{f}$ provides the analytical frequency sum of $\hat{G}^{*}$:
\begin{eqnarray}
\hat{G}^{*}({\bf k},i\omega_n)&=&[i\omega_n-\hat{h}(\tk)]^{-1},\\
\hat{h}(\tk)&=&[-\mu+\hat{H}^{\mathcal{W}}_{KS}(\tk)+\hat{\Sigma}(\infty)-\hat{\Sigma}^{dc}],\\
\hat{f}(\tk)&=&\frac{1}{2}\big(1-{\rm tanh}[\frac{\beta}{2}\hat{h}(\tk)]\big)
\end{eqnarray}
Note that $\hat{H}^{\mathcal{W}}_{KS}$ is, in general, not diagonal in the Wannier representation. To calculate the analytical sum $\hat{f}$, we diagonalize $\hat{h}(\tk)$. If $v_{\alpha i}$ denotes the $i$-th eigenvector and $w_i$ the $i$-th eigenvalue of $\hat{h}(\tk)$, we get
\begin{eqnarray}
f'_i(\tk)&=&\frac{1}{2}\big(1-{\rm tanh}[\frac{\beta}{2}w_i(\tk)]\big)\\
\hat{f}_{\alpha \alpha'}(\tk)&=&v_{\alpha i}f'_i(\tk)(v_{\alpha' i})^* \;.
\end{eqnarray}
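This tail-corrected frequency sum can be sketched numerically as follows (our illustration, not the production code; for the runnable check we use the non-interacting limit, where $\hat{G}$ coincides with the model $\hat{G}^{*}$ and the sum reduces to the exact Fermi function $\hat{f}$):

```python
import numpy as np

def fermi_matrix(h, beta):
    """f(h) = (1 - tanh(beta*h/2))/2, evaluated via eigendecomposition of h."""
    w, v = np.linalg.eigh(h)
    return (v * (0.5*(1.0 - np.tanh(0.5*beta*w)))) @ v.conj().T

def occupation_matrix(G_pos, h, beta, iw_pos):
    """N^W = (1/beta) sum_n [G - G*] + f(h).

    The sum runs over positive Matsubara frequencies; negative frequencies
    enter via the Hermitian conjugate, since G(-iw) = G(iw)^dagger."""
    N = fermi_matrix(h, beta).astype(complex)
    eye = np.eye(h.shape[0])
    for n, w in enumerate(iw_pos):
        diff = G_pos[n] - np.linalg.inv(w*eye - h)   # G minus model G*
        N += (diff + diff.conj().T)/beta
    return N
```

Because the difference $\hat{G}-\hat{G}^{*}$ decays faster than $1/(i\omega_n)$, the truncated sum converges quickly, which is the point of the tail correction.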
The operator $\hat{N}^{\mathcal{W}}$ is then transformed to the Bloch basis utilizing the unitary and disentanglement matrices, $\hat{U}(\tk)$ and $\hat{V}(\tk)$ (the two equations below are without and with disentanglement, respectively):
\begin{eqnarray}
\hat{N}({\bf k}) &=& \hat{U}({\bf k}) \hat{N}^{\mathcal{W}}({\bf k})\hat{U}^{{\dagger}}({\bf k}) \\ \label{nk1}
\hat{N}({\bf k}) &=& \hat{V}({{\bf k}})\hat{U}({\bf k})\hat{N}^{\mathcal{W}}({\bf k}) \hat{U}^{\dagger}({\bf k})\hat{V}^{\dagger}({\bf k})\label{nk2}
\end{eqnarray}
From this, the correlated charge density is finally obtained as:
\begin{equation}\label{rhor}
\rho^c({\bf r})=\frac{1}{n_{\bf k}}\sum_{\bf k} \sum_{\nu\nu'=1}^{{\mathcal C}^o}D^{{\bf k}}_{\nu'\nu}({\bf r}) N_{\nu\nu'}({\bf k})
\end{equation}
The remaining density $\rho^{\text{rest}}(\br)$ is calculated within DFT and added to $\rho^c({\bf r})$ to obtain the total electronic charge density.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{fig1.jpg}
\caption{\label{cycle} Schematic representation of the \sig{} DFT+DMFT. In a ``one-shot'' DFT+DMFT calculation, the DFT Hamiltonian is not updated and the DFT and DMFT cycles close individually, i.e., we have the orange and green arrows in the schematic. In a \sig{} DFT+DMFT calculation, neither DFT nor DMFT is iterated individually. Instead, both steps are closed together, i.e., we have the green and the purple arrows in the schematic, but not the orange ones.}
\end{figure*}
\subsection{Recalculation of the exchange-correlation potential from the DMFT self-energy}
\label{Sec:SCS}
\par The next step is the key aspect of the \sig{} DFT+DMFT approach: recalculating the exchange-correlation potential for the next iteration on the DFT side. The Hartree potential $V_H({\bf r})$ is calculated as usual from the total density, including the effect of electronic correlations on the density. The exchange-correlation (XC) potential for the next step is, however, not derived from the total charge density (e.g., using the GGA functional) as in previous CSC DFT+DMFT calculations. Instead, we adopt the following assumption: if the correlated orbitals are fairly localized, the XC potential can be divided into two parts:
\begin{equation}\label{vs}
\ensuremath{\hat{V}^{DFT}_{\text{xc}}}~ \approx \ensuremath{\hat{V}^c_{\text{xc}}}~ + \ensuremath{\hat{V}^{\text{rest}}_{\text{xc}}}~.
\end{equation}
Here, \ensuremath{\hat{V}^c_{\text{xc}}}~ corresponds to the XC potential of the correlated subspace and \ensuremath{\hat{V}^{\text{rest}}_{\text{xc}}}~ accounts for the XC of the rest of the system. To determine these two XC potentials, we first calculate $\ensuremath{\hat{V}^{DFT}_{\text{xc}}}~$ and $\ensuremath{\hat{V}^c_{\text{xc}}}~$ from the corresponding densities $\rho({\bf r})$ and $\rho^c({\bf r})$, respectively, employing the GGA functional for both densities.
From these, we also obtain the difference $\ensuremath{\hat{V}^{\text{rest}}_{\text{xc}}}~=\ensuremath{\hat{V}^{DFT}_{\text{xc}}}~-\ensuremath{\hat{V}^c_{\text{xc}}}~$. This procedure has the following advantage: the total XC potential, \ensuremath{\hat{V}^{DFT}_{\text{xc}}}~, calculated from $\rho({\bf r})$ includes the core-valence interaction and the interaction between the correlated and uncorrelated subspaces. Even after subtraction of $\ensuremath{\hat{V}^c_{\text{xc}}}~$, $\ensuremath{\hat{V}^{\text{rest}}_{\text{xc}}}~$ still contains this part of the interaction; only the XC potential stemming from the interaction within the correlated subspace is removed. Similar subtractions of the $d$-contributions to the exchange-correlation potential have been done before \cite{Nekrasov2012,Nekrasov2013,Haule2015}, but not the next step: taking the DMFT self-energy as the exchange-correlation potential instead.
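The splitting of the XC potential can be sketched on a real-space grid as follows. As a stand-in for the GGA functional used in our calculations we take, purely for illustration, the simple LDA exchange potential $v_x(\rho)=-(3\rho/\pi)^{1/3}$ (Hartree atomic units); the subtraction logic is the same for any (semi)local functional.

```python
import numpy as np

def v_x_lda(rho):
    """LDA exchange potential, used here as a stand-in for the GGA functional."""
    return -np.cbrt(3.0*rho/np.pi)

def split_xc(rho_tot, rho_c):
    """V_xc^rest = V_xc[rho] - V_xc[rho^c], cf. the splitting above.

    rho_tot, rho_c: total and correlated charge densities on the same grid."""
    return v_x_lda(rho_tot) - v_x_lda(rho_c)
```

Note that, due to the nonlinearity of the functional, this difference is not the XC potential of $\rho^{\text{rest}}$ alone; it deliberately retains the cross terms between the correlated and uncorrelated densities.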
\par That is, we employ a new XC potential within the correlated subspace, \nvd, which is given by the (linearized) DMFT self-energy, $\Sigma$. By construction, $\Sigma$ is local (in Wannier space) and represented in Matsubara frequencies. Because of this frequency-dependence, $\Sigma$ cannot be employed directly in the one-particle Kohn-Sham equation.
As we focus on the low energy part of the spectrum, we linearize the self-energy around zero frequency
\begin{equation}
\hat{\Sigma}^{lin}(\omega)=\hat{\Sigma}(0)+\omega\frac{\partial \hat{\Sigma}}{\partial \omega}\bigg|_{\omega=0}.
\label{Eq:Siglin}
\end{equation}
This linearized self-energy is still frequency-dependent and hence still cannot be included in the Kohn-Sham equations, which are based on a frequency-independent Hamiltonian.
But thanks to the linearized self-energy, we can use the fact that the relevant self-energy, when determining the
eigenvalues of the Kohn-Sham equation, is taken for a particular frequency:
the frequency $\omega$ that is equal to the Kohn-Sham eigenvalue.
Hence we can approximate Eq.~(\ref{Eq:Siglin}) by a Hermitian operator
\begin{equation}
\hat{\Sigma}^{lin}_{\alpha,\alpha}(\tk)=\hat{\Sigma}_{\alpha,\alpha}(0)+\epsilon_{\alpha}(\tk)\frac{\partial \hat{\Sigma}_{\alpha,\alpha}}{\partial \omega}\bigg|_{\omega=0}.
\label{Eq:Siglin2}
\end{equation}
That is, the $\omega$ dependence of the DMFT self-energy in Eq.~(\ref{Eq:Siglin}) is replaced by a $\mathbf k$-dependence in Eq.~(\ref{Eq:Siglin2}), taking $\omega=\epsilon_{\alpha}(\tk)$ at the most important frequency, namely the quasiparticle energy.
One further technical complication is that we do not have the self-energy on the real-frequency axis. Hence, we instead estimate the (constant plus) linear behavior as follows:
\begin{eqnarray}
{\rm Re}\hat{\Sigma}(\omega\to 0)= {\rm Re}[\hat{\Sigma}(i\omega_n)]\big|_{\omega_n \to 0^+} \\
\frac{\partial {\rm Re}\hat{\Sigma}(\omega)}{\partial \omega}\bigg|_{\omega=0} =
\frac{{\rm Im}[\hat{\Sigma}(i\omega_n)]}{\omega_n}\bigg|_{\omega_n \to 0} \label{Eq:Sigmap}
\end{eqnarray}
For the results below we take the limit $\omega_n\rightarrow 0$ in Eq.~(\ref{Eq:Sigmap}) by simply considering the value at the lowest Matsubara frequency, but more complicated fitting procedures may be taken.
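In an orbital-diagonal layout [our assumption for this sketch, matching the indices of Eq.~(\ref{Eq:Siglin2})], the construction of the Hermitian $\hat{\Sigma}^{lin}(\tk)$ from the lowest Matsubara frequency can be written as:

```python
import numpy as np

def linearized_sigma(sigma_iw0, w0, eps_k):
    """Sigma^lin_alpha(k) = Re Sigma(i w0) + eps_alpha(k) * Im Sigma(i w0)/w0.

    sigma_iw0: self-energy at the lowest Matsubara frequency, shape (n_orb,);
    w0: lowest Matsubara frequency pi/beta; eps_k: quasiparticle energies
    eps_alpha(k), shape (n_k, n_orb).  Returns a real (n_k, n_orb) array."""
    sigma0 = sigma_iw0.real          # Re Sigma(omega -> 0)
    slope = sigma_iw0.imag / w0      # d Re Sigma / d omega at omega = 0
    return sigma0[None, :] + eps_k * slope[None, :]
```

Since ${\rm Im}\,\hat{\Sigma}(i\omega_n)<0$ for a Fermi liquid, the slope is negative, corresponding to a quasiparticle weight $Z<1$.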
We also have to take into account that the DMFT self-energy contains a Hartree contribution. This has to be subtracted from the XC potential since it is already included in the effective KS potential, i.e.,
\begin{equation}\label{sigp}
\hat{\Sigma'}^{\mathcal{W}}(\tk)=\hat{\Sigma}^{lin}(\tk)-\hat{\Sigma}^H \;.
\end{equation}
Here, one can deduce the Hartree term of DMFT as
\begin{equation}
\Sigma^{H\uparrow}_i=Un_{i\downarrow}+\sum^{i'\neq i}_{i'}[(U-2J)n_{i'\downarrow}+(U-3J)n_{i'\uparrow}]
\end{equation}
from the spin-orbital-resolved occupations $n_{i'\uparrow}$ of the Wannier orbitals, and the equivalent formula for the opposite spin.
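For concreteness, this Kanamori-type Hartree term can be evaluated from the spin-orbital occupations as in the following sketch (our illustration):

```python
import numpy as np

def hartree_sigma(n_up, n_dn, U, J):
    """DMFT Hartree term per orbital: for spin up in orbital i,
    Sigma^H = U*n_{i,dn} + sum_{i' != i} [(U-2J)*n_{i',dn} + (U-3J)*n_{i',up}],
    and the equivalent formula for spin down."""
    n_orb = len(n_up)
    sig_up = np.empty(n_orb)
    sig_dn = np.empty(n_orb)
    for i in range(n_orb):
        sig_up[i] = U*n_dn[i]
        sig_dn[i] = U*n_up[i]
        for ip in range(n_orb):
            if ip == i:
                continue
            sig_up[i] += (U - 2*J)*n_dn[ip] + (U - 3*J)*n_up[ip]
            sig_dn[i] += (U - 2*J)*n_up[ip] + (U - 3*J)*n_dn[ip]
    return sig_up, sig_dn
```

For example, for a paramagnetic $t_{2g}$ shell with one electron (occupation $1/6$ per spin-orbital) and the values $U=9.5$\,eV, $J=0.75$\,eV used below, this gives $\Sigma^H=40/6\approx 6.67$\,eV per spin-orbital.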
When we recalculate the Kohn-Sham states with the linearized DMFT self-energy,
we need the exchange correlation potential in real space $\br$. Hence,
we now have to transform the (linearized) self-energy back to the Bloch basis utilizing the previously obtained transformation matrices (the formulas are without and with disentanglement, respectively):
\begin{eqnarray}\label{transig}
\hat{\Sigma'}({\bf k}) &=& \hat{U}({\bf k}) {\hat{\Sigma'}}^{\mathcal{W}}(\tk) {\hat{U}}^{\dagger}({\bf k}) \\
\hat{\Sigma'}({\bf k}) &=& \hat{V}({{\bf k}})\hat{U}({\bf k}) {\hat{\Sigma'}}^{\mathcal{W}}(\tk) {\hat{U}}^{\dagger}({\bf k})\hat{V}^{\dagger}({\bf k})
\end{eqnarray}
Finally, the exchange-correlation potential within the correlated subspace can be written, on a radial grid, as
\begin{equation}\label{genxc}
\mathscr{V}^c_{xc}(\br)=\frac{1}{n_{\tk}}\sum_{\tk}\sum_{\nu\nu'=1}^{{\mathcal C}^o}D^{{\tk}}_{\nu'\nu}({\br})\Sigma'_{\nu\nu'}(\tk).
\end{equation}
In the Kohn-Sham equation we henceforth employ the XC potential $\ensuremath{\hat{V}^{\text{rest}}_{\text{xc}}}~+\mathscr{V}^c_{xc}(\br)$, i.e., the following one-particle Hamiltonian instead of Eq.~(\ref{Eq:KS}):
\begin{eqnarray}\label{hamnew}
\hat{H}_{KS} &=& \hat{T} + \hat{V}_{ext}+\hat{V}_{H}+\ensuremath{\hat{V}^{\text{rest}}_{\text{xc}}}~+\hat{\mathscr{V}}^c_{xc}.
\end{eqnarray}
\subsection{Exact double counting subtraction}
\label{Sec:dc}
In the \sig~formalism, the part of the self-energy used as exchange correlation within the correlated subspace is now explicitly defined through Eq.~(\ref{hamnew}).
One can hence subtract this contribution exactly when calculating the DMFT Green's function in Eq.~(\ref{locg}), simply by setting
\begin{eqnarray}\label{Eq:SigmaDC}
\hat{\Sigma}^{dc}(\tk)=\hat{\Sigma}^{lin}(\tk),
\end{eqnarray}
where $\hat{\Sigma}^{lin}(\tk)$ comes from the previous iteration. Let us remind the reader that the $\mathbf k$-dependence of the double counting originates from the linearization, where we approximated $\omega\approx\epsilon_{\alpha}(\tk)$ when going from Eq.~(\ref{Eq:Siglin}) to Eq.~(\ref{Eq:Siglin2}).
Let us note again that the Hartree term enters $\hat{H}_{KS}$ only once, in the form of $\hat{\Sigma}^{H}$, but not in $\hat{\mathscr{V}}^c_{xc}$, thanks to the subtraction in $\hat{\Sigma'}^{\mathcal{W}}$, Eq.~(\ref{sigp}); using $\hat{\Sigma}^{lin}$ instead of $\hat{\Sigma'}^{\mathcal{W}}$ for the double counting guarantees that the Hartree term cancels in the self-energy.
In \sig{} DFT+DMFT, the ambiguity of the double counting term is hence avoided altogether. The correlated orbitals that acquire a Coulomb interaction in DMFT obtain a linearized $\hat{\Sigma'}$ in the Kohn-Sham equation which is known exactly and can be hence subtracted as $\hat{\Sigma}^{dc}$ when going back to the DMFT side.
Indeed, after subtracting $\hat{\Sigma}^{dc}$ in Eq.~(\ref{locg}), not even the linearization approximation of the self-energy enters the DMFT Green's function any longer; it is replaced by the full, frequency-dependent DMFT self-energy. The linearization and its inclusion as $\hat{\mathscr{V}}^c_{xc}$ in the Kohn-Sham potential only serve the purpose of adjusting the Kohn-Sham wave functions and eigenvalues to the correlation effects contained in the DMFT self-energy. On the DMFT side the full self-energy is taken, and no further XC potential acts within the correlated subspace.
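This exact cancellation is easily verified numerically. In the sketch below (our illustration, with hypothetical small matrices), adding $\hat{\Sigma}^{lin}$ to the Kohn-Sham Hamiltonian and subtracting it again as $\hat{\Sigma}^{dc}$ in the lattice Dyson equation leaves the Green's function identical to that of the bare Hamiltonian with the full self-energy:

```python
import numpy as np

def lattice_g(H, Sigma_w, Sigma_dc, mu, iw):
    """Lattice Dyson equation at one k-point for a stack of frequencies:
    G(iw_n) = [(iw_n + mu) - H - Sigma(iw_n) + Sigma_dc]^{-1}."""
    eye = np.eye(H.shape[0])
    return np.array([np.linalg.inv((w + mu)*eye - H - Sigma_w[n] + Sigma_dc)
                     for n, w in enumerate(iw)])
```

With `H = H0 + Sigma_lin` and `Sigma_dc = Sigma_lin`, the linearized potential drops out exactly, so only `H0` and the full, frequency-dependent self-energy determine the Green's function.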
\subsection{Flow diagram of \sig{} DFT+DMFT}
\label{Sec:flow}
The full \sig{} DFT+DMFT workflow consists of the following steps, depicted schematically in Fig.~\ref{cycle}:
\begin{itemize}
\item A converged charge density is obtained within DFT to have a reasonable electronic structure to start with (upper left part of Fig.~\ref{cycle}). The target bands are identified as a prelude for the Wannier projection. In the following \sig{} DFT+DMFT cycles (green arrows in Fig.~\ref{cycle}), a single DFT iteration is performed with an updated DFT Kohn-Sham Hamiltonian (i.e., without the orange arrow in the upper left part). The XC potential for the correlated sub-space is supplemented with the one obtained from the DMFT self-energy as discussed in Section \ref{Sec:SCS}. For this step, we employ the modified Wien2k program package.
\item Maximally localized Wannier functions are computed within the target subspace as explained in Eqs. (\ref{wan0})-(\ref{wan2}) (upper right section of Fig.~\ref{cycle}). The DFT Kohn-Sham Hamiltonian is transformed into the Wannier basis following Eq.\ (\ref{ham}). We employ wien2wannier \cite{wien2wannier} and Wannier90 \cite{wanrev} to this end.
\item A single DMFT cycle is performed using w2dynamics\cite{w2d} (lower right part of Fig.~\ref{cycle}). This provides the self-energy $\hat \Sigma$, the local Green's function $\hat{G}$, and the DMFT chemical potential $\mu$, which is fixed by the particle number. Note that at this point $\hat{\Sigma}^{lin}$ is used as the double-counting term and $\mu$ is calculated accordingly. Moreover, for practical purposes, it is beneficial to start from a converged ``one-shot'' DFT+DMFT calculation. Further, a mixing (under-relaxation) between the old and new DMFT self-energy is employed.
\item The correlated charge distribution as well as the XC potential are updated (lower left part of Fig.~\ref{cycle}). At first, $\hat{N}^{\mathcal W}({\bf k})$ is calculated from the DMFT Green's functions, $\hat{G}$ as in Eq.\ (\ref{nw}). As described in Eqs.\ (\ref{nk1})-(\ref{nk2}), $\hat{N}^{\mathcal W}({\bf k})$ is transformed back to the DFT eigenbasis to calculate the correlated charge distribution $\rho^c({\bf r})$ in real space. In a similar fashion, the XC potential $\hat{\mathscr{V}}^c_{xc}$ in the correlated sub-space is calculated from the DMFT self-energy through Eq. (\ref{sigp}) and transformed back to DFT eigenbasis as presented in Eqs.\ (\ref{transig}), on a radial grid by employing Eq. (\ref{genxc}).
\item The new DFT+DMFT charge density is compared with the old density. If the difference does not meet the convergence criterion, the new density is mixed with the old one and serves as input for the next iteration. The charge density of the correlated orbitals, $\rho^c({\bf r})$, is then used to calculate $\hat{V}^c_{xc}$, which provides $\hat{V}^{rest}_{xc}$ as described in Eq.~(\ref{vs}). The exchange-correlation potential in the KS Hamiltonian is updated with $\hat{V}^{rest}_{xc}$ and $\hat{\mathscr{V}}^c_{xc}$ according to Eq.~(\ref{hamnew}). At the same time, the DMFT self-energies of two consecutive iterations are compared for convergence.
\end{itemize}
\section{Results}
\label{sec:results}
The \sig{} DFT+DMFT scheme is applied to SrVO$_3$, a testbed material for methodological developments for strongly correlated electron systems. The cubic perovskite structure of SrVO$_3${} results in three degenerate t$_{2g}$~ bands near the Fermi energy, which host a single electron, and unoccupied $e_g$ bands. Bulk SrVO$_3${} exhibits strongly correlated metallic behavior, and the electronic features are mostly governed by the partially filled t$_{2g}$~ bands. In DFT+DMFT schemes, one typically treats the isolated t$_{2g}$~ bands with explicit electron correlation in DMFT---coined the ``$d$-only'' model. As a consequence of the DMFT correlations, the wide DFT bands are renormalized by a factor of about 0.5, yielding a strongly correlated metal. Additional lower and upper Hubbard peaks appear at -1.7 eV and 2.5 eV, respectively; see, e.g., Refs.~\onlinecite{svoexptheo,Pavarini03,Liebsch03a,Nekrasov05a,Nekrasov05b} for previous DFT+DMFT calculations. In the energy range of the latter, also the $e_g$ bands are located. The agreement of the t$_{2g}$~ spectral function with experiment is reasonably good\cite{svoexptheo}.
SrVO$_3${} has also been studied in GW+DMFT by various groups \cite{Tomczak12,P7:Casula12b,Taranto13,Tomczak14,bohenke,GWDMFT3}. GW+DMFT yields a somewhat better position of the lower Hubbard band\cite{Tomczak12,Taranto13,Tomczak14} but does not solve
the problem with the wrong position of the oxygen $p$-bands \cite{Tomczak12,Tomczak14,GWDMFT3}.
One can include non-interacting $p$-bands within DMFT in a so-called ``$d$+$p$'' calculation. However, the energy difference between the $d$ and $p$ bands derived {\em ab initio} in DFT is underestimated. Consequently, the hybridization between the $d$ and $p$ orbitals is too strong, and the effective $p$ orbitals have a significant $d$ contribution. This in turn means that the $d$ occupation is much larger than one.
A $d$+$p$ calculation with interaction in the t$_{2g}$~ bands and no interaction in the uncorrelated $p$ bands hence yields only a weakly correlated solution with too wide t$_{2g}$~ bands around the Fermi energy and no Hubbard bands \cite{Held02,Wang07,haule2010,Parragh2013,Haule2014,Dang2014,Hansmann2014}.
\begin{figure}[!h]
\includegraphics[width= 0.5\textwidth] {akw_oneshot.pdf}
\caption{\label{akw0} (Color online) One-shot DFT+DMFT for SrVO$_3${} at $U_{dd}$=9.5 eV and $J_{dd}$=0.75 eV. The {\bf k}-dependent spectral function is plotted together with the DFT band structure (white dotted lines) along high symmetry lines of the Brillouin zone. }
\end{figure}
A $d$-$p$ interaction \cite{Hansmann2014} or
an ad-hoc ``double-counting'' term \cite{Dang2014,zhong}, which corrects the onsite energies of the $p$-orbitals to their experimental position, needs to be introduced in order to obtain a proper Hubbard peak below the Fermi energy, as observed in experiment. Let us note that the origin of this peak has been debated: within a GW+extended DMFT calculation\cite{bohenke} it has been identified as a plasmon peak, which is however much less pronounced than in experiment, while Backes \textit{et al.\/}\cite{ovac} identified it as coming from oxygen vacancies in a GW+DMFT framework.
Altogether, this leaves $d$+$p$ DFT+DMFT calculations in a quite unsatisfactory state, relying on parameter tuning or ad-hoc corrections of the $p$-levels or of the exchange-correlation potential to obtain the correct position of the oxygen $p$-levels.
In our implementation, we employ instead the DMFT self-energy as the (self-consistently updated) exchange-correlation potential for the $t_{2g}$ orbitals of SrVO$_3$. That is, the GGA potential is only used for the less correlated oxygen $p$ orbitals, whereas for the correlated, localized $t_{2g}$ orbitals the local DMFT self-energy from a $d$+$p$ calculation is used. In principle, this DMFT potential should also be employed for the $e_g$ orbitals, but since these are essentially unoccupied, the DMFT self-energy would reduce to a Hartree term which is included in the GGA as well.
In Fig.~\ref{akw0}, we first present the \tk-dependent spectral function of SrVO$_3${} as obtained in a $d$+$p$ model within a standard one-shot DFT+DMFT calculation, using $U_{dd}$=9.5 eV, $J_{dd}$=0.75 eV, zero $U_{dp}$ and $U_{pp}$, and room temperature ($\beta$=40\,eV$^{-1}$). The fully localized limit (FLL) double-counting term is used here. Let us note that within a $d$+$p$ model the impurity orbitals are more localized than in a $d$-only model, which calls for larger values of the interaction parameters than in a $d$-only calculation. The specific values are chosen following Aryasetiawan \textit{et al.\/} \cite{clda} and are used for all calculations presented in this article.
The band renormalization is reasonable with $Z \sim$ 0.48. However, the $p$-bands appear around -2 eV to -7 eV, which does not agree with the experimental photoemission spectra \cite{morikawa,svoexptheo,svo_expt, svo_arpes}. As explained before, the $p$-bands have to be adjusted to describe the photoemission spectra. In SrVO$_3$, the required shift is as large as $\sim$5.0 eV \cite{zhong}, which, combined with the large $U_{dd}$ (9.5 eV) of Ref.~\onlinecite{clda}, would even result in an insulating solution.
Next, we turn to the \sig{} DFT+DMFT, which does not necessitate such an ad-hoc shift and treats SrVO$_3${} in a completely {\it ab-initio} manner.
As mentioned in Section~\ref{sec:method}, we start from a converged ``one-shot'' DFT+DMFT self-energy (i.e., from the solution of Fig.~\ref{akw0}).
Upon \sig{} self-consistency, we however obtain the solution shown in Fig.~\ref{akw2}.
With the linearized DMFT self-energy as input, the Kohn-Sham equations in the DFT part of the loop now reproduce the DMFT spectral function very well, which is not the case in one-shot (Fig.~\ref{akw0}) or conventional CSC DFT+DMFT calculations.
This is not surprising since the Kohn-Sham equations in the \sig{} DFT+DMFT are especially adjusted to the electronic correlations. Indeed the only difference between the DMFT self-energy on the DMFT side and the DMFT-derived exchange correlation functional on the Kohn-Sham side is the linearization procedure.
There are deviations between the DFT band structure and the DMFT spectrum at larger frequencies, where we are simply outside the linear regime of the self-energy. Further, the DMFT spectral function shows (hardly visible) Hubbard bands, which lead to a slightly different chemical potential. With more complicated, e.g., piecewise-linear, forms of the self-energy-derived exchange-correlation potential, one should be able to remedy this in the future. But we do not expect that the actual physical quantity, i.e., the DMFT spectral function, will be strongly affected by this.
Let us now turn to the position of the $p$-bands relative to the $t_{2g}$-bands: they are shifted to much lower energies. The oxygen positions now agree much better with the experimental spectra, without any adjustable parameters (as $U$ is obtained from Ref.~\onlinecite{clda}); see, e.g., Fig.~\ref{aw1}. Note that with \sig{} self-consistency the Kohn-Sham and DMFT $p$-bands agree very well. Interestingly, over the \sig{} iterations also the dispersion of the $p$-bands changes slightly compared to that of DFT.
\begin{figure}
\centering
\includegraphics[width= 0.5\textwidth] {gkw_err.pdf}
\caption{\label{akw2} (Color online) Same as Fig.~\ref{akw0}, but now with \sig{}; the DFT bandstructure (white dotted lines) is obtained with the linearized DMFT self-energy at self-consistency. Note that the DFT bandstructure nicely follows the DMFT spectral function at low frequencies. }
\end{figure}
The scenario is further clarified by inspecting the \tk-integrated spectral function, Fig.~\ref{aw1}, which compares our \sig{} spectra with the photoemission spectroscopy (PES) of Morikawa \textit{et al.\/}\cite{morikawa}.
In Fig.~\ref{aw1}, the lower and upper Hubbard peaks are not very pronounced in \sig{} DFT+DMFT. They are however present, and can also be seen at around $\pm 2\,$eV when zooming into Fig.~\ref{akw2}. These positions of the Hubbard bands agree with the PES spectrum, but their weight is smaller. In this respect, please keep in mind that more bulk-sensitive PES \cite{svoexptheo} has a larger weight in the quasiparticle peak relative to the lower Hubbard band, similar to, though not as pronounced as, our \sig{} DFT+DMFT calculation. Further, there is additional spectral weight from the $e_g^\sigma$ orbitals (not included in our calculations as these are unoccupied), which should be located slightly above our upper Hubbard band, as was already discussed in the very first DFT+DMFT calculations \cite{svoexptheo}.
The main improvement with respect to previous DFT+DMFT calculations is that we also obtain an excellent description of the position of the oxygen $p$ orbitals without any adjustable parameter or ad-hoc $p$-$d$ shift. This includes their width and weight relative to the $t_{2g}$-bands and, in particular, their splitting into two subgroups: out of nine oxygen $p$ orbitals, the first branch consists of six orbitals peaked at -5.0 eV, while the remaining three peak at -6.1 eV. A substantial shift of the $p$-orbitals in the right direction has already been obtained by taking the $d$-electron contribution out of the exchange-correlation potential \cite{Nekrasov2012,Nekrasov2013,Haule2015}. But replacing it by the DMFT self-energy in \sig{} DFT+DMFT is not only more appealing from a fundamental point of view, it also gives a much larger shift, which is needed to obtain the correct (experimental) oxygen position.
\begin{figure}
\centering
\vspace {-20pt}
\includegraphics[width= 0.5\textwidth] {expt-theo-comp.pdf}
\vspace {-20pt}
\caption{\label{aw1} (Color online) Comparison of the calculated spectral function and the experimental photoemission spectra (PES) of Morikawa \textit{et al.\/} \cite{morikawa}. The black circles/dots show the experimental results. The red solid line represents the $V$-t$_{2g}$~ spectrum, while the blue and green solid lines correspond to O $2p$. $U_{dd}$=9.5 eV and $J_{dd}$=0.75 eV. }
\vspace {-10pt}
\end{figure}
\section{Summary and Outlook}
\label{sec:summary}
We have introduced the \sig{} DFT+DMFT method, which is free from any double-counting problem, and applied it to SrVO$_3$. It yields largely improved results, in particular with regard to the position of the oxygen $p$-bands, which has been a major shortcoming of previous DFT+DMFT calculations.
The essential step is to take the DMFT self-energy as the exchange-correlation potential of the correlated orbitals in the Kohn-Sham equation of the ``DFT step''. As the latter is a one-particle equation, we must employ a self-energy linearized at the proper quasiparticle energy, in a similar manner as in QSGW \cite{QSGW1,QSGW2}.
However, when going back to the ``DMFT step'' this self-energy is readily replaced by the correct, frequency-dependent DMFT self-energy, using the many-body Dyson equation. Hence, solving the Kohn-Sham equations with the linearized self-energy can be seen as an intermediate step, only to adjust the one-particle orbitals to the actuality of electronic correlations. Thereafter the self-energy with its full frequency dependence is taken again.
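To make the linearization step concrete, the following minimal sketch (not the implementation used in this work) solves the quasiparticle condition $\omega = \varepsilon_0 + \Sigma(\omega)$ for a model one-pole self-energy and evaluates the static, linearized potential and quasiparticle weight at that energy; the bare level, the pole parameters and the bisection solver are illustrative assumptions.

```python
# Minimal sketch of the quasiparticle linearization used in the "DFT step":
# a frequency-dependent self-energy Sigma(w) is replaced by its value at the
# quasiparticle energy w_qp, which solves w = eps0 + Sigma(w).
# The one-pole Sigma and all parameters below are illustrative assumptions.

eps0 = 0.3          # bare one-particle level (eV), hypothetical
a, w0 = 0.8, 2.0    # strength and position of the model self-energy pole

def sigma(w):
    """Model (real, static-limit) self-energy with a single pole at w0."""
    return a / (w - w0)

def qp_energy(lo=-10.0, hi=w0 - 1e-6, steps=200):
    """Bisection for the quasiparticle condition w - eps0 - Sigma(w) = 0."""
    f = lambda w: w - eps0 - sigma(w)
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

w_qp = qp_energy()
sigma_lin = sigma(w_qp)                   # static potential fed to Kohn-Sham
Z = 1.0 / (1.0 + a / (w_qp - w0) ** 2)    # quasiparticle weight 1/(1 - dSigma/dw)
print(w_qp, sigma_lin, Z)
```

By construction, the level found by the one-particle equation with the static potential, $\varepsilon_0 + \Sigma(\omega_{\rm qp})$, reproduces the quasiparticle energy of the frequency-dependent self-energy.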
This is not fully correct, since for the less correlated orbitals we still take the plain vanilla GGA potential of DFT. One might be tempted to extend the correlated subspace to all orbitals, using a DMFT self-energy also for these. Indeed, this is what is done in QSGW. However, we believe that, in contrast to QSGW, this is not adequate for \sig{} DFT+DMFT, since the local DMFT self-energy should only provide a proper exchange-correlation potential for the more localized orbitals, typically the $d$- or $f$-orbitals of a transition metal oxide, lanthanide or actinide. For these orbitals the local correlations as described in DMFT are prevalent. For the more extended orbitals, e.g., the $p$-orbitals, on the other hand, the exchange part is more important. It can be described to a large extent by the GGA---at least for metals---but not by DMFT.
Using a combination of the DMFT self-energy for the correlated orbitals and GW for the less correlated orbitals, and feeding both back to the Kohn-Sham equation in a linearized form, is at least appealing, and possibly even better than the \sig{} DFT+DMFT method, pending further implementation and examination, which are beyond the scope of the present paper. An even further step is to also include non-local correlations beyond $GW$, which is possible using the {\em ab initio} dynamical vertex approximation (D$\Gamma$A) \cite{AbinitioDGA,DGA,RMP}, and to feed the obtained non-local self-energy back to the Kohn-Sham equation in the same way as we do in the present paper for the local DMFT self-energy.
The decisive step, however, has already been taken in the present paper: using a linearized DMFT-like self-energy in the Kohn-Sham equation. In other words, we calculate and readjust the Kohn-Sham states so that they most closely resemble the correlated DMFT spectrum. We have shown that this \sig{} DFT+DMFT method not only works properly, but also yields largely improved results compared to previous $d$+$p$ calculations.
\acknowledgments
We thank Josef Kaufmann for his devoted support regarding the maximum entropy analytical continuation; Markus Aichhorn, Elias Assmann and Peter Blaha for scientific discussions on implementing charge self-consistency in Wien2k.
This work has been supported by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013)/ERC grant agreement n. 306447 (AbinitioD$\Gamma$A). SB acknowledges the support of Science Foundation Ireland [19/EPSRC/3605] and the Engineering and Physical Sciences Research Council [EP/S030263/1]. The computational results presented have been achieved using the Vienna Scientific Cluster (VSC).
\section{Introduction}
The development of theoretical approaches to treat light-matter interactions is nowadays a very active and productive field across various domains, from chemical physics to condensed-matter physics, from biophysics to materials science. Perhaps the most innovative studies concern phenomena exhibiting strong coupling between the radiation field and matter, which require special frameworks even beyond the standard classical description of light~\cite{Ebbesen_ACR2016, Rubio_PRA2018, Rubio_NRC2018, Tokatly_EPJB2018, Maitra_EPJB2018, Rubio_JCTC2017, Maitra_PRL2019}. In other situations, for instance when an external field initiates and drives the highly non-equilibrium dynamics of molecular systems or solids, it is necessary to develop non-perturbative approaches to adequately describe light-matter interactions~\cite{Nitzan_JPCM2017, Rubio_JPB2020, Yang_JPB1995, Posthumus_RPP2004, Maitra_PCCP2017, Maitra_PRL2015, Suzuki_PRA2014, Nauts_JCP2007}.
In photochemistry, the coupling of a molecular system with light is routinely used to steer or drive chemical reactions~\cite{Banares_PCCP2015}. Molecular dynamics simulations of photochemical reactions, however, do not always consider the external field explicitly in the calculations, even though some examples have been reported~\cite{Gonzelez_JCTC2011, Martinez_JCP2016, Curchod_JPCA2019, Vrakking_PCCP2011, Gonzalez_FD2011}. In condensed-matter physics, an emerging frontier is the optical control of material properties~\cite{Oka_AnnRevCondMat2018}, achieved by shining strong and frequency-selective light fields on solid-state materials~\cite{Nicoletti_AdvOpt2016} and leading to a variety of light-induced collective phenomena, from transient superconductivity~\cite{Fausti_Science2011} to non-trivial topological phases~\cite{McIver_NatPhys2020}.
In recent years, in order to simulate and interpret these phenomena, some interest has been devoted to the integration of Floquet theory for time-periodic Hamiltonians with techniques for excited-state, nonadiabatic dynamics. The Floquet treatment of the periodicity induced on a quantum system by an external field effectively maps the time-dependent Schr\"odinger equation into an eigenvalue problem for Floquet states and their associated quasi-energies, analogously to the Bloch theorem for electrons in periodic potentials~\cite{Floquet_1880, Sambe_PRA1973, Hanggi_PhysRep98}.
In the \textsl{Floquet picture}, time-dependent processes with a periodic drive are analyzed in an extended vector space of ``physical'' electronic states and harmonics of the driving frequency. Electronic states can, therefore, be interpreted as states \textsl{dressed} by the drive harmonics. The concept of static Floquet potential energy surfaces is, thus, introduced, which generalizes the concept of static adiabatic, or Born-Oppenheimer, potential energy surfaces that are routinely used for interpreting field-free nonadiabatic processes~\cite{Curchod_WIRES2019}. Such a Floquet picture has often been employed in strong-field molecular physics, for interpreting bond-softening~\cite{Schumacher_PRL1990} and bond-hardening~\cite{Mies_PRL1992} processes, photo-dissociation~\cite{Mies_PRL1990, Taday_JPB2000, GiustiSuzor_PRA1988, Atabek_PRA1992}, dynamical alignment~\cite{Schmidt_PRA2005} and anti-alignment~\cite{Langley_PRL2001}. Recently, owing to the analogy between Born-Oppenheimer states/surfaces and Floquet dressed states/surfaces, the Floquet picture has been employed for actual simulations of driven time-dependent processes in molecules~\cite{Schmidt_PRA2017, Welsch_PRA2020, Subotnik_JCTC2020, Shalashilin_CP2018, Schmidt_PRA2016, Gonzalez_JPCA2012}. In particular, trajectory-based schemes for excited-state dynamics, where the concept of electronic potential energy surfaces that guide nuclear dynamics is of utmost importance, lend themselves naturally to being combined with the Floquet picture.
In this paper, we focus on the combination of the exact factorization of the electronic-nuclear wavefunction~\cite{EF_bookchapter_2020, Gross_PRL2010, Curchod_WIRES2019, Gross_TDDFTbook2018} with the Floquet formalism~\cite{Fiedlschuster_PhD2018, Schmidt_PRA2017}, devoting particular attention to the coupled-trajectory mixed quantum-classical (CT-MQC) algorithm~\cite{Gross_PRL2015, Gross_JCTC2016, Gross_JPCL2017, Maitra_JCTC2018, Tavernelli_EPJB2018, Agostini_EPJB2018}. CT-MQC is the numerical scheme that solves the exact-factorization equations within the quantum-classical approximation~\cite{Gross_EPL2014, Gross_JCP2014, Ciccotti_EPJB2018, Ciccotti_JPCA2020} of the nonadiabatic electron-nuclear problem, by introducing a trajectory-based solution of the nuclear dynamics as formulated within the exact factorization. In the exact factorization, the nuclear wavefunction evolves according to a standard time-dependent Schr\"odinger equation where the dynamic, fully nonadiabatic, effect of the electrons is represented by a time-dependent vector potential and a time-dependent potential energy surface (TDPES). It has been shown~\cite{Fiedlschuster_PhD2018, Schmidt_PRA2017}, in various situations where nonadiabatic effects are induced by an external time-dependent field, either a laser pulse or a continuous-wave laser, that such a TDPES very much resembles Floquet surfaces, rather than Born-Oppenheimer surfaces or quasi-static surfaces~\cite{Vrakking_PCCP2011, Suzuki_PCCP2015}. Therefore, it seems natural to employ the Floquet representation of electronic dynamics driven by an external time-periodic field in combination with CT-MQC. Note that, despite the fact that CT-MQC has been derived from the exact-factorization electronic and nuclear equations, the electronic dynamics is solved in a basis. The choice of the ``most suitable'' electronic basis is, thus, crucial.
It is worth mentioning here that other trajectory-based approaches to excited-state dynamics have been combined with the Floquet formalism in order to explicitly include the effect of the photo-exciting or driving field, such as the quantum-classical Liouville equation~\cite{Schuette_JCP2001}, ab initio multiple spawning~\cite{Bucksbaum_JPB2015}, ab initio multiple cloning~\cite{Shalashilin_CP2018}, and trajectory surface hopping~\cite{Subotnik_JCP2020, Subotnik_JCTC2020, Gonzalez_JPCA2012, Schmidt_PRA2016}.
The present work develops and applies CT-MQC using the Floquet formalism (F-CT-MQC). To this end, in Section~\ref{sec: theory}, we briefly recall the exact factorization, introduce the elements of the Floquet theory necessary for F-CT-MQC, and derive F-CT-MQC equations. In Section~\ref{sec: results}, F-CT-MQC is applied to simulate the periodically-driven, nonadiabatic dynamics of a model system, for which the exact solution is available, allowing us to test the performance of the algorithm. Our conclusions are presented in Section~\ref{sec: conclusions}.
\section{Exact factorization for periodically driven systems: The Floquet picture}\label{sec: theory}
We study a system of interacting electrons and nuclei subject to an external time-dependent classical field $\hat{V}(\mathbf r,\mathbf R,t)$. The Hamiltonian describing the system is
\begin{align}\label{eqn: H}
\hat H(\mathbf r,\mathbf R,t) = \hat T_n(\mathbf R)+\hat H_{el}(\mathbf r,\mathbf R)+\hat{V}(\mathbf r,\mathbf R,t)
\end{align}
with electronic positions labelled by $\mathbf r$ and nuclear positions labelled by $\mathbf R$. The field-free molecular Hamiltonian is the sum of the nuclear kinetic energy $\hat T_n(\mathbf R)$ and of the electronic Hamiltonian $\hat H_{el}(\mathbf r,\mathbf R)$ which is the sum of the electronic kinetic energy and the interaction potential.
The solution of the time-dependent Schr\"{o}dinger equation (tdSE), $\Psi(\mathbf r,\mathbf{R},t)$, with Hamiltonian~(\ref{eqn: H}) can be factored as the product of a nuclear wavefunction, $\chi(\mathbf R,t)$, and an electronic conditional factor, $\Phi(\mathbf r,t;\mathbf R)$, that parametrically depends on $\mathbf R$, namely~\cite{Gross_PRL2010, EF_bookchapter_2020}
\begin{align}\label{eqn: EF}
\Psi(\mathbf r,\mathbf{R},t) = \chi(\mathbf R,t)\Phi(\mathbf r,t;\mathbf R)
\end{align}
The evolution equations for $\chi(\mathbf R,t)$ and $\Phi(\mathbf r,t;\mathbf R)$ are derived from the full tdSE~\cite{Gross_JCP2012, Alonso_JCP2013, Gross_JCP2013, Ciccotti_EPJB2018} and are
\begin{align}
i\hbar \partial_t \chi(\mathbf R,t) &= \left[\frac{1}{2}\mathbf M^{-1}\left[-i\hbar\boldsymbol{\nabla} + \mathbf A(\mathbf R,t)\right]^2 + \epsilon(\mathbf R,t)+ \epsilon_{\mathrm{ext}}(\mathbf R,t)\right] \chi(\mathbf R,t)\label{eqn: EF n} \\
i\hbar \partial_t \Phi(\mathbf r,t;\mathbf R) &= \left[\hat H_{el}(\mathbf r,\mathbf R)+\hat{V}(\mathbf r,\mathbf R,t)+\hat U\left[\Phi,\chi\right]-\epsilon(\mathbf R,t)-\epsilon_{\mathrm{ext}}(\mathbf R,t)\right]\Phi(\mathbf r,t;\mathbf R)\label{eqn: EF el}
\end{align}
where the symbol $\mathbf M$ stands for the diagonal (constant) mass tensor and $\boldsymbol\nabla$ for the spatial derivative with respect to nuclear positions. The time-dependent potentials of the exact factorization mediate the coupling between electrons and nuclei, and are the time-dependent vector potential~\cite{Requist_PRA2015, Requist_PRA2017, Agostini_JPCL2017, Curchod_EPJB2018}
\begin{align}\label{eqn: TDVP}
\mathbf A(\mathbf R,t) = \left\langle\Phi(\mathbf r,t;\mathbf R)\right| \left.-i\hbar\boldsymbol{\nabla}\Phi(\mathbf r,t;\mathbf R)\right\rangle
\end{align}
and the time-dependent scalar potential, or time-dependent potential energy surface (TDPES)~\cite{Gross_PRL2013, Gross_MP2013, Min_PRL2014, Gross_JCP2015, Curchod_JCP2016, Maitra_PRL2019}, which we decompose into two contributions, namely
\begin{align}\label{eqn: TDPES}
\epsilon(\mathbf R,t) = \left\langle\Phi(\mathbf r,t;\mathbf R)\right|\hat H_{el}(\mathbf r,\mathbf R)+\hat U\left[\Phi,\chi\right]-i\hbar \partial_t\left|\Phi(\mathbf r,t;\mathbf R)\right\rangle
\end{align}
and
\begin{align}\label{eqn: TDPES ext}
\epsilon_{\mathrm{ext}}(\mathbf R,t) = \left\langle\Phi(\mathbf r,t;\mathbf R)\right|\hat{V}(\mathbf r,\mathbf R,t)\left|\Phi(\mathbf r,t;\mathbf R)\right\rangle
\end{align}
The electron-nuclear coupling operator $\hat U\left[\Phi,\chi\right]=\hat U\left[\Phi(\mathbf r,t;\mathbf R),\chi(\mathbf R,t)\right]$ is~\cite{Gross_EPL2014, Gross_JCP2014, Agostini_ADP2015}
\begin{align}\label{eqn: enco}
\hat U [\Phi,\chi]=& \mathbf M^{-1}\Bigg[\frac{1}{2}[-i\hbar \boldsymbol{\nabla}+\mathbf A(\mathbf R,t)]^2+\left(\frac{-i\hbar\boldsymbol{\nabla}\chi(\mathbf R,t)}{\chi(\mathbf R,t)}+\mathbf A(\mathbf R,t)\right)\cdot\big(-i\hbar\boldsymbol{\nabla}-\mathbf A(\mathbf R,t)\big)\Bigg]
\end{align}
and depends explicitly on the nuclear wavefunction $\chi(\mathbf R,t)$, and implicitly on the electronic factor $\Phi(\mathbf r,t;\mathbf R)$, via its dependence on the vector potential. The integration operation, indicated as $\langle \,\cdot \,\rangle$ in the previous equations, will be discussed below.
While the derivation just presented is valid in general situations, we focus here on the case of an external drive $\hat V(\mathbf r,\mathbf R,t)$ with constant amplitude, periodic in time with frequency $\Omega=2\pi/T$ (continuous-wave (cw) laser). In such a case, we will generalize the trajectory-based approach developed to solve Eqs.~(\ref{eqn: EF n}) and~(\ref{eqn: EF el}), dubbed the coupled-trajectory mixed quantum-classical (CT-MQC) algorithm, to the Floquet formalism.
\subsection{Elements of Floquet theory for CT-MQC}
The implementation of CT-MQC relies on the expansion of the electronic wavefunction $\Phi(\mathbf r,t;\mathbf R)$ on an electronic basis. Since the system is subject to a time-periodic Hamiltonian, it seems natural to expand on a basis that already takes into account, to some extent, the driven nature of the problem. As we discuss below, this can be done using Floquet theory, and it naturally leads to two different classes of states that can be used as the electronic basis, the Floquet adiabatic and diabatic (or dressed) states, which we briefly introduce here.
The electronic time-dependent periodic Hamiltonian is
$\hat H_{el}(\mathbf r,\mathbf R)+\hat{V}(\mathbf r,\mathbf R,t)$, at fixed nuclear positions. As such, according to the Floquet theorem~\cite{Floquet_1880,Sambe_PRA1973,Hanggi_PhysRep98}, a complete set of solutions of the electronic tdSE, if there were no nonadiabatic coupling to the nuclei, takes the form $e^{i\mathcal E_{\alpha}(\mathbf R)t}\phi_{\alpha}(\mathbf r,t;\mathbf R)$, where $\phi_{\alpha}(\mathbf r,t;\mathbf R)$ are the so-called Floquet \emph{adiabatic states}, which are constructed as the eigenstates of the Floquet Hamiltonian
\begin{align}
\hat H_{Fl}(\mathbf r, \mathbf R,t)=\hat H_{el}(\mathbf r,\mathbf R)+\hat{V}(\mathbf r,\mathbf R,t)- i\hbar\partial_t
\end{align}
i.e., they satisfy the eigenvalue problem
\begin{align}\label{eqn: Fl adiabatic eqn}
\hat H_{Fl}(\mathbf r, \mathbf R,t) \phi_{\alpha}(\mathbf r,t;\mathbf R) = \mathcal E_{\alpha}(\mathbf R)
\phi_{\alpha}(\mathbf r,t;\mathbf R)
\end{align}
In the equation above, the eigenvalues $\mathcal E_{\alpha}(\mathbf R)$ are called Floquet quasi-energies and do not depend on time (but they depend on $\mathbf R$ as an effect of the parametric dependence of the electronic Hamiltonian on the nuclear positions); $\phi_{\alpha}(\mathbf r,t;\mathbf R) = \phi_{\alpha}(\mathbf r,t+T;\mathbf R)$ has the same periodicity as the external drive. Given the periodicity of the Floquet states $\phi_{\alpha}(\mathbf r,t;\mathbf R)$, we can expand them in harmonics of the drive, as
\begin{align}
\phi_{\alpha}(\mathbf r,t;\mathbf R) = \sum_{n=-\infty}^{n=+\infty} e^{i\omega_nt}\phi_{\alpha,n}(\mathbf r;\mathbf R)
\end{align}
with $\omega_n=n\Omega$ and $n$ an integer. Inserting this expression into the Floquet equation~(\ref{eqn: Fl adiabatic eqn}) and projecting onto the harmonic $m$, we get
\begin{align}\label{eqn: ad_floquet_harmonics}
\sum_n \left[\left(\hat H_{el}(\mathbf r,\mathbf R) +\hbar\omega_m\hat I\right)\delta_{mn} + \hat V_{mn}(\mathbf r,\mathbf R)\right] \phi_{\alpha,n}(\mathbf r;\mathbf R) = \mathcal E_{\alpha}(\mathbf R)\phi_{\alpha,m}(\mathbf r;\mathbf R)
\end{align}
where
\begin{align}
\hat{V}_{mn}(\mathbf r,\mathbf R)=\frac{1}{T}\int_0^T\,dt \;e^{i\left(\omega_n-\omega_m\right)t}\,\hat{V}(\mathbf r,\mathbf R,t)\,.
\end{align}
We recognize in Eq.~(\ref{eqn: ad_floquet_harmonics}) an eigenvalue problem in an extended space, including both the electronic degrees of freedom as well as the harmonics of the drive. Note that, here, the scalar product in the space of the harmonics is defined via a time integral over a period of the drive and satisfies the orthonormality condition
$$
\frac{1}{T}\int_0^T\,dt \;e^{i\left(\omega_n-\omega_m\right)t}=\delta_{nm}
$$
The Floquet adiabatic states obtained solving the eigenproblem~(\ref{eqn: ad_floquet_harmonics}) have a mixed electronic-field nature since $\hat{V}_{mn}(\mathbf r,\mathbf R)$ couples different harmonics, while the operator $\hat H_{el}(\mathbf r,\mathbf R) +\hbar\omega_m\hat I$ is diagonal in the space of harmonics.
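As a concrete illustration, the sketch below assembles and diagonalizes the extended-space Floquet matrix of Eq.~(\ref{eqn: ad_floquet_harmonics}) for a two-level system driven by $\hat V(t)=V_0\cos(\Omega t)\,\sigma_x$, for which $\hat V_{mn}=(V_0/2)\,\sigma_x$ when $|m-n|=1$ and zero otherwise; the level energies, drive parameters and harmonic cutoff are illustrative assumptions ($\hbar=1$).

```python
# Sketch: quasi-energies from the eigenvalue problem in the extended
# harmonic-electronic space, for a two-level system driven by
# V(t) = V0*cos(Omega*t)*sigma_x, so that V_mn = (V0/2)*sigma_x for |m-n| = 1.
# Level energies, drive parameters and the harmonic cutoff are assumptions.
import numpy as np

E = np.array([-0.5, 0.5])      # two "electronic" levels at fixed R
V0, Omega = 0.2, 1.3           # drive amplitude and frequency
Nh = 8                         # harmonics kept: n = -Nh, ..., Nh (truncation)

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
dim = 2 * (2 * Nh + 1)
HF = np.zeros((dim, dim))
for i in range(2 * Nh + 1):
    m = i - Nh
    b = 2 * i
    HF[b:b + 2, b:b + 2] = np.diag(E) + m * Omega * np.eye(2)  # H_el + m*Omega
    if i < 2 * Nh:                                             # V couples m, m+1
        HF[b:b + 2, b + 2:b + 4] = 0.5 * V0 * sx
        HF[b + 2:b + 4, b:b + 2] = 0.5 * V0 * sx

quasi = np.linalg.eigvalsh(HF)
# The physically distinct quasi-energies lie in one "Floquet zone"
# [-Omega/2, Omega/2); the rest are replicas shifted by integer multiples
# of Omega (up to truncation effects at the edges of the harmonic ladder).
zone = np.sort(quasi[np.abs(quasi) < 0.5 * Omega])
print(zone)
```

Increasing the cutoff `Nh` only improves the replicas near the truncation edges; the quasi-energies in the central zone converge very quickly for weak drive.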
We can now expand the electronic wavefunction of the exact factorization in the basis of Floquet adiabatic states as
\begin{align}\label{eqn:expansion_adiafloq}
\Phi(\mathbf r,t;\mathbf R)=\sum_\alpha C_\alpha(\mathbf R,t) \phi_{\alpha}(\mathbf r,t;\mathbf R)=
\sum_{\alpha }\sum_n C_{\alpha}\big(\mathbf R,t\big)e^{i\omega_nt}\phi_{\alpha,n}(\mathbf r;\mathbf R)
\end{align}
where the sum over $\alpha$ runs over the complete set of Floquet states, solution of Eq.~(\ref{eqn: Fl adiabatic eqn}), and the expansion coefficients $C_{\alpha}(\mathbf R,t)$ depend on the nuclear positions as well as on time. Note that, while the Floquet adiabatic states $\phi_{\alpha}(\mathbf r,t;\mathbf R)$ have periodicity $T$, this is in general not the case for the electronic wavefunction $\Phi(\mathbf r,t;\mathbf R)$, which is obtained from the full electron-nuclear wavefunction of the problem (for which one could also use Floquet theorem) through the exact factorization ansatz in Eq.~(\ref{eqn: EF}). This observation justifies the use of time-dependent coefficients in the expansion in Eq.~(\ref{eqn:expansion_adiafloq}).
As we have seen so far, constructing the Floquet adiabatic states requires the solution of an eigenvalue problem in an extended space, which can be a burden for realistic applications beyond model systems. A different set of states, called Floquet \emph{diabatic states}, can be obtained considering the electronic field-free Floquet Hamiltonian
\begin{align}\label{eqn: FH diabatic eqn}
\hat H_{Fl}^{(0)}(\mathbf r,\mathbf R,t) = \hat H_{el}(\mathbf r, \mathbf R) - i\hbar\partial_t
\end{align}
Following from Floquet theorem~\cite{Sambe_PRA1973}, the eigenvalue equation
\begin{align}\label{eqn: Fl diabatic eqn}
\hat H_{Fl}^{(0)}(\mathbf r, \mathbf R,t) \varphi_{\alpha}(\mathbf r,t;\mathbf R) = \mathcal E_{\alpha}^{(0)}(\mathbf R) \varphi_{\alpha}(\mathbf r,t;\mathbf R)
\end{align}
yields the Floquet eigenmodes $\varphi_{\alpha}(\mathbf r,t; \mathbf R)$ and the Floquet quasi-energies $\mathcal E_{\alpha}^{(0)}(\mathbf R)$. As for the adiabatic modes, the diabatic eigenmodes are periodic in time, $\varphi_{\alpha}(\mathbf r,t+T;\mathbf R)=\varphi_{\alpha}(\mathbf r,t;\mathbf R)$, with the same period $T$ as the drive. However, compared to the Floquet states introduced previously, we expect the diabatic states to have a much simpler structure in the space of harmonics, since by construction the time-dependent drive is not included in the Floquet Hamiltonian of Eq.~(\ref{eqn: FH diabatic eqn}). In fact, we can write the expression of these Floquet diabatic states explicitly, in terms of the standard Born-Oppenheimer basis $\lbrace\psi_{k}(\mathbf r;\mathbf R)\rbrace_{k=1,\ldots,+\infty}$, defined as the eigenstates of the electronic Hamiltonian $\hat H_{el}(\mathbf r,\mathbf R)$ for each nuclear configuration $\mathbf R$ and satisfying the condition $\hat H_{el}(\mathbf r,\mathbf R)\psi_{k}(\mathbf r;\mathbf R) = E_k(\mathbf R)\psi_{k}(\mathbf r;\mathbf R)$. A direct calculation shows that the state
\begin{align}
\varphi_{\alpha}(\mathbf r,t; \mathbf R) = e^{i\omega_n t}\psi_{k}(\mathbf r;\mathbf R),\quad \alpha=(k,n)
\end{align}
is an eigenstate of the drive-free Floquet Hamiltonian~(\ref{eqn: FH diabatic eqn}) with quasi-energy
$\mathcal E_{\alpha}^{(0)}(\mathbf R)=E_k(\mathbf R)+\hbar\omega_n$. The Floquet diabatic potential energy surfaces (PESs) are simply the standard Born-Oppenheimer PESs rigidly shifted by the energy of the corresponding harmonic.
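Since the Floquet diabatic surfaces are just rigidly shifted Born-Oppenheimer surfaces, a drive of frequency $\Omega$ turns a one-photon resonance into a crossing between $E_1(\mathbf R)$ and the replica $E_2(\mathbf R)-\hbar\Omega$. The short sketch below locates such a light-induced crossing for two model harmonic surfaces; the surfaces and the drive frequency are illustrative assumptions ($\hbar=1$).

```python
# Sketch: Floquet diabatic PESs are the Born-Oppenheimer surfaces rigidly
# shifted by n*Omega.  The n = -1 replica of the excited surface crosses
# the ground surface where the drive is one-photon resonant.
# Both model surfaces and Omega are illustrative assumptions.
import numpy as np

Omega = 0.35                                        # drive frequency, hypothetical
E1 = lambda R: 0.5 * 0.02 * (R - 2.0) ** 2          # model ground BO surface
E2 = lambda R: 0.30 + 0.5 * 0.02 * (R - 4.0) ** 2   # model excited BO surface

R = np.linspace(0.0, 8.0, 4001)
dressed = E2(R) - Omega        # Floquet diabatic replica E_2(R) - Omega
# light-induced crossing: where the dressed excited surface meets E1
i = np.argmin(np.abs(dressed - E1(R)))
print("crossing near R =", R[i])
```

For these parameters the two quadratic terms cancel in the difference, which is linear in $R$ and vanishes at $R=1.75$.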
Similarly to Eq.~(\ref{eqn:expansion_adiafloq}), we can choose the Floquet diabatic states as electronic representation for the electronic wavefunction $\Phi(\mathbf r,t;\mathbf R)$, and write
\begin{align}\label{eqn: Fl dia Phi}
\Phi(\mathbf r,t;\mathbf R) = \sum_k \sum_n C_{k,n}\big(\mathbf R,t\big)e^{i\omega_nt}\psi_{k}\big(\mathbf r;\mathbf R\big)
\end{align}
where the two sums run over the physical electronic states $k$ and over the harmonics $n$. As observed previously, we allow an explicit time dependence for the coefficients $C_{k,n}\big(\mathbf R,t\big)$ in order to reproduce the dynamics of the electronic wavefunction, which need not be periodic in time with the period of the drive.
While a priori both the Floquet adiabatic and diabatic bases could be used in trajectory-based calculations, and one might expect the Floquet adiabatic basis to perform better since it already contains information on the external drive, we will argue that for the practical implementation of F-CT-MQC the Floquet diabatic basis is preferable. For this reason, in the next section we derive the F-CT-MQC equations only for this basis, although the derivation for the Floquet adiabatic basis can be obtained via an obvious generalization.
\subsection{F-CT-MQC with Floquet diabatic states}\label{sec: F-CT-MQC}
We introduce the trajectory-based representation of nuclear dynamics~\cite{Agostini_JCTC2020_1} that follows from the exact-factorization equations~(\ref{eqn: EF n}) and~(\ref{eqn: EF el}). The nuclear wavefunction is written in polar representation as $\chi(\mathbf R,t)= |\chi(\mathbf R,t)|\exp{[(i/\hbar)S(\mathbf R,t)]}$, such that the coupled evolution equations
\begin{align}
-\partial_t S(\mathbf R,t) &=\mathbf M^{-1}\frac{\left[\boldsymbol{\nabla} S(\mathbf R,t)+\mathbf A(\mathbf R,t)\right]^2}{2}+\epsilon(\mathbf R,t)+\epsilon_{\mathrm{ext}}(\mathbf R,t)\label{eqn: HJ}\\
\partial_t\left|\chi(\mathbf R,t)\right|^2&= -\boldsymbol{\nabla} \cdot \left[\mathbf M^{-1}\left(\boldsymbol{\nabla} S(\mathbf R,t)+\mathbf A(\mathbf R,t)\right)\left|\chi(\mathbf R,t)\right|^2\right]\label{eqn: continuity}
\end{align}
are derived from the nuclear time-dependent Schr\"odinger equation~(\ref{eqn: EF n}). The equation for the phase $S(\mathbf R,t)$ is written in the classical limit, i.e., the quantum potential term has been neglected in Eq.~(\ref{eqn: HJ}). The first equation can be solved with the method of characteristics, introducing a set of ordinary differential equations -- the characteristic equations -- yielding the values of the field $S(\mathbf R,t)$ $\forall\,\, \mathbf R,t$. The characteristic equations are Hamilton-like equations that describe the evolution in time of the ``variables'' $\mathbf R(t)$ and $\mathbf P(t)\equiv\boldsymbol{\nabla} S(\mathbf R(t),t)+\mathbf A(\mathbf R(t),t)$ appearing in Eq.~(\ref{eqn: HJ}), i.e.,
\begin{align}
\dot{\mathbf R}(t) &=\mathbf M^{-1} \mathbf P(t) \label{eqn: R dot}\\
\dot{\mathbf P}(t) &=-\boldsymbol{\nabla} \left(\epsilon\big(\mathbf R(t),t\big) +\epsilon_{\mathrm{ext}}\big(\mathbf R(t),t\big)+ \left[\dot{\mathbf R}(t)\cdot\mathbf A\big(\mathbf R(t),t\big)\right]\right)+ \dot{\mathbf A}\big(\mathbf R(t),t\big)\label{eqn: P dot}
\end{align}
The procedure to derive Eqs.~(\ref{eqn: R dot}) and~(\ref{eqn: P dot}) has been illustrated in detail in Refs.~\cite{Ciccotti_EPJB2018, Ciccotti_JPCA2020}; therefore, we just mention here that the first term on the right-hand side of Eq.~(\ref{eqn: P dot}) is the gradient of the (pseudo-classical) Hamiltonian defined on the right-hand side of Eq.~(\ref{eqn: HJ}). The time-dependent potentials $\epsilon\big(\mathbf R(t),t\big)$, $\epsilon_{\mathrm{ext}}\big(\mathbf R(t),t\big)$ and $\mathbf A\big(\mathbf R(t),t\big)$ are evaluated along each characteristic $\mathbf R(t)$ by solving the electronic equation~(\ref{eqn: EF el}) along the same characteristic. Solving the partial differential equation~(\ref{eqn: EF el}) along the \textsl{flow} of trajectories -- the characteristics -- requires switching from the Eulerian frame to the Lagrangian frame, but before discussing the electronic equation, let us discuss the continuity equation~(\ref{eqn: continuity}).
The evolution of the nuclear density, Eq.~(\ref{eqn: continuity}), is described by a ``standard'' continuity equation, that can be solved coupled to Eqs.~(\ref{eqn: R dot}) and~(\ref{eqn: P dot}). This would allow us to reconstruct the nuclear wavefunction, the only approximation being that we dropped the quantum potential in Eq.~(\ref{eqn: HJ}). Neglecting the quantum potential has the effect of decoupling the evolution of the phase from the evolution of the density, while the continuity equation still depends on the phase. Therefore, as done in previous work~\cite{Ciccotti_EPJB2018, Gross_JCTC2016}, we will not solve the continuity equation, and we will only reconstruct a classical-like nuclear density from the distribution of the trajectories, working in the hypothesis that for short enough times, an ensemble of trajectories evolving with Eqs.~(\ref{eqn: R dot}) and~(\ref{eqn: P dot}) will sample portions of nuclear configuration space with high probability density.
Equations~(\ref{eqn: R dot}) and~(\ref{eqn: P dot}) are the basis of the CT-MQC algorithm, that has been derived and tested in Refs.~\cite{Gross_PRL2015, Gross_JCTC2016}, applied to study the relaxation dynamics through conical intersections in Refs.~\cite{Agostini_JCTC2020_2, Agostini_JCP2021}, and in Refs.~\cite{Gross_JPCL2017, Tavernelli_EPJB2018} in combination with time-dependent density functional theory to study the photo-induced dynamics in oxirane, and thoroughly analyzed in Refs.~\cite{Agostini_EPJB2018, Maitra_JCTC2018}.
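In practice, the characteristic equations~(\ref{eqn: R dot}) and~(\ref{eqn: P dot}) are ordinary differential equations that can be integrated with standard schemes. The sketch below propagates a single trajectory with velocity Verlet under a model, time-periodic potential standing in for the exact-factorization potentials; the mass, the potential and all parameters are illustrative assumptions (atomic units).

```python
# Sketch: velocity-Verlet propagation of one characteristic (trajectory)
# obeying Rdot = P/M, Pdot = F(R, t).  The model driven-oscillator
# potential eps(R, t) = 0.5*k*R^2 + lam*R*cos(Omega*t) stands in for the
# exact-factorization potentials; all parameters are assumptions (a.u.).
import numpy as np

M = 1836.0                          # nuclear mass, hypothetical
k, lam, Omega = 0.05, 0.002, 0.01   # model potential and drive parameters

def force(R, t, drive=lam):
    """F = -d eps/dR for eps(R, t) = 0.5*k*R**2 + drive*R*cos(Omega*t)."""
    return -k * R - drive * np.cos(Omega * t)

def propagate(R, P, dt=1.0, nsteps=5000, drive=lam):
    F = force(R, 0.0, drive)
    for step in range(nsteps):
        P_half = P + 0.5 * dt * F            # half kick
        R = R + dt * P_half / M              # drift
        F = force(R, (step + 1) * dt, drive)
        P = P_half + 0.5 * dt * F            # second half kick
    return R, P

Rf, Pf = propagate(R=0.5, P=0.0)
print(Rf, Pf)
```

Velocity Verlet is a natural choice here because it is symplectic and time-reversible, so the drive-free limit conserves energy over long propagation times.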
The exact-factorization product form of the solution of the tdSE is invariant under an $(\mathbf R,t)$-dependent phase transformation of the nuclear and electronic wavefunctions. This gauge freedom has to be fixed by a choice of gauge. As previously done, in CT-MQC the gauge is chosen such that $\epsilon\big(\mathbf R(t),t\big)+\dot{\mathbf R}(t)\cdot\mathbf A\big(\mathbf R(t),t\big)=0$. With this choice of gauge, the characteristic equation~(\ref{eqn: P dot}) for the nuclear momentum yields
\begin{align}\label{eqn: classical force 1}
\dot{\mathbf P}(t) &=-\boldsymbol{\nabla} \epsilon_{\mathrm{ext}}\big(\mathbf R(t),t\big)+ \dot{\mathbf A}\big(\mathbf R(t),t\big)
\end{align}
which gives the expression of the classical force $\dot{\mathbf P}(t) = \mathbf F(t)$ that drives F-CT-MQC trajectories. The explicit form of each term will be given below, after determining the evolution equation for the electronic wavefunction.
The electronic equation~(\ref{eqn: EF el}) is expressed along the characteristics $\mathbf R(t)$ as
\begin{align}\label{eqn: dot Phi}
i\hbar\dot{\Phi}(\mathbf r,t;\mathbf R(t)) = \left[\hat H_{el}\big(\mathbf r,\mathbf R(t)\big)+\hat V\big(\mathbf r,\mathbf R(t),t\big)+\hat U\left[\Phi,\chi\right]-\epsilon\big(\mathbf R(t),t\big)-\epsilon_{\mathrm{ext}}\big(\mathbf R(t),t\big)\right.\nonumber \\
\left.+i\hbar\dot{\mathbf R}(t)\cdot\boldsymbol{\nabla} \right]\Phi(\mathbf r,t;\mathbf R(t))
\end{align}
Switching from the Eulerian to the Lagrangian frame, only total time derivatives can be evaluated \textsl{along the flow}, that is why the symbol $\dot{\Phi}(\mathbf r,t;\mathbf R(t))$ has been introduced. Furthermore, we used the relation $\dot\Phi(\mathbf r,t;\mathbf R(t)) = \partial_t\Phi(\mathbf r,t;\mathbf R(t))+ \dot{\mathbf R}(t)\cdot \boldsymbol{\nabla}\Phi(\mathbf r,t;\mathbf R(t))$ to replace the partial time derivative of Eq.~(\ref{eqn: EF el}) with the total time derivative, and thus obtain the last term in square brackets on the right-hand side of Eq.~(\ref{eqn: dot Phi}). The expansion of the electronic wavefunction in the Floquet diabatic basis is expressed as well along a trajectory $\mathbf R(t)$
\begin{align}\label{eqn: Fl dia Phi with R(t)}
\Phi(\mathbf r,t;\mathbf R(t)) = \sum_k \sum_n C_{k,n}\big(\mathbf R(t),t\big)e^{i\omega_nt}\psi_{k}\big(\mathbf r;\mathbf R(t)\big)
\end{align}
and is inserted into Eq.~(\ref{eqn: dot Phi}). When using it in the left-hand side of Eq.~(\ref{eqn: dot Phi}), we have
\begin{align}
\dot{\Phi}(\mathbf r,t;\mathbf R(t))= \sum_{k,n} \left[\dot C_{k,n}\big(\mathbf R(t),t\big)\psi_k(\mathbf r;\mathbf R(t)) + C_{k,n}\big(\mathbf R(t),t\big) \dot\psi_k(\mathbf r;\mathbf R(t))\right.\nonumber \\
\left.+i\omega_nC_{k,n}\big(\mathbf R(t),t\big) \psi_k(\mathbf r;\mathbf R(t))\right]e^{i\omega_nt}
\end{align}
Applying the total time derivative to the electronic state in the second term on the right-hand side, we find that the partial time derivative vanishes, because $\psi_k(\mathbf r;\mathbf R(t))$ depends on time only via its dependence on the trajectory; then
\begin{align}\label{eqn: Phi dot with Floquet diabatic}
\dot{\Phi}(\mathbf r,t;\mathbf R(t))=& \sum_{k,n} \left[\dot C_{k,n}\big(\mathbf R(t),t\big)
+C_{k,n}\big(\mathbf R(t),t\big) \dot{\mathbf R}(t)\cdot \boldsymbol{\nabla}+i\omega_nC_{k,n}\big(\mathbf R(t),t\big) \right]\psi_k(\mathbf r;\mathbf R(t)) e^{i\omega_nt}
\end{align}
With the aim of deriving the evolution equations for $C_{k,n}\big(\mathbf R(t),t\big)$, the expansion in Floquet diabatic states is used as well on the right-hand side of Eq.~(\ref{eqn: dot Phi}). The whole procedure is detailed in Appendix~\ref{app: el eqn}, and we discuss here only the main differences compared to the standard derivation of the CT-MQC equations.
In order to isolate in Eq.~(\ref{eqn: dot Phi}) a term $\dot C_{l,m}\big(\mathbf R(t),t\big) $, it is necessary to project onto the state $e^{i\omega_mt}\psi_{l}\big(\mathbf r;\mathbf R(t)\big)$. Working in the Floquet basis, the projection operation involves an integral over time, as well as an integral over the electronic configuration space. To do this, we introduce the change of variable $t\rightarrow s$ in $e^{i\omega_nt}$ (and $e^{i\omega_mt}$). This is done to separate the time dependence in the electronic equation into a periodic contribution induced by the drive, and indicated with $s$, and a non-periodic contribution, indicated with $t$. The time $s$ is only used when the integral over the period is computed, in order to distinguish it from the non-periodic time dependence $t$. After the projection operation, the dependence on $s$ disappears because it has been integrated out over a period, but the dependence on $t$ remains. The projection operation onto the state $e^{i\omega_ms}\psi_{l}\big(\mathbf r;\mathbf R(t)\big)$ is performed in the extended harmonic-electronic space, with the time integration over a period and the spatial integration over the whole electronic configuration space. Therefore, the instantaneous expectation value on the electronic wavefunction $\Phi(\mathbf r,t;\mathbf R(t))$ of a general electron-nuclear operator $\hat O(\mathbf r,\mathbf R(t))$ that does not depend explicitly on time is computed as
\begin{align}
&\left\langle \Phi(\mathbf r,t;\mathbf R(t))\left|\hat O\big(\mathbf r,\mathbf R(t)\big)\right|\Phi(\mathbf r,t;\mathbf R(t)) \right\rangle =\nonumber\\
& \frac{1}{T}\int_0^T ds \int d\mathbf r \sum_{k,n}\sum_{l,m}\bar C_{k,n}\big(\mathbf R(t),t\big) e^{-i\omega_n s} \bar\psi_k(\mathbf r;\mathbf R(t)) \hat O\big(\mathbf r,\mathbf R(t)\big)C_{l,m}\big(\mathbf R(t),t\big) e^{i\omega_m s} \psi_l(\mathbf r;\mathbf R(t))\label{eqn: average over s and r}
\end{align}
showing that the periodic time dependence carried by the harmonics and the variable $s$ is integrated out, whereas the time dependence in $t$ remains. This operation is applied in the definition of the TDPES and of the time-dependent vector potential. Note that this separation of times is crucial to keep working in a time-dependent picture while using the Floquet formalism. As discussed in the Introduction, the Floquet picture maps a periodic time-dependent problem into an effective time-independent problem in an enlarged harmonic-electronic vector space, thus reducing the solution of the time evolution to a diagonalization in this extended space. When separating the -- full -- molecular time-periodic process into a coupled electron-nuclear process to apply trajectory-based quantum-classical schemes, the time periodicity is only partially accounted for in the definition of the Floquet diabatic states, because, in general, the electronic and the nuclear dynamics are not time periodic (only the full molecular wavefunction is). It should be mentioned that the benefit of this procedure has been identified in previous work on the combination of the Floquet formalism with the quantum-classical Liouville equation~\cite{Schuette_JCP2001}. The identification of two time scales allows one to account for the oscillatory time dependence induced by the drive, while the remaining part of the time evolution is explicitly accounted for via the evolution of the expansion coefficients (as in Eq.~(\ref{eqn: Fl dia Phi with R(t)})). Essentially, the operation described in Eq.~(\ref{eqn: average over s and r}) allows us to obtain time-dependent quantities after their dependence on the driving frequency of the laser has been averaged out, when considering their effect on the nuclei; the effect of the driving frequency on the electrons is treated in Fourier space, in terms of Floquet diabatic states and transitions among them.
A similar separation of time scales has been used in Floquet surface hopping~\cite{Schmidt_PRA2016} with the aim to extend the Floquet picture to processes that are not strictly periodic, as for instance nonadiabatic processes initiated by a laser pulse.
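The reduction implied by Eq.~(\ref{eqn: average over s and r}) can be verified directly: the period average of the cross phase $e^{i(\omega_m-\omega_n)s}$ yields a Kronecker delta $\delta_{nm}$, so the double expansion contracts to a single sum over harmonics. A minimal numerical sketch (NumPy, with randomly generated coefficients and a hypothetical Hermitian matrix $O_{kl}$ standing in for the spatial integral) illustrates this:

```python
import numpy as np

# Toy Floquet basis: 2 electronic states, harmonics n = -1, 0, 1 (hypothetical values).
rng = np.random.default_rng(0)
omega = 0.05                        # driving frequency (au); omega_n = n * omega
T = 2.0 * np.pi / omega
harmonics = [-1, 0, 1]

C = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))
C /= np.linalg.norm(C)              # enforce sum_{k,n} |C_{k,n}|^2 = 1
O = rng.normal(size=(2, 2))
O = 0.5 * (O + O.T)                 # Hermitian matrix O_{kl} = <psi_k|O|psi_l>

# Left-hand side: explicit average of the phase factors over one period.
s = np.arange(1000) * T / 1000.0
lhs = 0.0
for k in range(2):
    for i, n in enumerate(harmonics):
        for l in range(2):
            for j, m in enumerate(harmonics):
                phase = np.mean(np.exp(1j * (m - n) * omega * s))
                lhs += np.conj(C[k, i]) * O[k, l] * C[l, j] * phase

# Right-hand side: the period average enforces n = m, leaving
# sum_{k,l,n} conj(C_{k,n}) O_{kl} C_{l,n}.
rhs = np.einsum('kn,kl,ln->', np.conj(C), O, C)
print(abs(lhs - rhs))               # ~ 0 (discrete orthogonality is exact here)
```

The same contraction is what reduces the partial normalization condition below to $\sum_{k,n}|C_{k,n}|^2=1$.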
Henceforth, the dependence on the trajectories $\mathbf R(t)$ will be replaced by a superscript $\nu$, which stands for a trajectory index -- this is done mainly to simplify the notation used in the equations below. The solution of Eq.~(\ref{eqn: HJ}) with the method of characteristics requires solving Eqs.~(\ref{eqn: R dot}) and~(\ref{eqn: P dot}) for a large number of initial conditions, $\nu=1,\ldots,N_{tr}$. The index $\nu$ is, thus, used to label the trajectories, or characteristics. Similarly to the original form of the electronic evolution equation of CT-MQC~\cite{Maitra_JCTC2018, Gross_JCTC2016, Agostini_EPJB2018, Agostini_JCTC2020_1}, the final expression of the time derivative of the Floquet diabatic coefficients, derived in Appendix~\ref{app: el eqn}, is
\begin{align}\label{eqn: dot Csd 3}
\dot C_{l,m}^\nu(t) = \dot C_{l,m}^\nu(t)\Big|_{\mathrm{EH}} + \dot C_{l,m}^\nu(t)\Big|_{\mathrm{QM}} + \dot C_{l,m}^\nu(t)\Big|_{\mathrm{EXT}}
\end{align}
which has to be solved along a trajectory $\nu$. The three contributions identified in Eq.~(\ref{eqn: dot Csd 3}) are: an Ehrenfest-like term (EH)
\begin{align}\label{eqn: dot Csd 3a}
\dot C_{l,m}^\nu(t)\Big|_{\mathrm{EH}}=-\frac{i}{\hbar}\left(E_l^\nu+\hbar\omega_m\right)C_{l,m}^\nu(t)-\dot{\mathbf R}^\nu(t) \cdot\sum_{k,n} \mathbf d_{lk}^\nu\,C_{k,n}^\nu(t)\delta_{mn}
\end{align}
a term depending on the quantum momentum (QM)
\begin{align}\label{eqn: dot Csd 3b}
\dot C_{l,m}^\nu(t)\Big|_{\mathrm{QM}}= \frac{1}{\hbar}\left[\mathbf M^{-1}\boldsymbol{\mathcal P}^\nu(t)\right]\cdot\left[\mathbf f_{l,m}^\nu-\mathbf A^\nu(t)\right]C_{l,m}^\nu(t)
\end{align}
and a term with explicit dependence on the external field
\begin{align}\label{eqn: dot Csd 3c}
\dot C_{l,m}^\nu(t)\Big|_{\mathrm{EXT}}=-\frac{i}{\hbar}\epsilon_{\mathrm{ext}}^\nu(t)C_{l,m}^\nu(t) -\sum_{k,n}\frac{i}{\hbar} V_{mn,lk}^\nu C_{k,n}^\nu(t)
\end{align}
Note that the only difference between the expression given in Eq.~(\ref{eqn: dot Csd 3}) and the field-free CT-MQC electronic equation resides in the additional terms depending on the external field (EXT), via $\epsilon_{\mathrm{ext}}^\nu(t)$ and $V_{mn,lk}^\nu$ given in Eq.~(\ref{eqn: dot Csd 3c}). The contribution $\epsilon_{\mathrm{ext}}^\nu(t)$ to the TDPES in the Floquet diabatic basis reads
\begin{align}\label{eqn: ext TDPES with states}
\epsilon_{\mathrm{ext}}^\nu(t) = \sum_{k,l} \sum_{n,m} \bar C_{k,n}^\nu(t) C_{l,m}^\nu(t)V_{mn,lk}^\nu
\end{align}
where the matrix elements of the driving field are
\begin{align}\label{eqn: ext field matrix}
V_{mn,lk}^\nu=\frac{1}{T}\int_0^T\,ds \int d\mathbf r\;e^{i\left(\omega_n-\omega_m\right)s}\,\bar{\psi}_l^\nu(\mathbf r)\hat{V}^\nu(\mathbf r,s)\psi_k^\nu(\mathbf r)
\end{align}
As done above, all quantities depending on $\mathbf R(t)$ have acquired a superscript $\nu$. The ``periodic'' time dependence has been indicated with the symbol $s$, which is integrated over a period in the projection operation. Combining Eqs.~(\ref{eqn: ext TDPES with states}) and~(\ref{eqn: ext field matrix}) shows that the expression of $\epsilon_{\mathrm{ext}}^\nu(t)$ easily follows from its definition in Eq.~(\ref{eqn: TDPES ext}). Since only the periodic time dependence is integrated out, $\epsilon_{\mathrm{ext}}^\nu(t)$ still depends on time $t$. The quantity indicated as $V_{mn,lk}^\nu$ stands for the Fourier transform of the matrix elements of the external drive, from Eq.~(\ref{eqn: ext field matrix}). The term $V_{mn,lk}^\nu$ induces population transfer along the trajectory $\nu$ between different electronic states and different harmonics as an effect of the action of the external field. In particular, if only one driving frequency is considered, as in the present case, the non-zero matrix elements of $V_{mn,lk}^\nu$ are those satisfying the condition $m-n=\pm1$.
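The selection rule can be made concrete with a short numerical check of Eq.~(\ref{eqn: ext field matrix}). In the sketch below (hypothetical values for the transition dipole matrix element $\mu_{lk}$ and the amplitude $E_0$; the spatial integral is assumed to have already reduced to $\mu_{lk}$), only the $|m-n|=1$ elements survive the period average:

```python
import numpy as np

# Selection rule from Eq. (ext field matrix) for a single-frequency drive:
# with V(s) = -mu_lk * E0 * cos(Omega s) (mu_lk, E0 hypothetical here), the
# period average of e^{i(n-m) Omega s} cos(Omega s) is 1/2 for |n-m| = 1 and
# 0 otherwise, so V_{mn,lk} = -(mu_lk E0 / 2) delta_{|m-n|,1}.
Omega, E0, mu_lk = 0.05, 0.25, 0.1
T = 2.0 * np.pi / Omega
s = np.arange(4096) * T / 4096.0
harmonics = np.arange(-2, 3)            # m, n in [-2, ..., 2]

V = np.zeros((5, 5), dtype=complex)
for i, m in enumerate(harmonics):
    for j, n in enumerate(harmonics):
        V[i, j] = np.mean(np.exp(1j * (n - m) * Omega * s)
                          * (-mu_lk * E0 * np.cos(Omega * s)))

print(np.round(V.real, 6))              # non-zero (-mu_lk*E0/2) only for |m-n| = 1
```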
In Eqs.~(\ref{eqn: dot Csd 3a}) and~(\ref{eqn: dot Csd 3b}), the ``standard'' terms of the electronic evolution equation of F-CT-MQC contain: the time-dependent vector potential $\mathbf A^\nu(t)$, the Floquet diabatic force $\mathbf f_{l,m}^\nu$ accumulated along the trajectory $\nu$, the quantum momentum $\boldsymbol{\mathcal P}^\nu(t)$, the classical velocity of the trajectory $\dot{\mathbf R}^\nu(t)$, and the nonadiabatic coupling vectors $ \mathbf d_{lk}^\nu$. In Eq.~(\ref{eqn: dot Csd 3b}), the time-dependent vector potential is
\begin{align}
\mathbf A^\nu(t) = \frac{1}{T}\int_0^Tds\int d\mathbf r\sum_{k,l} \sum_{n,m} \bar C_{k,n}^\nu(t) e^{-i\omega_ns}\bar{\psi}_{k}^\nu(\mathbf r)\left(-i\hbar\boldsymbol{\nabla}\right)C_{l,m}^\nu(t)e^{i\omega_ms}\psi_{l}^\nu(\mathbf r)\label{eqn: tdvp in basis}
\end{align}
following its definition in Eq.~(\ref{eqn: TDVP}). Also in this case, the average operation is performed over the periodic time dependence, such that $\mathbf A^\nu(t)$ still depends on time $t$. The quantity indicated with the symbol $\mathbf f_{l,m}^\nu$ in Eq.~(\ref{eqn: dot Csd 3b}) is the force from the Floquet diabatic PES $E_l^\nu+\hbar\omega_m$ accumulated up to time $t$ along the trajectory $\nu$. Since the Floquet diabatic PESs are parallel to the adiabatic PESs, such an accumulated force can be computed from the gradient of $E_l^\nu$ as in standard CT-MQC. Such an accumulated force is used in the approximate F-CT-MQC expression of the time-dependent vector potential, namely
\begin{align}
\mathbf A^\nu(t) = \sum_{l,m} \left|C_{l,m}^\nu(t)\right|^2\mathbf f_{l,m}^\nu
\end{align}
which is related to the gradient of the harmonic-electronic coefficients $C_{l,m}^\nu(t)$ of Eq.~(\ref{eqn: tdvp in basis}) (see for details Refs.~\cite{Gross_PRL2016, Gross_JCTC2016, Agostini_JCTC2020_1}). In Eqs.~(\ref{eqn: dot Csd 3a}) and~(\ref{eqn: dot Csd 3b}), the nuclear velocity $\dot{\mathbf R}^\nu(t)$ and the quantum momentum $\boldsymbol{\mathcal P}^\nu(t)$ appear, via the contribution of the electron-nuclear coupling operator~(\ref{eqn: enco}) that depends on the nuclear wavefunction (as discussed in Appendix~\ref{app: el eqn}). The nuclear velocity in Eq.~(\ref{eqn: dot Csd 3a}) couples to the nonadiabatic coupling vectors $\mathbf d_{lk}^\nu$, also known as derivative couplings, driving the population transfer between the electronic states $k$ and $l$ that belong to the same harmonic ($\delta_{mn}$), as an effect of the nuclear displacement operator $\boldsymbol{\nabla}$. They can be evaluated numerically or analytically via the Hellmann-Feynman formula, exactly as is usually done in the field-free case.
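Since the derivative couplings are real and antisymmetric, $\mathbf d_{lk}^\nu=-\mathbf d_{kl}^\nu$, the Ehrenfest-like generator of Eq.~(\ref{eqn: dot Csd 3a}) is anti-Hermitian and, taken by itself, conserves the total population $\sum_{k,n}|C_{k,n}^\nu|^2$. A toy propagation sketch (hypothetical energies, coupling and velocity; two states, three harmonics) makes this explicit:

```python
import numpy as np

# Toy parameters (hypothetical): two states, three harmonics, hbar = 1 au.
hbar = 1.0
E = np.array([0.0, 0.15])                 # adiabatic energies (placeholders)
harmonics = [-1, 0, 1]
Omega = 0.05
d = np.array([[0.0, 0.3], [-0.3, 0.0]])   # d_lk = -d_kl (real, antisymmetric)
Rdot = 5e-4                               # classical velocity (placeholder)

def rhs(C):                               # Ehrenfest-like term of Eq. (dot Csd 3a)
    out = np.zeros_like(C)
    for l in range(2):
        for m, w in enumerate(harmonics):
            out[l, m] = (-1j / hbar * (E[l] + hbar * w * Omega) * C[l, m]
                         - Rdot * sum(d[l, k] * C[k, m] for k in range(2)))
    return out

C = np.zeros((2, 3), dtype=complex)
C[0, 1] = 1.0                             # start in k = S0, n = 0
dt = 0.1
for _ in range(5000):                     # fourth-order Runge-Kutta steps
    k1 = rhs(C); k2 = rhs(C + 0.5 * dt * k1)
    k3 = rhs(C + 0.5 * dt * k2); k4 = rhs(C + dt * k3)
    C = C + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
print(np.sum(np.abs(C) ** 2))             # remains ~1: the EH flow is norm-conserving
```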
To finalize F-CT-MQC equations using the Floquet diabatic states, we give the explicit expression of the classical force of Eq.~(\ref{eqn: classical force 1}) for the trajectory $\nu$. In particular, we can compute the gradient of $\epsilon_{\mathrm{ext}}^\nu(t)$ from Eq.~(\ref{eqn: ext TDPES with states}), and the time derivative of the vector potential. Therefore, as in Eq.~(\ref{eqn: dot Csd 3}), we identify three contributions to the classical force,
\begin{align}\label{eqn: cl force}
\mathbf F_\nu(t)=\mathbf F_\nu^{\mathrm{EH}}(t)+\mathbf F_\nu^{\mathrm{QM}}(t)+\mathbf F_\nu^{\mathrm{EXT}}(t)
\end{align}
where `EH' and `QM' indicate the -- standard -- Ehrenfest-like~\cite{Agostini_CTC2019} and quantum-momentum terms, whereas `EXT' stands for the term depending on the external drive. The Ehrenfest-like contribution is
\begin{align}
\mathbf F_\nu^{\mathrm{EH}}(t)&=\sum_{k,n}\left(-\boldsymbol{\nabla}E_{k}^\nu\right) \left|C_{k,n}^\nu(t)\right|^2+\sum_{k,n}\sum_{l,m}\bar C_{l,m}^\nu(t)C_{k,n}^\nu(t)\left(E_l^\nu-E_k^\nu\right)\mathbf d_{lk}^\nu\delta_{mn}
\end{align}
the quantum-momentum contribution, that couples the trajectories, is
\begin{align}
\mathbf F_\nu^{\mathrm{QM}}(t)&=\frac{2}{\hbar}\sum_{k,n}\left|C_{k,n}^\nu(t)\right|^2\left(\mathbf M^{-1}\boldsymbol{\mathcal P}^\nu(t)\cdot \mathbf f_{k,n}^\nu\right)\Big(\mathbf f_{k,n}^\nu-\mathbf A^\nu(t)\Big)
\end{align}
and the part that depends on the external field is
\begin{align}
\mathbf F_\nu^{\mathrm{EXT}}(t)&=\sum_{k,n}\sum_{l,m}\left(-\boldsymbol{\nabla}V_{mn,lk}^\nu\right)\bar C_{l,m}^\nu(t)C_{k,n}^\nu(t)+\frac{1}{\hbar}\sum_{k,n}\sum_{l,m}\mathrm{Im}\left[\bar C_{l,m}^\nu(t)C_{k,n}^\nu(t)\right]V_{mn,lk}^\nu\left(\mathbf f_{k,n}^\nu-\mathbf f_{l,m}^\nu\right)
\end{align}
Note that the last term in the definition of $\mathbf F_\nu^{\mathrm{EXT}}(t)$ is formally similar to the contribution due to spin-orbit coupling recently introduced in G-CT-MQC~\cite{Agostini_PRL2020, Agostini_JCTC2020_1}.
Before concluding this section, it is worth mentioning the partial normalization condition on the electronic wavefunction that is used to derive Eqs.~(\ref{eqn: EF n}) and~(\ref{eqn: EF el}). The partial normalization condition still holds, and using the expansion in harmonic-electronic eigenstates of the field-free Floquet Hamiltonian it reads
\begin{subequations}
\begin{align}
1&=\frac{1}{T}\int_0^T ds\int d\mathbf r\sum_{k,n}\sum_{l,m} \bar C_{k,n}(\mathbf R,t)C_{l,m}(\mathbf R,t)e^{i(\omega_m-\omega_n)s} \bar{\psi}_k(\mathbf r;\mathbf R) \psi_l(\mathbf r;\mathbf R) \\
&= \sum_{k,n}\left|C_{k,n}(\mathbf R,t)\right|^2\quad \forall\,\, \mathbf R,t.
\end{align}
\end{subequations}
Equations~(\ref{eqn: R dot}), (\ref{eqn: dot Csd 3}) and~(\ref{eqn: cl force}) define the F-CT-MQC algorithm in the Floquet representation, which will be used in the following section to study the dynamics of a two-electronic-state one-dimensional model system subject to a cw laser of different intensities.
\section{Numerical studies}\label{sec: results}
The one-dimensional model employed for the numerical studies using F-CT-MQC is defined by the field-free electronic Hamiltonian
\begin{align}\label{eqn: model Hel}
\hat H_{el}= \left(
\begin{array}{cc}
H_{11}(R) & H_{12}(R) \\
H_{12}(R) & H_{22}(R)
\end{array}
\right) =
\left(
\begin{array}{cc}
\frac{1}{2}K(R-R_1)^2 & \gamma e^{-\alpha(R-R_3)^2} \\
\gamma e^{-\alpha(R-R_3)^2} & \frac{1}{2}K(R-R_2)^2+\Delta
\end{array}
\right)
\end{align}
given in the diabatic basis and depending on one nuclear coordinate $R$. The parameters are chosen as: $K=0.02$~au, $\Delta=0.01$~au, $\gamma=0.01$~au, $\alpha=3.0$~au, $R_1=6.0$~au, $R_2=2.0$~au, $R_3=3.875$~au. The diabatic PESs are parabolas, one with minimum in $R_1$ and the other with minimum in $R_2<R_1$; the parabolas are, thus, displaced in $R$, and in energy as well by the amount $\Delta$; the diabatic coupling has a Gaussian shape, with maximum in $R_3$, and strength regulated by the parameter $\gamma$. The energy difference between the minimum of the $H_{22}(R)$ potential, at $R=R_2$, where the initial wavepacket will be prepared for the dynamics described in the following, and $H_{11}(R_2)$ is $\Delta E = 0.15$~au.
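The model lends itself to a compact implementation. The sketch below (NumPy) builds the diabatic Hamiltonian of Eq.~(\ref{eqn: model Hel}) with the parameters above, locates the avoided crossing as the minimum of the adiabatic gap, and evaluates the nonadiabatic coupling with the Hellmann-Feynman formula:

```python
import numpy as np

# Diabatic Hamiltonian of Eq. (model Hel), with the parameters quoted above (au).
K, Delta, gamma, alpha = 0.02, 0.01, 0.01, 3.0
R1, R2, R3 = 6.0, 2.0, 3.875

def h_el(R):
    h12 = gamma * np.exp(-alpha * (R - R3) ** 2)
    return np.array([[0.5 * K * (R - R1) ** 2, h12],
                     [h12, 0.5 * K * (R - R2) ** 2 + Delta]])

def dh_el(R):                       # analytic gradient dH/dR
    dh12 = -2.0 * alpha * (R - R3) * gamma * np.exp(-alpha * (R - R3) ** 2)
    return np.array([[K * (R - R1), dh12],
                     [dh12, K * (R - R2)]])

def adiabatic(R):
    E, U = np.linalg.eigh(h_el(R))  # adiabatic PESs, E_0(R) <= E_1(R)
    # Hellmann-Feynman nonadiabatic coupling d_01 = <0|dH/dR|1> / (E_1 - E_0)
    d01 = U[:, 0] @ dh_el(R) @ U[:, 1] / (E[1] - E[0])
    return E, d01

grid = np.linspace(1.0, 7.0, 1201)
gap = np.array([np.diff(adiabatic(R)[0])[0] for R in grid])
print(grid[np.argmin(gap)])         # avoided crossing at R = R3, where H11 = H22
print(h_el(R2)[0, 0] - h_el(R2)[1, 1])   # Delta E = 0.15 au, as quoted above
```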
The interaction with the laser field is described in the dipole approximation as $\hat V(t) =-\hat\mu E(t) = -\hat\mu E_0\cos(\Omega t)$, with $\hat\mu$ the dipole moment operator. Neglecting the nuclear contribution to the total dipole moment, and setting to zero the diagonal elements of $\hat\mu$ in the diabatic basis, the off-diagonal elements of the system-laser interaction Hamiltonian are $-\mu(R)E_0\cos(\Omega t)=-\beta RE_0\cos(\Omega t)$.
The transition dipole moment $\mu(R)=\beta R$ is chosen to be a linear function of $R$, and $\beta=0.05$~au. The frequency of the driving field is set as $\Omega=0.05$~au, and two cases will be studied to test F-CT-MQC, a weak field case with $E_0=0.25$~au
and a strong field case with $E_0=0.5$~au.
Quantum dynamics is initiated in the ground vibrational state of the left diabatic potential well (corresponding to $S_0$), centered at $R=2$~au with zero average initial momentum. Nuclear mass has been set to the value $M=20000$~au. Reference results are obtained by integrating the full tdSE in the diabatic basis by using the split-operator technique~\cite{spo} with a time step of $dt=0.1$~au. Since the Hamiltonian explicitly depends on time, the stability of the numerical integration has been confirmed by monitoring the norm of the wavefunction. The diabatic-to-adiabatic change of basis has been performed to compute the population of the adiabatic states $S_0$ and $S_1$, for comparison with F-CT-MQC calculations. While quantum dynamics is performed to determine the full wavefunction as a function of time, its exact-factorization form can be determined as well, in order to compute the TDPES (that will be shown below). To this end, a choice of gauge has to be made. In the quantum dynamics case, the gauge can be set by imposing that the time-dependent vector potential is zero.
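For reference, the split-operator propagation can be sketched in a few lines. The example below is a minimal single-surface version (the actual reference calculations use the two-state diabatic form with the time-dependent coupling): half-step in the potential, full kinetic step in $k$-space, half-step in the potential, with the norm monitored as described above:

```python
import numpy as np

# Minimal single-surface split-operator sketch (hbar = 1 au): the reference
# calculations use the two-state diabatic version of this scheme.
M, K, R2 = 20000.0, 0.02, 2.0
N = 512
R = np.linspace(-2.0, 10.0, N, endpoint=False)
dR = R[1] - R[0]
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dR)
V = 0.5 * K * (R - R2) ** 2
dt = 0.1

w = (K / M) ** 0.5                        # ground state of the well as initial state
psi = (M * w / np.pi) ** 0.25 * np.exp(-0.5 * M * w * (R - R2) ** 2)

expV = np.exp(-0.5j * dt * V)             # half-step potential propagator
expT = np.exp(-0.5j * dt * k ** 2 / M)    # full kinetic step, exp(-i dt k^2 / 2M)
for _ in range(5000):
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))
norm = np.sum(np.abs(psi) ** 2) * dR
print(norm)                               # the norm stays ~1, as monitored in the text
```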
Trajectory-based F-CT-MQC calculations are performed in the Floquet diabatic basis. $N_{tr}=100$ trajectories are evolved according to Eqs.~(\ref{eqn: R dot}) and~(\ref{eqn: cl force}) with the velocity-Verlet algorithm starting with Wigner-sampled initial conditions (the Wigner distribution is determined as the Wigner transform of the initial nuclear probability density of quantum calculations). The Fourier series used to represent the Floquet eigenmodes is truncated to include $2 N_{max}+1$ components. In the calculations presented below, convergence is tested by increasing the size of the basis with $N_{max}=1,2,3,4,5$ (calculations with $N_{max}=6,7$ have been performed as well but they are not shown because convergence is reached with $N_{max}=5$). The electronic initial condition is chosen by setting equal to one the harmonic-electronic coefficient corresponding to $k=S_0$ and $n=0$ for all trajectories. We recall that the index $k$ indicates the electronic state, in this case $k$ can be $S_0$ or $S_1$, and $n$ indicates the harmonic, in this case $n\in[-N_{max},N_{max}]$. With this initial condition, the electronic equation~(\ref{eqn: dot Csd 3}) is integrated with the fourth-order Runge-Kutta algorithm. The time step for the nuclear and the electronic integration is $dt=0.1$~au.
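The Wigner-sampled initial conditions quoted above can be generated from the ground vibrational state of the harmonic well, whose Wigner transform is a Gaussian in both $R$ and $P$ with widths $\sigma_R=\sqrt{\hbar/(2M\omega)}$ and $\sigma_P=\sqrt{M\hbar\omega/2}$, $\omega=\sqrt{K/M}$ ($\hbar=1$~au; the seed is arbitrary). A sketch:

```python
import numpy as np

# Wigner sampling of the initial conditions (hbar = 1 au; seed arbitrary): the
# Wigner transform of the harmonic ground state is a Gaussian in R and in P.
M, K, R0, Ntr = 20000.0, 0.02, 2.0, 100
omega = np.sqrt(K / M)
sigma_R = np.sqrt(1.0 / (2.0 * M * omega))   # position width, ~0.158 au
sigma_P = np.sqrt(M * omega / 2.0)           # momentum width, ~3.162 au

rng = np.random.default_rng(42)
R_init = rng.normal(R0, sigma_R, Ntr)
P_init = rng.normal(0.0, sigma_P, Ntr)       # zero average initial momentum
print(sigma_R, sigma_P, sigma_R * sigma_P)   # minimum-uncertainty: product = 0.5
```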
F-CT-MQC calculations provide direct access to the populations of the Floquet diabatic states along a trajectory $\nu$. In order to compare trajectory-based results to the reference results, the populations $\rho_k(t)$ of the adiabatic states $k=S_0,S_1$ as functions of time are determined as
\begin{align}\label{eqn: BO pop}
\rho_k(t) = \frac{1}{N_{tr}} \sum_{\nu=1}^{N_{tr}} \left|\sum_{n=-N_{max}}^{n=N_{max}}C_{k,n}^\nu(t)e^{i\omega_nt}\right|^2
\end{align}
Similarly, it is possible to interpret the dynamics in terms of the number of photons exchanged between the system and the external field. To this end, the photon population can be estimated as
\begin{align}\label{eqn: photon pop}
\rho_n(t) = \frac{1}{N_{tr}} \sum_{\nu=1}^{N_{tr}}\sum_{k=S_0,S_1} \left|C_{k,n}^\nu(t)\right|^2
\end{align}
where $n\in[-N_{max},N_{max}]$.
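Equations~(\ref{eqn: BO pop}) and~(\ref{eqn: photon pop}) translate directly into code: $\rho_k$ sums over harmonics coherently (with the $e^{i\omega_n t}$ phases), while $\rho_n$ traces over the electronic states incoherently. A sketch with randomly generated, per-trajectory-normalized coefficients:

```python
import numpy as np

# Eqs. (BO pop) and (photon pop) for hypothetical random coefficients
# C[nu, k, n]: trajectory nu, electronic state k in {S0, S1}, harmonic n.
rng = np.random.default_rng(1)
Ntr, Nmax, Omega = 100, 5, 0.05
n_vals = np.arange(-Nmax, Nmax + 1)
C = rng.normal(size=(Ntr, 2, n_vals.size)) + 1j * rng.normal(size=(Ntr, 2, n_vals.size))
C /= np.linalg.norm(C, axis=(1, 2), keepdims=True)   # per-trajectory normalization

def rho_el(C, t):
    # coherent sum over harmonics, then incoherent average over trajectories
    amp = np.sum(C * np.exp(1j * n_vals * Omega * t), axis=2)
    return np.mean(np.abs(amp) ** 2, axis=0)

def rho_ph(C):
    # incoherent sum over electronic states, averaged over trajectories
    return np.mean(np.sum(np.abs(C) ** 2, axis=1), axis=0)

print(rho_ph(C).sum())          # photon populations sum to 1 by normalization
```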
\subsection{Case of a weak cw laser}
In the weak-field case, some of the density that is initially prepared in the vibrational ground state of the left well, in $S_0$, is transferred to the electronic excited state $S_1$. While the field keeps driving population back and forth between the two states, with the main portion of the nuclear density still centered around $R=2$~au in $S_0$, the excited wavepacket evolves towards the avoided crossing located at $R\sim 4$~au, and transfers back population to the ground state. The dynamics just described is detailed in Fig.~\ref{fig: TDPES weak laser} where the nuclear density at three different times is shown, along with its decomposition in terms of Born-Oppenheimer contributions and the gauge-invariant part of the TDPES given as the sum of Eqs.~(\ref{eqn: TDPES}) and~(\ref{eqn: TDPES ext}) (left panels).
The gauge-invariant part of the TDPES does not contain the term with the partial time derivative of Eq.~(\ref{eqn: TDPES}). It has been shown in previous work~\cite{Gross_MP2013, Gross_JCP2015, Curchod_JCP2016}, and it is the case here as well, that this gauge-dependent contribution to the TDPES nearly mirrors the shape of the gauge-invariant part, and it is a piecewise constant function of $R$; its main effect is to reduce the height of the steps, as the step observed at $t=856$~au in Fig.~\ref{fig: TDPES weak laser} at about $R=3$~au, but leaving unchanged the slope of the gauge-invariant TDPES outside the steps. It should be noted that, in the present case, where an external time-dependent field is applied, the features of the gauge-invariant and gauge-dependent parts of the TDPES are not as neat as in the field-free case.
\begin{figure}[h!]
\centering
\includegraphics[width=.8\textwidth]{tdpes_weak.pdf}
\caption{Weak-field case. Left panels: Floquet diabatic PESs ($S_0$ red lines and $S_1$ blue lines) with $N_{max}=4$. The thick red and blue lines identify the $S_0$ and $S_1$ PESs for the harmonic $n=0$. Nuclear density (black thin lines), along with its decomposition in $S_0$ (dashed red lines) and $S_1$ (dashed blue lines) contributions, and the gauge-invariant part of the TDPES (black line-dots) are shown at times $t=856,1585,1962$~au. Right panels: Floquet adiabatic PESs computed with $N_{max}=4$. Nuclear density (black thin lines) and the TDPES (black line-dots) are shown at the same times as in the left panels.}
\label{fig: TDPES weak laser}
\end{figure}
The TDPES provides information about the energy scale involved in the studied process, and, consequently, its comparison with the Floquet (a)diabatic PESs allows us to estimate the number of harmonics required to achieve convergence of the Floquet-based calculations. With this comparison in mind, we show in Fig.~\ref{fig: TDPES weak laser} the Floquet diabatic PESs (left panel) and the Floquet adiabatic PESs (right panels). Floquet diabatic PESs are simply the Born-Oppenheimer PESs, obtained by diagonalization of the electronic Hamiltonian~(\ref{eqn: model Hel}) at each $R$, shifted by the constant energy of the harmonics $n\hbar\Omega$. In Fig.~\ref{fig: TDPES weak laser}, the case $N_{max}=4$ is shown, and the thick red and blue lines refer to the harmonic $n=0$. In this case, all along the dynamics the TDPES oscillates within energy values comprised between $E_{S_0}(R)-4\hbar\Omega$ and $E_{S_1}(R)+4\hbar\Omega$, suggesting that F-CT-MQC results should converge with $N_{max}=4$.
On the right panels of Fig.~\ref{fig: TDPES weak laser}, nuclear dynamics and the TDPES are superimposed to the Floquet adiabatic PESs, which are the eigenvalues of the Floquet Hamiltonian explicitly including the external drive. Also in this case, the Floquet basis is truncated to $N_{max}=4$. The main observation arising from the comparison of the left panels and right panels of Fig.~\ref{fig: TDPES weak laser} is that the Floquet adiabatic PESs can be very different from the Floquet diabatic PESs, as they represent the energy of hybrid electronic and field states that are strongly coupled to each other. Additional observations on the relation between Floquet adiabatic and Floquet diabatic PESs are reported in Section~\ref{sec: observations}.
Figure~\ref{fig: Floquet pop weak laser} shows the populations of the Floquet diabatic states for the simulation where $N_{max}=5$. These quantities are estimated as the average over the trajectories of the squared moduli of the coefficients $C_{k,n}^\nu(t)$. The analysis of the populations reported in Fig.~\ref{fig: Floquet pop weak laser} gives information about the number of photons exchanged between the system and the external field during the dynamical process.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{floquet_pop_weak.pdf}
\caption{Weak-field case. Populations of the Floquet diabatic states as functions of time. Only the non-zero populations are shown.}
\label{fig: Floquet pop weak laser}
\end{figure}
Figure~\ref{fig: Floquet pop weak laser} shows that only a few Floquet diabatic states are substantially populated during the dynamics. The state labelled $k=S_0,n=0$ starts with full occupation, but rapidly transfers some population to $k=S_1,n=-1$, suggesting that a one-photon process is taking place to excite a portion of the ground-state wavepacket. Since the nonadiabatic couplings mediate population transfer between different electronic states within the same harmonic, and since in the region where the nuclear density is prepared these couplings are mostly zero, population transfer at the initial times is only driven by the external field. A small amount of population is also transferred to $k=S_1,n=+1$ at the initial time, once again suggesting a one-photon process, but the population remains close to zero ($<0.02 \,\forall\,t$), as well as all other populations but $k=S_0,n=0$ and $k=S_1,n=-1$.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{pop_weak.pdf}
\caption{Weak-field case. Upper panel: population of the $S_0$ state as a function of time, from exact quantum-dynamics calculations (black line), and from F-CT-MQC calculations with $N_{max}=1$ (red line), $N_{max}=2$ (blue line), $N_{max}=3$ (purple line), $N_{max}=4$ (orange line), $N_{max}=5$ (green line). Lower panel: population of the $S_0$ state as a function of time, averaged over a period of the driving field with Eq.~(\ref{eqn: average BO pop}), from exact quantum-dynamics calculations, and from F-CT-MQC calculations with $N_{max}=1,2,3,4,5$. The color code is the same as in the upper panel.}
\label{fig: pop weak laser}
\end{figure}
Using Eq.~(\ref{eqn: BO pop}), we can estimate the populations of the physical electronic states as functions of time from the Floquet-based calculations. The population of the ground state $S_0$ is shown in Fig.~\ref{fig: pop weak laser}. In the upper panel, the estimate from Eq.~(\ref{eqn: BO pop}) with $k=S_0$ is used, with different values of $N_{max}$ for F-CT-MQC calculations, to test convergence with the number of harmonics by comparing with exact results. In the lower panel, the oscillations in the populations are averaged out by computing a moving average over a period, namely
\begin{align}\label{eqn: average BO pop}
\langle\rho_k(t)\rangle_T = \frac{1}{T}\int_t^{t+T} d\tau\frac{1}{N_{tr}} \sum_{\nu=1}^{N_{tr}} \left|\sum_{n=-N_{max}}^{n=N_{max}}C_{k,n}^\nu(\tau)e^{i\omega_n\tau}\right|^2
\end{align}
and similarly for quantum calculations, which allows us to focus on the overall behavior of the population as a function of time. As predicted from the analysis of the TDPES in Fig.~\ref{fig: TDPES weak laser}, $N_{max}=4$ is sufficient to converge F-CT-MQC, even though not to the exact value of the $S_0$ population. However, for the simulated dynamics, the relative error on the population remains within 10\% of the reference (estimated from the lower panel of Fig.~\ref{fig: pop weak laser}). Finally, note that, in the upper panel of Fig.~\ref{fig: pop weak laser}, the amplitude of the oscillations of F-CT-MQC results is smaller than in the exact results, probably as an effect of the averaging over the trajectories.
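The moving average of Eq.~(\ref{eqn: average BO pop}) is easy to sketch on a synthetic trace (a hypothetical constant population plus an oscillation at the drive frequency $\Omega$):

```python
import numpy as np

# Moving average over one drive period (Eq. (average BO pop)), applied to a
# synthetic population trace rho(t) = 0.8 + 0.1 cos(Omega t): the window of
# length T removes the oscillation at Omega and leaves the slow component.
Omega = 0.05
T = 2.0 * np.pi / Omega
dt = 0.1
t = np.arange(0.0, 3000.0, dt)
rho = 0.8 + 0.1 * np.cos(Omega * t)

nT = int(round(T / dt))                 # samples per period (~1257)
avg = np.array([rho[i:i + nT].mean() for i in range(t.size - nT)])
print(np.max(np.abs(avg - 0.8)))        # < 1e-3: the fast oscillation is averaged out
```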
Using Eq.~(\ref{eqn: photon pop}), we can estimate the photon population to interpret the dynamics in terms of single- or multi-photon absorption and emission processes. In order to average out the fast oscillations in photon populations that are similar to the oscillations observed in the upper panel of Fig.~\ref{fig: pop weak laser}, we show in Fig.~\ref{fig: Nph weak laser} $\langle\rho_n(t)\rangle_T$, which is a quantity analogous to the one derived in Eq.~(\ref{eqn: average BO pop}), but applied to the photon population.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{nph_weak.pdf}
\caption{Weak-field case. Photon population as a function of time, estimated from Eq.~(\ref{eqn: photon pop}) and averaged over a period of the driving field with an expression equivalent to Eq.~(\ref{eqn: average BO pop}). Only the populations with values $> 0.05$ $\forall\, t$ are shown.}
\label{fig: Nph weak laser}
\end{figure}
Figure~\ref{fig: Nph weak laser} shows that the dynamics is dominated by zero-photon processes, with a small contribution from one-photon absorption (negative values of $n$) mainly appearing after 2000~au.
In Fig.~\ref{fig: density weak laser}, we compare the nuclear density as a function of time with the distribution of F-CT-MQC trajectories for $N_{max}=2$ (gray circles) and $N_{max}=5$ (black circles). As discussed above, the nuclear density mainly remains localized around $R=2$~au at all times, even though already at about 500~au, a small portion of the density -- evolving on the excited state -- moves towards large values of $R$. With $N_{max}=2$, classical trajectories follow closely the main portion of the nuclear density, but the diverging branch is missed. With $N_{max}=5$, instead, some trajectories diverge from the main bundle localized around $R=2$~au, even though the time scale is not as in the reference results, and the trajectories do not go as far as the quantum wavepacket within the simulated dynamics.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{density_weak.pdf}
\caption{Weak-field case. Colored areas: nuclear density as a function of $R$ and $t$. Circles: distributions of classical trajectories for $N_{max}=2$ (gray circles) and $N_{max}=5$ (black circles) plotted every 100~au.}
\label{fig: density weak laser}
\end{figure}
\subsection{Case of a strong cw laser}
Even though the amplitude of the external field is only doubled in going from the weak-field case to the strong-field case, the dynamics in this second example is very different from the previous one, which can be confirmed by comparing nuclear dynamics from Fig.~\ref{fig: TDPES weak laser} and from Fig.~\ref{fig: TDPES strong laser}. However, similar general conclusions on the performance of F-CT-MQC can be drawn.
Let us first analyze the dynamics shown in Fig.~\ref{fig: TDPES strong laser}, based on the comparison between the TDPES and the Floquet diabatic PESs (left panels) or the Floquet adiabatic PESs (right panels). The TDPES is at almost all times comprised within the Floquet diabatic PESs shown in the figure, for $N_{max}=4$.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{tdpes_strong.pdf}
\caption{Strong-field case. The panels are similar to those in Fig.~\ref{fig: TDPES weak laser} and the same color-code is used. The nuclear density and the TDPES are shown at times $t=867,1443,1874$~au.}
\label{fig: TDPES strong laser}
\end{figure}
In this case, the external field excites the nuclear wavepacket from $S_0$ to $S_1$. While population transfer between the two states is continuously driven, the excited portion of the wavepacket moves towards the avoided crossing and transfers population back to the ground state. At the same time, the ground-state wavepacket is driven towards the avoided crossing itself, where it encounters the excited-state wavepacket, producing complex interference patterns at long times. Clearly, this dynamics is very difficult to capture based on classical trajectories without accounting for interference effects.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{floquet_pop_strong.pdf}
\caption{Strong-field case. Populations of the Floquet diabatic states as functions of time. Only the non-zero populations are shown.}
\label{fig: Floquet pop strong laser}
\end{figure}
Analysis of the Floquet diabatic populations shows that multi-photon processes take place during the dynamics. As in the previous case, the state labelled $k=S_0,n=0$ is fully populated initially, but it rapidly transfers population mainly to $k=S_1,n=-3$, to $k=S_1,n=-1$ and to $k=S_1,n=+1$. At later times, after 1000~au, other states become populated, even if their population remains below 0.1.
Applying Eq.~(\ref{eqn: BO pop}), the populations of $S_0$ and $S_1$ states are determined from the populations of the Floquet diabatic states, and the $S_0$ population is shown in Fig.~\ref{fig: pop strong laser}. As mentioned above, convergence is achieved for $N_{max}=4$, but the results slightly differ from the reference calculations. In particular, the low-frequency oscillations of the $S_0$ population are not quite captured, as is evident in the plot of the populations averaged over a period of the drive (lower panel). Indeed, inclusion of more harmonics, for instance from $N_{max}=2$ to $N_{max}=4$, allows one to better reproduce the decay between 500~au and 1500~au, even though quantitative agreement is missing all along the simulated dynamics.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{pop_strong.pdf}
\caption{Strong-field case. The panels are similar to those in Fig.~\ref{fig: pop weak laser} and the same color-code is used.}
\label{fig: pop strong laser}
\end{figure}
Also in this case, and probably more strongly than in the weak-field case, the amplitude of the high-frequency oscillations of the population (upper panel) is suppressed in F-CT-MQC calculations, once again as an effect of the averaging over the trajectories.
Similarly to the weak-field case, the populations of the Floquet states allow for the calculation of photon populations. Figure~\ref{fig: Nph strong laser} shows that at short times the dynamics is dominated by a zero-photon process, since the population $\rho_{n=0}(t)$ is much larger than other contributions for $t<1000$~au. In contrast with the weak-field case, however, a single-photon absorption process has a non-zero contribution from the beginning of the simulated dynamics. After 1000~au, multi-photon absorption processes are observed, as well as single- and two-photon emissions ($\rho_{n=+1}(t)$ and $\rho_{n=+2}(t)$ are small but non-zero after 2000~au).
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{nph_strong.pdf}
\caption{Strong-field case. Photon population as a function of time, estimated from Eq.~(\ref{eqn: photon pop}) and averaged over a period of the driving field with an expression equivalent to Eq.~(\ref{eqn: average BO pop}). Only the populations with values $> 0.05$ $\forall\, t$ are shown.}
\label{fig: Nph strong laser}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{density_strong.pdf}
\caption{Strong-field case. Similar to the results reported in Fig.~\ref{fig: density weak laser} with the same color-code.}
\label{fig: density strong laser}
\end{figure}
Finally, we compare in Fig.~\ref{fig: density strong laser} the nuclear probability density with the distribution of F-CT-MQC trajectories for $N_{max}=2$ (gray circles) and $N_{max}=5$ (black circles). In general, we observe that the density spreads over time and moves towards large values of $R$. For $N_{max}=2$ the trajectories only follow one portion of the nuclear density, whereas for $N_{max}=5$ the whole nuclear configuration space spanned by the quantum wavepacket is sampled as well by the trajectories.
In this case, and in contrast with the weak-field case, the maximum number of harmonics included in the F-CT-MQC calculations provides satisfactory results for the nuclear dynamics, while it misses some features of the electronic populations. Conversely, in the weak-field case, the nuclear dynamics was not correctly captured by the trajectories, but a better agreement was achieved for the electronic populations.
\subsection{General discussion on F-CT-MQC}\label{sec: observations}
The exact factorization, together with its trajectory-based treatment, can be formulated employing the Floquet formalism in different, equivalent, ways. In Section~\ref{sec: theory}, we have already discussed two possibilities, which we referred to as the Floquet adiabatic and Floquet diabatic pictures. Nonetheless, the explicit derivation of the F-CT-MQC equations and the numerical results have been presented only in the Floquet diabatic representation. The choice was made for practical reasons: (i) the preparation of the initial state for the dynamics, and (ii) the calculation of energy gradients and derivative couplings. The initial state of a simulation is usually chosen to be one of the ``physical'' states of the system, e.g., the ground state, which is easily identified in the Floquet diabatic picture by selecting the harmonic $n=0$. However, in the Floquet adiabatic picture, this identification is not straightforward. In addition, quantum-chemistry codes usually provide gradients of the Born-Oppenheimer PESs, which are identical to the gradients of Floquet diabatic PESs, but not of Floquet adiabatic PESs. This clearly limits the possibility of exploiting a trajectory-based algorithm that employs the Floquet adiabatic representation in combination with quantum chemistry for molecular calculations. We should also mention that some Floquet adiabatic PESs present trivial crossings, where the PESs are degenerate and, thus, the derivative couplings are singular. Trivial crossings need special treatment in combination with trajectory-based simulations, and clearly depend on the particular form of the external field. All these issues are easily circumvented in the Floquet diabatic picture. However, an intriguing question remains as to whether electronic-structure properties in the Floquet adiabatic representation are easily accessible based on standard quantum-chemistry theory (and codes).
It is important to mention that F-CT-MQC is readily suitable for on-the-fly calculations in combination with ab initio PESs and derivative couplings. The only additional elements used in Section~\ref{sec: results} in comparison to field-free simulations are the electronic transition dipole moment of the molecule and its spatial derivatives, which have to be computed at each nuclear configuration visited by the trajectories, and are both accessible based on quantum-chemistry calculations~\cite{Martinez_PCCP2017}.
In the literature~\cite{Subotnik_JCP2020, Subotnik_JCTC2020, Schmidt_PRA2016, Gonzalez_JPCA2012}, the Floquet diabatic picture has been preferred over the Floquet adiabatic one, especially in combination with the trajectory surface hopping algorithm. In fact, as shown in Figs.~\ref{fig: TDPES weak laser} and~\ref{fig: TDPES strong laser}, the Floquet adiabatic PESs can present many avoided crossings, as well as trivial crossings. These features potentially pose challenges for the fewest switches surface hopping algorithm, because the trajectories often hop from one state to another; in addition, a large number of trajectories is required for convergence in order to satisfactorily sample the ``hopping space''. Perhaps, when working with dense manifolds of coupled Floquet adiabatic states, an Ehrenfest-like approach would be preferable. This suggests that F-CT-MQC in the Floquet adiabatic representation might perform well, similarly to the case of the Floquet diabatic representation, due to its Ehrenfest-like form -- provided that the issues (i) and (ii) above are efficiently circumvented.
An alternative strategy for deriving F-CT-MQC can be envisaged: rather than invoking the Fourier representation of Floquet eigenmodes, and introducing the harmonics to account for the periodicity of the external drive in the electronic basis, one could directly work with Floquet eigenmodes, as, for instance, those determined by solving the eigenvalue problem~(\ref{eqn: Fl adiabatic eqn}). This avenue has not been explored in this work, but it would be interesting to investigate how the two formulations compare in the trajectory-based scheme. Clearly, if the exact equations were solved, the two formulations would yield the same result; however, this is not always true when approximations are invoked.
An additional approach to including the effect of the external drive has been tested, but the results have not been reported in the present work. Such an approach simply includes the external drive in the CT-MQC equations, where the electronic representation used is the ``standard'' Born-Oppenheimer representation. CT-MQC yields, in this case, directly the populations of the physical electronic states, and an interpretation in terms of photon absorption or emission processes is not possible. For the model studied here, trajectory-based results are very close to exact and Floquet-based results in both weak- and strong-field cases. Therefore, even though such a strategy to include the external time-dependent field in CT-MQC has not yet been systematically investigated, it is a promising route for future theoretical and numerical developments.
\section{Conclusions}\label{sec: conclusions}
Trajectory-based excited-state simulations of systems subject to an external drive are nowadays very challenging. Standard trajectory-based schemes might yield different results depending on the electronic basis, or electronic representation, used, since the approximations they are based on are derived in a particular electronic basis. The choice of electronic representation becomes, thus, of paramount importance. In order to easily generalize standard schemes, one might be tempted to use the so-called quasi-static representation, where the electronic states are defined at each time of the propagation as the eigenstates of the electronic problem for a given nuclear configuration (as in standard field-free situations) including the effect of the external time-dependent field. However, unless the external field varies slowly in time, the electronic quasi-static states are not \textsl{descriptive} of the state of the system, and numerical simulations yield unphysical results~\footnote{Note that this strategy has been tested for the cases studied in the present paper. While the high-frequency oscillations of the adiabatic populations driven by the external field were captured in the quasi-static representation, no population transfer between the $S_0$ and $S_1$ states was observed for both field strengths studied in this work.}. The Floquet formalism is, instead, adequate to treat periodically driven systems because the periodic time dependence is somehow treated in a static way -- in Fourier space -- by extending the electronic space to the space of harmonics. The drawbacks are, clearly, the fact that the electronic Hamiltonian becomes unbounded, and that convergence studies on the number of harmonics to be included have to be carried out (and, depending on the intensity of the drive, a large number of harmonics needs to be considered).
As it has been shown in previous work~\cite{Maitra_PRL2019, Tokatly_EPJB2018, Maitra_EPJB2018, Schmidt_PRA2017, Maitra_PCCP2017, Maitra_PRL2015, Gross_PRL2010, Suzuki_PCCP2015, Suzuki_PRA2014}, the exact-factorization formalism naturally lends itself to the treatment of dynamics in the presence of an external time-dependent field. In addition, the Floquet formalism can be used together with the exact factorization to interpret and to justify common approaches based on trajectories~\cite{Schmidt_PRA2017}. In the present work, we took a step forward, and we employed the trajectory-based solution of the exact-factorization equations, i.e., the CT-MQC scheme, together with the Floquet formalism, to propose a new algorithm designed to treat periodically driven electron-nuclear systems in the presence of nonadiabatic effects. The CT-MQC algorithm has the advantage of being easily adapted to treat various physical situations, like standard nonadiabatic effects~\cite{Gross_PRL2015, Gross_JCTC2016, Gross_JPCL2017}, spin-orbit coupling~\cite{Agostini_PRL2020, Agostini_JCTC2020_1} (G-CT-MQC), or external time-dependent fields (F-CT-MQC). The fundamental equations do not have to be completely modified or adapted to include different effects, since such additional effects can be easily included starting from the -- quantum-mechanical -- formulation of the time-dependent Schr\"odinger equation in its exact-factorization form.
We presented the first application of F-CT-MQC to the treatment of excited-state processes with explicit inclusion of a time-dependent external field. The algorithm has been adapted to treat periodically-driven systems with the support of the Floquet formalism, and tested on a model system subject to an external field with different intensities. The results are promising, especially aiming at the combination with quantum-chemistry approaches to compute electronic-structure properties, but there is clearly room for the development of refined approximation strategies for F-CT-MQC that will be the focus of future studies.
\section*{Data availability statement}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
We introduce the super Patalan numbers as a generalization of the super Catalan numbers.
The super Catalan numbers \cite[A068555]{OEIS} were studied by Gessel in his paper on the super ballot numbers \cite{GESSEL}.
(The term super Catalan numbers is also used to refer to a different sequence; we are
generalizing the term as used by Gessel.)
Just as the super Catalan numbers form a two dimensional array that extends the Catalan numbers,
the super Patalan numbers of order $p$ form a two dimensional array that extends the Patalan numbers of order $p$.
We start with the definitions of the super Catalan numbers and of the Patalan numbers.
\begin{defin}
Define the \emph{super Catalan numbers} $S(m,n)$ by
\begin{equation*} S(m,n) = \frac{(2m)!(2n)!}{m!n!(m+n)!}.
\end{equation*}
\end{defin}
The Catalan numbers $C_n$ are contained in the super Catalan numbers as
$2C_n = S(n,1) = \frac{2(2n)!}{n!(n+1)!}$.
\begin{defin}
\label{PATALAN_DEF}
Let $p$ be a positive integer with $p>1,$
and let $q$ be a positive integer with $q<p$.
Define the \emph{Patalan numbers of order $p$} to be
the sequence $a(n)$ with
$a(0) = 1$, and
\begin{equation}
a(n) = p(pn-1)a(n-1)/(n+1).
\end{equation}
Also define the \emph{$(p,q)$-Patalan numbers} to be
the sequence $b(n)$ with
$b(0) = q$, and
\begin{equation}
b(n) = p(pn-q)b(n-1)/(n+1).
\end{equation}
\end{defin}
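Definition~\ref{PATALAN_DEF} is easy to check numerically. The following Python sketch (illustrative, not part of the paper) generates the Patalan numbers of order $p$ with exact rational arithmetic; for $p=2$ it reproduces the Catalan numbers.

```python
from fractions import Fraction

def patalan(p, n_terms):
    """First n_terms Patalan numbers of order p, from the recurrence
    a(0) = 1,  a(n) = p(pn - 1) a(n-1) / (n + 1)."""
    a = [Fraction(1)]
    for n in range(1, n_terms):
        a.append(Fraction(p * (p * n - 1), n + 1) * a[-1])
    assert all(x.denominator == 1 for x in a)  # the ratios are exact integers
    return [x.numerator for x in a]
```

For instance, `patalan(2, 6)` returns the Catalan numbers `[1, 1, 2, 5, 14, 42]`, while `patalan(3, 5)` returns `[1, 3, 15, 90, 594]`.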
The Patalan numbers of order $p$ \cite[A025748, A025749, \ldots, A025757]{OEIS} generalize the Catalan numbers.
In particular the Catalan numbers are the Patalan numbers of order $2$. Also, the Patalan
numbers of order $p$ have generating function $\frac{1-(1-p^2x)^{1/p}}{px}$, which generalizes the generating
function of the Catalan numbers.
Now we define the super Patalan numbers as an extension of the Patalan numbers,
and generalizing the super Catalan numbers.
\begin{defin}
\label{SUPER_PATALAN_DEF}
Define the sequence $Q(i,j)$ of \emph{$(p,q)$-super Patalan numbers} by
\begin{equation}
\label{SPDEF1}
Q(0,0) = 1,
\end{equation}
\begin{equation}
\label{SPDEF2}
Q(i,0) = p(pi-q)Q(i-1,0)/i,
\end{equation}
and
\begin{equation}
\label{SPDEF3}
Q(i,j) = p(pj-p+q) Q(i,j-1)/(i+j).
\end{equation}
Let the \emph{super Patalan numbers of order $p$} be the $(p,1)$-super Patalan numbers.
\end{defin}
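The recurrences of Definition~\ref{SUPER_PATALAN_DEF} are straightforward to tabulate. The Python sketch below (illustrative) builds the $(p,q)$-super Patalan array with exact rational arithmetic, which also lets one check the twisted symmetry discussed below, namely that the $(p,q)$ and $(p,p-q)$ arrays are transposes of each other.

```python
from fractions import Fraction

def super_patalan(p, q, size):
    """size x size table of (p,q)-super Patalan numbers, from the defining
    recurrences: Q(0,0) = 1, Q(i,0) = p(pi - q) Q(i-1,0)/i,
    Q(i,j) = p(pj - p + q) Q(i,j-1)/(i+j)."""
    Q = [[Fraction(0)] * size for _ in range(size)]
    Q[0][0] = Fraction(1)
    for i in range(1, size):
        Q[i][0] = Fraction(p * (p * i - q), i) * Q[i - 1][0]
    for i in range(size):
        for j in range(1, size):
            Q[i][j] = Fraction(p * (p * j - p + q), i + j) * Q[i][j - 1]
    return Q
```

For example, `super_patalan(3, 1, 5)` and `super_patalan(3, 2, 5)` come out as transposes of each other.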
The super Patalan numbers contain the
Patalan numbers similarly to how the super Catalan numbers contain the Catalan numbers,
but they do not have quite as simple an
expression as the super Catalan numbers.
In particular, they do not form a symmetric array.
While the super Patalan numbers are not symmetric, they do have a twisted symmetry in that
the arrays of $(p,q)$-super Patalan numbers and $(p,p-q)$-super Patalan numbers are transposes of each other.
The Patalan numbers are contained in the super Patalan numbers just as
the Catalan numbers are contained in the super Catalan numbers.
If $a(n)$ is the sequence of Patalan numbers of order $p$,
and $P(i,j)$ are the super Patalan numbers of order $p$,
then the Patalan numbers are contained in the super Patalan numbers as
\begin{equation}
\label{COLUMN_ONE_EQN}
pa(n) = P(n,1).
\end{equation}
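Equation \eqref{COLUMN_ONE_EQN} can be verified numerically; the sketch below (illustrative Python, exact arithmetic) builds both sides independently from their defining recurrences, here for $p=3$.

```python
from fractions import Fraction

p, N = 3, 8

# Patalan numbers a(n) of order p, from Definition 2.
a = [Fraction(1)]
for n in range(1, N):
    a.append(Fraction(p * (p * n - 1), n + 1) * a[-1])

# Columns 0 and 1 of the super Patalan matrix of order p (q = 1).
Q0 = [Fraction(1)]
for i in range(1, N):
    Q0.append(Fraction(p * (p * i - 1), i) * Q0[-1])
Q1 = [Fraction(p, i + 1) * Q0[i] for i in range(N)]  # Q(i,1) = p Q(i,0)/(i+1)

assert all(p * a[n] == Q1[n] for n in range(N))  # p a(n) = P(n,1)
```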
It is the author's opinion that to be consistent with equation \eqref{COLUMN_ONE_EQN}
concerning column $1$ of the super Patalan matrix,
the Patalan numbers of order $p$
should start $1, \binom{p}{2}$ \cite[A097188]{OEIS},
and not start $1,1,\binom{p}{2}$ \cite[A025748]{OEIS}.
The fact that the Catalan numbers start $1,1$ is explained by
the Catalan numbers being the Patalan numbers of order $2$, and $\binom{2}{2} = 1$.
\section{Generating functions}
\begin{thm}
\label{PATALAN_THM1}
The $(p,q)$-super Patalan numbers $Q$ satisfy the identity
\begin{equation}
\label{BINOMIAL_IDENT}
Q(m,n)= (-1)^np^{2(m+n)}\binom{m-q/p}{m+n}.
\end{equation}
\end{thm}
\begin{proof}
Let $R(m,n)= (-1)^np^{2(m+n)}\binom{m-q/p}{m+n}$.
Then $R$ satisfies equations \eqref{SPDEF1}-\eqref{SPDEF3}.
We give some details showing that $R$ satisfies \eqref{SPDEF3}:
\begin{eqnarray}
R(i,j) & = & (-1)^jp^{2(i+j)}\binom{i-q/p}{i+j} \\
& = & (-1)^{j}p^{2(i+j)}\frac{-j+1-q/p}{i+j}\binom{i-q/p}{i+j-1} \\
& = & (-1)^{j-1}p^{2(i+j-1)}\frac{p(pj-p+q)}{i+j}\binom{i-q/p}{i+j-1} \\
& = & \frac{p(pj-p+q)}{i+j}R(i,j-1).
\end{eqnarray}
\end{proof}
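Theorem \ref{PATALAN_THM1} can also be confirmed numerically. The Python sketch below (illustrative) compares the recurrences of Definition~\ref{SUPER_PATALAN_DEF} with the closed form \eqref{BINOMIAL_IDENT}, using exact rational arithmetic for the generalized binomial coefficient.

```python
from fractions import Fraction

def gbinom(a, k):
    """Generalized binomial coefficient binom(a, k), rational a, integer k >= 0."""
    r = Fraction(1)
    for i in range(k):
        r = r * (a - i) / (i + 1)
    return r

def Q_rec(p, q, m, n):
    """(p,q)-super Patalan number Q(m,n) from the defining recurrences."""
    v = Fraction(1)
    for i in range(1, m + 1):
        v *= Fraction(p * (p * i - q), i)          # down column 0
    for j in range(1, n + 1):
        v *= Fraction(p * (p * j - p + q), m + j)  # along row m
    return v

def Q_closed(p, q, m, n):
    """Q(m,n) = (-1)^n p^{2(m+n)} binom(m - q/p, m+n)."""
    return (-1) ** n * Fraction(p) ** (2 * (m + n)) * gbinom(Fraction(m * p - q, p), m + n)
```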
Equation \eqref{BINOMIAL_IDENT} generalizes an identity of Gessel \cite[unlabelled equation before equation (31)]{GESSEL}. It indicates that $Q(m,n)$ is the coefficient of $x^{m+n}$
in the power series expansion of
$(-1)^m(1-p^2x)^{m-q/p}$.
More generally, the above definitions may be extended to define super Patalan numbers for all $m$ and $n$.
\begin{defin}
\label{SUPER_PATALAN_EXTENDED}
Let $m,n$ be integers.
Define the \emph{extended $(p,q)$-super Patalan numbers} $E(m,n)$ to be the coefficient of $x^{m+n}$
in the power series expansion of
$(-1)^m(1-p^2x)^{m-q/p}$.
\end{defin}
While $E$ is defined in terms of the generating functions of its rows, the twisted symmetry of the super Patalan matrix implies that
$E(m,n)$ is also the coefficient of $x^{m+n}$ in $(-1)^n(1-p^2x)^{n-(p-q)/p}$.
The lower triangular matrix $L$ formed by permuting the columns of $E$ has the interesting property that it
has order $2$ under matrix multiplication.
\begin{thm}
\label{EXTENDED_PATALAN_THM}
Let $L$ be the lower triangular matrix given by $L(m,n) = E(m,-n)$, where $E$ is an extended super Patalan matrix. Then $L^2$ is the identity.
\end{thm}
\begin{proof}
Since $L$ is lower triangular, so is $L^2$, and the diagonal entries of $L^2$ are $L(m,m)^2 = E(m,-m)^2 = 1$. Consider now the $(m,n)$ entry of $L^2$ for $n < m$. The product of row $m$ of $L$ and column $n$ of $L$ is
the convolution of row $m$ of $E$ and column $-n$ of $E$.
The generating function of row $m$ of $E$ is $(-1)^m(1-p^2x)^{m-q/p}$,
while the generating function of column $-n$ of $E$ is $(-1)^{-n}(1-p^2x)^{-n-(p-q)/p}$.
Thus the $(m,n)$ entry of $L^2$ is
the coefficient of $x^{m-n}$ in $(-1)^{m-n}(1-p^2x)^{m-n-1}$, which equals $0$ because $(1-p^2x)^{m-n-1}$ is a polynomial of degree $m-n-1 < m-n$.
\end{proof}
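Theorem \ref{EXTENDED_PATALAN_THM} can be checked on finite truncations, since $L$ is lower triangular and the $(m,n)$ entry of $L^2$ only involves indices between $n$ and $m$. An illustrative Python sketch:

```python
from fractions import Fraction

def gbinom(a, k):
    """Generalized binomial coefficient binom(a, k), rational a, integer k >= 0."""
    r = Fraction(1)
    for i in range(k):
        r = r * (a - i) / (i + 1)
    return r

def L_matrix(p, q, size):
    """Truncation of L(m,n) = E(m,-n): lower triangular, with
    E(m,n) = (-1)^n p^{2(m+n)} binom(m - q/p, m+n)."""
    top = lambda m: Fraction(m * p - q, p)  # m - q/p
    return [[((-1) ** n * Fraction(p) ** (2 * (m - n)) * gbinom(top(m), m - n))
             if m >= n else Fraction(0)
             for n in range(size)] for m in range(size)]

def matmul(A, B):
    size = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(size)) for j in range(size)]
            for i in range(size)]
```

For example, with $p=3$, $q=1$ and a $6\times 6$ truncation, `matmul(L, L)` returns the identity matrix.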
Next we consider the two variable generating function of $P$.
\begin{thm}
Let $F(x,y)=\sum P(i,j)x^iy^j$ be the generating function of the super Patalan numbers $P(i,j)$. Then
\begin{equation}
\label{GEN_FCN_TWO_VAR}
F(x,y) = \bigg(\frac{x}{(1-p^2x)^{(p-1)/p}}+\frac{y}{(1-p^2y)^{1/p}}\bigg)\frac{1}{x+y-p^2xy}.
\end{equation}
\end{thm}
\begin{proof}
By Theorem \ref{PATALAN_THM1}, the generating function of the first row of the super Patalan matrix of order $p$ is $g(y)=(1-p^2y)^{-1/p}$ and
the generating function of the first column of the super Patalan matrix of order $p$ is $f(x)=(1-p^2x)^{-(p-1)/p}$.
We will take advantage of the recurrence
\begin{equation}
\label{RECURRENCE_TWO_VAR}
p^2P(i,j) = P(i,j+1) + P(i+1,j).
\end{equation}
Equation \eqref{RECURRENCE_TWO_VAR} implies the equation
\begin{equation}
\label{RECURRENCE_EQN}
p^2F(x,y) = \frac{F(x,y)-g(y)}{x}+\frac{F(x,y)-f(x)}{y}.
\end{equation}
Solving equation \eqref{RECURRENCE_EQN} for $F(x,y)$ gives equation \eqref{GEN_FCN_TWO_VAR}, as required.
\end{proof}
Equation \eqref{RECURRENCE_TWO_VAR} generalizes an identity attributed to D. Rubenstein by Gessel \cite[equation (36)]{GESSEL}.
Also, equation \eqref{GEN_FCN_TWO_VAR} generalizes a similar expression given by Gessel for the generating function of the super Catalan numbers \cite[equation (37)]{GESSEL}.
\section{Convolutional Recurrence}
The Catalan numbers have a very simple, well-known, and interesting convolutional recurrence,
\begin{equation}
\label{CATALAN_RECURRENCE}
C_n = \sum_{k=0}^{n-1} C_kC_{n-k-1}.
\end{equation}
We show that the Patalan numbers of order $p$ have a similar convolutional recurrence of degree $p$,
and give the explicit recurrence for the Patalan numbers of order 3.
One could derive the recurrences by brute force, exploiting the fact that the generating function for column $1$
of the extended super Patalan numbers is given by the expression $-(1-p^2x)^{1/p}$.
We will instead work directly with the generating function of the Patalan numbers.
Let $A(x)$ be the generating function of the Patalan numbers of order $p$,
so that $\displaystyle A(x) = \frac{1-(1-p^2x)^{1/p}}{px}$.
Gessel observed that for $p=3$, $xA(x)$ is the compositional inverse of $x-3x^2+3x^3$ \cite[A097188]{OEIS}.
More generally, $xA(x)$ is the compositional inverse of
$\displaystyle \frac{1-(1-px)^p}{p^2} = -\sum_{k=1}^p \binom{p}{k} p^{k-2}(-x)^k$.
Applying the compositional inverse to the generating function results in a coefficient of zero for the higher degree terms.
Thus we can set the compositional inverse equal to $0$, solve for $x$, and derive a convolutional recurrence from the expression for $x$.
Setting the compositional inverse equal to zero and solving for $x$ gives
\begin{equation}
\label{COMPOSITIONAL_INVERSE_EQN}
x = \sum_{k=2}^p \binom{p}{k} p^{k-2}(-x)^k.
\end{equation}
Because we are working with the compositional inverse of $xA(x)$, not of $A(x)$,
we have to be careful when we translate equation \eqref{COMPOSITIONAL_INVERSE_EQN}
to a convolutional recurrence, by subtracting the number of factors of each term from the total degree in the recurrence.
We thus get a recurrence for the Patalan numbers
\begin{equation}
\label{CONVOLUTIONAL_RECURRENCE_EQN}
a(n) = \sum_{k=2}^p \binom{p}{k} p^{k-2}(-1)^k \sum_{i_1+\ldots+i_k = n-k+1} \prod a(i_j).
\end{equation}
It is easily verified that for $p=2$, equation \eqref{CONVOLUTIONAL_RECURRENCE_EQN} reduces to equation \eqref{CATALAN_RECURRENCE}.
For $p=3$, equation \eqref{CONVOLUTIONAL_RECURRENCE_EQN} reduces to
\begin{equation}
\label{PATALAN3_RECURRENCE}
a(n) = \sum_{k=0}^{n-1} 3a(k)a(n-k-1) - \sum_{i+j+k=n-2} 3a(i)a(j)a(k).
\end{equation}
Equation \eqref{CONVOLUTIONAL_RECURRENCE_EQN} for $n=1$ has only one non-trivial term on the right hand side,
and it implies that $a(1) = \binom{p}{2}$.
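Recurrence \eqref{PATALAN3_RECURRENCE} is easy to test against the one-step recurrence of Definition~\ref{PATALAN_DEF}; the Python sketch below (illustrative) computes the order-3 Patalan numbers both ways.

```python
from fractions import Fraction

def patalan3(n_terms):
    """Patalan numbers of order 3 from the one-step recurrence."""
    a = [Fraction(1)]
    for n in range(1, n_terms):
        a.append(Fraction(3 * (3 * n - 1), n + 1) * a[-1])
    assert all(x.denominator == 1 for x in a)
    return [x.numerator for x in a]

def patalan3_conv(n_terms):
    """Same numbers from the degree-3 convolutional recurrence
    a(n) = 3 sum_{k} a(k)a(n-k-1) - 3 sum_{i+j+k=n-2} a(i)a(j)a(k)."""
    a = [1]
    for n in range(1, n_terms):
        s2 = sum(a[k] * a[n - k - 1] for k in range(n))
        s3 = sum(a[i] * a[j] * a[n - 2 - i - j]
                 for i in range(n - 1) for j in range(n - 1 - i))
        a.append(3 * s2 - 3 * s3)
    return a
```

Both functions produce $1, 3, 15, 90, 594, \dots$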
\section{Factorization of the super Patalan matrix}
\begin{defin}
Define the \emph{reciprocal Pascal matrix} to be the matrix $R$ with $R(i,j) = \binom{i+j}{i}^{-1}$.
\end{defin}
\begin{lem}
\label{FACTORIZATION_LEMMA}
Let $Q$ be the $(p,q)$-super Patalan numbers, and let $G_{p,q}$ be the diagonal matrix with $G_{p,q}(i,i) = Q(i,0)$.
Then
\begin{equation}
\label{FACTORIZATION_EQN}
Q = G_{p,q} R G_{p,p-q}.
\end{equation}
\end{lem}
The author previously used the factorization of equation \eqref{FACTORIZATION_EQN} to prove that the inverse of the reciprocal Pascal matrix is an integer matrix \cite{RPM}.
Next we prove that the inverse of the Hadamard inverse of the super Patalan matrix is an integer matrix.
\begin{thm}
\label{INVERSE_SUPER_PATALAN_THM}
Let $Q$ be the $(p,q)$-super Patalan matrix, and let $H$ be the $n \times n$ matrix given by $H(i,j) = \frac{1}{Q(i,j)}$ for $0 \le i,j < n$.
Then the inverse of $H$ is an integer matrix.
\end{thm}
\begin{proof}
By Lemma \ref{FACTORIZATION_LEMMA},
\begin{equation}
H = G_{p,q}^{-1} B G_{p,p-q}^{-1},
\end{equation}
where $B$ is the Pascal matrix with $B(i,j) = \binom{i+j}{i}$.
Then
\begin{equation}
H^{-1} = G_{p,p-q} B^{-1} G_{p,q}.
\end{equation}
Since $B^{-1}$, $G_{p,q}$, and $G_{p,p-q}$ are all integer matrices, it follows that $H^{-1}$ also is an integer matrix.
\end{proof}
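Theorem \ref{INVERSE_SUPER_PATALAN_THM} can be verified with exact rational arithmetic; the sketch below (illustrative Python) builds $H$ from the recurrences, inverts it by Gauss--Jordan elimination over the rationals, and checks that every entry of $H^{-1}$ is an integer.

```python
from fractions import Fraction

def super_patalan(p, q, n):
    """n x n table of (p,q)-super Patalan numbers from the defining recurrences."""
    Q = [[Fraction(0)] * n for _ in range(n)]
    Q[0][0] = Fraction(1)
    for i in range(1, n):
        Q[i][0] = Fraction(p * (p * i - q), i) * Q[i - 1][0]
    for i in range(n):
        for j in range(1, n):
            Q[i][j] = Fraction(p * (p * j - p + q), i + j) * Q[i][j - 1]
    return Q

def invert(M):
    """Exact Gauss-Jordan inverse of a matrix of Fractions."""
    n = len(M)
    A = [list(M[i]) + [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [x - A[r][c] * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]
```

For instance, with $p=3$, $q=1$ and $n=5$, every entry of the inverse of the Hadamard inverse $H$ has denominator $1$.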
\section{Conclusion}
We have proposed a definition of super Patalan numbers that generalizes the
super Catalan numbers of Gessel, in that the super Catalan numbers are
the super Patalan numbers of order $2$.
The super Patalan numbers have a number of properties that generalize the
corresponding properties of the super Catalan numbers, in particular
equations \eqref{COLUMN_ONE_EQN}, \eqref{BINOMIAL_IDENT}, \eqref{RECURRENCE_TWO_VAR} and \eqref{GEN_FCN_TWO_VAR}.
We also prove a multiplicative identity for the extended super Patalan matrix, and we give a convolutional recurrence generalizing the well-known recurrence for the Catalan numbers.
The first few contacts of a beginner with semiconductor slang can be shocking. One hears or reads that there are electrons but also holes. The beginner can figure out the hole as the vacancy of an electron, and keeps reading and/or listening. But then the effective mass appears. It turns out that in a semiconductor, electrons behave as if their mass were different from the well-known mass of the free electron. That is, a force produces different accelerations depending on whether the electron is placed in a semiconductor or in a vacuum. With a little additional effort, the neophyte imagines themselves resting on the moon, being pushed, and experiencing a different displacement than they would have on earth. Right, they can imagine that the set of {\it cores} and the remaining crystal structure produce a field that intensifies or dampens the reaction to the force ... and continues reading. Difficulties grow when learning that the mass can be anisotropic. That is, depending on whether the force is exerted from top to bottom or from left to right, the dynamic response of the electron may be different. But one gets dumbfounded upon hearing that hole masses are negative. That is, should you exert a force to the right, the hole moves to the left... At this point the neophyte begins to think of Lewis Carroll and Alice in Wonderland. This doesn't look like science; rather, it looks like science fiction. Well, no: it is actually science. The problem with scientific dissemination is that one had best either say nothing or say more. I will try to contextualize these definitions to make them acceptable without having to do {\it acts of faith}, which should never be done in science.
\section {Contextualizing the concepts}
The Hamiltonian eigenvalue equation of an electron in a crystal can be written as:
\begin{equation}
\label{kp1}
\left(-\frac{\hbar ^2}{2m}\nabla ^2+V(\mathbf{r})-E_n\right)\Psi _n(\mathbf{r})=0.
\end{equation}
\noindent where the first term is the kinetic energy and $V(\mathbf{r})$ is a periodic potential.\\
\noindent It is useful writing the wave function in the form $\Psi _{nk}(\mathbf{r})=Ne^{i\mathbf{k}\mathbf{r}}u_{nk}(\mathbf{r})$, where $u_{nk}(\mathbf{r})$ is a periodic function. Injecting this function into equation (\ref{kp1}) leads to the so-called $k\cdot p$ (hereafter just kp) Hamiltonian eigenvalue equation:\footnote{For details see e.g. \cite{JP1}.}
\begin{equation}
\label{kp2}
\left(-\frac{\hbar ^2}{2m}\nabla ^2+V(\mathbf{r})+\frac{\hbar ^2k^2}{2m}+\frac \hbar m\mathbf{k}\cdot \mathbf{p}-E_{nk}\right)u_{nk}(\mathbf{r})=0
\end{equation}
\noindent where $\mathbf{p}$ is the vectorial operator $-i\hbar \nabla$. A shorter way of writing equation (\ref{kp2}) is $(\widehat{\cal H}_{kp} -E_{nk}) |u_{nk}\rangle =0$.\\
\noindent The particular case $\mathbf{k}=0$ of equation (\ref{kp2}) is just equation (\ref{kp1}) for $u_{n0}(\mathbf{r})$. Solving this equation we get the eigenfunction basis set $\{u_{n0}(\mathbf{r}), n=1,2,3\dots \infty\}$ corresponding to $k=0$ (also called the $\Gamma$ point). As we move away from this point ($k\neq 0$) we get another complete set of eigenfunctions $u_{nk}(\mathbf{r})$ that can eventually be written as a linear combination of the complete set $\{u_{n0}(\mathbf{r})\}$ at the $\Gamma$ point.\\
\noindent The kp Hamiltonian matrix elements $\langle u_{n0}|\widehat{\cal H}_{kp}|u_{n'0}\rangle$ read:
\begin{equation}
\label{kp3}
\langle u_{n0}|\widehat{\cal H}_{kp}|u_{n'0}\rangle= \left( E_{n'0}+\frac{\hbar^2 k^2}{2m}\right) \delta_{n,n'}+\frac{\hbar \mathbf{k}}{m} \cdot \mathbb P_{n,n'}
\end{equation}
\noindent where $\mathbb P_{n,n'}=\langle u_{n0}|\mathbf{p}|u_{n'0}\rangle$ is the so-called Kane parameter.
\subsection {The one-band model}
Should we be interested in a not overly accurate description of a certain eigenvalue and its associated eigenvector, we can use, at $\mathbf{k}\neq 0$, the $\mathbf{k}=0$ function and calculate the expected value of the kp Hamiltonian with it. From equation (\ref{kp3}), taking into account that $\mathbb P_{n,n}=\langle u_{n0}|\mathbf{p}|u_{n0}\rangle=0$ (since $\mathbf{p}$ is odd), we find that:
\begin{equation}
\label{kp4}
E_{n}(k)= E_{n0}+\frac{\hbar^2 k^2}{2m}
\end{equation}
\noindent It turns out to be a parabolic model (the energy vs. $k$ is a parabola). The curvature (or second derivative) of $E_{n}(k)$ is precisely the inverse of the electron mass, $m=1$ a.u. We may include the influence of other functions ({\it remote bands} in semiconductor slang) at second order in perturbation theory (we have seen that $\mathbb P_{n,n}=\langle u_{n0}|\mathbf{p}|u_{n0}\rangle=0$, therefore the first-order perturbation contribution is zero):
\begin{equation}
\label{kp5}
E_{nk}^{(2)} =-\sum_{n'} {\rm '}\frac{|\langle u_{n0}|\frac{\hbar}{m}\mathbf k\cdot\mathbf{p}|u_{n'0}\rangle|^2}{E_{n'0}-E_{n0}}
=\sum_{n'} {\rm '}\frac{\hbar^2 |\mathbf k \cdot \mathbb P_{n,n'}|^2}{m^2(E_{n0}-E_{n'0})}
=\sum_{\alpha=x,y,z}\sum_{n'} {\rm '} \frac{\hbar^2 k_{\alpha}^2 \cdot |\mathbb P_{n,n'}^{\alpha}|^2}{m^2(E_{n0}-E_{n'0})}
\end{equation}
\noindent With the inclusion of remote bands contribution, the energy finally is:
\begin{eqnarray}
\label{kp6}
E_{nk}&=&E_{n0}+\sum_{\alpha=x,y,z} \hbar^2 k_{\alpha}^2\left\{ \frac{1}{2m}+\frac{1}{m^2}\sum_{n'} {\rm '}\frac{|\mathbb P_{n,n'}^{\alpha}|^2}{E_{n0}-E_{n'0}} \right\} \\
&=&E_{n0}+\sum_{\alpha=x,y,z} \frac{\hbar^2 k_{\alpha}^2}{2}\frac{1}{m^*_{\alpha}}
\end{eqnarray}
\noindent with $m^*_{\alpha}$ the so-called electron effective mass. That is, the electron is not in a semiconductor as it is in a vacuum, where it has only kinetic energy and therefore a parabolic energy versus linear momentum. In a semiconductor a periodic potential acts on the electron. However, roughly speaking, the electron behaves {\it as if} it were in a vacuum but its mass (i.e., the inverse of the curvature of the energy function) had a different value, which we call the electron effective mass $m^*_{\alpha}$. \\
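As a toy numerical illustration of equation (\ref{kp6}), consider a single remote band separated by a gap and coupled through one Kane matrix element, in atomic units. The numbers below are made up for illustration, not material parameters.

```python
# Toy illustration of Eq. (kp6) with a single remote band, in atomic
# units (hbar = m = 1). Eg and P are assumed values, not material data.
Eg = 0.05   # E_{n0} - E_{n'0} for the conduction band (hartree, assumed)
P = 0.35    # |Kane matrix element| (a.u., assumed)

# Eq. (kp6): 1/(2 m*) = 1/(2m) + |P|^2 / (m^2 (E_{n0} - E_{n'0})), with m = 1:
inv_m_star = 1 + 2 * P ** 2 / Eg
m_star = 1 / inv_m_star   # about 0.17: lighter than the free electron
```

For the valence band the same coupling enters with the opposite sign of the energy denominator, which is the origin of negative hole masses.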
\noindent Moreover, if we look at equation (\ref{kp6}) we see that the curvatures (second derivatives) can be different in different directions. This means that while the mass of the free electron is isotropic, the effective mass can be anisotropic. All the same, if we go to the literature we repeatedly find the statement that the effective mass of the electron is highly isotropic. We will try to understand why. \\
\noindent In a typical semiconductor, such as $GaAs$, bands are built as linear combinations of atomic orbitals. The so-called conduction band (lowest empty band) is generally a periodic linear combination in which the empty metal $s$ orbitals play the main role, while the valence bands (filled bands) involve $p$ orbitals. Other remote bands may involve $d$ orbitals, and so on. Thus, $p$ is the orbital closest to $s$ and leads to the most important perturbation, while the other bands have smaller effects. \\
\noindent From what we have said, the conduction band has $|S\rangle$ symmetry, and then the non-zero Kane $P$ parameters with the $|P\rangle$-symmetry band are $P = \langle S|\hat P_x |P_x\rangle= \langle S|\hat P_y |P_y\rangle=\langle S|\hat P_z |P_z\rangle$. Therefore, the perturbation contribution to the mass is identical in all three directions (isotropic).\footnote{Many semiconductors crystallize in zinc-blende structures (symmetry $T_d$). In these structures all contributions $\langle S|\hat P_{\alpha} |D_{\beta\gamma}\rangle$ are zero, except $\langle S|\hat P_{x} |D_{yz}\rangle$ and cyclic permutations ($xyz$ has $A_1$ symmetry in $T_d$). In general, as we have said above, the really important contribution is that of the valence $|P\rangle$ band, and this is the reason underlying the highly isotropic character of the conduction effective mass.}\\
\noindent Let's introduce some numbers. For example, with a value of $0.303\, eV$ for the conduction band-edge energy $E_{n0}$ (that we call $E_c$), and an effective mass $m^*= 0.1\, m_0 = 0.1$ a.u., the conduction band shape around $k=0$ ($\Gamma$ point) and the free-electron band ($m=1$ a.u.) are shown together in the following figure:
\begin{center}
\begin{figurehere}
\resizebox{0.4\columnwidth}{!}{\includegraphics{F1.jpg}}
\end{figurehere}
\end{center}
\subsection {The two-band model}
We may consider a conduction band $|S\rangle$ and a valence band $|Z\rangle$. Without including the remote-bands contribution, and with $P = \langle S|\hat P_z |Z\rangle$, the kp Hamiltonian in a.u. in the basis $\{|S\rangle, |Z\rangle \}$ reads:
\begin{equation}
\label{kp7}
\begin{pmatrix} E_c + \frac{k^2}{2 m_e} & k P \\ k P & E_v + \frac{k^2}{2 m_h} \end{pmatrix}
\end{equation}
\noindent For the set of values $E_c = 0.303\, eV$, $E_v=0$ (we assign zero energy to the valence band at $k=0$), $P=0$, $m_e = m_h = 1$, the bands turn out to be:
\begin{center}
\begin{figurehere}
\resizebox{0.35\columnwidth}{!}{\includegraphics{F2.jpg}}
\end{figurehere}
\end{center}
\noindent Both the conduction and valence bands are parabolic with positive masses (curvatures). Next we consider the conduction-valence interaction with, e.g., the $HgTe$ Kane parameter $P=8.46$ eV$\cdot$\AA. It yields:
\begin{center}
\begin{figurehere}
\resizebox{0.35\columnwidth}{!}{\includegraphics{F3.jpg}}
\end{figurehere}
\end{center}
\noindent The conduction-valence interaction turns the parabolic bands into non-parabolic ones (in this case, strongly linear) and, additionally, inverts the sign of the mass of the valence band (also called the {\it hole} band). At the bottom of the figure you can see the evolution of the eigenvalues and the associated eigenvectors expanded in the basis set $\{|S\rangle, |Z\rangle \}$. For the value $k=0$, the off-diagonal terms are zero and it is clear that the conduction state, corresponding to the most energetic eigenvalue (which coincides with the value $E_c = 0.303\, eV$), is the first component, while the {\it hole}, with zero energy (matching $E_v=0$), is the second one. For the value $k=0.2$ we observe a large mixture, although the component that had all the weight at $k=0$ still carries the largest weight.
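The eigenvalues described above can be reproduced with a few lines of code. The Python sketch below is illustrative: it assumes eV and \AA\ units with $\hbar^2/2m_0 \simeq 3.81$ eV$\cdot$\AA$^2$ (a unit convention not made explicit in the text) and diagonalizes the $2\times 2$ Hamiltonian of equation (\ref{kp7}) in closed form.

```python
import math

HB2_2M0 = 3.81  # hbar^2 / (2 m0) in eV * angstrom^2 (approximate, assumed units)

def two_band(k, Ec=0.303, Ev=0.0, P=8.46, me=1.0, mh=1.0):
    """Eigenvalues of the symmetric 2x2 matrix
    [[Ec + k^2/(2 me), k P], [k P, Ev + k^2/(2 mh)]]."""
    a = Ec + HB2_2M0 * k ** 2 / me
    d = Ev + HB2_2M0 * k ** 2 / mh
    s = math.hypot((a - d) / 2, k * P)
    return (a + d) / 2 - s, (a + d) / 2 + s  # (valence-like, conduction-like)
```

Near $k=0$ the lower eigenvalue bends downward (negative hole mass) because the $P^2/E_g$ repulsion dominates the free-particle kinetic term.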
\subsection {Two-band Model with inversion}
As we already said, in general the conduction band is of $|S\rangle$ symmetry and is built out of the empty metal $s$ orbitals, whereas the valence band is of $|P\rangle$ symmetry and is basically a linear combination of occupied $p$ orbitals. But there are semiconductors where this is not the case. For example, in $PbTe$, $Pb$ undergoes the so-called lanthanide contraction while $Te$ does not. The associated volume contraction and the enormous nuclear charge of $Pb$ lead to a change in the relative energy position of the $s$ and $p$ orbitals. Then the empty band (conduction) is of $|P\rangle$ symmetry and the filled one (valence) of $|S\rangle$ symmetry. This led to the introduction of the concept of negative bandgap.\cite{Pidgeon} (See \cite{JP2} and \cite{JP3} for details). \\
\noindent A simulation of the inverted two-band model (just repeating the previous calculation with the sign of $E_c$ changed [$E_c = -0.303\, eV$]) results in what is shown in the following figure, plotted together with the above result for a better comparison:\footnote{Please realize that in this case (two bands with inversion) the $|S\rangle$ band (lower energy) is fully occupied, while the $|P\rangle$ band (higher energy) is empty.}
\begin{center}
\begin{figurehere}
\resizebox{0.6\columnwidth}{!}{\includegraphics{F4_ang.jpg}}
\end{figurehere}
\end{center}
\noindent We stress that, apart from the inversion of symmetry, in both cases the extremum of the $|P\rangle$ band at the $\Gamma$ point ($k=0$) has an energy $E=0$; but while in the standard case the $|P\rangle$ band curvature is negative, in the case of $PbTe$ it is positive, with a reciprocal change in the curvature of the $|S\rangle$ band. \\
\noindent In this case of bands with inversion, the remote bands perturbation can generate singular band profiles. For example, the same calculation as above, but with effective masses $m_e = m_h = m_0/10$ and a ten times smaller Kane parameter $P/10$, results in the following profile:
\begin{center}
\begin{figurehere}
\resizebox{0.4\columnwidth}{!}{\includegraphics{F5.jpg}}
\end{figurehere}
\end{center}
\noindent If we look at the composition of the eigenvectors, we observe a significant change in the weight of the components at $k=0$ (the $\Gamma$ point) vs. $k=0.2$.
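A minimal numerical sketch of this inverted case (with our own unit assumptions: $k$ in \AA$^{-1}$, $\hbar^2/2m_0 \approx 3.81$ eV$\cdot$\AA$^2$, and the reduced parameters $m_e=m_h=m_0/10$, $P/10$ quoted above) shows the change of character of the topmost state:

```python
import numpy as np

HB2_2M0 = 3.81        # hbar^2 / (2 m0) in eV * Angstrom^2 (assumed)
Ec, Ev = -0.303, 0.0  # inverted ordering: the |S> edge now lies below the |Z> edge
P = 8.46 / 10         # reduced Kane parameter, eV * Angstrom
me = mh = 0.1         # effective masses m0 / 10

def H(k):
    """Inverted two-band Hamiltonian in the {|S>, |Z>} basis; k in 1/Angstrom."""
    return np.array([[Ec + HB2_2M0 * k**2 / me, k * P],
                     [k * P,                    Ev + HB2_2M0 * k**2 / mh]])

E0, V0 = np.linalg.eigh(H(0.0))
Ek, Vk = np.linalg.eigh(H(0.2))
z_weight_top_k0 = V0[1, 1]**2   # |Z> weight of the highest state at k = 0
z_weight_top_k  = Vk[1, 1]**2   # ... and at k = 0.2
```

At $k=0$ the highest state is purely $|Z\rangle$ ($P$-like), the fingerprint of band inversion, while at $k=0.2$ its composition has already changed appreciably.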
\subsection {On the anisotropy of valence bands}
We have already said that the effective mass of the conduction band, of $|S\rangle$ symmetry, is highly isotropic. In the case of the valence band, of $|P\rangle$ symmetry, it is easy to realize that a realistic description must involve all three degenerate $p$ orbitals. Therefore, a minimally correct basis set for the valence band would be $\{|X\rangle,|Y\rangle,|Z\rangle\}$. If we take into account the spin and the often quite relevant spin-orbit coupling, the valence basis set must include six elements, which can always be chosen as eigenfunctions of the total angular momentum and its $z$-component (since these commute with the spin-orbit operator). The corresponding quantum numbers are $J=3/2$ (four-fold degenerate), split from $J=1/2$ (two-fold degenerate) by the spin-orbit term. Therefore, a minimally reasonable description of the top of the valence band must involve the four-fold degenerate states $J=3/2$ ($J_z=\pm 3/2$, called heavy hole HH; $J_z=\pm 1/2$, called light hole LH). The corresponding basis is:
\begin{equation}
\begin{array}{ll}
\label{kp8}
|3/2,3/2\rangle =-\frac{1}{\sqrt{2}} |(X + i \, Y) \uparrow\rangle & |3/2,-3/2\rangle =\frac{1}{\sqrt{2}} |(X -i \, Y) \downarrow\rangle \cr
\cr
|3/2,1/2\rangle =\sqrt{\frac{2}{3}} |Z \uparrow\rangle - \frac{1}{\sqrt{6}} |(X + i \, Y) \downarrow\rangle &
|3/2,-1/2\rangle =\sqrt{\frac{2}{3}} |Z \downarrow\rangle + \frac{1}{\sqrt{6}} |(X - i \, Y) \uparrow\rangle
\end{array}
\end{equation}
\noindent Let's consider, for example, the interaction of HH $(3/2, 3/2)$ with the closest state (of $|S\rangle$ symmetry). The contribution of $k_{\alpha}$ to the effective mass involves the term $\langle S|\hat P_{\alpha} | -\frac{1}{\sqrt{2}} (X+i Y)\rangle$, so the contribution with $\alpha=z$ is zero, while the two (equivalent) contributions with $\alpha =x,y$ are not. We can apply the same reasoning if we disregard the spin. In this case we consider, for example, the state $|Z\rangle$. The contribution of $k_{\alpha}$ to the effective mass of this state involves the term $\langle S|\hat P_{\alpha} |Z\rangle$. Therefore it is zero if $\alpha =x,y$, whereas it is not if $\alpha =z$. The effective masses in the $z$ direction are thus different from the {\it in-plane} masses $m_{xy}$, and we can easily understand why the {\it hole} effective masses are highly anisotropic, $m_z \neq m_{xy}$.
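The selection rule behind this argument can be made concrete by keeping only the orbital content $(c_X,c_Y,c_Z)$ of each valence state, since $\langle S|\hat P_{\alpha}|\beta\rangle \propto \delta_{\alpha\beta}$ for $\beta=X,Y,Z$ and the spin part factors out. The sketch below is schematic bookkeeping of these matrix elements, not a full kp calculation:

```python
import numpy as np

# Orbital coefficients (c_X, c_Y, c_Z); <S|P_a|state> is proportional to c_a
hh = np.array([-1.0, -1.0j, 0.0]) / np.sqrt(2)  # |3/2, 3/2> orbital part, -(X + iY)/sqrt(2)
z  = np.array([0.0, 0.0, 1.0], dtype=complex)   # |Z>

def couplings(state):
    """|<S|P_x|.>|, |<S|P_y|.>|, |<S|P_z|.>| up to the common constant P."""
    return np.abs(state)

hx, hy, hz = couplings(hh)  # HH couples to |S> along x and y, but not along z
zx, zy, zz = couplings(z)   # |Z> couples along z only
```

The HH state picks up conduction-band contributions only from $k_x$ and $k_y$, while $|Z\rangle$ does so only from $k_z$, which is the origin of $m_z \neq m_{xy}$.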
\section {Last comment}
Of course, there is a lot more in kp than this short overview. The aim here has been to show that, indeed, coming to study semiconductors is, in a way, like Alice crossing the gate and entering wonderland, where the empty place left by an electron comes to life in the form of a {\it hole}, this {\it hole} behaving quite strangely: when you push it, instead of separating from you, it turns against you. And you have to pay attention if you apply a force: depending on whether the direction is from top to bottom or from left to right, the answer can be very different...
\section{Introduction}
There are various studies of compact astrophysical objects in the interior of which the energy density $\rho$ and the pressure $p$ obey an equation of state typical of dark energy such as $p=-\rho$. Such objects have been variously named in the literature. We refer to them as ``dark energy stars'' for simplicity.
Although dark energy stars could have spacetime singularities, one is commonly interested in dark energy stars that are nonsingular. Buchdahl's theorem \cite{PhysRev.116.1027} precludes the existence of nonsingular compact objects with radius smaller than $9/8$ the Schwarzschild radius under the assumptions of spherical symmetry, isotropic stress, and nonnegative trace of the energy momentum tensor. Compact nonsingular dark energy stars are possible because the nonnegative trace condition does not apply for $p < - \rho/3$ dark energy (compact objects supported by anisotropy instead have also been studied \cite{1974ApJ...188..657B}).
In the mid 1960s the idea of objects with $p=-\rho$ at their center was put forward by Gliner \cite{gliner1966}. The first concrete solution was the Bardeen spacetime \cite{Bardeen,Borde:1994ai,PhysRevD.55.7615,Zhou:2011aa}, which is a nonsingular, asymptotically flat, spherically symmetric spacetime that may have zero, one, or two event horizons depending on the value of a parameter. The Bardeen stress energy tensor features radial pressure $p_r=-\rho$ everywhere and tangential pressure $p_T\ne p_r$ away from the center.
In the 1980s the gravitational effects of false vacuum bubbles forming in true vacuum, and vice versa, were considered \cite{PhysRevD.21.3305}. False-vacuum bubbles were studied as a possibility for wormholes \cite{doi:10.1143/PTP.65.1443} and localized inflation \cite{PhysRevD.35.1747,FARHI1987149} when it was found that the null energy condition imposes that any spherically symmetric false-vacuum bubble that forms in an asymptotically flat space, and grows beyond a certain critical size, must have emerged from an initial singularity \cite{FARHI1987149}. Smaller false vacuum bubbles may arise without initial singularities \cite{FARHI1990417}. There were also attempts to replace the black hole singularity inside the horizon with a Planckian density vacuum bubble and a junction layer \cite{FROLOV1989272,PhysRevD.41.383,0264-9381-5-12-002}.
Starting in the 1990s, compact objects with $p=-\rho$ at their center, called vacuum nonsingular black holes, or lambda black holes \cite{Dymnikova1992,Dymnikova2003,Dymnikova:2000zi,Dymnikova:2001mb}, were studied within the class of ``regular black-hole'' solutions, i.e., asymptotically flat spacetimes that, like black holes, possess an event horizon but, unlike black holes, do not have a singularity (see, e.g., Ref. \cite{Ansoldi:2008jw} for a review). Lambda black holes, and similar horizonless objects known as G lumps \cite{DYMNIKOVA2007358,Dymnikova:2001fb,Dymnikova:2015yma}, are similar to the Bardeen spacetime in that they have $p_r=-\rho$ everywhere but anisotropic pressure $p_T \ne p_r$. This allows interpolation between a de Sitter core and a Schwarzschild exterior without junction layers. Interpolations between de Sitter cores and Reissner-Nordstrom exteriors for charged black holes have also been considered \cite{ANSOLDI2007261}. Compact objects with equations of state $p=w\rho$ where $w\ne-1$ were studied in Refs. \cite{doi:10.1139/cjp-2017-0526} ($w<-1/3$) and \cite{Bilic:2005sn} ($w<-1$).
In the 2000s, the idea of a finite-volume $p=-\rho$ region was revisited as a method of building a gravitationally stable compact object that does not have singularities or event horizons. The objects described by Chapline \textit{et al}. \cite{doi:10.1080/13642810108221981} contain a $p=-\rho$ dark energy core and have a microscopic quantum critical layer in place of an event horizon. These objects are described with the term ``dark energy star'' in Ref. \cite{Chapline:2005ph} (our usage of the term is more general).
The stiff shell gravastar proposed by Mazur and Mottola \cite{mazur2001gravitational,MM2004} features a surface layer made of positive pressure stiff matter joined to the dark energy core and exterior vacuum by junction layers. A simplified version of this shell model with an infinitesimal shell was introduced by Visser and Wiltshire \cite{Visser:2003ge}. Whether a gravastar of the Visser-Wiltshire type with particular surface and interior conditions would collapse, explode, stabilize, or oscillate has been studied in Refs. \cite{1475-7516-2008-06-025,1475-7516-2011-10-013}. Anisotropic gravastars with continuous pressure were examined by Cattoen, Faber, and Visser \cite{cattoen2005gravastars} as a means of eliminating the junction layers. Various kinds of gravastars were found to be stable under small perturbations (e.g., Refs. \cite{Visser:2003ge,chirenti2007tell,eosgravastar,0264-9381-22-21-007}), compatible with charge \cite{0264-9381-22-21-007,horvat2008electrically,Chan:2010se,2013JMPh....4..869B}, and with an exterior cosmological constant \cite{chan2010lambda,2013JMPh....4..869B}. The rotation and angular momentum of gravastars may lead to instability for some rapidly spinning configurations \cite{Cardoso:2007az}, but other spinning configurations are stable \cite{Chirenti:2008pf}. In 2015, Mazur and Mottola examined the Schwarzschild interior solution below the Buchdahl bound, and they found that in the $R\rightarrow R_S$ limit, it behaved as a thin shell gravastar \cite{0264-9381-32-21-215024}. This $R\rightarrow R_S$ Schwarzschild interior gravastar behaves almost exactly as an extended source for the Kerr metric when slow rotation is added \cite{posada2017slowly}. Since gravastars need not have an event horizon, they could in principle be distinguished from black holes \cite{chirenti2007tell}. Gravitational lensing through gravastars has been studied \cite{PhysRevD.90.104013}. 
There have even been attempts to interpret LIGO data as horizonless compact objects \cite{PhysRevD.94.084016,PhysRevLett.116.171101}.
In this paper, we show that there are time-dependent solutions of Einstein's equations that start with nonnegative pressure $p \ge 0$ everywhere, end in a dark energy star with $p=-\rho$ in a finite central core, have no singularities or junction layers, and do not violate the weak or null energy conditions at any time. We denote the medium composing the system as ``matter" rather than ``fluid" because it involves anisotropic stress.
We finally remark that in this paper we consider asymptotically flat spacetimes rather than an exterior cosmological constant.
\section{Time-varying spherically symmetric systems}
We write Einstein's field equations for a general time-dependent spherically symmetric system in the form of a force equation and a continuity equation. Spherical symmetry allows the metric to be written in terms of two functions $\Phi(t,r)$ and $m(t,r)$ of the time and radial coordinates $t$ and $r$,
\begin{align}
ds^2 = - e^{2\Phi(t,r)} \, dt^2 + \frac{dr^2}{1-\frac{2Gm(t,r)}{r}} + r^2 \, d\theta^2 + r^2 \, \sin^2\theta \, d\phi^2 .
\end{align}
The corresponding stress-energy tensor $T_{\mu \nu}(t,r)$ may be simplified with tetrads to a local Lorentz frame,
\begin{align}
e^\mu_{\hat{\mu}}e^\nu_{\hat{\nu}}T_{\mu \nu}=T_{\hat{\mu}\hat{\nu}} = \, \begin{pmatrix}
\rho & -S_r & 0 & 0 \\
-S_r & p_r & 0 & 0 \\
0 & 0 & p_T & 0 \\
0 & 0 & 0 & p_T
\end{pmatrix}
,
\label{eq:T}
\end{align}
\begin{align}
e^\mu_{\hat{\mu}}e^\nu_{\hat{\nu}}g_{\mu \nu}=\eta_{\hat{\mu}\hat{\nu}}=\mathop{\rm diag}(-1,1,1,1), && e^\mu_{\hat{\mu}}=\left(
\begin{array}{cccc}
e^{-\Phi(t,r)} & 0 & 0 & 0 \\
0 & \sqrt{1-\frac{2 G m(t,r)}{r}} & 0 & 0 \\
0 & 0 & \frac{1}{r} & 0 \\
0 & 0 & 0 & \frac{1}{r\sin\theta} \\
\end{array}
\right).
\label{tetrad}
\end{align}
Here the $T_{\hat{\mu}\hat{\nu}}$ are components of the stress-energy tensor in an inertial frame at rest in the $t,r,\theta,\phi$ coordinate system. More specifically, $\rho=T_{\hat{t}\hat{t}}$ is the energy density, $p_r=T_{\hat{r}\hat{r}}$ and $p_T = T_{\hat{\theta}\hat{\theta}}= T_{\hat{\phi}\hat{\phi}}$ are the radial and transverse stresses, and $S_r = -T_{\hat{r}\hat{t}} = -T_{\hat{t}\hat{r}}$ is the $r$-component of the momentum density, with positive $S_r$ corresponding to the outward flow. We define our system in terms of the matter functions $\rho(t,r)$, $p_r(t,r)$, $\Delta(t,r) = p_T(t,r) - p_r(t,r)$, and $S_r(t,r)$, since they are more closely related to the weak energy condition. The function $\Delta(t,r)$ embodies a possible anisotropic stress. Although $p_r$ and $p_T$ should properly be referred to as radial and tangential stress, we follow the existing literature and call them radial and tangential pressure.
Einstein's equations become
\begin{align}
\frac{\partial m}{\partial r} & = 4\pi r^2 \rho ,
\label{eq:Einstein-rho}
\\[1ex]
\frac{\partial\Phi}{\partial r} & = \frac{G(m+4\pi r^3 p_r)}{r^2 \left(1-\frac{2Gm}{r}\right)} ,
\label{eq:Einstein-pr}
\\[1ex]
\frac{\partial m}{\partial \tau} & = -4\pi r^2\sqrt{1-\frac{2Gm}{r}} S_r ,
\label{eq:sr1}
\\[1ex]
- \frac{\partial p_r}{\partial r}-\frac{G \left( m+4\pi r^3 p_r \right) \left( \rho+p_r \right)}{r^2 \left( 1 - \frac{2 G m}{r} \right) }+\frac{2\Delta}{r}&=\sqrt{1-\frac{2Gm}{r}} \frac{\partial}{\partial\tau} \left( \frac{S_r}{1-\frac{2Gm}{r}} \right).
\label{Eq:Force}
\end{align}
Here we have introduced a new time variable $\tau$ defined so that
\begin{equation}
e^{-\Phi(t,r)} \frac{\partial}{\partial t} = \frac{\partial}{\partial \tau}.
\label{taudef}
\end{equation}
Equations~(\ref{eq:sr1}) and~(\ref{eq:Einstein-rho}) can be rearranged into a continuity equation for the energy density $\rho$ and energy flux $S_r$,
\begin{equation}
\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\sqrt{1-\frac{2Gm}{r}}S_r \right)+\frac{\partial \rho}{\partial \tau}= 0 .
\label{eq:Continutiy}
\end{equation}
Equation~(\ref{Eq:Force}) resembles Newton's second law for the force density, where we can identify the terms on the left as the pressure gradient force, gravitational force, and anisotropy force, respectively, while the term on the right-hand side embodies the rate of change of momentum. Notice that the anisotropy force $2\Delta / r$ is a nonrelativistic force \cite{1974ApJ...188..657B,cattoen2005gravastars} coming from the spatial divergence of an anisotropic stress tensor.
Equations~(\ref{eq:Einstein-rho}) and~(\ref{eq:Einstein-pr}) are easily solved for $m(t,r)$ and $\Phi(t,r)$ with boundary conditions $m(t,r)=0$ at $r=0$ and $\Phi(t,r)=0$ at $r\to\infty$, for any $t$,
\begin{align}
m(t,r) & = \int_0^r \, \rho(t,r) \, 4 \pi r^2 \, dr ,
\label{eq:m(t,r)}
\\
\Phi(t,r) & = - \int_r^\infty \frac{G \left( m + 4 \pi r^3 p_r \right) }{ r^2 \left( 1 - \frac{2Gm}{r} \right) } \, dr .
\label{eq:Phi(t,r)}
\end{align}
We can also rearrange Eqs.~(\ref{eq:sr1}) and~(\ref{Eq:Force}) to solve for $S_r$ and $\Delta$:
\begin{align}
S_r & = -\frac{1}{\sqrt{1-\frac{2Gm}{r}}} \frac{1}{4\pi r^2} \frac{\partial m}{\partial \tau},
\label{eq:Einstein-Sr}
\\
\Delta & =\frac{r}{2}\Bigg[\frac{\partial p_r}{\partial r}+\frac{G \left( m+4\pi r^3 p_r \right) \left( \rho+p_r \right)}{r^2 \left( 1 - \frac{2 G m}{r} \right) }+\sqrt{1-\frac{2Gm}{r}} \frac{\partial}{\partial\tau} \left( \frac{S_r}{1-\frac{2Gm}{r}} \right)\Bigg].
\label{eq:Einstein-Delta}
\end{align}
Notice that if one imposes static anisotropic conditions, from Eq.~(\ref{Eq:Force}) the gradient of the pressure has to satisfy
\begin{align}
\frac{dp_r}{dr} = - \frac{G \left( m+4\pi r^3 p_r \right) \left( \rho+p_r \right)}{r^2 \left( 1 - \frac{2 G m}{r} \right) } + \frac{2\Delta}{r},
\end{align}
which is the Tolman--Oppenheimer--Volkoff equation with an extra term due to the anisotropy.
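As a sanity check of this static limit, the sketch below integrates the isotropic equation ($\Delta=0$) for a hypothetical uniform-density star in $G=c=1$ units and compares with the interior Schwarzschild solution; the chosen radius, mass, and step size are illustrative and not taken from the paper.

```python
import numpy as np

R, M = 1.0, 0.25                   # test star with compactness 2GM/R = 1/2 (G = c = 1)
rho = 3 * M / (4 * np.pi * R**3)   # constant energy density

def dpdr(r, p):
    """Right-hand side of the isotropic static TOV equation (Delta = 0)."""
    m = 4 * np.pi * rho * r**3 / 3
    return -(m + 4 * np.pi * r**3 * p) * (rho + p) / (r**2 * (1 - 2 * m / r))

def p_exact(r):
    """Interior Schwarzschild pressure for constant density."""
    a = np.sqrt(1 - 2 * M * r**2 / R**3)
    b = np.sqrt(1 - 2 * M / R)
    return rho * (a - b) / (3 * b - a)

# Fourth-order Runge-Kutta from (just off) the center out to the surface
r, p, dr = 1e-8, p_exact(0.0), 1e-4
while r < R - dr:
    k1 = dpdr(r, p)
    k2 = dpdr(r + dr / 2, p + dr * k1 / 2)
    k3 = dpdr(r + dr / 2, p + dr * k2 / 2)
    k4 = dpdr(r + dr, p + dr * k3)
    p += dr * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    r += dr

err = abs(p - p_exact(r))          # numerical vs analytic pressure near the surface
```

The numerical pressure tracks the analytic profile to high accuracy and falls toward zero at the surface, as it must for a finite star.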
Thin shells of matter manifest as Dirac delta functions or derivatives of Dirac delta functions in the components of the stress-energy tensor $T_{\mu\nu}$, which in our case are the functions $\rho$, $p_r$, $S_r$, and $\Delta$. There are no thin shells in $\rho$ or $S_r$ if the function $m(t,r)$ is continuous, as can be seen from Eqs.~(\ref{eq:Einstein-rho}) and~(\ref{eq:Einstein-Sr}). There are no thin shells in $p_r$ if the function $\Phi(t,r)$ is continuous, as can be seen from Eq.~(\ref{eq:Einstein-pr}). There are no thin shells in $\Delta$ if the function $p_r(t,r)$ is continuous and the function $m(t,r)$ has continuous first derivatives, as can be seen by rewriting the last term in Eq.~(\ref{eq:Einstein-Delta}) as
\begin{align}
\frac{r}{2} \sqrt{1-\frac{2Gm}{r}} e^{-\Phi} \frac{\partial}{\partial t} \left( \frac{S_r}{1-\frac{2Gm}{r}} \right)
=
\frac{1}{8\pi(r-2Gm)} e^{-2\Phi} \left[ \frac{\partial \Phi}{\partial t} \frac{\partial m}{\partial t}
- \frac{\partial^2 m}{\partial t^2}
- \frac{3G}{r-2Gm} \left(\frac{\partial m}{\partial t}\right)^2 \right]
\end{align}
and
\begin{align}
\frac{\partial \Phi}{\partial t} =
- G \int_r^\infty \left[ \frac{1+8\pi G r^2 p_r}{(r-2Gm)^2} \frac{\partial m}{\partial t} + \frac{4\pi r^2}{r-2Gm} \frac{\partial p_r}{\partial t} \right]\, dr .
\end{align}
We conclude this section by mentioning that a form of Birkhoff's theorem \cite{Birchoff} states that if a spherically symmetric system is surrounded by empty space, i.e., $T_{\mu \nu}=0$ beyond a certain radius $R$, its total mass $M$ is constant. This can be seen from our work as follows: the total mass is $M=m(t,R)$, where we take the limit $r\rightarrow R$ with $r>R$; the condition $T_{\mu \nu}=0$ at $r=R$ requires $S_r(t,R)=0$, and from Eq.~(\ref{eq:Einstein-Sr}), $\partial m(t,R)/\partial t=0$, i.e., $dM/dt=0$.
\section{Energy conditions}
We specify the null and weak energy conditions here for the stress-energy tensor in Eq.~(\ref{eq:T}) (information on energy conditions can be found in Ref. \cite{Curiel2017}).
One common method for considering the energy conditions is to put the stress-energy tensor into one of the canonical types (see, e.g., Ref. \cite{Hawking:1973uf}). The stress-energy tensor in Eq.~(\ref{eq:T}) is either type I or type IV. It is type I when $(\rho+p_r)^2\ge 4 S_r^2$, and it is type IV when $(\rho+p_r)^2< 4 S_r^2$. If it is type IV, the weak energy condition cannot be satisfied \cite{Hawking:1973uf}, and so we do not consider it. If it is type I, then its canonical form is
\begin{equation}
T_{\mu \nu}=\rho^0 u_\mu u_\nu+p_r^0 \chi_\mu \chi_\nu+p_T^0(g_{\mu \nu}+u_\mu u_\nu-\chi_\mu \chi_\nu),
\end{equation}
where $u^\mu$ is the four-velocity of the matter, $\chi^\mu$ is a unit vector in the radial direction, and $\rho^0$, $p_r^0$, $p_T^0$ are the proper density and proper principal pressures of the matter (in the matter rest frame)
\begin{equation}
\rho^0=\frac{\rho-p_r+y}{2},\quad p_r^0=\frac{p_r-\rho+y}{2},\quad p_T^0=p_T,\quad y=\sqrt{(\rho+p_r)^2-4 S_r^2}.
\label{restframe}
\end{equation}
In the local Lorentz frame defined by the tetrad in Eq.~(\ref{tetrad}), the components of the four-vectors $u^\mu$ and $\chi^\mu$ are
\begin{equation}
u^{\hat{\mu}}=\big(\frac{1}{\sqrt{1-v_r^2}},\frac{v_r}{\sqrt{1-v_r^2}},0,0\big),\qquad \chi^{\hat{\mu}}=\big(\frac{v_r}{\sqrt{1-v_r^2}},\frac{1}{\sqrt{1-v_r^2}},0,0\big).
\end{equation}
Here $v_r$ is the radial velocity of the matter (negative for infall)
\begin{equation}
v_r=\mathop{\rm sign}(S_r)\sqrt{\frac{\rho+p_r-y}{\rho+p_r+y}}.
\end{equation}
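One can verify numerically that a boost with this velocity indeed removes the energy flux, and that the boosted diagonal components reproduce $\rho^0$ and $p_r^0$ of Eq.~(\ref{restframe}). The values below are arbitrary type-I test numbers of our own choosing:

```python
import numpy as np

rho, p_r, S_r = 1.0, 0.2, 0.3         # arbitrary values with (rho + p_r)^2 > 4 S_r^2
y = np.sqrt((rho + p_r)**2 - 4 * S_r**2)
v = np.sign(S_r) * np.sqrt((rho + p_r - y) / (rho + p_r + y))

T = np.array([[rho, -S_r],            # t-r block of T in the local Lorentz frame
              [-S_r, p_r]])
g = 1.0 / np.sqrt(1 - v**2)
L = np.array([[g, g * v],             # 1+1 boost matrix (sign convention chosen so the flux cancels)
              [g * v, g]])
Tp = L @ T @ L.T                      # components in the boosted (rest) frame

flux = Tp[0, 1]                       # should vanish: no energy flow in the rest frame
rho0, pr0 = Tp[0, 0], Tp[1, 1]
```

The off-diagonal entry vanishes to machine precision, and the diagonal entries match $(\rho-p_r+y)/2$ and $(p_r-\rho+y)/2$.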
The weak energy condition in the matter rest frame then reads
\begin{equation}
\rho^0\ge0,\qquad \rho^0+p_r^0\ge0,\qquad \rho^0+p_T^0\ge0.
\label{wecrest}
\end{equation}
Since we later assign the functions $\rho(t,r)$ and $p_r(t,r)$ instead of $\rho^0$ and $p_r^0$, the weak energy condition in Eq.~(\ref{wecrest}) assumes a complicated form because of the presence of the square root in $y$. We therefore find simpler expressions for the weak energy condition directly rather than from the canonical form.
The weak energy condition (WEC) is $T_{\mu\nu} k^\mu k^\nu \ge 0$ for all timelike (and in the limiting case lightlike) vectors $k^\mu$. The quantity $T_{\mu\nu} k^\mu k^\nu/(-k^2)$ is the energy density measured by an observer with four-velocity $k^\mu/\sqrt{-k^2}$. The components of $k^\mu$ in a local Lorentz frame can be parametrized as
\begin{align}
k^{\hat{\mu}} = (k^{\hat{t}},k^{\hat{r}} ,k^{\hat{\theta}} ,k^{\hat{\phi}} ) = E \, (1, \beta \cos\alpha, \beta \sin\alpha \cos\varphi, \beta \sin\alpha \sin\varphi) ,
\end{align}
with $0\le\beta \le 1$, $0\le\alpha\le\pi$, and $0\le\varphi<2\pi$.
Then the weak energy condition becomes
\begin{equation}
\rho + \beta^2 p_r \cos^2\alpha + \beta^2 p_T \sin^2\alpha - 2 \beta S_r \cos\alpha \ge 0
\label{eq:weakcondition}
\end{equation}
for all $\alpha$ and for $0\le \beta \le 1$. Depending on the parameters $\rho, p_r, p_T, S_r$, the minimum must either be on the boundary of the region $0\le\beta\le1$, $-1\le\cos{\alpha}\le1$ or be a local minimum inside the region. The weak energy condition amounts to the following inequalities:
\begin{align}
&\text{ if $p_r - | S_r| \ge 0$ and $p_T \ge 0$,}&& \text{the WEC is }\rho-\frac{S_r^2}{p_r}\ge0 , \label{eq:wec3} \\
&\text{ if $p_r - | S_r| \le 0$ and $p_T \ge p_r - | S_r|$,}&& \text{the WEC is } \rho+p_r- 2|S_r|\ge0, \label{eq:wec2} \\
& \text{ if $p_T \le p_r - | S_r|$ and $p_T \le 0$,}&& \text{the WEC is }\rho+p_T+\frac{S_r^2}{p_T-p_r}\ge0 . \label{eq:wec4}
\end{align}
A compact way of writing these inequalities is
\begin{align}
\rho+p_r-W-\frac{S_r^2}{W}\ge0,&& W=\max(p_r,|S_r|,p_r-p_T) .
\end{align}
Setting $S_r=0$ we recover the well-known static case
\begin{align}
\rho \ge 0, \quad \rho+p_r \ge 0, \quad \rho+p_T \ge 0 \qquad \text{(for $S_r=0$).} \label{eq:wecstat}
\end{align}
Note that Eq.~(\ref{eq:wec2}) enforces reality of $\rho^0$ and $p_r^0$ and forces the stress-energy tensor to be type I.
In other words, the rest frame of a matter element (a frame in which the energy flow vanishes, $S_i=0$, $i=1,2,3$) is obtained by a boost of velocity $v_r$ from our standard frame. The energy condition $\rho+p_r-2|S_r|\ge0$ implies the reality of $y$ in Eq.~(\ref{restframe}), whereas a violation implies imaginary $y$, complex $v_r$, and nonexistence of a rest frame for the matter element: if no inertial frame has $S_i=0$, then in some inertial frame the energy density is negative.
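The compact form of the WEC can be checked against a direct minimization of $T_{\mu\nu}k^\mu k^\nu$ over the observer parameters $\beta$ and $\alpha$. The sketch below runs such a brute-force comparison over random parameter sets; the grid size and tolerances are our own choices:

```python
import numpy as np

def wec_compact(rho, p_r, p_T, S_r):
    """Compact WEC: rho + p_r - W - S_r^2 / W >= 0, with W = max(p_r, |S_r|, p_r - p_T)."""
    W = max(p_r, abs(S_r), p_r - p_T)     # W > 0 whenever S_r != 0
    return rho + p_r - W - S_r**2 / W >= 0

def brute_min(rho, p_r, p_T, S_r, n=400):
    """Grid minimum of rho + b^2 p_r cos^2(a) + b^2 p_T sin^2(a) - 2 b S_r cos(a)."""
    b = np.linspace(0, 1, n)[:, None]
    c = np.cos(np.linspace(0, np.pi, n))[None, :]
    vals = rho + b**2 * p_r * c**2 + b**2 * p_T * (1 - c**2) - 2 * b * S_r * c
    return vals.min()

rng = np.random.default_rng(0)
cases = rng.uniform(-1, 1, size=(200, 4))
cases[:, 0] = np.abs(cases[:, 0]) + 0.01  # keep rho > 0 so both outcomes occur

agree = True
for case in cases:
    m = brute_min(*case)
    if abs(m) > 1e-3:                     # skip near-marginal cases hidden by the grid
        agree = agree and (wec_compact(*case) == (m >= 0))
```

The compact criterion and the brute-force minimum agree on every non-marginal random case, which is a useful consistency check on the three inequality branches.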
The null energy condition (NEC) is $T_{\mu\nu} k^\mu k^\nu \ge 0$ for all lightlike vectors $k^\mu$. It similarly becomes
\begin{align}
&\text{ if $p_T \le p_r - | S_r|$,}&& \text{the NEC is } \rho+p_T+\frac{S_r^2}{p_T-p_r}\ge0 ,
\\
& \text{ if $p_T > p_r - | S_r|$,}&& \text{the NEC is }\rho+p_r- 2|S_r|\ge0 .
\end{align}
The weak energy condition implies the null energy condition.
For completeness, we recall that the strong energy condition (SEC) is $(T_{\mu\nu}-\tfrac{1}{2} T^\lambda_{\hspace{0.5 em} \lambda} g_{\mu\nu}) k^\mu k^\nu \ge 0$ for all timelike vectors $k^\mu$, and the dominant energy condition (DEC) is $T_{\mu\nu} k^\mu k^\nu \ge 0$ and $T^\lambda_{\hspace{0.5 em} \mu} T_{\lambda\nu} k^\mu k^\nu \le 0$ for all timelike vectors $k^\mu$.
\section{Pileup models}
We now introduce a class of models for the formation of dark energy stars that describe the collapse of a system from an initial state of positive pressure to a final state with a dark energy core, defined as a central region where $p_r=p_T=-\rho=$constant. We call our class of models pileup models because we build the dark energy core progressively by ``piling up'' matter onto its surface. In pileup models, the energy density at the center increases until it reaches its final value. The pressure at the center initially increases with the density, then decreases until it reaches a value of $p=-\rho$, at which point the dark energy core is formed. After this, the dark energy core expands outward. We call an object where the density at the center has not yet reached its maximum value a precursor and an object with $p=-\rho$ at the center a dark energy star. We call an intermediate stage, if present, the transition.
We build our dark energy core by adding matter to its surface without changing its density so as to satisfy the WEC (\ref{eq:wec2}). Indeed, in the dark energy core, where $p_r=p_T=-\rho$, Eq.~(\ref{eq:wec2}) implies that $S_r$ must be 0 to avoid violating the weak energy condition. As a consequence the mass within any sphere contained in the dark energy core must be constant in $t$, and since this is true of every sphere in the dark energy core, the density must be constant in $t$ as well.
Spatially, the precursor is a ``normal'' object with positive pressure and pressure gradient force pointing outwards opposing the force of gravity. The dark energy star has three zones: an innermost dark energy core where the density and pressure are constant in both space and time, an outermost normal zone where the pressure gradient force points outwards, and an intermediate region we call the inversion zone where the pressure gradient force points inwards. The inversion zone is necessary for the radial pressure to be a continuous function of the radius while being negative in the dark energy core and positive in the normal zone.
In this paper we specify the time and radial dependence in $\rho$ and $p_r$ in a way that avoids singularities and event horizons and does not violate the weak (and therefore null) energy condition. Then we derive $S_r$ from Eq.~(\ref{eq:Einstein-Sr}), and $\Delta$, and hence $p_T$, from Eq.~(\ref{eq:Einstein-Delta}). We use these derived functions to check the validity of the energy conditions Eqs.~(\ref{eq:wec3})-(\ref{eq:wec4}). The metric functions $m$ and $\Phi$ follow from Eqs.~(\ref{eq:m(t,r)}) and (\ref{eq:Phi(t,r)}). The contributions to $\Phi$ from the dark energy core and exterior Schwarzschild vacuum are closed form, but the contribution from the inversion and normal zones in general needs to be evaluated numerically.
\subsection{Example of pileup model}
Here, we present a parametrization of $\rho$ and $p_r$ for the formation of a dark energy star with total mass $M$. As an aid in avoiding event horizons and singularities while maintaining the WEC, we introduce an evolution parameter $f$ that increases monotonically during the collapse. In subsection B, we relate $f$ to the time $t$. At $f=0$, the density at the center reaches the value of the density in the dark energy core, and at $f=f_D$, the dark energy core is formed.
The density function $\rho(t,r)$ has a great deal of freedom in this framework, but there are still some restrictions. To make the radius $R(t)$ of a dark energy star similar to that of a black hole, we set the density in the core to the Schwarzschild density $\rho_S=3M/(4\pi R_S^3)$, where $R_S=2GM$ is the Schwarzschild radius.
Birkhoff's theorem requires
\begin{align}
\int_0^{R(t)} 4 \pi r^2 \rho(t,r) \, dr=M ,
\end{align}
where $M$ is a constant. For simplicity, we use straight lines in the parametrization of $\rho$. In the precursor stage we set
\begin{equation}
\rho(\text{precursor}, f<0)=\begin{cases}
0, & x\geq s, \\
\frac{4 \rho_S}{s^4}(s-x), &x<s,
\end{cases}
\label{rhopre}
\end{equation}
where $x=r/R_S$. The parameter $s$ gives the star radius through $R=s R_S$. For the transition stage and dark energy star stage, we set
\begin{equation}
\rho(\text{transition and dark energy star}, f\ge0)=\begin{cases}
0, & x\geq s, \\
\rho_S \frac{s-x}{s-f},&f<x<s,\\
\rho_S, &0\leq x\leq f.
\end{cases}
\label{rhopost}
\end{equation}
For positive $f$, the radius of the constant density plateau is $R_p=f R_S$. Demanding that $M$ be constant at all times requires the following relationship between $f$ and $s$:
\begin{align}
(s-f) (s^3+fs^2+f^2s+f^3-4)=0.
\end{align}
The only real solution besides the trivial $s=f$ is
\begin{align}
s = - \frac{f}{3} \left[ 1 + 2 \sqrt{2} \sinh\left( \frac{1}{3} \mathop{\rm arccsch} \frac{\sqrt{2} f^3}{5f^3-27} \right) \right] .
\end{align}
When $f=1$, we have $s=1$, the radius of the object equals the Schwarzschild radius, and an event horizon at $r=R_S$ appears in the exterior Schwarzschild metric. When $f=0$, we have $s=4^{1/3}$ and both Eqs.~(\ref{rhopre}) and (\ref{rhopost}) for $x<s$ reduce to $\rho=\rho_S(1-x/s)$, showing that the density is continuous across $f=0$.
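The closed form can be verified numerically against the quartic relation; writing $\mathop{\rm arccsch}(z)=\mathop{\rm arcsinh}(1/z)$, a short check (the sampled $f$ values are our own) is:

```python
import numpy as np

def s_of_f(f):
    """Closed-form real root of s^3 + f s^2 + f^2 s + f^3 - 4 = 0 (besides s = f)."""
    if f == 0.0:
        return 4.0 ** (1.0 / 3.0)
    arg = (5 * f**3 - 27) / (np.sqrt(2) * f**3)   # arccsch(z) = arcsinh(1/z)
    return -f / 3 * (1 + 2 * np.sqrt(2) * np.sinh(np.arcsinh(arg) / 3))

def residual(f, s):
    """Mass-conservation quartic evaluated at (f, s); zero means M is conserved."""
    return s**3 + f * s**2 + f**2 * s + f**3 - 4

residuals = [residual(f, s_of_f(f)) for f in (-0.75, 0.25, 0.5, 0.9, 1.0)]
```

The residuals vanish to machine precision, and the limits $s(1)=1$ and $s(0)=4^{1/3}$ quoted above are reproduced.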
Figure \ref{rainbow den} shows the density profiles Eqs.~(\ref{rhopre}) and (\ref{rhopost}) at various stages of collapse: in the precursor stage ($f=-0.75$), at the beginning of the transition stage ($f=0$), and during the dark energy star stage ($f=0.5$, $f=0.9$). The density of the plateau remains constant and the radius of the plateau increases with time.
\begin{figure}[H]
\centering
\includegraphics[width=9cm]{rhoslice.pdf}
\caption{ Density profiles at various stages of collapse in the example pileup model. The density is in units of the core density $\rho_S$, and the radius is in units of the Schwarzschild radius $R_S$. The evolution parameter $f$ describes the stage of collapse. The density profiles are chosen such that the density has a flat region to contain the core, a linearly decreasing normal zone for simplicity, and constant total mass. }
\label{rainbow den}
\end{figure}
For the radial pressure function $p_r(t,r)$ we require: (i) continuity at the surface $p_r(r\ge R)=0$, (ii) $p_r=-\rho$ within the core, (iii) $p_r\ge-\rho$ everywhere not to violate the WEC, (iv) $p_r\ge 0$ in the precursor stage, and finally (v) $p_r$ to be a continuously differentiable function in $r$ to avoid singularities and for greater regularity in $p_T$.\\
We set
\begin{equation}
p_r(\text{precursor}, f<0)=
\begin{cases}
\rho_S \frac{4 a}{s^3} \cos ^4\! \left(\frac{\pi x}{2 s}\right), & x\leq s, \\
0, & x>s;
\end{cases}
\label{eqpr1}
\end{equation}
\begin{equation}
p_r(\text{transition}, 0\leq f<f_D)=
\begin{cases}
\rho_S a+\rho_S(1+a)\frac{f}{f_D} \Big[\Psi \! \left( \frac{f-x}{f} \right)-1\Big],& 0\leq x<f,\\
\rho_S a \cos ^4 \! \left(\frac{\pi (x-f)}{2 (s-f)}\right), & f<x\leq s,\\
0, & x>s;
\end{cases}
\label{eqpr2}
\end{equation}
\begin{equation}
p_r(\text{dark energy star}, f\ge f_D)=
\begin{cases}
-\rho_S, &x<f-f_D,\\
\rho_S\Big[-1+(1+a) \Psi \! \left(\frac{f-x}{f_D}\right)\Big], & f-f_D\leq x \leq f,\\
\rho_S a \cos ^4\! \left(\frac{\pi (x-f)}{2 (s-f)}\right), & f<x\leq s,\\
0, & x>s.
\end{cases}
\label{eqpr3}
\end{equation}
Here, $\Psi(\xi)$ is the following function that smoothly interpolates between 1 and 0
\begin{equation}
\Psi(\xi)= \begin{cases}
1, & \xi<0,\\
e^{1-\frac{1}{1-\xi^2}}, & 0\leq \xi <1,\\
0, & \xi\ge1.
\end{cases}
\end{equation}
The parameter $a$ is the ratio of the radial pressure to the density at the outer edge of the inversion zone. For $a$, we choose $a=0.315$ such that the maximum value of $p_r/\rho$ is equal to $1/3$, which is the value for radiation. For the parameter $f_D$, we use $f_D=0.25$. In this example, the inversion zone and dark energy core are within the constant density plateau. This is necessary for the dark energy core, but the inversion zone can in principle extend outside the plateau.
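A small numerical sketch (with the values $a=0.315$ and $f_D=0.25$ above, and $f=0.5$ with its corresponding $s$ from the mass-conservation relation) confirms that the dark-energy-star pressure profile of Eq.~(\ref{eqpr3}) is continuous at the core edge, at the edge of the inversion zone, and at the surface:

```python
import numpy as np

rho_S, a, f_D = 1.0, 0.315, 0.25   # core density units; parameters from the text
f = 0.5
# s solves s^3 + f s^2 + f^2 s + f^3 = 4 (closed form, arccsch(z) = arcsinh(1/z))
s = -f / 3 * (1 + 2 * np.sqrt(2)
              * np.sinh(np.arcsinh((5 * f**3 - 27) / (np.sqrt(2) * f**3)) / 3))

def Psi(xi):
    """Smooth interpolation: 1 for xi <= 0, 0 for xi >= 1."""
    if xi <= 0:
        return 1.0
    if xi >= 1:
        return 0.0
    return np.exp(1 - 1 / (1 - xi**2))

def p_r(x):
    """Radial pressure in the dark energy star stage (f >= f_D), x = r / R_S."""
    if x < f - f_D:
        return -rho_S                                   # dark energy core
    if x <= f:
        return rho_S * (-1 + (1 + a) * Psi((f - x) / f_D))  # inversion zone
    if x <= s:
        return rho_S * a * np.cos(np.pi * (x - f) / (2 * (s - f)))**4  # normal zone
    return 0.0                                          # exterior vacuum

eps = 1e-9
jumps = [abs(p_r(x - eps) - p_r(x + eps)) for x in (f - f_D, f, s)]
```

The profile sits at $-\rho_S$ throughout the core, rises through the inversion zone to $a\,\rho_S$ at $x=f$, and falls smoothly to zero at the surface $x=s$.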
Figure \ref{rainbow pr} shows the radial pressure profiles Eqs.~(\ref{eqpr1})-(\ref{eqpr3}) for the same stages of collapse as in Fig. \ref{rainbow den} plus two extra stages ($f=1/4$, $f=1/8$) to show the formation of the inversion zone.
\begin{figure}[H]
\centering
\includegraphics[width=9cm]{prslice.pdf}
\caption{ Radial pressure for the same stages of collapse as in Fig. \ref{rainbow den}, plus two extra contours showing the formation of the inversion zone.}
\label{rainbow pr}
\end{figure}
\subsection{End states and choice of $f(t)$}
We call a configuration that has reached a value $f=f_\infty$ and is no longer changing in $t$ an ``end state''. By the chain rule $\partial m/\partial t=(\partial m/\partial f)(\partial f/\partial t)$ and Eq.~(\ref{eq:Einstein-Sr}), end states have $S_r=0$. The threshold for event horizon formation is $f=s=1$. We may prevent event horizons by specifying $f_\infty<1$. Still, the WEC may be violated at a particular $f_\infty<1$ due to a large negative tangential pressure $p_T$ arising from the $\partial p_r/\partial r$ term of Eq.~(\ref{eq:Einstein-Delta}) becoming large and negative as $f$ and $s$ approach 1. We may prevent such a violation of the WEC either by choosing a suitable radial profile for $p_r$ [such that a large positive gravity term in Eq.~(\ref{eq:Einstein-Delta}) cancels the large negative pressure gradient term in Eq.~(\ref{eq:Einstein-Delta})] or by specifying a more restrictive $f_\infty$ (reducing the pressure gradient term by having an object with a large radius $R$). We do the former, exploiting the fact that the gravity term in the anisotropy Eq.~(\ref{eq:Einstein-Delta}) becomes large and positive if $f_\infty$ is close to 1. Figure \ref{endstates} shows the anisotropy $\Delta$ for end states at $f_\infty=0.5,0.75,0.99$ for the radial pressure and density as defined in Subsection A.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{deltastats.pdf}
\caption{Anisotropies from various possible end states. The lines are $f_\infty=0.5$ (red), $f_\infty=0.75$ (brown), and $f_\infty=0.99$ (green). Note the pronounced positive anisotropy in the inversion zone and near $r=R_S$ as $R\rightarrow R_S$ (visible in the green curve). Despite the pressure gradient term in the anisotropy becoming large and negative, the total anisotropy in the normal zone remains positive because the positive gravitational term also increases.}
\label{endstates}
\end{figure}
It is also illustrative to examine the end states in terms of forces using Eq.~(\ref{Eq:Force}). We introduce the following notation for the pressure gradient, gravitational, and anisotropy force densities (negative forces point inwards):
\begin{align}
F_p &=-\frac{\partial p_r}{\partial r},\\
F_G &=-\frac{G \left( m+4\pi r^3 p_r \right) \left( \rho+p_r \right)}{r^2 \left( 1 - \frac{2 G m}{r} \right) },\\
F_\Delta &=\frac{2\Delta}{r}.
\end{align}
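As an illustration of these definitions, the following sketch (assuming geometric units $G=c=1$, a constant-density plateau, and the anisotropy taken from the static limit of Eq.~(\ref{eq:Einstein-Delta}), so that the three force densities balance exactly) verifies numerically that the gravitational force density is repulsive inside the plateau when $-\rho_S<p_r<-\rho_S/3$:

```python
import math

G = 1.0  # geometric units

def force_densities(r, rho, p_r, dp_r_dr, m):
    """The three force densities for a static configuration; Delta is taken
    from the static anisotropy relation, so F_p + F_G + F_Delta = 0."""
    F_p = -dp_r_dr
    red = 1.0 - 2.0 * G * m / r                     # metric factor 1 - 2Gm/r
    F_G = -G * (m + 4.0 * math.pi * r**3 * p_r) * (rho + p_r) / (r**2 * red)
    Delta = 0.5 * r * dp_r_dr \
        + G * (m + 4.0 * math.pi * r**3 * p_r) * (rho + p_r) / (2.0 * r * red)
    F_Delta = 2.0 * Delta / r
    return F_p, F_G, F_Delta

rho_S, r = 0.1, 0.4
m = 4.0 * math.pi * r**3 * rho_S / 3.0              # plateau: constant density
# inside the repulsive-gravity window -rho_S < p_r < -rho_S/3:
Fp, FG, FD = force_densities(r, rho_S, -0.6 * rho_S, 0.0, m)
```

The exact balance $F_p+F_G+F_\Delta=0$ reflects the static limit of Eq.~(\ref{Eq:Force}).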
\begin{figure}[H]
\centering
\includegraphics[width=5.8cm]{force1x.pdf}
\includegraphics[width=5.8cm]{force2x.pdf}
\includegraphics[width=5.8cm]{force3x.pdf}
\caption{Force densities for the same end state configurations shown in Fig. \ref{endstates}. $F_p$ (green line), $F_G$ (black line), and $F_\Delta$ (blue line) are the pressure gradient, gravitational, and anisotropy force densities, respectively. Negative forces point inwards. The small $r$ region, where the three force densities are zero, is the dark energy core. In the inversion zone the anisotropy force $F_\Delta$ largely cancels the inward pressure gradient force $F_p$, although if one looks closely one can see that the gravity force pushes outward in the lower inversion zone. As $f_\infty$ gets close to one, the gravity force $F_G$ becomes much stronger, and the other forces increase to compensate it. }
\label{endforces}
\end{figure}
Since the changing momentum terms on the right side of Eq.~(\ref{Eq:Force}) are zero for end states, examining the forces in end states gives an idea of how the object is supported, see Fig. \ref{endforces}. We see that the anisotropy force is in fact the only outward force in the part of the inversion zone with positive pressure, in line with the fact that continuous pressure gravastars require anisotropy \cite{cattoen2005gravastars}. If the pressure in the inversion zone is within a negative range, specifically $-\rho_S<p_r<-\rho_S/3$, the gravitational force pushes outwards. Repulsive gravity and anisotropy are unavoidable consequences of interpolating continuously between a dark energy core and a region with positive pressure.
We choose the relation $f=f(t)$ between the evolution parameter $f$ and the time $t$ so as to avoid singularities, event horizons, and violations of the weak energy condition. For our example we use the function
\begin{equation}
f(t)=\begin{cases}
-f_\infty, & t<-\frac{t_C}{2}, \\
\frac{f_\infty}{4} \Big[15\frac{t}{t_C} -40 \left(\frac{t}{t_C}\right)^3+48 \left(\frac{t}{t_C}\right)^5\Big], & -\frac{t_C}{2}\le t \le \frac{t_C}{2}, \\
f_\infty, & t>\frac{t_C}{2}.
\end{cases}
\label{ftau}
\end{equation}
Here, $t_C$ is the total collapse time, which we set as $t_C=30 R_S$. Setting $f_\infty=0.9$ avoids problems with the WEC because $p_T$ in the normal zone remains positive, avoids event horizons because $f_\infty<1$, and allows the radius of the dark energy star to become smaller than the Buchdahl bound of $(9/8) R_S$, since $R_\infty=s_\infty R_S=1.094R_S$.
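As a quick sketch (in units with $R_S=1$, so $t_C=30$, and $f_\infty=0.9$), one can check that the quintic in Eq.~(\ref{ftau}) joins the constant pieces with matching value and vanishing first derivative at $t=\pm t_C/2$:

```python
def f_of_t(t, f_inf=0.9, t_C=30.0):
    """Evolution parameter f(t) from the piecewise quintic."""
    if t < -t_C / 2.0:
        return -f_inf
    if t > t_C / 2.0:
        return f_inf
    u = t / t_C
    return 0.25 * f_inf * (15.0 * u - 40.0 * u**3 + 48.0 * u**5)

def df_dt(t, f_inf=0.9, t_C=30.0):
    """Analytic derivative of the quintic piece (zero outside)."""
    if abs(t) > t_C / 2.0:
        return 0.0
    u = t / t_C
    return 0.25 * f_inf * (15.0 - 120.0 * u**2 + 240.0 * u**4) / t_C
```

The maximum collapse rate is $\partial f/\partial t|_{t=0}=15 f_\infty/(4 t_C)=0.1125$ in these units.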
With the specified $f(t)$, we may calculate the evolution of $p_T$, $\Delta$, and $S_r$, which is displayed in Fig. \ref{Srainbow}. The tangential pressure $p_T$ and anisotropy $\Delta$ become large and positive in the inversion zone. The anisotropy $\Delta$ is zero for all times at $r=0$ and is zero inside the dark energy core. The energy flow $S_r$ is confined to the normal zone.
\begin{figure}[H]
\centering
\includegraphics[width=5.8cm]{pTslice.pdf}
\includegraphics[width=5.67cm]{deltaslice.pdf}
\includegraphics[width=5.94cm]{srslice.pdf}
\caption{The tangential pressure $p_T$, the anisotropy $\Delta$, and the energy flow term $-S_r$ at the same stages of collapse as shown in Figs. \ref{rainbow den} and \ref{rainbow pr}. $\Delta$ and $p_T$ are large and positive in the inversion zone. $\Delta$ and $S_r$ are zero at $r=0$ and inside the dark energy core. $S_r$ is nonzero in the normal zone only. We plot $-S_r$ because it is the actual term in the stress energy tensor.}
\label{Srainbow}
\end{figure}
In Fig. \ref{fig:my_label} we show the evolution of the matter functions $\rho$, $p_r$, $p_T$, and $S_r$ over a wide range of $r$ and $t$. The white region on the right is the exterior of the object. The white line on the top left delineates the dark energy core. The colored horizontal lines correspond to the profiles shown in Figs. \ref{rainbow den}, \ref{rainbow pr}, and \ref{Srainbow}. One can see the formation and spread of the dark energy core in the $p_r$ and $p_T$ panels (red area) and the density plateau in the $\rho$ panel (purple area). Also, $p_r$ and $p_T$ are similar, showing low anisotropy, for a precursor $t<0$, and they become distinct, showing high anisotropy, in the transition and dark energy star stages. The energy flow $S_r$ is, in general, smaller than the other matter functions, is zero in the density plateau, and goes to zero at large $|t|$ as the collapse starts and stops. We remark that we have no infinitesimally thin shells of matter because the functions $\rho$, $p_r$ and $m$ are sufficiently continuous that we never take a derivative of a discontinuity and get a Dirac delta function. The function $\rho(t,r)$ is $C^0$ in $r$ and $t$, $p_r(t,r)$ is $C^1$ in $r$ and $C^0$ in $t$, and $m(t,r)$ is $C^1$ in $r$ and $t$.
\begin{figure}[H]
\centering
\begin{minipage}{0.8 \linewidth}
\includegraphics[width=6.8cm]{rhofield.pdf}
\includegraphics[width=6.8cm]{prfield.pdf}
\includegraphics[width=6.8cm]{ptfield.pdf}
\includegraphics[width=6.8cm]{srfield.pdf}
\end{minipage}
\begin{minipage}{0.1 \linewidth}
\includegraphics[width=1.5cm]{filedlegend.pdf}
\end{minipage}
\caption{ Plots of the $T_{\mu \nu}$ functions as functions of radius $r$ and time $t$: energy density $\rho$, radial pressure $p_r$, tangential pressure $p_T$, and energy flow/momentum density $-S_r$. The white region on the right is vacuum exterior. The white line on the top left delineates the dark energy core and the colored horizontal lines correspond to the profiles shown in Figs. \ref{rainbow den}, \ref{rainbow pr}, and \ref{Srainbow}. For $|t|>t_C/2$ the configurations are static. }
\label{fig:my_label}
\end{figure}
For completeness, we display plots of the metric functions $m(t,r)$ and $\Phi(t,r)$ in Fig. \ref{metricfunc}. The $m(t,r)$ function can be expressed analytically due to the simplicity of $\rho$ and the integral in Eq.~(\ref{eq:m(t,r)}). The contributions to the integral for $\Phi(t,r)$ [see Eq.~(\ref{eq:Phi(t,r)})] are analytic in the Schwarzschild vacuum and dark energy core, but the integral is evaluated numerically in the inversion zone and normal zone. In the absence of singularities $m(t,0)=0$ and $m(t,r)=M$ for $r>R(t)$. Also, $\Phi(t,\infty)=0$, and the minimum in $r$ of $\Phi$ is at $r=0$ for the precursor but at some $r\ne0$ for the dark energy star.
\begin{figure}[H]
\centering
\includegraphics[width=7cm]{phiplot.pdf}
\includegraphics[width=6.8cm]{mplot.pdf}
\caption{ Plots of the metric functions $\Phi(t,r)$ and $m(t,r)$. In the absence of singularities, $m(t,r)$ is constrained to be 0 at $r=0$ and $M$ in the vacuum exterior. $\Phi(t,r)$ is constrained to be 0 at $r=\infty$ and shows a minimum at $r\ne0$ after the formation of a dark energy core. In our case, $m$ is a simple piecewise polynomial function. $\Phi$ has analytic contributions from the dark energy core and vacuum exterior, but the contributions from the normal and inversion zones are not simple and are evaluated numerically. }
\label{metricfunc}
\end{figure}
\section{Detailed Weak Energy Condition examination}
In this section, we examine the weak energy condition in detail, and we find that the example pileup model we defined does in fact satisfy the weak (and therefore null) energy condition.
\subsection{Dark Energy Core}
Within the plateau, $\partial m/\partial t=0$ and therefore $S_r=0$ and the energy condition inequalities are given in Eq.~(\ref{eq:wecstat}). The dark energy equation of state $p_r=p_T=-\rho$ satisfies these inequalities trivially.
\subsection{Inversion zone}
The inversion zone is within the plateau for our example so the energy conditions are still the static ones from Eq.~(\ref{eq:wecstat}). The first two are satisfied automatically by the construction of $\rho(t,r)$ and $p_r(t,r)$. The third may be shown to be true in the following way. We may write
\begin{equation}
\rho+p_T= \rho+p_r+\Delta=\rho+p_r+\frac{r}{2}\frac{\partial p_r}{\partial r}+\frac{G \left( m+4\pi r^3 p_r \right) \left( \rho+p_r \right)}{2 r \left( 1 - \frac{2 G m}{r} \right) }.
\end{equation}
Since within the plateau region $\partial p_r/\partial r \ge 0$, the following inequality is implied:
\begin{equation}
\rho+p_T \ge (\rho+p_r)\Bigg[1+\frac{G \left( m+4\pi r^3 p_r \right) }{2 r \left( 1 - \frac{2 G m}{r} \right) }\Bigg].
\end{equation}
Since $\rho+p_r\ge0$ and $ 1-2 G m/r\le 1$, one has
\begin{equation}
(\rho+p_r)\Bigg[1+\frac{G \left( m+4\pi r^3 p_r \right) }{2 r \left( 1 - \frac{2 G m}{r} \right) }\Bigg]\ge (\rho+p_r)\Bigg[1+\frac{G \left( m+4\pi r^3 p_r \right) }{2 r}\Bigg].
\end{equation}
Within the plateau region $m=\frac{4}{3}\pi r^3 \rho_S$ and $p_r\ge-\rho_S$, therefore $m+4\pi r^3 p_r\ge-\frac{8\pi}{3}r^3 \rho_S$, meaning
\begin{equation}
(\rho+p_r)\Bigg[1+\frac{G \left( m+4\pi r^3 p_r \right) }{2 r}\Bigg]\ge(\rho+p_r)\left(1-\frac{4\pi G \rho_S r^2}{3}\right).
\end{equation}
Using $r\le R_S$ in the plateau region we obtain
\begin{equation}
(\rho+p_r)\left(1-\frac{4\pi G \rho_S r^2}{3}\right)\ge(\rho+p_r)\left(1-\frac{4\pi G \rho_S R_S^2}{3}\right)=(\rho+p_r)\left(1-\frac{G M}{R_S}\right)=(\rho+p_r)\left(1-\frac{1}{2}\right)=\frac{\rho+p_r}{2}.
\end{equation}
Again using the fact that $\rho+p_r\ge0$ by construction, we may conclude
\begin{equation}
\rho+p_T \ge \frac{\rho+p_r}{2} \ge0 .
\end{equation}
The last weak energy condition inequality is therefore satisfied.
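The conclusion can also be spot-checked numerically for the end-state inversion-zone profile itself. The following sketch (assuming units $G=\rho_S=1$, for which $R_S=\sqrt{3/(8\pi)}$ and $GM/R_S=1/2$, and using the inversion-zone branch of Eq.~(\ref{eqpr3}) at $f=f_\infty=0.9$, keeping the $\partial p_r/\partial r$ term) evaluates $\rho+p_T$ across the inversion zone:

```python
import math

G, rho_S = 1.0, 1.0
a, f, f_D = 0.315, 0.9, 0.25
R_S = math.sqrt(3.0 / (8.0 * math.pi * G * rho_S))   # fixes G*M/R_S = 1/2

def Psi(xi):
    if xi <= 0.0:
        return 1.0
    if xi >= 1.0:
        return 0.0
    return math.exp(1.0 - 1.0 / (1.0 - xi**2))

def dPsi(xi):
    if xi <= 0.0 or xi >= 1.0:
        return 0.0
    return Psi(xi) * (-2.0 * xi / (1.0 - xi**2)**2)

def rho_plus_pT(x):
    """rho + p_T at x = r/R_S in the inversion zone (static plateau);
    note that in these units 2Gm/r = x^2 within the plateau."""
    xi = (f - x) / f_D
    p_r = rho_S * (-1.0 + (1.0 + a) * Psi(xi))
    dp_dr = rho_S * (1.0 + a) * dPsi(xi) * (-1.0 / f_D) / R_S
    r = x * R_S
    m = 4.0 * math.pi * r**3 * rho_S / 3.0
    grav = G * (m + 4.0 * math.pi * r**3 * p_r) * (rho_S + p_r) \
        / (2.0 * r * (1.0 - 2.0 * G * m / r))
    return (rho_S + p_r) + 0.5 * r * dp_dr + grav

# scan the inversion zone x in [f - f_D, f]
worst = min(rho_plus_pT(f - f_D + k * f_D / 200.0) for k in range(201))
```

The scan finds $\rho+p_T\ge0$ throughout the zone, with equality only at the core edge where $\rho+p_r=0$.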
\subsection{Normal region}
In this region there is momentum present, so we need to examine the full energy conditions Eqs.~(\ref{eq:wec3})-(\ref{eq:wec4}). In certain cases, however, the applicability conditions allow us to make simplifications. For the cases in Eqs.~(\ref{eq:wec3})-(\ref{eq:wec4}), we proceed as follows.
\subsubsection{Equation~(\ref{eq:wec3}) }
Because of the condition of applicability of Eq.~(\ref{eq:wec3}), it follows that $\rho-\frac{S_r^2}{p_r}\ge \rho-p_r$. Because of the value we set for $a$, in our example $\rho\ge p_r$, so we have
\begin{equation}
\rho-\frac{S_r^2}{p_r}\ge \rho-p_r\ge0.
\end{equation}
The inequality from Eq.~(\ref{eq:wec3}) is therefore satisfied when applicable.
\subsubsection{Equation~(\ref{eq:wec2}) }
Note that within the normal region $\rho$ and $p_r$ are nonnegative, but the inequality from Eq.~(\ref{eq:wec2}) can still be violated if $S_r$ is too high.
We may reexpress $\rho+p_r-2|S_r|\ge0$ as
\begin{equation}
\Big|\frac{\partial m}{\partial t}\Big| \le 4 \pi r^2 e^{\Phi}\sqrt{1-\frac{2 G m}{r}}\frac{\rho+p_r}{2}\text{ for all $r$}.
\end{equation}
We may use the chain rule $\frac{\partial m}{\partial t}=\frac{\partial m}{\partial f}\frac{\partial f}{\partial t}$, and the fact that $\frac{\partial f}{\partial t}$ is independent of $r$, to find an equivalent condition on $f$ rather than $m$.
\begin{equation}
\Big|\frac{\partial f}{\partial t}\Big| \le \min_{r\,\in\,\text{normal zone}}\left( 4 \pi r^2 e^{\Phi}\sqrt{1-\frac{2 G m}{r}}\frac{\rho+p_r}{2|\frac{\partial m}{\partial f}|}\right),
\label{superf}
\end{equation}
and Eq.~(\ref{superf}) is a constraint on $\partial f/\partial t$, which we show in Figure \ref{wec1proof} together with our choice of $f(t)$. We see from the figure that the inequality (\ref{superf}) is clearly satisfied. Therefore a rest frame for the matter exists and the condition from Eq.~(\ref{eq:wec2}) is satisfied in the normal region.
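Given tabulated profiles over the normal zone, the right-hand side of Eq.~(\ref{superf}) is a straightforward minimum. A sketch of this evaluation (the helper name and the sample tuples below are synthetic illustrations, not the actual model data):

```python
import math

def dfdt_bound(samples):
    """Right-hand side of the constraint on |df/dt|: samples is a list of
    tuples (r, Phi, m, rho, p_r, dm_df) tabulated over the normal zone."""
    G = 1.0
    vals = []
    for r, Phi, m, rho, p_r, dm_df in samples:
        vals.append(4.0 * math.pi * r**2 * math.exp(Phi)
                    * math.sqrt(1.0 - 2.0 * G * m / r)
                    * (rho + p_r) / (2.0 * abs(dm_df)))
    return min(vals)
```

One then checks that $|\partial f/\partial t|$ for the chosen $f(t)$ stays below this bound at every stage of collapse.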
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{srabswec.pdf}
\caption{Check of the WEC (\ref{eq:wec2}) in the normal zone. The condition in Eq.~(\ref{superf}) imposes $\partial f/\partial t$ to be below the orange line. The black line is $\partial f/\partial t$ for our choice of $f$ Eq.~(\ref{ftau}). Thus we see that our choice satisfies the WEC (\ref{eq:wec2}) in the normal zone.}
\label{wec1proof}
\end{figure}
\subsubsection{Equation~(\ref{eq:wec4})}
Because of the applicability condition $\Delta \le-|S_r|$, the inequality from Eq.~(\ref{eq:wec4}) is implied by $\rho+p_r+2\Delta\ge0$ where applicable. In terms of the anisotropy force $F_\Delta=2\Delta/r$, this reads $F_\Delta\ge-(\rho+p_r)/r$. Since $F_G\le0$ in the normal zone, one may write $-(\rho+p_r)/(r F_G)\ge F_\Delta/F_G$. We can then simplify the expression on the left with the form of $F_G$ and rewrite it as the following:
\begin{equation}
\frac{r-2Gm}{G(m+4\pi r^3 p_r)}\ge\frac{F_\Delta}{F_G}.
\label{sufficient}
\end{equation}
The interpretation of this sufficient condition is as follows: WEC (\ref{eq:wec4}) is satisfied within this region if the anisotropy force is not pulling ``in'' too strongly.
In order to examine this, we look within the region for the maximum in $r$ of the ratio of the forces $F_\Delta/F_G$ and compare it to the minimum in $r$ of the left side of Eq.~(\ref{sufficient}). We graph both in Fig. \ref{wec3forces} and conclude the inequality (\ref{sufficient}) is always satisfied.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{ptwec.pdf}
\caption{The maximum of the force ratio $F_\Delta/F_G$ (blue) is below the minimum of the left-hand side of Eq.~(\ref{sufficient}) (red) for all $f$. Therefore, the third WEC inequality (\ref{eq:wec4}) is satisfied in the normal region.}
\label{wec3forces}
\end{figure}
\subsection{Minimum \boldmath$T_{\mu \nu}k^\mu k^\nu$ Summary}
As a summary, Fig. \ref{contour} is a graph of the minimum of $T_{\mu \nu}k^\mu k^\nu$ over position and time. We see that this minimum is non-negative at all points, being zero in the dark energy core and vacuum exterior and positive in the normal and inversion zones.
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{wecsummary.pdf}
\caption{Minimum of $T_{\mu \nu}k^\mu k^\nu/\rho_S$ plotted at points in $r,t$. The dark energy core is the region on the top left bounded by the white curve. The white region on the right is the exterior of the collapsing object. Note the formation and spread of the dark energy core where $T_{\mu \nu}k^\mu k^\nu=0$. Also note that $T_{\mu \nu}k^\mu k^\nu$ is non-negative at all points, meaning the weak energy condition is satisfied at all times and positions during collapse.}
\label{contour}
\end{figure}
\section{Conclusion}
We have presented a model for dark energy stars that describes the collapse of a spherical object from an initial state of positive pressure to a final state with negative pressure (equation of state $p=-\rho$) inside a finite radius core. Our model contains no spacetime or coordinate singularities, no event horizons, and it satisfies the weak and null energy conditions. In the static case, dark energy stars offer an ultracompact object with no singularities or event horizons. Our work shows that dynamical formation can still satisfy these criteria.
The strong energy condition is violated due to the $p=-\rho$ region. The dominant energy condition is violated in our particular example by a large positive $p_T$, although it appears that the DEC may be satisfied by less compact dark energy stars that require less anisotropy (i.e., $f_\infty\le0.43$). In any case, dominant energy condition violations due to high tangential pressure are recognized as common in anisotropic gravastar systems \cite{eosgravastar}.
We feel it is worth mentioning explicitly why the various singularity theorems (see Ref. \cite{Hawking:1973uf}) do not apply to our system. Penrose's 1965 singularity theorem, originally published in Ref. \cite{PhysRevLett.14.57}, states that singularities will result from gravitational collapse when the curvature condition $R_{\mu \nu}k^\mu k^\nu \ge0$ for null vectors $k^\mu$ (which is equivalent to the null energy condition by Einstein's equation \cite{Curiel2017}) is satisfied, a global Cauchy hypersurface exists, and a closed trapped surface exists. No closed trapped surface forms in our system, so this theorem does not apply. Two additional theorems by Hawking from 1967 and one by Hawking and Penrose in 1970 are more general in that criteria other than trapped surfaces can imply singularities. However, these theorems require the curvature condition $R_{\mu \nu}k^\mu k^\nu \ge0$ for all timelike vectors $k^\mu$, which is equivalent to the strong energy condition \cite{Hawking:1973uf,Curiel2017} and is violated in our system because it contains $p=-\rho$ dark energy. Buchdahl's theorem \cite{PhysRev.116.1027} is inapplicable both because the trace of the stress energy tensor in our system may be negative and because the pressure may be anisotropic.
In spherically symmetric systems, once a dark energy core is formed, its density cannot change without violating the weak energy condition. As such, we have introduced the idea of pileup models, and we have shown an example of a pileup model where the weak energy condition is satisfied and no event horizon or singularity is formed. By reexpressing the spherically symmetric Einstein field equations, we have defined our model in terms of a density function and radial pressure function. Defining the system in terms of matter functions is conducive to an easy evaluation of the energy conditions.
Alternatively, one could have specified some equation of state, or perhaps multiple equations of state, for anisotropic matter, then one could have used Eq.~(\ref{Eq:Force}) as a force equation and Eq.~(\ref{eq:Continutiy}) as a continuity equation to solve for the time evolution. Finding equations of state and initial conditions that result in formation of dark energy stars without singularities or event horizons while maintaining the WEC is an area that requires further research.
\section{Acknowledgements}
P.G. thanks Emil Mottola for an intriguing conversation that rekindled his interest in gravastars, and Stefano Ansoldi, Antonio De Felice, Shinji Mukohyama, Fumihiro Takayama, and Takahiro Tanaka for helpful discussions on the topic of this paper. P.G. also thanks the Yukawa Institute for Theoretical Physics at Kyoto University where part of this work was carried out. This work has been partially supported by NSF Award PHY-1720282 at the University of Utah.
\medskip
\bibliographystyle{apsrev4-1}
\section{Introduction}
While the logistics market is typically shaped by strongly competing players, there are also initiatives that encourage collaboration, e.g., for more cost-efficient and sustainable resource usage\footnote{https://eshipco.com/en/}\footnote{https://www.transporeon.com/en/reports/horizontal-collaboration}.
A simplified model of the logistics market is given by a set of customers and a set of vehicles belonging to different logistics companies with vehicle-specific costs.
The setup of cooperating companies, which seek to jointly optimize the overall costs while serving all customers, is formalized by the (multi-)vehicle routing problem \citep{laporte1992vehicle} (cf.\ Figure \ref{intro_vrp}). It involves assigning customers to the companies as well as determining the corresponding vehicle routes.
A natural additional constraint is that companies want to keep certain types of their costs private: e.g., company-specific driver costs, fuel consumption, and fixed costs for marketing or customer service reflect business-internal information. Sharing such information would lead to a decisive competitive disadvantage.
We interpret solving the multi-vehicle routing problem as a team Markov game \citep{wang2002reinforcement} with partially observable
costs: The parallel acting vehicles play a cooperative game to build the (ideally) team-optimal vehicle routes, whereby each vehicle can observe only its own local cost information.
While other approaches for collaborative vehicle routing have studied the scenario of self-interested vehicles (companies) which can form coalitions involving only a subset of vehicles \citep{Mak2021}, we focus on the setup of selfless, team-oriented vehicles which all participate in the collaboration.
\begin{wrapfigure}{r}{0.35\textwidth}
\includegraphics[width=0.35\textwidth]{Plots/Intro_Routing_Sol.png}
\caption{An exemplary solution of a vehicle routing problem with $10$ customers and $3$ vehicles: Each customer is represented by a node. Each vehicle has its own depot (starting node). A vehicle's route is given by a sequence of visited nodes. Each edge induces a vehicle-specific cost.}
\label{intro_vrp}
\end{wrapfigure}
As reinforcement learning (RL) has lately become very attractive for learning heuristics for NP-hard combinatorial optimization problems like vehicle routing problems \citep{nazari2018reinforcement,kool2019attention,chen2019learning}, we build on an existing RL framework to implement our game. The so-called Neural Rewriter \citep{chen2019learning} considers the setup of a single vehicle and learns how to iteratively improve a given solution within a rewriting episode of fixed length. In each step of the episode the current routing solution, given by a sequence of visited nodes, gets slightly changed by swapping two nodes. Our proposed multi-agent Neural Rewriter (MANR) extends the Neural Rewriter to a multi-agent system with multiple vehicles (agents) with individual costs. We assume the individual cost matrices to originate from the same underlying distribution, i.e., we consider a set of homogeneous agents.
To ensure the required parallel action execution in the team Markov game, we have to avoid conflicting rewriting actions of agents. The assumption of partially observable costs poses additional requirements on the rewriting game structure: Given the incomplete knowledge about its team members, an agent must never modify the routes of other agents but can solely alter its own local route. Yet at the same time, agents must be able to exchange nodes, i.e., to change the customer-company-assignment. This raises the question: How should agents know whether excluding a customer node or integrating a new customer node is beneficial for the team if they only observe their own costs? There is a trade-off between non-disclosure and optimality; some sort of information exchange between the team members is inevitable for finding a global optimum.
We solve the described challenges on different levels of the MANR.
On the game setup level, the key is the introduction of a pool set to coordinate agent actions and prevent conflicts. It serves as a collection point for agents to drop customer nodes and take customer nodes from.
Providing the local agents with some global knowledge for deciding about exchanging customer nodes is ensured by the employed learning approach.
We ultimately seek to limit the necessary exchange of cost information between the players as much as possible. As a first step, we realize limited disclosure of vehicle-specific costs only during inference and consider shared costs during training. This is a valid assumption from an application perspective as training can be performed based on fictitious company cost information sampled from a realistic cost distribution. Our actor-critic setup follows the idea of centralized-learning-decentralized-execution \citep{zhang2021multi}: During training, all vehicle cost information is shared to learn a centralized critic for estimating the expected team benefit of rewriting actions. The critic passes this global knowledge on to an agent policy, which itself observes only local information. An agent thus learns to locally behave as a team player, assuming a representative team. During inference, each agent uses only his local cost (and not the cost of other agents) to determine his actions.
We give an overview of the related work in Section \ref{rel_work}.
In Section \ref{MANR} we define the considered collaborative vehicle routing problem and present and discuss the adapted Neural Rewriter, the so-called multi-agent Neural Rewriter as a solution approach. Section \ref{sec_emp_eval} empirically evaluates the MANR on simulated data for different setups varying the size of the routing problem as well as the number of vehicles and compares its performance to OR-Tools. Section \ref{conclusion} summarizes the results and outlines plans for future work.
\section{Related Work}
\label{rel_work}
\textbf{RL for vehicle routing}\hspace{0.2cm} RL has been successfully used to tackle NP-hard vehicle routing problems. Most of the work focuses on setups involving a single vehicle.
E.g., \citet{chen2019learning} consider a capacitated vehicle routing problem where a single vehicle has to visit a set of customers in (possibly) multiple tours starting and ending at its depot without exceeding the vehicle's capacity within one tour. Their approach follows the idea of local search, a well-established heuristic solution approach in the operations research community. Based on an initial solution, they iteratively create locally rewritten solutions via local modifications. They define a local modification as swapping two nodes in the current routing solution sequence and let an RL algorithm learn how to create good neighbouring solutions therewith. Other RL-based approaches for vehicle routing typically build a solution sequentially by visiting one more customer in each step of the episode. E.g., both \citet{kool2019attention,nazari2018reinforcement} consider multiple variants of vehicle routing problems which, however, also concern only a single vehicle. \citet{nazari2018reinforcement} describe the idea of multiple vehicles in a collaborative or competitive setting as an interesting direction and already mention the need to solve emerging conflicts between multiple vehicles. We decided to build on the rewriting approach of \citet{chen2019learning} as it allows conflicts to be handled more naturally. In an iteratively built solution, agents either need to perform their actions sequentially to avoid conflicts, which would violate our assumption of a Markov game with simultaneously acting agents, or they need a (stochastically influenced) a posteriori conflict solver which would be challenging for agents to learn about. The rewriting approach allows us to introduce a pool mechanism for ensuring conflict-free parallel agent actions (details are given below). Moreover, the rewriting setup avoids a challenging sparse-reward environment: at each step, the cost of the current solution indicates the quality of this solution.\newline\newline
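To make the rewriting step concrete, the following is a minimal single-vehicle sketch (a simplified illustration with an explicit distance matrix, not the learned policy of \citet{chen2019learning}) that performs one local-search step by trying all swaps of two customer nodes:

```python
import itertools

def route_cost(route, dist):
    """Cost of a route given as a node sequence (depot first and last)."""
    return sum(dist[a][b] for a, b in zip(route, route[1:]))

def best_swap_rewrite(route, dist):
    """One rewriting step: try all swaps of two customer nodes (depot
    endpoints fixed) and return the best neighbouring solution."""
    best, best_c = route, route_cost(route, dist)
    for i, j in itertools.combinations(range(1, len(route) - 1), 2):
        cand = list(route)
        cand[i], cand[j] = cand[j], cand[i]
        c = route_cost(cand, dist)
        if c < best_c:
            best, best_c = cand, c
    return best, best_c
```

A learned policy replaces the exhaustive search by proposing which swap to apply.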
\textbf{Limited disclosure in multi-agent RL}\hspace{0.2cm} In our setup, agents seek to keep their local costs private, whereby these costs influence the reward and also local state representations.
Limiting the disclosure of such agent-specific information is an active research topic for both cooperative and competitive multi-agent systems. The extreme case of independent learning \citep{matignon2012independent} considers agents as independent entities which observe solely their local environment and are not allowed to share information at all. Other approaches allow for some sort of information exchange as it is known to generally improve performance over independent learning for cooperative tasks \citep{tan1993multi}.
In centralized-learning-decentralized-execution schemes \citep{zhang2021multi, lowe2017multi}, one typically assumes perfect global knowledge about the agents' local states, actions and rewards during training. This information is leveraged to train policies which themselves observe solely local information. For execution, only the policies are used in a decentral manner and thus guarantee limited disclosure during inference.
The literature discusses also a stricter interpretation of limited disclosure by additionally requiring it during the training phase. One possible approach is to share solely locally learned model parameters instead of the local raw data in spirit of distributed machine learning \citep{mcmahan2017communication}. E.g., in \citet{zhang2018fully}, the cooperating agents jointly learn a global value function by sharing the agents' individual parameters of their local estimate of the global value function over a time-varying communication network.
As a first step, we consider the scenario of limited disclosure during inference, realized by a centralized-learning-decentralized-execution approach. To the best of our knowledge, there exists no RL-based approach to solve a multi-vehicle routing problem with simultaneously acting cooperating agents which can observe solely their own individual cost.
\section{Multi-Agent Neural Rewriter}
\label{MANR}
The developed multi-agent Neural Rewriter (MANR) uses building blocks from the original Neural Rewriter \citep{chen2019learning} to solve a multi-vehicle routing problem with vehicle-specific costs and depot locations in spirit of a team Markov game with partially observable costs.
During conception of the multi-agent setup, we considered the following guiding principles:
$\vspace{-5pt}$
\begin{itemize}
\item Self-determined agents: Whether and how an agent's route is changed can be decided only by the agent itself and not by other agents. This is essential since, due to the partial cost observability, the agents do not have the necessary information for deciding about optimal routes of other agents.
\item Conflict-free parallel decisions: Any decision of an agent cannot be in conflict with other agent decisions at the same time step.
\vspace{-5pt}
\end{itemize}
A naive extension of the Neural Rewriter to multiple vehicles that retains the original action (and state) space locally for single agents does not satisfy these criteria: an agent swapping any two customer nodes in the overall solution can change another agent's route and also leaves room for contradictory agent actions. Solving these issues by simply restricting agents to swap only customer nodes within their own routes would prohibit the necessary customer exchange between agents. Hence, we needed to redefine the rules of our team Markov game.
The approach we propose is the introduction of a pool set through which single agents can drop customer nodes or integrate new ones. Agents can interact with each other only via the pool; direct interaction is not possible.
The pool coordinates the node exchange and thereby guarantees both the self-determination of agents and conflict-free agent decisions, as described below in more detail. ``Agentizing'' the Neural Rewriter also required adaptations of the models used. For a detailed listing of modifications, the reader is referred to Appendix \ref{app_diffNR}.
In the following, we first introduce the problem statement in Section \ref{problemStatement} and our translation to a team Markov game in Section \ref{RLSetup}. The implementation of the game with the corresponding MANR model components is described in Section \ref{models}.
\subsection{Problem Statement}
\label{problemStatement}
We consider a multi-vehicle routing problem where $\displaystyle n$ vehicles (agents), characterized by individual costs and depots, collaborate on serving a set of customers $\displaystyle {\mathbb{V}}$. Each customer node $\displaystyle {\bm{v}} \in {\mathbb{V}}$ has to be visited exactly once in total by any of the agents. Each agent route starts and ends at its own depot.
A collection of agent routes which satisfies these two criteria is called a feasible solution. The goal is to find an optimal solution which is a feasible solution and has minimal team average costs (cf.\ Figure \ref{intro_vrp}). The team average cost is given by the average over all agent route costs.
\subsection{Team Markov Game}
\label{RLSetup}
\citet{chen2019learning} modelled the rewriting procedure of local search for a one-vehicle routing problem as a Markov decision process: states represent routing solutions, actions represent local modifications of a solution, and rewards indicate the cost improvement of a local modification. In our multi-agent system, we generally differentiate between two perspectives on the problem: the local perspectives of single agents, referring to their own local states and actions, and the global perspective, which observes all agents' states and actions simultaneously. We consider a single global episode which involves rewriting global states with global actions, where all agents execute one local action in parallel. The system obtains global rewards which reflect the overall success of rewriting steps for the whole agent team. To address the challenge of non-conflicting parallel local agent actions, we establish the pool set and model it as an additional component in the system's global state. The pool offers the agents the opportunity to drop customer nodes which they want to exclude from their routes and also to integrate new customer nodes. It also plays an important role for the global team reward, as improper usage of the pool leads to a collective penalty.
The precise Markov game setup including the kinds of agent interaction with the pool is described in the following.
\newline\newline
\textbf{States}\hspace{0.2cm} A global state $\displaystyle {\bm{s}}_t = (\displaystyle {s}^1_t, \displaystyle {s}^2_t, ...,\displaystyle {s}^n_t,{\mathbb{P}}_t)$ at time $\displaystyle t$ is defined by the concatenation of all local agent states at time $\displaystyle t$ and the corresponding current state of the pool $\displaystyle {\mathbb{P}}_t$. A local state $\displaystyle {s}^i_t$ of agent $\displaystyle i$ at time $\displaystyle t$ is given by the agent's route at time $\displaystyle t$ characterized by the sequence of visited nodes starting and ending at the agent's depot. The pool state $\displaystyle {\mathbb{P}}_t$ is a set of nodes which is either empty or contains customer nodes which were dropped there by agents and which are thus unvisited at that time step. We note that not all global states are necessarily feasible solutions to the routing problem but only those with an empty pool. See Figure \ref{fig_rewritten_states} for exemplary global states.
\begin{figure}[h]
\centering
\raisebox{-\height}{\includegraphics[width=0.32\textwidth]{Plots/Example_state_action_0_10Nodes_2Agents_new.png}}
\raisebox{-\height}{\includegraphics[width=0.32\textwidth]{Plots/Example_state_action_1_10Nodes_2Agents_new.png}}
\raisebox{-\height}{\includegraphics[width=0.32\textwidth]{Plots/Example_state_action_2_10Nodes_2Agents_new.png}}
\caption{Exemplary sequence of three global states with corresponding semantic global actions.}
\label{fig_rewritten_states}
\end{figure}
\textbf{Actions}\hspace{0.2cm}
A global action $\displaystyle {\bm{a}}_t = (\displaystyle {a}^1_t, \displaystyle {a}^2_t, ..., \displaystyle {a}^n_t)$ at time $\displaystyle t$ is defined by the concatenation of all local agent actions at time $\displaystyle t$. A local action $\displaystyle {a}^i_t$ of agent $\displaystyle i$ at time $\displaystyle t$ involves making two successive decisions which specify the region of the solution to be changed as well as the rule of how to change it. Technically, this means selecting a region node $\displaystyle {w}^i_t$ and afterwards a corresponding rule node $\displaystyle {u}^i_t$, with $\displaystyle {a}^i_t = (\displaystyle {w}^i_t, \displaystyle {u}^i_t)$, where the region node is moved by being placed after the rule node.
We allow local actions only to rewrite the agent's own local state and update the pool state. Semantically, an agent can re-arrange one node within its local state, integrate a new node by taking it from the pool, exclude a node by giving it to the pool or also keep its local state unchanged (cf.\ Figure \ref{fig_rewritten_states} for examples).
To encourage the frequent generation of feasible solutions, we restrict the set of allowed local actions depending on the pool state: each time the pool is filled, and hence the global state is an infeasible solution, agents are asked to integrate nodes from the pool to re-establish a feasible solution and are not allowed to make any other changes to their local states.
We note that the local agent actions which involve local re-ordering or giving nodes to the pool cannot interfere with each other. To guarantee the same for taking nodes from the pool, the pool has a coordinating mechanism which offers nodes to agents for integration in a conflict-free manner (cf.\ Section \ref{models}).
The designed pool component thus enables us to meet the principles of conflict-free decisions and self-determined agents, since all communication regarding node exchange flows through the pool and nodes from there are integrated on a voluntary basis.
The rules of the game, including the correspondence of the described rewriting actions to the choices of region and rule, are formalized in Appendix \ref{app_game}.
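Since the formal rules are deferred to the appendix, the following Python fragment encodes only one plausible reading of the action semantics described above; the data types (routes as node lists, the pool as a set, a sentinel \texttt{"POOL"} rule for exclusion) are our own illustrative assumptions:

```python
def apply_local_action(route, pool, region, rule):
    """Apply one local action (region, rule) to an agent's route.

    One plausible reading of the semantics in the "Actions" paragraph;
    the exact game rules live in the paper's appendix.
      - region in route, rule in route: re-order (move region after rule)
      - region in pool,  rule in route: integrate region after rule
      - region in route, rule == "POOL": exclude region to the pool
      - region == rule: keep the local state unchanged
    """
    route, pool = list(route), set(pool)
    if region == rule:
        return route, pool                   # no-op: local state unchanged
    if rule == "POOL":
        route.remove(region)                 # exclude node to the pool
        pool.add(region)
        return route, pool
    if region in pool:
        pool.remove(region)                  # integrate node from the pool
    else:
        route.remove(region)                 # local re-ordering
    route.insert(route.index(rule) + 1, region)
    return route, pool
```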
\newline\newline
\textbf{Rewards}\hspace{0.2cm}
The global reward generally reflects the improvement in the team average cost between two feasible global states and penalizes the agent team if infeasible solutions are created for too many consecutive steps.
The team average cost in the global state $\displaystyle {\bm{s}}_t$ is given by $c({\bm{s}}_t) = \frac{1}{n} \sum_{i=1}^n {c}^i({s}_t^i)$ where $\displaystyle {c}^i(\displaystyle{s}_t^i)$ denotes the local cost of agent $\displaystyle i$ at time $\displaystyle t$.
The global reward at time $\displaystyle t$ is then defined by
\begin{equation}
\displaystyle {r}_t = \begin{cases}
\displaystyle c({\bm{s}}_{prev_f(t)})-c({\bm{s}}_t) & \text{if $\displaystyle {\bm{s}}_t$ is feasible,}\\
-10 & \text{if $\displaystyle {\bm{s}}_t$ is infeasible and the last $\displaystyle m$ global states were infeasible,}\\
0 & \text{else,}
\end{cases}
\label{reward}
\end{equation}
where $\displaystyle c(\displaystyle{\bm{s}}_{prev_f(t)})$ denotes the team average cost of the last feasible solution before time step $\displaystyle t$ and $\displaystyle m$ is a hyperparameter. A strictly positive reward thus indicates that the current feasible solution improves on the last one. For $m > 1$ the reward becomes non-Markovian, which is currently left to the RL algorithm to cope with instead of being explicitly handled in the state representation. The choice of $m$ is discussed in Appendix \ref{app_hyperparam}. Note that computing the global reward requires the agent-specific costs in a global state to be revealed during training.
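As a minimal executable sketch, the reward definition above can be written as follows. The names are illustrative, and the penalty window is read here as the last $m$ states including the current one, which is one possible interpretation; we also assume the episode starts with a feasible $\displaystyle {\bm{s}}_0$, as in the paper:

```python
def team_average_cost(local_costs):
    """Team average cost c(s_t): mean over the n agents' route costs."""
    return sum(local_costs) / len(local_costs)

def global_reward(cost_history, feasible_history, m, penalty=-10.0):
    """Reward for the newest state s_t, for t >= 1.

    cost_history[k]     -- team average cost of global state s_k
    feasible_history[k] -- True if s_k is feasible (empty pool); s_0 feasible
    """
    t = len(feasible_history) - 1
    if feasible_history[t]:
        # improvement over the last feasible solution strictly before t
        prev = max(k for k in range(t) if feasible_history[k])
        return cost_history[prev] - cost_history[t]
    # infeasible: penalize only if the last m states were all infeasible
    if t + 1 >= m and all(not f for f in feasible_history[-m:]):
        return penalty
    return 0.0
```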
\newline\newline
\textbf{Rewriting episode}\hspace{0.2cm}
The globally observed episode starts with an initial feasible global state at time zero and is limited by a fixed number of rewriting steps $\displaystyle T$: $(\displaystyle{\bm{s}}_0,\displaystyle{\bm{a}}_0,\displaystyle{r}_1,\displaystyle{\bm{s}}_1,\displaystyle{\bm{a}}_1,\displaystyle{r}_2,\displaystyle{\bm{s}}_2,...,\displaystyle{\bm{s}}_{T-1},\displaystyle{\bm{a}}_{T-1},\displaystyle{r}_T, \displaystyle{\bm{s}}_T)$.
The final solution to the routing problem is defined by the last feasible global state in the rewriting episode.
\subsection{Model Overview}
\label{models}
In this section, we present the RL-based models used to implement the described rewriting game.\newline
We saw that a local agent action is a two-step procedure, requiring a region and a corresponding rule node to be chosen. Defining one policy over the tuple of regions and rules would result in a discrete distribution with a sample space size that is quadratic in the problem size.
Hence, we follow \citet{chen2019learning} and reduce the space by considering two separate distributions for regions and rules. In our current implementation, only the rule distribution is modelled as a neural network; the region distribution is random (while following some rules). A random region can harm the overall rewriting procedure only by slowing it down, as described below in more detail.
Since we assume homogeneous and thus interchangeable agents, we learn a single agent-agnostic policy for choosing rules.
To ensure limited disclosure of local costs in the execution phase, we require the agent policy to observe only local cost information for decision-making. This is enough information for agents to help the team by optimizing the order of their own currently visited nodes. However, it is not sufficient for deciding whether exchanging nodes is beneficial for the team. Hence, we must provide them with global cost knowledge during learning. This is realized by an actor-critic approach following the centralized-learning-decentralized-execution scheme: based on perfect knowledge about all agent costs, the centralized critic learns to judge global actions in given global states with respect to their benefit for the team. The critic is used to guide the agent policy. It enables the agent policy to learn about the underlying cost matrix distribution and thus to locally assess whether integrating or excluding a node is helpful for the team. To facilitate the training process of the critic, we do not train it by showing it, a posteriori, a global action composed of the chosen local agent actions, but let the critic itself centrally select the global action during training. This allows a better trade-off between exploration and exploitation and thus helps learning, especially given the high-dimensional joint action space. We force agents to centrally coordinate their local actions only during training. During inference, solely the agent policy is used in a decentralized manner, see Figure \ref{fig_global_action_training_inference}.
In the following, we summarize the workflow for generating a global action during training and inference together with the necessary model components. Due to space limitations, we refer the reader to Appendix \ref{app_loss} for a discussion of the corresponding loss functions.
\newline\newline
\underline{Encoding nodes:} For each node in the problem, we learn high-dimensional embeddings which incorporate information about the current state. Each agent encodes its currently visited nodes solely based on local information with an LSTM-based agent-agnostic local state encoder. For nodes in the pool we introduce an additional model, the pool state encoder, as the pool state differs from the local ones in its semantic structure. These learned embeddings are fed into our centralized critic and agent rule policy for decision-making. See Appendix \ref{app_models} for more details on the node encoders.
\begin{figure}[h]%
\centering
\subfloat[Training: Each agent samples multiple local candidate actions which are centrally collected to form global candidate actions. One of these global actions is centrally selected for execution.]{{\includegraphics[width=0.515\textwidth]{Plots/global_action_training.png} }}%
\qquad\qquad
\subfloat[Inference: Each agent chooses one local action which automatically determines the global action in a decentralized manner.]{{\includegraphics[width=0.3\textwidth]{Plots/global_action_inference.png} }}%
\qquad
\caption{Global action determination: training vs.\ inference.}
\label{fig_global_action_training_inference}
\end{figure}
\newline\newline
\underline{Determining local actions:} During training, we sample $\displaystyle Z > 1$ candidate local actions for each agent. During inference, each agent selects one local action ($\displaystyle Z=1$) (cf.\ Figure \ref{fig_global_action_training_inference}).
A local action requires choosing a region node to be moved and, based thereon, a rule node. These decisions are made successively with two different models:
a region node is determined with the region selector, which is influenced by randomness. The corresponding rule node is chosen by a learned, agent-agnostic but agent-cost-dependent rule policy, the local rule selector.\newline
\textbf{Region selector}\hspace{0.2cm}
Region nodes are chosen randomly while following some rules. For local re-ordering or exclusion of a node, agents can independently sample from a uniform distribution defined over their currently visited customers to get candidate local region nodes.
If the pool is filled, and a region node must thus be one of the customer nodes in the pool, we coordinate the assignment of region nodes to agents to avoid conflicts. The coordination can be viewed as an automated mechanism of the pool which simply informs each agent about the region selected for it.
This mechanism is also driven by randomness but at the same time equipped with a little intelligence: it neither offers a node first to the agent who just dropped it in the pool, nor asks the same agent to integrate a node multiple times in a row (throughout the rewriting episode).
We note that a random region selector does not necessarily degrade the rewriting procedure by leading to a bad rewritten state, since an agent can always choose to do nothing via the rule node if a bad region node was selected. Nevertheless, it can slow down the rewriting procedure.\newline
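One way to realize such a coordinating offer mechanism is sketched below. The data structures (dictionaries mapping nodes to agent ids) and the fallback for over-constrained cases are our own assumptions, not the paper's implementation; we also assume at most as many pool nodes as agents:

```python
import random

def offer_pool_nodes(pool_nodes, n_agents, dropped_by, last_asked, rng=random):
    """Randomly assign each pool node to a distinct agent, avoiding the agent
    that just dropped the node and the agent last asked to integrate it.

    dropped_by[node] / last_asked[node] map nodes to agent ids (may be absent).
    Assumes len(pool_nodes) <= n_agents. Returns {agent_id: node}.
    """
    assignment = {}
    free_agents = list(range(n_agents))
    rng.shuffle(free_agents)                 # random but coordinated offers
    for node in pool_nodes:
        banned = {dropped_by.get(node), last_asked.get(node)}
        candidates = [a for a in free_agents if a not in banned]
        # fall back to any free agent if the constraints cannot be met
        agent = candidates[0] if candidates else free_agents[0]
        free_agents.remove(agent)
        assignment[agent] = node
    return assignment
```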
\textbf{Local rule selector}\hspace{0.2cm} Given an agent's region node, the learned agent-agnostic local rule selector completes a local action by selecting a corresponding rule node. The decision is based on a predicted probability distribution over all possible rule candidates. It relies on an attention mechanism which processes node encodings of the already selected region and all respective possible rules, as well as information on how choosing the rule would affect the agent's local state (in terms of its node representations).
During training, the rule is sampled from the predicted probability distribution while for inference we choose the rule with the highest probability.
\newline\newline
\underline{Determining a global action:} During training, the $\displaystyle Z$ candidate local actions per agent are centrally collected and zipped together to build $\displaystyle Z$ candidate global actions. We choose one global action out of these candidates for rewriting, based on an action-value function learned over the joint action space.
During inference, the single chosen local agent actions automatically determine the global action (cf.\ Figure \ref{fig_global_action_training_inference}).\newline
\textbf{Global action scorer}\hspace{0.2cm} The learned MLP-based global action scorer quantifies the expected team benefit of rewriting a given global state with a given global action. During training, we use it in an epsilon-greedy strategy to establish a good trade-off between exploration and exploitation.
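The epsilon-greedy selection over the $Z$ candidate global actions can be sketched as follows, where \texttt{scorer} stands in for the learned MLP-based action-value function; all names are illustrative:

```python
import random

def select_global_action(global_state, candidate_actions, scorer, eps, rng=random):
    """Epsilon-greedy choice among the Z candidate global actions.

    scorer(state, action) estimates the expected team benefit of rewriting
    `state` with `action` (the global action scorer); any callable works here.
    """
    if rng.random() < eps:
        return rng.choice(candidate_actions)      # explore
    # exploit: pick the candidate with the highest predicted team benefit
    return max(candidate_actions, key=lambda a: scorer(global_state, a))
```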
\section{Empirical Evaluation}
\label{sec_emp_eval}
We empirically evaluate the MANR for vehicle routing in different setups on simulated data varying the number of agents as well as the problem size, i.e., the number of customer nodes in the routing problem. The procedure for data simulation is summarized in Section \ref{sec_data_gen}.
Section \ref{sec_exps} discusses the experiment results for all setups and compares them to a benchmark. There exists no comparable (RL-based) approach which could be used in the same collaborative limited disclosure setting out of the box. For this reason, we compare our results to an established approach with perfect cost knowledge and evaluate the competitiveness of our method. We chose the widely used OR-Tools optimization software as a benchmark which is also based on the concept of local search. In Appendix \ref{app_collaboration} we furthermore demonstrate the benefit of collaboration for the average agent in our approach by comparing it with a non-collaborative setup.
\subsection{Data Generation}
\label{sec_data_gen}
Customer nodes and agent depot nodes are sampled within the unit square, see Figure \ref{fig_data_init}. We draw a random fraction of customer nodes near the agent depots to enforce the participation of all agents in an optimal routing solution. The precise node sampling procedure is explained in Appendix \ref{app_datagen}.
The agent-specific costs between two customer nodes are modelled via agent-specific velocities. Each agent velocity $\displaystyle \eta^i$ is uniformly sampled from $[0.95,1]$, i.e., an agent can be at most $5\%$ faster than other agents. The cost for agent $\displaystyle i$ to travel between two nodes ${\bm{v}}, {\bm{z}} \in [0,1]^2$ is then given by the inverse-velocity-scaled Euclidean distance $\displaystyle {c}^i({\bm{v}},{\bm{z}}) = \tfrac{1}{\eta^i} \lVert {\bm{v}}-{\bm{z}} \rVert_2 $.
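In code, the sampling of velocities and the resulting agent-specific cost read as follows (a direct transcription of the formulas above, with illustrative function names):

```python
import math
import random

def sample_velocities(n_agents, rng=random):
    """Agent velocities eta^i ~ U[0.95, 1]: at most 5% speed difference."""
    return [rng.uniform(0.95, 1.0) for _ in range(n_agents)]

def agent_cost(eta_i, v, z):
    """c^i(v, z) = (1 / eta^i) * ||v - z||_2 for nodes v, z in the unit square."""
    return math.dist(v, z) / eta_i
```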
\begin{wrapfigure}{r}{7.5cm}
\begin{tabular}{@{}cc@{}}
\hspace{-9pt}
\includegraphics[width=0.31\textwidth]{Plots/Init_node_dist_10Nodes_5Agents_new.png} &
\hspace{-34pt}
\includegraphics[width=0.31\textwidth]{Plots/Init_solution_10Nodes_5Agents_new.png}
\end{tabular}
\caption{Sampled routing problem with a corresponding sampled initial solution for $10$ customers and $5$ agents.}
\label{fig_data_init}
\end{wrapfigure}
The MANR requires an initial feasible solution as a starting point for the rewriting procedure. To ensure that successful rewriting requires use of the pool, we simply assign the customer nodes to agents randomly and as evenly as possible. Each agent then applies the nearest neighbour heuristic \citep{rosenkrantz1977analysis} to its customer node set; see Figure \ref{fig_data_init} for an exemplary initial solution. Further examples of initial states are depicted in Appendix \ref{app_datagen}.
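The initialization can be sketched as follows: a random near-even assignment of customers to agents, followed by a greedy nearest-neighbour tour per agent. The data types (nodes as 2D tuples) and helper names are our own assumptions:

```python
import math
import random

def nearest_neighbour_route(depot, customers):
    """Greedy nearest-neighbour tour: start at the depot, repeatedly visit the
    closest unvisited customer, and return to the depot."""
    route, current, remaining = [depot], depot, list(customers)
    while remaining:
        nxt = min(remaining, key=lambda v: math.dist(current, v))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)
    return route

def initial_solution(depots, customers, rng=random):
    """Assign customers to agents as evenly as possible at random, then build
    each agent's route with the nearest-neighbour heuristic."""
    customers = list(customers)
    rng.shuffle(customers)
    n = len(depots)
    chunks = [customers[i::n] for i in range(n)]   # near-even split
    return [nearest_neighbour_route(d, c) for d, c in zip(depots, chunks)]
```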
\subsection{Experiments}
\label{sec_exps}
We performed experiments for vehicle routing problems of sizes $10$ and $20$ with $2$, $3$, and $5$ agents, respectively. Each data set consists of $6280$ vehicle routing problems and is split into three parts of $80\%$-$10\%$-$10\%$ for training, validation, and testing. Hyperparameter tuning was performed on the validation set with Ray Tune\footnote{https://docs.ray.io/en/latest/tune/}. During training, we chose $30$ rewriting steps in all setups with $10$ customer nodes and $40$ rewriting steps for those with $20$ nodes. For inference, we increased the number of rewriting steps to compensate for the stochastic region selection: in contrast to the training phase, each agent now samples just a single random region in a step; we don't have the luxury of choosing from a set of candidate actions with (most probably) different random region suggestions. We consider $100$ rewriting steps throughout all experiments for evaluation, which we found to work well for all setups. We refer to Appendix \ref{app_hyperparam} for a complete overview of selected hyperparameter values for each of the experiments. We compare our results to the routing solver from OR-Tools, which is based on local search and specifically tuned for vehicle routing\footnote{https://developers.google.com/optimization/routing/vrp}. We use the default search parameters and start with the same initial solutions as the MANR. Note that the solver requires complete knowledge about all vehicle costs for optimization.
\newline\newline
\textbf{Evaluation}\hspace{0.2cm} For each experiment, we make $\displaystyle 20$ inference runs on the test set (due to the region stochasticity) and compute the test set performance averaged over all runs. Performance is measured by the team average cost of the last feasible solution in the rewriting episode, and the initial solutions are kept equal throughout all runs. We also compute the mean performance when choosing the best run for each test sample individually (``MANR best''). This is a natural and valid metric when inference time is not decisive in the application. We compare these values to the mean test set performance of the initial solutions as well as of the solutions produced by OR-Tools: we report the performance gaps in terms of percentage cost reductions relative to the initial solution and relative to OR-Tools. We also report the average run times for one routing problem when performing evaluation on a server with a single GPU (Tesla V100S-PCIE-32GB) and CPU core. The results for the setup of $10$ nodes are presented in Table \ref{table_eval_10}. The results for $20$ nodes as well as the absolute performance values for all setups can be found in Appendix \ref{app_eval_figures}.
\begin{table}[!htb]
\caption{Empirical results for $10$ customer nodes and a varying amount of agents.}
\label{table_eval_10}
\begin{subtable}{0.545\linewidth}
\centering
\caption{Performance gaps}
\begin{tabular}{|c||c|c|c|c|}
\hline
\multirow{2}{*}{Setup} & \multicolumn{2}{c|}{gap init} & \multicolumn{2}{c|}{gap OR-Tools} \\
\cline{2-5}
& MANR & MANR best & MANR & MANR best \\ \hline
2 agents & 32\% & 40\% & -21\% & -7\% \\
3 agents & 41\% & 51\% & -37\% & -14\%\\
5 agents & 51\% & 62\% & -59\% & -24\%\\ \hline
\end{tabular}%
\end{subtable}
\begin{subtable}{0.545\linewidth}
\centering
\caption{Average run time in seconds}
\begin{tabular}{|c|| c| c|}
\hline
\multirow{2}{*}{Setup} & \multirow{2}{*}{MANR} & \multirow{2}{*}{OR-Tools} \\
&&\\
\hline
2 agents & 0.31 & 0.01 \\
3 agents & 0.47 & 0.01 \\
5 agents & 0.71 & 0.01 \\
\hline
\end{tabular}
\end{subtable}%
\end{table}
\textbf{Discussion}\hspace{0.2cm}
For all experimental setups (both $10$ and $20$ nodes), the MANR significantly improves over the initial solutions: we observe an average percentage cost reduction of $40\%$ for the mean MANR solution over all runs (``MANR'') and $50\%$ when considering the best solution for each routing problem across runs (``MANR best''). Generally, the more agents, the bigger the percentage cost reduction. This stems from the fact that with more agents, the probability that a node gets assigned to a wrong agent in the randomly generated initial solution is higher. Hence, we expect worse initial solutions and thus more room for improvement with an increasing number of agents. The performance of the OR-Tools benchmark is not reached by the MANR, but it gets reasonably close, taking into account the imperfect cost knowledge of the MANR agents. Evaluating the results for both $10$ and $20$ nodes, the MANR solution quality shows an average gap of $-41\%$ relative to OR-Tools, and $-14\%$ when considering MANR best. The more agents, the more pronounced the gap. This can be explained by the increasing global action space dimensionality, which makes it more difficult for the global action scorer (the critic) to learn.
Looking at individual rewriting episodes of the MANR, we saw that agents learned to use the newly introduced pool in a meaningful way (see Appendix \ref{app_rollout} for an excerpt of an exemplary rewriting rollout). We also observed that the MANR even solves a few isolated problems better than OR-Tools. It will be interesting to analyze the nature of these routing problems in the future. Comparing inference times, the well-tuned OR-Tools software, implemented in C++, is significantly faster. It is also less influenced by the number of agents. However, in contrast to the MANR, it requires perfect information about all agent costs. Also, up to now, we have not optimized our Python code for efficiency. There is room for improvement, e.g., by parallelizing the decentralized agent action generation during inference, which currently runs sequentially in our implementation. We could also investigate in more detail which number of rewriting steps suffices in the respective settings and save time with shorter rewriting rollouts. Moreover, we expect the run times to approach each other when scaling to larger problem sizes, as was observed for the Neural Rewriter.
\section{Conclusion}
\label{conclusion}
We presented the multi-agent Neural Rewriter (MANR) for collaboratively solving a multi-vehicle routing problem with limited disclosure of vehicle-specific costs in the spirit of a team Markov game. Vehicle-specific costs are explicitly shared only in the training phase, to allow agents to learn about the underlying cost distribution. During inference, each agent performs its action based solely on local cost information.
We enable parallel conflict-free agent actions in the game by introducing a pool mechanism which coordinates the necessary node exchange between agents. The introduced pool comes at the cost of also generating infeasible solutions within the rewriting episode, but this is counteracted by teaching agents proper pool usage during training.
Our agent-agnostic policy, which must solely process local information, is provided with some global knowledge during training via its cost-omniscient critic.
Our empirical results demonstrate that the approach indeed enables agents observing only local costs to act for the sake of the team, i.e., to exchange nodes with other agents via the pool in a meaningful way: the MANR improves an initial solution, in which nodes are simply randomly assigned to agents, by $50\%$ on average. Agents learn to assess the capabilities of representative team members and can base their rewriting decisions thereon.
The experiments also confirm the inescapable trade-off between non-disclosure and optimality: the performance of our benchmark, the OR-Tools heuristic, which assumes perfect cost information, is not reached on average. However, the MANR gets close and even solves a few isolated problems better.
In the future, we plan to further empirically evaluate the scalability of our approach by increasing the number of nodes in the routing problem. We want to improve our current implementation by fine-tuning the model architectures and enhancing the code efficiency, as we expect a significant performance boost from it. Another adaptation of the current setup could involve explicitly representing the pool history in the state, e.g., by introducing a counter for the consecutive steps of a filled pool, to guarantee the Markov property in the game setup and thus simplify learning. The approach could furthermore be improved by replacing the random component in the region selection with a learned model. Also, the issue of limited agent scalability due to joint action space learning needs to be addressed. Moreover, we plan to extend to other setups. One direction could be to consider a heterogeneous agent team where the agent cost matrices are not sampled from the same distribution. This requires providing the agent policy with more global cost knowledge than in the current setup and hence intensifies the trade-off between non-disclosure and optimality.
Another interesting direction is to tighten the limited disclosure requirement by also demanding it during the learning phase. A possible way to avoid explicit revelation of agent-specific costs during training is to employ distributed machine learning approaches, i.e., to share only local model parameters. We could also let agents exchange abstract information via learned vector-based embeddings. For both options, it would be interesting to try to quantify the non-disclosure: we could introduce an opponent to the game which tries to reconstruct agent costs from the shared information.
\subsubsection*{Acknowledgments}
The research of N.\ Paul was supported by the Fraunhofer Society within the project ``SWAP – Hierarchical swarms as production architecture with optimized utilization''. The work of T.\ Wirtz was funded by the German Federal Ministry of Education and Research, ML2R - no. 01S18038B. S.\ Wrobel contributed as part of the University of Bonn and the Fraunhofer Center for Machine Learning within the Fraunhofer Cluster for Cognitive Internet Technologies.
\section{Introduction}
\label{sec:motivation}
\begin{figure}[!t]
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{Idea1.pdf}
\caption{}
\vspace{8pt}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{Idea2.pdf}
\caption{}
\end{subfigure}
\caption{Comparison of two approaches to the prediction problem with the use of an ANN: (a) black box; (b) combination of a mathematical model and an ANN\label{fig:ideaBasic}}
\end{figure}
Predicting the output of a system is one of the most frequent tasks in engineering, economics, and medicine, whether it is the performance estimation of a device, cost analysis, or predictive medicine. Whenever possible, mathematical models are developed based on the laws of physics, observations, and, unavoidably, assumptions. Mathematical models can often be inaccurate, incomplete, or very hard to formulate due to gaps in knowledge. In such cases, approximation methods are used to predict a system's output. Computational models called Artificial Neural Networks (ANNs) have brought a vast improvement to predictions in many fields. An ANN can achieve excellent performance in function approximation, comparable to accurate mathematical models \cite{Nikzad2012}. For such high accuracy, an ANN needs a lot of data: in a typical learning process, more than twenty data points are needed per single input dimension. Obtaining enough data may be expensive and/or time-consuming, and in effect unprofitable. The required amount of data depends mostly on the nonlinearity of the process one wants to predict. An example of such a problem is the modeling of a Solid Oxide Fuel Cell (SOFC), which is the main field of research of the authors of this publication. SOFC models are hard to generalize over different cell types and are very complicated because of transport and reaction kinetics phenomena. Obtaining data for one type of SOFC, which is characterized by several parameters, may cost a few months of work. For such a problem, obtaining more than twenty data points per dimension would take years and is therefore infeasible \cite{Brus:2015kt,Buchaniec:2019jo,Mozdzierz:tt}. The basic idea of the integration of an ANN with a mathematical model is presented in \figurename\ \ref{fig:ideaBasic}.
The approach stems from the fact that the solution of every mathematical model is represented as a function or a set of functions. The method consists of determining the parts of a mathematical model that are most uncertain or lack a theoretical description, and substituting them with the prediction of an ANN, instead of using the mathematical model or the ANN alone. Consequently, working with the Integrated Mathematical Model - Artificial Neural Network (IMANN) differs from the practice with a regular ANN. In a conventional procedure, a dataset is divided into training and validation data, and the architecture as well as the division of the dataset depend on the prediction's precision on the validation data. In addition to these steps, the IMANN requires dividing the problem into two parts: the mathematical model describes one of them, and the artificial neural network approximates the other. The decision regarding this division is a crucial part of working with the IMANN, as it affects the architecture of the network as well as the data. The main focus of the paper is to analyze the improvement of predictive accuracy under different levels of integration of an ANN and a mathematical model.
This aim will be achieved by incorporating an artificial neural network to predict different parts of benchmark functions, which are regarded as a general representation of mathematical models. The ANN will be trained based on the mathematical model's output. A detailed description is presented in section \ref{sec:bench}. In practice, this means that the ANN prediction replaces only some equations in a model (or even just a part of an equation) and adapts to the system's behavior. As an example, consider a system of equations in which one of the equations is replaced by the prediction of an artificial neural network. The obtained approximation values of that function would be forced to fulfill all the equations in the system. If the prediction fails in doing so, there is a discrepancy between the measured and expected output during training. As a consequence, the ANN is forced to improve its weights and biases until the system of equations is satisfied, which ensures that the laws governing the system are included in the ANN. As we show later in this work, such a replacement can benefit both the accuracy and the minimal dataset needed for an artificial neural network prediction.
\section{Literature review}
\label{sec:literature}
The problem of limited datasets is addressed frequently in the literature with many different approaches \cite{Andonie2010, Cataron2012, Shaikhina2017, Micieli2019}. One type of method is data augmentation: the generation of slightly different samples by modifying existing ones \cite{Simard2003}. Baird et al. used a combination of many graphical modifications to improve text recognition \cite{Baird1992}. Simard et al. proposed an improvement, the Tangent Prop method, in which modified images were used to define a tangent vector that was included in the error estimation \cite{Simard1992}. Methods using such vectors are still being improved in recent works \cite{Rozsa2016, Lemley2017}. In the literature one can find a variety of methods that modify datasets to improve ANN learning. A remarkably interesting approach when dealing with two- and three-dimensional images is based on the persistent diagram (PD) technique. The PD changes the representation of the data to extract crucial characteristics \cite{Edelsbrunner2000} and uses as little information as possible to store them \cite{Adams2017}. Adcock et al. \cite{Adcock2016} presented how persistent homology can improve machine learning. All the mentioned techniques are based on the idea of manipulating the dataset on which the ANN is trained.
Another method is to include knowledge in the structure of the ANN itself. A successful attempt to add knowledge is made by Knowledge-Based Artificial Neural Networks (KBANN) \cite{Towell1994}. The KBANN starts with some initial logic, which is transformed into an ANN that is then refined using the standard backpropagation method \cite{Towell1994}. The KBANN utilizes knowledge given in a symbolic representation, in the form of logical formulas \cite{Towell1994}. For situations when knowledge is given in a functional representation, i.e., containing variables, an interesting approach was presented by Su et al. \cite{Su1992}. The authors proposed a new type of neural network, the Integrated Neural Network (INN) \cite{Su1992}. In the INN, an ANN is coupled with a mathematical model in such a way that it learns how to bias the model's output to improve concurrence with the modeled system \cite{Su1992}. The INN output consists of the sum of the model and ANN outputs \cite{Su1992}. A similar approach to improving a mathematical model was presented by Wang and Zhang \cite{Wang1997}. They proposed an ANN in which some of the neurons had their activation functions changed to empirical functions \cite{Wang1997}. The ANN was used to alter the empirical model of an existing device in such a way that it can be used for a different device. This idea led to Neuro-Space Mapping (Neuro-SM), in which the functional model is a part of the ANN \cite{Bandler1999, Na2017}. Neuro-SM can be viewed as a model augmented by the ANN: the ANN maps the input and output of the model, but it does not interfere with the functional representation itself.
\section{Methodology}
\label{sec:model}
\subsection{IMANN background}
\begin{figure}
\centering
\includegraphics{./IdeaScheme.pdf}
\caption{The Integrated Mathematical Model - Artificial Neural Network block diagram \label{fig:ideaDetail}}
\end{figure}
\begin{figure}
\centering
\includegraphics{./Implementation.pdf}
\caption{Architecture schema of the IMANN \label{fig:imannImplementation}}
\end{figure}
The contribution of this paper is an in-depth analysis of an Artificial Neural Network integrated with a mathematical model. The integration is achieved by an interaction between the mathematical model and the ANN: a part of the mathematical model is shifted to be predicted by the ANN, and the ANN is taught using the mathematical model's output errors. For instance, one can imagine that one of the equations in the model is replaced by the ANN's prediction. In this approach, the model becomes more flexible in predicting the shape of a function, and the ANN becomes more aware of the physics. If the equation predicted by the ANN fails to fulfill the other equations in the system, this is reflected in the cost function, and thereby the ANN is forced to improve its weights and biases.
\subsection{IMANN implementation}
System boundaries are supplied to the ANN, which predicts the assigned part of the mathematical model. The system boundaries, along with the ANN's output, are provided to the mathematical model, which computes the predicted system output. In the learning phase, the predicted output error is used to calculate the proper weights and biases of the ANN. The proposed approach to a prediction problem is graphically presented in \figurename\ \ref{fig:ideaDetail}. The selection of weights and biases is performed by an evolutionary algorithm: every individual is a representation of one network, i.e., its weights and biases arranged in a vector. The fitness function is an error measure of the mathematical model's output.
\subsection{Model architecture}
The implementation of the IMANN has a feed-forward architecture consisting of fully connected layers, with the last two layers called the part-of-the-model layer (PM layer) and the model layer. In the input layer, the number of neurons corresponds to the conditions in which the system is located. The following layers are standard hidden layers; their number and the number of neurons they contain are selected according to the complexity of the problem modeled by the network. The next layer is the mentioned PM layer, where the number of neurons corresponds to the number of replaced parts of the model. The output of that layer, multiplied by the weights, is entered directly into the model layer. The mathematical model, along with the system parameters, also receives all arguments that were supplied to the network's input. The calculated mathematical model result is the final output of the IMANN. Schematically, the IMANN's architecture is presented in \figurename\ \ref{fig:imannImplementation}.
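For illustration, the forward pass described above can be sketched in a few lines of numpy. This is a minimal sketch, not the implementation used in the experiments: the `tanh` activation, the linear PM layer, and the example model function (here $f_4$) are our assumptions.

```python
import numpy as np

def pm_layer_output(x, layers):
    """Feed-forward pass up to the PM layer; hidden layers use tanh, PM layer is linear."""
    h = np.atleast_1d(np.asarray(x, dtype=float))
    for i, (W, b) in enumerate(layers):
        h = W @ h + b
        if i < len(layers) - 1:  # nonlinearity on hidden layers only
            h = np.tanh(h)
    return h  # predicted subfunction values

def imann_output(x, layers, model):
    """Model layer: the mathematical model receives both the input x and the PM-layer output."""
    return model(x, pm_layer_output(x, layers))

# Example model layer: f_4, with the subfunction value supplied by the network.
model_f4 = lambda x, sub: (sub[0] - 16 * x**3 + 5 * x**2) / 2

# A 1-5-5-1 network with all parameters zero predicts sub = 0.
shapes = [(5, 1), (5, 5), (1, 5)]
layers = [(np.zeros(s), np.zeros(s[0])) for s in shapes]
print(imann_output(1.0, layers, model_f4))  # (0 - 16 + 5)/2 = -5.5
```

In a real run, `layers` would be decoded from the flat weight vector optimised by the evolutionary algorithm.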
\subsection{Model learning process}
\label{sec:modelLearn}
To train the IMANN, the Covariance Matrix Adaptation - Evolution Strategy (CMA-ES) algorithm is chosen \cite{Hansen1996}. It is an effective and flexible algorithm, well suited to IMANN learning, where the problem under consideration can have different degrees of dimensionality. Here, the dimensionality of the problem is the number of weights and biases in the IMANN. To train the IMANN, the CMA-ES uses a vector made from all of the network's weights and biases, and optimizes the vector values based on the error obtained on the training data. Details of the objective function are discussed in the following section. An open-source implementation of CMA-ES in Python was adopted for the evolutionary weight adjustments \cite{hansen2019pycma}.
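As a rough illustration of this optimisation loop, the following is a deliberately simplified evolution strategy: isotropic mutation with a fixed step-size decay instead of full covariance adaptation. It is a toy stand-in for CMA-ES; the equivalent pycma ask/tell loop is noted in the docstring.

```python
import numpy as np

def simple_es(f, x0, sigma=0.5, popsize=20, parents=5, iters=200, seed=0):
    """Minimal (mu, lambda) evolution strategy, a toy stand-in for CMA-ES.

    With the pycma library the same loop reads roughly:
        es = cma.CMAEvolutionStrategy(x0, sigma)
        while not es.stop():
            X = es.ask()
            es.tell(X, [f(x) for x in X])
    """
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    for _ in range(iters):
        pop = mean + sigma * rng.standard_normal((popsize, mean.size))
        fitness = np.array([f(x) for x in pop])
        elite = pop[np.argsort(fitness)[:parents]]
        mean = elite.mean(axis=0)  # recombine the best individuals
        sigma *= 0.97              # crude step-size decay, no covariance update
    return mean, f(mean)

# Each individual is a flat weight vector; here the "training error" is a sphere function.
w, err = simple_es(lambda v: float(np.sum(v**2)), x0=[1.0, 1.0, 1.0])
```

In the IMANN the objective `f` would decode the vector into network weights, run the model, and return the training error.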
\section{Benchmarking functions}
\label{sec:bench}
\begin{figure*}
\centering
\includegraphics{shift.pdf}
\caption{Mathematical model and ANN prediction parts in the case of the polynomial function. The IMANN's extreme variations are the pure mathematical model and pure ANN predictions \label{fig::shift}}
\end{figure*}
Every system's output, physical or theoretical, depends on its boundaries (inputs) and characteristics (adaptable parameters); mathematically speaking, every system is a function. Mathematical models are functions that try to reflect the real system's behavior. They are often based on assumptions and empirical parameters due to gaps in the existing knowledge of the phenomena. The simplifications in the problem formulation result in a discrepancy between the model and the real system outputs. This difference can vanish only in hypothetical cases where the system output is well defined and fully described by mathematical equations.
Every function can be treated as a system, and any arbitrary function can be treated as its mathematical model.
The concept of the IMANN strives to be reliable and applicable to any system (physical, economic, biological, or social), provided that a part of this system can be represented in mathematical form. To represent a wide range of possible applications, benchmark functions were employed as a representation of a system.
The system's inputs ($\boldsymbol{x}$) and measurable outputs ($\Xi(\boldsymbol{x})$) correspond to the inputs and outputs of the benchmarking functions. The ANN predicts a part of the system, and the mathematical model calculates the rest of it, treating the ANN's prediction as a parameter or one of the model's equations. The values calculated by the benchmarking functions then represent the measurable system outputs. If the subfunction values predicted by the ANN had the same values as those calculated from the extracted part of the benchmark function, this would represent a perfect match between the IMANN and the system output. If the real system output differs from the calculated one, the IMANN is forced to improve its weights and biases and to predict again the part of the system it is responsible for.
\subsection{Functions}
To test the IMANN, two benchmarking functions are used: an arbitrary polynomial function for a one-dimensional input and a modified Rosenbrock function for two dimensions. The chosen polynomial function is given by the formula:
\begin{equation}
f_P(x) = \frac{x^5 - 16x^3 + 5x^2}{2},
\end{equation}%
the $N$-dimensional modified Rosenbrock function is given by:
\begin{equation}\label{eq:modrosen}
f_R(\mathbf{\boldsymbol{x}}) = \sum_{i=1}^{N-1}\left[\left(x_{i+1}-x_i^2\right)^4 + \left(1-x_i\right)^4 \right].
\end{equation}
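Both benchmarks are straightforward to code in numpy; the following sketch is given purely as a reference implementation (the function names are ours):

```python
import numpy as np

def f_P(x):
    """One-dimensional polynomial benchmark."""
    return (x**5 - 16 * x**3 + 5 * x**2) / 2

def f_R(x):
    """N-dimensional modified Rosenbrock benchmark."""
    x = np.asarray(x, dtype=float)
    return float(np.sum((x[1:] - x[:-1]**2)**4 + (1 - x[:-1])**4))

print(f_P(2.0))         # (32 - 128 + 20)/2 = -38.0
print(f_R([1.0, 1.0]))  # global minimum: 0.0
```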
In the case of the polynomial function, eight formulations of mathematical models are used: four with one subfunction and four with two subfunctions. The formulations differ only in the nonlinearity and the number of the subfunctions. Model formulations with one subfunction are given by the following equations:
\begin{IEEEeqnarray}{rCl}\label{eq:polynomial}
\IEEEyesnumber
\IEEEyessubnumber*
f_1(x) &=& \frac{a(x)x^5 - 16x^3 + 5 x^2}{2},\label{eq:polya}\\
f_2(x) &=& \frac{\hat{a}(x)x^4 - 16x^3 + 5 x^2}{2}\label{eq:polyb},\\
f_3(x) &=& \frac{\tilde{a}(x)x^3 - 16x^3 + 5 x^2}{2}\label{eq:polyc},\\
f_4(x) &=& \frac{\bar{a}(x) - 16x^3 + 5 x^2}{2}\label{eq:polyd},
\end{IEEEeqnarray}%
where $a,\ \hat{a},\ \tilde{a}$ and $\bar{a}$ are subfunctions and $f_i$ is the $i$-th model function. For a perfect match with the modeled benchmarking function, the subfunctions should be functions of $x$ of the form:
\begin{IEEEeqnarray}{rCl}\label{eq:polyparams}
\IEEEyesnumber\IEEEyessubnumber*
a(x) &=& 1, \label{eq:polyparama} \\
\hat{a}(x) &=& x,\\
\tilde{a}(x) &=& x^2,\\
\bar{a}(x) &=& x^5.\label{eq:polyparamabar}
\end{IEEEeqnarray}
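As a sanity check, substituting the ideal subfunctions above into the one-subfunction models recovers the benchmark polynomial exactly. A minimal sketch (the generic `model_one_sub` parametrisation over the power of $x$ is ours):

```python
def f_P(x):
    """One-dimensional polynomial benchmark."""
    return (x**5 - 16 * x**3 + 5 * x**2) / 2

def model_one_sub(x, sub, power):
    """Models f_1 to f_4: the ANN-predicted subfunction sub(x) multiplies x**power."""
    return (sub(x) * x**power - 16 * x**3 + 5 * x**2) / 2

# Each power paired with its ideal subfunction x**(5 - power) recovers f_P.
for p in (5, 4, 3, 0):
    ideal = lambda x, p=p: x**(5 - p)
    assert abs(model_one_sub(1.7, ideal, p) - f_P(1.7)) < 1e-9
```

The IMANN's task is precisely to discover such subfunctions from the model's output error alone.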
The polynomial function's model formulations with two subfunctions are defined as:
\begin{IEEEeqnarray}{rCl}\label{eq:polynomial2}
\IEEEyesnumber
\IEEEyessubnumber*
f_5(x) &=& \frac{a(x)x^5 + b(x)x^3 + 5 x^2}{2}\label{eq:polye},\\
f_6(x) &=& \frac{\hat{a}(x)x^4 + \hat{b}(x)x^2 + 5 x^2}{2}\label{eq:polyf},\\
f_7(x) &=& \frac{\tilde{a}(x)x^3 + \tilde{b}(x)x + 5 x^2}{2}\label{eq:polyg},\\
f_8(x) &=& \frac{\bar{a}(x) + \bar{b}(x) + 5 x^2}{2}\label{eq:polyh},
\end{IEEEeqnarray}%
where $a,\ \hat{a},\ \tilde{a}$,\ $\bar{a},\ b,\ \hat{b},\ \tilde{b}$ and $\bar{b}$ are subfunctions. All model functions are defined on the domain $\Omega = \left\{x:x\in[-4,4]\right\}$. Ideally, the subfunctions should be functions of $x$ of the form:
\begin{IEEEeqnarray}{rCl+rCl}\label{eq:polyparams2}
\IEEEyesnumber\IEEEyessubnumber*
a(x) &=& 1,\ & b(x) &=& -16, \label{eq:polyparama2} \\
\hat{a}(x) &=& x,\ & \hat{b}(x) &=& -16 x,\\
\tilde{a}(x) &=& x^2,\ & \tilde{b}(x) &=& -16 x^2,\\
\bar{a}(x) &=& x^5,\ & \bar{b}(x) &=& -16 x^3.\label{eq:polyparamabar2}
\end{IEEEeqnarray}
The idea behind the problem formulation is presented in \figurename\ \ref{fig::shift}.
In the case of the two-dimensional modified Rosenbrock function, the model formulation is defined in $\Omega = \left\{(x,y): x \in[-1.4,1.6],\ y \in [-0.25, 3.75] \right\}$ and is given by:
\begin{equation}\label{eq:modrosenmodel}
f_{9}(x, y) = c_1^4(x,y) + c_2^4(x,y),
\end{equation}%
where $c_1$ and $c_2$ ideally are subfunctions of $x$ and $y$ in the form of:
\begin{IEEEeqnarray}{rCl+rCl}\label{eq:rosenparams}
c_1(x, y) &=& y-x^2,\ & c_2(x, y) &=&1-x.
\end{IEEEeqnarray}
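The two-dimensional model layer can be checked in the same way: with the ideal subfunctions $c_1$ and $c_2$, the model reproduces the benchmark exactly. A small sketch (function names are ours):

```python
def f_R2(x, y):
    """Two-dimensional modified Rosenbrock benchmark."""
    return (y - x**2)**4 + (1 - x)**4

def f_9(x, y, c1, c2):
    """Model layer: c1 and c2 are supplied by the ANN's PM layer."""
    return c1(x, y)**4 + c2(x, y)**4

# Ideal subfunctions give a perfect match with the benchmark.
c1 = lambda x, y: y - x**2
c2 = lambda x, y: 1 - x
assert f_9(0.5, 1.2, c1, c2) == f_R2(0.5, 1.2)
```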
The IMANN's ANN is responsible for predicting the values of all the subfunctions mentioned above and provides them to the model. The ANN is trained with the difference between the model output $\Xi(\boldsymbol{x}, \boldsymbol{w})$ and the data generated from a benchmarking function $\hat{\Xi}(\boldsymbol{x})$ in $n$ sample points, where $\boldsymbol{w}$ is the vector of weights and biases of the neural network. The learning process is performed by an evolutionary algorithm, here with the use of the CMA-ES library for Python, as explained in section \ref{sec:modelLearn}. The vector $\boldsymbol{w}$, which fully describes one network, is optimized based on the fitness value:
\begin{equation}
F(\boldsymbol{w}) = \sum_{i=1}^{n}\left(\Xi\left(\boldsymbol{x}_i, \boldsymbol{w}\right)-\hat{\Xi}\left(\boldsymbol{x}_i\right) \right)^2.
\label{eq:squredErrorSum}
\end{equation}
\section{Results}
\label{sec:results}
\begin{figure}[t!]
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{1_param_const_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:const}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{1_param_linear_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:linear}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{1_param_nonlinear_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:nonlinear}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{1_param_full_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:full}}
\end{subfigure}
\caption{Squared error integral reduction with increase of dataset size for a problem of predicting a function (a) $f_1$ (b) $f_2$ (c) $f_3$ (d) $f_4$\label{fig:errorIntegral}}
\end{figure}
\begin{figure}[t!]
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{2_param_const_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:const2}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{2_param_linear_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:linear2}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{2_param_nonlinear_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:nonlinear2}}
\end{subfigure}
\begin{subfigure}{\columnwidth}
\centering
\includegraphics{2_param_full_best.pdf}
\vspace{-1.2\baselineskip}
\caption{\label{fig:sub:full2}}
\end{subfigure}
\caption{Squared error integral reduction with increase of dataset size for a problem of predicting a function (a) $f_5$ (b) $f_6$ (c) $f_7$ (d) $f_8$\label{fig:errorIntegral2}}
\end{figure}
\begin{figure}[t!]
\includegraphics{rosenbrockH_subplot4.pdf}
\caption{Modified Rosenbrock function prediction with DNN and IMANN. Red surface corresponds to predicted value, wireframe to original function, blue points indicate learning dataset and colormap on the bottom indicates absolute error \label{fig::rosenerr}}
\end{figure}
\begin{figure}[t!]
\includegraphics{rosenH_best.pdf}
\caption{Squared error integral reduction with increase of dataset size for $f_{9}$ \label{fig::roseninterr}}
\end{figure}
\begin{figure}[t!]
\includegraphics{rosenbrockH_subplot16_dnn_5_5.pdf}
\caption{Modified Rosenbrock function prediction with DNN and IMANN. Red surface corresponds to predicted value, wireframe to original function, blue points indicate learning dataset and colormap on the bottom indicates absolute error. DNN network architecture was similar to the IMANN's, i.e. 1-5-5-1 \label{fig::rosenerrdnn55}}
\end{figure}
\begin{figure}
\includegraphics{rosenH_best_dnn_5_5.pdf}
\caption{Squared error integral reduction with increase of dataset size for $f_{9}$. DNN network architecture was similar to the IMANN's, i.e. 1-5-5-1 \label{fig::roseninterrdnn55}}
\end{figure}
To quantify the difference between the system's output and the predicted value, the integral of the squared error is used as an accuracy indicator:
\begin{equation}\label{eq:errorintegral}
R(\boldsymbol{x}, \boldsymbol{w}) = \int_\Omega{\sqrt{\left(\Xi\left(\boldsymbol{x}, \boldsymbol{w}\right)-\hat{\Xi}\left(\boldsymbol{x}\right)\right)^2}\mathrm{d}\boldsymbol{x}}.
\end{equation}
The integral in Eq.~\eqref{eq:errorintegral} is computed with an eighty-point-per-dimension Gauss--Legendre quadrature.
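The quadrature step can be sketched with numpy's Gauss-Legendre nodes; the affine map to the domain and the 80-point rule follow the text, while the function name is ours:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def error_integral(err, a=-4.0, b=4.0, deg=80):
    """Approximate the integral of |err(x)| over [a, b] with Gauss-Legendre quadrature."""
    t, w = leggauss(deg)                   # nodes and weights on [-1, 1]
    x = 0.5 * (b - a) * t + 0.5 * (b + a)  # affine map to [a, b]
    return 0.5 * (b - a) * float(np.sum(w * np.abs(err(x))))

# Smooth check: the integral of x**2 over [-4, 4] equals 128/3.
print(error_integral(lambda x: x**2))  # ~42.6667
```

For multi-dimensional domains the same rule is applied per dimension on a tensor-product grid.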
The IMANN is compared to a DNN implemented with the use of the TensorFlow library \cite{tensorflow2015}. The DNN is trained in the standard way, treating the system as a black box. To neglect the stochastic error in the computations, 20 attempts were performed for both the IMANN and the DNN, and the best result, based on the $R$ value, was taken. The term ANN will be used when referring to the ANN part of the IMANN, and DNN for a typical ANN implementation.
The squared error integral values for the polynomial functions versus the number of learning data points are presented in \figurename\ \ref{fig:errorIntegral}. The IMANN's architecture was 1-5-5-1 and the DNN's 1-32-16-16-1, respectively. Subfigures \ref{fig:sub:const} to \ref{fig:sub:full} correspond to the task of predicting the functions from Eq. (\ref{eq:polya}) to Eq. (\ref{eq:polyd}) for both the IMANN and the DNN. The IMANN divides the task into two stages: firstly, the artificial neural network estimates the subfunctions from Eq. (\ref{eq:polyparama}) to Eq. (\ref{eq:polyparamabar}), and secondly, the estimates are inserted into the mathematical models represented by the functions from Eq. (\ref{eq:polya}) to Eq. (\ref{eq:polyd}). The performance comparisons between the IMANN and the DNN are presented in \figurename\ \ref{fig:sub:const}-\ref{fig:sub:full}. As can be seen in \figurename\ \ref{fig:sub:const}, the IMANN performs exceptionally well when only a constant is estimated by the ANN, and the performance decays with the increase of the non-linearity of the problem (Figs. \ref{fig:sub:const}-\ref{fig:sub:nonlinear}), while the IMANN advantage is upheld. As increasing nonlinearity is shifted from the model to the ANN in the IMANN, the accuracy gets closer to the DNN's. When the nonlinearity in the ANN is comparable to the overall model itself, the IMANN's prediction is worse than the DNN's (see \figurename\ \ref{fig:sub:full}). It is important to notice that the DNN has a much more complex architecture in comparison to the IMANN. Increasing the IMANN's architecture complexity is infeasible due to the utilization of the evolutionary algorithm for the adjustment of the weights. The dimensionality of the optimization of the fully connected network for the considered problem can be expressed with the following formula:
\begin{equation}
D = n_{\mathrm{in}}n_1+\sum_{i=2}^{m}n_{i-1}n_i + n_m n_{\mathrm{out}} + \sum_{i=1}^mn_i + 2n_{\mathrm{out}},
\end{equation}
where $D$ is the dimensionality, $n_{\mathrm{in}}$ and $n_{\mathrm{out}}$ are the input and output dimensionalities, and $n_i$ is the number of neurons in the $i$-th hidden layer. For instance, for the IMANN architecture used for the polynomial prediction with one subfunction, the dimensionality of the optimization problem is equal to 47.
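The formula is easy to evaluate in code; for the 1-5-5-1 architecture it indeed gives 47 optimised parameters (the function name is ours):

```python
def imann_dim(n_in, hidden, n_out):
    """Number of optimised weights and biases in the fully connected part of the IMANN."""
    sizes = [n_in] + list(hidden)
    D = sum(a * b for a, b in zip(sizes, sizes[1:]))  # n_in*n_1 + sum of n_{i-1}*n_i
    D += hidden[-1] * n_out                           # n_m * n_out
    D += sum(hidden) + 2 * n_out                      # bias terms of the formula
    return D

print(imann_dim(1, [5, 5], 1))  # 47
```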
The squared error integral values for the polynomial functions versus the number of learning data points, when two subfunctions are predicted by the ANN, are presented in \figurename\ \ref{fig:errorIntegral2}. As before, the IMANN's architecture was 1-5-5-1 and the DNN's 1-32-16-16-1, respectively. Subfigures \ref{fig:sub:const2} to \ref{fig:sub:full2} correspond to the task of predicting the functions from Eq. (\ref{eq:polye}) to Eq. (\ref{eq:polyh}) for both the IMANN and the DNN. The IMANN divides the task into two stages: firstly, the artificial neural network estimates the subfunctions from Eq. (\ref{eq:polyparama2}) to Eq. (\ref{eq:polyparamabar2}), and secondly, the estimates are inserted into the mathematical models represented by the functions from Eq. (\ref{eq:polye}) to Eq. (\ref{eq:polyh}). The performance comparisons between the IMANN and the DNN are presented in \figurename\ \ref{fig:sub:const2}-\ref{fig:sub:full2}. The increased difficulty and ambiguity of the problem as the number of subfunctions grows causes the IMANN's prediction error to rise. The increase is especially significant, around seven orders of magnitude, when the linear part is being predicted by the ANN. Even with this increase, the IMANN performs five to ten orders of magnitude better than the DNN.
Figure \ref{fig::rosenerr} presents the approximation of the modified Rosenbrock function, Eq. (\ref{eq:modrosen}), based on sixteen training data points. The IMANN's architecture was 2-5-5-2 and the DNN's 2-32-32-16-1, respectively. The IMANN's ANN predicts the subfunctions given in Eq. (\ref{eq:rosenparams}), and the estimates are inserted into the mathematical model represented by Eq. (\ref{eq:modrosenmodel}). The subfigures in \figurename\ \ref{fig::rosenerr} represent the predictions given by the DNN and the IMANN. The contour maps at the bottom of each figure depict the error of the prediction as a function of the system coordinates ($x$,$y$). The grid located above the contours indicates the training data, marked as dots. The prediction is displayed as a red surface, together with the original Rosenbrock function shown as a blue wireframe. As can be seen in \figurename\ \ref{fig::rosenerr}, the IMANN achieved a prediction accuracy two orders of magnitude higher in comparison to the DNN when the networks were trained on 256 data points. The precision of the prediction as a function of the number of training data points for the IMANN (2-5-5-2) and the DNN (2-32-32-16-1) is presented in \figurename\ \ref{fig::roseninterr}: with the increase of the dataset, the precision of both the DNN and the IMANN increases; however, starting from four data points the prediction precision of the IMANN is higher than the DNN's. Figure \ref{fig::rosenerrdnn55} presents the approximation of the modified Rosenbrock function based on 256 training data points ($x$,$y$) for the same network architectures; the prediction of the IMANN is an order of magnitude better than the DNN's. The conclusion holds when the same architectures of the IMANN (1-5-5-1) and the DNN (1-5-5-1) are juxtaposed, as presented in \figurename\ \ref{fig::roseninterrdnn55}.
It can be concluded that decreasing the load on the ANN part of the IMANN makes the IMANN's prediction performance approach that of the model. In our case, the model performance is perfect because we already know the form of the subfunctions; in real applications, finding even such a simple thing as constant fitting parameters might be a problem. These generalized computations prove that the IMANN can achieve higher performance than the DNN. The IMANN can improve mathematical models' performance by modeling their over-simplified or missing parts. The obtained results indicate great potential in the integration of mathematical models and artificial neural networks.
\section{Conclusions}
\label{sec:conclusion}
This paper presented an analysis of the integration of a mathematical model and an artificial neural network to limit the required dataset. The methodology can be applied to any system, provided that a part of it can be expressed in the form of mathematical equations. The combination of an artificial neural network and a mathematical model is interactive, which is expressed in the reinforcement of network weight adjustment based on the mathematical model's misprediction. The Integrated Mathematical Model - Artificial Neural Network was employed to predict the values of several benchmark functions when given different numbers of training data. The prediction of the IMANN was juxtaposed with a standard DNN implemented in TensorFlow. The obtained results indicated that incorporating a mathematical model into an artificial neural network structure can be beneficial in terms of the required datasets, the precision of the prediction, and the computational time. Replacing different parts of the model by artificial neural networks led us to the conclusion that the IMANN performs better when a more linear part of the model is replaced by the ANN prediction. This observation is not a surprise, since it is at the core of the analyzed algorithm that it uses the synergy of the mathematical model and the artificial neural network; when the artificial neural network holds the primary function for the prediction, the synergy in the IMANN can no longer be utilized.
\section*{Acknowledgements}
\label{sec:acknowledgements}
The presented research is a part of the Easy-to-Assemble Stack Type (EAST): Development of solid oxide fuel cell stack for the innovation in Polish energy sector project, carried out within the FIRST TEAM program (project number First TEAM/2016-1/3) of the Foundation for Polish Science, co-financed by the European Union under the European Regional Development Fund. The authors are grateful for the support. \\
This research made use of computational power provided by the PL-Grid Infrastructure.
\section*{Introduction}
In \cite{DQnonneg}, the existence of quantisations for $0$-shifted symplectic structures on derived Artin $N$-stacks $Y$ was established, which in the Deligne--Mumford setting take the form of curved $A_{\infty}$ deformations of the \'etale structure sheaf $\sO_{Y}$. For general derived Artin $N$-stacks, the quantisation is formulated in terms of a site of stacky CDGAs (commutative bidifferential bigraded algebras), and leads to a deformation of the $\infty$-category of perfect complexes on $Y$.
Likewise, in \cite{DQvanish}, quantisations of $(-1)$-shifted symplectic structures on $X$ were established, in the form of twisted $BV$-algebra deformations of square roots $\sL$ of the dualising line bundle $K_{X}$, or equivalently deformations of the right $\sD_{X}$-module $\sL\ten_{\sO_{X}}\sD_{X}$.
The purpose of this paper is to unify and generalise these results by looking at quantisations of derived Lagrangians $(X,\lambda)$ on $0$-shifted symplectic derived stacks $(Y,\omega)$ (i.e. Lagrangians in the sense of \cite{PTVV}). When $X$ is empty, this recovers the scenario of \cite{DQnonneg}, and when $Y$ is a point it recovers the scenario of \cite{DQvanish}.
When $Y$ is a smooth variety, this generalises the description \cite{BaranovskyGinzburgKaledinPecharich} of quantisations of pairs $(Y,X)$ for smooth Lagrangians $X$, but our
derived Lagrangians $X$ can also be derived enhancements of singular schemes or stacks. The quantisations we establish are given by curved $A_{\infty}$ deformations of the structure sheaf $\sO_{Y}$,
equipped with a curved morphism to the ring $\sD_{X}(\sL)$ of differential operators on a line bundle $\sL$ on $X$.
Our perspective for studying these quantisations is that the governing DGLA is given by the Hochschild complex $\CCC^{\bt}(\sO_Y)$ acting on $\sD_{X/Y}(\sL)$ via the quasi-isomorphism $\sD_{X/Y} \to \CCC^{\bt}(\sO_Y, \sD_X)$.
A key notion is that of a self-dual (or involutive) quantisation of the pair $(Y, \sL)$, for $\sL$ a square root of the dualising line bundle $K_{X}$. This condition gives us an involution $\sD_X(\sL) \simeq \sD_X(\sL)^{\op}$, and a quantisation $\tilde{\O}_{Y} \to \sD_{X}(\sL)\llbracket \hbar \rrbracket$ is said to be self-dual if it is equipped with a compatible involution to its opposite $\tilde{\O}_{Y}^{\op} \to \sD_{X}(\sL)\llbracket \hbar \rrbracket$, semilinear with respect to the transformation $\hbar \mapsto -\hbar$. Our main result is Theorem \ref{quantpropsd}, which shows that each suitable formality isomorphism gives a parametrisation of non-degenerate self-dual quantisations by even de Rham power series
\[
\H^1(F^2\cone(\DR(Y) \to \DR(X)))^{\nondeg} \by \hbar^2 \H^1(\cone(\DR(Y) \to \DR(X)))\llbracket \hbar^2 \rrbracket,
\]
in particular guaranteeing that such quantisations always exist for derived Lagrangians on $0$-shifted symplectic structures.
Since each quantisation of $(Y, \sL)$ leads to a quantisation $\tilde{\O}_Y$ of $\sO_Y$ and an $\tilde{\O}_Y$-module in right $\sD_X$-modules (deforming $\sL\ten_{\sO_X}\sD_X$), it makes sense to push the module forward to give an $\tilde{\O}_Y-\sD_Y$-bimodule. We can then look at the dg category given by such bimodules coming from self-dual quantisations of proper Lagrangians $X \to Y$. This is an algebraic analogue of the derived category of simple holonomic DQ modules considered by Kashiwara and Schapira in \cite{kashiwaraschapira}, and enjoys many properties expected for an algebraic analogue of the Fukaya category envisaged in \cite{BehrendFantechiIntersections}.
Our approach to proving Theorem \ref{quantpropsd} will be familiar from \cite{poisson,DQvanish, DQnonneg}. For each quantisation $\Delta$, we define a map $\mu$ from de Rham power series to a quantised form of Poisson cohomology, giving a filtered quasi-isomorphism when $\Delta$ is non-degenerate. To each non-degenerate quantisation, we may then associate a de Rham power series $\mu^{-1}(\hbar^2 \frac{\pd \Delta}{\pd \hbar})$ whose constant term is a Lagrangian structure. Obstruction calculus shows that this induces an equivalence between self-dual quantisations and even power series.
Our main new technical ingredient is the definition of the map $\mu$, for which we consider the morphism
\[
\CCC^{\bt}(\sO_Y) \to \CCC^{\bt}(\sD_{X/Y}(\sL))
\]
of $E_2$-algebras induced by the action of $\CCC^{\bt}(\sO_Y)$ on $\sD_{X/Y}(\sL)$. Via formality, we may regard these $E_2$-algebras as $P_2$-algebras, and then each quantisation defines a commutative diagram from the diagram $\DR(Y) \to \DR(X)$ to a deformation of the diagram above. The morphism $\mu$ is then given by composing with the map $\CCC^{\bt}(\sD_{X/Y}(\sL)) \to \sD_{X/Y}(\sL)$ and taking cones.
The structure of the paper is as follows.
In Section \ref{centresn}, we establish some technical background results on Hochschild complexes of almost commutative algebras. When equipped with a PBW filtration degenerating to Poisson cohomology, these become almost commutative brace algebras in a suitable sense (\S \ref{bracesn}). This allows us to construct suitable semidirect products of Hochschild complexes from morphisms of almost commutative algebras in \S \ref{semidirectsn}. Section \ref{affinesn} then uses these constructions to define the space $Q\cP(A,B;0)$ of quantisations associated to a morphism $A \to B$ of commutative bidifferential bigraded algebras (i.e. a map $\Spec B \to \Spec A$ of stacky derived affines in the sense of \cite{poisson}), and more generally the space $Q\cP(A,M;0)$ for a line bundle $M$ over $B$.
Section \ref{compatsn} contains the key technical construction of the compatibility map $\mu$ in Definition \ref{mudef}, with Definition \ref{Qcompatdef} then giving the notion of compatibility between a quantisation and a generalised Lagrangian. The main results of this section are Proposition \ref{QcompatP1}, giving a map from non-degenerate quantisations to generalised Lagrangians, and Proposition \ref{compatcor2}, which gives an equivalence between Lagrangians and non-degenerate co-isotropic structures. Proposition \ref{quantprop} then shows that the obstruction to quantising a co-isotropic structure is first order.
In Section \ref{stacksn}, these constructions are globalised via the method introduced in \cite{poisson}. \S \ref{sdsn} then introduces the notion of self-duality, enabling us to eliminate the first order obstruction and thus lead to Theorem \ref{quantpropsd}, the main comparison result. In \S \ref{higherrmk}, we then explain how the methods and results of the paper should adapt to Lagrangians on positively shifted symplectic stacks. Section \ref{fukayasn} outlines an algebraic analogue of the Fukaya category based on self-dual quantisations of line bundles on derived Lagrangians, and sketches a few key properties.
\tableofcontents
\subsubsection*{Notation}
Throughout the paper, we will usually denote chain differentials by $\delta$. The graded vector space underlying a chain (resp. cochain) complex $V$ is denoted by $V_{\#}$ (resp. $V^{\#}$).
Given an associative algebra $A$ in chain complexes, and $A$-modules $M,N$ in chain complexes, we write $\HHom_A(M,N)$ for the cochain complex given by
\[
\HHom_A(M,N)^i= \Hom_{A_{\#}}(M_{\#[i]},N_{\#}),
\]
with differential $ f\mapsto \delta_N \circ f \pm f \circ \delta_M$.
\section{The centre of an almost commutative algebra}\label{centresn}
The purpose of this section is to establish a canonical filtration on the Hochschild complex of an almost commutative algebra, and to study the resulting almost commutative brace algebra constructions. The primary motivation is to ensure that these correspond via formality of the $E_2$ operad to filtered $P_2$-algebras for which the Lie bracket has weight $-1$.
\subsection{Almost commutative algebras}
\subsubsection{Homological algebra of complete filtrations}\label{filtrnsn}
We now introduce a formalism for working with complete filtered complexes. Although we make little explicit use of these characterisations in the rest of the paper, they feature implicitly whenever we argue that complete filtered functors have given properties.
\begin{definition}
Given a vector space $V$ with a decreasing filtration $F$, the Rees module $\xi(V,F)$ is given by
$\xi(V,F):= \bigoplus_p F^pV \hbar^{-p} \subset V[\hbar, \hbar^{-1}]$. This has the structure of a $\bG_m$-equivariant (i.e. graded) $\Z[\hbar]$-module, setting $\hbar$ to be of weight $1$ for the $\bG_m$-action.
\end{definition}
The functor $\xi$ gives an equivalence between exhaustively filtered vector spaces and flat $\bG_m$-equivariant $\Z[\hbar]$-modules --- see \cite[Lemma \ref{mhs2-flatfiltrn}]{mhs2} for instance.
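For orientation, note that the Rees module interpolates between the filtered object and its associated graded: for an exhaustive filtration there are natural isomorphisms
\[
\xi(V,F)\ten_{\Z[\hbar]}\Z[\hbar]/\hbar \cong \gr_FV, \qquad \xi(V,F)\ten_{\Z[\hbar]}\Z[\hbar]/(\hbar-1)\cong V,
\]
since multiplication by $\hbar$ on $\xi(V,F)$ acts as the inclusions $F^pV\hbar^{-p} \subset F^{p-1}V\hbar^{-(p-1)}$.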
We will be interested in filtrations which are complete, in the sense that $V = \Lim_i V/F^i$. Via the Rees constructions, this amounts to looking at the inverse limit over $k$ of the categories of $\bG_m$-equivariant $\Z[\hbar]/\hbar^k$-modules. However, Koszul duality provides a much more efficient characterisation. The Koszul dual of $\Z[ \hbar]$ is the dg algebra $\Z[{\,\mathchar'26\mkern-12mu d}]\simeq \oR\HHom_{\Z[\hbar]}(\Z,\Z)$ for ${\,\mathchar'26\mkern-12mu d}$ of chain degree $-1$ and weight $1$ with ${\,\mathchar'26\mkern-12mu d}^2=0$. Weak equivalences of $\Z[{\,\mathchar'26\mkern-12mu d}]$-modules in graded chain complexes are quasi-isomorphisms of the underlying chain complexes, forgetting ${\,\mathchar'26\mkern-12mu d}$, and these correspond to filtered quasi-isomorphisms of the associated complete filtered complexes.
\begin{definition}
For a filtered chain complex $(V,F)$, the corresponding $\bG_m$-equivariant $\Z[{\,\mathchar'26\mkern-12mu d}]$-module $\g\fr_FV$ is given in weight $i$ by
\[
\g\fr^i_FV:= \cone(F^{i+1}V \to F^iV),
\]
with ${\,\mathchar'26\mkern-12mu d} \co \g\fr_F^iV \to \g\fr^{i+1}_FV_{[-1]}$ given by the identity on $F^{i+1}V$ and $0$ elsewhere.
\end{definition}
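For instance, if $F^1V \subseteq F^0V=V$ with $F^2V=0$, then in weights $0$ and $1$ this gives
\[
\g\fr^0_FV = \cone(F^1V \to V), \qquad \g\fr^1_FV = F^1V,
\]
with ${\,\mathchar'26\mkern-12mu d} \co \g\fr^0_FV \to (\g\fr^1_FV)_{[-1]}$ the identity on the summand $F^1V$ of the cone, and with the projection $\cone(F^1V \to V) \to V/F^1V$ realising the quasi-isomorphism to the associated graded.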
There is an obvious quasi-isomorphism from $\g\fr_FV$ to the associated graded $\gr_FV$, but the latter does not have a natural ${\,\mathchar'26\mkern-12mu d}$-action.
The homotopy inverse functor to $\g\fr$ can be realised explicitly as follows:
\begin{definition}
Given a $\Z[{\,\mathchar'26\mkern-12mu d}]$-module $E$ in $\bG_m$-equivariant chain complexes, define the chain complex $\ff(E)$ to be the semi-infinite total complex
\[
\ff(E):= (\bigoplus_{i>0} E(i) \oplus \prod_{i \le 0} E(i), \delta \pm {\,\mathchar'26\mkern-12mu d}),
\]
equipped with the complete exhaustive filtration
\[
F^p \ff(E):= (\prod_{i \le p} E(i), \delta \pm {\,\mathchar'26\mkern-12mu d}).
\]
\end{definition}
This clearly maps weak equivalences to filtered quasi-isomorphisms.
One way of thinking of the category of $\Z[{\,\mathchar'26\mkern-12mu d}]$-modules is that we are allowed to split the filtration on a filtered complex, but only at the expense of having a component ${\,\mathchar'26\mkern-12mu d}$ of the differential which does not respect the grading. The associated graded complex is then simply given by forgetting the action of ${\,\mathchar'26\mkern-12mu d}$.
Another way of understanding this equivalence is to observe that a cofibrant resolution of $\Z[{\,\mathchar'26\mkern-12mu d}]$ as a DGAA is given by the free algebra $\Z\<{\,\mathchar'26\mkern-12mu d}_1, {\,\mathchar'26\mkern-12mu d}_2, \ldots\>$ with ${\,\mathchar'26\mkern-12mu d}_m$ of chain degree $-1$ and weight $-m$, satisfying $\delta{\,\mathchar'26\mkern-12mu d}_m + \sum_{i+j=m} {\,\mathchar'26\mkern-12mu d}_i{\,\mathchar'26\mkern-12mu d}_j=0$. Thus the structure of a $\Z\<{\,\mathchar'26\mkern-12mu d}_1, {\,\mathchar'26\mkern-12mu d}_2, \ldots\>$-module on a chain complex $E$ is the same as a differential $\delta + \sum {\,\mathchar'26\mkern-12mu d}_i$ on $\bigoplus_{i>0} E(i) \oplus \prod_{i \le 0} E(i)$ respecting the filtration and agreeing with $\delta$ on the associated graded.
\begin{definition}
Given a ring $k$, a linear algebraic group $G$ over $k$, and a $G$-equivariant CDGA $R$ in chain complexes over $k$, define the category $dg\Mod_G(R)$ to consist of $G$-equivariant $R$-modules in chain complexes.
\end{definition}
Thus the Rees construction $\xi(M,F)$ of a filtered $R$-module $(M,F)$ lies in $dg\Mod_{\bG_m}(R[\hbar])$, while $ \g\fr_FM \in dg\Mod_{\bG_m}(R[{\,\mathchar'26\mkern-12mu d}])$. When $G$ is linearly reductive, there is a cofibrantly generated model structure on $dg\Mod_G(R)$ in which fibrations are surjections and weak equivalences are quasi-isomorphisms of the underlying chain complexes.
The dg algebra $R[{\,\mathchar'26\mkern-12mu d}]$ has the natural structure of a dg Hopf $R$-algebra, by setting ${\,\mathchar'26\mkern-12mu d}$ to be primitive.
\begin{definition}
We define a closed symmetric monoidal structure $\ten_R$ on the category $dg\Mod_{\bG_m}(R[{\,\mathchar'26\mkern-12mu d}])$ by giving the chain complex $M\ten_RN$ an $R[{\,\mathchar'26\mkern-12mu d}]$-module structure via the comultiplication on the Hopf algebra $R[{\,\mathchar'26\mkern-12mu d}]$.
\end{definition}
With respect to this structure, the functors $\g\fr$ and $\ff$ are both lax monoidal. By way of comparison,
note that for the usual tensor product of filtered complexes over $k$, we have $\gr_F(U\ten_k V) = \gr_F(U)\ten_{k}\gr_F(V)$.
\subsubsection{Koszul duality for almost commutative rings}
From now on, we fix a chain CDGA $R$ over $\Q$. We refer to associative algebras in chain complexes as DGAAs, and commutative algebras in chain complexes as CDGAs.
We will also refer to coassociative coalgebras in chain complexes over $R$ as DGACs over $R$.
\begin{definition}
We say that a complete filtered DGAA $(A,F)$ is almost commutative if $\gr_FA$ is a CDGA. Similarly, a filtered DGAC $(C,F)$ is said to be almost cocommutative if the comultiplication on $\gr_FC$ is cocommutative.
\end{definition}
Thus for any almost commutative DGAA $(A,F)$, the Rees construction $\xi(A,F)$ is an algebra over the $\bB\bD_1$-operad over $[\bA^1/\bG_m]$ as described in \cite[\S 3.5.1]{CPTVV} (or \cite[\S 2.4.2]{CostelloGwilliamVol2} for its completion), corresponding to the filtration on the associative operad $\Ass$ given by powers of the augmentation ideal of $T(V) \to \Symm(V)$. Since we only wish to consider complete filtrations, we are effectively studying algebras $\g\fr(A,F)$ over the operad $ \g\fr(BD_1)$ in $dg\Mod_{\bG_m}(\Q[{\,\mathchar'26\mkern-12mu d}])$, where we write $BD_1$ for the complete filtered operad associated to $\bB\bD_1$.
\begin{definition}\label{bardef}
We write $\b$ for the bar construction from possibly non-unital DGAAs over $R$ to ind-conilpotent DGACs over $R$. Explicitly, this is given by taking the tensor coalgebra
\[
\b A:= T(A_{[-1]})= \bigoplus_{i \ge 0} (A_{[-1]})^{\ten_R i},
\]
with chain differential given on cogenerators $A_{[-1]}$ by combining the chain differential and multiplication on $A$.
Write $\b_+ A$ for the subcomplex $T_+(A_{[-1]})=\bigoplus_{i > 0} (A_{[-1]})^{\ten_R i}$.
Let $\Omega_+$ be the left adjoint to $\b_+$, given by the tensor algebra
\[
\Omega_+ C := \bigoplus_{i > 0} (C_{[1]})^{\ten_R i},
\]
with chain differential given on generators $C_{[1]}$ by combining the chain differential and comultiplication on $C$. We then define $\Omega C:= R \oplus \Omega_+C$ by formally adding a unit.
\end{definition}
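To make these twisted differentials concrete in low degrees (with Koszul signs suppressed), on the component $(A_{[-1]})^{\ten_R 2}$ of $\b A$ and on generators $c \in C_{[1]} \subset \Omega_+C$ respectively, they take the form
\[
a \ten b \mapsto \delta a\ten b \pm a \ten \delta b \pm ab, \qquad c \mapsto \delta c \pm \Delta c,
\]
the final terms landing in the components $A_{[-1]}$ and $(C_{[1]})^{\ten_R 2}$ respectively.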
\begin{definition}\label{betadef}
Given an almost commutative DGAA $(A,F)$, we define the filtration $\beta F$ on $\b A$ by convolution with the Poincar\'e--Birkhoff--Witt (PBW) filtration $\beta$. Explicitly, there is a shuffle multiplication $\nabla$ on $(\b A)_{\#}$ given on cogenerators by the identity maps $(A\ten R) \oplus (R\ten A) \to A $, making $(\b A)_{\#}$ into a Hopf algebra. Writing $F$ as an increasing filtration, we then set
$\beta^j\b A:= \nabla((\b_+ A)^{\ten j})$, and
\[
(\beta F)_i\b A: = \sum_j F_{i+j}\cap \beta^j\b A.
\]
\end{definition}
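For instance, when the filtration on $A$ is trivial in the sense that $A = \gr^F_0A$, a direct check from the definition shows that this convolution simply reindexes the PBW filtration:
\[
(\beta F)_{-i}\b A = \beta^{i}\b A \text{ for } i > 0, \qquad (\beta F)_{i}\b A = \b A \text{ for } i \ge 0.
\]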
\begin{lemma}\label{betanice}
The filtration $\beta F$ makes $\b A$ into an almost cocommutative DGAC.
\end{lemma}
\begin{proof}
The filtration $\beta$ automatically behaves with respect to the comultiplication, making $(\b A)_{\#}$ a filtered coalgebra, and so $(\beta F)$ also gives a filtered coalgebra structure. To see that $\b A$ is a filtered DGAC, it only remains to show that the spaces $(\beta F)_i\b A$ are closed under the chain differential. Since the latter is a coderivation, it suffices to check that it induces a filtered map on cogenerators.
The filtration induced on cogenerators by $\beta$ is just $A_{[-1]}= \gr^{\beta}_1A_{[-1]}$, so $(\beta F)_i A_{[-1]}=F_{i+1}A_{[-1]}$. We also get $\beta^1(A_{[-1]}^{\ten 2})=A_{[-1]}^{\ten 2} $, $\beta^2(A_{[-1]}^{\ten 2})=\L^2A_{[-1]}$, and $\beta^3(A_{[-1]}^{\ten 2})=0$, so
\[
(\beta F)_i(A_{[-1]}^{\ten 2})= F_{i+1}(A_{[-1]}^{\ten 2})+ F_{i+2}(\L^2A)_{[-2]}.
\]
Multiplication and the chain differential on $A$ automatically preserve $F$, so the only remaining condition is that multiplication sends $F_{i+2}(\L^2A)$ to $F_{i+1}A$ --- this is precisely the condition that $\gr_FA$ be commutative.
Finally, observe that on associated gradeds, the multiplication map $\gr^{\beta F}_i(A\ten A)\to \gr^{\beta F}_i(A)$ is the map
\[
\gr^F_{i+1}\Symm^2(A) \oplus \gr^F_{i+2}\L^2A \to \gr^F_{i+1}A
\]
given by multiplication on the first factor and Lie bracket on the second. Thus
\[
\gr^{\beta F}\b A_{\#}
\]
is the Poisson coalgebra $\Co\Symm_R(\gr^F_{*+1}\Co\Lie_R A)_{\#}$, the chain differential involving both product and Lie bracket on $\gr^FA$. In particular, the comultiplication on $\b A$ is cocommutative.
\end{proof}
In fact, observe that we can characterise $\beta F$ as the smallest almost cocommutative filtration on $\b A$ for which the induced filtration on cogenerators is $(\beta F)_i A_{[-1]}=F_{i+1}A_{[-1]}$.
\begin{definition}\label{betastardef}
Given an almost cocommutative DGAC $(C,F)$ over $R$, define the filtration $\beta^*F$ on $\Omega C$ and $\Omega_+C$ by convolution with the PBW filtration. Explicitly, define a comultiplication $\Delta$ on $T(C_{[1]})$ to be the algebra morphism sending $c \in C_{[1]}$ to $c\ten 1 + 1 \ten c$, and let $\beta^*_r:= \ker (\Delta^{(r+1)}\co T(C_{[1]}) \to T_+(C_{[1]})^{\ten r+1})$. We then set
\[
(\beta^* F)_i\Omega C: = \sum_j F_{i-j}\cap \beta^*_j\Omega C,
\]
and similarly for $\Omega_+C$. We then define $\hat{\Omega}_+C$ to be the completion with respect to $\beta^*$, and set $\hat{\Omega}C:= R \oplus \hat{\Omega}_+C$.
\end{definition}
\begin{lemma}\label{betastarnice}
The filtration $\beta^* F$ makes $\hat{\Omega} C$ into an almost commutative DGAA.
\end{lemma}
\begin{proof}
The constructions $(\b, \beta)$ and $(\Omega, \beta^*)$ are dual to each other, so
the proof of Lemma \ref{betanice} adapts after taking shifts and duals.
\end{proof}
\begin{definition}
Define the functors $\b_{BD_1}$ and $\Omega_{BD_1}$ by $\b_{BD_1}(A,F):= (\b A, \beta F)$ and $\Omega_{BD_1}(C,F):= (\hat{\Omega} C, \beta^*F)$; define $\b_{BD_1,+}$ and $\Omega_{BD_1,+}$ similarly.
\end{definition}
\begin{lemma}
The functor $\Omega_{BD_1,+}$ is left adjoint to the functor $\b_{BD_1,+}$ from complete non-unital almost commutative DGAAs $A$ over $R$ to non-counital almost cocommutative DGACs $C$ over $R$.
\end{lemma}
\begin{proof}
Given $A$ and $C$, the sets $\Hom_{DGAA}(\Omega_+C,A)$ and $\Hom(C, \b A)$ can both be identified with the set
\[
\{f \in F_1\HHom_R(C,A)^1 ~:~ [\delta, f] + f\smile f =0\},
\]
where the product $\smile$ combines multiplication on $A$ with comultiplication on $C$.
\end{proof}
Observe that the product $\smile$ makes the complex $\HHom_R(C,A)$ into an almost commutative DGAA, so $F_1\HHom_R(C,A)$ is closed under the commutator, hence a differential graded Lie algebra (DGLA).
\begin{lemma}\label{barcobarprop1}
If $A$ is a complete filtered non-unital almost commutative DGAA with $\gr_FA$ flat over $R$, then the co-unit $\vareps_A\co \Omega_{BD_1,+}\b_{BD_1,+}A \to A$ of the adjunction is a filtered quasi-isomorphism.
\end{lemma}
\begin{proof}
It suffices to show that $\vareps$ gives quasi-isomorphisms on the graded algebras associated to the filtrations. The functors $\gr_{\beta}\b_{BD_1,+}$ and $\gr_{\beta^*}\Omega_{BD_1,+}$ are then just the bar and cobar functors for the Poisson operad, equipped with a $\bG_m$-action setting the commutative multiplication to be of weight $0$ and the Lie bracket of weight $-1$. For $\hbar$ a formal variable of weight $1$, the graded Poisson operad can be written as $\Com \circ \hbar^{-1} \Lie$, where $(\hbar \cP)(i):= \hbar^{i-1}\cP(i)$ for any operad $\cP$. The $\bG_m$-equivariant Koszul dual of the graded Poisson operad is then $(\Com \circ \hbar^{-1} \Lie)^! = (\hbar \Com) \circ \Lie = \hbar (\Com \circ \hbar^{-1} \Lie) $, so it is self-dual after a shift in filtrations. This shift is precisely the difference between PBW and lower central series, so $\gr \vareps$ is a graded quasi-isomorphism by Koszul duality for the Poisson operad.
\end{proof}
\subsection{Hochschild complexes}
Recall that we are fixing a chain CDGA $R$ over $\Q$.
\begin{definition}\label{HHdef0}
For an almost commutative DGAA $(A,F)$ over $R$ and a filtered $(A,F)$-bimodule $(M,F)$ in chain complexes for which the left and right $\gr^FA$-module structures on $\gr^FM$ agree, we define the filtered chain complex
\[
\CCC_{R, BD_1}(A,M)
\]
to be the completion of the cohomological Hochschild complex $\CCC_R(A,M)$ (rewritten as a chain complex) with respect to the filtration $\gamma F$ defined as follows. We may identify $ \CCC_R(A,M)$ with the subcomplex of
\[
\HHom_R(\b A, \b(A \oplus M_{[1]}))
\]
consisting of coderivations extending the zero coderivation on $\b A$. The hypotheses on $M$ ensure that $A \oplus M$ is almost commutative (regarding $M$ as a square-zero ideal), so we have filtrations $\beta F$ on $\b A$ and $\b(A \oplus M_{[1]})$. We then define $(\gamma F)_i$ to consist of coderivations sending $(\beta F)_j \b A$ to $(\beta F)_{i+j-1}\b(A \oplus M_{[1]})$.
Since a coderivation is determined by its value on cogenerators, and the cogenerators of the bar construction have weight $1$ with respect to the PBW filtration $\beta$, we may regard $(\gamma F)_i \CCC^{\#}_R(A,M)$ as the subspace of $ \HHom_R(\b A, M)^{\#}$ consisting of maps sending $(\beta F)_j\b A $ to $F_{i+j} M$.
We also define the subcomplex $\CCC_{R, BD_1,+}(A,M) $ to be the kernel of $\CCC_{R, BD_1}(A,M) \to M$, or equivalently $\HHom_R(\b_+ A, M)^{\#}$.
\end{definition}
\begin{remark}\label{HKRrmk}
When the filtrations $F$ are trivial in the sense that $A= \gr^F_0A$, $M=\gr^F_0M$, we simply write $\gamma := \gamma F$, and observe that $\gamma_0 \CCC_R(A,M)=M$, while $\gamma_1 \CCC_R(A,M) $ is just the Harrison cohomology complex. When $A$ is moreover cofibrant as a CDGA, observe that the HKR isomorphism gives a filtered levelwise quasi-isomorphism $(\CCC_R(A,M), \tau^{\HH}) \to (\CCC_{R, BD_1}(A,M),\gamma)$, where $\tau^{\HH}$ denotes good truncation in the Hochschild direction as featured in \cite[Definition \ref{DQnonneg-HHdef}]{DQnonneg}.
\end{remark}
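For orientation: when the filtrations are trivial and $A$ is a cofibrant CDGA, the HKR isomorphism identifies
\[
\gr^{\gamma}_p \CCC_R(A,M) \simeq \HHom_A(\Omega^p_{A/R},M)
\]
up to a degree shift placing the right-hand side in Hochschild degree $p$; the case $p=1$ is the identification of Harrison cochains with derivations for cofibrant $A$, consistently with the descriptions of $\gamma_0$ and $\gamma_1$ above.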
\begin{lemma}\label{HHaclemma}
If $ \phi \co (A,F)\to (D,F)$ is a morphism of almost commutative DGAAs over $R$, then $\CCC_{R,BD_1}(A,D)$ is an almost commutative DGAA under the cup product, and $\CCC_{R,BD_1}(A,D) \to D$ is a morphism of almost commutative DGAAs.
\end{lemma}
\begin{proof}
This just follows because $\gr^{\gamma F}\CCC_R(A,D)^{\#} = \HHom( \gr^{\beta F}\b A, \gr^FD)^{\#} $, with $\gr^{\beta F}\b A $ cocommutative and $\gr^FD$ commutative.
\end{proof}
\subsubsection{Brace algebra structures}\label{bracesn}
Recall that a brace algebra $B$ over $R$ is an $R$-cochain complex equipped with a cup product in the form of a chain map
\[
B\ten B \xra{\smile} B,
\]
and braces in the form of cochain maps
\[
\{-\}\{-,\ldots,-\}_r \co B \ten B^{\ten r}\to B[-r]
\]
satisfying the conditions of \cite[\S 3.2]{voronovHtpyGerstenhaber} with respect to the differential. There is a brace operad $\Br$ in cochain complexes, whose algebras are brace algebras.
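Unwinding the lowest of these conditions (with signs as in \cite[\S 3.2]{voronovHtpyGerstenhaber}), the differential applied to the $1$-brace satisfies
\[
[\delta, \{-\}\{-\}_1](b,c) = \pm\left( b\smile c - (-1)^{\deg b \deg c}\, c \smile b\right),
\]
so $\{b\}\{c\}$ exhibits the cup product as commutative up to homotopy.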
\begin{definition}\label{acbracedef}
Define a decreasing filtration $\gamma$ on the brace operad $\Br$ by putting the cup product in $\gamma^0$ and the braces $\{-\}\{-,\ldots,-\}_r $
in $\gamma^r$.
Thus a (brace, $\gamma$)-algebra $(A,F)$ in filtered complexes is a brace algebra for which the cup product respects the filtration, and the $r$-braces send $F_i$ to $F_{i-r}$. We refer to (brace, $\gamma$)-algebras as almost commutative brace algebras.
\end{definition}
Beware that the filtration $\gamma$ is not the same as that featuring in \cite[Definition 5.3]{safronovPoissonRednCoisotropic}, since we assign higher weights to higher braces.
In an almost commutative brace algebra $A$, the brace $\{-\}\{-\}_1$ is of weight $-1$; since it gives a homotopy between the cup product and its opposite, it follows that the commutator of the cup product is of weight $-1$, so $A$ is almost commutative as a DGAA. Moreover, a brace algebra structure on $A$ induces a dg bialgebra structure on $\b A$, as in \cite[\S 3.2]{voronovHtpyGerstenhaber}, and because $\beta^r \b A \subset (A_{[-1]})^{\ten \ge r}$, the multiplication on $\b A$ given by braces preserves the filtration $\beta F$ on $\b_{BD_1} A$, so it is a filtered bialgebra (with almost cocommutative comultiplication).
\begin{lemma}\label{HHaclemma2}
For any almost commutative DGAA $A$ over $R$, there is a natural almost commutative brace algebra structure on $\CCC_{R, BD_1}(A)$ over $R$. In particular, $\CCC_{R, BD_1}(A)_{[-1]}$ is a filtered DGLA over $R$, and its associated graded DGLA is abelian.
\end{lemma}
\begin{proof}
The formulae of \cite[\S 3]{voronovHtpyGerstenhaber} define a brace algebra structure on $\CCC_R(A)$. By Lemma \ref{HHaclemma}, we know that $(\CCC_R(A), \gamma F)$ is an almost commutative DGAA, so it suffices to show that the brace operations have the required weights.
Given $f \in (\gamma F)_p\HHom(\b A, A)$ and $g_i \in (\gamma F)_{q_i}\HHom(\b A,A)$, each $g_i$ corresponds to a coalgebra coderivation $\tilde{g}_i$ on $\b A$ sending $(\beta F)_j\b A$ to $(\beta F)_{j+ q_{i}-1}\b A$.
The element $\{f\}\{g_1, \ldots, g_m\}\in \HHom(\b A,A)$ is the composition
\[
\b A \xra{\Delta^{(m)}} (\b A)^{\ten m} \xra{ \tilde{g}_1\ten \ldots\ten \tilde{g}_m}( \b A)^{\ten m} \xra{\nabla} \b A \xra{f} A,
\]
where $\Delta^{(m)}$ is the iterated coproduct, and $\nabla$ the shuffle product. The definition of $\beta$ ensures that $\nabla$ preserves the filtration $\beta F$, so we have
\[
\{f\}\{g_1, \ldots, g_m\}\in (\gamma F)_{(p+q_1+\ldots +q_m-m)}\HHom(\b A,A).
\]
\end{proof}
\begin{definition}\label{braceopdef}
Given a brace algebra $B$, define the opposite brace algebra $B^{\op}$ to have the same elements as $B$, but multiplication $b^{\op}\smile c^{\op} := (-1)^{\deg b\deg c} (c\smile b)^{\op}$ and brace operations
given by the multiplication $(\b B^{\op}) \ten (\b B^{\op})\to \b B^{\op}$ induced by the isomorphism $(\b B^{\op})\cong (\b B)^{\op}$. Explicitly,
\[
\{b^{\op}\}\{c_1^{\op}, \ldots, c_m^{\op}\}:= \pm\{b\}\{c_m, \ldots, c_1\}^{\op},
\]
where $\pm= (-1)^{m(m+1)/2 + (\deg b-m)(\sum_i \deg c_i -m) + \sum_{i<j}\deg c_i\deg c_j}$.
\end{definition}
Observe that when a filtered brace algebra $B$ is almost commutative, then so is $B^{\op}$.
\begin{lemma}\label{involutiveHH}
Given DGAAs $A,D$ over $R$, there is an involution
\[
-i \co \CCC_R(A,D)^{\op} \to \CCC_R(A^{\op},D^{\op})
\]
of DGAAs given by
\[
i(f)(a_1, \ldots, a_m) = - (-1)^{\sum_{i<j} \deg a_i \deg a_j} (-1)^{m(m+1)/2}f(a_m^{\op}, \ldots , a_1^{\op})^{\op}.
\]
When $A=D$, the involution $-i$ is a morphism of brace algebras, and in particular
$i \co \CCC_R(A)_{[-1]} \to \CCC_R(A)_{[-1]}$ is a morphism of DGLAs.
Whenever $A$ is a cofibrant CDGA over $R$, this involution corresponds under the HKR isomorphism to the involution which acts on $\HHom_A(\Omega^p_{A/R},A)$ as scalar multiplication by $(-1)^{p-1}$.
\end{lemma}
\begin{proof}
This is effectively \cite[\S 2.1]{braunInvolutive}, adapted along the lines of \cite[Lemma \ref{DQnonneg-involutiveHH}]{DQnonneg}, together with the observation that $-i$ acts on braces in the prescribed manner.
\end{proof}
\subsubsection{Semidirect products}\label{semidirectsn}
\begin{lemma}\label{swisslemma}
Given a morphism $\phi \co A \to D$ of almost commutative filtered DGAAs over $R$, the almost commutative brace algebra $\CCC_{R, BD_1}(A)$ of Hochschild cochains acts on the almost commutative DGAA $\CCC_{R, BD_1}(A,D)$ in the form of a morphism
\[
\b_{BD_1,+}\CCC_{R,BD_1} (A) \to\b_{BD_1,+} \CCC_{R,BD_1}(\CCC_{R,BD_1} (A,D))
\]
of almost cocommutative bialgebras.
\end{lemma}
\begin{proof}
Given $g_1, \ldots, g_m\in \CCC_{R,BD_1}(A)$ and $f \in \CCC_{R,BD_1}(A,D)$, the brace operation $\{f\}\{g_1, \ldots, g_m\}$ is well-defined as an element of $\CCC_{R,BD_1}(A,D)$. Reasoning as in \cite[\S 3.2]{voronovHtpyGerstenhaber}, this combines with the morphism $\phi_* \co \CCC_{R,BD_1}(A)\to \CCC_{R,BD_1}(A,D)$
to give an action
\[
M_{\bt,\bt} \co \b_{BD_1}\CCC_{R,BD_1} (A,D)\ten_R \b_{BD_1}\CCC_{R,BD_1}(A)\to \b_{BD_1}\CCC_{R,BD_1}(A,D)
\]
of almost cocommutative dg coalgebras, associative with respect to the brace multiplication of \cite{voronovHtpyGerstenhaber}. This respects the filtrations for the same reason that the multiplication does on the bar construction of an almost commutative brace algebra (Definition \ref{acbracedef}).
Indeed, $\CCC_{R,BD_1}(A,D)$ is a brace $ \CCC_{R,BD_1}(A)$-module in the sense of \cite[Definition 3.2]{safronovPoissonRednCoisotropic}. On restricting to cogenerators, the multiplication above gives a map
\begin{align*}
\b_{BD_1}\CCC_{R,BD_1} (A,D)\to &\HHom( \b_{BD_1}\CCC_{R,BD_1}(A),\CCC_{R,BD_1}(A,D))\\
&\cong \CCC_{R,BD_1}(\CCC_{R,BD_1} (A,D)),
\end{align*}
and as in \cite[Proposition 4.2]{safronovPoissonRednCoisotropic}, this induces a morphism
\[
\b_{BD_1,+}\CCC_{R,BD_1} (A) \to\b_{BD_1,+} \CCC_{R,BD_1}(\CCC_{R,BD_1} (A,D))
\]
of almost cocommutative bialgebras, compatibility with the filtrations being automatic from the description above.
\end{proof}
For an $E_2$-algebra $C$ to act on an $E_1$-algebra $E$ is the same as a morphism from $C$ to the Hochschild complex of $E$. This is what we now construct for Hochschild complexes in the almost commutative setting, so that we will have a $BD_2$-algebra acting on a $BD_1$-algebra.
Lemma \ref{barcobarprop1} then combines with the adjunction property to give morphisms
\[
\CCC_{R,BD_1} (A) \xla{\sim} \Omega_{BD_1,+}\b_{BD_1,+}\CCC_{R,BD_1} (A) \to \CCC_{R,BD_1}(\CCC_{R,BD_1} (A,D)),
\]
of almost commutative DGAAs, and we need to enhance this to keep track of the brace algebra structures:
\begin{lemma}\label{barcobarprop2}
If $A$ is a complete filtered non-unital almost commutative brace algebra over $R$, then there is a natural almost commutative brace algebra structure on the DGAA $\Omega_{BD_1,+}\b_{BD_1,+}A$. If $\gr_FA$ is moreover flat over $R$, then there is a zigzag of filtered quasi-isomorphisms of almost commutative brace algebras between $A$ and $\Omega_{BD_1,+}\b_{BD_1,+}A$.
\end{lemma}
\begin{proof}
As in \cite{kadeishviliCobarBialg}, there is a natural brace algebra structure on $\Omega_+C$ for any bialgebra $C$; we now show that when $C$ is almost cocommutative, the resulting brace algebra structure on $\Omega_{BD_1,+}C$ is almost commutative.
For $c \in C$, the brace operation
\[
\{c\}\{-\}\co \Omega(C) \to \Omega(C)
\]
is defined by first taking the element $\sum_r \Delta^{(r)}c \in TC$, then applying the multiplication from $C$ internally within each subspace $C^{\ten r}$. Since $\Delta$ is almost cocommutative and $\Omega C$ almost commutative, it follows that when $c \in F_pC$, we get $\{c\}\{(\beta^*F)_i\Omega C\} \subset (\beta^*F)_{i+p}\Omega C$. Equivalently, for $y \in (\beta^*F)_i\Omega C$, the map $\{-\}\{y\}$ sends $(\beta^*F)_{p}C= F_{p-1}C$ to $(\beta^*F)_{i+p-1}\Omega C$.
We automatically have $\{c\}\{\}_0=c$, and the higher braces $\{c\}\{-\}_n \co \Omega(C)^{\ten n} \to \Omega(C)$ are then set to be $0$ for $c \in C$, and extended to the whole of $\Omega C$ via the identities
\[
\{xz\}\{y_1, \ldots, y_n\} = \sum_{i=0}^n \pm x\{y_1, \ldots, y_i\}z\{y_{i+1}, \ldots, y_n\}.
\]
In particular, this means that $\{-\}\{y\}$ is a derivation, so must map $(\beta^*F)_{p}\Omega C$ to $(\beta^*F)_{i+p-1}\Omega C$, since it does so on generators. We can then describe higher braces $\{-\}\{y_1, \ldots, y_n\}$ as the composition
\[
\Omega(C) \xra{\Delta^{(n)}} \Omega(C)^{\ten n} \xra{\{-\}\{y_1\}\ten \ldots \ten \{-\}\{y_n\} }\Omega(C)^{\ten n} \to \Omega(C),
\]
the final map being given by multiplication. By the construction of $\beta^*$, the map $\Delta^{(n)}$ preserves the filtration $(\beta^*F)$, so for $y_i \in (\beta^*F)_{q_i}\Omega C$, we have
\[
\{-\}\{y_1, \ldots, y_n\}\co (\beta^*F)_p\Omega(C)\to (\beta^*F)_{(p+q_1+ \ldots +q_n -n)}\Omega C,
\]
making $\Omega_{BD_1,+}C$ almost commutative.
Taking $C= \b_{BD_1,+}A$ gives an almost commutative brace algebra $\Omega_{BD_1,+}\b_{BD_1,+}A$ and an almost commutative DGAA quasi-isomorphism $\Omega_{BD_1,+}\b_{BD_1,+}A \to A$ by Lemma \ref{barcobarprop1}, but this is not a brace algebra morphism in general. If we let $\Omega_{\Br,+}$ be the left adjoint to $\b_{BD_1}$ as a functor from almost commutative brace algebras to almost cocommutative bialgebras, then it suffices to establish a filtered brace algebra quasi-isomorphism $\Omega_{BD_1,+}\b_{BD_1,+}A\to \Omega_{\Br,+}\b_{BD_1,+}A $. If we disregard the filtrations, this is the main result of \cite{youngBraceBar}, and the filtered case follows by observing that the homotopy of \cite[Theorem 3.3]{youngBraceBar} preserves the respective filtrations.
\end{proof}
Combining Lemmas \ref{swisslemma} and \ref{barcobarprop2} gives:
\begin{proposition}\label{swissprop}
For any morphism $\phi \co A \to D$ of almost commutative filtered DGAAs over $R$, there is a canonical zigzag
\[
\CCC_{R, BD_1}(A) \la \tilde{C} \to \CCC_{R, BD_1}(\CCC_{R, BD_1}(A,D))
\]
of almost commutative brace algebras over $R$.
\end{proposition}
\begin{definition}\label{semidirectdef}
Given an almost commutative brace algebra $C$ over $R$, and an almost commutative DGAA $E$ over $R$ which is a left brace $C$-module compatibly with the filtrations, define the semidirect product $E_{[1]} \rtimes C$ to be the almost commutative non-unital brace algebra given by the homotopy fibre product of the diagram
\[
\tilde{C} \to \CCC_{R, BD_1}(E) \la \CCC_{R,BD_1,+}(E),
\]
for the brace algebra resolution $\tilde{C}$ of $C$ mapping to $\CCC_{R,BD_1}(E)$ via Lemma \ref{barcobarprop2} and the proof of Lemma \ref{swisslemma}.
\end{definition}
\begin{remark}\label{swissrmk}
Observe that we have a natural morphism $ E_{[1]} \rtimes C \to C$ of non-unital brace algebras, with homotopy fibre given by the homotopy kernel of $\CCC_{R,BD_1,+}(E) \to \CCC_{R,BD_1}(E) $. As a complex, this kernel is just $E_{[1]}$, and the underlying DGLA is just the DGLA underlying the DGAA $E$. For more discussion of the map $\CCC_{R,+}(E) \to \CCC_R(E) $ of $E_2$-algebras, see \cite[\S 2.7]{kontsevichOperads}.
\end{remark}
\section{Defining quantisations for derived co-isotropic structures}\label{affinesn}
In this section, we develop a precise notion of quantisation for derived co-isotropic structures in a stacky affine setting. Recall that we are fixing a chain CDGA $R$ over $\Q$.
\subsection{Stacky thickenings of derived affines}\label{stackyCDGAsn}
We now recall some definitions and lemmas from \cite[\S \ref{poisson-Artinsn}]{poisson}, as summarised in \cite[\S \ref{DQvanish-bicdgasn}]{DQvanish}. By default, we will regard the CDGAs in derived algebraic geometry as chain complexes $\ldots \xra{\delta} A_1 \xra{\delta} A_0 \xra{\delta} \ldots$ rather than cochain complexes --- this will enable us to distinguish easily between derived (chain) and stacky (cochain) structures.
\begin{definition}
A stacky CDGA is a chain cochain complex $A^{\bt}_{\bt}$ equipped with a commutative product $A\ten A \to A$ and unit $\Q \to A$. Given a chain CDGA $R$, a stacky CDGA over $R$ is then a morphism $R \to A$ of stacky CDGAs. We write $DGdg\CAlg(R)$ for the category of stacky CDGAs over $R$, and $DG^+dg\CAlg(R)$ for the full subcategory consisting of objects $A$ concentrated in non-negative cochain degrees.
\end{definition}
When working with chain cochain complexes $V^{\bt}_{\bt}$, we will usually denote the chain differential by $\delta \co V^i_j \to V^i_{j-1}$, and the cochain differential by $\pd \co V^i_j \to V^{i+1}_j$.
Readers interested only in DM (as opposed to Artin) stacks may ignore the stacky part of the structure and consider only chain CDGAs $A_{\bt}= A^0_{\bt}$ throughout this section.
\begin{definition}
Say that a morphism $U \to V$ of chain cochain complexes is a levelwise quasi-isomorphism if $U^i \to V^i$ is a quasi-isomorphism for all $i \in \Z$. Say that a morphism of stacky CDGAs is a levelwise quasi-isomorphism if the underlying morphism of chain cochain complexes is so.
\end{definition}
There is a model structure on chain cochain complexes over $R$ in which weak equivalences are levelwise quasi-isomorphisms and fibrations are surjections --- this follows, for instance, by identifying chain cochain complexes with the category $dg\Mod_{\bG_m}(R[\pd])$ of \S \ref{filtrnsn}, for $\pd$ of chain degree $0$ and weight $1$, with $\pd^2=0$.
The following is \cite[Lemma \ref{poisson-bicdgamodel}]{poisson}:
\begin{lemma}\label{bicdgamodel}
There is a cofibrantly generated model structure on stacky CDGAs over $R$ in which fibrations are surjections and weak equivalences are levelwise quasi-isomorphisms.
\end{lemma}
There is a denormalisation functor $D$ from non-negatively graded CDGAs to cosimplicial algebras, with
left adjoint $D^*$ as in \cite[Definition \ref{ddt1-nabla}]{ddt1}.
Given a cosimplicial chain CDGA $A$, $D^*A$ is then a stacky CDGA in non-negative cochain degrees. By \cite[Lemma \ref{poisson-Dstarlemma}]{poisson}, $D^*$ is a left Quillen functor from the Reedy model structure on cosimplicial chain CDGAs to the model structure of Lemma \ref{bicdgamodel}.
Since $DA$ is a pro-nilpotent extension of $A^0$, when $\H_{<0}(A)=0$ we think of the simplicial hypersheaf $\oR \Spec DA$ as a stacky derived thickening of the derived affine scheme $\oR \Spec A^0$. Stacky CDGAs arise as formal completions of derived Artin $N$-stacks along affine atlases, as in \cite[\S \ref{poisson-stackyCDGAsn}]{poisson}. When $\fX$ is a $1$-geometric derived Artin stack (i.e. has affine diagonal), the formal completion of an affine atlas $U \to \fX$ is given by the relative de Rham complex
\[
O(U) \xra{\pd} \Omega^1_{U/\fX} \xra{\pd} \Omega^2_{U/\fX}\xra{\pd}\ldots ,
\]
which arises by applying the functor $D^*$
to the \v Cech nerve of $U$ over $\fX$.
\begin{definition}
Given a chain cochain complex $V$, define the cochain complex $\hat{\Tot} V \subset \Tot^{\Pi}V$ by
\[
(\hat{\Tot} V)^m := (\bigoplus_{i < 0} V^i_{i-m}) \oplus (\prod_{i\ge 0} V^i_{i-m})
\]
with differential $\pd \pm \delta$.
\end{definition}
\begin{definition}
Given a stacky CDGA $A$ and $A$-modules $M,N$ in chain cochain complexes, we define internal $\Hom$s
$\cHom_A(M,N)$ by
\[
\cHom_A(M,N)^i_j= \Hom_{A^{\#}_{\#}}(M^{\#}_{\#},N^{\#[i]}_{\#[j]}),
\]
with differentials $\pd f:= \pd_N \circ f \pm f \circ \pd_M$ and $\delta f:= \delta_N \circ f \pm f \circ \delta_M$,
where $V^{\#}_{\#}$ denotes the bigraded vector space underlying a chain cochain complex $V$.
We then define the $\Hom$ complex $\hat{\HHom}_A(M,N)$ by
\[
\hat{\HHom}_A(M,N):= \hat{\Tot} \cHom_A(M,N).
\]
\end{definition}
Note that there is a multiplication $\hat{\HHom}_A(M,N)\ten \hat{\HHom}_A(N,P)\to \hat{\HHom}_A(M,P)$; beware that the same is not true for $\Tot^{\Pi} \cHom_A(M,N)$ in general.
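To see why, note that for $f \in \hat{\HHom}_A(M,N)^m$ and $g \in \hat{\HHom}_A(N,P)^n$ with components $f^j \in \cHom_A(M,N)^j_{j-m}$ and $g^k \in \cHom_A(N,P)^k_{k-n}$, the composite has components
\[
(g \circ f)^i = \sum_{j+k=i} g^k \circ f^j,
\]
all terms of which land in the same bigraded piece of $\cHom_A(M,P)$, so the sum has to be finite in order to make sense. Since $f^j$ (resp.\ $g^k$) vanishes for all but finitely many negative $j$ (resp.\ $k$), and for fixed $i$ there are only finitely many decompositions $i=j+k$ with $j,k \ge 0$, only finitely many terms are non-zero; a similar bound shows that $(g\circ f)^i$ vanishes for all but finitely many $i<0$, so $g \circ f$ again lies in $\hat{\HHom}_A(M,P)$. For elements of $\Tot^{\Pi}$, with arbitrary components in negative cochain degrees, the corresponding sums can be genuinely infinite.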
\begin{definition}\label{hfetdef}
A morphism $A \to B$ in $DG^+dg\CAlg(R)$ is said to be homotopy formally \'etale when the map
\[
\{\Tot \sigma^{\le q} (\oL\Omega_{A}^1\ten_{A}^{\oL}B^0)\}_q \to \{\Tot \sigma^{\le q}(\oL\Omega_{B}^1\ten_B^{\oL}B^0)\}_q
\]
on the systems of brutal cotruncations is a pro-quasi-isomorphism.
\end{definition}
Combining \cite[Proposition \ref{poisson-replaceprop}]{poisson} with \cite[Theorem \ref{stacks2-bigthm} and Corollary \ref{stacks2-Dequivcor}]{stacks2}, every strongly quasi-compact derived Artin $N$-stack over $R$ can be resolved by a derived DM hypergroupoid (a form of homotopy formally \'etale cosimplicial diagram) in $DG^+dg\CAlg(R)$.
The constructions of \S \ref{centresn} all adapt to chain cochain complexes, by just regarding the cochain structure as a $\bG_m$-equivariant $\Q[\pd]$-module structure; quasi-isomorphisms are only considered in the chain direction. We refer to associative (resp. brace) algebras in chain cochain complexes as stacky DGAAs (resp. stacky brace algebras), and have the obvious notions of almost commutativity for filtered stacky DGAAs and filtered stacky brace algebras. We define bar constructions $\b$ generalising Definition \ref{bardef} so that shifts are exclusively in the chain direction.
\begin{definition}\label{HHdefa}
For a stacky DGAA $A$ over $R$ and an $A$-bimodule $M$ in chain cochain complexes,
we define the internal cohomological Hochschild complex $\C\C_R(A,M)$ to be the chain cochain subcomplex of
\[
\cHom_R(\b A, \b(A \oplus M_{[1]}))
\]
consisting of coderivations extending the zero coderivation on $\b A$, where the algebra structure on $A \oplus M_{[1]}$ is defined so that $M$ is a square-zero ideal.
Since a coderivation is determined by its value on cogenerators, the complex $\C\C_R(A,M)$ is given explicitly by
\[
\C\C_R(A,M)_{\#}:=\prod_n \cHom_R( A^{\ten n}, M)_{[n]},
\]
with chain differential $\delta \pm b$, for the
Hochschild differential $b$ given by
\begin{align*}
(b f)(a_1, \ldots , a_n) = &a_1 f(a_2, \ldots, a_n)\\
&+ \sum_{i=1}^{n-1}(-1)^i f(a_1, \ldots, a_{i-1}, a_ia_{i+1}, a_{i+2}, \ldots, a_n)\\
&+ (-1)^n f(a_1, \ldots, a_{n-1})a_n.
\end{align*}
We simply write $\C\C_R(A)$ for $\C\C_R(A,A)$.
When $(A,F)$ is almost commutative and $(M,F)$ is a filtered $A$-bimodule for which the left and right $\gr^FA$-module structures on $\gr^FM$ agree, we define the filtered chain cochain complex
\[
\C\C_{R, BD_1}(A,M)
\]
by endowing $\C\C_R(A,M)$ with the filtration $\gamma F$ of Definition \ref{HHdef0}, and completing with respect to it.
\end{definition}
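In the lowest weights, the Hochschild differential recovers familiar formulae. The weight $0$ component of $\C\C_R(A,M)$ is just $M$, with
\[
(b m)(a) = am - ma,
\]
while for $f \in \cHom_R(A,M)$ of weight $1$ we have
\[
(bf)(a_1,a_2) = a_1 f(a_2) - f(a_1a_2) + f(a_1)a_2,
\]
so (when $\delta=0$) cocycles of weight $0$ are the central elements of $M$ and cocycles of weight $1$ are the derivations from $A$ to $M$, as in classical Hochschild cohomology.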
\subsection{Differential operators}
We now fix a stacky CDGA $B$ over a chain CDGA $R$, and recall the definitions of differential operators from \cite[\S \ref{DQvanish-biquantsn}]{DQvanish}.
\begin{definition}\label{Diffdef}
Given $B$-modules $M,N$ in chain cochain complexes, inductively define the
filtered chain cochain complex $\cDiff(M,N)= \cDiff_{B/R}(M,N)\subset \cHom_R(M,N)$ of differential operators from $M$ to $N$ by setting
\begin{enumerate}
\item $F_0 \cDiff(M,N)= \cHom_B(M,N)$,
\item $F_{k+1} \cDiff(M,N)=\{ u \in \cHom_R(M,N)~:~ [b,u]\in F_{k} \cDiff(M,N)\, \forall b \in B \}$, where $[b,u]= bu- (-1)^{\deg b\deg u} ub$.
\item $\cDiff(M,N)= \LLim_k F_k\cDiff(M,N)$.
\end{enumerate}
We simply write $\cDiff_{B/R}(M):= \cDiff_{B/R}(M,M)$.
We then define the filtered cochain complex $\hat{\Diff}(M,N)= \hat{\Diff}_{B/R}(M,N)\subset \hat{\HHom}_R(M,N)$ by $\hat{\Diff}(M,N):= \hat{\Tot} \cDiff(M,N)$.
\end{definition}
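To orient the reader, in the simplest case $B=R[x]$ (concentrated in bidegree $(0,0)$) this recovers the classical filtration by order of differential operators. For instance, the derivation $\partial_x$ satisfies
\[
[x, \partial_x](f) = x\partial_x f - \partial_x(xf) = -f,
\]
so $[x,\partial_x] = -1 \in F_0\cDiff_{B/R}=\cHom_B(B,B)\cong B$, and hence $\partial_x \in F_1\cDiff_{B/R}$. One checks inductively that $F_k\cDiff_{B/R}$ then consists of the differential operators of order at most $k$ in the usual sense, with the associated graded map below sending an operator to its principal symbol.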
\begin{definition}
Given a $B$-module $M$ in chain cochain complexes, write $\sD(M)= \sD_{B/R}(M):= \hat{\Diff}_{B/R}(M,M)$, which we regard as a sub-DGAA of $\hat{\HHom}_R(M,M)$. We simply write $\sD_B= \sD_{B/R}$ for $\sD_{B/R}(B,B)$ and $\cDiff_{B/R}$ for $\cDiff_{B/R}(B,B)$.
\end{definition}
The definitions ensure that the associated gradeds $\gr^F_k\cDiff_B(M,N)$ have the structure of $B$-modules. As in \cite{DQvanish}, there are maps
\[
\gr^F_{k} \cDiff(M,N) \to \cHom_B(M\ten_B\CoS^k_B\Omega^1_B,N)
\]
for all $k$, which are isomorphisms when $B$ is cofibrant. [Here, $\CoS_B^p(M) =\Co\Symm^p_B(M)= (M^{\ten_B p})^{\Sigma_p}$ and $\Co\Symm_B(M) = \bigoplus_{p \ge 0}\CoS_B^p(M)$.]
The following is \cite[Definition \ref{DQvanish-bistrictlb}]{DQvanish}:
\begin{definition}\label{bistrictlb}
Define a strict line bundle over $B$ to be a $B$-module $M$ in chain cochain complexes such that $M^{\#}_{\#}$ is a projective module of rank $1$ over the bigraded-commutative algebra $B^{\#}_{\#}$ underlying $B$.
\end{definition}
The motivating examples of strict line bundles, and the only ones we will need to consider for our applications in \S \ref{lbsn}, are the double complexes $B_c$ defined as follows. Given $c \in \z^1\z_0B$, we just set $B_c^{\#}$ to be the $B$-module $B^{\#}$ (so the chain differential is still $\delta$), and then we set the cochain differential to be $\pd +c$.
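It is an elementary check that $B_c$ is indeed a chain cochain $B$-module with these conventions: since $\delta c =0$ and $c$ has chain degree $0$, the differential $\delta$ commutes with multiplication by $c$, while $\pd c =0$ and (by graded-commutativity in characteristic $0$, $c$ being of odd cochain degree) $c^2=0$, so for $m \in B_c$,
\[
(\pd + c)^2m = \pd^2 m + \pd(cm) + c\,\pd m + c^2m = (\pd c)m + c^2 m = 0.
\]
Since the underlying bigraded module is $B^{\#}_{\#}$ itself, $B_c$ is then a strict line bundle in the sense of Definition \ref{bistrictlb}.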
\subsection{Relative quantised polyvectors}
\begin{definition}\label{QPoldef}
Given a morphism $\phi \co A\to B$ of cofibrant stacky CDGAs over $R$ and a strict line bundle $M$ over $B$, we define the DGLA $Q\widehat{\Pol}(A,M;0)[1]$ of $0$-shifted relative quantised polyvectors as follows. We first note that Definition \ref{semidirectdef} and Proposition \ref{swissprop} adapt to double complexes to give a non-unital almost commutative stacky brace algebra
\[
\C:= \C\C_{R,BD_1}(A, \cDiff_{B/R}(M))_{[1]} \rtimes \C\C_{R,BD_1}(A),
\]
and then form the DGLA
\[
Q\widehat{\Pol}_R(A,M;0) := \prod_{p \ge 0} \hat{\Tot}(\gamma F)_p \C\hbar^{p-1}.
\]
We define filtrations $\tilde{F}$ and $G$ on $Q\widehat{\Pol}_R(A,M;0)$ by
\begin{align*}
\tilde{F}^iQ\widehat{\Pol}_R(A,M;0) &:= \prod_{p \ge i}\hat{\Tot}(\gamma F)_p\C\hbar^{p-1},\\
G^jQ\widehat{\Pol}_R(A,M;0) &:= Q\widehat{\Pol}_R(A,M;0)\hbar^j.
\end{align*}
\end{definition}
Note that almost commutativity of $\C$ implies that $[\tilde{F}^iQ\widehat{\Pol}, \tilde{F}^jQ\widehat{\Pol}] \subset \tilde{F}^{i+j-1}Q\widehat{\Pol}$ and $[G^iQ\widehat{\Pol}, G^jQ\widehat{\Pol}]\subset G^{i+j}Q\widehat{\Pol}$.
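Explicitly, for $u = \sum_{p \ge i} u_p\hbar^{p-1}$ and $v=\sum_{q \ge j} v_q \hbar^{q-1}$ with $u_p \in \hat{\Tot}(\gamma F)_p\C$ and $v_q \in \hat{\Tot}(\gamma F)_q\C$, we have
\[
[u,v] = \sum_{p,q} [u_p,v_q]\hbar^{(p+q-1)-1},
\]
with $[u_p,v_q] \in \hat{\Tot}(\gamma F)_{p+q-1}\C$ by almost commutativity, so each term lies in the factor indexed by $p+q-1 \ge i+j-1$; the statement for $G$ is immediate from $\hbar$-bilinearity of the bracket.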
\begin{remark}\label{extremecasesrmk}
When $B=0$, observe that $\sD_{B/R}=0$, so we just have $Q\widehat{\Pol}_R(A,0;0) \simeq \prod_{p \ge 0}(\hat{\Tot} \gamma_p\C\C_{R,BD_1}(A)\hbar^{p-1})$, which admits a filtered quasi-isomorphism from the complex $Q\widehat{\Pol}_R(A,0)$ of $0$-shifted quantised polyvectors from \cite[Definition \ref{DQnonneg-qpoldef}]{DQnonneg} as in Remark \ref{HKRrmk}.
By Remark \ref{swissrmk}, there is always a projection $ Q\widehat{\Pol}_R(A,M;0)[1]\to Q\widehat{\Pol}_R(A,0;0)[1]$, and the homotopy fibre over $0$ is equivalent to the filtered $L_{\infty}$-algebra underlying the DGAA $Q\widehat{\Pol}_A(B,-1):= \prod_{p \ge 0}F_p\sD_{B/A}(M)\hbar^{p-1}$ when $B$ is cofibrant over $A$. The latter follows because the HKR isomorphism for $A$ ensures that $ \cDiff_{B/A} \to \C\C_R(A, \cDiff_{B/R})$ is a filtered quasi-isomorphism.
\end{remark}
The following is standard:
\begin{definition}\label{mcPLdef}
Given a DGLA $L$, define the Maurer--Cartan set by
\[
\mc(L):= \{\omega \in L^{1}\ \,|\, d\omega + \half[\omega,\omega]=0 \in L^{2}\}.
\]
Following \cite{hinstack}, define the Maurer--Cartan space $\mmc(L)$ (a simplicial set) of a nilpotent DGLA $L$ by
\[
\mmc(L)_n:= \mc(L\ten_{\Q} \Omega^{\bt}(\Delta^n)),
\]
where
\[
\Omega^{\bt}(\Delta^n)=\Q[t_0, t_1, \ldots, t_n,\delta t_0, \delta t_1, \ldots, \delta t_n ]/(\sum t_i -1, \sum \delta t_i)
\]
is the commutative dg algebra of de Rham polynomial forms on the $n$-simplex, with the $t_i$ of degree $0$.
Given a pro-nilpotent DGLA $L= \Lim_i L_i$, define $\mmc(L):= \Lim_i \mmc(L_i)$.
\end{definition}
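For example, when the bracket on $L$ is trivial, the Maurer--Cartan equation becomes linear, so $\mc(L) = \z^1L$, and a standard calculation with the acyclic algebras $\Omega^{\bt}(\Delta^n)$ gives
\[
\pi_0\mmc(L) \cong \H^1(L), \qquad \pi_n(\mmc(L),0)\cong \H^{1-n}(L);
\]
this is the calculation we will use for the spaces of generalised isotropic structures below, whose defining DGLAs have trivial bracket.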
\begin{definition}\label{QPdef}
Given a morphism $\phi \co A\to B$ of cofibrant stacky CDGAs over $R$ and a strict line bundle $M$ over $B$, define the space $Q\cP(A,M;0)$ of quantisations of the pair $(A,M)$ to be the space
\[
\mmc(\tilde{F}^2Q\widehat{\Pol}(A,M;0)[1])
\]
of Maurer--Cartan elements of the pro-nilpotent DGLA $\tilde{F}^2Q\widehat{\Pol}(A,M;0)[1]$.
\end{definition}
Replacing $\tilde{F}^2Q\widehat{\Pol}(A,M;0)$ with its quotient by $G^k$ gives a space $ Q\cP(A,M;0)/G^k$; we think of $\cP(A,B;0) :=Q\cP(A,M;0)/G^1$ as being the space of co-isotropic structures on $A \to B$.
\begin{remark}\label{curvedrmk}
Uncoiling the definitions, it follows that each element of $Q\cP(A,M;0)$ gives rise to a curved almost commutative $A_{\infty}$-deformation $\tilde{A}$ of $\hat{\Tot}A$ over $R\llbracket \hbar \rrbracket$ (coming from elements of $\mc(\CCC_{R}(\hat{\Tot} A))$), together with a curved almost commutative $A_{\infty}$-morphism $\tilde{A} \to \sD_{B/R}(M)\llbracket \hbar \rrbracket$ deforming the map $\hat{\Tot} A \to \sD_{B/R}(M)$.
However, there are additional restrictions on the resulting deformations, which remember that they originate from the stacky CDGAs $A \to B$ instead of the CDGAs $\hat{\Tot}A \to \hat{\Tot}B$. When the stacky CDGAs are bounded in the cochain direction, as occurs when they originate from $1$-geometric derived Artin stacks, these additional restrictions are vacuous (cf. \cite[Example \ref{DQnonneg-quantex}]{DQnonneg}).
\end{remark}
\begin{definition}\label{TQpoldef0}
Define the filtered tangent space to quantised polyvectors by
\begin{align*}
TQ\widehat{\Pol}(A,M;0)&:= Q\widehat{\Pol}(A,M;0)\oplus \hbar Q\widehat{\Pol}_R(A,M;0)\eps,\\
\tilde{F}^jTQ\widehat{\Pol}(A,M;0)&:= \tilde{F}^jQ\widehat{\Pol}(A,M;0)\oplus \hbar\tilde{F}^j Q\widehat{\Pol}(A,M;0) \eps,
\end{align*}
for $\eps$ of degree $0$ with $\eps^2=0$. Then $TQ\widehat{\Pol}(A,M;0)[1] $ is a DGLA, with Lie bracket given by $ [u+v\eps, x+y\eps]= [u,x]+ [u,y]\eps + [v,x]\eps$.
Write $TQ\cP(A,M;0)$ for the space
\[
\mmc(\tilde{F}^2TQ\widehat{\Pol}(A,M;0)[1]).
\]
\end{definition}
\begin{definition}\label{TQPoldef}
Given a Maurer--Cartan element $\Delta \in \mc( Q\widehat{\Pol}_R(A,M;0) )$, define $T_{\Delta} Q\widehat{\Pol}_R(A,M;0)$ to be the non-unital brace algebra
\[
(\hbar Q\widehat{\Pol}_R(A,M;0)_{\#}, \delta_{Q\Pol} + [\Delta,-]).
\]
We define filtrations $\tilde{F}$ and $G$ on $T_{\Delta}Q\widehat{\Pol}_R(A,M;0)$ by
\begin{align*}
\tilde{F}^iT_{\Delta}Q\widehat{\Pol}_R(A,M;0)_{\#} &:=\hbar\tilde{F}^iQ\widehat{\Pol}_R(A,M;0)_{\#},\\
G^jT_{\Delta}Q\widehat{\Pol}_R(A,M;0) &:= \hbar^jT_{\Delta}Q\widehat{\Pol}_R(A,M;0).
\end{align*}
Note that $(T_{\Delta} Q\widehat{\Pol}_R(A,M;0), \tilde{F})$ is an almost commutative brace algebra over $R$.
\end{definition}
Observe that $T_{\Delta}Q\cP(A,M;0):= \mmc(\tilde{F}^2T_{\Delta}Q\widehat{\Pol}(A,M;0)[1])$ is just the fibre of $TQ\cP(A,M;0) \to Q\cP(A,M;0)$ over $\Delta$.
\begin{definition}\label{Qsigmadef}
Given $\Delta \in Q\cP(A,M;0)$, define $\sigma(\Delta)\in \z^2(\tilde{F}^2T_{\Delta}Q\widehat{\Pol}(A,M;0))$ to be
\[
-\pd_{\hbar^{-1}}\Delta = \hbar^{2}\frac{\pd \Delta}{\pd \hbar}.
\]
More generally, define $\sigma \co Q\cP(A,M;0)\to TQ\cP(A,M;0)$ to be the morphism induced by the morphism $\Delta \mapsto \Delta -\pd_{\hbar^{-1}}\Delta\eps$ of DGLAs from $Q\widehat{\Pol}(A,M;0)$ to $TQ\widehat{\Pol}(A,M;0)$.
\end{definition}
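To see that $\sigma(\Delta)$ has the stated properties, write $\Delta = \sum_{p \ge 2}\Delta_p\hbar^{p-1}$; then
\[
-\pd_{\hbar^{-1}}\Delta = \sum_{p \ge 2} (p-1)\Delta_p \hbar^{p} \in \hbar\tilde{F}^2Q\widehat{\Pol}(A,M;0)_{\#} = \tilde{F}^2T_{\Delta}Q\widehat{\Pol}(A,M;0)_{\#},
\]
and since $\pd_{\hbar^{-1}}$ commutes with $\delta_{Q\Pol}$ and is a derivation with respect to the $\hbar$-bilinear Lie bracket, applying it to the Maurer--Cartan equation $\delta_{Q\Pol}\Delta + \half[\Delta,\Delta]=0$ gives
\[
(\delta_{Q\Pol} + [\Delta,-])(\pd_{\hbar^{-1}}\Delta) = 0,
\]
so $\sigma(\Delta)$ is a cocycle for the twisted differential of Definition \ref{TQPoldef}.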
As in \cite[\S \ref{poisson-bipoisssn}]{poisson}, we will usually consider stacky CDGAs $A \in DG^+dg\CAlg(R)$ satisfying the following properties:
\begin{assumption}\label{biCDGAprops}
\begin{enumerate}
\item for any cofibrant replacement $\tilde{A}\to A$ in the model structure of Lemma \ref{bicdgamodel}, the morphism $\Omega^1_{\tilde{A}/R}\to \Omega^1_{A/R}$ is a levelwise quasi-isomorphism,
\item the $A^{\#}$-module $(\Omega^1_{A/R})^{\#}$ in graded chain complexes is cofibrant (i.e. it has the left lifting property with respect to all surjections of $A^{\#}$-modules in graded chain complexes),
\item there exists $N$ for which the chain complexes $(\Omega^1_{A/R}\ten_AA^0)^i $ are acyclic for all $i >N$.
\end{enumerate}
\end{assumption}
\begin{lemma}\label{gradedcalclemma}
If $A$ and $B$ are both cofibrant and satisfy Assumption \ref{biCDGAprops}, then $\gr_G^i\tilde{F}^pQ\widehat{\Pol}(A,M;0)$ is quasi-isomorphic to the cocone of
\[
\prod_{j \ge p} \hat{\HHom}_A(\Omega^{j-i}_{A/R},A)\hbar^{j-1}[i-j] \to \prod_{j \ge p} \hat{\HHom}_B(\oL\CoS^{j-i}_B\bL_{B/A},B)\hbar^{j-1}
\]
coming from the connecting homomorphism $S \co \oL\Omega^1_{B/A}=\bL_{B/A}\to\Omega^1_{A/R}[1]$.
Moreover, $\gr_G^i\tilde{F}^pT_{\Delta}Q\widehat{\Pol}(A,M;0)$ is quasi-isomorphic to $\hbar\gr_G^i\tilde{F}^pQ\widehat{\Pol}(A,M;0)$.
\end{lemma}
\begin{proof}
By construction, $\gr_G^i\tilde{F}^pQ\widehat{\Pol}$ is the cocone of
\[
\prod_{j \ge p} \hat{\Tot}\gr^{\gamma}_{j-i}\C\C^{\bt}_R(A)\hbar^{j-1} \to \prod_{j \ge p} \hat{\Tot}\gr^{\gamma F}_{j-i}\C\C^{\bt}_R(A, \cDiff_{B/R} )\hbar^{j-1}.
\]
Since $B$ is assumed cofibrant, we have isomorphisms
\[
\gr^F_{k} \cDiff_{B/R} \to \cHom_B(\CoS^k_B\Omega^1_{B/R},B).
\]
The bar-cobar resolution for $A$ as a commutative algebra then gives quasi-isomorphisms
\begin{align*}
\cHom_A(\Omega^{j-i}_{A/R},A)[i-j]&\to \gr^{\gamma}_{j-i}\C\C^{\bt}_R(A)\\
\cHom_B(\CoS^{j-i}_B(\cocone(\Omega^1_{B/R} \to\Omega^1_{A/R}\ten_AB)) ,B)&\to \gr^{\gamma F}_{j-i}\C\C^{\bt}_R(A, \cDiff_{B/R} ).
\end{align*}
Since $\cocone(\Omega^1_{B/R} \to\Omega^1_{A/R}\ten_AB) $ is a model for the cotangent complex $\bL_{B/A}$, the results follow.
\end{proof}
Given an element $\Delta \in Q\cP(A,M;0)$, we write $\Delta_A$ for the image in $Q\cP(A,0)$ and $\Delta_B$ for the image in $\hat{\Tot}\C\C_{R,BD_1}(A, \cDiff_{B/R})$. If we write $\Delta = \sum_{j \ge 2} \Delta_j \hbar^{j-1}$, then by working modulo $G^1+\tilde{F}^3$, Lemma \ref{gradedcalclemma} allows us to identify $\Delta_2=(\Delta_{2,A},\Delta_{2,B})$ with a closed element of the cocone of
\[
\hat{\HHom}_A(\Omega^2_{A/R},A) \to \oR\hat{\HHom}_B(\oL\CoS^2_B\bL_{B/A},B)[2].
\]
Now $\Delta_{2,A}$ defines a closed element of the first space, and since the composition of this map with
\[
\hat{\HHom}_B(\oL\CoS^2_B\bL_{B/A},B) \to \hat{\HHom}_B(\Omega^1_{B/R}\ten_B^{\oL}\bL_{B/A},B)
\]
is homotopic to $0$, $\Delta_{2,B}$ defines a closed element of the latter.
We then have a diagram
\[
\begin{CD}
\Omega^1_{A/R} @>>> \Omega^1_{B/R}\\
@V{\Delta_{2,A}^{\sharp}}VV @VV{\Delta_{2,B}^{\sharp}}V\\
\hat{\HHom}_A(\Omega^1_{A/R},A) @>{S}>> \oR\hat{\HHom}_B(\bL_{B/A},B)[1]
\end{CD}
\]
commuting up to a canonical homotopy coming from $\Delta_{2,B}$.
\begin{definition}\label{Qnondegdef}
Say that a quantisation $\Delta$ of the pair $(A,M)$ is non-degenerate if the maps
\begin{align*}
\Delta_{2,A}^{\sharp}\co \Tot^{\Pi} (\Omega_{A/R}^1\ten_AA^0) &\to \hat{\HHom}_A(\Omega^1_{A/R}, A^0)\\
\Delta_{2,B}^{\sharp}\co \Tot^{\Pi} (\Omega_{B/R}^1\ten_BB^0) &\to \oR\hat{\HHom}_B(\bL_{B/A}, B^0)[1]
\end{align*}
are quasi-isomorphisms and $\Tot^{\Pi} (\Omega_{A/R}^1\ten_AA^0)$ (resp. $\Tot^{\Pi} (\Omega_{B/R}^1\ten_BB^0)$) is a perfect complex over $A^0$ (resp. $B^0$).
\end{definition}
\section{Compatibility of quantisations and isotropic structures}\label{compatsn}
In this section, we introduce generalised isotropic structures, develop the notion of compatibility between a quantisation and a generalised isotropic structure, and give some preliminary existence results for quantisations of Lagrangians.
\subsection{Morphisms from the de Rham algebra}
\begin{definition}\label{DRdef}
Given a stacky CDGA $A$ over $R$, define the stacky de Rham algebra of $A$ to be the complete filtered stacky CDGA
\[
\cD\cR(A/R)^n_i:= \prod_{j\ge 0} (\Omega^j_A)^n_{i+j}
\]
with filtration $F^p \cD\cR(A/R)= \prod_{j\ge p} (\Omega^j_A)_{[j]}$, cochain differential $\pd$ and chain differential $\delta \pm d$, where $d$ is the de Rham differential, and the differentials $\pd,\delta$ are induced from those on $A$.
We then write $\DR(A/R):= \hat{\Tot}\cD\cR(A/R)$.
\end{definition}
In particular, beware that the de Rham differential is absorbed in the chain (derived) structure, not the cochain (stacky) structure.
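As a sanity check: if $A$ is concentrated in bidegree $(0,0)$ (an ordinary commutative algebra, with $R$ also concentrated in degree $0$), then $\Omega^j_A$ contributes to $\cD\cR(A/R)$ in chain degree $-j$, and $\DR(A/R)$ is just the classical de Rham complex
\[
A \xra{d} \Omega^1_{A/R} \xra{d} \Omega^2_{A/R} \xra{d} \cdots
\]
in cochain degrees $0,1,2,\ldots$, with $F$ the brutal (Hodge) filtration.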
\begin{lemma}\label{liftlemma}
Given a morphism $A \to \gr_F^0B$ of stacky CDGAs over $R$, with $A$ cofibrant and $(B,F)$ a complete filtered stacky CDGA, there is an associated filtered stacky CDGA morphism $\cD\cR(A/R) \to F^0B$ over $R$, unique up to coherent homotopy.
\end{lemma}
\begin{proof}
Since $A$ is cofibrant, $\cD\cR(A)$ is cofibrant as a complete filtered stacky CDGA, in the sense that it has the left lifting property with respect to surjections of complete filtered stacky CDGAs over $R$ which are levelwise filtered quasi-isomorphisms. For any filtered $A$-module $(M,F)$, we may regard $M$ as a $\cD\cR(A)$-module via the projection $\cD\cR(A) \to A$. When $M=F^1M$, the double complex
$ \cHom_{ \cD\cR(A),\Fil}(\Omega^1_{\cD\cR(A)/R}, M)$
of filtered derivations from $\cD\cR(A)$ to $M$ is then
levelwise acyclic, by the construction of $\cD\cR(A)$.
Now, the double complex $ \cHom_{ \cD\cR(A),\Fil}(\Omega^1_{\cD\cR(A)/R},\gr_F^rB )$ governs the obstruction theory to lifting maps from $\cD\cR(A)$ along the square-zero extension $F^0B/F^{r+1}B \to F^0B/F^rB$. Thus the acyclicity above gives the required equivalence of mapping spaces
\[
\map_{\Fil}(\cD\cR(A), B) \simeq \map(A, \gr_F^0B)
\]
of filtered stacky CDGAs and of stacky CDGAs, respectively.
\end{proof}
The following is a slight generalisation of \cite[Lemma \ref{poisson-keylemma}]{poisson}:
\begin{lemma}\label{mulemma}
Take a cofibrant stacky CDGA $A$ over $R$, a complete filtered CDGA $B$ over $R$, and a filtered morphism $\phi \co \DR(A/R)\to B$. Then for any derivation $\pi\in \mc(F^1\DDer_R(B))$, there is an associated filtered CDGA morphism
\[
\mu(-,\pi) \co \DR(A/R) \to (B, \delta + \pi)
\]
given by $\mu(a,\pi)=\phi(a)$ and $\mu(df,\pi) = \phi(df) + \pi\phi(f)$ for $a,f \in A$.
\end{lemma}
\begin{proof}
The formulae clearly define a filtered morphism $ \mu(-,\pi) \co \DR(A)^{\#} \to B^{\#}$ of graded algebras, since $ \phi\circ d + \pi \circ \phi$ defines a derivation on $A$ with respect to $\phi \co A \to B$. We therefore need only check that $\mu$ is a chain map. We have
\begin{align*}
\delta\mu(a,\pi) &= \phi(\delta a)+\phi(da)\\
\pi\mu(a,\pi) &=\pi\phi(a)\\
(\delta+\pi)\mu(a,\pi)&= \mu(\delta a +da,\pi),
\end{align*}
and applying the calculation above to $a=f$, together with $(\delta +\pi)^2=0$, gives
\begin{align*}
(\delta+\pi)\mu(df,\pi) &= - (\delta+\pi)\mu(\delta f, \pi)\\
&= - (\delta+\pi)\phi(\delta f)\\
&= -\phi(d\delta f) -\pi\phi(\delta f)\\
&= \mu(-d\delta f, \pi)\\
&=\mu( (\delta -d)df, \pi),
as required.
\end{proof}
Combining Lemmas \ref{liftlemma} and \ref{mulemma} gives:
\begin{lemma}\label{mulemma2}
Take a morphism $\phi \co A \to \gr_F^0B$ of stacky CDGAs over $R$, with $A$ cofibrant and $B$ a complete filtered stacky CDGA. Then for any $\pi\in \mmc(\hat{\Tot} F^1\cDer_R(B))$, there is an associated morphism
\[
\mu(-,\pi) \co \DR(A/R) \to (\hat{\Tot}{B},\delta + \pi),
\]
of
filtered CDGAs, unique up to coherent homotopy.
\end{lemma}
\subsection{The compatibility map}
We now develop the notion of compatibility between de Rham data and quantisations of a morphism $A \to B$, generalising the notion of compatibility between generalised $0$-shifted pre-symplectic structures and $E_1$ quantisations from \cite{DQnonneg}.
A choice of Levi decomposition of the Grothendieck--Teichm\"uller group over $\Q$ gives a formality quasi-isomorphism $E_2 \simeq P_2$.
Writing $\tau$ for the good truncation filtration $\tau_{\ge p}$ on a homological operad, a formality quasi-isomorphism automatically gives a filtered quasi-isomorphism $(E_2, \tau) \simeq (P_2, \tau)$. The filtration $\tau$ on $P_2$ gives the commutative multiplication weight $0$ and the Lie bracket weight $-1$, and we refer to $(P_2, \tau)$-algebras in complete filtered complexes as almost commutative $P_2$-algebras.
Likewise, the map in \cite{voronovHtpyGerstenhaber} from the $E_2$ operad to the brace operad $\Br$ must preserve the good truncation filtrations. Finally, note that the good truncation filtration is contained in the filtration $\gamma$ on $\Br$ from Definition \ref{acbracedef}, since all operations of homological degree $r$ lie in $\gamma^r$, so in particular the closed operations do so. Thus every almost commutative brace algebra can be regarded as an $(E_2, \tau)$-algebra.
\begin{definition}
Given a Levi decomposition $w \in \Levi_{\GT}(\Q)$ of the Grothendieck--Teichm\"uller group $\GT$ over $\Q$, we denote by $p_w$ the resulting $\infty$-functor from almost commutative brace algebras to almost commutative $P_2$-algebras over $\Q$.
\end{definition}
Note that the $\infty$-functor $p_w$ automatically commutes with the fibre functors $A \mapsto F_1A$ to the underlying filtered DGLAs.
\begin{definition}
For any of the definitions from \S \ref{affinesn}, we add the subscript $w$ to indicate that we are replacing $\C\C_{R,BD_1}(A) $ with $p_{w}\C\C_{R,BD_1}(A)$ in the construction.
\end{definition}
Since the DGLAs underlying $\C\C_{R,BD_1}(A) $ and $p_{w}\C\C_{R,BD_1}(A)$ are filtered quasi-isomorphic, in particular we have canonical weak equivalences $Q\cP_w(A,0) \simeq Q\cP(A,0)$. Properties of the filtration $\tilde{F}$ then ensure that the complexes $T_{\Delta}Q\widehat{\Pol}_w(A,0)$ are filtered $(P_2,\tau)$-algebras.
\begin{definition}\label{mudef}
Given a choice $w \in \Levi_{\GT}(\Q)$ of Levi decomposition for $\GT$ and $\Delta \in Q\cP_w(A,M;0)/G^j$, define
\[
\mu_w(-,\Delta) \co \cocone(\DR(A/R) \to \DR(B/R))\llbracket\hbar\rrbracket/\hbar^j \to T_{\Delta}Q\widehat{\Pol}_w(A,M;0)/G^j
\]
as follows.
Since $[B ,F_i\cDiff_{B/A}] \subset F_{i-1}\cDiff_{B/A}$, we have a map $B \to \gr_{\gamma F}^0 \C\C_{R,BD_1}( \cDiff_{B/A})$. Combined with the weak equivalence $\cDiff_{B/A}\to \C\C_{R,BD_1}(A, \cDiff_{B/R})$, up to coherent homotopy this gives a commutative diagram
\[
\begin{CD}
A @>>> B \\
@VVV @VVV \\
\gr^0_{\tilde{\gamma}} (p_w\C\C_{R,BD_1}(A)\llbracket\hbar\rrbracket/\hbar^j) @>>> \gr^0_{\widetilde{\gamma F}} (p_w\C\C_{R,BD_1}(\cDiff_{B/A} )\llbracket\hbar\rrbracket/\hbar^j)
\end{CD}
\]
where the filtrations on the bottom row are taken to be $(\widetilde{\gamma F})^p:= \prod_{i \ge p} (\gamma F)_i \hbar^i$.
Applying Lemma \ref{mulemma2} to this diagram
and the Maurer--Cartan elements on the bottom line induced by $\Delta$ yields a diagram
\[
\begin{CD}
\DR(A) @>{\mu_w(-,\Delta)}>>(\hat{\Tot}\widetilde{\gamma}^0 (p_w\C\C_{R,BD_1}(A)\llbracket\hbar\rrbracket/\hbar^j), \delta+ [\Delta_A,-])\\
@VVV @VVV\\
\DR(B ) @>{\mu_w(-,\Delta)}>> (\hat{\Tot}\widetilde{\gamma F}^0 (p_w\C\C_{R,BD_1}( \cDiff_{B/A})\llbracket\hbar\rrbracket/\hbar^j), \delta + [\Delta_B,-])\\
@AAA @AAA \\
0 @>>>(\hat{\Tot}\widetilde{\gamma F}^0 (p_w\C\C_{R,BD_1,+}( \cDiff_{B/A})\llbracket\hbar\rrbracket/\hbar^j), \delta + [\Delta_B,-]),
\end{CD}
\]
and taking homotopy limits of the columns gives the desired map.
\end{definition}
\begin{remark}\label{cfDQvanish}
When $B=0$, this recovers the definition of $\mu_w$ from \cite[Definition \ref{DQnonneg-muwdef}]{DQnonneg}. When $R=A$, this definition is slightly different from that in \cite[Definition \ref{DQvanish-QPolmudef}]{DQvanish}. The construction there relied on a filtered DGAA resolution $\DR'(B/R)$ of $\DR(B/R)$, with \cite[Lemma \ref{DQvanish-mulemma1}]{DQvanish} giving a non-commutative analogue of Lemma \ref{mulemma2}.
Instead, Definition \ref{mudef} effectively constructs the map $\mu_w \co \DR(B/R) \to T_{\Delta}\sD_{B/R}$ in this setting by first taking
\[
\DR(B/R) \to p_w\hat{\Tot}\C\C_{R,BD_1}( \cDiff_{B/R})
\]
using the commutative structure underlying a $P_2$-algebra, then applying the projection $\C\C_{R,BD_1}( \cDiff_{B/R})\to \cDiff_{B/R} $. The map $\mu_w$ then converges more quickly than the map $\mu$ in \cite{DQvanish}, but depends on a choice of formality isomorphism.
This raises the question of whether the construction of \cite{DQvanish} could be adapted to unshifted symplectic structures, giving equivalences not relying on formality. This would mean establishing an analogue of Lemma \ref{liftlemma} giving a universal property for $\DR(B/R)$ within a suitable category of filtered $E_2$-algebras. The filtered DGAA $\DR'(B/R)$ is not almost commutative, but the left and right $A$-module structures on $\gr_F\DR'(B/R)$ agree. Similarly, $\DR(B/R)$ will not have the desired universal property in $BD_2$-algebras, but the analogy raises the possibility that it might do so in some larger category.
\end{remark}
\subsubsection{Generalised Lagrangians}
We now fix a cofibrant stacky CDGA $A$ over $R$, and a cofibration $A \to B$ of stacky CDGAs over $R$.
\begin{definition}
Recall that a $0$-shifted pre-symplectic structure $\omega$ on $A/R$ is an element
\[
\omega \in \z^{2}F^2\DR(A/R).
\]
It is called symplectic if $\omega_2 \in \z^0\Tot^{\Pi}\Omega^2_{A/R}$ induces a quasi-isomorphism
\[
\omega_2^{\sharp} \co \hat{\HHom}_A(\Omega^1_{A/R}, A^0)\to \Tot^{\Pi} (\Omega_{A/R}^1\ten_AA^0)
\]
and $\Tot^{\Pi} (\Omega_{A/R}^1\ten_AA^0)$ is a perfect complex over $A^0$.
An isotropic structure on $B$ relative to $\omega$ is an element $(\omega, \lambda)$ of
\[
\z^{2}\cocone(F^2\DR(A/R)\to F^2\DR(B/R))
\]
lifting $\omega$. This structure is called Lagrangian if $\omega$ is symplectic and the image $\bar{\lambda}_2$ of $\lambda$ in $\z^{-1}\Tot^{\Pi}\Omega^1_{B/R}\ten_B\Omega^1_{B/A} $
induces a quasi-isomorphism
\[
\bar{\lambda}_2^{\sharp} \co \hat{\HHom}_B(\Omega^1_{B/A}, B^0)\to \Tot^{\Pi} (\Omega_{B/R}^1\ten_BB^0)[-1]
\]
and $\Tot^{\Pi} (\Omega_{B/A}^1\ten_BB^0)$ is a perfect complex over $B^0$.
\end{definition}
\begin{definition}\label{tildeFDRdef}
Define a decreasing filtration $\tilde{F}$ on $ \DR(A/R)\llbracket\hbar\rrbracket$ by
\[
\tilde{F}^p\DR(A/R)\llbracket\hbar\rrbracket:= \prod_{i\ge 0} F^{p-i}\DR(A/R)\hbar^{i}.
\]
Define a further filtration $G$ by $ G^k \DR(A/R)\llbracket\hbar\rrbracket = \hbar^{k}\DR(A/R)\llbracket\hbar\rrbracket$.
\end{definition}
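Unwinding the definition, since $F^p\DR = \DR$ for $p \le 0$, we have
\[
\tilde{F}^2\DR(A/R)\llbracket\hbar\rrbracket = F^2\DR(A/R) \oplus \hbar F^1\DR(A/R)\oplus \hbar^2\DR(A/R)\llbracket\hbar\rrbracket,
\]
so a generalised structure has leading term lying in $F^2$, with each successive power of $\hbar$ relaxing the Hodge filtration by one step.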
\begin{definition}\label{GPreSpdef}
Define the space of generalised $0$-shifted isotropic structures on the pair $(A,B)$ over $R$ to be the simplicial set
\[
G\Iso(A,B;0):= \mmc( \tilde{F}^2\cone(\DR(A/R)\llbracket\hbar\rrbracket \to \DR(B/R)\llbracket\hbar\rrbracket)),
\]
where we regard the cochain complex as a DGLA with trivial bracket.
Also write $G\Iso(A,B;0)/\hbar^{k}$ for the obvious truncation in terms of $\DR\llbracket\hbar\rrbracket/\hbar^k$,
so $ G\Iso(A,B;0)= \Lim_k G\Iso(A,B;0)/\hbar^{k} $. Write $\Iso = G\Iso/\hbar$.
Set $G\Lag(A,B;0) \subset G\Iso(A,B;0)$ to consist of the points whose images in $\Iso(A,B;0)$ are Lagrangians for symplectic structures --- this is a union of path-components.
\end{definition}
Thus the components of $G\Iso(A,B;0)$ are just elements in $\H^1\tilde{F}^2\cone(\DR(A/R) \to \DR(B/R))\llbracket\hbar\rrbracket$, with equivalence classes of $n$-morphisms given by elements in $\H^{1-n}$ of the same complex.
\subsubsection{Compatible structures}
In addition to our morphism $A \to B$, we now fix a strict line bundle $M$ over $B$, in the sense of Definition \ref{bistrictlb}.
\begin{definition}\label{Qcompatdef}
We say that a generalised isotropic structure $(\omega,\lambda)$ and a quantisation $\Delta$ of the pair $(A,M)$ are $w$-compatible (or a $w$-compatible pair) if
\[
[\mu_w(\omega, \Delta)] = [-\pd_{\hbar^{-1}}(\Delta)] \in \H^1(\tilde{F}^2T_{\Delta}Q\widehat{\Pol}_w(A,M;0)) \cong \H^1(\tilde{F}^2T_{\Delta}Q\widehat{\Pol}(A,M;0)),
\]
where $\sigma=-\pd_{\hbar^{-1}}$ is the canonical tangent vector of Definition \ref{Qsigmadef}.
\end{definition}
\begin{definition}\label{vanishingdef}
Given a simplicial set $Z$, an abelian group object $A$ in simplicial sets over $Z$, a space $X$ over $Z$ and a morphism $s \co X \to A$ over $Z$, define the homotopy vanishing locus of $s$ over $Z$ to be the homotopy limit of the diagram
\[
\xymatrix@1{ X \ar@<0.5ex>[r]^-{s} \ar@<-0.5ex>[r]_-{0} & A \ar[r] & Z}.
\]
\end{definition}
\begin{definition}\label{Qcompdef}
Define the space $Q\Comp_w(A,M;0)$ to be the homotopy vanishing locus of
\[
(\mu_w - \sigma) \co G\Iso(A,B;0) \by Q\cP_w(A,M;0) \to TQ\cP_w(A,M;0)
\]
over $Q\cP_w(A,M;0)$.
We define a cofiltration on this space by setting $ Q\Comp_w(A,M;0)/G^j$ to be the homotopy vanishing locus of
\[
(\mu_w - \sigma) \co (G\Iso(A,B;0)/G^j) \by (Q\cP_w(A,M;0)/G^j) \to TQ\cP_w(A,M;0)/G^j
\]
over $Q\cP_w(A,M;0)/G^j $.
\end{definition}
Thus $Q\Comp_w(A,M;0)$ consists of data $(\omega, \lambda, \Delta,\alpha)$, where $(\omega, \lambda)$ is a generalised isotropic structure, $\Delta$ a quantisation of $(A,M)$, and $\alpha$ a homotopy between $\mu_w(\omega,\lambda)$ and $\sigma(\Delta)$.
\begin{definition}
Define $Q\Comp_w(A,M;0)^{\nondeg} \subset Q\Comp_w(A,M;0)$ to consist of $w$-compatible quantised pairs $(\omega, \Delta)$ with $\Delta$ non-degenerate. This is a union of path-components, and by \cite[Lemma \ref{poisson-compatnondeg}]{poisson} any pre-symplectic form compatible with a non-degenerate quantisation is symplectic. The same argument shows that any isotropic pair compatible with a non-degenerate quantisation is Lagrangian, so there is a natural projection
\[
Q\Comp_w(A,M;0)^{\nondeg}\to G\Lag(A,B;0)
\]
as well as the canonical map
\[
Q\Comp_w(A,M;0)^{\nondeg} \to Q\cP_w(A,M;0)^{\nondeg}.
\]
\end{definition}
\subsection{The equivalences}
\begin{proposition}\label{QcompatP1}
For any Levi decomposition $w$ of $\GT$, the canonical map
\begin{eqnarray*}
Q\Comp_w(A,M;0)^{\nondeg} \to Q\cP_w(A,M;0)^{\nondeg}\simeq Q\cP(A,M;0)^{\nondeg}
\end{eqnarray*}
is a weak equivalence. In particular, $w$ gives rise to a morphism
\[
Q\cP(A,M;0)^{\nondeg} \to G\Lag(A,B;0)
\]
(from non-degenerate quantisations to generalised Lagrangians)
in the homotopy category of simplicial sets.
\end{proposition}
\begin{proof}
The proof of \cite[Proposition \ref{poisson-compatP1}]{poisson} adapts to this context, along much the same lines as \cite[Proposition \ref{DQnonneg-QcompatP1}]{DQnonneg}. The essential idea is that non-degeneracy of a quantisation $\Delta$ ensures that $\mu_w(-,\Delta)$ is a filtered quasi-isomorphism, so the generalised Lagrangian data $(\omega, \lambda)$ associated to $\Delta$ are given by
\[
-\mu_w(-,\Delta)^{-1} (\pd_{\hbar^{-1}}\Delta).
\]
\end{proof}
Write $\widehat{\Pol}(A,B;0):=Q\widehat{\Pol}(A,M;0)/G^1$, with a filtration $F$ given by the image of the filtration $\tilde{F}$; then also write $\Comp:= Q\Comp_w/G^1$, $\cP:= Q\cP/G^1$, $\Lag:= G\Lag/G^1$ and $\Iso:= G\Iso/G^1$. In particular, observe that since $\widehat{\Pol}(A,B;0)$ is already a $P_2$-algebra, the space $\Comp$ is independent of the Levi decomposition $w$ of $\GT$.
The following proposition establishes an equivalence between Lagrangians and non-degenerate co-isotropic Poisson structures in the $0$-shifted setting:
\begin{proposition}\label{compatcor2}
The canonical maps
\begin{eqnarray*}
\Comp(A,B;0)^{\nondeg} &\to& \cP(A,B;0)^{\nondeg} \\
\Comp(A,B;0)^{\nondeg} &\to& \Lag(A,B;0)
\end{eqnarray*}
are weak equivalences.
\end{proposition}
\begin{proof}
The first equivalence is given by observing that the equivalences in Proposition \ref{QcompatP1} respect the cofiltration $G$. For the second equivalence, we adapt the proofs of \cite[Corollary \ref{poisson-compatcor1} and Proposition \ref{poisson-level0prop}]{poisson}, establishing the equivalence by induction on the filtration $F$.
The space $\Lag(A,B;0)/F^3$ is just given by elements $(\omega, \lambda)$ in the cocone of $\hat{\Tot}\Omega^2_{A/R}\to \hat{\Tot}\Omega^2_{B/R}$ which are non-degenerate in the sense that $(\omega, \lambda)^{\sharp}$ induces a quasi-isomorphism
\[
\begin{CD}
\hat{\HHom}_A(\Omega^1_{A/R},A^0) @>{S}>> \hat{\HHom}_B(\Omega^1_{B/A}, B^0)[1]\\
@V{\omega^{\sharp}}VV @VV{\lambda^{\sharp}}V \\
\hat{\Tot}(\Omega^1_{A/R}\ten_AA^0) @>>> \hat{\Tot}(\Omega^1_{B/R}\ten_BB^0)
\end{CD}
\]
of diagrams.
Since $ \cP(A,B;0)/F^3$ is given by elements $(\varpi,\pi)$ in the cocone of $S \co \hat{\HHom}_A(\Omega^2_{A/R},A) \to \hat{\HHom}_B(\CoS^2_B\Omega^1_{B/A}, B)[2] $, the essentially unique Poisson structure compatible with $(\omega, \lambda)$ is just given by the image of $(\omega, \lambda)$ under the symmetric square of the homotopy inverse of $(\omega, \lambda)^{\sharp}$, so
\[
\Comp(A,B;0)^{\nondeg}/F^3 \xra{\sim} \Lag(A,B;0)/F^3.
\]
Adapting the proof of \cite[Corollary \ref{poisson-compatcor1}]{poisson},
there is a commutative diagram
\[
\begin{CD}
(\Comp(A,B;0)^{\nondeg}/F^{p+1})_{(\omega,\lambda,\varpi, \pi)} @>>>(\Lag(A,B;0)/F^{p+1})_{(\omega,\lambda)}\\
@VVV @VVV \\
(\Comp(A,B;0)^{\nondeg}/F^{p})_{(\omega,\lambda,\varpi,\pi)} @>>>(\Lag(A,B;0)/F^{p})_{(\omega, \lambda)}\\
@VVV @VVV \\
\mmc(M(\omega,\lambda,\varpi,\pi,p)[1])@>>>\mmc(\hat{\Tot}\cocone (\Omega^{p}_{A/R}\to \Omega^{p}_{B/R})[2-p])
\end{CD}
\]
of fibre sequences, where
$
M(\omega,\lambda,\varpi,\pi,p)
$
is defined to be the homotopy limit of the diagram
\[
\begin{CD}
\hat{\Tot}\Omega^{p}_{A/R}[1-p] @>>> \hat{\Tot}\Omega^{p}_{B/R}[1-p]\\
@V{\L^{p}(\varpi^{\sharp})}VV @VV{\L^{p}(\pi^{\sharp})}V\\
\hat{\HHom}_A(\Omega^p_{A/R},A)[1-p] @>S>> \hat{\HHom}_B( \CoS^p_B\Omega^1_{B/A},B)[1] \\
@A{\nu(\omega, \varpi) - (p-1)}AA @AA{\nu(\lambda, \pi) - (p-1)}A \\
\hat{\HHom}_A(\Omega^p_{A/R},A)[1-p] @>S>> \hat{\HHom}_B( \CoS^p_B\Omega^1_{B/A},B)[1].
\end{CD}
\]
Here $\nu(\omega, \varpi)$ is the tangent map of $\mu(\omega, -)$ at $\varpi$, given by
\[
\mu(\omega, \varpi+ \rho \eps) = \mu(\omega, \varpi) + \nu(\omega, \varpi)(\rho)\eps
\]
for $\eps^2=0$, with $\nu(\lambda, \pi)$ defined similarly.
Arguing as in \cite[Lemma \ref{DQvanish-tangentlemma}]{DQvanish}, $\nu(\omega, \varpi)\simeq p (\varpi^{\sharp} \circ \omega^{\sharp})$ and $\nu(\lambda, \pi)\simeq p(\pi^{\sharp} \circ \lambda^{\sharp})$ in the diagram above. Since we are in the non-degenerate setting, $\varpi^{\sharp} \circ \omega^{\sharp}$ and $\pi^{\sharp} \circ \lambda^{\sharp}$ are homotopic to the identity maps on their respective spaces, so $\nu(\omega, \varpi)$ and $\nu(\lambda, \pi)$ are homotopic to multiplication by $p$.
Because $p-(p-1)$ is invertible, we then get
\[
M(\omega,\lambda,\varpi,\pi,p) \simeq \hat{\Tot}\cocone (\Omega^{p}_{A/R}\to \Omega^{p}_{B/R})[1-p].
\]
Substituting in the diagram of fibre sequences then gives
\begin{align*}
&(\Comp(A,B;0)^{\nondeg}/F^{p+1}) \\
&\simeq(\Comp(A,B;0)^{\nondeg}/F^{p})\by^h_{(\Lag(A,B;0)/F^{p}) }(\Lag(A,B;0)/F^{p+1}),
\end{align*}
from which the desired equivalence $(\Comp(A,B;0)^{\nondeg}/F^{p+1})\simeq (\Lag(A,B;0)/F^{p+1})$ follows by induction.
\end{proof}
\begin{proposition}\label{quantprop}
For any Levi decomposition $w$ of $\GT$, the maps
\begin{align*}
&Q\cP_w(A,M;0)^{\nondeg}/G^j \\
&\to(Q\cP_w(A,M;0)^{\nondeg}/G^2)\by^h_{(G\Lag(A,B;0)/G^2)}(G\Lag(A,B;0)/G^j) \\
&\simeq (Q\cP_w(A,M;0)^{\nondeg}/G^2)\by \prod_{2 \le i<j } \mmc(\cone(\DR(A/R) \to \DR(B/R))\hbar^i)
\end{align*}
coming from Proposition \ref{QcompatP1} are weak equivalences for all $j \ge 2$.
\end{proposition}
\begin{proof}
The proofs of \cite[Proposition \ref{DQvanish-quantprop}]{DQvanish} and \cite[Proposition \ref{DQnonneg-quantprop}]{DQnonneg} generalise to this setting. For $(\omega,\lambda,\varpi,\pi) \in \Comp(A,B;0)$, there is a commutative diagram
\[
\begin{CD}
(Q\Comp_w(A,M;0)/G^{j+1})_{(\omega,\lambda,\varpi, \pi)} @>>>(G\Iso(A,B;0)/G^{j+1})_{(\omega,\lambda)}\\
@VVV @VVV \\
(Q\Comp_w(A,M;0)/G^j)_{(\omega,\lambda,\varpi,\pi)} @>>> (G\Iso(A,B;0)/G^{j})_{(\omega, \lambda)}\\
@VVV @VVV \\
\mmc(N(\omega,\lambda,\varpi,\pi,j)[1]) @>>>\mmc(\cone(F^{2-j}\DR(A/R)\to F^{2-j}\DR(B/R)) \hbar^{j})
\end{CD}
\]
of fibre sequences, for a space $N(\omega,\lambda,\varpi,\pi,j)$ defined as follows.
We set $N(\omega,\lambda,\varpi,\pi,j)$ to be the homotopy limit of the diagram
\[
\begin{CD}
\cocone( F^{2-j}\DR(A/R)\to F^{2-j}\DR(B/R))\hbar^{j} \\
@VV{\mu(-,-,\varpi,\pi)}V \\
F^{2-j}T_{(\varpi,\pi)}\widehat{\Pol}(A,B;0)\hbar^{j} \\
@AA{\nu(\omega,\lambda,\varpi, \pi)+ \pd_{\hbar^{-1}}}A \\
(F^{2-j}\widehat{\Pol}(A,B;0)\hbar^{j},\delta_{\varpi,\pi})= F^{2-j}T_{(\varpi,\pi)}\widehat{\Pol}(A,B;0)\hbar^{j-1},
\end{CD}
\]
where $\nu(\omega,\lambda,\varpi, \pi)$ is the tangent map of $\mu(\omega,\lambda,-, -)$ at $(\varpi,\pi)$, given by
\[
\mu(\omega,\lambda, \varpi+\tau \eps, \pi+ \rho \eps) = \mu(\omega,\lambda, \varpi, \pi) + \nu(\omega,\lambda, \varpi, \pi)(\tau, \rho)\eps
\]
with $\eps^2=0$.
On the associated graded pieces, the proof of \cite[Proposition \ref{DQnonneg-quantprop}]{DQnonneg} shows that $ \gr_F^p(\nu(\omega,\lambda,\varpi, \pi) + \pd_{\hbar^{-1}})$ is homotopic to $(1-j)\hbar$.
As this is an isomorphism for all $j \ge 2$,
the map $N(\omega, \lambda,\varpi,\pi,j) \to \cocone(F^{2-j}\DR(A/R)\to F^{2-j}\DR(B/R)) \hbar^{j}$ is a quasi-isomorphism, which inductively gives the required weak equivalences from the fibre sequences above.
\end{proof}
\begin{remark}\label{quantrmk}
Taking the limit over all $j$, Proposition \ref{quantprop} gives an equivalence
\begin{align*}
&Q\cP_w(A,M;0)^{\nondeg}\\
& \simeq (Q\cP_w(A,M;0)^{\nondeg}/G^2)\by \prod_{i \ge 2} \mmc(\cone(\DR(A/R) \to \DR(B/R))\hbar^i);
\end{align*}
in particular, this means that there is a canonical map
\[
(Q\cP(A,M;0)^{\nondeg}/G^2) \to Q\cP(A,M;0)^{\nondeg},
\]
dependent on $w\in \Levi_{\GT}$, corresponding to the distinguished point $0$.
Even if $\pi$ is degenerate, a variant of Proposition \ref{quantprop} still holds. Because $\varpi^{\sharp} \circ \omega^{\sharp}$ and $\pi^{\sharp} \circ \lambda^{\sharp} $ are homotopy idempotent, the map $\gr_F^p\nu(\omega,\lambda, \varpi, \pi)$ has eigenvalues in the interval $[0,p]$, so we just replace $(1-j)$ with an operator having eigenvalues in the interval $[1-p-j, 1-j]$.
Since this is still a quasi-isomorphism for $j>1$, we have
\begin{align*}
&Q\Comp_w(A,M;0) \\
&\simeq (Q\Comp_w(A,M;0)/G^2)\by \prod_{i \ge 2} \mmc(\cocone(\DR(A/R)\to \DR(B/R))\hbar^i),
\end{align*}
giving a sufficient first-order criterion for degenerate quantisations to exist.
\end{remark}
\section{Global quantisations}\label{stacksn}
As in \cite[\S \ref{DQvanish-Artinsn}]{DQvanish} and \cite[\S \ref{DQnonneg-stacksn}]{DQnonneg}, in order to pass from stacky CDGAs to derived Artin stacks, we will exploit a form of \'etale functoriality. We then introduce the notion of self-duality and thus establish the existence of quantisations for derived Lagrangians.
\subsection{Diagrams of quantised pairs}\label{Artindiagsn}
\begin{definition}
Given a small category $I$, an $I$-diagram $(A,F)$ in almost commutative stacky DGAAs over $R$, and a filtered $A$-bimodule $M$ in $I$-diagrams of chain cochain complexes for which the left and right $\gr^FA$-module structures on $\gr^FM$ agree, we define the filtered chain cochain complex
\[
\C\C^{\bt}_{R,BD_1}(A,M)
\]
to be the equaliser of the obvious diagram
\[
\prod_{i\in I} \C\C^{\bt}_{R,BD_1}(A(i),M(i)) \implies \prod_{f\co i \to j \text{ in } I} \C\C^{\bt}_{R,BD_1}(A(i),M(j)),
\]
for the $BD_1$ Hochschild complexes of Definition \ref{HHdefa}.
We then write $\C\C^{\bt}_{R,BD_1}(A):= \C\C^{\bt}_{R,BD_1}(A,A)$, which inherits the structure of a stacky brace algebra from each $\C\C^{\bt}_{R,BD_1}(A(i),A(i))$.
\end{definition}
Note that if $u \co I \to J$ is a morphism of small categories and $A$ is a $J$-diagram of almost commutative stacky DGAAs over $R$, with $B= A \circ u$, then we have a natural map $ \C\C^{\bt}_R(A) \to \C\C^{\bt}_R(B)$.
In order to ensure that $\C\C^{\bt}_R(A,M)$ has the correct homological properties, we now consider categories of the form $[m]= (0 \to 1 \to \ldots \to m)$. Similarly to \cite[Lemma \ref{DQnonneg-calcCClemma}]{DQnonneg}, the construction $\C\C^{\bt}_R(A,M)$ preserves weak equivalences provided we restrict to pairs $(A,M)$ for which each $A(i)$ is cofibrant as an $R$-module and $M$ is fibrant for the injective model structure (i.e. the maps $M(i) \to M(i+1)$ are all surjective).
As in \cite[\S \ref{DQvanish-Artindiagramsn}]{DQvanish}, we can do much the same for differential operators:
\begin{definition}
Given a small category $I$, an $I$-diagram $B$ of stacky CDGAs over $R$, and $B$-modules $M,N$ in chain cochain complexes, define the
filtered chain cochain complex $\cDiff_{B/R}(M,N)$
to be the equaliser of the obvious diagram
\[
\prod_{i \in I} \cDiff_{B(i)/R}(M(i),N(i)) \implies \prod_{f\co i \to j \text{ in } I} \cDiff_{B(i)/R}(M(i),f_*N(j)),
\]
and write $\cDiff_{B/R}$ for $\cDiff_{B/R}(B,B)$.
\end{definition}
If $B$ is an $[m]$-diagram in $DG^+dg\CAlg(R)$ which is cofibrant and fibrant for the injective model structure (i.e. each $B(i)$ is cofibrant in the model structure of Lemma \ref{bicdgamodel} and the maps $B(i) \to B(i+1)$ are surjective), then observe that $\gr^F_k\cDiff_{B/R}$
is a model for the derived $\Hom$-complex $\oR \cHom_B(\CoS^k_B\Omega^1_{B/R},B)$.
The constructions in \S \ref{affinesn} now all carry over verbatim, generalising from morphisms of cofibrant stacky CDGAs to morphisms $A \to B$ of $[m]$-diagrams of stacky CDGAs which are cofibrant and fibrant for the injective model structure. In particular, for any such morphism and a strict line bundle $M$ over $B$, we have a DGLA
\[
Q\widehat{\Pol}(A,M;0)[1]
\]
of $0$-shifted relative quantised polyvectors as in Definition \ref{QPoldef}, and a space
\[
Q\cP(A,M;0)
\]
of quantisations of the pair $(A,M)$ as in Definition \ref{QPdef}.
In order to identify $Q\cP/G^1$ with $\cP$, and for notions such as non-degeneracy to make sense, we have to assume that for our fibrant cofibrant $[m]$-diagrams $A,B$ of stacky CDGAs, each $A(j),B(j)$ satisfies Assumption \ref{biCDGAprops}, so there exists $N$ for which the chain complexes $(\Omega^1_{A(j)/R}\ten_{A(j)}A(j)^0)^i $ are acyclic for all $i >N$, and similarly for $B$.
\begin{definition}\label{ICompdef}
Given a morphism $A \to B$ of fibrant cofibrant $[m]$-diagrams in stacky CDGAs (for the injective model structure)
define
\[
G\Iso(A,B;0):= G\Iso(A(0),B(0);0)= \Lim_{i\in [m]} G\Iso(A(i),B(i);0),
\]
for the space $G\Iso$ of generalised isotropic structures of Definition \ref{GPreSpdef}, and define the space $G\Lag(A,B;0)$ of generalised Lagrangians similarly.
Given a choice $w \in \Levi_{\GT}(\Q)$ of Levi decomposition for $\GT$, define
\[
\mu_w \co G\Iso(A,B;0) \by Q\cP_w(A,M;0) \to TQ\cP_w(A,M;0)
\]
by setting $\mu_w(\omega,\lambda, \Delta)(i):= \mu_w(\omega(i),\lambda(i), \Delta(i)) \in TQ\cP_w(A(i),M(i);0)$ for $i \in [m]$, and let $ Q\Comp_w(A,M;0)$ be the homotopy vanishing locus of
\[
(\mu_w - \sigma) \co G\Iso(A,B;0) \by Q\cP_w(A,M;0) \to TQ\cP_w(A,M;0)
\]
over $Q\cP_w(A,M;0)$.
\end{definition}
As in \cite[\S \ref{poisson-bidescentsn}]{poisson}, if we let $(DG^+dg\CAlg(R)^{[1]})^{\et} \subset DG^+dg\CAlg(R)^{[1]}$ be the wide subcategory of the arrow category with only homotopy formally \'etale morphisms (see Definition \ref{hfetdef}) between arrows, then
for any of the constructions $F$ based on $Q\cP$,
\cite[Definition \ref{poisson-inftyFdef}]{poisson} adapts to give
an $\infty$-functor
\[
\oR F \co \oL (DG^+dg\CAlg(R)^{[1]})^{\et} \to \oL s\Set
\]
from the $\infty$-category of stacky CDGAs and homotopy formally \'etale morphisms to the $\infty$-category of simplicial sets.
This construction has the property that $(\oR F)(\phi \co A\to B) \simeq F(\phi\co A\to B)$
for all morphisms $\phi$ of cofibrant stacky CDGAs $A$ over $R$.
Immediate consequences of Propositions \ref{QcompatP1} and \ref{quantprop} are that for any $w \in \Levi_{\GT}(\Q)$, the canonical maps
\begin{align*}
& Q\Comp_w(A,M;0)^{\nondeg} \to Q\cP_w(A,M;0)^{\nondeg}\simeq Q\cP(A,M;0)^{\nondeg};\\
& Q\cP_w(A,M;0)^{\nondeg}/G^j\\
&\to
(Q\cP_w(A,M;0)^{\nondeg}/G^2)\by \prod_{2 \le i<j } \mmc(\cocone(\DR(A/R) \to \DR(B/R))\hbar^i[1])
\end{align*}
are weak equivalences of $\infty$-functors on the full subcategory of $(\oL DG^+dg\CAlg(R)^{[1]})^{\et}$ consisting of objects satisfying the conditions of Assumption \ref{biCDGAprops}, for all $j \ge 2$.
\subsection{Descent and line bundles}\label{lbsn}
We now extend the constructions above to line bundles, via $\bG_m$-equivariance exactly as in \cite[\S \ref{DQvanish-lbsn}]{DQvanish}.
On $DG^+dg\Alg(\Q)$, we consider the functor $(B\bG_m)^{\Delta}\circ D$, which sends $B$ to the nerve
of the groupoid
\[
\mathrm{TLB}(B):= [\z^1(\z_0B)/(\z_0B^0)^{\by}]
\]
of trivial line bundles, where $f \in (\z_0B^0)^{\by}$ acts on $\z^1(\z_0B)$ by addition of $\pd \log f = f^{-1}\pd f$.
For any morphism $A \to B$ of cofibrant stacky CDGAs over $R$, we can extend $ Q\cP(A,B;0)$ to a simplicial representation of the
groupoid $\mathrm{TLB}(B)$ above by sending an object
$c \in \z^1(\z_0B)$ to $Q\cP(A,B_c;0)$, with $(\z_0B^0)^{\by}$ acting via functoriality for line bundles. Note that the quotient representation $Q\cP(-,-;0)/G^1= \cP(-,-;0)$ is trivial; we also set $G\Iso$ to be a trivial representation $c \mapsto G\Iso(A,B;0)$.
\begin{definition}
For any of the constructions $F$ of \S \ref{Artindiagsn}, let $\oR (F/^h\bG_m)$ be the $\infty$-functor on $\oL dg\CAlg(R)^{\et}$ given by applying the construction of \cite[\S \ref{poisson-bidescentsn}]{poisson} to the right-derived functor of the Grothendieck construction
\[
B \mapsto
\holim_{\substack{ \lra \\ c \in \mathrm{TLB}(B)}} F(A,B_c),
\]
then taking hypersheafification with respect to homotopy formally \'etale coverings.
\end{definition}
Given a derived Artin $N$-stack $X$, and $A \in DG^+dg\CAlg(R)$, we say that an element $f \in \ho \Lim_i X(D^iA)$ is homotopy formally \'etale if the induced morphism
\[
N_cf_0^*\bL_{X/R} \to \{ \Tot \sigma^{\le q} \oL\Omega^1_{A/R}\ten^{\oL}_AA^0\}_q
\]
from \cite[\S \ref{poisson-Artintgtsn}]{poisson} is a pro-quasi-isomorphism.
Given a morphism $X \to Y$ of derived Artin $N$-stacks, we then write
$(dg_+DG\Aff_{\et}^{[1]}\da X/Y)$ for the $\infty$-category consisting of morphisms $\Spec B \to \Spec A$ in $dg_+DG\Aff_R$, equipped with
homotopy formally \'etale elements of $ \ho \Lim_i X(D^iB)\by^h_{Y(D^iB)}Y(D^iA)$; morphisms in this $\infty$-category are given by
compatible homotopy formally \'etale maps $A \to A'$, $B \to B'$.
\begin{definition}
Given a map $X \to Y$ of strongly quasi-compact derived Artin $N$-stacks over $R$, a line bundle $\sL$ on $X$ and any of the functors $F$ above, define
$
F(Y,\sL)
$
to be the homotopy limit of
\[
\oR(F/^h\bG_m)(A,B)\by_{\oR (*/^h\bG_m)(B)}^h\{\sL|_{B}\}
\]
over objects $\Spec B \to \Spec A$ in the $\infty$-category $(dg_+DG\Aff_{\et}^{[1]}\da X/Y)$.
\end{definition}
\begin{remark}
In many cases, we can take smaller categories than $(dg_+DG\Aff_{\et}^{[1]}\da X/Y)$ on which to calculate the homotopy limit. When the $\bG_m$-action on $F$ is trivial, we can restrict to compatible hypergroupoid resolutions as in \cite[\S \ref{poisson-bidescentsn}]{poisson}. When $X$ and $Y$ are derived Deligne--Mumford $N$-stacks, we do not need stacky CDGAs, and can just work over $(DG\Aff_{\et}^{[1]}\da X/Y)$.
When $X$ and $Y$ are $1$-geometric derived Artin stacks, we may just consider the $\infty$-category of commutative diagrams
\[
\begin{CD}
U @>f>> X \\
@VVV @VVV \\
V @>g>> Y
\end{CD}
\]
with $U,V$ derived affines and the maps $f,g$ being smooth; to this we associate the morphism $\Omega^{\bt}_{U/X} \to \Omega^{\bt}_{V/Y}$ of stacky CDGAs as in \S \ref{stackyCDGAsn}, giving an object of $(dg_+DG\Aff_{\et}^{[1]}\da X/Y)$. Following Remark \ref{curvedrmk}, this means that an object of $Q\cP(Y,\sL;0)$ is a form of curved $A_{\infty}$ deformation of the presheaf $V \mapsto \Tot \Omega^{\bt}_{V/Y}$, acting on a deformation of the presheaf $U \mapsto \Tot \Omega^{\bt}_{U/X}\ten_{f^{-1}\O_X}f^{-1}\sL$ given by $R$-linear differential operators.
\end{remark}
Adapting \cite[Definition \ref{DQvanish-nondegstack}]{DQvanish} along the lines of Definition \ref{Qnondegdef} gives:
\begin{definition}\label{nondegstack}
Say that a quantisation $\Delta \in Q\cP(Y,\sL;0)/G^k$ is non-degenerate if the induced maps
\begin{align*}
\Delta_{2,Y}^{\sharp}\co \bL_{Y/R} \to \oR\hom_{\sO_Y}(\bL_{Y/R}, \sO_{Y})\\
\Delta_{2,X}^{\sharp}\co \bL_{X/R} \to \oR\hom_{\sO_X}(\bL_{X/Y}, \sO_{X})[1]
\end{align*}
are quasi-isomorphisms and $\bL_{X}, \bL_{Y}$ are perfect.
\end{definition}
Propositions \ref{compatcor2} and \ref{quantprop} now readily generalise (substituting the relevant results from \cite[\S \ref{poisson-Artinsn}]{poisson} to pass from local to global), giving:
\begin{proposition}\label{prop3}
For any $X \to Y$, any line bundle $\sL$ on $X$ and any $w \in \Levi_{\GT}(\Q)$, the canonical maps
\begin{align*}
\Comp(Y,X;0)^{\nondeg} &\to \cP(Y,X;0)^{\nondeg} \\
\Comp(Y,X;0)^{\nondeg} &\to \Lag(Y,X;0)\\
Q\Comp_w(Y,\sL;0)^{\nondeg} &\to Q\cP(Y,\sL;0)^{\nondeg}
\end{align*}
\begin{align*}
Q\Comp_w(Y,\sL;0) &\to (Q\Comp_w(Y,\sL;0)/G^2) \by^h_{(G\Iso(Y,X;0)/G^2)} G\Iso(Y,X;0)\simeq \\
&(Q\Comp_w(Y,\sL;0)/G^2)\by \prod_{i \ge 2} \mmc(\cone(\DR(Y/R)\to \DR(X/R))\hbar^i)
\end{align*}
are filtered weak equivalences. In particular, $w$ gives rise to a morphism
\[
Q\cP(Y,\sL;0)^{\nondeg} \to G\Lag(Y,X;0)
\]
in the homotopy category of simplicial sets.
\end{proposition}
\begin{remark}\label{cfBGKP}
The results of Proposition \ref{prop3} are compatible with those of \cite[Theorem 1.1.4]{BaranovskyGinzburgKaledinPecharich}, which fixes a quantisation $\tilde{\sO}_{Y}$ of a smooth variety $Y$ and describes quantisations of line bundles $\sL$ on smooth Lagrangians $X$, compatible with $\tilde{\sO}_{Y}$. They show that the obstructions to quantising $\sL$ in this way are a class $c_1 (\sL) -\half c_1 (K_X ) -\At(\tilde{\sO}_Y , X) \in \H^2F^1\DR(X)$ and a power series in $\hbar^2\H^2\DR(X)\llbracket \hbar \rrbracket$ determined by $\tilde{\sO}_Y$. Their first condition corresponds to our first-order obstruction, i.e. the obstruction to lifting the co-isotropic structure from $\cP(Y,X;0)^{\nondeg}$ to $Q\cP(Y,\sL;0)^{\nondeg}/G^2$.
There are no further obstructions to quantising the pair $(\O_Y,\sL)$, but
their second condition is to ensure that the resulting quantisation of $\O_Y$ is $\tilde{\O}_Y$, with the obstruction then coming from the higher-order coefficients of the exact sequence
\[
\H^1(\cone(\DR(Y)\to \DR(X)))\llbracket \hbar \rrbracket \to \H^2\DR(Y)\llbracket \hbar \rrbracket \to \H^2\DR(X)\llbracket \hbar \rrbracket.
\]
When $\sL^{\ten 2}$ has a right $\sD$-module structure, the Chern class $c_1 (\sL) -\half c_1 (K_X )$ vanishes. Moreover, whenever there is an isomorphism $\tilde{\O}_Y\simeq \tilde{\O}_Y^{\op}$ of quantisations which is semilinear with respect to the transformation $\hbar \mapsto -\hbar$, the calculations of \cite[Remark 5.3.4]{BaranovskyGinzburgKaledinPecharich} show that $\At(\tilde{\O}_Y,X)=0$. Thus their obstruction does indeed vanish in the scenario of
Theorem \ref{quantpropsd} below.
\end{remark}
\subsection{Self-duality}\label{sdsn}
In order to eliminate the potential first order obstruction to quantising a generalised Lagrangian in Proposition \ref{prop3}, we now introduce the notion of self-duality, combining the ideas of \cite[\S \ref{DQvanish-sdsn}]{DQvanish} and \cite[\S \ref{DQnonneg-sdsn}]{DQnonneg}.
We wish to consider line bundles $\sL$ on $X$ equipped with an involutive equivalence $(-)^t \co \sD(\sL) \simeq \sD(\sL)^{\op}$. Such an equivalence is the same as a right $\sD$-module structure on $\sL^{\ten 2}$. Since a dualising line bundle $K_{X}$ on $X$ naturally has the structure of a right $\sD$-module (see for instance \cite[\S 2.4]{GaitsgoryRozenblyumCrystal} for a proof in the derived setting), we will typically take $\sL$ to be a square root of $K_{X}$. In this case, the equivalence $\sD(\sL) \simeq \sD(\sL)^{\op}$ comes from the equivalences $ \sL \simeq \sL^{\vee}$ and $\sD(\sE)^{\op} \simeq \sD(\sE^{\vee})$, where $\sE^{\vee}:= \oR\hom_{\sO_X}(\sE,K_X)$.
\begin{definition}
Given a morphism $\phi \co A\to B$ of cofibrant stacky CDGAs over $R$ and a strict line bundle $M$ over $B$, equipped with a contravariant involution $(-)^t$ of $\cDiff_{B/R}(M)$, we define an involution $(-)^*$ on
the DGLA $Q\widehat{\Pol}(A,M;0)[1]$ by
\[
\Delta^*(\hbar):= i(\Delta)(-\hbar)^t,
\]
for the brace algebra involution
\begin{align*}
-i &\co (\C\C_{R,BD_1}(A, \cDiff_{B/R}(M))_{[1]} \rtimes \C\C_{R,BD_1}(A))^{\op}\\
&\to \C\C_{R,BD_1}(A, \cDiff_{B/R}(M)^{\op})_{[1]} \rtimes \C\C_{R,BD_1}(A)
\end{align*}
adapted from Lemma \ref{involutiveHH}.
\end{definition}
Since $(-)^*$ is a quasi-isomorphism of filtered DGLAs, it gives rise to an involutive weak equivalence
\[
(-)^*\co Q\cP(A,M;0) \to Q\cP(A,M;0).
\]
\begin{lemma}\label{filtsd}
For the filtration $G$ induced on $\tilde{F}^pQ\widehat{\Pol}(A,M;0)^{sd}$ by the corresponding filtration on $\tilde{F}^p Q\widehat{\Pol}(A,M;0)$, we have
\[
\gr_G^k \tilde{F}^pQ\widehat{\Pol}(A,M;0)^{sd} \simeq \begin{cases}
\gr_G^k \tilde{F}^pQ\widehat{\Pol}(A,M;0) & k \text{ even}\\
0 & k \text{ odd}.
\end{cases}
\]
\end{lemma}
\begin{proof}
This combines \cite[Lemma \ref{DQvanish-filtsd}]{DQvanish} and \cite[Lemma \ref{DQnonneg-filtsd}]{DQnonneg}. It follows because Lemma \ref{involutiveHH} ensures that the involution acts trivially on $\gr_G^0Q\widehat{\Pol}(A,M;0)$. It therefore acts as multiplication by $(-1)^k$ on $ \gr_G^kQ\widehat{\Pol}(A,M;0)= \hbar^k\gr_G^0Q\widehat{\Pol}(A,M;0)$.
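Concretely, since we work over $\Q$, homotopy fixed points for the $\Z/2$-action can be computed by strict invariants on each graded piece, giving
\[
\gr_G^k \tilde{F}^pQ\widehat{\Pol}(A,M;0)^{sd} = \ker\left(1-(-1)^k \co \gr_G^k \tilde{F}^pQ\widehat{\Pol}(A,M;0) \to \gr_G^k \tilde{F}^pQ\widehat{\Pol}(A,M;0)\right),
\]
which is the whole space for $k$ even and $0$ for $k$ odd.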
\end{proof}
\begin{definition}\label{selfdualdef}
For a line bundle $\sL$ on $X$ with a right $\sD$-module structure on $\sL^{\ten 2}$,
we define
the space
\[
Q\cP(Y,\sL;0)^{sd}
\]
of self-dual quantisations to be the space of homotopy fixed points of the $\Z/2$-action on $Q\cP(Y,\sL;0)$ generated by $(-)^*$.
\end{definition}
\begin{remark}\label{curvedsdrmk}
Following Remark \ref{curvedrmk},
a self-dual quantisation of $(X \xra{\phi}Y,\sL)$ gives rise to a
curved $A_{\infty}$-deformation $\tilde{\O}_{Y}$ of $\hat{\Tot}\sO_{Y}$ over $R\llbracket \hbar \rrbracket$, equipped with a contravariant involution $*$ which is semilinear under the transformation $\hbar \mapsto -\hbar$, together with a curved involutive $A_{\infty}$-morphism $\phi^{-1}\tilde{\O}_{Y} \to \sD_{\sO_{X}/R}(\sL)\llbracket \hbar \rrbracket$.
More is true: by \cite[Proposition \ref{DQnonneg-Perprop}]{DQnonneg}, a quantisation gives a curved $A_{\infty}$ deformation of the dg category $\per_{\dg}(\sO_{Y})$ of perfect complexes on $Y$, with self-dual quantisations incorporating a semilinear lift of the involution $\oR \hom_{\sO_{Y}}(-, \sO_{Y})$. A self-dual quantisation of the pair $(Y,\sL)$ thus gives a curved semilinearly involutive $A_{\infty}$-deformation of the involutive category $\per_{\dg}(\sO_{Y})$ fibred over $\per_{\dg}(\sO_{X})$ via the functor
\begin{align*}
(\per_{\dg}(\sO_{Y}),\oR \hom_{\sO_{Y}}(-, \sO_{Y})) &\to (\per_{\dg}(\sO_{X}),\oR \hom_{\sO_{X}}(-, \sL^{\ten 2}))\\
\sF &\mapsto \phi^*\sF \ten \sL,
\end{align*}
with an additional restriction of the curvature of the deformation in terms of differential operators.
Adapting \cite[Remark \ref{DQnonneg-sdgerbermk}]{DQnonneg}, we can extend the input data from the space $\oR\Gamma(X, B\bG_m)$ of line bundles to the space $\oR\Gamma(Y, B^2\bG_m)\by^h_{\oR\Gamma(X, B^2\bG_m)} \{1\}$ of pairs $(\sG,\sL)$ with $\sG$ a $\bG_m$-gerbe on $Y$, and $\sL$ a trivialisation of $\phi^*\sG$. There is then a notion of self-dual quantisation for pairs $(\sG,\sL)$ with $\sG$ a $\mu_2$-gerbe and $\sL$ a trivialisation of the $\bG_m$-gerbe associated to $\phi^*\sG$, with a right $\sD$-module structure on the line bundle $\sL^{\ten 2}$. In particular, we may consider involutive quantisations of $(\per_{\dg}(\sO_{Y}),\oR \hom_{\sO_{Y}}(-, \sM))$ for any line bundle $\sM$, the criterion for self-duality now being that $\sL^{\ten 2} \ten \phi^*\sM$ be a right $\sD$-module, so that we consider the involution $\oR \hom_{\sO_{X}}(-, \sL^{\ten 2}\ten \phi^*\sM)$ on $\per_{\dg}(\sO_{X})$.
The natural example to take for $\sM$ is the dualising line bundle $K_{Y}= \det \bL_{Y}$, but when $X$ is Lagrangian, $\phi^*K_{Y}$ will be trivial, so the resulting quantisations are quite similar. In any case, the $\bG_m$-actions on our filtered DGLAs are all unipotent, so extend to $\bG_m\ten_{\Z}\Q$-actions. Since $\mu_2\ten\Q=0$, this means there are canonical equivalences between the spaces of self-dual quantisations for varying $(\sG,\sL)$.
\end{remark}
\begin{definition}
As in \cite[Remark \ref{DQnonneg-oddcoeffsrmk}]{DQnonneg}, write $t \in \GT(\Q)$ for the $(-1)$-Drinfel'd associator which induces the involution of Lemma \ref{involutiveHH}. We then denote by $\Levi_{\GT}^t$ the space of
Levi decompositions $w$ of $\GT$ with $w(-1)=t$; these form a torsor for the subgroup $(\GT^1)^t$ of $t$-invariants in the pro-unipotent radical $\GT^1$.
\end{definition}
\begin{definition}
Define $G\Lag(Y,X;0)^{sd}$ to be the homotopy fixed points of the involution of $G\Lag(Y,X;0)$ given by $\hbar \mapsto -\hbar$. Explicitly, we set $G\Iso(A,B;0)^{sd}$ to be
\[
\mmc( \cone(F^2\DR(A/R)\to F^2\DR(B/R))) \by \prod_{i>0} \mmc( \cone(\DR(A/R)\to \DR(B/R))\hbar^{2i}),
\]
with $G\Lag(A,B;0)^{sd}$ the subspace of non-degenerate elements.
\end{definition}
\begin{theorem}\label{quantpropsd}
Take a morphism $X \to Y$ of strongly quasi-compact derived Artin $N$-stacks over $R$, and a line bundle $\sL$ on $X$ with a right $\sD$-module structure on $\sL^{\ten 2}$ (such as when $\sL$ is any square root of $K_{X}$). For any $w \in \Levi_{\GT}^t(\Q)$, the induced map
\[
Q\cP(Y,\sL;0)^{\nondeg,sd} \to G\Lag(Y,X;0)^{sd}
\]
(from non-degenerate self-dual quantisations to generalised self-dual Lagrangians)
coming from Proposition \ref{prop3} is a weak equivalence.
In particular, $w$ associates a canonical choice of self-dual quantisation of $(Y,\sL)$ to every Lagrangian structure of $X$ over $Y$.
\end{theorem}
\begin{proof}
This is much the same as \cite[Proposition \ref{DQvanish-quantpropsd}]{DQvanish}. Lemma \ref{filtsd} implies that $w$ gives rise to weak equivalences
\begin{align*}
Q\cP(Y,\sL;0)^{sd}/G^{2i} &\to Q\cP(Y,\sL;0)^{sd}/G^{2i-1}\\
Q\cP(Y,\sL;0)^{sd}/G^{2i+1} &\to (Q\cP(Y,\sL;0)^{sd}/G^{2i})\by^h_{(Q\cP(Y,\sL;0)/G^{2i})}(Q\cP(Y,\sL;0)/G^{2i+1}).
\end{align*}
Combined with Proposition \ref{prop3}, these give weak equivalences from $Q\cP(Y,\sL;0)^{\nondeg,sd}/G^{2i+1}$ to
\[
(Q\cP(Y,\sL;0)^{\nondeg,sd}/G^{2i})\by \mmc(\hbar^{2i} \cone(\DR(Y/R) \to \DR(X/R)))
\]
for all $i>0$. Moreover, \cite[Remark \ref{DQnonneg-oddcoeffsrmk}]{DQnonneg} ensures that for our choice of Levi decomposition $w$, the map $\mu_w$ is equivariant under the involutions $*$, so these equivalences are just given by taking homotopy $\Z/2$-invariants.
The result then follows by induction, the base case holding because $*$ acts trivially on $ Q\cP(Y,\sL;0)/G^1=\cP(Y,X;0) $, so $ Q\cP(Y,\sL;0)^{sd}/G^1\simeq \cP(Y,X;0)$.
\end{proof}
\subsection{Quantisations of higher Lagrangians}\label{higherrmk}
Given a Lagrangian $(X, \lambda)$ with respect to an $n$-shifted symplectic structure $(Y,\omega)$ for $n > 0$, we now discuss how the techniques of this paper should adapt to give a notion of quantisations and to establish their existence. The broad picture is that we should have an $E_{n+1}$-algebra deformation of $\sO_{Y}$ acting on an $E_n$-algebra deformation of $\sO_{X}$.
If we exploit Koszul duality for $P_{n+1}$-algebras, we may replace the filtered Hochschild complexes of \S \ref{centresn} with Poisson coalgebra coderivations on bar complexes to give $P_{n+2}$-algebras of derived multiderivations acting on $P_{n+1}$-algebras (instead of $E_2$-algebras acting on $E_1$-algebras). Proposition \ref{compatcor2} then generalises to give an alternative proof of the equivalence, announced by Costello and Rozenblyum and now proved by Melani and Safronov \cite{melanisafronovII}, between $n$-shifted Lagrangians and non-degenerate $n$-shifted co-isotropic structures.
By adapting the methods of this paper, \cite{melanisafronovII} also established quantisations for $n$-shifted co-isotropic structures for $n>1$. We now sketch a parametrisation of quantisations for higher Lagrangians, including the case $n=1$ not addressed in \cite{melanisafronovII}. Following \cite[Remark \ref{DQnonneg-nonfedosovrmk}]{DQnonneg}, these constructions might lead to parametrisations of degenerate $n$-shifted co-isotropic structures.
\subsubsection{Almost commutative $E_k$-algebras}
We begin with the notion of a $BD_k$-algebra as a higher analogue of an almost commutative algebra. There is a filtration on the Lie operad given by arity, inducing a filtration on the free Lie algebra generated by any filtered complex. Taking the universal enveloping $E_k$-algebra of this Lie algebra then gives a filtered $E_k$-algebra, and this construction corresponds to a filtration on the $E_k$ operad. We can then define the $BD_k$ operad to be the $E_k$ operad equipped with this completed filtration, for $k \ge 1$.
Explicitly, $BD_1$ is just the operad defined in \cite[\S 3.5.1]{CPTVV}, whose algebras are almost commutative DGAAs. For $k\ge 2$, the operad $BD_k$ is just given by the re-indexed good truncation filtration $F^p BD_k= \tau_{\ge p(k-1)}E_k$ --- this agrees with \cite[\S 3.5.1]{CPTVV} for $k=2$, but differs by the re-indexing for higher $k$. In particular, almost commutative brace algebras are equivalent to $BD_2$-algebras.
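For $k \ge 2$, the associated graded of this filtration recovers the $P_k$ operad (a standard observation, recorded here for convenience): the homology of the $E_k$ operad is $P_k$, with the weight-$p$ part (spanned by operations involving $p$ brackets) concentrated in homological degree $p(k-1)$, so
\[
\gr_F^p BD_k \simeq \H_{p(k-1)}(E_k),
\]
and in particular the associated graded of a $BD_k$-algebra is canonically a $P_k$-algebra.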
Informally, an $n$-shifted quantisation of a morphism $A \to B$ of CDGAs consists of a $BD_{n+1}$-algebra deformation $\tilde{A}$ of $A$ acting on a $BD_n$-algebra deformation $\tilde{B}$ of $B$ in a sense we now attempt to make precise. An $n$-shifted quantisation of a morphism $A \to B$ of stacky CDGAs will be an $n$-shifted quantisation of $\hat{\Tot}A \to \hat{\Tot}B$ subject to additional boundedness constraints.
\subsubsection{Centres}
From now on, we refer to $BD_k$-algebras in complete filtered cochain chain complexes as stacky $BD_k$-algebras.
Adapting \cite[Theorem 5.3.1.14]{lurieHigherAlgebra} from $\infty$-operads to the operads $BD_k$ in filtered chain complexes will
give a stacky $BD_k$-algebra
\[
\oR\C\C_{BD_k,R}(A,D)
\]
associated to any morphism $A \to D$ of stacky $BD_k$-algebras over $R$, universal with the property that there is a $BD_k$-algebra morphism $\oR\C\C_{BD_k,R}(A,D)\ten_R^{\oL} A \to D$ in the associated $\infty$-category. Explicitly, these centres should be given by $E_k$ Hochschild complexes equipped with a PBW filtration. The associated graded $\gr \oR\C\C_{BD_k,R}(A,D)$ is necessarily the centre of the morphism $\gr A \to \gr D$ of graded $P_k$-algebras, so is given by derived $P_k$ multiderivations from $\gr A$ to $\gr D$.
The universal property implies that $\oR\C\C_{BD_k,R}(A):=\oR\C\C_{BD_k,R}(A,A) $ is naturally an $E_1$-algebra in stacky $BD_k$-algebras, i.e. a stacky $E_1 \ten^{\oL}_{BV} BD_k$-algebra for the Boardman--Vogt tensor product $\ten_{BV}$. Moreover, for any morphism $A \to D$, the centre $\oR\C\C_{BD_k,R}(A,D) $ will then become a $\oR\C\C_{BD_k}(A)$-module in stacky $BD_k$-algebras.
For any morphism
$A_1 \by A_2 \to D$, the idempotents in the domain give a decomposition $D= D_1 \by D_2$, and by universality for each morphism $A \to D$ we thus have
\[
\oR\C\C_{BD_k,R}(R\by A, R \by D) \simeq \oR\C\C_{BD_k,R}(R,R) \by \oR\C\C_{BD_k,R}(A,D) = R \by \oR\C\C_{BD_k,R}(A,D).
\]
The centre of $R \by A \to R \by D$ in the category of augmented stacky $BD_k$-algebras over $R$ is just
\[
\oR\C\C_{BD_k,R}(R\by A, R \by D)\by_{ (R \by D)}^h R,
\]
so the reasoning above shows that
\[
\oR\C\C_{BD_k,R,+}(A,D):= \oR\C\C_{BD_k,R}(A,D)\by^h_D 0
\]
is naturally a non-unital stacky $BD_k$-algebra, with $\oR\C\C_{BD_k,R,+}(D)$ a non-unital stacky $E_1 \ten^{\oL}_{BV} BD_k$-algebra.
Adapting Lemma \ref{semidirectdef}, we then have:
\begin{definition}
Given a stacky $E_1 \ten^{\oL}_{BV} BD_k$-algebra $C$ over $R$ and a $C$-module $E$ in stacky $BD_k$-algebras over $R$,
we define $ E_{[1]} \rtimes C $ to be the non-unital stacky $E_1 \ten^{\oL}_{BV}BD_k$-algebra
\[
C\by^h_{ \oR\C\C_{BD_k,R}(E) }\oR\C\C_{BD_k,R,+}(E),
\]
the morphism $C \to \oR\C\C_{BD_k,R}(E)$ existing by universality.
\end{definition}
\subsubsection{Quantised $n$-shifted relative polyvectors for $n>0$}
Given a morphism $\phi \co A\to B$ of stacky CDGAs over $R$, now consider the non-unital $E_1 \ten^{\oL}_{BV} BD_{n+1}$-algebra
\[
(\C,F):= \oR\C\C_{BD_{n+1},R}(A, \oR\C\C_{ BD_n,R}(B) )_{[1]} \rtimes \oR\C\C_{BD_{n+1},R}(A)
\]
in complete filtered cochain chain complexes. Definition \ref{QPoldef} then adapts verbatim to give a complex $Q\widehat{\Pol}(A,B;n) $ equipped with filtrations $\tilde{F}$ and $G$.
Since we wish $Q\widehat{\Pol}(A,B;n)[n+1]$ to have the structure of a DGLA with $ [\tilde{F}^i,\tilde{F}^j] \subset \tilde{F}^{i+j-1}Q$ and $[G^i, G^j]\subset G^{i+j}$, acting as derivations on the bifiltered $E_1 \ten^{\oL}_{BV} BD_{n+1}$-algebra $\hbar Q\widehat{\Pol}(A,B;n)$, we need to know that $\oR\C\C_{BD_k}(A)$ has the structure of a $BD_{k+1}$-algebra. The analogous statement for $k=1$ is the content of Lemma \ref{HHaclemma2}. In general, the property would follow from the following conjecture:
\begin{conjecture}\label{additivityconj}
For $k \ge 1$, the additivity isomorphism $E_{k+1} \simeq E_1 \ten^{\oL}_{BV}E_k$ of \cite[Theorem 5.1.2.2]{lurieHigherAlgebra} induces a map $BD_{k+1} \simeq E_1 \ten^{\oL}_{BV} BD_k$ of operads in complete filtered chain complexes.
\end{conjecture}
Here, $\ten^{\oL}_{BV}$ denotes the derived Boardman--Vogt tensor product, so the conjecture amounts to saying that an $A_{\infty}$-algebra in $BD_k$-algebras is naturally a $BD_{k+1}$-algebra. On passing to the associated graded complex, the equivalence would give a map $P_{k+1} \to E_1 \ten^{\oL}_{BV}P_k$, which has been proved to be an equivalence by Rozenblyum (unpublished, cf. \cite[\S 3.4]{CPTVV}) and independently by Safronov \cite{safronovBraces}; thus the map in the conjecture is necessarily an equivalence if it exists. A proof of Conjecture \ref{additivityconj} has also been announced by Rozenblyum (cf. \cite[comment after Conjecture 3.5.7]{CPTVV}).
For $k \ge 2$, the conjecture would follow if additivity is compatible with the action of the Grothendieck--Teichm\"uller group.
The conjecture would also ensure that the centres $\oR\C\C_{BD_k,R}(A,D)$ above all exist by appealing directly to \cite[Theorem 5.3.1.14]{lurieHigherAlgebra} for $k \ge 1$, regarding $BD_k$-algebras as $E_{k-1}$-algebras in $BD_1$-algebras.
The definitions of \S\S \ref{affinesn}, \ref{compatsn} all then adapt, replacing $Q\widehat{\Pol}(A,M;0)$ with $Q\widehat{\Pol}(A,B;n)$ and taking appropriate shifts. The space $Q\cP(A,B;n)$ of $n$-shifted quantisations of the pair $(A,B)$ is just
\[
\mmc(\tilde{F}^2Q\widehat{\Pol}(A,B;n)[n+1]),
\]
elements of which give rise to curved $E_{n+1}$-algebra deformations of $\hat{\Tot}A$ acting on curved $E_n$-algebra deformations of $B$.
The space $G\Iso(A,B;n)$ of $n$-shifted isotropic structures is
\[
\mmc( \tilde{F}^2\cone(\DR(A/R)\llbracket\hbar\rrbracket \to \DR(B/R)\llbracket\hbar\rrbracket)[n]),
\]
and Definition \ref{mudef} then adapts to give a compatibility map
\[
\mu_w(-,\Delta) \co \cocone(\DR(A/R) \to \DR(B/R))\llbracket\hbar\rrbracket/\hbar^j \to T_{\Delta}Q\widehat{\Pol}_w(A,B;n)/G^j
\]
for each quantisation $\Delta$; Definition \ref{Qcompdef} adapts to give a space $Q\Comp_w(A,B;n)$ for each $w \in \Levi_{\GT}(\Q)$.
Propositions \ref{QcompatP1}, \ref{compatcor2} and \ref{quantprop} will all carry over directly, in particular giving a map $ Q\cP(A,B;n)^{\nondeg}\to G\Lag(A,B;n)$, the non-degenerate locus in $G\Iso(A,B;n) $. The techniques of \S \ref{stacksn} then extend these to global constructions for Artin $N$-stacks.
\subsubsection{Self-duality}
The functor $D \mapsto D^{\op}$ sending an almost commutative algebra to its opposite gives an involutive endofunctor of the category of $BD_1$-algebras, and hence of the categories of $E_1 \ten^{\oL}_{BV} BD_k$-algebras. The universal property of centres then gives an involution
\[
-i \co \oR\C\C_{BD_k,R}(A,D)^{\op} \to \oR\C\C_{BD_k,R}(A^{\op},D^{\op}),
\]
which in the $k=1$ case is the involution $-i$ of Lemma \ref{involutiveHH}. Defining an involutive $E_1 \ten^{\oL}_{BV} BD_k$-algebra to be a homotopy fixed point of the involutive endofunctor $(-)^{\op}$, the involution above makes $ \oR\C\C_{BD_k,R}(A,D)$ a stacky involutive $BD_k$-algebra whenever $A$ and $D$ are stacky involutive $BD_k$-algebras. In fact, this is necessarily the centre of $A \to D$ in the category of stacky involutive $BD_k$-algebras --- the operad governing involutive $BD_k$-algebras is $BD_k \circ (0, \Q.(\Z/2), 0, \ldots)$, with distributivity transformation given by the involution.
As in \S \ref{sdsn}, we then have an involution $(-)^*$ on
the DGLA $Q\widehat{\Pol}(A,B;n)[n+1]$ given by
$
\Delta^*(\hbar):= i(\Delta)(-\hbar)^t,
$
and we can define $ Q\cP(A,B;n)^{sd}$ to be the fixed points of the resulting $\Z/2$-action, so its points give rise to involutive quantisations.
The proof of Theorem \ref{quantpropsd} then adapts to give:
\begin{theorem}\label{higherquantpropsd}
Take a morphism $X \to Y$ of strongly quasi-compact Artin $N$-stacks over $R$. If Conjecture \ref{additivityconj} holds, then for any $w \in \Levi_{\GT}^t(\Q)$, the induced map
\[
Q\cP(Y,X;n)^{\nondeg,sd} \to G\Lag(Y,X;n)^{sd}
\]
(from non-degenerate self-dual quantisations to generalised self-dual Lagrangians)
is a weak equivalence for all $n>0$.
In particular, $w$ associates a canonical choice of self-dual quantisation of $(Y,X)$ to every $n$-shifted Lagrangian structure of $X$ over $Y$.
\end{theorem}
\begin{remark}[Twisted quantisations]
One significant difference between Theorems \ref{quantpropsd} and \ref{higherquantpropsd} is that the former incorporates the data of a line bundle. Similar input data are not essential for positively shifted quantisations because a commutative algebra is canonically isomorphic to its opposite $E_1$-algebra, whereas $\sO_{X}$ is not in general a right $\sD$-module.
However, by generalising Remark \ref{curvedsdrmk} we still expect a sensible notion of twisted quantisations for $n$-shifted Lagrangians, fibred over the space $\oR\Gamma(Y,B^{n+2}\bG_m)\by^h_{\oR\Gamma(X,B^{n+2}\bG_m) }\{1\}$ of pairs $(\sG,\sL)$ with $\sG$ a $B^{n+1}\bG_m$-torsor on $Y$, and $\sL$ a trivialisation of $\phi^*\sG$ on $X$. Self-dual (i.e. involutive) quantisations would then be parametrised by $\oR\Gamma(Y,B^{n+2}\mu_2)\by^h_{\oR\Gamma(X,B^{n+2}\mu_2) }\{1\}$. Adapting \cite[Theorem 5.3.2.5]{lurieHigherAlgebra} from filtered $E_{n+2}$-algebras to $BD_{n+2}$-algebras would establish the required actions of $(n+2)$-groupoids $\ho\Lim_{i \in \Delta} B^{n+2}D^i(A)^{\by}$ generalising $\mathrm{TLB}$ from \S \ref{lbsn}.
However, since these spaces will come from unipotent group actions on quantised polyvectors, the actions of the torsion groups $B^{n+1}\mu_2(A), B^{n+1}\mu_2(B)$ must be trivial, so the spaces of twisted self-dual quantisations will be canonically equivalent as $(\sG,\sL)$ varies.
\end{remark}
\section{A ``Fukaya category'' for algebraic Lagrangians}\label{fukayasn}
In \cite[\S 5.3]{BehrendFantechiIntersections}, Behrend and Fantechi discussed the construction of a dg category whose objects are local systems on Lagrangian submanifolds of a complex symplectic variety. An extensive survey of related results is given in \cite[Remark 6.15]{BBDJS}, where Joyce et al. discuss possible approaches to constructing such a ``Fukaya category'' with complexes of vanishing cycles as morphisms. On a complex symplectic manifold, Kashiwara and Schapira give a likely candidate for this category for smooth Lagrangians in \cite{kashiwaraschapira}, as the derived category of simple holonomic DQ modules for a DQ algebroid quantisation of the sheaf of analytic functions, and it is this approach which generalises naturally in our setting.
\subsection{DQ modules}
Since we are working algebraically rather than analytically, the analogue of a DQ module is an $\tilde{\O}_{Y}-\sD_{Y}$-bimodule, where $\tilde{\O}_{Y}$ is a quantisation of ${\O}_{Y}$. For a line bundle $\sL$ on a derived Lagrangian $\phi\co X \to Y$, each quantisation $(\tilde{\O}_{Y}, \tilde{\sL})$ of $(\sO_{Y}, \sL)$ gives rise to such a bimodule as follows.
When the quantisation $\tilde{\sL}$ is given by a differential operator $\Delta \in \sD_{X}(\sL)\llbracket \hbar \rrbracket$, we write $ \tilde{\sL}\hten_{\sO_{X}}\sD_{X}$ for the associated right $\sD$-module
\[
(\sL\ten_{\sO_{X}}\sD_{X}\llbracket \hbar \rrbracket, \delta + \Delta \cdot-),
\]
and similarly for related constructions. This is an abuse of notation because $\Delta$ is not $\sO_{X}$-linear, and in fact $\tilde{\sL}\hten_{\sO_{X}}\sD_{X}$ is a more fundamental object than the $R$-linear deformation $\tilde{\sL}$ of $\sL$, because
\[
\tilde{\sL} = (\tilde{\sL}\hten_{\sO_{X}}\sD_{X})\ten_{\sD_{X}}\sO_{X}.
\]
The associated $\tilde{\O}_{Y}-\sD_{Y}$-bimodule is then given by taking
\[
\phi_{\dagger}(\tilde{\sL}\hten_{\sO_{X}}\phi^{-1}\sD_{Y})= \sD_{\sO_{Y}}(\sO_{Y},\oR \phi_*\tilde{\sL}),
\]
which is naturally equipped with a left $\tilde{\O}_{Y}$-module structure. Here $\hten$ denotes the $\hbar$-completed tensor product, since we regard $ \tilde{\O}_{Y}, \tilde{\sL}$ as inverse systems over $ \{R[\hbar]/\hbar^i\}_i$.
\begin{definition}\label{fukayadef}
Fix a non-degenerate involutive quantisation $\tilde{\O}_{Y} \in Q\cP(Y,0)^{\nondeg,sd}$ quantising a symplectic structure $\omega \in \H^2F^2\DR(Y/R)$, and assume that $\tilde{\O}_{Y}$ is $w$-compatible with $\omega \cdot a$ for some $w \in \Levi_{\GT}^t(\Q)$ and $a \in \H^0\DR(Y/R)\llbracket \hbar^2 \rrbracket$.
Now define a dg category $\cF(\tilde{\O}_{Y})$ as follows.
Objects are given by morphisms $\phi \co X \to Y$ equipped with a square root $\sL$ of the dualising complex $K_{X}$, together with an element $\Delta \in Q\cP(Y,\sL;0)^{\nondeg,sd}$ lifting $\tilde{\O}_{Y}$.
We then define the complex
\[
\HHom_{\cF(\tilde{\O}_{Y}) }( (\sL_1, \Delta_1), (\sL_2, \Delta_2))
\]
of morphisms to be
the complex
\[
\oR\Gamma(Y, (\hat{\Tot} \C\C_R(\sO_{Y}, \cDiff_{\sO_{Y}}( \oR \phi_{1*}\sL_1, \oR \phi_{2*}\sL_2))\llbracket \hbar \rrbracket, \delta+ \Delta_{Y}+ \Delta_{1}\mp\Delta_{2})),
\]
with composition of morphisms defined in the obvious way.
\end{definition}
Here, $\Delta_{Y}$ is the Hochschild element defining the quantisation $\tilde{\O}_{Y}$, which thus acts on $\hat{\Tot} \C\C_R(\sO_{Y}, M)$ for all $\sO_{Y}$-bimodules $M$. The elements $\Delta_{i}$ giving the quantisations of $\sL_i$ lie in the complexes $\hat{\Tot} \C\C_R(\phi_i^{-1}\sO_{Y}, \cDiff_{\phi_i^{-1}\sO_{Y}/R}(\sL_i))$ twisted by $\Delta_{Y}$, so act on the complex above by left and right multiplication respectively.
\begin{remarks}
The condition that $\tilde{\O}_{Y}$ is $w$-compatible with $\omega \cdot a$ for some $w \in \Levi_{\GT}^t(\Q)$ and $a \in \H^0\DR(Y/R)\llbracket \hbar^2 \rrbracket$ is probably independent of $w$, and ensures that every self-dual line bundle on a Lagrangian $(X,\lambda)$ over $(Y, \omega)$ admits a self-dual quantisation extending $\tilde{\O}_{Y}$, since the generalised symplectic structure $\omega\cdot a$ extends to a generalised Lagrangian $(\omega\cdot a, \lambda\cdot a)$.
The complex $\HHom_{\cF(\tilde{\O}_{Y}) }( (\sL_1, \Delta_1), (\sL_2, \Delta_2))$ is effectively the Hochschild complex of $\tilde{\O}_{Y}$ with coefficients in the bimodule $\sD_{\sO_{Y}}( \oR \phi_{1*}\sL_1, \oR \phi_{2*}\sL_2)$, so can be thought of as a model for
\[
\oR\HHom_{ \tilde{\O}_{Y}\hten_R \sD_{Y}^{\op}}(\oR \phi_{1\dagger}(\tilde{\sL}_1\hten \sD_{X_1}), \oR \phi_{2\dagger}(\tilde{\sL}_2\hten \sD_{X_2})).
\]
\end{remarks}
Since we are permitting all derived Lagrangians to give rise to elements of $\cF(\tilde{\O}_{Y})$, we cannot expect all morphisms in this dg category to be related to vanishing cycles. However, when Grothendieck--Verdier duality applies (such as for proper morphisms) we have the following:
\begin{lemma}\label{cfvanish1}
When the functor $\oR \phi_{1*}$
has a derived right adjoint $\phi_1^!$ on quasi-coherent complexes given by $\phi^!_1\sF=\phi^*_1\sF\ten K_{X_1}[-d]$, then the complex $\HHom_{\cF(\tilde{\O}_{Y}) }( (\sL_1, \Delta_1), (\sL_2, \Delta_2))[d]$ is given by derived global sections of a deformation over $R\llbracket \hbar \rrbracket$ of the
self-dual line bundle
\[
\phi_2^*\sL_1\ten \phi_1^*\sL_2
\]
on $X_1\by^h_{Y}X_2$.
\end{lemma}
\begin{proof}
By definition, $\HHom_{\cF(\tilde{\O}_{Y}) }( (\sL_1, \Delta_1), (\sL_2, \Delta_2))$ reduces modulo $\hbar$ to the complex
\begin{align*}
\oR\Gamma(Y, \hat{\Tot} \C\C_R(\sO_{Y}, \cDiff_{\sO_{Y}}( \oR \phi_{1*}\sL_1, \oR \phi_{2*}\sL_2))).
\end{align*}
We now observe that the inclusion
\[
\oR\cHom_{\sO_{Y}}(\oR \phi_{1*}\sL_1, \oR \phi_{2*}\sL_2) \to \cDiff_{\sO_{Y}}( \oR \phi_{1*}\sL_1, \oR \phi_{2*}\sL_2)
\]
naturally extends to a morphism
\[
\oR\cHom_{\sO_{Y}}(\oR \phi_{1*}\sL_1, \oR \phi_{2*}\sL_2) \to \HHom_{\cF(\tilde{\O}_{Y}) }( (\sL_1, \Delta_1), (\sL_2, \Delta_2)).
\]
This is a quasi-isomorphism, because
\[
\sO_{Y} \to \C\C_R(\sO_{Y}, \cDiff_{\sO_{Y}}(\sO_{Y}))
\]
is a quasi-isomorphism as in Remark \ref{extremecasesrmk}.
Finally, we have
\begin{align*}
\oR\cHom_{\sO_{Y}}(\oR \phi_{1*}\sL_1, \oR \phi_{2*}\sL_2)&\simeq \oR\HHom_{\sO_{X_1\by^h_{Y}X_2}}(\phi_2^*\sL_1, \phi_1^!\sL_2)\\
&\simeq \oR\HHom_{\sO_{X_1\by^h_{Y}X_2}}(\phi_2^*\sL_1, \phi_1^*\sL_2\ten K_{X_1}[-d])\\
&\simeq \phi_2^*\sL_1\ten \phi_1^*\sL_2[-d],
\end{align*}
because self-duality of $\sL_1$ says $\hom_{\sO_{X_1}}(\sL_1, K_{X_1})\simeq \sL_1$. Since $X_1$ is Lagrangian, $\phi_1^*K_{X}$ is trivial, so $\phi_2^*\sL_1\ten \phi_1^*\sL_2$ is indeed self-dual with respect to $K_{X_1\by^h_{Y}X_2 }= \phi_2^*K_{X_1}\ten \phi_1^*K_{X_2}$.
\end{proof}
\begin{remarks}\label{vanishrmks1}
The argument sketched in \cite[Proposition 5.6 and Theorem 5.7]{safronovPoissonRednCoisotropic} gives a map
\begin{align*}
Q\cP(Y,\sL_1;0)\by^h_{(-)^{\op},Q\cP(Y,0)}Q\cP(Y,\sL_2;0)&\to Q\cP( \phi_2^*\sL_1\ten \phi_1^*\sL_2,-1)\\
(\tilde{\sO}_Y^{\op},\tilde{\sL}_1; \tilde{\sO}_Y, \tilde{\sL}_2) &\mapsto \tilde{\sL}_1 \ten^{\oL}_{\tilde{\sO}_Y} \tilde{\sL}_2,
\end{align*}
by ensuring that the differential on the right hand side is a power series of differential operators of the correct orders on $X_1\by^h_YX_2$, via properties of the shuffle product. Existence of this map would also be an immediate consequence of the $k=0$ analogue of Conjecture \ref{additivityconj}, since it would allow us to regard $\tilde{\sO}_Y$ as an $E_1$-algebra in $BD_0$-algebras, with the bar construction realising the derived tensor product as a twisted $BD_0$-algebra.
Thus the
deformation in Lemma \ref{cfvanish1}
is an $E_0$-quantisation in the sense of \cite{DQvanish}, i.e. an element of $Q\cP( \phi_2^*\sL_1\ten \phi_1^*\sL_2,-1)$; it will also be non-degenerate and self-dual.
Furthermore, there is a generalisation of Lemma \ref{cfvanish1} to Lagrangian correspondences. Say we had non-degenerate quantisations $\tilde{\O}_{Y}, \tilde{\O}_{Z}$ of derived Artin $N$-stacks $Y,Z$, a morphism $\psi \co T \to Y \by Z$, and a self-dual line bundle $\sM$ on $T$ with a quantisation $\Delta \in Q\cP(Y\by Z,\sM;0)^{\nondeg,sd}$ lifting the quantisation $\tilde{\O}_{Y}\ten\tilde{\O}_{Z}$ of $Y \by Z$.
The definition of $\HHom_{\cF(\tilde{\O}_{Y}) }$ then adapts to give a dg functor from $\cF(\tilde{\O}_{Y})^{\op}$ to $\tilde{\O}_{Z}\hten_R \sD_{Z}^{\op}$-modules, roughly given by
\[
\oR\HHom_{\tilde{\O}_{Y}\hten \sD_{Y}^{\op}}(-, (\pr_{Y}\psi)_{\dagger}(\tilde{\sM}\ten \sD_{Y\by Z}) ).
\]
After a shift, this should also give a dg functor $\cF(\tilde{\O}_{Y})^{\op} \to \cF(\tilde{\O}_{Z})$, at least after restricting to proper objects and provided $T$ is proper over $Y$.
Similarly, for a quantisation of $\psi$ lifting $\tilde{\O}_{Y}^{\op}\ten\tilde{\O}_{Z} $, we should have a dg functor
$\cF(\tilde{\O}_{Y})\to \cF(\tilde{\O}_{Z})$ given by taking the tensor product with $\tilde{\sO}_T$ over $\tilde{\O}_{Y}\hten \sD_{Y}^{\op}$.
\end{remarks}
\subsection{Local quantisations of Lagrangians}
The Fukaya category envisaged in \cite[\S 5.3]{BehrendFantechiIntersections} had an object for each local system on a Lagrangian submanifold $L$. By contrast, the category outlined in \cite[Remark 6.15]{BBDJS} only had one object for each square root of $K_L$. Our approach in Definition \ref{fukayadef} is closest to \cite{kashiwaraschapira}, which considers all simple DQ modules supported on smooth Lagrangians. Once we have fixed our quantisation $\tilde{\O}_{Y}$ in $Q\cP(Y,0)^{\nondeg,sd}$ and a compatible Lagrangian $(\omega,\lambda) \in \Lag(Y,X;0)$, the homotopy fibre of
\[
Q\cP(Y,\sL;0)^{ \nondeg,sd}\to \Lag(Y,X;0)\by^h_{\Lag(Y,\emptyset;0)}Q\cP(Y,0)^{\nondeg,sd}
\]
over $(\tilde{\O}_{Y}, \lambda)$ parametrises self-dual $\tilde{\O}_{Y}$-module quantisations of the line bundle $\sL$ on the Lagrangian $(X,\lambda)$. We now explain how this homotopy fibre can be regarded as a torsor for the group of self-dual rank $1$ local systems, so comes close to the intention of \cite{BehrendFantechiIntersections}.
By Theorem \ref{quantpropsd}, components of the homotopy fibre are a torsor for the even de Rham power series
\[
\H^1(F^2\DR(X/R))^{\nondeg} \by \prod_{i>0} \H^1\DR(X/R)\hbar^{2i},
\]
although the parametrisation depends on $w \in \Levi_{\GT}^t(\Q)$.
As in \cite[Remark \ref{DQvanish-rightDmodrmk}]{DQvanish}, quantisations $(\sL\llbracket \hbar \rrbracket, \delta + \Delta)$ of $\sL$ correspond to deformations $\sE_{\hbar}:=(\sL\ten_{\sO_{X}}^{\oL}\sD_{X}\llbracket \hbar \rrbracket, \delta +\Delta\cdot\{-\})$ of $\sL\ten_{\sO_{X}}^{\oL}\sD_{X}$ as a right $\sD_{X}$-module. Other deformations of this form can be obtained by tensoring with deformations $\O'_{\hbar}$ of $\sO_{X}\llbracket \hbar \rrbracket $ as a left $\sD_{X}$-module. When $\sL^{\ten 2} = K_{X}$, the self-duality condition for $\sE_{\hbar}$ is
\[
\sE_{-\hbar} \simeq \oR\hom_{\sD_{X}^{\op}\llbracket\hbar\rrbracket}(\sE_{\hbar},\sD_{X}\llbracket\hbar\rrbracket)\ten_{\sO_{X}} K_{X}
\]
as right $\sD_{X}\llbracket \hbar \rrbracket$-modules. The condition for $\O'_{\hbar}\ten\sE_{-\hbar}$ to also be self-dual is then
\[
\O'_{-\hbar} \simeq \oR\hom_{\sO_{X}\llbracket \hbar \rrbracket}(\O'_{\hbar}, \sO_{X}\llbracket \hbar \rrbracket)
\]
as left $\sD_{X}$-modules.
The parametrisation in terms of de Rham cohomology strongly suggests that the homotopy fibre above is a torsor
for this group of self-dual rank $1$ local systems, and this is the original motivation behind the construction of $\mu$ in \cite{DQvanish}. Making this precise would be quite cumbersome, so we give a statement which implies it will be true on formal neighbourhoods:
\begin{lemma}\label{localquant}
If $R=\H_0R$ and $\H^1\DR(X/R)=0$,
then the non-empty homotopy fibres of
\[
Q\cP(Y,X;0)^{ \nondeg,sd}\to \Lag(Y,X;0)\by^h_{\Lag(Y,\emptyset;0)}Q\cP(Y,0)^{\nondeg,sd}
\]
are connected, the space of automorphisms of each quantisation being the group
\[
\{g \in 1+ \hbar R\llbracket \hbar \rrbracket ~:~ g(\hbar)^{-1}=g(-\hbar)\},
\]
acting by scalar multiplication.
\end{lemma}
\begin{proof}
Theorem \ref{quantpropsd} implies that the homotopy fibre is connected, because $ \H^1\DR(X/R)=0$. Morphisms are $\prod_{i>0} \H^0\DR(X/R)\hbar^{2i}$, and we need to understand what these map to via the equivalences between generalised Lagrangians and quantisations.
If we take $\hbar^2a(\hbar^2)\in \hbar^2R \llbracket \hbar^2 \rrbracket$, then linearity gives
\[
\mu_w(\hbar^2a(\hbar^2))= \hbar^2a(\hbar^2) \in \tau_{\ge 0} G^2T_{\Delta}Q\widehat{\Pol}(\sL,-1)^{sd}.
\]
The corresponding gauge automorphism in $\tau_{\ge 0} Q\cP(\sL,-1)^{\nondeg,sd}_{\pi}$ is then an element $g$ with $-\pd_{\hbar^{-1}}(g)g^{-1}= \hbar^2a(\hbar^2)$, so
\[
g = \exp(\int a d\hbar).
\]
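In more detail (a sketch of the substitution): since $\pd_{\hbar^{-1}}= -\hbar^2\pd_{\hbar}$, the equation $-\pd_{\hbar^{-1}}(g)g^{-1}= \hbar^2a(\hbar^2)$ becomes
\[
\hbar^2\pd_{\hbar}(\log g) = \hbar^2 a(\hbar^2),
\]
so $\log g = \int a(\hbar^2)\,d\hbar$ is an odd power series in $\hbar$. Oddness then gives $g(\hbar)g(-\hbar)=\exp(\log g(\hbar)+\log g(-\hbar))=1$, which is exactly the self-duality condition $g(\hbar)^{-1}=g(-\hbar)$.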
Gauge elements are thus precisely exponentials of odd power series, giving the group described above.
\end{proof}
\begin{remark}[Vanishing cycles]
Following Remarks \ref{vanishrmks1}, when we restrict to proper Lagrangians, morphisms in the dg category $\cF(\tilde{\O}_{Y})$ are given by elements in $Q\cP( \phi_2^*\sL_1\ten \phi_1^*\sL_2,-1)^{\nondeg,sd}$ over the canonical $(-1)$-shifted symplectic structure on a fibre product of Lagrangians.
On the derived critical locus $i \co \mathrm{Crit}(f)\to Z$ of $f \co Z \to \bA^1$, we know from \cite[Lemma \ref{DQvanish-PVlemma}]{DQvanish} that the vanishing cycles quantisation $\Delta_f$ is an element of $Q\cP(i^*\Omega^d_Z,-1)^{\nondeg,sd}_{\lambda_f}$ for the canonical $(-1)$-shifted symplectic structure $\lambda_f$; it gives the perverse sheaf of vanishing cycles on localising at $\hbar$.
Since the behaviour of $\mu_w$ on cohomology is independent of $w$, the proof of \cite[Lemma \ref{DQvanish-PVlemma}]{DQvanish} even adapts to show that $\lambda_f$ is $w$-compatible with $\Delta_f$ \'etale locally.
In settings where the shifted Darboux theorems of \cite{BBBJdarboux,BouazizGrojnowski} apply, Lemma \ref{localquant} then allows us to regard $Q\cP(i^*\Omega^d_Z,-1)^{\nondeg,sd}_{\lambda_f}$ as the space of those self-dual quantisations which correspond to vanishing cycles on formal neighbourhoods.
Note that in our approach to defining $\cF(\tilde{\O}_{Y})$, it does not seem sensible to restrict to the locus of $Q\cP(Y,\sL;0)^{\nondeg,sd}$ corresponding to Lagrangians rather than generalised Lagrangians under the equivalence of Theorem \ref{quantpropsd}, because we cannot control the interaction of $\mu_w$ with the comparison of Lemma \ref{cfvanish1}.
\end{remark}
\bibliographystyle{alphanum}
\section{Introduction}
Recent years have seen huge progress in performance in brain MRI segmentation, classification and synthesis, largely thanks to the application of convolutional neural networks to these problems. The organisation of challenges such as BRATS \cite{menze2014multimodal} and the MICCAI 2017 White Matter Hyperintensity Challenge \cite{kuijf2019standardized} has allowed the community to benchmark their segmentation algorithms on research data. In these cases, training data is usually preprocessed following a consistent protocol with techniques such as skull stripping, bias field correction, histogram normalisation and co-registration. Efforts are often put in place to ensure a certain degree of standardisation across the centres providing data, in terms of scanner parameters such as field strength, manufacturer, echo time, relaxation time and contrast agent. In addition, individuals generally have similar pre-clinical conditions and pathological presentations. When applied to data from clinical practice, which presents much more heterogeneous acquisition conditions, the performance of algorithms trained on challenge data degrades. Performance can improve if algorithms are fine-tuned on labelled data in the target domain, but these can be expensive to acquire and rely on relative homogeneity of acquisition parameters in the target domain. If no labels are available then unsupervised domain adaptation may be used, which has seen growing interest in recent years, e.g. \cite{kamnitsas2017unsupervised,perone2019unsupervised}.
Domain is not always a clear binary label. Scans of a particular MR modality (e.g T1-weighted) may come from the same scanner in the same hospital but may use different acquisition parameters. Variability can be so large that each image can almost be considered its own domain.
When evaluating domain adaptation methods for segmentation, there is often a training set, a validation set and a test set for both source and target domains. Methods are judged on their ability to generalise from seen data in the source domain to unseen data in the target domain. In this work we argue for a different evaluation criterion, namely how well a model performs on the data it adapts to. We call this \textit{``test-time unsupervised domain adaptation''}. When this test-time adaptation is performed on an individual subject we call it \textit{``one-shot unsupervised domain adaptation''}. We present a domain adaptation method which leverages a combination of adversarial learning and consistency under augmentation to work in this one-shot case. We apply this methodology on multiple sclerosis lesion segmentation but it is designed to be applicable to other tasks in medical imaging.
\subsubsection{Related work:}
Our work considers the use of existing unsupervised domain adaptation methods when only a single \textit{unlabelled} sample from the target domain is available. In this work we use the same data, pre-processing and segmentation task as in \cite{valverde2019one}, where the authors tackle one-shot supervised domain adaptation, adapting to a target domain using a single \textit{labelled} subject.
It is worth mentioning the framework proposed by Zhao et al. \cite{zhao2019data} and highlighting the difference from this work. The authors consider the variability between single-modality brain MRIs to be quantifiable by an additive intensity transform and a spatial transform to a brain atlas. They use this technique to create an entire labelled dataset from a single brain with an associated anatomical parcellation (hence the term ``one-shot''). While the intensity transform tackles the variation in acquisition parameters, the spatial transform covers variations in anatomy. Although this and follow-up work produce realistic training data in the context of brain parcellation, such a scheme cannot be trivially extended to pathologies in which the variability in presentation, location and extent is far greater. This is especially true in lesion segmentation, where a lesion prior cannot be produced from non-linear deformations of an atlas. \looseness=-1
Neural style-transfer methods were recently applied for unsupervised domain adaptation of cardiac MRI in \cite{ma2019neural}. The style of the target domain is matched to that of a single subject in the source domain by simultaneously minimising a style loss $l_{style}(\hat{y}, y)$ and a content loss $l_{content}(\hat{y}, x)$ where $\hat{y}$ is the generated style-transferred image, $x$ is the image from the target domain and $y$ is the image from the source domain. This method relies on finding an image in the source domain which most closely resembles the target image based on a Wasserstein distance metric. This method is similar to ours in that adaptation is performed on each individual test subject as its own optimisation problem.
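To make the style-matching idea concrete, the following minimal sketch (our own illustration of the generic Gram-matrix style loss used in neural style transfer, not code from \cite{ma2019neural}, which additionally performs Wasserstein-based subject selection) shows how a style loss compares channel co-activation statistics while ignoring spatial layout:

```python
def gram_matrix(features):
    # features: list of C flattened channel activations (each a list of floats).
    # The Gram matrix G[i][j] = <f_i, f_j> captures channel co-activations
    # ("style") while discarding spatial arrangement.
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

def style_loss(feats_a, feats_b):
    # Mean squared difference between the two Gram matrices.
    ga, gb = gram_matrix(feats_a), gram_matrix(feats_b)
    n = len(ga) * len(ga[0])
    return sum((x - y) ** 2
               for ra, rb in zip(ga, gb)
               for x, y in zip(ra, rb)) / n
```

A content loss, by contrast, compares the feature activations directly, so minimising both pulls the generated image towards the target's appearance statistics while preserving its anatomy.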
Recent advances in self-supervised learning have led to large improvements in semi-supervised learning. Methods such as \cite{carlucci2019domain} use self-supervised tasks such as solving jigsaw puzzles to perform domain adaptation. Promoting invariance in networks outputs under data augmentation is another self-supervised task which was shown to work well for domain adaptation in \cite{french2017self} and which we refer to as Mean Teacher. It was adapted for use in medical image segmentation in \cite{perone2019unsupervised}. In \cite{orbes_and_varsavsky} the authors showed improvements over Mean Teacher using a simpler paired consistency method. They used paired data as a form of ``ground-truth augmentation''. When paired data is not available, which is most common in practice, small adjustments to this method can lead to substantial improvements. The method of \cite{orbes_and_varsavsky} was chosen to demonstrate the value of test-time UDA, as it reported better results than domain adversarial learning and Mean Teacher on a related task. However, note that our domain adaptation methodology is not bound to a particular method.
\section{Domain Adversarial Learning and Paired Consistency}
We adapt the method for domain adaptation described in \cite{orbes_and_varsavsky}, which consists of domain adversarial learning and consistency training. In domain adversarial training we seek to find a feature representation $\phi_{\theta}(x)$ which contains as little information as possible about $d$, the domain of $x$, and as much information as possible about the label $y$. We do so by including a domain discriminator $D_{\gamma}(x)$ which predicts a domain $\hat{d}$ and is trained by minimising the binary cross-entropy between this prediction and the ground-truth domain $d$, $\mathcal{L}_{adv} = l_{bce}(D_{\gamma}(\phi_{\theta}(x)), d)$. We use the gradient reversal layer from \cite{ganin2016domain} to guarantee that the network weights $\theta$ change in the direction which minimises the supervised loss $\mathcal{L}_{sup}$ and maximises the adversarial loss $\mathcal{L}_{adv}$, where $\mathcal{L}_{sup} = l(\mathcal{M}(x_s), y_s)$ (we use the dice loss for $l$).
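The mechanics of gradient reversal can be sketched in a few lines (a scalar illustration with our own function names, not the actual autograd implementation of \cite{ganin2016domain}): the layer is the identity on the forward pass, while on the backward pass it scales gradients by $-\lambda$, so the feature extractor ascends $\mathcal{L}_{adv}$ while the discriminator descends it.

```python
import math

def grl_forward(x):
    # Gradient reversal layer, forward pass: the identity map.
    return x

def grl_backward(grad_output, lam=1.0):
    # Backward pass: gradients flowing into the feature extractor are
    # scaled by -lam, so phi_theta maximises L_adv while the
    # discriminator D_gamma still minimises it.
    return -lam * grad_output

def l_bce(p, d):
    # Binary cross-entropy between the predicted domain probability p
    # and the ground-truth domain label d (0 = source, 1 = target).
    eps = 1e-12
    return -(d * math.log(p + eps) + (1 - d) * math.log(1 - p + eps))
```

In a real implementation this pair would be wrapped in a custom autograd primitive, with $\lambda$ optionally ramped up over the course of training.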
Consistency training is a simple semi-supervised learning method which works by enforcing invariance to data augmentation. A model $\mathcal{M}$ is trained to produce a prediction $\hat{y}_s$ on some source data $x_s$ which has an associated label $y_s$ using a regular supervised loss $\mathcal{L}_{sup}$. An image from the target domain $x_T$ is passed to the same model $\mathcal{M}$ to obtain $\hat{y}_T$. The same image is passed through the model after augmentation $g(x_T)$ (details about the choice of $g$ in section \ref{section:aug}) to produce $\hat{y}^{aug}_{T}$. The paired consistency loss $\mathcal{L}_{pc}$ aims at minimising the difference between $\hat{y}_T$ and $\hat{y}^{aug}_{T}$. Following the guidance from \cite{perone2019unsupervised} and \cite{orbes_and_varsavsky}, the soft dice loss is used as $\mathcal{L}_{pc}$, defined as $\mathcal{L}_{pc}(\hat{y},\hat{y}^{aug}) = 1 - 2\sum_{i=1}^{N} \hat{y}_i\hat{y}_i^{aug} / (\sum_{i=1}^{N} \hat{y}_i + \sum_{i=1}^{N} \hat{y}_i^{aug})$. By enforcing predictions to be invariant to some noise or perturbation $\delta$, i.e. $y(x) = y(x+\delta)$, we encourage the decision boundary of our classifier to fall in regions of low density.
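A direct transcription of the consistency loss might look as follows (a sketch on flat lists of per-pixel probabilities, function name ours; we use the standard soft dice formulation, so that perfect agreement between the two predictions gives zero loss):

```python
def paired_consistency_loss(y_hat, y_hat_aug):
    # Soft dice consistency between the prediction on x_T and the
    # prediction on the augmented image g(x_T):
    #   L_pc = 1 - 2 * sum_i(a_i * b_i) / (sum_i a_i + sum_i b_i)
    inter = sum(a * b for a, b in zip(y_hat, y_hat_aug))
    denom = sum(y_hat) + sum(y_hat_aug)
    if denom == 0:
        return 0.0  # both predictions empty: treat as perfectly consistent
    return 1.0 - 2.0 * inter / denom
```

In practice this is evaluated per batch on GPU tensors, often with a small smoothing term added to numerator and denominator instead of the explicit empty-prediction check.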
Figure \ref{fig:method} (right) depicts the benefits of domain adversarial learning. In frame a) we see a source and a target domain, represented by green and red ovals respectively; they contain representations of foreground and background pixels, shown as grey crosses and red dots. Frame b) shows what happens when domain adversarial learning is used: the domains become indistinguishable, which makes the ovals overlap. However, the decision boundary separating the two classes is drawn by looking only at the source domain. In frame c) we introduce paired consistency. Since the unlabelled points lie near the labelled ones, they are assigned the label of their nearest cluster, which allows the boundary to be redrawn in an area of low density. We include t-SNE plots of our learned features in Figure 3 of the Supplementary Material, which clearly show the positive effect of domain adaptation on the separability of lesion and background across both domains.
The method proposed in \cite{orbes_and_varsavsky} achieved consistency training using what they denote as ``ground-truth augmentation'': two registered scans of the same patient acquired with different acquisition parameters. In this work, we avoid this requirement by providing stronger augmentation and by dropping the third output of their domain discriminator, which sought a feature space containing no information about whether an image was source, target or target-augmented. Note that this minor change significantly reduces the data requirements of the model.
\subsubsection{Implementation details}
We use a simple 2D U-Net with five levels as the backbone of our model. Each encoding block has two 2D convolutions with kernel sizes of $3\times3$, a stride of 1, and padding of 1 (except the first, which has a kernel size of 5 and padding of 2). The blocks have a gradually increasing number of filters: 64, 96, 128, 256, 512. We use instance norm and leaky ReLU after each convolution in each block, as in \cite{isensee2018nnu}. We use max pooling between encoder blocks, bilinear upsampling between decoder blocks, and the standard concatenation of feature vectors from the same depth.
For the domain discriminator we use a small VGG-style convolutional neural network with four convolutions of kernel size 3 and stride 2, each followed by a batch norm operation, and three fully connected layers of size 28800, 256 and 128 respectively, with 0.5 dropout in between. We follow the suggestion from \cite{kamnitsas2017unsupervised} to feed a concatenated vector of multi-depth features into the discriminator. Specifically, we take the activations from each depth of the decoder (excluding the centre of the U-Net) and use bilinear interpolation to make them the same shape as the penultimate depth in the spatial dimensions. We then concatenate on the channel dimension. All code is written in PyTorch and will be made available at the time of publication.
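The multi-depth feature concatenation fed to the discriminator can be sketched in NumPy. Nearest-neighbour resampling stands in for the bilinear interpolation used in the paper, and the channel counts are illustrative only.

```python
import numpy as np

def nearest_resize(feat, out_h, out_w):
    """Nearest-neighbour stand-in for bilinear interpolation;
    feat has shape (channels, h, w)."""
    c, h, w = feat.shape
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output col
    return feat[:, rows][:, :, cols]

def concat_multi_depth(features, target_hw):
    """Resample decoder activations from several depths to a common spatial
    size and stack them on the channel axis, as fed to the discriminator."""
    out_h, out_w = target_hw
    return np.concatenate(
        [nearest_resize(f, out_h, out_w) for f in features], axis=0)
```

For example, resampling feature maps of shape $(64, 8, 8)$ and $(128, 16, 16)$ to a common $16\times16$ grid yields a $(192, 16, 16)$ input for the discriminator.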
\begin{figure}[h]
\centering
\includegraphics[scale=0.05]{MICCAI2020_figure_w_diagram.pdf}
\caption{Left: Our domain adaptation method uses a paired consistency loss $\mathcal{L}_{pc}$ which encourages predictions from the target image $x_T$ to be invariant to some augmentation $g$. The backbone is a single 2D U-Net (parameters are shared) with features from each depth being interpolated bilinearly, concatenated and fed to a domain discriminator which uses an adversarial loss $\mathcal{L}_{adv}$ to maximise domain confusion. Right: In a) we depict representations of pixels in some feature space; the green circle is source and red is target, with crosses and circles depicting foreground and background. b) shows what happens when we introduce an adversarial loss: the feature spaces are shifted such that they are indistinguishable, but the decision boundary is drawn with only source data. In c) we show the effect of the PC loss in moving the decision boundary to an area of low density.}
\label{fig:method}
\end{figure}
\section{Experiments}
In the proposed test-time UDA, an unusual approach to train\/val\/test splits is taken: part of the data on which we train the paired consistency component of our model $\mathcal{M}$ is the same data on which labelling quality is tested. Please note that the labels of the test set are \textit{never} used during training. In order to prevent data leakage, all hyperparameter tuning strategies and model selection steps were performed on a completely separate dataset (results not shown).
Each UDA run was trained for exactly 15,000 iterations using a batch size of 20 with the exception of the supervised baseline which had a validation subject to allow for model selection. We used the Adam optimiser with a learning rate of $1\times10^{-3}$ with no learning rate policy. A separate Adam optimiser with learning rate $1\times10^{-4}$ was used for the discriminator. In order to further validate our model we submit results to the online validation server for the ISBI 2015 challenge. We provide results for the first timepoint of each of the test subjects in the supplementary material.
\subsubsection{Augmentation}
\label{section:aug}
In \cite{perone2019unsupervised} the authors used random affine transforms (rotation, scaling, shearing and translation) as well as random elastic deformations, where an affine grid is warped and applied to the image. Their method applies augmentation to the output of a neural network, so the augmentation does not need to be differentiable. We use all of these augmentations except elastic deformation, as it is difficult to implement in a differentiable manner (a requirement of the proposed method). Following the recommendations in \cite{orbes_and_varsavsky} we use augmentation which is realistic, valid and smooth. To this end, we also add bias field augmentation \cite{gibson2017niftynet} and k-space augmentation \cite{shaw2019mri} as extra transformations, as they have been shown to produce realistic variations in MRIs.
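For illustration, a multiplicative bias field can be built from a low-order polynomial so that every operation remains differentiable in the input image. This is a hypothetical NumPy sketch loosely in the spirit of the bias-field augmentation cited above; the three-coefficient field and its parameterisation are assumptions, not the paper's implementation.

```python
import numpy as np

def bias_field_augment(img, coeffs):
    """Multiply a 2D image by a smooth, strictly positive field exp(P(x, y)).
    Every operation is differentiable in `img` (the requirement of the
    proposed method); `coeffs` is a hypothetical 3-parameter field strength."""
    h, w = img.shape
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                         indexing="ij")
    a, b, c = coeffs
    field = np.exp(a * xs + b * ys + c * xs * ys)  # smooth positive field
    return img * field
```

With all coefficients zero the field is identically 1 and the augmentation reduces to the identity, which makes the sketch easy to sanity-check.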
\subsubsection{Data}
Domain adaptation is here applied to multiple sclerosis lesion segmentation as an exemplary task. As the source domain we use data from two separate MICCAI challenges on multiple sclerosis lesion segmentation, MS2008 \cite{styner20083d} and MS2016 \cite{commowick2018objective}. Data from ISBI2015 \cite{carass2017longitudinal} is used as the target domain. The FLAIR sequences from each of these datasets are skull-stripped (using HD-BET \cite{isensee2019automated}), bias-field corrected using the N4 algorithm, and registered to MNI space as in \cite{valverde2019one}.
\subsection{Results}
We present results from five different methods. First, a lower bound is provided by a model trained on the source domain and applied to data from the target domain, which we refer to as \textit{No adaptation}. The highest expected performance is provided by training a model on the target domain images and labels, fine-tuned from a model trained on the source domain, which we refer to as \textit{Supervised}. When we use paired consistency and adversarial learning to domain-adapt to a single subject in the target domain, this is denoted as \textit{One-shot UDA}. We compare this against a model which sees this and two more subjects from the target domain, which we refer to as \textit{Test-time UDA}. A comparison was also made against a traditional approach to domain adaptation where the model trains on target domain data which excludes the test subject; we refer to this variant as \textit{Classic UDA}. In Table \ref{tab:ranking_results} we show results for each of these methods evaluated on a variety of metrics, chosen to match those in \cite{carass2017longitudinal}. LFPR is the lesion false positive rate and LTPR is the lesion true positive rate, both implemented as in \cite{styner20083d}. We follow the recommendations of the MICCAI Grand Challenges, specifically the method described in \cite{simpson2019large}, to provide a single rank score comparing all methods. Note that this ranking method provides a single summary metric that incorporates a per-metric non-parametric statistical significance model.
\begin{table}
\centering
\scalebox{.77}{
\begin{tabular}{l|c|c|c|c|c|c|c|c}
\toprule
Method & Rank & Dice & Hausdorff & LFPR & LTPR & PPV & Sensitivity & Vol Diff \\
\midrule
Supervised & 1.71 & 0.67 $\pm$ 0.1 & 37. $\pm$ 8. & 0.52 $\pm$ 0.2 & 0.61 $\pm$ 0.2 & 0.67 $\pm$ 0.2 & 0.73 $\pm$ 0.2 & 0.44 $\pm$ 0.2 \\
Test-time UDA (ours) & 2.43 & 0.61 $\pm$ 0.2 & 48. $\pm$ 5. & 0.54 $\pm$ 0.2 & 0.57 $\pm$ 0.2 & 0.54 $\pm$ 0.2 & 0.76 $\pm$ 0.09 & 0.72 $\pm$ 1.0 \\
One-shot UDA (ours) & 2.71 & 0.60 $\pm$ 0.2 & 47. $\pm$ 11 & 0.52 $\pm$ 0.2 & 0.51 $\pm$ 0.2 & 0.54 $\pm$ 0.2 & 0.76 $\pm$ 0.09 & 0.92 $\pm$ 2.0 \\
Classic UDA & 3.86 & 0.56 $\pm$ 0.1 & 47. $\pm$ 14 & 0.55 $\pm$ 0.2 & 0.58 $\pm$ 0.2 & 0.49 $\pm$ 0.2 & 0.73 $\pm$ 0.2 & 0.78 $\pm$ 0.5 \\
No adaptation & 4.29 & 0.57 $\pm$ 0.1 & 55. $\pm$ 7. & 0.68 $\pm$ 0.08 & 0.55 $\pm$ 0.2 & 0.49 $\pm$ 0.2 & 0.76 $\pm$ 0.08 & 0.76 $\pm$ 0.7 \\
\bottomrule
\end{tabular}
}
\caption{Results on metrics described in \cite{carass2017longitudinal}. The metrics are ranked using the scheme from \cite{simpson2019large} to provide a rank score. The proposed test-time methods are labelled (ours).}
\label{tab:ranking_results}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{qualitative_figure_4.png}
\caption{Some qualitative results comparing no adaptation, classic unsupervised domain adaptation, one-shot unsupervised domain adaptation, test-time unsupervised domain adaptation, and the hypothetical gold-standard using supervised learning. Red denotes the ground-truth annotation, true positives are shown in green, false negatives are in yellow and false positives are in blue.}
\label{fig:my_label}
\end{figure}
\section{Discussion}
The results in Table \ref{tab:ranking_results} show a clear ordering, with Supervised as the best performing method, as expected, followed by Test-time UDA, One-shot UDA, Classic UDA and finally No adaptation. These results reveal that learning about the domain shift in general, i.e. Classic UDA, is not enough to get the best performance on each test subject in the target domain. By domain-adapting to each test subject, we are adapting to that subject's individual anatomical and pathological presentation. It is also worth mentioning that our One-shot unsupervised domain adaptation achieved a dice of 0.60 on the ISBI training set, which is comparable to the 0.58 reported on the ISBI holdout set in \cite{valverde2019one}, despite not using a single label from ISBI. Results in Table \ref{tab:my_label} show the performance of Test-time UDA against a Supervised baseline, Classic UDA and One-shot UDA. Classic UDA outperformed One-shot, but Test-time UDA was best of all. Future work will include experiments on brain tumour segmentation and compare additional UDA methods in the Classic, One-shot and Test-time settings.
\begin{table}[h]
\centering
\scalebox{.77}{\begin{tabular}{l|c|c|c|c|c|c|c}
\toprule
Method & Rank & Dice & LFPR & LTPR & PPV & TPR & Volume Difference \\
\midrule
Valverde et al. (Supervised) & 1.50 & 0.60 $\pm$ 0.2 & 0.22 $\pm$ 0.2 & 0.41 $\pm$ 0.2 & 0.73 $\pm$ 0.2 & 0.54 $\pm$ 0.2 & 5829 $\pm$ 7900 \\
Test-time UDA (ours) & 4.25 & 0.51 $\pm$ 0.2 & 0.53 $\pm$ 0.2 & 0.25 $\pm$ 0.2 & 0.59 $\pm$ 0.2 & 0.51 $\pm$ 0.2 & 6947 $\pm$ 8800 \\
Classic UDA & 4.42 & 0.49 $\pm$ 0.2 & 0.54 $\pm$ 0.2 & 0.28 $\pm$ 0.2 & 0.55 $\pm$ 0.2 & 0.48 $\pm$ 0.2 & 5784 $\pm$ 7500 \\
One-shot UDA (ours) & 4.50 & 0.48 $\pm$ 0.2 & 0.52 $\pm$ 0.3 & 0.28 $\pm$ 0.1 & 0.52 $\pm$ 0.3 & 0.51 $\pm$ 0.2 & 7009 $\pm$ 7700 \\
\bottomrule
\end{tabular}}
\caption{Results from the ISBI 2015 holdout set hosted at \url{https://smart-stats-tools.org/lesion-challenge}. We ran our three UDA methods on the first timepoint of each of the 14 test subjects. Note that one of the limitations of this form of validation is the low inter-rater agreement reported in Carass et al. The same ranking scheme was used as on the training set, except that the symmetric distance was used instead of the Hausdorff distance. Classic UDA outperformed One-shot, but Test-time UDA was best of all.}
\label{tab:my_label}
\end{table}
\section{Conclusion}
Existing approaches to unsupervised domain adaptation in medical image segmentation adapt to subjects in a target domain. The performance of these algorithms is then measured based on how well they generalise to unseen subjects in this target domain. When looking through scans in a hospital PACS system there is a large amount of heterogeneity in acquisition parameters. As an example, at our local hospital (anonymous), we found more than 1400 different brain MRI sequences being used. We can thus think of each of these scans as its own domain, which motivates what we call ``test-time unsupervised domain adaptation''. Note that this is not an algorithmic modification, but simply a training and testing framework, where a domain adaptation algorithm is trained and evaluated on the same target data. We perform experiments using a modern domain adaptation technique which combines the benefits of domain adversarial learning and consistency regularisation. Our experiments on multiple sclerosis lesions suggest that using domain adaptation on a single subject can be more effective than classic domain adaptation on more subjects.
\bibliographystyle{splncs04}
\section{Introduction}
Let $(X,d)$ be a nondegenerate $($i.e.,
with at least two points$)$ compact metric space, and let $f:X \rightarrow X$ be a continuous map. Such a pair $(X,f)$ is called a dynamical system. For a dynamical system $(X,f)$, let $M(X)$, $M(f,X)$ and $M_{erg}(f,X)$ denote the spaces of Borel probability measures, $f$-invariant probability measures and $f$-ergodic probability measures, respectively. Let $\rho$ be a metric for the weak*-topology on $M(X).$
We denote the sets of natural numbers, integers and nonnegative integers by $\N, \Z, \Z^+$, respectively.
It is believed that in most cases, positive topological entropy of $(X,f)$ implies a rich structure of $M(f,X).$ In \cite{Katok} Katok showed that every $C^{1+\alpha}$ diffeomorphism with positive topological entropy in dimension $2$ has horseshoes of large entropy. This implies that the system has ergodic measures of arbitrary intermediate metric entropies, that is, $\left\{h_{\mu}(f): \mu\right.$ is an ergodic measure$\left.\right\}$ includes $[0, \htop(f)),$ where $\htop(f)$ is the topological entropy of $(X,f)$ and $h_\mu(f)$ is the metric entropy of $\mu.$ Katok believed that this holds in any dimension.
\begin{Con}{(Katok)}
For every $C^{2}$ diffeomorphism $f$ on a Riemannian manifold $X$, the set $$\{h_{\mu}(f):\mu \text{ is an ergodic measure for } (X,f)\}$$
includes $[0, \htop(f))$.
\end{Con}
In the last decade, a number of partial results have been obtained; see \cite{Ures2012,QuasSoo2016,Burguet2020,GSW2017,KKK2018,LSWW2020,LiOpro2018,Sun2019}. In recent works, the shadowing property has proved to be a powerful tool for constructing ergodic measures. It is shown in \cite{LiOpro2018} that for a transitive dynamical system with the shadowing property, if the entropy function is upper semi-continuous, then for every $0 \leq c<\htop(f)$ the set of ergodic measures with entropy $c$ is residual in the space of invariant measures with entropy at least $c$. This implies that the set $\{h_{\mu}(f):\mu \text{ is an ergodic measure for } (X,f)\}$
includes $[0, \htop(f))$.
In this paper we would like to explore a refined version of Katok's conjecture. Let $\varphi:X\rightarrow \mathbb{R}$ be a continuous function. Denote
$$L_\varphi=\left[\inf_{\mu\in M(f,X)}\int\varphi d\mu, \, \sup_{\mu\in M(f,X)}\int\varphi d\mu\right]$$
and
$$\mathrm{Int}(L_\varphi)=\left(\inf_{\mu\in M(f,X)}\int\varphi d\mu, \, \sup_{\mu\in M(f,X)}\int\varphi d\mu\right).$$
In analogy with Katok's conjecture, we consider the following refined question.
\begin{Que}\label{Conjecture-2}
For every typical diffeomorphism $f$ on a Riemannian manifold $X$, every continuous function $\varphi$ on $X,$ and every $a\in \mathrm{Int}(L_\varphi),$ whether one has $$\{h_{\mu}(f):\mu\in M_{erg}(f,X) \text{ and }\int\varphi d\mu=a\}=\{h_{\mu}(f):\mu\in M(f,X) \text{ and }\int\varphi d\mu=a\}?$$
\end{Que}
In the present paper, we give a partial answer to this question.
We say that a continuous function $\varphi$ has {\it bounded variation} if there exists $\varepsilon>0$ for which
$
\sup _{n \in \mathbb{N}} \gamma_{n}(\varphi, \varepsilon)<\infty
$
with $$\gamma_{n}(\varphi, \varepsilon)=\sup \left\{\left|\sum_{k=0}^{n-1}\varphi(f^k(x))-\sum_{k=0}^{n-1}\varphi(f^k(y))\right|: d\left(f^{k} (x), f^{k} (y)\right)<\varepsilon \text { for } k=0, \ldots, n-1\right\}.$$
For any $a\in L_\varphi,$ define $$M_{\varphi}(a)=\{\mu\in M(f,X):\int\varphi d\mu=a\},\ M_{\varphi}^{erg}(a)=\{\mu\in M_{erg}(f,X):\int\varphi d\mu=a\}.$$ Then $M_{\varphi}(a)$ is closed in $M(f,X),$ and thus $\sup\{h_{\mu}(f):\mu\in M_\varphi(a)\}=\max\{h_{\mu}(f):\mu\in M_\varphi(a)\}$ for any $a\in L_\varphi$ if the entropy function is upper semi-continuous.
We denote the support of a probability measure $\mu$ by $S_\mu:=\{x\in X:\mu(U)>0\ \text{for any neighborhood}\ U\ \text{of}\ x\}.$
Now we state our main result as follows.
\begin{maintheorem}\label{thm-continuous}
Suppose that a homeomorphism $f:X\to X$ of a compact metric space is transitive, expansive, and has the shadowing property. Let $\varphi$ be a continuous function on $X$ with bounded variation.
Then:
\begin{description}
\item[(I)] For any $a\in \mathrm{Int}(L_\varphi),$ any $\mu\in M_\varphi(a)$ and any $\eta, \zeta>0$, there is $\nu\in M_\varphi^{erg}(a)$ such that $\rho(\nu,\mu)<\zeta$ and $|h_{\nu}(f)-h_{\mu}(f)|<\eta.$
\item[(II)] For any $a\in \mathrm{Int}(L_\varphi),$ any $\mu\in M_\varphi(a),$ any $0\leq c\leq h_{\mu}(f)$ and any $\eta, \zeta>0$, there is $\nu\in M_\varphi^{erg}(a)$ such that $\rho(\nu,\mu)<\zeta$ and $|h_{\nu}(f)-c|<\eta.$
\item[(III)] For any $a\in \mathrm{Int}(L_\varphi)$ and $0\leq c< \max\{h_{\mu}(f):\mu\in M_\varphi(a)\},$ the set $\{\mu\in M_\varphi^{erg}(a):h_{\mu}(f)=c,\ S_\mu=X\}$ is residual in $\{\mu\in M_\varphi(a):h_{\mu}(f)\geq c\}.$
\item[(IV)] $\{(\int \varphi d\mu, h_\mu(f)):\mu\in M(f,X),\ \int \varphi d\mu \in\mathrm{Int}(L_\varphi)\}=\{(\int \varphi d\mu, h_\mu(f)):\mu\in M_{erg}(f,X),\ \int \varphi d\mu \in\mathrm{Int}(L_\varphi)\}=\{(a,b):a\in\mathrm{Int}(L_\varphi), 0\leq b\leq \max\{h_{\mu}(f):\mu\in M_\varphi(a)\}\}.$
\end{description}
\end{maintheorem}
The hyperbolic flow and singular hyperbolic flow versions of Theorem \ref{thm-continuous} are being prepared \cite{HT2022}.
Readers can see the dynamical systems and functions to which Theorem \ref{thm-continuous} is applicable in section \ref{section-applications}.
\begin{Rem}
(1) In \cite{EKW} Eizenberg, Kifer and Weiss proved for systems with the specification property that any $f$-invariant probability measure $\mu$ is the weak limit of a sequence of ergodic measures
$\{\mu_n\}_{n=1}^{\infty}$ such that the entropy of $\mu$ is the limit of the entropies of the $\mu_n$. Pfister and Sullivan referred to this property as the entropy-dense property \cite{PS2005} and proved that it holds for systems with the approximate product property. By Theorem \ref{thm-continuous}(I), if a homeomorphism $f:X\to X$ of a compact metric space is expansive, transitive, and has the shadowing property, then the entropy-dense property holds in $M_\varphi(a)$ for any $a\in \mathrm{Int}(L_\varphi)$ and any continuous function $\varphi$ with bounded variation.
(2) Li and Oprocha proved in \cite{LiOpro2018} that for a transitive dynamical system with the shadowing property, for every invariant measure $\mu$ and every $0 \leq c \leq h_{\mu}(f),$ there exists a sequence of ergodic measures $\{\mu_{n}\}_{n=1}^{\infty}$ such that $\lim _{n \rightarrow \infty} \mu_{n}=\mu$ and $\lim _{n \rightarrow \infty} h_{\mu_{n}}(f)=c$. If, in addition, the entropy function is upper semi-continuous, they proved that for every $0 \leq c<\htop(f)$ the set of ergodic measures with entropy $c$ is residual in the space of invariant measures with entropy at least $c$. From Theorem \ref{thm-continuous}(II) and (III) we obtain more refined results for a homeomorphism $f:X\to X$ of a compact metric space which is expansive, transitive and has the shadowing property: for a continuous function $\varphi$ with bounded variation, for any $a\in \mathrm{Int}(L_\varphi),$ any $\mu\in M_\varphi(a)$ and any $0 \leq c \leq h_{\mu}(f),$ there exists a sequence of ergodic measures $\{\mu_{n}\}_{n=1}^{\infty}\subseteq M_\varphi(a)$ such that $\lim _{n \rightarrow \infty} \mu_{n}=\mu$ and $\lim _{n \rightarrow \infty} h_{\mu_{n}}(f)=c;$ moreover, for any $a\in \mathrm{Int}(L_\varphi)$ and $0\leq c< \max\{h_{\mu}(f):\mu\in M_\varphi(a)\},$ within $M_\varphi(a)$ the set of ergodic measures with entropy $c$ and full support is residual in the space of invariant measures with entropy at least $c$.
(3) From Theorem \ref{thm-continuous}(IV) we obtain a partial answer to Question \ref{Conjecture-2} for a homeomorphism $f:X\to X$ of a compact metric space which is expansive, transitive and has the shadowing property, namely $\left\{h_{\mu}(f): \mu\in M_\varphi^{erg}(a)\right\}=\left\{h_{\mu}(f): \mu\in M_\varphi(a)\right\}$ for any $a\in \mathrm{Int}(L_\varphi)$ and any continuous function $\varphi$ with bounded variation.
\end{Rem}
For a continuous function without the assumption of bounded variation, we do not know how to obtain results analogous to Theorem \ref{thm-continuous}.
In the proof of Theorem \ref{thm-continuous} there are two key ingredients: the 'multi-horseshoe' entropy-dense property (see Theorem \ref{Mainlemma-convex-by-horseshoe}) and the conditional variational principle (see Theorem \ref{BarreiraDoutor2009-theorem3}) proved by L. Barreira and P. Doutor in \cite{BarreiraDoutor2009}.
\begin{figure}[h]\caption{Graph of $(\int \varphi d\mu, h_\mu(f))$}\label{fig-1}
\begin{center}
\begin{tikzpicture}[domain=0:2]
\draw[->] (-1,0) -- (6,0)
node[below right] {$\int \varphi d\mu$};
\draw[->] (0,-1) -- (0,4)
node[left] {$h_\mu(f)$};
\draw (0.7,2.5) .. controls (3.5,4) .. (5.2,2.2)
node[below] at (3,2) {$(\int \varphi d\mu, h_\mu(f))$};
\draw[dashed] (0.7,2.5) -- (0.7,0);
\draw[dashed] (5.2,2.2) -- (5.2,0);
\end{tikzpicture}
\end{center}
\end{figure}
Finally, we draw the graph of $(\int \varphi d\mu, h_\mu(f)).$ Theorem \ref{thm-continuous} shows that every point in the region between the two dotted lines can be attained by ergodic measures.
\textbf{Organization of this paper.} The remainder of this paper is organized as follows.
In Section \ref{section-entropy-dense} we introduce the 'multi-horseshoe' entropy-dense property and prove that it holds for transitive topologically Anosov systems.
In Section \ref{Almost Additive} and \ref{section-almost2}, using conditional variational principle (see Theorem \ref{BarreiraDoutor2009-theorem3}) proved by L. Barreira and P. Doutor in \cite{BarreiraDoutor2009},
we give abstract conditions on which the results of Theorem \ref{thm-continuous} hold in the more general context of almost additive sequences of continuous functions.
In Section \ref{section-thm}, using the 'multi-horseshoe' entropy-dense property, we show that the abstract conditions given in Sections \ref{Almost Additive} and \ref{section-almost2} are satisfied for transitive topologically Anosov systems, and thus we obtain Theorem \ref{thm-continuous}.
In Section \ref{section-applications}, we apply the results of the previous sections to transitive locally maximal hyperbolic sets and transitive two-sided subshifts of finite type. Finally, we consider homoclinic classes and give corresponding results on the refined Katok conjecture.
\section{'Multi-Horseshoe' Entropy-Dense Property}\label{section-entropy-dense}
In this section, we introduce 'multi-horseshoe' entropy-dense property which shall serve for our needs in the future.
Eizenberg, Kifer and Weiss proved for systems with the specification property that \cite{EKW} any $f$-invariant probability measure $\nu$ is the weak limit of a sequence of ergodic measures
$\{\nu_n\}$, such that the entropy of $\nu$ is the limit of the entropies of the $\{\nu_n\}$. This is a central point in
large deviations theory, which was first emphasized in \cite{FO}. Meanwhile, this also plays a crucial part in the computation of the Billingsley dimension \cite{Billingsley1960, Billingsley1961} on shift spaces \cite{PS2003}.
Pfister and Sullivan referred to this property as the \emph{entropy-dense} property \cite{PS2005}.
\begin{Def}
We say $(X, f)$ satisfies the {\it entropy-dense property} (or $M_{erg}(f, X)$ is {\it entropy-dense} in $M(f, X)$), if for any $\mu\in M(f, X)$, for any neighborhood $G$ of $\mu$ in $M(X)$, and for any $\eta>0$, there exists a $\nu\in M_{erg}(f, X)$ such that $h_{\nu}(f)>h_{\mu}(f)-\eta$ and $\nu \in G$.
\end{Def}
\begin{Def}
We say $(X, f)$ satisfies the {\it refined entropy-dense property} (or $M_{erg}(f, X)$ is {\it refined entropy-dense} in $M(f, X)$), if for any $\mu\in M(f, X)$, for any neighborhood $G$ of $\mu$ in $M(X)$,
and for any $\eta>0$, there exists a closed $f$-invariant set $\Lambda_{\mu}\subseteq X $ such that $M(f, \Lambda_{\mu})\subseteq G$ and $\htop(f, \Lambda_{\mu})>h_{\mu}(f)-\eta$. By classical variational principle,
it is equivalent that for any neighborhood $G$ of $\mu$ in $M(X)$, and for any $\eta>0$, there exists a $\nu\in M_{erg}(f, X)$ such that $h_{\nu}(f)>h_{\mu}(f)-\eta$ and $M(f, S_{\nu})\subseteq G$.
\end{Def}
Of course, $\textrm{refined entropy-dense} \Rightarrow\textrm{entropy-dense}\Rightarrow $ ergodic measures are dense in the space of invariant measures. For systems with the approximate product property, Pfister and Sullivan had in fact obtained the refined entropy-dense property via \cite[Proposition 2.3]{PS2005}.
Note that if a dynamical system is transitive and has the shadowing property, then it has the approximate product property, by the definitions of these properties. Then we have
\begin{Prop}\label{prop-entropy-dense-for-shadowing}
Suppose that $(X, f)$ is topologically transitive and satisfies the shadowing property. Then $(X, f)$ has the refined entropy-dense property.
\end{Prop}
For $C^r$ diffeomorphisms, a theorem by A. Katok \cite{Katok} asserts that any ergodic hyperbolic measure can be approximated by a horseshoe. Here, for topological dynamical systems, we introduce the 'multi-horseshoe' entropy-dense property and show that it holds for transitive topologically Anosov systems.
\begin{Def}\label{definition-hyperbolic}
A homeomorphism $f:X\to X$ of a compact metric space is called {\it topologically hyperbolic} or {\it topologically Anosov} if $X$ has infinitely many points and $(X,f)$ is expansive and satisfies the shadowing property.
\end{Def}
For any $m\in\N$ and $\{\nu_i\}_{i=1}^m \subseteq M(X)$, we write $\cov\{\nu_i\}_{i=1}^m$ for the convex combination of $\{\nu_i\}_{i=1}^m$, namely,
$$\cov\{\nu_i\}_{i=1}^m=\cov(\nu_1,\cdots,\nu_m):=\left\{\sum_{i=1}^mt_i\nu_i:t_i\in[0, 1], 1\leq i\leq m~\textrm{and}~\sum_{i=1}^mt_i=1\right\}.$$
\begin{Def}\label{def-strong-basic}
We say $(X, f)$ satisfies the {\it 'multi-horseshoe' entropy-dense property} (abbrev. {\it 'multi-horseshoe' dense property}) if for any $K=\cov\{\mu_i\}_{i=1}^m\subseteq M(f, X),$ any $x\in X$ and any $\eta, \zeta,\eps>0$, there exist compact invariant subsets $\Lambda_i\subseteq\Lambda\subsetneq X$ such that for each $1\leq i\leq m$
\begin{enumerate}
\item $(\Lambda_i,f|_{\Lambda_i})$ and $(\Lambda,f|_{\Lambda})$ conjugate to transitive subshifts of finite type (and thus they are transitive and topologically Anosov).
\item $\htop(f, \Lambda_i)>h_{\mu_i}(f)-\eta$ and consequently, $\htop(f, \Lambda)>\sup\{h_{\kappa}(f):\kappa\in K\}-\eta.$
\item $d_H(K, M(f, \Lambda))<\zeta$, $d_H(\mu_i, M(f, \Lambda_i))<\zeta$.
\item There is a positive integer $L$ such that for any $z$ in $\Lambda_i$ or $\Lambda$ one has $f^{j+mL}(z) \in B(x,\eps)$ for some $0\leq j\leq L-1$ and any $m\in\Z$.
\end{enumerate}
\end{Def}
\begin{Thm}\label{Mainlemma-convex-by-horseshoe}
Suppose $(X, f)$ is topologically Anosov and transitive. Then $(X, f)$ satisfies the 'multi-horseshoe' dense property.
\end{Thm}
\subsection{Some definitions}
\subsubsection{The space of probability measures}
Consider a topological dynamical system $(X, f).$ The space of Borel probability measures on $X$ is denoted by $M(X)$ and the set of continuous functions on $X$ by $C(X)$. We endow $C(X)$ with the norm $\|\varphi\|=\max\{|\varphi(x)|:x\in X\}$.
Let ${\{\varphi_{j}\}}_{j\in\mathbb{N}}$ be a dense subset of $C(X)$, then
$$\rho(\xi, \tau)=\sum_{j=1}^{\infty}\frac{|\int\varphi_{j}d\xi-\int\varphi_{j}d\tau|}{2^{j}\|\varphi_{j}\|}$$
defines a metric on $M(X)$ for the $weak^{*}$ topology \cite{Walters}.
Then we denote the Hausdorff distance between two nonempty subsets of $M(X),$ $A$ and $B,$ by $$d_H(A, B):=\max\set{\sup_{x\in A}\inf_{y\in B}\rho(x, y),\sup_{y\in B}\inf_{x\in A}\rho(y, x) }.$$
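To make the definition concrete, here is a small Python sketch that evaluates $d_H$ for finite sets, with the ambient metric $\rho$ replaced by $|x-y|$ on the reals purely for illustration.

```python
def hausdorff(A, B, dist=lambda x, y: abs(x - y)):
    """Hausdorff distance between two finite non-empty sets, following the
    definition in the text (|x - y| stands in for the metric rho here)."""
    sup_a = max(min(dist(a, b) for b in B) for a in A)  # how far A is from B
    sup_b = max(min(dist(a, b) for a in A) for b in B)  # how far B is from A
    return max(sup_a, sup_b)
```

For instance, $d_H(\{0,1\},\{0,3\})=2$: every point of $\{0,1\}$ is within $1$ of $\{0,3\}$, but the point $3$ is at distance $2$ from $\{0,1\}$.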
For $\nu\in M(X)$ and $r>0$, we denote a ball in $M(X)$ centered at $\nu$ with radius $r$ by
$$\mathcal{B}(\nu, r):=\{\mu\in M(X):\rho(\nu, \mu)<r\}. $$
One notices that
\begin{equation}\label{diameter-of-Borel-pro-meas}
\rho(\xi, \tau)\leq2~~\textrm{for any}~~\xi, \tau\in M(X).
\end{equation}
It is also well known that the natural embedding $j:x\mapsto \delta_x$ is continuous. Since $X$ is compact and $M(X)$ is Hausdorff, one sees that there is a homeomorphism between $X$ and its image $j(X)$. Therefore, without loss of generality
we will assume that
\begin{equation}\label{metric-on-X}
d(x, y)=\rho(\delta_x, \delta_y).
\end{equation}
For $x\in X$ and $\varepsilon>0$, we denote a ball in $X$ centered at $x$ with radius $\varepsilon$ by
$$B(x,\varepsilon):=\{y\in X:d(x,y)<\varepsilon\}.$$
A straight calculation using \eqref{diameter-of-Borel-pro-meas} and \eqref{metric-on-X} gives
\begin{Lem}\label{lem:prohorov}
For any $\varepsilon > 0,\delta >0$, and any two sequences $\{x_i\}_{i=0}^{n-1},\{y_i\}_{i=0}^{n-1}$ of $X$, if $d(x_i,y_i)<\varepsilon$ holds for any $i\in [0,n-1]$, then for any $J\subseteq \{0,1,\cdots,n-1\}$ with $\frac{n-|J|}{n}<\delta$, one has:
\begin{description}
\item[(a)] $\rho(\frac{1}{n}\sum_{i=0}^{n-1}\delta_{x_i},\frac{1}{n}\sum_{i=0}^{n-1}\delta_{y_i})<\varepsilon.$
\item[(b)] $\rho(\frac{1}{n}\sum_{i=0}^{n-1}\delta_{x_i},\frac{1}{|J|}\sum_{i\in J}\delta_{y_i})<\varepsilon+2\delta.$
\end{description}
\end{Lem}
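Part (a), for instance, follows in one line from the definition of $\rho$ and the convention \eqref{metric-on-X}; we sketch it here for completeness.

```latex
\begin{proof}[Sketch of (a)]
For each $j$,
\[
\Big|\int\varphi_j\,d\Big(\tfrac1n\sum_{i=0}^{n-1}\delta_{x_i}\Big)
-\int\varphi_j\,d\Big(\tfrac1n\sum_{i=0}^{n-1}\delta_{y_i}\Big)\Big|
\le\frac1n\sum_{i=0}^{n-1}\big|\varphi_j(x_i)-\varphi_j(y_i)\big|.
\]
Dividing by $2^{j}\|\varphi_j\|$ and summing over $j$ yields
\[
\rho\Big(\frac1n\sum_{i=0}^{n-1}\delta_{x_i},\frac1n\sum_{i=0}^{n-1}\delta_{y_i}\Big)
\le\frac1n\sum_{i=0}^{n-1}\rho(\delta_{x_i},\delta_{y_i})
=\frac1n\sum_{i=0}^{n-1}d(x_i,y_i)<\varepsilon,
\]
using \eqref{metric-on-X} in the last equality.
\end{proof}
```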
Lemma \ref{lem:prohorov} is straightforward to verify; it shows that if two finite orbit segments of $x$ and $y$ stay close most of the time, then the empirical measures they induce are also close.
\subsubsection{Topological entropy and metric entropy}
Let us now recall the definition of topological entropy given by Bowen in \cite{Bowen1973}. For $x, y\in X$ and $n\in\N$, the Bowen distance between $x$ and $y$ is defined as
$$d_n(x, y):=\max\{d(f^i(x), f^i(y)):i=0, 1, \cdots, n-1\}$$
and the Bowen ball centered at $x$ with radius $\eps>0$ is defined as
$$B_n(x, \eps):=\{y\in X:d_n(x, y)<\eps\}. $$
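For concrete maps the Bowen distance is immediate to compute; the following plain-Python sketch mirrors the definition (the doubling map and the absolute-value metric used below are illustrative choices only).

```python
def bowen_distance(x, y, f, n, d=lambda a, b: abs(a - b)):
    """d_n(x, y) = max over 0 <= i < n of d(f^i(x), f^i(y))."""
    best = 0.0
    for _ in range(n):
        best = max(best, d(x, y))  # compare the current iterates
        x, y = f(x), f(y)          # advance both orbits one step
    return best
```

For the doubling map $t\mapsto 2t \bmod 1$, nearby points separate exponentially, so $d_n$ grows with $n$ until the orbits are macroscopically apart.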
Let $E\subseteq X$, and $\mathcal {G}_{n}(E, \sigma)$ be the collection of all finite or countable covers of $E$ by sets of the form $B_{u}(x, \sigma)$ with $u\geq n$. We set
$$C(E;t, n, \sigma, f):=\inf_{\mathcal {C}\in \mathcal {G}_{n}(E, \sigma)}\sum_{B_{u}(x, \sigma)\in \mathcal {C}}e^{-tu} \,\,\,\text{ and }
C(E;t, \sigma, f):=\lim_{n\rightarrow\infty}C(E;t, n, \sigma, f). $$
Then we define
$$\htop(E;\sigma, f):=\inf\{t:C(E;t, \sigma, f)=0\}=\sup\{t:C(E;t, \sigma, f)=\infty\}. $$
The \textit{Bowen topological entropy} of $E$ is
\begin{equation*}\label{definition-of-topological-entropy}
\htop(f, E):=\lim_{\sigma\rightarrow0} \htop(E;\sigma, f).
\end{equation*}
We call $(X, \B, \mu)$ a probability space if $\B$ is a Borel $\sigma$-algebra on $X$ and $\mu$ is a probability measure on $X$. For a finite measurable partition $\xi=\{A_1, \cdots, A_n\}$ of a probability space $(X, \B, \mu)$, define
$$H_\mu(\xi)=-\sum_{i=1}^n\mu(A_i)\log\mu(A_i). $$
Let $f:X\to X$ be a continuous map preserving $\mu$. We denote by $\bigvee_{i=0}^{n-1}f^{-i}\xi$ the partition whose element is the set $\bigcap_{i=0}^{n-1}f^{-i}A_{j_i}, 1\leq j_i\leq n$. Then the following limit exists:
$$h_\mu(f, \xi)=\lim_{n\to\infty}\frac1n H_\mu\left(\bigvee_{i=0}^{n-1}f^{-i}\xi\right)$$
and we define the metric entropy of $\mu$ as
$$h_{\mu}(f):=\sup\{h_\mu(f, \xi):\xi~\textrm{is a finite measurable partition of X}\}. $$
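For example, on the full shift $(\Sigma_k,\sigma)$ with the Bernoulli measure $\mu_p$ determined by a probability vector $p=(p_1,\cdots,p_k)$, the partition $\xi$ into the $k$ cylinders of length one satisfies
$$H_{\mu_p}\Big(\bigvee_{i=0}^{n-1}\sigma^{-i}\xi\Big)=-n\sum_{j=1}^kp_j\log p_j,$$
and $\xi$ is a generating partition, so $h_{\mu_p}(\sigma)=-\sum_{j=1}^kp_j\log p_j$.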
\subsubsection{Transitive, mixing, expansive and shadowing property}
If for every pair of non-empty open sets $U,$ $V$ there is an integer $n$ such that $f^n(U)\cap V\neq \emptyset$ then we call $(X, f)$ \textit{topologically transitive}.
Furthermore, if for every pair of non-empty open sets $U,$ $V$ there exists an integer $N$ such that $f^n(U)\cap V\neq \emptyset$ for every $n>N$, then we call $(X, f)$ \textit{topologically mixing}.
When $f:X\to X$ is a homeomorphism of a compact metric space, we say that $(X, f)$ is \emph{expansive} if there exists a constant $c>0$ such that for any distinct $x, y\in X$ one has $d(f^i(x), f^i(y))> c$ for some $i\in\Z$. We call $c$ an expansive constant.
When $f:X\to X$ is a homeomorphism of a compact metric space, we say that a subset $Y$ of $X$ is $f$-invariant if $f(Y)= Y.$
If $Y$ is a closed $f$-invariant subset of $X,$ then $(Y,f|_Y)$ also is a dynamical system. We will call it a subsystem of $(X,f).$
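For instance, the two-sided full shift $(\Sigma_k,\sigma)$ with the metric $d(x,y)=2^{-\min\{|j|:x_j\neq y_j\}}$ is both topologically mixing and expansive: if $x\neq y$, then $x_j\neq y_j$ for some $j\in\Z$, so $d(\sigma^j(x),\sigma^j(y))=1$, and hence any $c\in(0,1)$ is an expansive constant; mixing holds because for any two cylinders $U,V$ one has $\sigma^n(U)\cap V\neq\emptyset$ for all $n$ larger than the lengths of the blocks defining $U$ and $V$.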
A finite sequence $\C=\langle x_1, \cdots, x_l\rangle, l\in\N$ is called a \emph{chain}. Furthermore, if $d(f(x_i), x_{i+1})<\eps, 1\leq i\leq l-1$, we call $\C$ an $\eps$-chain with length $l.$
For any $m\in\N$, if there are $m$ $\eps$-chains $\mathfrak{C}_i=\langle x_{i, 1}, \cdots, x_{i, l_i}\rangle$, $l_i\in\N, 1\leq i\leq m$ satisfying that $d(f(x_{i, l_i}),x_{i+1, 1})<\eps, 1\leq i\leq m-1$, then we can concatenate $\mathfrak{C}_i$s to constitute a new $\eps$-chain
$$\langle x_{1, 1}, \cdots, x_{1, l_1}, x_{2, 1}, \cdots, x_{2, l_2}, \cdots, x_{m, 1}, \cdots, x_{m, l_m}\rangle$$
which we denote by $\mathfrak{C}_1\mathfrak{C}_2\cdots\mathfrak{C}_m$.
\begin{Def}\label{def-shadowing}
Suppose $f:X\to X$ is a homeomorphism of a compact metric space. For any $\delta>0$, a sequence $\{x_n\}_{n\in \Z}$ is called a \textit{$\delta$-pseudo-orbit} if
$d(f(x_n), x_{n+1})<\delta~~\textrm{for}~~n\in\Z.$ $\{x_n\}_{n\in \Z}$ is \textit{$\eps$-shadowed} by some $y\in X$ if
$d(f^n(y), x_n)<\eps~~\textrm{for any}~~n\in\Z.$
We say that $(X, f)$ has the \textit{shadowing property} if for any $\eps>0$, there exists $\delta>0$ such that any $\delta$-pseudo-orbit is $\eps$-shadowed by some point in $X$.
\end{Def}
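A standard example: the full shift $(\Sigma_k,\sigma)$ with the metric $d(x,y)=2^{-\min\{|j|:x_j\neq y_j\}}$ has the shadowing property, with an explicit choice of $\delta$. Given $\eps=2^{-N}$, take $\delta=2^{-(N+2)}$. If $\{x^{(n)}\}_{n\in\Z}$ is a $\delta$-pseudo-orbit, then $d(\sigma(x^{(n)}),x^{(n+1)})<\delta$ forces $x^{(n)}_{j+1}=x^{(n+1)}_{j}$ for $|j|\leq N+2$. The point $y$ defined by $y_n:=x^{(n)}_0$ then satisfies $(\sigma^ny)_j=x^{(n+j)}_0=x^{(n)}_j$ for $|j|\leq N+1$, so $d(\sigma^n(y),x^{(n)})\leq2^{-(N+2)}<\eps$ for all $n\in\Z$.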
\subsection{Some lemmas}
\begin{Prop}\label{prop-transitive-shadowing-for-f-n}
Suppose $f:X\to X$ is a homeomorphism of a compact metric space. Consider $\Delta\subseteq X$ which satisfies $f^n(\Delta)=\Delta$ for some $n\in\N$ and let $\Lambda=\bigcup_{i=0}^{n-1}f^i(\Delta).$ If $f^i(\Delta)\cap f^j(\Delta)=\emptyset$ for any $0\leq i<j\leq n-1,$ then
\begin{description}
\item[(1)] if $(\Delta, f^n)$ is expansive, then $(\Lambda, f)$ is expansive;
\item[(2)] if $(\Delta, f^n)$ is topologically transitive, then $(\Lambda, f)$ is topologically transitive;
\item[(3)] if $(\Delta, f^n)$ has the shadowing property, then $(\Lambda, f)$ also has the shadowing property.
\end{description}
\end{Prop}
\begin{proof}
Items (1) and (2) follow directly from the uniform continuity of $f, \cdots, f^{n-1}.$ Since $(\Delta, f^n)$ has the shadowing property and $f:X\to X$ is a homeomorphism, $(f^i(\Delta), f^n)$ has the shadowing property for any $0\leq i\leq n-1.$ Combining this with $f^i(\Delta)\cap f^j(\Delta)=\emptyset$ for any $0\leq i<j\leq n-1,$ we see that $(\Lambda, f^n)$ also has the shadowing property. By \cite[Theorem 2.3.3]{AH}, for any integer $k>0$ a dynamical system $(X,f)$ has the shadowing property if and only if $(X,f^{k})$ does. So $(\Lambda, f)$ has the shadowing property.
\end{proof}
For $\eps>0$ and $n\in\N$, two points $x$ and $y$ are $(n, \eps)$-separated if
$d_n(x,y)>\eps.$
A subset $E$ is $(n, \eps)$-separated if any pair of different points of $E$ are $(n, \eps)$-separated. For $x\in X$, we define the empirical measure of $x$ as
\begin{equation*}
\mathcal{E}_{n}(x):=\frac{1}{n}\sum_{j=0}^{n-1}\delta_{f^{j}(x)},
\end{equation*}
where $\delta_{x}$ is the Dirac mass at $x$.
Let $F\subseteq M(X)$ be a neighbourhood of $\nu \in M(f,X)$. Define $X_{n,F}:=\{x\in X:\mathcal{E}_{n}(x)\in F\}.$
For $k\in\N$, let $P_k(f):=\{x\in X:f^k(x)=x\}$.
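For instance, if $x\in P_k(f)$, then for every $m\in\N$ one has
$$\mathcal{E}_{mk}(x)=\frac{1}{mk}\sum_{j=0}^{mk-1}\delta_{f^{j}(x)}=\frac{1}{k}\sum_{j=0}^{k-1}\delta_{f^{j}(x)},$$
which is an $f$-invariant measure supported on the orbit of $x$; in particular, $x\in X_{mk,F}$ for every neighbourhood $F$ of this periodic measure.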
\begin{Lem}\label{Maincor-mu-gamma-n-expansive}\cite[Corollary 4.13]{DT}
Suppose $(X, f)$ is topologically Anosov and transitive.
Then for any $\eta>0$, there exists $\eps^*_1=\eps^*_1(\eta)>0$ such that for any $\mu\in M(f, X)$ and any neighborhood $F_{\mu}$ of $\mu$, there exists $0<\eps^*_2=\eps^*_2(\eta,\mu,F_\mu)<\eps^*_1$ such that for any $x\in X$, any $0<\eps\leq\eps^*_2$ and any $N\in\N$, there exists an $n=n(\eta,\mu,F_\mu, \eps,x)\geq N$ such that for any $p\in\N$, there exists a $(pn, \frac{\eps^*_1}{3})$-separated set $\Gamma_{pn}$ so that
\begin{description}
\item[(1)]\label{lem-B-aACor} $\Gamma_{pn}\subseteq X_{pn, F_{\mu}}\cap B(x, \eps)\cap P_{pn}(f)$;
\item[(2)]\label{lem-B-bACor} $\frac{\log |\Gamma_{pn}|}{pn}>h_{\mu}(f)-\eta$;
\end{description}
\end{Lem}
By Lemma \ref{Maincor-mu-gamma-n-expansive}, the set of periodic points is dense in $X$ and the set of periodic measures is dense in the space of invariant measures. Hence, by \cite[Proposition 6.5]{T16}, there is an invariant measure with full support.
Since every measure supported on a periodic orbit has zero metric entropy, the set of invariant measures with zero metric entropy is dense in the space of invariant measures. So we have the following.
\begin{Cor}\label{Corollary-zero-metric-entropy}
Suppose $(X, f)$ is topologically Anosov and transitive.
Then we have
\begin{description}
\item[(1)] $\{\mu\in M(f,X):h_{\mu}(f)=0\}$ is dense in $M(f,X).$
\item [(2)] there is an invariant measure with full support.
\end{description}
\end{Cor}
\subsection{Proof of Theorem \ref{Mainlemma-convex-by-horseshoe}}
Fix $K=\cov\{\nu_i\}_{i=1}^m\subseteq M(f, X),$ $x\in X$ and $\eta_0,\zeta_0,\eps_0>0.$ Let $\rho_0=\min\{\rho(\nu_i, \nu_j):1\leq i<j\leq m\}.$ Then $K\cap M_{erg}(f,X)$ is either empty or finite (with at most $m$ elements). By Proposition \ref{prop-entropy-dense-for-shadowing}, $(X, f)$ has the refined entropy-dense property, so there are infinitely many ergodic measures on $X$.
This implies that $K\neq M(f, X)$ and thus $d_H(K, M(f, X))>0.$
Let $\eta,\zeta>0$ with $\eta\leq \min\{\htop(f),\eta_0\}$ and $\zeta<\min\{d_H(K, M(f, X)),\rho_0,\zeta_0\}$. By the variational principle of the topological entropy, there exists $\nu_0\in M(f, X)$ such that $h_{\nu_0}(f)>\htop(f)-\frac{\eta}{8}.$
By Lemma \ref{Maincor-mu-gamma-n-expansive}, there exist $\eps^*>0$ and $0<\tilde{\eps}^*<\eps^*$ such that for any $0<\frac{\delta}{2}<\tilde{\eps}^*$ and each $0\leq i\leq m,$ there exists an $n_i\in\N$ such that for any $p\in\N$, there exists an $(pn_i, \frac{\eps^*}{3})$-separated set $\Gamma_{pn_i}^{\nu_i}$ with
\begin{description}
\item[(a)]\label{lem-B-aBasic} $\Gamma_{pn_i}^{\nu_i}\subseteq P_{pn_i}(f)\cap X_{pn_i, \B(\nu_i, \frac{\zeta}{4})}\cap B(x, \frac{\delta}{2})$;
\item[(b)]\label{lem-B-bBasic} $\frac{\log |\Gamma_{pn_i}^{\nu_i}|}{pn_i}>h_{\nu_i}-\frac{\eta}{8}$.
\end{description}
We can assume that $\frac{\eps^*}{3}<\frac{c}{4}$ where $c>0$ is the expansive constant. Let $s(n,\frac{\eps^*}{3})$ denote the largest cardinality of any $(n,\frac{\eps^*}{3})$-separated set of $X$, then by \cite[Theorem 7.11]{Walters} one has
\begin{equation}
\htop(f)=\limsup_{n\to\infty}\frac{1}{n}\log s(n,\frac{\eps^*}{3}).
\end{equation}
Then there exists $N_1\in\N$ such that for any $n\geq N_1,$ one has
\begin{equation}\label{equation-AB}
s(n,\frac{\eps^*}{3})<e^{n(\htop(f)+\frac{\eta}{4})}.
\end{equation}
Set $\eps=\min\{\frac{\zeta}{4}, \frac{\rho_0}{6}, \frac{\tilde{\eps}^*}{27},\frac{\eps_0}{2}\}.$ Then there exists a $0<\delta<\eps$ such that any $\delta$-pseudo-orbit can be $\eps$-shadowed by some point in $X.$ Set $n=p_0n_0n_1n_2\cdots n_m$ where $p_0$ is large enough such that for any $1\leq i\leq m$
\begin{equation}\label{equation-AA}
n\geq 2N_1,\ e^{n(h_{\nu_i}-\frac{\eta}{8})}-n\geq e^{n(h_{\nu_i}-\frac{\eta}{4})}\ \text{ and } e^{n(\htop(f)-\frac{\eta}{4})}>\lceil\frac{n}{2}\rceil e^{\lceil\frac{n}{2}\rceil(\htop(f)+\frac{\eta}{4})}+\sum_{m=1}^{N_1-1}| P_{m}^*(f)|
\end{equation}
where $P_{m}^*(f)$ is the set of periodic points with minimal period $m.$
Then for each $0\leq i\leq m$, $P_{n_i}(f)\subseteq P_n(f)$ by definition and furthermore, we can obtain an $(n, \frac{\eps^*}{3})$-separated set $\Gamma_n^{\nu_i}$ with
\begin{description}
\item[(a)]\label{lem-B-aBasic2} $\Gamma_{n}^{\nu_i}\subseteq P_{n}(f)\cap X_{n, \B(\nu_i, \frac{\zeta}{4})}\cap B(x, \frac{\delta}{2})$;
\item[(b)]\label{lem-B-bBasic2} $\frac{\log |\Gamma_{n}^{\nu_i}|}{n}>h_{\nu_i}-\frac{\eta}{8}$.
\end{description}
Since periodic points in $\Gamma_n^{\nu_0}$ with the same period $l_0$ for some $l_0\in\N$ are $(l_0,\frac{\eps^*}{3})$-separated, by \eqref{equation-AB} and \eqref{equation-AA} we have
\begin{equation*}
\begin{split}
\sum_{m=1}^{\lceil\frac{n}{2}\rceil}| P_{m}^*(f)\cap \Gamma_n^{\nu_0}|\leq &\sum_{m=N_1}^{\lceil\frac{n}{2}\rceil} s(m,\frac{\varepsilon^*}{3})+\sum_{m=1}^{N_1-1}| P_{m}^*(f)|\\
<&\sum_{m=N_1}^{\lceil\frac{n}{2}\rceil} e^{{m}(\htop(f)+\frac{\eta}{4})}+\sum_{m=1}^{N_1-1}| P_{m}^*(f)|\\
\leq&{\lceil\frac{n}{2}\rceil}e^{{\lceil\frac{n}{2}\rceil}(\htop(f)+\frac{\eta}{4})}+\sum_{m=1}^{N_1-1}| P_{m}^*(f)|\\
< &e^{n(\htop(f)-\frac{\eta}{4})}<e^{n(h_{\nu_0}-\frac{\eta}{8})}<|\Gamma_{n}^{\nu_0}|.
\end{split}
\end{equation*}
Thus there exists $x_0\in\Gamma_n^{\nu_0}$ with minimal period $n.$ Since $\frac{\eps^*}{3}<\frac{c}{4},$ the only sub-intervals of length $n$ of $\langle x_0, f(x_0), \cdots, f^{n-1}(x_0),x_0, f(x_0), \cdots, f^{n-1}(x_0)\rangle$ that are $\frac{\eps^*}{9}$-shadowed by $\langle x_0, f(x_0), \cdots, f^{n-1}(x_0)\rangle$ are the initial and the final sub-intervals.
By the separation assumption, for any $1\leq i\leq m$ we have $$|\{y\in\Gamma_n^{\nu_i}:d_n(y,f^j(x_0))<\frac{\eps^*}{9} \text{ for some }0\leq j\leq n-1\}|\leq n.$$ Consequently, by \eqref{equation-AA} one can find a subset $\widetilde{\Gamma}_n^{\nu_i}\subset \Gamma_n^{\nu_i}$ with $|\widetilde{\Gamma}_n^{\nu_i}|>e^{n(h_{\nu_i}-\frac{\eta}{4})}$ such that $d_n(y,f^j(x_0))\geq \frac{\eps^*}{9}$ for any $y\in\widetilde{\Gamma}_n^{\nu_i}$ and $0\leq j\leq n-1.$
Denote $r_i=|\widetilde{\Gamma}_n^{\nu_i}|$ and $r=\sum_{i=1}^mr_i$. Enumerate the elements of each $\widetilde{\Gamma}_n^{\nu_i}$ by $\widetilde{\Gamma}_n^{\nu_i}=\{p_1^i, \cdots, p_{r_i}^i\}$. Let $\widetilde{\Gamma}_n=\{p_1^1, \cdots, p_{r_1}^1, \cdots, p_1^m, $ $ \cdots, p_{r_m}^m\}.$
Take $l$ large enough such that
\begin{equation}\label{equation-AC}
\frac{1}{l}<\frac{\zeta}{12} \text{ and }\frac{(l-2)\log|\widetilde{\Gamma}_n^{\nu_i}|}{nl}>h_{\nu_i}-\eta \text{ for any } 1\leq i\leq m.
\end{equation}
Now let $\Gamma^i=\widetilde{\Gamma}_n^{\nu_i}\times\widetilde{\Gamma}_n^{\nu_i}\times\cdots\times\widetilde{\Gamma}_n^{\nu_i}$ whose element is $\underline{y}=(y_1, \cdots, y_{l-2})$ with $y_j\in\widetilde{\Gamma}_n^{\nu_i}$ for $1\leq j\leq l-2,$ and let $\Gamma=\widetilde{\Gamma}_n\times\widetilde{\Gamma}_n\times\cdots\times\widetilde{\Gamma}_n$ whose element is $\underline{y}=(y_1, \cdots, y_{l-2})$ with $y_j\in\widetilde{\Gamma}_n$ for $1\leq j\leq l-2.$ For any $y\in X$, let $\mathfrak{C}_y^n=\langle y, fy, \cdots, f^{n-1}y\rangle$. Then for $\underline{y}\in \Gamma^i$ or $\Gamma$ we define the following pseudo-orbit:
$$\mathfrak{C}_{\underline{y}}=\mathfrak{C}_{x_0}^n\mathfrak{C}_{x_0}^n\mathfrak{C}_{y_1}^n\mathfrak{C}_{y_2}^n\cdots\mathfrak{C}_{y_{l-2}}^n.$$
It is clear that $\mathfrak{C}_{\underline{y}}$ is a $\delta$-pseudo-orbit. Moreover, note that such $\mathfrak{C}_{\underline{y}}$s can be freely concatenated to form a $\delta$-pseudo-orbit. We write $\mathfrak{C}_{\underline{y}}=\langle \omega_1, \omega_2, \cdots, \omega_{ln}\rangle.$ If $\underline{y}\in\Gamma^i$ and $d(f^k(z),\omega_{k+1})\leq \eps$ for any $0\leq k\leq ln-1,$ then by Lemma \ref{lem:prohorov} and \eqref{equation-AC} one has
\begin{equation}\label{eq-AA}
\begin{split}
\rho(\E_{ln}(z), \nu_i)\leq &\rho(\E_{ln}(z), \frac{1}{n(l-2)} \sum_{k=1}^{n(l-2)}\delta_{\omega_k})+\rho(\frac{1}{n(l-2)} \sum_{k=1}^{n(l-2)}\delta_{\omega_k}, \nu_i)\\
\leq &\eps+\frac{4}{l}+\frac{1}{l-2}\sum_{j=1}^{l-2}\rho(\E_{n}(y_j), \nu_i)\\
<&\eps+\frac{4}{l}+\frac{\zeta}{4}<\frac34\zeta.
\end{split}
\end{equation}
If $\underline{y}\in\Gamma$ and $d(f^k(z),\omega_{k+1})\leq \eps$ for any $0\leq k\leq ln-1,$ we denote $q_i=|\{1\leq j\leq l-2:y_j\in \widetilde{\Gamma}_n^{\nu_i}\}|,$
then $\sum_{i=1}^mq_i=l-2$ and
\begin{equation}\label{eq-AB}
\begin{split}
\rho(\E_{ln}(z), \frac{\sum_{i=1}^mq_i\nu_i}{\sum_{i=1}^m q_i})\leq &\rho(\E_{ln}(z), \frac{1}{n(l-2)} \sum_{k=1}^{n(l-2)}\delta_{\omega_k})+\rho(\frac{1}{n(l-2)} \sum_{k=1}^{n(l-2)}\delta_{\omega_k}, \frac{\sum_{i=1}^mq_i\nu_i}{\sum_{i=1}^m q_i})\\
\leq &\eps+\frac{4}{l}+\frac{1}{l-2}\sum_{i=1}^m\sum_{y_j\in \widetilde{\Gamma}_n^{\nu_i}}\rho(\E_{n}(y_j), \nu_i)\\
<&\eps+\frac{4}{l}+\frac{\zeta}{4}<\frac34\zeta.
\end{split}
\end{equation}
by Lemma \ref{lem:prohorov} and \eqref{equation-AC}.
Now we define
$$\Sigma_{r_i^{l-2}}:=\{\theta=\dots\theta_{-2}\theta_{-1}\theta_{0}\theta_{1}\theta_{2}\dots:\theta_{j}=(\theta_{j,1}, \cdots, \theta_{j,l-2})\in \Gamma^i \text{ for any } j\in\Z \}$$
and
$$\Sigma_{r^{l-2}}:=\{\theta=\dots\theta_{-2}\theta_{-1}\theta_{0}\theta_{1}\theta_{2}\dots:\theta_{j}=(\theta_{j,1}, \cdots, \theta_{j,l-2})\in \Gamma \text{ for any } j\in\Z \}.$$
Then $(\Sigma_{r_i^{l-2}},\sigma)$ and $(\Sigma_{r^{l-2}},\sigma)$ are full shifts; in particular, they are mixing and have the shadowing property. Moreover, for each $\theta=\dots\theta_{-2}\theta_{-1}\theta_{0}\theta_{1}\theta_{2}\dots$ in $\Sigma_{r_i^{l-2}}$ or $\Sigma_{r^{l-2}}$
$$\mathfrak{C}_{\theta}=\dots\mathfrak{C}_{\theta_{-2}}\mathfrak{C}_{\theta_{-1}}\mathfrak{C}_{\theta_{0}}\mathfrak{C}_{\theta_{1}}\mathfrak{C}_{\theta_{2}}\dots$$
is a $\delta$-pseudo-orbit. We write $\mathfrak{C}_{\theta}=\dots\omega_{-2}\omega_{-1}\omega_{0}\omega_{1}\omega_{2}\dots,$ by the shadowing property,
$$Y_{\theta}=\{z\in X:d(f^{j}(z), \omega_j)\leq\eps, j\in\Z\}$$
is nonempty and closed.
We claim that $Y_{\theta}\cap Y_{\theta'}=\emptyset$ for any $\theta\neq\theta'$ in $\Sigma_{r_i^{l-2}}$ or $\Sigma_{r^{l-2}}.$ Next we prove the claim by the following two cases.
Case (1): If $\theta\neq\theta'\in \Sigma_{r_i^{l-2}}$ for some $1\leq i\leq m,$ then there is $t\in\Z$ and $1\leq s\leq l-2$ such that $\theta_{t,s}\neq\theta'_{t,s}.$ Since $\theta_{t,s}$ and $\theta'_{t,s}$ are $(n,\frac{\eps^*}{3})$-separated, we have $d_n(f^{lnt+sn+n}(z),f^{lnt+sn+n}(z'))>\eps^*/3-2\eps>\eps^*/9$ for any $z\in Y_{\theta}$ and $z'\in Y_{\theta'}.$ So $Y_{\theta}\cap Y_{\theta'}=\emptyset.$
Case (2): For any $\theta\neq\theta'\in \Sigma_{r^{l-2}},$ there is $t\in\Z$ and $1\leq s\leq l-2$ such that $\theta_{t,s}\neq\theta'_{t,s}.$ If $\theta_{t,s},\theta'_{t,s}\in\widetilde{\Gamma}_n^{\nu_i}$ for some $1\leq i\leq m,$ then $Y_{\theta}\cap Y_{\theta'}=\emptyset$ by Case (1). If there are $1\leq i\neq i'\leq m$ such that $\theta_{t,s}\in\widetilde{\Gamma}_n^{\nu_i}$ and $\theta'_{t,s}\in\widetilde{\Gamma}_n^{\nu_{i'}}$, then $$d_n(f^{lnt+sn+n}(z),f^{lnt+sn+n}(z'))>\eps$$
for any $z\in Y_{\theta}$ and $z'\in Y_{\theta'}.$ Otherwise, we have $$\rho(\mathcal{E}_{n}(f^{lnt+sn+n}(z)),\mathcal{E}_{n}(f^{lnt+sn+n}(z')))\leq \eps\leq \frac{\rho_0}{6}.$$
Combining with $$\rho(\mathcal{E}_{n}(f^{lnt+sn+n}(z)),\mathcal{E}_{n}(\theta_{t,s}))\leq \eps\leq \frac{\rho_0}{6},$$ $$\rho(\mathcal{E}_{n}(f^{lnt+sn+n}(z')),\mathcal{E}_{n}(\theta'_{t,s}))\leq \eps\leq \frac{\rho_0}{6}$$
and $$\rho(\nu_i,\mathcal{E}_{n}(\theta_{t,s}))<\frac{\zeta}{4}\leq \frac{\rho_0}{4},$$ $$\rho(\nu_{i'},\mathcal{E}_{n}(\theta'_{t,s}))<\frac{\zeta}{4}\leq \frac{\rho_0}{4},$$
we have $\rho(\nu_i,\nu_{i'})<\rho_0$ which contradicts that $\rho_0=\min\{\rho(\nu_i, \nu_j):1\leq i<j\leq m\}.$
So $$d_n(f^{lnt+sn+n}(z),f^{lnt+sn+n}(z'))>\eps$$ for any $z\in Y_{\theta}$ and $z'\in Y_{\theta'}.$
This implies $Y_{\theta}\cap Y_{\theta'}=\emptyset.$
Then we can define the following disjoint union:
$$\Delta_i=\bigsqcup_{\theta\in \Sigma_{r_i^{l-2}}}Y_{\theta}~\textrm{and}~\Delta=\bigsqcup_{\theta\in \Sigma_{r^{l-2}}}Y_{\theta}. $$
Note that $f^{nl}(Y_{\theta})= Y_{\sigma(\theta)}$. Then $f^{nl}(\Delta_i)=\Delta_i$, $1\leq i\leq m$ and $f^{nl}(\Delta)=\Delta.$
Therefore, if we define $\pi:\Delta\to \Sigma_{r^{l-2}}$ and $\pi_i:\Delta_i\to \Sigma_{r_i^{l-2}}$ as
\begin{equation*}
\pi(x):=\theta \textrm{ for all } x\in Y_{\theta}~\textrm{with}~\theta\in \Sigma_{r^{l-2}},
\end{equation*}
\begin{equation*}
\pi_i(x):=\theta' \textrm{ for all } x\in Y_{\theta'}~\textrm{with}~\theta'\in \Sigma_{r_i^{l-2}},
\end{equation*}
then $\pi$ and $\pi_i$ are surjective by the shadowing property. Moreover, it is not hard to check that $\pi$ and $\pi_i$ are continuous. So $\Delta$ and $\Delta_i$ are closed. Meanwhile, $(X, f)$ is expansive, so $\pi$ and $\pi_i$ are conjugacies.
Let $\Lambda_i=\cup_{k=0}^{nl-1}f^k(\Delta_i)$ and $\Lambda=\cup_{k=0}^{nl-1}f^k(\Delta).$ Then $f(\Lambda_i)=\Lambda_i$, $1\leq i\leq m$ and $f(\Lambda)=\Lambda.$
Now let us prove that $\Lambda$ and $\Lambda_i$ satisfy properties (1)-(4).
(1) Since $\pi$ and $\pi_i$ are conjugacies, the transitivity, expansivity and the shadowing property of $(\Sigma_{r_i^{l-2}},\sigma)$ and $(\Sigma_{r^{l-2}},\sigma)$ yield the same properties of $(\Delta, f^{nl})$ and $(\Delta_i, f^{nl})$. Next we show that $f^{k}(\Delta_i)\cap f^{k'}(\Delta_i)=\emptyset$ for any $0\leq k<k'\leq nl-1.$ If $f^{k}(\Delta_i)\cap f^{k'}(\Delta_i)\neq\emptyset,$ then for any $z\in f^{k}(\Delta_i)\cap f^{k'}(\Delta_i),$ there exist $\theta,\ \theta'\in \Sigma_{r_i^{l-2}}$ such that
$$d(f^{j-k}(z), \omega_j)\leq\eps \text{ and } d(f^{j-k'}(z), \omega'_j)\leq\eps\ \forall j\in\Z$$
where $\mathfrak{C}_{\theta}=\dots\omega_{-2}\omega_{-1}\omega_{0}\omega_{1}\omega_{2}\dots$ and $\mathfrak{C}_{\theta'}=\dots\omega'_{-2}\omega'_{-1}\omega'_{0}\omega'_{1}\omega'_{2}\dots.$ Then we have
\begin{equation}\label{eq-AD}
d(\omega_{j+k}, \omega'_{j+k'})\leq2\eps\ \forall j\in\Z.
\end{equation}
Case (1): If $1\leq k'-k\leq n-1,$ then \eqref{eq-AD} implies $d(\omega_{j}, \omega'_{j+k'-k})\leq2\eps<\frac{\eps^*}{9}\ \forall 0\leq j\leq n-1.$ Note that $\omega_{0}\omega_{1}\omega_{2}\dots\omega_{2n-1}=\omega'_{0}\omega'_{1}\omega'_{2}\dots\omega'_{2n-1}=\langle x_0, fx_0, \cdots, f^{n-1}x_0,x_0, fx_0, \cdots, f^{n-1}x_0\rangle,$ this contradicts that the minimal period of $x_0$ is $n.$
Case (2): If $k'-k= n,$ then \eqref{eq-AD} implies $d(\omega_{j+n}, \omega'_{j+2n})\leq2\eps<\frac{\eps^*}{9}\ \forall 0\leq j\leq n-1.$ Note that $\omega_{n}\omega_{n+1}\dots\omega_{2n-1}=\langle x_0, fx_0, \cdots, f^{n-1}x_0\rangle$ and $\omega'_{2n}\in \widetilde{\Gamma}_n^{\nu_i},$ this contradicts that $d_n(y,f^j(x_0))\geq \frac{\eps^*}{9}$ for any $y\in\widetilde{\Gamma}_n^{\nu_i}$ and $0\leq j\leq n-1.$
Case (3): If $n< k'-k\leq n(l-1),$ then $k'-k=tn+s$ for some $1\leq t\leq l-2$ and $0\leq s\leq n-1.$ Thus \eqref{eq-AD} implies $d(\omega_{n-s+j}, \omega'_{(t+1)n+j})\leq2\eps\ \forall 0\leq j\leq n-1.$ Note that $\omega_{0}\omega_{1}\omega_{2}\dots\omega_{2n-1}=\langle x_0, fx_0, \cdots, f^{n-1}x_0,x_0, fx_0, \cdots, f^{n-1}x_0\rangle,$ and $\omega'_{(t+1)n}\in \widetilde{\Gamma}_n^{\nu_i},$ this contradicts that $d_n(y,f^j(x_0))\geq \frac{\eps^*}{9}$ for any $y\in\widetilde{\Gamma}_n^{\nu_i}$ and $0\leq j\leq n-1.$
Case (4): If $n(l-1)< k'-k\leq nl-1,$ then \eqref{eq-AD} implies $d(\omega_{nl-k'+k+j}, \omega'_{nl+j})\leq2\eps\ \forall 0\leq j\leq n-1.$ Note that $\omega_{0}\omega_{1}\omega_{2}\dots\omega_{2n-1}=\omega'_{nl}\omega'_{nl+1}\omega'_{nl+2}\dots\omega'_{(l+2)n-1}=\langle x_0, fx_0, \cdots, f^{n-1}x_0,x_0, fx_0, \cdots, f^{n-1}x_0\rangle,$ and $1\leq nl-k'+k<n,$ this contradicts that the minimal period of $x_0$ is $n.$
So we have $f^{k}(\Delta_i)\cap f^{k'}(\Delta_i)=\emptyset$ for any $0\leq k<k'\leq nl-1.$ Therefore, Proposition \ref{prop-transitive-shadowing-for-f-n} ensures that
$(\Lambda_i,f)$ is transitive and topologically Anosov.
In fact, if we define $$\Omega_i=\{\mathfrak{C}_{\theta}=\dots\omega_{-2}\omega_{-1}\omega_{0}\omega_{1}\omega_{2}\dots:\theta\in \Sigma_{r_i^{l-2}}\},$$
then $(\Omega_i,\sigma^{nl})$ is conjugate to $(\Sigma_{r_i^{l-2}},\sigma).$ Thus $(\Omega_i,\sigma^{nl})$ is conjugate to $(\Delta_i, f^{nl}),$ and $(\cup_{k=0}^{nl-1}\sigma^k(\Omega_i),\sigma)$ is conjugate to $(\Lambda_i, f).$ This implies that $(\cup_{k=0}^{nl-1}\sigma^k(\Omega_i),\sigma)$ is a subshift which is transitive and topologically Anosov. Recall from \cite{Walters2} that a subshift has the shadowing property if and only if it is a subshift of finite type. So $(\cup_{k=0}^{nl-1}\sigma^k(\Omega_i),\sigma)$ is a transitive subshift of finite type.
By a similar argument, $(\Lambda,f)$ is also transitive and topologically Anosov, and $(\Lambda,f)$ is conjugate to a transitive subshift of finite type.
(2) One has $\htop(f, \Lambda_i)=\frac{1}{nl}\htop(f^{nl}, \Delta_i)=\frac{1}{nl}\htop(\sigma, \Sigma_{r_i^{l-2}})=\frac{(l-2)\log|\widetilde{\Gamma}_n^{\nu_i}|}{nl}>h_{\nu_i}-\eta>h_{\nu_i}-\eta_0$ by \eqref{equation-AC}.
(3) For any ergodic measure $\mu_i\in M(f,\Lambda_i)$, pick an arbitrary generic point $z_i$ of $\mu_i$ in $\Delta_i$.
Then $$\rho(\E_{ln}(f^{tln}(z_i)), \nu_i)<\frac34\zeta\text{ for any } t\in\N$$ by \eqref{eq-AA}.
In addition, we have $\mu_i=\lim_{j\to\infty}\E_j(z_i)=\lim_{t\to\infty}\E_{tln}(z_i)$. So we have $$\rho(\mu_i, \nu_i)=\lim_{t\to\infty}\rho(\E_{tln}(z_i), \nu_i)\leq \frac34\zeta.$$ By the ergodic decomposition theorem, we obtain that $d_H(\nu_i, M(f,\Lambda_i))\leq \frac34\zeta$.
Now since $K$ is convex and $\Lambda_i\subseteq\Lambda$, one gets that $K\subseteq \B(M(f, \Lambda), \zeta)\subseteq \B(M(f, \Lambda), \zeta_0)$.
On the other hand, for any ergodic measure $\mu\in M(f, \Lambda)$, pick a generic point $z$ of $\mu$ in $\Delta$. Then $z$ $\eps$-shadows some $\delta$-pseudo-orbit $\mathfrak{C}_{\theta}$ with $\theta\in\Sigma_{r^{l-2}}$. Then for any $t\in\N$, there exist nonnegative integers $q_i, 1\leq i\leq m$ such that $$\rho\left(\E_{ln}(f^{tln}(z)), \frac{\sum_{i=1}^mq_i\nu_i}{\sum_{i=1}^m q_i}\right)< \frac34\zeta$$ by \eqref{eq-AB}.
So $\mu\in \B(K, \frac34\zeta).$ By the ergodic decomposition theorem, $M(f, \Lambda)\subseteq \B(K, \zeta)\subseteq \B(K, \zeta_0)$. As a result, $\Lambda\subsetneq X$: otherwise $d_H(K, M(f, \Lambda))=d_H(K, M(f, X))>\zeta$, a contradiction.
Note that for any $\theta$ in $\Sigma_{r_i^{l-2}}$ or $\Sigma_{r^{l-2}},$ one has $\omega_{mnl}=x_0$ for any integer $m$ where $\mathfrak{C}_{\theta}=\dots\omega_{-2}\omega_{-1}\omega_{0}\omega_{1}\omega_{2}\dots.$ Then for any $z$ in $\Delta_i$ or $\Delta,$ $$d(f^{mnl}(z),x)\leq d(f^{mnl}(z),x_0)+d(x_0,x)<\eps+\frac{\delta}{2}<2\eps<\eps_0.$$
So for any $z$ in $\Lambda_i$ or $\Lambda$ one has $f^{j+mnl}(z) \in B(x,\eps)$ for some $0\leq j\leq nl-1$ and any $m\in\Z$.\qed
\section{Almost Additive Sequences and Theorem \ref{thm-continuous}(I)}\label{Almost Additive}
In this section we give abstract conditions under which Theorem \ref{thm-continuous}(I) holds in the more general context of almost additive sequences of continuous functions. We first recall some definitions about almost additive sequences from \cite{BarreiraDoutor2009}.
Consider a topological dynamical system $(X, f).$ We recall that a sequence of functions $\Phi=\left(\varphi_{n}\right)_{n\in\mathbb{N}}$ is said to be {\it almost additive} (with respect to $(X,f)$) if there is a constant $C>0$ such that for every $n, m \in \mathbb{N}$ we have
$$
-C+\varphi_{n}+\varphi_{m} \circ f^{n} \leqslant \varphi_{n+m} \leqslant C+\varphi_{n}+\varphi_{m} \circ f^{n} .
$$
We denote by $A(f,X)$ the family of almost additive sequences of continuous functions. By \cite[Proposition 3]{BarreiraDoutor2009}, the limit $\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \varphi_{n} d \mu$ exists for any $\Phi=\left(\varphi_{n}\right)_{n\in\mathbb{N}}\in A(f,X)$ and any $\mu\in M(f,X),$ and the function
\begin{equation}\label{equation-P}
M(f,X) \ni \mu \mapsto \lim _{n \rightarrow \infty} \frac{1}{n} \int\varphi_{n} d \mu,
\end{equation}
is continuous with respect to the weak$^*$ topology on $M(f,X).$
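The basic example is the additive case: if $\varphi:X\to\mathbb{R}$ is continuous and $\varphi_n=S_n\varphi:=\sum_{i=0}^{n-1}\varphi\circ f^i$, then
$$\varphi_{n+m}=S_{n+m}\varphi=S_{n}\varphi+(S_{m}\varphi)\circ f^{n}=\varphi_{n}+\varphi_{m}\circ f^{n},$$
so $\Phi=(\varphi_n)_{n\in\mathbb{N}}$ is almost additive with any constant $C>0$, and \eqref{equation-P} reduces to $\mu\mapsto\int\varphi\,d\mu$, whose continuity is immediate from the definition of the weak$^*$ topology.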
Let $d \in \mathbb{N}$ and take $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$. We write
$$
A=\left(\Phi^{1}, \ldots, \Phi^{d}\right) \text { and } B=\left(\Psi^{1}, \ldots, \Psi^{d}\right)
$$
and also $\Phi^{i}=\left(\varphi_{n}^{i}\right)_{n\in\N}$ and $\Psi^{i}=\left(\psi_{n}^{i}\right)_{n\in\N}$. We assume that
\begin{equation}\label{equation-O}
\liminf _{m \rightarrow \infty} \frac{\psi_{m}^{i}(x)}{m}>0 \quad \text { and } \quad \psi_{n}^{i}(x)>0
\end{equation}
for every $i=1, \ldots, d, x \in X$, and $n \in \mathbb{N}$. Given $a=\left(a_1, \ldots, a_{d}\right) \in \mathbb{R}^{d}$ we define:
$$
R_{A,B}(a)=\bigcap_{i=1}^{d}\left\{x \in X: \lim _{n \rightarrow \infty} \frac{\varphi_{n}^{i}(x)}{\psi_{n}^{i}(x)}=a_{i}\right\}.
$$
We also consider the function $\mathcal{P}_{A,B}: M(f,X) \rightarrow \mathbb{R}$ defined by:
\begin{equation}\label{equation-A}
\mathcal{P}_{A,B}(\mu) =\left(\frac{\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int\varphi_{n}^{1} d \mu}{\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \psi_{n}^{1} d \mu}, \ldots, \frac{\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \varphi_{n}^{d} d \mu}{\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \psi_{n}^{d} d \mu}\right)
=\lim _{n \rightarrow \infty}\left(\frac{\int \varphi_{n}^{1} d \mu}{\int \psi_{n}^{1} d \mu}, \ldots, \frac{\int \varphi_{n}^{d} d \mu}{\int \psi_{n}^{d} d \mu}\right).
\end{equation}
The existence and continuity of the limit in \eqref{equation-P} ensure that the second identity in \eqref{equation-A} holds and that the function $\mathcal{P}_{A,B}$ is continuous.
Denote
$$L_{A,B}=\{\mathcal{P}_{A,B}(\mu):\mu\in M(f,X)\}.$$
For any $a\in L_{A,B},$ define $$M_{A,B}(a)=\{\mu\in M(f,X):\mathcal{P}_{A,B}(\mu)=a\},\ M_{A,B}^{erg}(a)=\{\mu\in M_{erg}(f,X):\mathcal{P}_{A,B}(\mu)=a\}.$$
Then $M_{A,B}(a)$ is closed in $M(f,X)$ since the function $\mathcal{P}_{A,B}$ is continuous.
Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function.
We define the pressure of $\alpha$ with respect to $\mu$ by $P(f,\alpha,\mu)=h_\mu(f)+\alpha(\mu).$
Let $E(f,X) \subseteq A(f,X)$ be the family of sequences with a unique equilibrium measure (see the definition of equilibrium measures of almost additive sequences in Subsection \ref{subsection-equilbrium}). The following theorem gives abstract conditions under which Theorem \ref{thm-continuous}(I) holds in this more general context.
\begin{maintheorem}\label{thm-Almost-Additive}
Suppose $(X, f)$ is a dynamical system whose entropy function is upper semi-continuous. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$B$ satisfies \eqref{equation-O}. Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function. Assume that the following holds:
for any $K=\cov\{\mu_i\}_{i=1}^m\subseteq M(f, X),$ and any $\eta, \zeta>0$, there exist compact invariant subsets $\Lambda_i\subseteq\Lambda\subsetneq X$ such that for each $i\in\{1,2,\cdots,m\}$
\begin{description}
\item[(1)] $\htop(f, \Lambda_i)>h_{\mu_i}(f)-\eta.$
\item[(2)] $d_H(K, M(f, \Lambda))<\zeta$, $d_H(\mu_i, M(f, \Lambda_i))<\zeta.$
\item[(3)] $\text{span}\left\{\Phi^1|_{\Lambda}, \Psi^1|_{\Lambda},\cdots,\Phi^d|_{\Lambda},\Psi^d|_{\Lambda}\right\}\subseteq E(f|_{\Lambda},\Lambda).$
\end{description}
Then for any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu_0\in M_{A,B}(a)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta.$
\end{maintheorem}
\begin{Ex}\label{example-1}
The function $\alpha:M(f,X)\to \mathbb{R}$ can be chosen, for example, as follows:
\begin{description}
\item[(1)] $\alpha\equiv0.$ Then $P(f,\alpha,\mu)=h_\mu(f)$ is the metric entropy of $\mu.$
\item[(2)] $\alpha(\mu)=\int\varphi d \mu$ with a continuous function $\varphi.$ Then, by the definition of the weak$^*$ topology on $M(X),$ $\alpha:M(f,X)\to \mathbb{R}$ is a continuous function, and $P(f,\varphi,\mu)=h_\mu(f)+\alpha(\mu)$ is the pressure of $\varphi$ with respect to $\mu.$
\item[(3)] $\alpha(\mu)=\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int\varphi_{n} d \mu$ with an almost additive sequence of continuous functions $\Phi=\left(\varphi_{n}\right)_{n\in\N}.$ Then $\alpha:M(f,X)\to \mathbb{R}$ is a continuous function by \eqref{equation-P}.
$P(f,\Phi,\mu)=h_\mu(f)+\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int\varphi_{n} d \mu$ is the pressure of $\Phi$ with respect to $\mu.$ Readers can refer to \cite{Barreira2006,Mummert2006} for thermodynamic formalism of almost additive sequences.
\end{description}
\end{Ex}
\begin{Rem}
Another type of additivity assumption was introduced by Feng and Huang \cite{FH2010}: a sequence of continuous functions $\left(\varphi_{n}\right)_{n \in\mathbb{N}}$ is said to be
asymptotically additive if for each $\varepsilon>0$, there exists a continuous function $\varphi:X\to \mathbb{R}$ such that
$$
\limsup _{n \rightarrow \infty} \frac{1}{n}\sup_{x\in X}|\varphi_{n}(x)-S_{n} \varphi(x)|\leq \varepsilon,
$$
where $S_{n} \varphi(x)=\sum_{i=0}^{n-1}\varphi(f^i(x)).$ Every almost additive sequence is asymptotically additive (see, for example, \cite[Proposition 2.1]{ZZC}). It is proved in \cite[Page 294]{Barreira2006} that for an expansive dynamical system satisfying the specification property, every almost additive sequence of continuous functions with bounded variation has a unique equilibrium measure. In the case of asymptotically additive sequences, however, the bounded variation condition does not guarantee the uniqueness of the equilibrium measure (see \cite[Remark 4.5]{Cuneo2020}). The uniqueness of the equilibrium measure is indispensable in the proof of the conditional variational principle (see Theorem \ref{BarreiraDoutor2009-theorem3}) which is used in our proof. So we do not consider asymptotically additive sequences in this article.
\end{Rem}
\subsection{Some lemmas}
To prove Theorem \ref{thm-Almost-Additive}, we need some lemmas.
For any $r\in \mathbb{R},$ denote $r^+=\{s\in\mathbb{R}:s>r\}$ and $r^-=\{s\in\mathbb{R}:s<r\}.$ For any $d \in \mathbb{N},$ $r=\left(r_1, \ldots, r_{d}\right)\in\mathbb{R}^d$ and $\xi=\left(\xi_1, \ldots, \xi_{d}\right)\in\{+,-\}^d$, we define $$r^\xi=\{s=\left(s_1, \ldots, s_{d}\right)\in\mathbb{R}^d:s_i\in r_i^{\xi_i} \text{ for }i=1,2,\cdots,d\}.$$ We denote $F^d=\{\left(\frac{p_1}{q_1}, \ldots, \frac{p_d}{q_d}\right):p_i,q_i\in\mathbb{R}\text{ and }q_i>0 \text{ for any }1\leq i\leq d\}.$
It is easy to check the following.
\begin{Lem}\label{lemma-E}
Let $b_i=\frac{p^i}{q^i}\in F^1$ for $i=1,2.$
\begin{description}
\item[(1)] If $b_1=b_2,$ then $\frac{\theta p^1+(1-\theta)p^2}{\theta q^1+(1-\theta)q^2}=b_1=b_2$ for any $\theta\in[0,1].$
\item[(2)] If $b_1\neq b_2,$ then $\frac{\theta p^1+(1-\theta)p^2}{\theta q^1+(1-\theta)q^2}$ is strictly monotone in $\theta\in[0,1].$
\end{description}
\end{Lem}
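As a concrete instance of item (2), take $b_1=\frac{1}{2}$ and $b_2=\frac{3}{4}$ with $p^1=1, q^1=2, p^2=3, q^2=4$: at $\theta=\frac12$ the weighted mediant is
$$\frac{\frac12\cdot1+\frac12\cdot3}{\frac12\cdot2+\frac12\cdot4}=\frac{2}{3}\in\Big(\frac12,\frac34\Big),$$
and as $\theta$ increases from $0$ to $1$ the value decreases strictly from $b_2=\frac34$ to $b_1=\frac12$, since the derivative in $\theta$ has the fixed sign of $p^1q^2-p^2q^1=-2$.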
\begin{Lem}\label{lemma-C}
Let $d \in \mathbb{N}$ and $a=\left(\frac{p_1}{q_1}, \ldots, \frac{p_d}{q_d}\right) \in F^{d}.$ If $\{b_\xi=\left(\frac{p_1^\xi}{q_1^\xi}, \ldots, \frac{p_d^\xi}{q_d^\xi}\right)\}_{\xi\in\{+,-\}^d}\subseteq F^d$ are $2^d$ points satisfying $b_\xi\in a^\xi$ for any $\xi\in \{+,-\}^d,$ then there are $2^d$ numbers $\{\theta_\xi\}_{\xi\in\{+,-\}^d}\subseteq [0,1]$ such that $\sum_{\xi\in\{+,-\}^d}\theta_\xi=1$ and $$ \frac{\sum_{\xi\in\{+,-\}^d}\theta_\xi p^{\xi}_i}{\sum_{\xi\in\{+,-\}^d}\theta_\xi q^{\xi}_i}=\frac{p_i}{q_i}\text{ for any }1\leq i\leq d.$$
\end{Lem}
\begin{proof}
We prove the lemma by induction on $d$. The case $d=1$ follows from Lemma \ref{lemma-E}(2) together with the intermediate value theorem.
Now assume that the statement holds for $d=k\in\mathbb{N}.$ Let $a=\left(a_1, \ldots, a_{k+1}\right) \in F^{k+1},$ and let $\{b^\xi\}_{\xi\in\{+,-\}^{k+1}}$ be $2^{k+1}$ points satisfying $b^\xi\in a^\xi$ for any $\xi\in \{+,-\}^{k+1}.$ Applying the induction hypothesis to the first $k$ coordinates of the $2^{k}$ points $\{b^\xi\}_{\xi_{k+1}=+},$ we obtain $2^k$ numbers $\{\tau_\xi\}_{\xi_{k+1}=+}\subseteq [0,1]$ such that $\sum_{\xi_{k+1}=+}\tau_\xi=1$ and
\begin{equation}\label{equation-S}
\frac{\sum_{\xi_{k+1}=+}\tau_\xi p^{\xi}_i}{\sum_{\xi_{k+1}=+}\tau_\xi q^{\xi}_i}=\frac{p_i}{q_i}\text{ for any }1\leq i\leq k.
\end{equation}
Since $\frac{p^\xi_{k+1}}{q^\xi_{k+1}}>\frac{p_{k+1}}{q_{k+1}}$ for any $\xi\in\{+,-\}^{k+1}$ with $\xi_{k+1}=+,$ we have
\begin{equation}\label{equation-Q}
\frac{\sum_{\xi_{k+1}=+}\tau_\xi p^{\xi}_{k+1}}{\sum_{\xi_{k+1}=+}\tau_\xi q^{\xi}_{k+1}}>\frac{p_{k+1}}{q_{k+1}}.
\end{equation}
Similarly, for the $2^{k}$ points $\{b^\xi\}_{\xi_{k+1}=-},$ there are $2^k$ numbers $\{\tau_\xi\}_{\xi_{k+1}=-}\subseteq [0,1]$ such that $\sum_{\xi_{k+1}=-}\tau_\xi=1$ and
\begin{equation}\label{equation-T}
\frac{\sum_{\xi_{k+1}=-}\tau_\xi p^{\xi}_i}{\sum_{\xi_{k+1}=-}\tau_\xi q^{\xi}_i}=\frac{p_i}{q_i}\text{ for any }1\leq i\leq k.
\end{equation}
Since $\frac{p^\xi_{k+1}}{q^\xi_{k+1}}<\frac{p_{k+1}}{q_{k+1}}$ for any $\xi\in\{+,-\}^{k+1}$ with $\xi_{k+1}=-,$ we have
\begin{equation}\label{equation-R}
\frac{\sum_{\xi_{k+1}=-}\tau_\xi p^{\xi}_{k+1}}{\sum_{\xi_{k+1}=-}\tau_\xi q^{\xi}_{k+1}}<\frac{p_{k+1}}{q_{k+1}}.
\end{equation}
By \eqref{equation-Q}, \eqref{equation-R} and Lemma \ref{lemma-E}(2) there is $\tau_{k+1}\in(0,1)$ such that
$$\frac{\tau_{k+1}\sum_{\xi_{k+1}=+}\tau_\xi p^{\xi}_{k+1}+(1-\tau_{k+1})\sum_{\xi_{k+1}=-}\tau_\xi p^{\xi}_{k+1}}{\tau_{k+1}\sum_{\xi_{k+1}=+}\tau_\xi q^{\xi}_{k+1}+(1-\tau_{k+1})\sum_{\xi_{k+1}=-}\tau_\xi q^{\xi}_{k+1}}=\frac{p_{k+1}}{q_{k+1}}.$$
By \eqref{equation-S}, \eqref{equation-T} and Lemma \ref{lemma-E}(1), we have $$\frac{\tau_{k+1}\sum_{\xi_{k+1}=+}\tau_\xi p^{\xi}_{i}+(1-\tau_{k+1})\sum_{\xi_{k+1}=-}\tau_\xi p^{\xi}_{i}}{\tau_{k+1}\sum_{\xi_{k+1}=+}\tau_\xi q^{\xi}_{i}+(1-\tau_{k+1})\sum_{\xi_{k+1}=-}\tau_\xi q^{\xi}_{i}}=\frac{p_{i}}{q_{i}}\text{ for any }1\leq i\leq k.$$
Let
$$
\theta_\xi=\left\{\begin{array}{ll}
\tau_{k+1}\tau_\xi, & \text { for } \xi_{k+1}=+ \\
(1-\tau_{k+1})\tau_\xi, & \text { for } \xi_{k+1}=-.
\end{array}\right.
$$
Then we have $\sum_{\xi\in\{+,-\}^{k+1}}\theta_\xi=1$ and $$ \frac{\sum_{\xi\in\{+,-\}^{k+1}}\theta_\xi p^{\xi}_i}{\sum_{\xi\in\{+,-\}^{k+1}}\theta_\xi q^{\xi}_i}=\frac{p_i}{q_i}\text{ for any }1\leq i\leq k+1.$$ So we complete the proof of Lemma \ref{lemma-C}.
\end{proof}
From Lemma \ref{lemma-C} we have
\begin{Cor}\label{corollary-A}
Suppose $(X, f)$ is a dynamical system. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$B$ satisfies \eqref{equation-O}. Then for any $a\in \mathrm{Int}(L_{A,B})$ and any $2^d$ invariant measures $\{\mu_\xi\}_{\xi\in\{+,-\}^d}$ with
$$\mathcal{P}_{A,B}\left(\mu_\xi\right)\in a^\xi \text{ for any }\xi\in \{+,-\}^d,$$ there are $2^d$ numbers $\{\theta_\xi\}_{\xi\in\{+,-\}^d}\subseteq [0,1]$ such that $\sum_{\xi\in\{+,-\}^d}\theta_\xi=1$ and $\mathcal{P}_{A,B}\left(\sum_{\xi\in\{+,-\}^d}\theta_\xi\mu_\xi\right)= a.$
\end{Cor}
\begin{Lem}\label{lemma-D}
Suppose $(X, f)$ is a dynamical system. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$B$ satisfies \eqref{equation-O}. Then for any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu\in M_{A,B}(a)$ and any $\eta,\zeta>0$, there exist $2^d$ invariant measures $\{\mu_\xi\}_{\xi\in\{+,-\}^d}$ such that
$$\mathcal{P}_{A,B}\left(\mu_\xi\right)\in a^\xi, h_{\mu_\xi}(f)>h_{\mu}(f)-\eta \text{ and } \rho(\mu_\xi,\mu)<\zeta \text{ for any }\xi\in \{+,-\}^d.$$
\end{Lem}
\begin{proof}
Since $a\in \mathrm{Int}(L_{A,B}),$ for each $\xi\in \{+,-\}^d$ there is $\nu_\xi\in M(f,X)$ such that $\mathcal{P}_{A,B}\left(\nu_\xi\right)\in a^\xi.$ Then there is $\tau_\xi\in(0,1)$ close to $1$ such that $\mu_\xi=\tau_\xi\mu+(1-\tau_\xi)\nu_\xi$ satisfies
$$h_{\mu_\xi}(f)>h_{\mu}(f)-\eta \text{ and } \rho(\mu_\xi,\mu)<\zeta \text{ for any }\xi\in \{+,-\}^d.$$ Moreover, since $\tau_\xi<1$ and $\mathcal{P}_{A,B}(\mu)=a,$ Lemma \ref{lemma-E}(2) gives $\mathcal{P}_{A,B}\left(\mu_\xi\right)\in a^\xi.$
\end{proof}
In \cite{BarreiraDoutor2009}, L. Barreira and P. Doutor gave the following conditional variational principle.
\begin{Thm}\cite[Theorem 3]{BarreiraDoutor2009}\label{BarreiraDoutor2009-theorem3}
Suppose $(X, f)$ is a dynamical system whose entropy function is upper semi-continuous. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$
\text{span}\left\{\Phi^1, \Psi^1,\cdots,\Phi^d,\Psi^d\right\} \subseteq E(f,X)
$
and $B$ satisfies \eqref{equation-O}.
If $a \not\in L_{A,B}$, then $R_{A,B}(a)=\emptyset.$ Otherwise, if $a \in \mathrm{Int}(L_{A,B})$, then $R_{A,B}(a)\neq \emptyset$, and the following properties hold:
\begin{description}
\item[(1)] $\htop(f,R_{A,B}(a))$ satisfies the variational principle:
$$
\htop(f,R_{A,B}(a))=\max \left\{h_{\mu}(f): \mu \in M(f,X) \text { and } \mathcal{P}_{A,B}(\mu)=a\right\}.
$$
\item[(2)] There is an ergodic measure $\mu_{a} \in M(f,X)$ with $\mathcal{P}_{A,B}\left(\mu_{a}\right)=a, \mu_{a}\left(R_{A,B}(a)\right)=1$, and
$$
\htop(f,R_{A,B}(a))=h_{\mu_{a}}(f).
$$
\end{description}
\end{Thm}
\subsection{Proof of Theorem \ref{thm-Almost-Additive}}
Fix $a\in \mathrm{Int}(L_{A,B}),$ $\mu_0\in M_{A,B}(a)$ and $\eta, \zeta>0.$
Since the metric entropy is upper semi-continuous and $\alpha:M(f,X)\to \mathbb{R}$ is continuous, there is $0<\zeta'<\zeta$ such that
\begin{equation}\label{equation-C}
h_{\omega}(f)<h_{\mu_0}(f)+\frac{\eta}{2} \text{ and }|\alpha(\omega)-\alpha(\mu_0)|<\frac{\eta}{2}
\end{equation}
for any $\omega\in M(f,X)$ with $\rho(\mu_0,\omega)<\zeta'.$
By Lemma \ref{lemma-D} there exist $2^d$ invariant measures $\{\mu_\xi\}_{\xi\in\{+,-\}^d}$ such that
\begin{equation}\label{equation-U}
\mathcal{P}_{A,B}\left(\mu_\xi\right)\in a^\xi, h_{\mu_\xi}(f)>h_{\mu_0}(f)-\frac{\eta}{6} \text{ and } \rho(\mu_\xi,\mu_0)<\frac{\zeta'}{2} \text{ for any }\xi\in \{+,-\}^d.
\end{equation}
Since the function $\mathcal{P}_{A,B}$ is continuous, there is $0<\zeta''<\zeta'$ such that
\begin{equation}\label{equation-B}
\mathcal{P}_{A,B}\left(\omega_\xi\right)\in a^\xi
\end{equation}
for any $\omega_\xi \in M(f,X)$ with $\rho(\omega_\xi,\mu_\xi)<\zeta''.$
By the assumptions of Theorem \ref{thm-Almost-Additive}, for the $2^d$ invariant measures $\{\mu_\xi\}_{\xi\in\{+,-\}^d}$ there exist compact invariant subsets $\Lambda_\xi\subseteq\Lambda\subsetneq X$ such that for each $\xi\in \{+,-\}^d$
\begin{description}
\item[(1)] $\htop(f, \Lambda_\xi)>h_{\mu_\xi}(f)-\frac{\eta}{6}.$
\item[(2)] $d_H(\cov\{\mu_\xi\}_{\xi\in \{+,-\}^d}, M(f, \Lambda))<\frac{\zeta''}{2}$, $d_H(\mu_\xi, M(f, \Lambda_\xi))<\frac{\zeta''}{2}.$
\item[(3)] $\text{span}\left\{\Phi^1|_{\Lambda}, \Psi^1|_{\Lambda},\cdots,\Phi^d|_{\Lambda},\Psi^d|_{\Lambda}\right\}\subseteq E(f|_{\Lambda},\Lambda).$
\end{description}
By item(1) and the variational principle of the topological entropy, there are $\nu_\xi\in M(f, \Lambda_\xi)$ such that $$h_{\nu_\xi}(f)>\htop(f, \Lambda_\xi)-\frac{\eta}{6}>h_{\mu_\xi}(f)-\frac{2\eta}{6}>h_{\mu_0}(f)-\frac{\eta}{2}.$$
Then by item(2) and \eqref{equation-B}, we have $\mathcal{P}_{A,B}\left(\nu_\xi\right)\in a^\xi.$
By Corollary \ref{corollary-A} there are $2^d$ numbers $\{\theta_\xi\}_{\xi\in\{+,-\}^d}\subseteq [0,1]$ such that $\sum_{\xi\in\{+,-\}^d}\theta_\xi=1$ and $\mathcal{P}_{A,B}\left(\nu'\right)= a$ where $\nu'=\sum_{\xi\in\{+,-\}^d}\theta_\xi\nu_\xi.$
Then on one hand we have
\begin{equation}\label{equation-E}
\begin{split}
&\sup \left\{h_{\mu}(f): \mu \in M(f,\Lambda) \text { and } \mathcal{P}_{A,B}(\mu)=a\right\}\\
\geq &h_{\nu'}(f)\geq \min\{h_{\nu_\xi}(f):\xi\in \{+,-\}^d\}\\
> &h_{\mu_0}(f)-\frac{\eta}{2}.
\end{split}
\end{equation}
On the other hand, by item(2), \eqref{equation-U} and \eqref{equation-C} we have
\begin{equation}\label{equation-F}
\sup \left\{h_{\mu}(f): \mu \in M(f,\Lambda) \text { and } \mathcal{P}_{A,B}(\mu)=a\right\}<h_{\mu_0}(f)+\frac{\eta}{2}.
\end{equation}
Now by item(3) and Theorem \ref{BarreiraDoutor2009-theorem3}, there is an ergodic measure $\nu \in M(f,\Lambda)$ with $\mathcal{P}_{A,B}\left(\nu\right)=a$, and
$$
h_{\nu}(f)=\htop(f,R_{A,B}(a))=\max \left\{h_{\mu}(f): \mu \in M(f,\Lambda) \text { and } \mathcal{P}_{A,B}(\mu)=a\right\}.
$$
Then $\nu\in M_{A,B}^{erg}(a)$ and by \eqref{equation-E}, \eqref{equation-F} we have $|h_{\nu}(f)-h_{\mu_0}(f)|<\frac{\eta}{2}.$ By item(2) and \eqref{equation-U} we have $\rho(\nu,\mu_0)<\zeta'<\zeta.$ Finally, by \eqref{equation-C} we have $|\alpha(\nu)-\alpha(\mu_0)|<\frac{\eta}{2},$ and thus $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta.$ So we complete the proof of Theorem \ref{thm-Almost-Additive}.\qed
\section{Almost Additive Sequences and Theorem \ref{thm-continuous}(II)-(IV)}\label{section-almost2}
In this section we give abstract conditions under which Theorem \ref{thm-continuous}(II)-(IV) holds in the more general context of almost additive sequences of continuous functions.
Consider a topological dynamical system $(X, f).$ Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$B$ satisfies \eqref{equation-O}. Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function.
Recall that the pressure of $\alpha$ with respect to $\mu$ is $P(f,\alpha,\mu)=h_\mu(f)+\alpha(\mu).$ For any $a\in L_{A,B},$ denote $$H_{A,B}(f,\alpha,a)=\sup\{P(f,\alpha,\mu):\mu\in M_{A,B}(a)\}.$$ In particular, when $\alpha\equiv0,$ we write
$$H_{A,B}(f,a)=H_{A,B}(f,0,a)=\sup\{h_\mu(f):\mu\in M_{A,B}(a)\}.$$
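Under the hypotheses of Theorem \ref{BarreiraDoutor2009-theorem3} below (upper semi-continuous entropy function and the spanning condition), the quantity $H_{A,B}(f,a)$ admits a dynamical interpretation: for $a\in \mathrm{Int}(L_{A,B}),$ the supremum is attained and
$$H_{A,B}(f,a)=\max\{h_\mu(f):\mu\in M_{A,B}(a)\}=\htop(f,R_{A,B}(a)).$$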
\begin{maintheorem}\label{thm-Almost-Additive2}
Assume that, in the conditions of Theorem \ref{thm-Almost-Additive},
$\{\mu\in M(f,X):h_{\mu}(f)=0\}$ is dense in $M(f,X),$ and that $\alpha$ satisfies the following: for any $\mu_1, \mu_2 \in M(f, X)$ with $\alpha(\mu_1) \neq \alpha(\mu_2),$
\begin{equation}\label{equation-W}
\beta(\theta):=\alpha(\theta \mu_1+(1-\theta) \mu_2)\text{ is strictly monotonic on }[0,1],
\end{equation}
and for any $\mu_1, \mu_2 \in M(f, X)$ with $\alpha(\mu_1) = \alpha(\mu_2)$
\begin{equation}\label{equation-AF}
\beta(\theta):=\alpha(\theta \mu_1+(1-\theta) \mu_2)\text{ is constant on }[0,1].
\end{equation}
Then we have
\begin{description}
\item[(II)] For any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu_0\in M_{A,B}(a),$ any $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c\leq P(f,\alpha,\mu_0)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-c|<\eta.$
\item[(III)] For any $a\in \mathrm{Int}(L_{A,B})$ and $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c< H_{A,B}(f,\alpha,a),$ the set $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)=c\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$ If further there is an invariant measure with full support, then for any $a\in \mathrm{Int}(L_{A,B})$ and $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c< H_{A,B}(f,\alpha,a),$ the set $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)=c,\ S_\mu=X\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
\item[(IV)] $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M(f,X),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}),\ \max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}(f,\alpha,a)\}$ coincides with $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M_{erg}(f,X),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}),\\ \max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}(f,\alpha,a)\}.$ If further $
\text{span}\left\{\Phi^1, \Psi^1,\cdots,\Phi^d,\Psi^d\right\} \subseteq E(f,X)
$ and $\alpha\equiv0,$ then $\{(\mathcal{P}_{A,B}(\mu), h_{\mu}(f)):\mu\in M(f,X),\ \mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B})\}=\{(\mathcal{P}_{A,B}(\mu), h_\mu(f)):\mu\in M_{erg}(f,X),\ \mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B})\}.$
\end{description}
\end{maintheorem}
\begin{Ex}
The function $\alpha:M(f,X)\to \mathbb{R}$ can be defined as
\begin{description}
\item[(1)] $\alpha\equiv0.$
\item[(2)] $\alpha(\mu)=\int\varphi d \mu$ with a continuous function $\varphi.$
\item[(3)] $\alpha(\mu)=\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int\varphi_{n} d \mu$ with an almost additive sequence of continuous functions $\Phi=\left(\varphi_{n}\right)_{n\in\N}.$
\end{description}
Then $\alpha:M(f,X)\to \mathbb{R}$ is a continuous function by Example \ref{example-1}. Furthermore, in each of these cases $\alpha$ is affine, and hence it satisfies \eqref{equation-W} and \eqref{equation-AF}.
\end{Ex}
\subsection{Proof of Theorem \ref{thm-Almost-Additive2}}
We first establish several auxiliary results.
\begin{Lem}\label{lemma-B}
Suppose $(X, f)$ is a dynamical system. Let $V$ be a convex subset of $M(f,X).$ If there is an invariant measure $\mu_V\in V$ with $S_{\mu_V}=X,$ then $\{\mu\in V:S_\mu=X\}$ is residual in $V.$
\end{Lem}
\begin{proof}
By \cite[Proposition 21.11]{DGS}, $\{\mu\in M(f,X):S_\mu=X\}$ is either empty or residual in $M(f,X).$ Hence, if there is an invariant measure $\mu_V\in V$ with $S_{\mu_V}=X,$ then $\{\mu\in M(f,X):S_\mu=X\}$ is residual in $M(f,X),$ and thus $\{\mu\in V:S_\mu=X\}$ is a $G_\delta$ subset of $V.$ In addition, for any $\nu\in V$ and $\theta\in(0,1),$ we have $\nu_\theta=\theta\nu+(1-\theta)\mu_V\in V$ and $S_{\nu_\theta}=X.$ So $\{\mu\in V:S_\mu=X\}$ is dense in $V.$
\end{proof}
\begin{Lem}\label{lemma-A}
Suppose $(X, f)$ is a dynamical system. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$B$ satisfies \eqref{equation-O}. If $\alpha:M(f,X)\to \mathbb{R}$ is a continuous function satisfying \eqref{equation-W} and \eqref{equation-AF}, then for any $a\in \mathrm{Int}(L_{A,B})$ and any $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c< H_{A,B}(f,\alpha,a),$ the following properties hold:
\begin{description}
\item[(1)] If $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)\geq c\}$ is dense in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\},$ then $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)\geq c\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
\item[(2)] If there is an invariant measure with full support, then $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c,\ S_\mu=X\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
\item[(3)] If $\{\mu\in M(f,X):h_{\mu}(f)=0\}$ is dense in $M(f,X),$ then $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)= c\}$ is dense in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$ If further the entropy function is upper semi-continuous, then $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)= c\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
\end{description}
\end{Lem}
\begin{proof}
(1) From \cite[Proposition 5.7]{DGS}, $M_{erg}(f,X)$ is a $G_\delta$ subset of $M(f,X).$ Then $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)\geq c\}$ is a $G_\delta$ subset of $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$ If $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)\geq c\}$ is dense in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\},$ then $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)\geq c\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
(2) By \cite[Proposition 21.11]{DGS}, $\{\mu\in M(f,X):S_\mu=X\}$ is either empty or residual in $M(f,X).$ Hence, if there is an invariant measure with full support, then $\{\mu\in M(f,X):S_\mu=X\}$ is residual in $M(f,X).$
Now we show that $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c,\ S_\mu=X\}$ is non-empty.
By Lemma \ref{lemma-D} there exist $2^d$ invariant measures $\{\mu_\xi\}_{\xi\in\{+,-\}^d}$ such that
$$\mathcal{P}_{A,B}\left(\mu_\xi\right)\in a^\xi \text{ for any }\xi\in \{+,-\}^d.$$
Since $\{\mu\in M(f,X):S_\mu=X\}$ is dense in $M(f,X)$ and the function $\mathcal{P}_{A,B}$ is continuous, there exists $\omega_\xi\in M(f,X)$ close to $\mu_\xi$ such that
\begin{equation*}
\mathcal{P}_{A,B}\left(\omega_\xi\right)\in a^\xi,\ S_{\omega_\xi}=X\text{ for any }\xi\in \{+,-\}^d.
\end{equation*}
By Corollary \ref{corollary-A} there are $2^d$ numbers $\{\theta_\xi\}_{\xi\in\{+,-\}^d}\subseteq [0,1]$ such that $\sum_{\xi\in\{+,-\}^d}\theta_\xi=1$ and $\mathcal{P}_{A,B}\left(\omega\right)= a$ where $\omega=\sum_{\xi\in\{+,-\}^d}\theta_\xi\omega_\xi.$ Then $S_{\omega}=X.$ Since $c< H_{A,B}(f,\alpha,a),$ there is $\nu\in M_{A,B}(a)$ such that $P(f,\alpha,\nu)>c.$
Since the entropy function is affine and $\alpha$ is continuous, we can choose $\theta\in(0,1)$ close to $1$ such that $\tilde{\mu}=\theta\nu+(1-\theta)\omega$ satisfies $P(f,\alpha,\tilde{\mu})>c.$ Moreover, $\mathcal{P}_{A,B}(\tilde{\mu})=a$ by Lemma \ref{lemma-E}(1), and $S_{\tilde{\mu}}=X$ since $\theta<1$ and $S_{\omega}=X.$
Then $\tilde{\mu}\in \{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c,\ S_\mu=X\}.$ Note that $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}$ is a convex set by \eqref{equation-W}, \eqref{equation-AF} and Lemma \ref{lemma-E}(1). So by Lemma \ref{lemma-B} we complete the proof of item(2).
(3) Fix $\mu_0\in M_{A,B}(a)$ with $P(f,\alpha,\mu_0)\geq c$ and $\zeta>0.$ By Lemma \ref{lemma-D} there exist $2^d$ invariant measures $\{\mu_\xi\}_{\xi\in\{+,-\}^d}$ such that
\begin{equation}\label{equation-X}
\mathcal{P}_{A,B}\left(\mu_\xi\right)\in a^\xi \text{ and } \rho(\mu_\xi,\mu_0)<\frac{\zeta}{2} \text{ for any }\xi\in \{+,-\}^d.
\end{equation}
Since $\{\mu\in M(f,X):h_{\mu}(f)=0\}$ is dense in $M(f,X)$ and the function $\mathcal{P}_{A,B}$ is continuous, there exist $\nu_\xi\in M(f,X)$ close to $\mu_\xi$ such that
\begin{equation}\label{equation-Y}
\mathcal{P}_{A,B}\left(\nu_\xi\right)\in a^\xi,\ h_{\nu_\xi}(f)=0 \text{ and } \rho(\nu_\xi,\mu_\xi)<\frac{\zeta}{2} \text{ for each } \xi\in \{+,-\}^d.
\end{equation}
By Corollary \ref{corollary-A} there are $2^d$ numbers $\{\theta_\xi\}_{\xi\in\{+,-\}^d}\subseteq [0,1]$ such that $\sum_{\xi\in\{+,-\}^d}\theta_\xi=1$ and $\mathcal{P}_{A,B}\left(\nu'\right)= a$ where $\nu'=\sum_{\xi\in\{+,-\}^d}\theta_\xi\nu_\xi.$
Then by \eqref{equation-Y} $h_{\nu'}(f)=0.$ By \eqref{equation-X} and \eqref{equation-Y} we have $\rho(\nu',\mu_0)<\zeta.$
Since the entropy function is affine and $\alpha$ is continuous, the function $\theta\mapsto P(f,\alpha,\theta\mu_0+(1-\theta)\nu')$ is continuous on $[0,1].$ As $P(f,\alpha,\nu')=\alpha(\nu')\leq c\leq P(f,\alpha,\mu_0),$ we may choose $\theta\in[0,1]$ such that $\nu=\theta\mu_0+(1-\theta)\nu'$ satisfies $P(f,\alpha,\nu)=c.$ Then by Lemma \ref{lemma-E}(1)
\begin{equation}\label{equation-Z}
\mathcal{P}_{A,B}(\nu)=a \text{ and } \rho(\nu,\mu_0)<\zeta.
\end{equation}
So $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)= c\}$ is dense in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$ If further the entropy function is upper semi-continuous, then $P(f,\alpha,\cdot)$ is upper semi-continuous, so $\{\mu\in M(f,X):P(f,\alpha,\mu)\in[c,c+\frac{1}{n})\}$ is open in $\{\mu\in M(f,X):P(f,\alpha,\mu)\geq c\}$ for any $n\in\mathbb{N}.$ Then
\begin{equation*}
\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\in[c,c+\frac{1}{n})\} \text{ is open and dense in }\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\},
\end{equation*}
and thus
$\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)=c\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
\end{proof}
We now show that Theorem \ref{thm-Almost-Additive} is the key ingredient in the proof of Theorem \ref{thm-Almost-Additive2}.
\begin{Lem}\label{lemma-F}
Suppose $(X, f)$ is a dynamical system. Let $d \in \mathbb{N},$ $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$B$ satisfies \eqref{equation-O}, and $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function satisfying \eqref{equation-W} and \eqref{equation-AF}. Assume that for any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu_0\in M_{A,B}(a)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta.$
If $\{\mu\in M(f,X):h_{\mu}(f)=0\}$ is dense in $M(f,X),$ then we have
\begin{description}
\item[(1)] For any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu_0\in M_{A,B}(a),$ any $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c\leq P(f,\alpha,\mu_0)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-c|<\eta.$
\end{description}
If further the entropy function is upper semi-continuous, then
\begin{description}
\item[(2)] For any $a\in \mathrm{Int}(L_{A,B})$ and $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c< H_{A,B}(f,\alpha,a),$ the set $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)=c\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$ If further there is an invariant measure with full support, then for any $a\in \mathrm{Int}(L_{A,B})$ and $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c< H_{A,B}(f,\alpha,a),$ the set $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)=c,\ S_\mu=X\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
\item[(3)] $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M(f,X),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}),\ \max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}(f,\alpha,a)\}$ coincides with $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M_{erg}(f,X),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}),\\ \max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}(f,\alpha,a)\}.$
\item[(4)] If further $
\text{span}\left\{\Phi^1, \Psi^1,\cdots,\Phi^d,\Psi^d\right\} \subseteq E(f,X)
$ and $\alpha\equiv0,$ then we have $\{(\mathcal{P}_{A,B}(\mu), h_{\mu}(f)):\mu\in M(f,X),\ \mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B})\}=\{(\mathcal{P}_{A,B}(\mu), h_\mu(f)):\mu\in M_{erg}(f,X),\ \mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B})\}.$
\end{description}
\end{Lem}
\begin{proof}
(1) Fix $a\in \mathrm{Int}(L_{A,B}),$ $\mu_0\in M_{A,B}(a),$ $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c\leq P(f,\alpha,\mu_0)$ and $\eta, \zeta>0.$ By Lemma \ref{lemma-A}(3), there exists $\nu'\in M_{A,B}(a)$ such that $P(f,\alpha,\nu')=c$ and $\rho(\nu',\mu_0)<\frac{\zeta}{2}.$
Applying the assumption of the lemma to $a\in \mathrm{Int}(L_{A,B}),$ $\nu'\in M_{A,B}(a)$ and $\eta, \frac{\zeta}{2}>0,$ we obtain $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\nu')<\frac{\zeta}{2}$ and $|P(f,\alpha,\nu)-P(f,\alpha,\nu')|<\eta.$ Then $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-c|<\eta,$ which completes the proof of item(1).
(2) Fix $a\in \mathrm{Int}(L_{A,B})$ and $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c<H_{A,B}(f,\alpha,a).$ First we show that
\begin{equation}\label{equation-K}
\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)\geq c\} \text{ is dense in }\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.
\end{equation}
Let $\mu_0\in M_{A,B}(a)$ be an invariant measure with $P(f,\alpha,\mu_0) \geq c$ and let $\zeta>0$. If $P(f,\alpha,\mu_0)>c$, then there is $\eta>0$ such that $c<c+\eta<P(f,\alpha,\mu_0).$ By the assumption of the lemma applied to $a\in \mathrm{Int}(L_{A,B}),$ $\mu_0\in M_{A,B}(a)$ and $\eta, \zeta>0,$ there exists an ergodic measure $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta,$ and hence $P(f,\alpha,\nu)>c.$
If $P(f,\alpha,\mu_0)=c$, then we can pick an invariant measure $\mu'\in M_{A,B}(a)$ such that $c<P(f,\alpha,\mu') \leq H_{A,B}(f,\alpha,a)$, and next pick a sufficiently small number $\theta \in(0,1)$ such that $\rho\left(\mu_0, \mu''\right)<\zeta / 2$, where $\mu''=(1-\theta) \mu_0+\theta\mu'.$ By \eqref{equation-W} we have $P(f,\alpha,\mu'')>c.$ By the same argument, there exists an ergodic measure $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu'')<\zeta/2$ and $P(f,\alpha,\nu)> c.$ So $\rho(\nu,\mu_0)<\zeta.$
By \eqref{equation-K} and Lemma \ref{lemma-A}(1),
\begin{equation}\label{equation-M}
\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)\geq c\}\text{ is residual in }\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.
\end{equation}
By Lemma \ref{lemma-A}(3),
\begin{equation}\label{equation-L}
\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)=c\} \text{ is residual in }\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.
\end{equation}
If there is an invariant measure with full support, then by Lemma \ref{lemma-A}(2) we have
\begin{equation}\label{equation-N}
\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c,\ S_\mu=X\}\text{ is residual in }\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.
\end{equation}
So by \eqref{equation-M}, \eqref{equation-L} and \eqref{equation-N}, we complete the proof of item(2).
(3) Fix $a\in \mathrm{Int}(L_{A,B})$ and $\mu_0\in M_{A,B}(a)$ with $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq P(f,\alpha,\mu_0)< H_{A,B}(f,\alpha,a).$
Then by item(2), $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)=P(f,\alpha,\mu_0)\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq P(f,\alpha,\mu_0)\}.$ In particular, there is $\mu_a\in M_{A,B}^{erg}(a)$ such that $P(f,\alpha,\mu_a)=P(f,\alpha,\mu_0).$ The reverse inclusion is immediate since $M_{erg}(f,X)\subseteq M(f,X).$
(4) Fix $a\in \mathrm{Int}(L_{A,B})$ and $\mu_0\in M_{A,B}(a).$ If $h_{\mu_0}(f)=\sup \left\{h_{\mu}(f): \mu \in M_{A,B}(a)\right\},$ then by Theorem \ref{BarreiraDoutor2009-theorem3}, there is an ergodic measure $\mu_{a} \in M_{A,B}(a)$ such that
$$
h_{\mu_{a}}(f)=\htop(f,R_{A,B}(a))=h_{\mu_0}(f).
$$
If $h_{\mu_0}(f)<\sup \left\{h_{\mu}(f): \mu \in M_{A,B}(a)\right\},$ then by item(2) $\{\mu\in M_{A,B}^{erg}(a):h_{\mu}(f)=h_{\mu_0}(f)\}$ is residual in $\{\mu\in M_{A,B}(a):h_{\mu}(f)\geq h_{\mu_0}(f)\}.$ In particular, there is $\mu_a\in M_{A,B}^{erg}(a)$ such that $h_{\mu_a}(f)=h_{\mu_0}(f).$ So we complete the proof of item(4).
\end{proof}
\noindent{\bf Proof of Theorem \ref{thm-Almost-Additive2}:} Note that the conditions of Theorem \ref{thm-Almost-Additive} are contained in those of Theorem \ref{thm-Almost-Additive2}. Then for any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu_0\in M_{A,B}(a)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta.$ So we obtain Theorem \ref{thm-Almost-Additive2} by Lemma \ref{lemma-F}.\qed
\section{Proof of Theorem \ref{thm-continuous}}\label{section-thm}
In this section, we use the `multi-horseshoe' dense property and the results on almost additive sequences of continuous functions obtained in Sections \ref{section-entropy-dense}-\ref{section-almost2} to give a more general result than Theorem \ref{thm-continuous}.
\subsection{Uniqueness of equilibrium measures}\label{subsection-equilbrium}
We first recall from \cite{Barreira1996,Barreira2006,Mummert2006} the notion of nonadditive topological pressure. Consider a dynamical system $(X,f)$. Let $\mathcal{U}$ be a finite open cover of $X$. Given $n \in \mathbb{N}$, we denote by $\mathcal{W}_{n}(\mathcal{U})$ the collection of $n$-tuples $U=\left(U_{1}, \ldots, U_{n}\right)$ with $U_{1}, \ldots, U_{n} \in \mathcal{U}$. For each $U \in \mathcal{W}_{n}(\mathcal{U})$ we write $m(U)=n$, and we define the open set
$$
X(U)=\left\{x \in X: f^{k-1} x \in U_{k} \text { for } k=1, \ldots, m(U)\right\} .
$$
We say that a collection $\Gamma \subset \bigcup_{n \in \mathbb{N}} \mathcal{W}_{n}(\mathcal{U})$ covers the set $X$ if $\bigcup_{U \in \Gamma} X(U) \supset X$. Now let $\Phi=\left(\varphi_{n}\right)_{n}$ be a sequence of continuous functions $\varphi_{n}: X \rightarrow \mathbb{R}$. We define the number
$$
\gamma_{n}(\Phi, \mathcal{U})=\sup \left\{\left|\varphi_{n}(x)-\varphi_{n}(y)\right|: x, y \in X(U) \text { with } U \in \mathcal{W}_{n}(\mathcal{U})\right\}.
$$
We assume that
$$
\lim _{\operatorname{diam} \mathcal{U} \rightarrow 0} \limsup _{n \rightarrow \infty} \frac{1}{n} \gamma_{n}(\Phi, \mathcal{U})=0.
$$
For each $n$-tuple $U \in \mathcal{W}_{n}(\mathcal{U})$ we write $\varphi(U)=\sup _{X(U)} \varphi_{n}$ when $X(U) \neq \varnothing$, and $\varphi(U)=-\infty$ otherwise. We also define
\begin{equation}\label{equation-AD}
M(\alpha, \Phi, \mathcal{U})=\lim _{n \rightarrow \infty} \inf _{\Gamma} \sum_{U \in \Gamma} \exp (-\alpha m(U)+\varphi(U))
\end{equation}
where the infimum is taken over all collections $\Gamma \subset \bigcup_{k \geq n} \mathcal{W}_{k}(\mathcal{U})$ covering $X$. One can show that the quantity in (\ref{equation-AD}) jumps from $+\infty$ to 0 at a unique value of $\alpha$, and thus we can define
$$
P(\Phi, \mathcal{U})=\inf \{\alpha: M(\alpha, \Phi, \mathcal{U})=0\}.
$$
Moreover, the limit
$$
P(\Phi)=\lim _{\operatorname{diam} \mathcal{U} \rightarrow 0} P(\Phi, \mathcal{U})
$$
exists (see \cite{Barreira1996} for details). The number $P(\Phi)$ is called the nonadditive topological pressure of the sequence of functions $\Phi$ (with respect to $f$ on $X$).
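As a consistency check (standard, see \cite{Barreira1996}): for the additive sequence $\varphi_n=S_n\varphi$ generated by a continuous function $\varphi,$ the quantity above reduces to the classical topological pressure,
$$P(\Phi)=P(f,\varphi)=\sup_{\mu\in M(f,X)}\left(h_\mu(f)+\int\varphi\, d\mu\right),$$
and in particular $P(\Phi)=\htop(f)$ when $\varphi\equiv0.$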
Let $\Phi=\left(\varphi_{n}\right)_{n\in\mathbb{N}}$ be an almost additive sequence of continuous functions. A measure $\mu\in M(f,X)$ is said to be an {\it equilibrium measure} associated with $\Phi$ if $$P(\Phi)=h_\mu(f)+\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \varphi_{n} d \mu.$$
The uniqueness of equilibrium measures was established for sequences $\Phi$ with bounded variation. We say that $\Phi$ has {\it bounded variation} if there exists $\varepsilon>0$ for which
$
\sup _{n \in \mathbb{N}} \gamma_{n}(\Phi, \varepsilon)<\infty
$
with $$\gamma_{n}(\Phi, \varepsilon)=\sup \left\{\left|\varphi_{n}(x)-\varphi_{n}(y)\right|: d\left(f^{k} (x), f^{k} (y)\right)<\varepsilon \text { for } k=0, \ldots, n\right\}.$$
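In the additive case $\varphi_n=S_n\varphi$ this is exactly Bowen's classical condition on $\varphi,$ namely
$$\sup_{n \in \mathbb{N}}\,\sup \left\{\left|S_n\varphi(x)-S_n\varphi(y)\right|: d\left(f^{k} (x), f^{k} (y)\right)<\varepsilon \text { for } k=0, \ldots, n\right\}<\infty,$$
which holds, for instance, for every H\"older continuous $\varphi$ on a smooth Anosov system, since two orbit segments that stay $\varepsilon$-close for $n$ iterates are exponentially close in between.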
\begin{Lem}\cite[Page 294]{Barreira2006}
Suppose that $(X,f)$ is an expansive dynamical system satisfying the specification property. Let $\Phi$ be an almost additive sequence of continuous functions with bounded variation. Then there is a unique equilibrium measure $\mu_{\Phi}$ for $\Phi$.
\end{Lem}
From \cite[Proposition 23.20]{DGS} it is known that if a homeomorphism of a compact metric space is expansive, mixing and has the shadowing property, then it satisfies the specification property. So we have the following.
\begin{Cor}\label{corollary-B}
Suppose that $(X, f)$ is topologically Anosov and mixing. Let $\Phi$ be an almost additive sequence of continuous functions with bounded variation. Then there is a unique equilibrium measure $\mu_{\Phi}$ for $\Phi$.
\end{Cor}
Next, using a spectral decomposition theorem due to Bowen, we will show that Corollary \ref{corollary-B} is still true if $(X, f)$ is just transitive.
\begin{Thm}\label{theorem-A}
Suppose that $(X, f)$ is topologically Anosov and transitive. Let $\Phi$ be an almost additive sequence of continuous functions with bounded variation. Then there is a unique equilibrium measure $\mu_{\Phi}$ for $\Phi$.
\end{Thm}
\begin{proof}
Since $(X, f)$ is expansive, transitive and has the shadowing property, by \cite[Theorem 3.1.11]{AH} $X$ admits a decomposition
$$
X=\bigsqcup_{i=0}^{m-1} f^{i}(D)
$$
where $m$ is a positive integer, such that the sets $f^{i}(D)$, $0 \leq i \leq m-1$, are closed $f^{m}$-invariant subsets of $X$, $f^{i}(D)\cap f^{j}(D)=\emptyset$ for any $0\leq i<j\leq m-1,$ and
$$
\left.f^{m}\right|_{f^{i}(D)}: f^{i}(D) \rightarrow f^{i}(D)
$$
is mixing for every $0 \leq i \leq m-1$. Let $k$ be a positive integer; by \cite[Theorem 2.3.3]{AH}, a dynamical system $(X,f)$ has the shadowing property if and only if $(X,f^{k})$ does. So $(D,f^m|_{D})$ has the shadowing property. Since $(X, f)$ is expansive, the uniform continuity of $f, \ldots, f^{m-1}$ implies that $(D,f^m|_{D})$ is also expansive. So $(D,f^m|_{D})$ is topologically Anosov and mixing.
For any $\mu \in M(f,X)$, define $\sigma(\mu) \in M(D)$ by $$\sigma(\mu)(A)=\mu (A \cup f(A) \cup \dots \cup f^{m-1}(A)),$$ where $A$ is a Borel subset of $D$. By \cite[Proposition 23.17]{DGS}, $\sigma$ is a homeomorphism from $M(f,X)$ onto $M(f^m,D)$ and $$\sigma^{-1}(\nu)=\frac{1}{m}(\nu + f_{*}\nu +\dots + f^{m-1}_{*}\nu) \in M(f,X)$$ for any $\nu \in M(f^m,D),$ where $f_*\nu(B)=\nu(f^{-1}(B))$ for any Borel set $B$.
Note that $$h_{\sigma(\mu)}(f^m)=mh_{\mu}(f)$$ and $$\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \sum_{i=0}^{m-1}\varphi_{n}\circ f^i d \sigma(\mu)=m\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \varphi_{n} d \mu.$$ Thus maximizing $$h_{\sigma(\mu)}(f^m)+\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \sum_{i=0}^{m-1}\varphi_{n}\circ f^i d \sigma(\mu)$$ is equivalent to maximizing $$h_\mu(f)+\lim\limits_{n \rightarrow \infty} \frac{1}{n} \int \varphi_{n} d \mu.$$
For $\Phi=\left(\varphi_{n}\right)_{n\in\mathbb{N}}$, define $\Psi=\left(\sum_{i=0}^{m-1}\varphi_{n}\circ f^i\right)_{n\in\mathbb{N}}.$ From the uniform continuity of $f, \ldots, f^{m-1},$ if $\Phi$ is an almost additive sequence of continuous functions with bounded variation, then so is $\Psi.$ Since $(D,f^m|_{D})$ is topologically Anosov and mixing, there is a unique equilibrium measure $\nu_{\Psi}$ for $\Psi$ by Corollary \ref{corollary-B}. So $\sigma^{-1}(\nu_\Psi)$ is the unique equilibrium measure for $\Phi.$
\end{proof}
\subsection{Proof of Theorem \ref{thm-continuous}}
Now we show that the results of Theorems \ref{thm-Almost-Additive} and \ref{thm-Almost-Additive2} hold for transitive topologically Anosov systems. For a continuous function $\varphi$, let $\varphi_{n}=\varphi+\varphi\circ f+\cdots+\varphi\circ f^{n-1}$; then $\{\varphi_{n}\}_{n\in \N}$ is an additive sequence of continuous functions, and it has bounded variation if $\varphi$ has bounded variation. So, letting $\alpha\equiv 0$ and $d=1$ in Theorem \ref{thm-almost}, we obtain Theorem \ref{thm-continuous}.
\begin{maintheorem}\label{thm-almost}
Suppose that $(X, f)$ is topologically Anosov and transitive. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ be such that
$B$ satisfies \eqref{equation-O} and $\Phi^i, \Psi^i$ have bounded variation for any $1\leq i\leq d.$ Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function satisfying \eqref{equation-W} and \eqref{equation-AF}.
Then:
\begin{description}
\item[(I)] For any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu_0\in M_{A,B}(a)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta.$
\item[(II)] For any $a\in \mathrm{Int}(L_{A,B}),$ any $\mu_0\in M_{A,B}(a),$ any $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c\leq P(f,\alpha,\mu_0)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-c|<\eta.$
\item[(III)] For any $a\in \mathrm{Int}(L_{A,B})$ and $\max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq c< H_{A,B}(f,\alpha,a),$ $\{\mu\in M_{A,B}^{erg}(a):P(f,\alpha,\mu)=c,\ S_\mu=X\}$ is residual in $\{\mu\in M_{A,B}(a):P(f,\alpha,\mu)\geq c\}.$
\item[(IV)] $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M(f,X),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}),\ \max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}(f,\alpha,a)\}$ coincides with $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M_{erg}(f,X),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}),\\ \max_{\mu\in M_{A,B}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}(f,\alpha,a)\}.$ If further $\alpha=0,$ then $\{(\mathcal{P}_{A,B}(\mu), h_{\mu}(f)):\mu\in M(f,X),\ \mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B})\}=\{(\mathcal{P}_{A,B}(\mu), h_\mu(f)):\mu\in M_{erg}(f,X),\ \mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B})\}.$
\end{description}
\end{maintheorem}
\begin{proof}
Since $(X,f)$ is expansive, the entropy function is upper semi-continuous by \cite[Theorem 8.2]{Walters}. Since $\Phi^i, \Psi^i$ have bounded variation for any $1\leq i\leq d,$ every sequence in $\text{span}\left\{\Phi^1, \Psi^1,\cdots,\Phi^d,\Psi^d\right\}$ has bounded variation. Thus by Theorem \ref{theorem-A} we have $\text{span}\left\{\Phi^1, \Psi^1,\cdots,\Phi^d,\Psi^d\right\} \subseteq E(f,X),$ and $\text{span}\left\{\Phi^1|_{\Lambda}, \Psi^1|_{\Lambda},\cdots,\Phi^d|_{\Lambda},\Psi^d|_{\Lambda}\right\}\subseteq E(f|_{\Lambda},\Lambda)$ if $(\Lambda,f|_{\Lambda})$ is transitive and topologically Anosov.
Then we complete the proof by Theorem \ref{Mainlemma-convex-by-horseshoe}, Theorem \ref{thm-Almost-Additive}, Theorem \ref{thm-Almost-Additive2} and Corollary \ref{Corollary-zero-metric-entropy}.
\end{proof}
\section{Applications}\label{section-applications}
In this section, we apply the results of the previous sections to transitive locally maximal hyperbolic sets and transitive two-sided subshifts of finite type.
\subsection{Hyperbolic diffeomorphisms}
We now suppose that $f:M\to M$ is a diffeomorphism of a compact $C^{\infty}$ Riemannian manifold $M$. Then the derivative of $f$ can be considered a map $df: TM\to TM$ where $TM=\bigcup_{x\in M}T_xM$ is the tangent bundle of $M$ and $df_x:T_xM\to T_{f(x)}M$.
A closed subset $\Lambda\subset M$ is \emph{hyperbolic} if $f(\Lambda)=\Lambda$ and each tangent space $T_xM$ with $x\in\Lambda$ can be written as a direct sum
$T_xM=E_x^u\oplus E_x^s$
of subspaces so that
\begin{enumerate}
\item $Df(E_x^s)=E_{f(x)}^s,\ Df(E_x^u)=E_{f(x)}^u$;
\item there exist constants $c>0$ and $\lambda\in(0, 1)$ so that
$$\|Df^n(v)\|\leq c\lambda^n\|v\|~\textrm{when}~v\in E_x^s, n\geq 1,\, \text{ and } \,\|Df^{-n}(v)\|\leq c\lambda^n\|v\|~\textrm{when}~v\in E_x^u, n\geq 1.$$
\end{enumerate}
A hyperbolic set $\Lambda$ is said to be locally maximal for $f$ if there exists a neighborhood $U$ of $\Lambda$ in $M$ such that $\Lambda=\bigcap_{n=-\infty}^{+\infty}f^{n}(U)$. It is known that a locally maximal hyperbolic set is expansive by \cite[Corollary 6.4.10]{KatHas} and has the shadowing property by \cite[Theorem 18.1.2]{KatHas}.
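A standard concrete example (illustrative only, and not used in the arguments of this paper) is the hyperbolic toral automorphism induced by the matrix $\begin{pmatrix}2&1\\1&1\end{pmatrix}$, Arnold's cat map, for which the whole two-torus is a locally maximal hyperbolic set. A short numerical sketch exhibits the expansion/contraction rates of the splitting:

```python
import numpy as np

# Arnold's cat map is induced by this integer matrix acting on the 2-torus.
A = np.array([[2.0, 1.0], [1.0, 1.0]])

# Its eigenvalues give the expansion/contraction rates on E^u and E^s.
lam_s, lam_u = np.linalg.eigvalsh(A)   # sorted ascending: stable, unstable

# Hyperbolic splitting: one rate > 1 (unstable), one in (0, 1) (stable);
# the product is det A = 1, so the map preserves Lebesgue measure.
print(lam_u > 1.0 and 0.0 < lam_s < 1.0)        # True
print(np.isclose(lam_u * lam_s, 1.0))           # True
print(np.isclose(lam_u, (3 + np.sqrt(5)) / 2))  # True
```

Here $\lambda_u=(3+\sqrt{5})/2\approx 2.618$ and $\lambda_s=\lambda_u^{-1}\approx 0.382$, so conditions (1)–(2) of the definition hold with $c=1$ and $\lambda=\lambda_s$.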
\begin{Thm}\label{thm-basic-set}
Suppose that $(X,f)$ is a system restricted to a transitive locally maximal hyperbolic set. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ be such that
$B$ satisfies \eqref{equation-O} and $\Phi^i, \Psi^i$ have bounded variation for any $1\leq i\leq d.$ Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function satisfying \eqref{equation-W} and \eqref{equation-AF}. Then the results of Theorem \ref{thm-almost} hold.
\end{Thm}
For a Hölder continuous function $\varphi$, let $\varphi_{n}=\varphi+\varphi\circ f+\cdots+\varphi\circ f^{n-1}$; then $\{\varphi_{n}\}_{n\in \N}$ is an additive sequence of continuous functions, and from \cite[Example 2]{Bowen} it has bounded variation. So we have
\begin{Thm}\label{thm-basic-set-2}
Suppose that $(X,f)$ is a system restricted to a transitive locally maximal hyperbolic set. Let $\varphi$ be a Hölder continuous function.
Then the results of Theorem \ref{thm-continuous} hold.
\end{Thm}
\subsection{Transitive two-sided subshifts of finite type}
Let $k$ be a fixed natural number and let $C=\{0, 1, \ldots, k-1\}$. Put the discrete topology on $C.$ Consider the two-sided full symbolic space $\Sigma=\prod_{-\infty}^{\infty} C$, equipped with the product topology, and the shift homeomorphism $\sigma: \Sigma \to \Sigma$ defined by $(\sigma(w))_{n}=w_{n+1}$, where $w =\left(w_{n}\right)_{-\infty}^{\infty}.$ A metric on $\Sigma$ is defined by $d(x, y)=2^{-m}$ if $m$ is the largest natural number with $x_{n}=y_{n}$ for any $|n|<m$, and $d(x, y)=1$ if $x_{0} \neq y_{0}.$ If $X$ is a closed subset of $\Sigma$ with $\sigma X=X$ then $\left.\sigma\right|_{X}: X \to X$ is called a subshift. We usually write this as $\sigma: X \to X.$ A subshift $\sigma: X \rightarrow X$ is said to be of finite type if there exists some natural number $N$ and a collection of blocks of length $N+1$ with the property that $x=\left(x_{n}\right)_{-\infty}^{\infty} \in X$ if and only if each block $\left(x_{i}, \ldots, x_{i+N}\right)$ in $x$ of length $N+1$ is one of the prescribed blocks.
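To make the block condition concrete (a toy illustration with an invented helper, not part of the text), one can test whether a finite word is admissible against a prescribed set of blocks. For instance, the golden-mean shift is the subshift of finite type over $\{0,1\}$ with $N=1$ whose allowed length-$2$ blocks are all blocks except $11$:

```python
def admissible(word, allowed_blocks, N):
    """Check that every length-(N+1) block of `word` is one of the
    prescribed blocks, as in the definition of a subshift of finite type."""
    return all(tuple(word[i:i + N + 1]) in allowed_blocks
               for i in range(len(word) - N))

# Golden-mean shift: N = 1, allowed length-2 blocks are all except (1, 1).
allowed = {(0, 0), (0, 1), (1, 0)}
print(admissible([0, 1, 0, 0, 1], allowed, 1))  # True
print(admissible([0, 1, 1, 0], allowed, 1))     # False (contains block 11)
```

A bi-infinite point belongs to the subshift exactly when every finite window passes this check.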
Recall from \cite{Walters2} that a subshift satisfies the shadowing property if and only if it is a subshift of finite type. As a subsystem of the two-sided full shift, it is expansive. So we have
\begin{Thm}
Suppose that $(X,f)$ is a transitive two-sided subshift of finite type. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ be such that
$B$ satisfies \eqref{equation-O} and $\Phi^i, \Psi^i$ have bounded variation for any $1\leq i\leq d.$ Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function satisfying \eqref{equation-W} and \eqref{equation-AF}. Then the results of Theorem \ref{thm-almost} hold.
\end{Thm}
\begin{Lem}
Let $x,y\in \Sigma_2,$ $n\in\Z^+,$ and $\varepsilon>0.$ If $d(\sigma^i(x),\sigma^i(y))<\varepsilon$ for any $0\leq i\leq n,$ then we have $d(\sigma^i(x),\sigma^i(y))<\varepsilon \cdot2^{-\min\{i,n-i\}}$ for any $0\leq i\leq n.$
\end{Lem}
\begin{proof}
For $\varepsilon>0,$ there is an integer $j$ such that $\frac{1}{2^{j+1}}< \varepsilon\leq\frac{1}{2^{j}}.$ Then for any $0\leq i\leq n,$ since $d(\sigma^i(x),\sigma^i(y))<\varepsilon,$ we have $(\sigma^i(x))_l=(\sigma^i(y))_l$ for any $-j\leq l\leq j.$ Thus we have $x_l=y_l$ for any $-j\leq l\leq n+j.$ This implies $(\sigma^i(x))_l=(\sigma^i(y))_l$ for any $-j-i\leq l\leq n+j-i$ and any $0\leq i\leq n.$ So
\begin{equation*}
d(\sigma^i(x),\sigma^i(y))\leq 2^{-\min\{i+j,n+j-i\}-1}= 2^{-j-1} \cdot 2^{-\min\{i,n-i\}}<\varepsilon\cdot 2^{-\min\{i,n-i\}}
\end{equation*}
for any $0\leq i\leq n.$
\end{proof}
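The lemma can also be checked numerically (an illustration, not a proof; the sequences below are invented test points). We model points of the full $2$-shift as functions $\mathbb{Z}\to\{0,1\}$, take two points agreeing exactly on the coordinates $-j,\dots,n+j$, and verify the stated decay of $d(\sigma^i(x),\sigma^i(y))$:

```python
def d(x, y, depth=60):
    """Metric on the two-sided shift: d(x,y) = 2^{-m}, where m is the largest
    integer with x_l = y_l for all |l| < m (and d = 1 if x_0 != y_0)."""
    if x(0) != y(0):
        return 1.0
    m = 1
    while m < depth and all(x(l) == y(l) for l in range(-m, m + 1)):
        m += 1
    return 2.0 ** (-m)

def shift(x, i):
    """The i-th iterate of the shift map sigma applied to the point x."""
    return lambda l: x(l + i)

j, n = 3, 10
x = lambda l: 0                              # the all-zero point
y = lambda l: 0 if -j <= l <= n + j else 1   # agrees with x exactly on [-j, n+j]

eps = 2.0 ** (-j)  # then d(sigma^i(x), sigma^i(y)) < eps for all 0 <= i <= n
for i in range(n + 1):
    assert d(shift(x, i), shift(y, i)) <= eps * 2.0 ** (-min(i, n - i))
print("lemma verified for j =", j, "and n =", n)
```

For these points one computes $d(\sigma^i(x),\sigma^i(y))=2^{-j-1}\cdot 2^{-\min\{i,n-i\}}$ exactly, matching the bound in the lemma.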
Let $\varphi$ be a Hölder continuous function on $\Sigma_2$ with constant $K$ and exponent $\alpha.$ If $x,y\in \Sigma_2,$ $n\in\Z^+,$ and $\varepsilon>0$ satisfy $d(\sigma^i(x),\sigma^i(y))<\varepsilon$ for any $0\leq i\leq n,$ then we have
\begin{equation*}
\begin{split}
\left|\sum_{k=0}^{n}\varphi(\sigma^k(x))-\sum_{k=0}^{n}\varphi(\sigma^k(y))\right|\leq &\sum_{k=0}^{n}\left|\varphi(\sigma^k(x))-\varphi(\sigma^k(y))\right|\\
\leq &\sum_{k=0}^{n}K(d(\sigma^k(x),\sigma^k(y)))^\alpha\\
\leq &\sum_{k=0}^{n}K(\varepsilon \cdot 2^{-\min\{k,n-k\}})^\alpha\\
\leq &K \varepsilon^\alpha\cdot \frac{2}{1-2^{-\alpha}}.
\end{split}
\end{equation*}
This implies that every Hölder continuous function on $\Sigma_2$ has bounded variation.
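The geometric-series estimate above can be checked numerically: each value of $\min\{k,n-k\}$ occurs at most twice among $0\leq k\leq n$, so the sum is bounded by $2/(1-2^{-\alpha})$ uniformly in $n$. A quick sketch (illustrative only):

```python
# Numerical check of the geometric-series estimate used above: each value of
# min{k, n-k} occurs at most twice among 0 <= k <= n, hence
#   sum_{k=0}^{n} 2^(-alpha * min(k, n-k)) <= 2 / (1 - 2^(-alpha)).
for alpha in (0.25, 0.5, 1.0, 2.0):
    bound = 2.0 / (1.0 - 2.0 ** (-alpha))
    for n in (1, 5, 20, 200):
        s = sum(2.0 ** (-alpha * min(k, n - k)) for k in range(n + 1))
        assert s <= bound, (alpha, n, s, bound)
print("geometric-series bound verified")
```

Since the bound does not depend on $n$, the Birkhoff sums of $\varphi$ have uniformly bounded variation, as claimed.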
As in Theorem \ref{thm-basic-set-2}, we have
\begin{Thm}\label{thm-basic-set-3}
Suppose that $(X,f)$ is a transitive two-sided subshift of finite type. Let $\varphi$ be a Hölder continuous function.
Then the results of Theorem \ref{thm-continuous} hold.
\end{Thm}
\subsection{Homoclinic classes} \label{section-H(p)}
In this subsection, we consider homoclinic classes and give corresponding results on the refined Katok conjecture.
Let $f$ be a $C^{1}$ diffeomorphism on a compact Riemannian manifold $M$. We recall that the homoclinic class of a hyperbolic saddle $p$, denoted by $H(p),$ is the closure of the set of hyperbolic saddles $q$ homoclinically related to $p$ (the stable manifold of the orbit of $q$ transversely meets the unstable one of the orbit of $p$ and vice versa).
Let $X=H(p)$ be a homoclinic class. We denote by $M^{horse}(H(p))$ the set of measures which can be approximated by horseshoes; that is, an invariant measure $\mu\in M(f,X)$ is in $M^{horse}(H(p))$ if and only if for any $\varepsilon>0,$ there is an $f$-invariant compact subset $\Lambda_{\varepsilon} \subseteq H(p)$ such that $\mu_\varepsilon:=\mu|_{\Lambda_{\varepsilon}}$ satisfies the following three properties
\begin{description}
\item[(1)] $(\Lambda_\varepsilon,f|_{\Lambda_\varepsilon})$ is a transitive locally maximal hyperbolic set which contains a hyperbolic saddle $q$ homoclinically related to $p.$
\item[(2)] $\rho(\mu,\mu_\varepsilon)<\varepsilon.$
\item[(3)] $h_{\mu_\varepsilon}(f)>h_{\mu}(f)-\varepsilon.$
\end{description}
We denote $M^{horse}_{erg}(H(p))=M^{horse}(H(p))\cap M_{erg}(f,X).$
Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ such that
$B$ satisfies \eqref{equation-O}. Denote
$$L_{A,B}^{horse}=\{\mathcal{P}_{A,B}(\mu):\mu\in M^{horse}(H(p))\}.$$
For any $a\in L_{A,B}^{horse},$ define $$M_{A,B}^{horse}(a)=\{\mu\in M^{horse}(H(p)):\mathcal{P}_{A,B}(\mu)=a\},$$ $$M_{A,B}^{erg,horse}(a)=\{\mu\in M^{horse}_{erg}(H(p)):\mathcal{P}_{A,B}(\mu)=a\}.$$
Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function.
We define the pressure of $\alpha$ with respect to $\mu$ by $P(f,\alpha,\mu)=h_\mu(f)+\alpha(\mu).$ For any $a\in L_{A,B}^{horse},$ denote $$H_{A,B}^{horse}(f,\alpha,a)=\sup\{P(f,\alpha,\mu):\mu\in M_{A,B}^{horse}(a)\}.$$ In particular, when $\alpha\equiv0,$ we write
$$H_{A,B}^{horse}(f,a)=H_{A,B}^{horse}(f,0,a)=\sup\{h_\mu(f):\mu\in M_{A,B}^{horse}(a)\}.$$
\begin{maintheorem}\label{thm-almost-2}
Let $f$ be a $C^{1}$ diffeomorphism on a compact Riemannian manifold $M,$ and $X=H(p)$ be a nontrivial homoclinic class. Assume that the entropy function of $(X,f)$ is upper semi-continuous. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ be such that
$B$ satisfies \eqref{equation-O} and $\Phi^i, \Psi^i$ have bounded variation for any $1\leq i\leq d.$ Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function satisfying \eqref{equation-W} and \eqref{equation-AF}.
Then:
\begin{description}
\item[(I)] For any $a\in \mathrm{Int}(L_{A,B}^{horse}),$ any $\mu_0\in M_{A,B}^{horse}(a)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg,horse}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta.$
\item[(II)] For any $a\in \mathrm{Int}(L_{A,B}^{horse}),$ any $\mu_0\in M_{A,B}^{horse}(a),$ any $\max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq c\leq P(f,\alpha,\mu_0)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg,horse}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-c|<\eta.$
\item[(III)] For any $a\in \mathrm{Int}(L_{A,B}^{horse})$ and $\max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq c< H_{A,B}^{horse}(f,\alpha,a),$ $\{\mu\in M_{A,B}^{erg,horse}(a):P(f,\alpha,\mu)=c\}$ is residual in $\{\mu\in M_{A,B}^{horse}(a):P(f,\alpha,\mu)\geq c\}.$
\item[(IV)] $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M^{horse}(H(p)),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}^{horse}),\ \max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}^{horse}(f,\alpha,a)\}$ coincides with $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M^{horse}_{erg}(H(p)),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}^{horse}), \max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}^{horse}(f,\alpha,a)\}.$
\end{description}
\end{maintheorem}
There are many transitive systems for which the whole space is a homoclinic class and the entropy function is upper semi-continuous, including \\
(i) the nonuniformly hyperbolic diffeomorphisms constructed by Katok \cite{Katok-ex}. For an arbitrary compact connected two-dimensional manifold $M$, A. Katok proved that there exists a $C^\infty$ diffeomorphism $f$ such that the Riemannian volume $m$ is an $f$-invariant ergodic hyperbolic measure. From \cite{Katok} (or Theorem S.5.3 on page 694 of the book \cite{KatHas}) we know that the support of any ergodic and non-atomic hyperbolic measure of a $C^{1+\alpha}$ diffeomorphism is contained in a non-trivial homoclinic class; hence there is a hyperbolic periodic point $p$ such that $M=S_m=H(p).$ Moreover, J. Buzzi \cite{Buzzi1997} showed that every $C^\infty$ diffeomorphism is asymptotically entropy expansive, which implies that the entropy function is upper semi-continuous by \cite[Theorem 20.9]{DGS}.
\\(ii) generic systems in the space of robustly transitive diffeomorphisms $\operatorname{Diff}^{1}_{RT}(M).$ By the robustly transitive partially hyperbolic diffeomorphisms constructed by Ma\~{n}\'{e} \cite{Mane-ex} and the robustly transitive nonpartially hyperbolic diffeomorphisms constructed by Bonatti and Viana \cite{BV-ex}, we know that $\operatorname{Diff}^{1}_{RT}(M)$ is a non-empty open set in $\operatorname{Diff}^{1}(M).$ Since any non-trivial isolated transitive set of a $C^{1}$ generic diffeomorphism is a non-trivial homoclinic class \cite{BD1999}, we have that $$\mathcal{R}_1=\{f\in \operatorname{Diff}^{1}_{RT}(M): \text{ there is a hyperbolic periodic point } p \text{ such that } M=H(p) \}$$ is generic in $\operatorname{Diff}^{1}_{RT}(M).$ Moreover, $C^1$ generically in any dimension, isolated homoclinic classes are entropy expansive \cite{PV2008}. Since entropy expansiveness implies upper semi-continuity of the entropy function by \cite[Theorem 20.9]{DGS}, we have that $$\mathcal{R}_2=\{f\in \mathcal{R}_1: \text{ the entropy function is upper semi-continuous} \}$$ is generic in $\operatorname{Diff}^{1}_{RT}(M).$\\
(iii) generic systems in the space of volume-preserving diffeomorphisms $\operatorname{Diff}^{1}_{vol}(M).$ Let $M$ be a compact connected Riemannian manifold. Bonatti and Crovisier proved in \cite[Theorem 1.3]{BC2004} that there exists a residual $C^1$-subset $\mathcal{R}_1$ of $\operatorname{Diff}^{1}_{vol}(M)$ such that if $f\in\mathcal{R}_1$ then $f$ is a transitive diffeomorphism. Moreover, by its proof on page 79 and page 87 of \cite{BC2004}, if $f\in\mathcal{R}_1$ then there is a hyperbolic periodic point $p$ such that $M=H(p).$ Since the space of diffeomorphisms away from homoclinic tangencies $\operatorname{Diff}^{1}(M)\setminus\overline{HT}$ is open in $\operatorname{Diff}^{1}(M),$ then
$\mathcal{R}_2=\mathcal{R}_1\cap \operatorname{Diff}^{1}(M)\setminus\overline{HT}$ is generic in $\operatorname{Diff}^{1}_{vol}(M)\setminus\overline{HT}.$ Moreover, every $C^1$ diffeomorphism away from homoclinic tangencies is entropy expansive \cite{LVY2013}. Since entropy expansiveness implies upper semi-continuity of the entropy function by \cite[Theorem 20.9]{DGS}, if $f\in\mathcal{R}_2$ then there is a hyperbolic periodic point $p$ such that $M=H(p)$ and the entropy function is upper semi-continuous.
\subsubsection{Some lemmas}
Now we give some results corresponding to Sections \ref{section-entropy-dense}, \ref{Almost Additive} and \ref{section-almost2}.
\begin{Lem}\label{lemma-H(p)}
Let $f$ be a $C^{1}$ diffeomorphism on a compact Riemannian manifold $M,$ and $H(p)$ be a nontrivial homoclinic class. Then the set $M^{horse}(H(p))$ is convex.
\end{Lem}
\begin{proof}
Fix $\mu_1,\mu_2\in M^{horse}(H(p))$ and $\theta\in [0,1].$ Then for any $\varepsilon>0$ and $i\in\{1,2\},$ there is an $f$-invariant compact subset $\Lambda_{\varepsilon}^i \subseteq H(p)$ such that $\mu_\varepsilon^i:=\mu_i|_{\Lambda_{\varepsilon}^i}$ satisfies the following three properties
\begin{description}
\item[(1)] $(\Lambda_\varepsilon^i,f|_{\Lambda_\varepsilon^i})$ is a transitive locally maximal hyperbolic set which contains a hyperbolic saddle $q_i$ homoclinically related to $p.$
\item[(2)] $\rho(\mu_i,\mu_\varepsilon^i)<\varepsilon.$
\item[(3)] $h_{\mu_\varepsilon^i}(f)>h_{\mu_i}(f)-\varepsilon.$
\end{description}
Then $q_1$ is homoclinically related to $q_2,$ since being homoclinically related is an equivalence relation by \cite[Proposition 2.1]{Newhouse1972}. This implies that there is a transitive locally maximal hyperbolic set $\Lambda_\varepsilon$ which contains $\Lambda_\varepsilon^1$ and $\Lambda_\varepsilon^2$ (for example, see \cite[Lemma 8]{Newho1979}). Let $\mu_\varepsilon=\theta\mu_\varepsilon^1+(1-\theta)\mu_\varepsilon^2.$ Then we have
\begin{equation*}
\rho(\theta\mu_1+(1-\theta)\mu_2,\mu_\varepsilon)\leq \theta\rho(\mu_1,\mu_\varepsilon^1)+(1-\theta)\rho(\mu_2,\mu_\varepsilon^2)<\varepsilon
\end{equation*}
and
\begin{equation*}
h_{\mu_\varepsilon}(f)=\theta h_{\mu_\varepsilon^1}(f)+(1-\theta)h_{\mu_\varepsilon^2}(f)>\theta h_{\mu_1}(f)+(1-\theta)h_{\mu_2}(f)-\varepsilon=h_{\theta\mu_1+(1-\theta)\mu_2}(f)-\varepsilon.
\end{equation*}
Note that $\mu_\varepsilon$ is supported on $\Lambda_\varepsilon.$ So $\theta\mu_1+(1-\theta)\mu_2\in M^{horse}(H(p))$ and thus $M^{horse}(H(p))$ is convex.
\end{proof}
\begin{Thm}\label{def-strong-basic-2}
Let $f$ be a $C^{1}$ diffeomorphism on a compact Riemannian manifold $M,$ and $X=H(p)$ be a nontrivial homoclinic class. Then $(X, f)$ satisfies the 'multi-horseshoe' entropy-dense property on $M^{horse}(H(p))$; that is, for any $K=\cov\{\mu_i\}_{i=1}^m\subseteq M^{horse}(H(p))$ and any $\eta, \zeta>0$, there exist compact invariant subsets $\Lambda_i\subseteq\Lambda\subsetneq H(p)$ such that for each $1\leq i\leq m$
\begin{enumerate}
\item $(\Lambda_i,f|_{\Lambda_i})$ and $(\Lambda,f|_{\Lambda})$ are transitive locally maximal hyperbolic sets.
\item $\htop(f, \Lambda_i)>h_{\mu_i}(f)-\eta.$
\item $d_H(K, M(f, \Lambda))<\zeta$, $d_H(\mu_i, M(f, \Lambda_i))<\zeta$.
\end{enumerate}
\end{Thm}
\begin{proof}
Fix $K=\cov\{\mu_i\}_{i=1}^m\subseteq M^{horse}(H(p))$ and $\eta, \zeta>0.$ Denote $\tau=\frac{1}{2}\min\{\eta,\zeta\}.$ Then for any $1\leq i\leq m$ there is an $f$-invariant compact subset $\Lambda_{\tau}^i \subseteq H(p)$ such that $\mu_\tau^i:=\mu_i|_{\Lambda_{\tau}^i}$ satisfies the following three properties
\begin{description}
\item[(1)] $(\Lambda_\tau^i,f|_{\Lambda_\tau^i})$ is a transitive locally maximal hyperbolic set which contains a hyperbolic saddle $q_i$ homoclinically related to $p.$
\item[(2)] $\rho(\mu_i,\mu_\tau^i)<\tau.$
\item[(3)] $h_{\mu_\tau^i}(f)>h_{\mu_i}(f)-\tau.$
\end{description}
Then $q_i$ is homoclinically related to $q_j,$ since being homoclinically related is an equivalence relation by \cite[Proposition 2.1]{Newhouse1972}. This implies that there is a transitive locally maximal hyperbolic set $\Lambda_\tau$ which contains $\bigcup_{i=1}^{m}\Lambda_\tau^i$. Note that a transitive locally maximal hyperbolic set is expansive by \cite[Corollary 6.4.10]{KatHas} and has the shadowing property by \cite[Theorem 18.1.2]{KatHas}. Then $(\Lambda_\tau,f|_{\Lambda_\tau})$ has the 'multi-horseshoe' entropy-dense property by Theorem \ref{Mainlemma-convex-by-horseshoe}. Thus for $K_\tau=\cov\{\mu_\tau^i\}_{i=1}^m\subseteq M(f, \Lambda_\tau)$ and $\tau>0$, there exist compact invariant subsets $\Lambda_i\subseteq\Lambda\subsetneq X$ such that for each $1\leq i\leq m$
\begin{enumerate}
\item $(\Lambda_i,f|_{\Lambda_i})$ and $(\Lambda,f|_{\Lambda})$ are conjugate to transitive subshifts of finite type.
\item $\htop(f, \Lambda_i)>h_{\mu_\tau^i}(f)-\tau>h_{\mu_i}(f)-2\tau>h_{\mu_i}(f)-\eta.$
\item $d_H(K_\tau, M(f, \Lambda))<\tau$, $d_H(\mu_\tau^i, M(f, \Lambda_i))<\tau$.
\end{enumerate}
From \cite{Anosov2010}, any hyperbolic set conjugate to a subshift of finite type is locally maximal. So $(\Lambda_i,f|_{\Lambda_i})$ and $(\Lambda,f|_{\Lambda})$ are transitive locally maximal hyperbolic sets. By item 3 we have $$d_H(K, M(f, \Lambda))<d_H(K, K_\tau)+d_H(K_\tau, M(f, \Lambda))<2\tau<\zeta,$$ $$d_H(\mu_i, M(f, \Lambda_i))<d_H(\mu_i,\mu_\tau^i)+d_H(\mu_\tau^i, M(f, \Lambda_i))<2\tau<\zeta.$$
So we complete the proof.
\end{proof}
If we replace $M(f,X)$ by $M^{horse}(H(p))$ in Sections \ref{Almost Additive} and \ref{section-almost2}, the results of Theorems \ref{thm-Almost-Additive} and \ref{thm-Almost-Additive2} also hold. In fact, the convexity of $M(f,X)$ is one core property in the proofs of Theorems \ref{thm-Almost-Additive} and \ref{thm-Almost-Additive2}. By Lemma \ref{lemma-H(p)} the set $M^{horse}(H(p))$ is convex, so the arguments of Sections \ref{Almost Additive} and \ref{section-almost2} go through. Here, we omit the proof.
\begin{Thm}\label{thm-Almost-Additive-H(p)}
Let $f$ be a $C^{1}$ diffeomorphism on a compact Riemannian manifold $M,$ and $X=H(p)$ be a nontrivial homoclinic class. Assume that the entropy function of $(X,f)$ is upper semi-continuous. Let $d \in \mathbb{N}$ and $(A, B) \in A(f,X)^{d} \times A(f,X)^{d}$ be such that
$B$ satisfies \eqref{equation-O}. Assume that the following holds:
for any $K=\cov\{\mu_i\}_{i=1}^m\subseteq M^{horse}(H(p)),$ and any $\eta, \zeta>0$, there exist compact invariant subsets $\Lambda_i\subseteq\Lambda\subsetneq X$ such that for each $i\in\{1,2,\cdots,m\}$
\begin{description}
\item[(1)] $\htop(f, \Lambda_i)>h_{\mu_i}(f)-\eta.$
\item[(2)] $d_H(K, M(f, \Lambda))<\zeta$, $d_H(\mu_i, M(f, \Lambda_i))<\zeta.$
\item[(3)] $\text{span}\left\{\Phi^1|_{\Lambda}, \Psi^1|_{\Lambda},\cdots,\Phi^d|_{\Lambda},\Psi^d|_{\Lambda}\right\}\subseteq E(f|_{\Lambda},\Lambda).$
\end{description}
Let $\alpha:M(f,X)\to \mathbb{R}$ be a continuous function. Then:
\begin{description}
\item[(I)] For any $a\in \mathrm{Int}(L_{A,B}^{horse}),$ any $\mu_0\in M_{A,B}^{horse}(a)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg,horse}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-P(f,\alpha,\mu_0)|<\eta.$
\end{description}
If further $\{\mu\in M^{horse}(H(p)):h_{\mu}(f)=0\}$ is dense in $M^{horse}(H(p)),$ and $\alpha$ satisfies \eqref{equation-W} and \eqref{equation-AF},
then we have
\begin{description}
\item[(II)] For any $a\in \mathrm{Int}(L_{A,B}^{horse}),$ any $\mu_0\in M_{A,B}^{horse}(a),$ any $\max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq c\leq P(f,\alpha,\mu_0)$ and any $\eta, \zeta>0$, there is $\nu\in M_{A,B}^{erg,horse}(a)$ such that $\rho(\nu,\mu_0)<\zeta$ and $|P(f,\alpha,\nu)-c|<\eta.$
\item[(III)] For any $a\in \mathrm{Int}(L_{A,B}^{horse})$ and $\max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq c< H_{A,B}^{horse}(f,\alpha,a),$ the set $\{\mu\in M_{A,B}^{erg,horse}(a):P(f,\alpha,\mu)=c\}$ is residual in $\{\mu\in M_{A,B}^{horse}(a):P(f,\alpha,\mu)\geq c\}.$
\item[(IV)] $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M^{horse}(H(p)),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}^{horse}),\ \max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}^{horse}(f,\alpha,a)\}$ coincides with $\{(\mathcal{P}_{A,B}(\mu), P(f,\alpha,\mu)):\mu\in M^{horse}_{erg}(H(p)),\ a=\mathcal{P}_{A,B}(\mu)\in\mathrm{Int}(L_{A,B}^{horse}), \max_{\mu\in M_{A,B}^{horse}(a)}\alpha(\mu)\leq P(f,\alpha,\mu)< H_{A,B}^{horse}(f,\alpha,a)\}.$
\end{description}
\end{Thm}
By Theorem \ref{def-strong-basic-2} and Corollary \ref{Corollary-zero-metric-entropy}(1), we have that the set of invariant measures with zero metric entropy is dense in $M^{horse}(H(p)).$
\begin{Lem}\label{lemma-G}
Let $f$ be a $C^{1}$ diffeomorphism on a compact Riemannian manifold $M,$ and $X=H(p)$ be a nontrivial homoclinic class. Then $\{\mu\in M^{horse}(H(p)):h_{\mu}(f)=0\}$ is dense in $M^{horse}(H(p)).$
\end{Lem}
\subsubsection{Proof of Theorem \ref{thm-almost-2}}
Since $\Phi^i, \Psi^i$ have bounded variation for any $1\leq i\leq d,$ every sequence in $\text{span}\left\{\Phi^1, \Psi^1,\cdots,\Phi^d,\Psi^d\right\}$ has bounded variation. Thus by Theorem \ref{theorem-A} we have $\text{span}\left\{\Phi^1|_{\Lambda}, \Psi^1|_{\Lambda},\cdots,\Phi^d|_{\Lambda},\Psi^d|_{\Lambda}\right\}\subseteq E(f|_{\Lambda},\Lambda)$ if $(\Lambda,f|_{\Lambda})$ is transitive and topologically Anosov.
Then we complete the proof by Theorem \ref{def-strong-basic-2}, Theorem \ref{thm-Almost-Additive-H(p)} and Lemma \ref{lemma-G}.\qed
\bigskip
{\bf Acknowledgements. } X. Hou and X. Tian are
supported by National Natural Science Foundation of China (grant No. 12071082,11790273) and in part by Shanghai Science and Technology Research Program (grant No. 21JC1400700).
\section{#1}\vspace{\sectionReduceBot}}
\newcommand{\csubsection}[1]{\vspace{\subsectionReduceTop}\subsection{#1}\vspace{\subsectionReduceBot}}
\newcommand{\csubsubsection}[1]{\vspace{\subsubsectionReduceTop}\subsubsection{#1}\vspace{\subsubsectionReduceBot}}
\newcommand{\cabstract}[1]{\vspace{\abstractReduceTop}\begin{abstract}#1\end{abstract}\vspace{\abstractReduceBot}}
\setlength{\abovecaptionskip}{6pt plus 0pt minus 0pt}
\setlength{\textfloatsep}{12pt plus 0pt minus 0pt}
\setlength{\floatsep}{15pt plus 0pt minus 0pt}
\section{Introduction}
Convolutional neural networks (CNNs) have become the \textit{de facto} standard for extracting visual representations, and have proven remarkably effective at numerous downstream tasks such as object detection~\cite{lin2017focal}, instance segmentation~\cite{he2017mask} and image captioning~\cite{Anderson2018BottomUpAT}. Similarly, in natural language processing, Transformers rule the roost~\cite{devlin2018bert,radford2015unsupervised,radford2021learning,brown2020language}. Their effectiveness at capturing short and long range information has led to state-of-the-art results across tasks such as question answering~\cite{rajpurkar2016squad} and language understanding~\cite{wang2018glue}.
In computer vision, Transformers were initially employed as long range information aggregators across space (e.g., in object detection~\cite{carion2020end}) and time (e.g., in video understanding~\cite{wang2018non}), but these methods continued to use CNNs~\cite{lecun1998gradient} to obtain raw visual representations. More recently however, CNN-free visual backbones employing Transformer modules~\cite{touvron2020training,dosovitskiy2020image} have shown impressive performance on image classification benchmarks such as ImageNet~\cite{krizhevsky2012imagenet}. The race to dethrone CNNs has now begun to expand beyond Transformers -- a recent unexpected result shows that a multi-layer perceptron (MLP) exclusive network~\cite{tolstikhin2021mlpmixer} can be just as effective at image classification.
On the surface, CNNs~\cite{lecun1998gradient,chollet2017xception,xie2017aggregated,he2016deep}, Vision Transformers (ViTs)~\cite{dosovitskiy2020image,touvron2020training} and MLP-mixers~\cite{tolstikhin2021mlpmixer} are typically presented as disparate architectures. However, taking a step back and analyzing these methods reveals that their core designs are quite similar. Many of these methods adopt a cascade of neural network blocks. Each block typically consists of aggregation modules and fusion modules. Aggregation modules share and accumulate information across a predefined context window over the module inputs (e.g., the self attention operation in a Transformer encoder), while fusion modules combine position-wise features and produce module outputs (e.g., feed forward layers in ResNet).
In this paper, we show that the primary differences in many popular architectures result from variations in their aggregation modules. These differences can in fact be characterized as variants of an affinity matrix within the aggregator that is used to determine information propagation between a query vector and its context. For instance, in ViTs~\cite{dosovitskiy2020image,touvron2020training}, this affinity matrix is dynamically generated using key and query computations; but in the Xception architecture~\cite{chollet2017xception} (that employs depthwise convolutions), the affinity matrix is static -- the affinity weights are the same regardless of position, and they remain the same across all input images regardless of size. And finally the MLP-Mixer~\cite{tolstikhin2021mlpmixer} also uses a static affinity matrix which changes across the landscape of the input.
Along this unified view, we present \mbox{\sc{Container}}\xspace (CONText AggregatIon NEtwoRk), a general purpose building block for multi-head context aggregation. A \mbox{\sc{Container}}\xspace block contains both static affinity as well as dynamic affinity based aggregation, which are combined using learnable mixing coefficients. This enables the \mbox{\sc{Container}}\xspace block to process long range information while still exploiting the inductive bias of the local convolution operation. \mbox{\sc{Container}}\xspace blocks are easy to implement, can easily be substituted into many present day neural architectures and lead to highly performant networks whilst also converging faster and being data efficient.
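A toy NumPy sketch of this view (shapes, weights and mixing scalars here are invented for illustration and do not reproduce the actual \mbox{\sc{Container}}\xspace implementation): aggregation can be written as $Y=AV$, where the affinity matrix $A$ is computed from the input (dynamic, as in self attention), fixed and input-independent (static, as in MLP-Mixer or depthwise convolution), or a learnable mixture of the two:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 8                      # N tokens, d channels (toy sizes)
X = rng.standard_normal((N, d))  # input token features

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Dynamic affinity (self-attention style): generated from the input itself.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
A_dyn = softmax((X @ Wq) @ (X @ Wk).T / np.sqrt(d))  # (N, N), input-dependent

# Static affinity (MLP-Mixer / depthwise-conv style): fixed per position,
# identical for every input of this size.
A_static = softmax(rng.standard_normal((N, N)))      # (N, N), input-independent

# Container-style aggregation: a mixture of the two affinities; in the real
# model the mixing coefficients are learned, here they are fixed by hand.
alpha, beta = 0.7, 0.3
Y = (alpha * A_dyn + beta * A_static) @ (X @ Wv)

assert Y.shape == (N, d)
print(Y.shape)
```

Setting $\beta=0$ recovers a pure attention-style aggregator, while $\alpha=0$ recovers a purely static one; the mixture lets a single block interpolate between the two regimes.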
Our proposed \mbox{\sc{Container}}\xspace architecture obtains 82.7\% Top-1 accuracy on ImageNet using 22M parameters, improving +2.8 points over DeiT-S~\cite{touvron2020training} with a comparable number of parameters. It also converges faster, hitting DeiT-S's accuracy of 79.9\% in just 200 epochs compared to 300.
We also propose a more efficient model, named \mbox{\sc{Container-Light}}\xspace that employs only static affinity matrices early on but uses the learnable mixture of static and dynamic affinity matrices in the latter stages of computation. In contrast to ViTs that are inefficient at processing large inputs, \mbox{\sc{Container-Light}}\xspace can scale to downstream tasks such as detection and instance segmentation that require high resolution input images. Using a \mbox{\sc{Container-Light}}\xspace backbone and 12 epochs of training, RetinaNet~\cite{lin2017focal} is able to achieve 43.8 mAP, while Mask-RCNN~\cite{he2017mask} is able to achieve 45.1 mAP on box and 41.3 mAP on instance mask prediction, improvements of +7.3, +6.9 and +6.6 respectively, compared to a ResNet-50 backbone. The more recent DETR and its variants SMCA-DETR and Deformable DETR~\cite{carion2020end,gao2021fast,zhu2020deformable} also benefit from \mbox{\sc{Container-Light}}\xspace and achieve 38.9, 43.0 and 44.2 mAP, improving significantly over their ResNet-50 backbone baselines.
\mbox{\sc{Container-Light}}\xspace is data efficient. Our experiments show that it can obtain an ImageNet Top-1 accuracy of 61.8 using just 10\% of training data, significantly better than the 39.3 accuracy obtained by DeiT. \mbox{\sc{Container-Light}}\xspace also converges faster and achieves better kNN accuracy (71.5) compared to DeiT (69.6) under the DINO self-supervised training framework~\cite{caron2021emerging}.
The \mbox{\sc{Container}}\xspace unification and framework enable us to easily reproduce several past models and even extend them with just a few code and parameter changes. We extend multiple past models and show improved performance -- for instance, we produce a Hierarchical DeiT model, a multi-head MLP-Mixer and add a static affinity matrix to the DeiT architecture. Our code base and models will be released publicly. Finally, we analyse a \mbox{\sc{Container}}\xspace model containing both static and dynamic affinities and show the emergence of convolution-like local affinities in the early layers of the network.
In summary, our contributions include: (1) A unified view of popular architectures for visual inputs -- CNN, Transformer and MLP-mixer, (2) A novel network block -- \mbox{\sc{Container}}\xspace, which uses a mix of static and dynamic affinity matrices via learnable parameters and the corresponding architecture with strong results in image classification and (3) An efficient and effective extension -- \mbox{\sc{Container-Light}}\xspace with strong results in detection and segmentation. Importantly, we see that a number of concurrent works are aiming to fuse the CNN and Transformer architectures~\cite{Li2021LocalViTBL,Xu2021CoScaleCI,liu2021swin, Heo2021RethinkingSD, Vaswani2021ScalingLS, Zhang2021MultiScaleVL, Xu2021CoScaleCI,srinivas2021bottleneck}, validating our approach. We hope that our unified view helps place these different concurrent proposals in context and leads to a better understanding of the landscape of these methods.
\section{Related Work}
\noindent \textbf{Visual Backbones.}
Since AlexNet~\cite{krizhevsky2012imagenet} revolutionized computer vision, a host of CNN based architectures have provided further improvements in terms of accuracy including VGG~\cite{simonyan2014very}, ResNet~\cite{he2016deep}, Inception Net~\cite{szegedy2015going}, SENet~\cite{hu2018squeeze}, ResNeXt~\cite{xie2017aggregated} and Xception~\cite{chollet2017xception} and efficiency including Mobile-net v1~\cite{howard2017mobilenets}, Mobile-net v2~\cite{howard2017mobilenets} and Efficient-net v2~\cite{Tan2021EfficientNetV2SM}.
With the success of Transformers~\cite{vaswani2017attention} in NLP such as BERT~\cite{devlin2018bert} and GPT~\cite{radford2015unsupervised}, researchers have begun to apply them towards solving the long range information aggregation problem in computer vision. ViT~\cite{dosovitskiy2020image}/DeiT~\cite{touvron2020training} are transformers that achieve better performance on ImageNet than CNN counterparts. Recently, several concurrent works explore integrating convolutions with transformers and achieve promising results. ConViT~\cite{d2021convit} explores soft convolutional inductive bias for enhancing DeiT. CeiT~\cite{yuan2021incorporating} directly incorporates CNNs into the Feedforward module of transformers to enhance the learned features. PVT~\cite{wang2021pyramid} proposes a pyramid vision transformer for efficient transfer to downstream tasks. Pure Transformer models such as ViT/DeiT however, require huge GPU memory and computation for detection~\cite{wang2021pyramid} and segmentation~\cite{zheng2020end} tasks, which need high resolution input. MLP-Mixer~\cite{tolstikhin2021mlpmixer} shows that simply performing transposed MLP followed by MLP can achieve near state-of-the-art performance. We propose \mbox{\sc{Container}}\xspace, a new visual backbone that provides a unified view of these different architectures and performs well across several vision tasks including ones that require a high resolution input.
\noindent \textbf{Transformer Variants.}
Vanilla Transformers are unable to scale to long sequences or high-resolution images due to the quadratic computation in self-attention. Several methods have been proposed to make Transformer computations more efficient for high resolution input. Reformer~\cite{kitaev2020reformer}, Clusterform~\cite{vyas2020fast}, Adaptive Clustering Transformer~\cite{zheng2020end} and Asymmetric Clustering~\cite{daras2020smyrf} propose to use Locality Sensitive Hashing to cluster keys or queries and reduce the quadratic computation to linear computation. Lightweight convolution~\cite{wu2019pay} explores convolutional architectures to replace Transformers but only studies applications in NLP. RNN Transformer~\cite{katharopoulos2020transformers} builds a connection between RNNs and Transformers, resulting in attention with linear computation. Linformer~\cite{wang2020linformer} changes the multiplication order of key, query, value into query, value, key by deleting the softmax normalization layer and achieves linear complexity. Performer~\cite{choromanski2020rethinking} uses Orthogonal Random Features to approximate full rank softmax attention. MLIN~\cite{gao2019multi}
performs interaction between latent encoded nodes, and its complexity is linear with respect to input length.
Bigbird~\cite{zaheer2020big} breaks the full rank attention into local, randomly selected and global attention, so the computation complexity becomes linear. Longformer~\cite{beltagy2020longformer} uses local attention windows to tackle the problem of massive GPU memory requirements for long sequences. MLP-Mixer~\cite{tolstikhin2021mlpmixer} is a pure MLP architecture for image recognition. In the unified formulation we provide, MLP-Mixer can be considered a single-head Transformer with a static affinity matrix.
MLP-Mixer is more computationally efficient than the vanilla Transformer because it does not need to compute the affinity matrix via key-query multiplication. Efficient Transformers mostly rely on approximate message passing, which degrades performance across tasks. Lightweight Convolution~\cite{wu2019pay}, Involution~\cite{li2021involution}, Synthesizer~\cite{tay2021synthesizer}, and MUSE~\cite{zhao2019muse} explored the relationship between depthwise convolution and the Transformer. Our \mbox{\sc{Container}}\xspace unification performs global and local information exchange simultaneously using a mixture affinity matrix, while \mbox{\sc{Container-Light}}\xspace switches off the dynamic affinity matrix for high resolution feature maps to reduce computation. Although switching off the dynamic affinity matrix slightly hinders classification performance, \mbox{\sc{Container-Light}}\xspace still generalizes effectively and efficiently to downstream tasks compared with popular backbones such as ViT and ResNet.
\noindent \textbf{Transformers for Vision.}
Transformers enable high degrees of parallelism and are able to capture long-range dependencies in the input. Thus Transformers have gradually surpassed other architectures such as CNNs~\cite{lecun1998gradient} and RNNs~\cite{hochreiter1997long} on image~\cite{dosovitskiy2020image,carion2020end,zhang2021rest}, audio~\cite{baevski2020wav2vec}, multi-modality~\cite{gao2019dynamic,geng2020character,geng2020dynamic}, and language understanding~\cite{devlin2018bert}. In computer vision, Non-local Neural Networks~\cite{wang2018non} have been proposed to capture long range interactions to compensate for the local information captured by CNNs, and have been used for object detection~\cite{hu2018relation} and semantic segmentation~\cite{fu2019dual,huang2019ccnet,zhu2019asymmetric,yuan2019object}. However, these methods use the Transformer as a refinement module instead of treating it as a first-class citizen. ViT~\cite{dosovitskiy2020image} introduces the first pure Transformer model into computer vision and surpasses CNNs with large scale pretraining on the non publicly available JFT dataset. DeiT~\cite{touvron2020training} trains ViT from scratch on ImageNet-1k and achieves better performance than CNN counterparts. DETR~\cite{carion2020end} uses a Transformer encoder and decoder architecture to design the first end-to-end object detection system. Taming Transformer~\cite{esser2020taming} uses a Vector Quantization~\cite{oord2017neural} GAN and GPT~\cite{radford2015unsupervised} for high quality high-resolution image generation. Motivated by the success of DETR on object detection, Transformers have been applied widely to tasks such as semantic segmentation~\cite{zheng2020rethinking}, pose estimation~\cite{yang2020transpose}, trajectory estimation~\cite{liu2021multimodal}, 3D representation learning, and self-supervised learning with MOCO v3~\cite{chen2021empirical} and DINO~\cite{caron2021emerging}.
ProTo~\cite{zhao2021proto} verifies the effectiveness of Transformers on reasoning tasks.
\section{Methods}
In this section we first provide a generalized view of neighborhood/context aggregation modules commonly employed in present neural networks. Then we revisit three major architectures -- Transformer~\cite{vaswani2017attention}, Depthwise Convolution~\cite{chollet2017xception} and the recently proposed MLP-Mixer~\cite{tolstikhin2021mlpmixer}, and show that they are special cases of our generalized view. We then present our \mbox{\sc{Container}}\xspace module in Sec~\ref{approach:container} and its efficient version -- \mbox{\sc{Container-Light}}\xspace in Sec~\ref{approach:container_light}.
\subsection{Contextual Aggregation for Vision}
\label{approach:context}
Consider an input image $X \in \mathbb{R}^{C \times H \times W}$, where $C$ and $H \times W$ denote the channel and spatial dimensions of the input image, respectively. The input image is first flattened to a sequence of tokens $\{X_i \in \mathbb{R}^C \vert i = 1, \ldots, N\}$, where $N = HW$ and input to the network. Vision networks typically stack multiple building blocks with residual connections \cite{he2016deep}, defined as
\begin{equation}
\mathbf{Y} = \mathcal{F}(\mathbf{X},\{\mathbf{W}_i\}) + \mathbf{X}.
\label{eq:res}
\end{equation}
Here, $\mathbf{X}$ and $\mathbf{Y}$ are the input and output vectors of the layers considered, and $\mathbf{W}_i$ represents the learnable parameters. $\mathcal{F}$ determines how information across $\mathbf{X}$ is aggregated to compute the feature at a specific location. We first define an affinity matrix $\mathcal{A} \in \mathbb{R}^{N \times N}$ that represents the neighborhood for contextual aggregation. Equation~\ref{eq:res} can be re-written as:
\begin{equation}
\mathbf{Y} = (\mathcal{A} \mathbf{V}) \mathbf{W}_1 + \mathbf{X}, \label{eq:single_affinity}
\end{equation}
where $\mathbf{V} \in \mathbb{R}^{N \times C}$ is a transformation of $\mathbf{X}$ obtained by a linear projection $\mathbf{V} = \mathbf{X} \mathbf{W}_2$. $\mathbf{W}_1$ and $\mathbf{W}_2$ are the learnable parameters.
$\mathcal{A}_{ij}$ is the affinity value between $X_i$ and $X_j$.
Multiplying the affinity matrix with $\mathbf{V}$ propagates information across features in accordance with the affinity values.
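As a concrete illustration, the single-head aggregation above can be sketched in a few lines of NumPy. This is a minimal sketch, not the released implementation; shapes follow the text ($N$ tokens of dimension $C$), and the identity matrices used in the final check are chosen only to make the expected output obvious.

```python
import numpy as np

def aggregate(X, A, W1, W2):
    """Single-head context aggregation: Y = (A V) W1 + X, with V = X W2.

    X:  (N, C) flattened input tokens
    A:  (N, N) affinity matrix (static or dynamically generated)
    W1: (C, C) output projection; W2: (C, C) value projection
    """
    V = X @ W2               # project tokens into value space
    return (A @ V) @ W1 + X  # propagate along affinities, project, residual

rng = np.random.default_rng(0)
N, C = 16, 8
X = rng.standard_normal((N, C))
A = np.eye(N)                    # identity affinity: no cross-token mixing
Y = aggregate(X, A, np.eye(C), np.eye(C))
assert np.allclose(Y, 2 * X)     # with identity A, W1, W2: Y = X + X
```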
The modeling capacity of such a context aggregation module can be increased by introducing multiple affinity matrices, allowing the network to have several pathways to contextual information across $\mathbf{X}$.
Let $\{\mathbf{V}_i \in \mathbb{R}^{N \times \frac{C}{M}} \vert i=1,\ldots,M\}$ be slices of $\mathbf{V}$, where $M$ is the number of affinity matrices, also referred to as the number of heads. The multi-head version of Equation~\ref{eq:single_affinity} is
\begin{equation}
\mathbf{Y} = \operatorname{Concat}(\mathcal{A}_1 \mathbf{V}_1, \ldots, \mathcal{A}_M \mathbf{V}_M) \mathbf{W}_2 + \mathbf{X},
\label{eq:multi_affinity}
\end{equation}
where $\mathcal{A}_m$ denotes the affinity matrix in each head. Different $\mathcal{A}_m$ can potentially capture different relationships within the feature space and thus increase the representation power of contextual aggregation compared with a single-head version.
Note that only spatial information is propagated during contextual aggregation using the affinity matrices; cross-channel information exchange does not occur within the affinity matrix multiplication, and that there is no non-linear activation function.
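The multi-head variant of Equation~\ref{eq:multi_affinity} can be sketched as below. This is a simplified illustration in which the per-head value projections are folded into the input (i.e. we take $\mathbf{V} = \mathbf{X}$); only the head slicing, per-head aggregation and concatenation are shown.

```python
import numpy as np

def multi_head_aggregate(X, affinities, W2):
    """Multi-head context aggregation, a sketch.

    X: (N, C); affinities: list of M (N, N) matrices, one per head;
    W2: (C, C) projection applied after concatenating heads.
    Each head aggregates its own C/M-dimensional slice of the tokens.
    """
    N, C = X.shape
    M = len(affinities)
    d = C // M
    heads = [affinities[m] @ X[:, m * d:(m + 1) * d] for m in range(M)]
    return np.concatenate(heads, axis=1) @ W2 + X

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 6))
A = [np.eye(10)] * 3                 # 3 heads, identity affinities
Y = multi_head_aggregate(X, A, np.eye(6))
assert np.allclose(Y, 2 * X)
```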
\subsection{The Transformer, Depthwise Convolution and MLP-Mixer}
\label{approach:threeblocks}
Transformer~\cite{vaswani2017attention}, depthwise convolution~\cite{kaiser2017depthwise} and the recently proposed MLP-Mixer~\cite{tolstikhin2021mlpmixer} are three distinct building blocks used in computer vision. Here, we show that they can be represented within the above context aggregation framework, by defining different types of affinity matrices.
\paragraph{Transformer.} In the self-attention mechanism in Transformers, the affinity matrix is modelled by the similarity between the projected query-key pairs. With $M$ heads, the affinity matrix in head $m$, $\mathcal{A}_m^{sa}$ can be written as
\begin{equation}
\mathcal{A}_m^{sa} = \operatorname{Softmax}({\mathbf{Q}_m \mathbf{K}_m^{T}} / {\sqrt{{C} / {M}}}),
\end{equation}
where $\mathbf{K}_m, \mathbf{Q}_m$ are the corresponding key, query in head $m$, respectively.
The affinity matrix in self-attention is dynamically generated and can capture instance level information. However, this introduces quadratic computational complexity with respect to the number of tokens, which makes it expensive for high resolution feature maps.
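A minimal NumPy sketch of this dynamic affinity for one head is given below; the projection matrices $W_q$, $W_k$ stand in for the learned query/key projections and are random here purely for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_affinity(X, Wq, Wk):
    """Dynamic affinity of one attention head: softmax(Q K^T / sqrt(d))."""
    Q, K = X @ Wq, X @ Wk
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d))

rng = np.random.default_rng(2)
X = rng.standard_normal((12, 8))
A = attention_affinity(X, rng.standard_normal((8, 4)), rng.standard_normal((8, 4)))
assert A.shape == (12, 12)
assert np.allclose(A.sum(axis=1), 1.0)   # each row is a normalized distribution
```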
\paragraph{Depthwise Convolution.}
The convolution operator fuses both spatial and channel information in parallel. This is different from the contextual aggregation block defined above. However, depthwise convolution~\cite{kaiser2017depthwise}, an extreme case of group convolution, performs spatially and channel-wise disentangled computation. Setting the number of heads in the contextual aggregation block equal to the channel size $C$, we can define the convolutional affinity matrix given a 1-d kernel ${Ker \in \mathbb{R}^{C \times 1 \times k}}$:
\begin{align}
\mathcal{A}_{mij}^{conv} = \left\{
\begin{array}{ll}
Ker[m, 0, \lvert i-j \rvert] \quad &\lvert i - j \rvert \leq k\\
0 \quad &\lvert i - j \rvert > k
\end{array}
\right., \label{eq:static}
\end{align}
where $\mathcal{A}_{mij}$ is the affinity value between $X_i$ and $X_j$ on head $m$. In contrast with the self-attention affinity matrix, whose values are conditioned on the input features, the affinity values for convolution are static (they do not depend on the input), sparse (they only involve local connections), and shared across the affinity matrix.
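The static, banded structure of this affinity matrix is easy to see in code. The sketch below builds the matrix for a single head from a 1-d kernel indexed by $\lvert i-j \rvert$, following the equation above; it is an illustration, not the library implementation.

```python
import numpy as np

def conv_affinity(kernel_row, N):
    """Static, sparse, shared affinity of one depthwise-conv head.

    kernel_row: 1-d kernel values indexed by |i - j| (length k + 1);
    the resulting (N, N) matrix is banded and input-independent.
    """
    k = len(kernel_row) - 1
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if abs(i - j) <= k:
                A[i, j] = kernel_row[abs(i - j)]
    return A

A = conv_affinity(np.array([0.5, 0.25]), N=5)   # k = 1: tri-diagonal band
assert np.count_nonzero(A) == 5 + 2 * 4          # diagonal + two off-diagonals
assert A[0, 0] == 0.5 and A[0, 1] == 0.25 and A[0, 2] == 0.0
```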
\paragraph{MLP-Mixer.}
The recently proposed MLP-Mixer~\cite{tolstikhin2021mlpmixer} does not rely on any convolution or self-attention operator. The core of MLP-Mixer is the transposed MLP operation,
which can be denoted as $\mathbf{X} = \mathbf{X} + (\mathbf{V}^T \mathbf{W}_{MLP})^T$. We can define the affinity matrix as
\begin{align}
\mathcal{A}^{mlp} = (\mathbf{W}_{MLP})^T, \label{eq:residual}
\end{align}
where $\mathbf{W}_{MLP}$ represents the learnable parameters. This simple equation shows that the transposed-MLP operator is a contextual aggregation operator on a single feature group with a dense affinity matrix. Compared with self-attention and depthwise convolution, the transposed-MLP affinity matrix is static and dense, and has no parameter sharing.
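The equivalence between the transposed-MLP update and context aggregation with a static, dense affinity can be checked numerically; for simplicity the value projection is taken to be the identity ($\mathbf{V} = \mathbf{X}$) in this sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
N, C = 6, 4
X = rng.standard_normal((N, C))
W_mlp = rng.standard_normal((N, N))

# Transposed-MLP update: X + (X^T W_mlp)^T ...
mixer_out = X + (X.T @ W_mlp).T

# ... equals context aggregation with the static dense affinity A = W_mlp^T
A_mlp = W_mlp.T
assert np.allclose(mixer_out, X + A_mlp @ X)
```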
The above simple unification reveals the similarities and differences between the Transformer, depthwise convolution and MLP-Mixer. Each of these building blocks can be obtained by formulating a different affinity matrix. This finding leads us to create a powerful and efficient building block for vision tasks -- the \mbox{\sc{Container}}\xspace.
\subsection{The \mbox{\sc{Container}}\xspace Block}
\label{approach:container}
As detailed in Sec~\ref{approach:threeblocks}, previous architectures have employed either static or dynamically generated affinity matrices -- each of which provides its own set of advantages and features. Our proposed building block, named \mbox{\sc{Container}}\xspace, combines both types of affinity matrices via learnable mixing parameters. The single head \mbox{\sc{Container}}\xspace is defined as:
\begin{align}
\mathbf{Y} &= ((\alpha \overbrace{ \mathcal{A}(\mathbf{X})}^{Dynamic} + \beta \overbrace{\mathcal{A}}^{Static})\mathbf{V}) \mathbf{W}_2 + \mathbf{X}
\end{align}
$\mathcal{A}(\mathbf{X})$ is dynamically generated from $\mathbf{X}$ while $\mathcal{A}$ is a static affinity matrix. We now present a few special cases of the \mbox{\sc{Container}}\xspace block. In the following, $\mathcal{L}$ denotes a learnable parameter.
\begin{itemize}[leftmargin=*]
\item $\alpha = 1$, $\beta = 0$, $\mathcal{A}(\mathbf{X})=\mathcal{A}^{sa}$: A vanilla Transformer block with self-attention (denoted $sa$).
\item $\alpha = 0$, $\beta = 1$, $M=C$, $\mathcal{A} = \mathcal{A}^{conv}$: A depthwise convolution block. In depthwise convolution, each channel has a different static affinity matrix. When $M \neq C$, the resultant block can be considered a Multi-head Depthwise Convolution block (MH-DW). MH-DW shares kernel weights.
\item $\alpha = 0$, $\beta = 1$, $M=1$, $\mathcal{A} = \mathcal{A}^{mlp}$: An MLP-Mixer block. When $M \neq 1$, we name the module Multi-head MLP (MH-MLP). MH-MLP splits the channels into $M$ groups and performs independent transposed MLPs to capture diverse static token relationships.
\item $\alpha = \mathcal{L}$, $\beta = \mathcal{L}$, $\mathcal{A}(\mathbf{X})=\mathcal{A}^{sa}$, $\mathcal{A} = \mathcal{A}^{mlp}$: This \mbox{\sc{Container}}\xspace block fuses dynamic and static information, with a static affinity that resembles the MLP-Mixer matrix. We call this block \mbox{\sc{Container-Pam}}\xspace (Pay Attention to MLP).
\item $\alpha = \mathcal{L}$, $\beta = \mathcal{L}$, $\mathcal{A}(\mathbf{X})=\mathcal{A}^{sa}$, $\mathcal{A} = \mathcal{A}^{conv}$: This \mbox{\sc{Container}}\xspace block fuses dynamic and static information, with a static affinity that resembles the depthwise convolution matrix. This static affinity matrix encodes a shift-invariant locality constraint, making it well suited to vision tasks. This is the default configuration used in our experiments.
\end{itemize}
The \mbox{\sc{Container}}\xspace\ block is easy to implement and can be readily swapped into an existing neural network. The above versions of \mbox{\sc{Container}}\xspace provide variations on the resulting architecture and its performance and exhibit different advantages and limitations. The computation cost of a \mbox{\sc{Container}}\xspace block is the same as a vanilla Transformer since the static and dynamic matrices are linearly combined.
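A single-head \mbox{\sc{Container}}\xspace block can be sketched as follows. This is a simplified illustration: the value projection is taken to be the identity, $\alpha$ and $\beta$ are passed as plain scalars rather than learnable parameters, and the final check uses the degenerate setting $\alpha=0$, $\beta=1$ with an identity static affinity so the expected output is obvious.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def container_block(X, Wq, Wk, W2, A_static, alpha, beta):
    """Single-head Container block: mix dynamic and static affinities.

    alpha, beta: scalar mixing coefficients (learnable in the real model);
    A_static:    (N, N) precomputed affinity (e.g. a conv-style band matrix).
    """
    Q, K = X @ Wq, X @ Wk
    A_dyn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))   # dynamic affinity
    return ((alpha * A_dyn + beta * A_static) @ X) @ W2 + X

rng = np.random.default_rng(4)
N, C = 8, 6
X = rng.standard_normal((N, C))
Wq, Wk = rng.standard_normal((C, C)), rng.standard_normal((C, C))
Y = container_block(X, Wq, Wk, np.eye(C), np.eye(N), alpha=0.0, beta=1.0)
assert np.allclose(Y, 2 * X)   # alpha=0, beta=1, identity static affinity
```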
\subsection{The \mbox{\sc{Container}}\xspace network architecture}
\label{approach:container_net}
We now present a base architecture used in our experiments. The unification of past works explained above allows us to easily compare self-attention, depthwise convolution, MLP and multiple variations of the \mbox{\sc{Container}}\xspace block, and we perform these comparisons using a consistent base architecture.
Motivated by networks in past works~\cite{he2016deep, wang2021pyramid}, our base architecture contains 4 stages. In contrast to ViT/DeiT which down-sample the image to a low resolution and keep this resolution constant, each stage in our architecture down-samples the image resolution gradually. Gradual down-sampling retains image details, which is important for downstream tasks such as segmentation and detection. Each of the 4 stages contains a cascade of blocks. Each block contains two sub-modules, the first to aggregate spatial information (named spatial aggregation module) and the second to fuse channel information (named feed-forward module). In this paper, the channel fusion module is fixed to a 2-layer MLP as proposed in \cite{vaswani2017attention}. Designing a better spatial aggregation module is the main focus of this paper.
The 4 stages contain 2, 3, 8 and 3 blocks respectively. Each stage uses patch embeddings which fuse spatial patches of size $p \times p$ into a single vector. For the 4 stages, the values of $p$ are 4, 4, 2 and 2, respectively.
The feature dimension within a stage remains constant -- and is set to 128, 256, 320, and 512 for the four stages. This base architecture augmented with the \mbox{\sc{Container}}\xspace block results in a similar parameter size as DeiT-S~\cite{touvron2020training}.
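The stage layout described above can be summarized with a short script that tracks the spatial resolution through the four patch-embedding steps. The variable names and the 224-pixel input are illustrative assumptions, not taken from the released code.

```python
# Hypothetical sketch of the 4-stage layout described in the text:
# (num_blocks, patch size p, channel dimension) per stage.
stages = [
    (2, 4, 128),
    (3, 4, 256),
    (8, 2, 320),
    (3, 2, 512),
]

H = W = 224                     # ImageNet-style input resolution (assumed)
for num_blocks, p, c in stages:
    H, W = H // p, W // p       # each patch embedding downsamples by p
    print(f"{num_blocks} blocks at {H}x{W}, dim {c}")
```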
\subsection{The \mbox{\sc{Container-Light}}\xspace network}
\label{approach:container_light}
We also present an efficient version, named \mbox{\sc{Container-Light}}\xspace, which uses the same basic architecture as \mbox{\sc{Container}}\xspace but switches off the dynamic affinity matrix in the first 3 stages. The absence of the computationally heavy dynamic attention in the early stages of computation helps the model scale efficiently to large image resolutions and achieve superior performance on downstream tasks such as detection and instance segmentation.
\vspace{-4mm}
\begin{align}
\mathcal{A}_m^{\mbox{\sc{Container-Light}}\xspace} = \left\{
\begin{array}{ll}
\mathcal{A}_m^{conv} \quad &Stage = 1, 2, 3\\
\alpha \mathcal{A}_m^{sa} + \beta \mathcal{A}_m^{conv} \quad &Stage = 4
\end{array}
\right., \label{eq:light}
\end{align}
where $\alpha$ and $\beta$ are learnable parameters. In stages 1, 2 and 3, \mbox{\sc{Container-Light}}\xspace switches off $\mathcal{A}_m^{sa}$.
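The stage-dependent switching can be sketched directly from the equation above; $\alpha$ and $\beta$ are learnable in the real model, whereas here they are plain arguments for illustration.

```python
import numpy as np

def container_light_affinity(A_conv, A_sa, stage, alpha=0.5, beta=0.5):
    """Stage-dependent affinity: static only in stages 1-3,
    a mixture of dynamic and static affinities in stage 4."""
    if stage in (1, 2, 3):
        return A_conv                       # dynamic branch switched off
    return alpha * A_sa + beta * A_conv     # stage 4: full Container mixing

A_conv = np.eye(4)
A_sa = np.full((4, 4), 0.25)
assert np.allclose(container_light_affinity(A_conv, A_sa, stage=2), A_conv)
mixed = container_light_affinity(A_conv, A_sa, stage=4)
assert np.allclose(mixed, 0.5 * A_sa + 0.5 * A_conv)
```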
\section{Methods}
\subsection{Overview}
We first revisit the definitions of the Transformer, depthwise convolution and MLP-Mixer in Section~\ref{section:revist}. Then, we provide a unified view of the Transformer with depthwise convolution in Section~\ref{section:depthwise} and with MLP-Mixer in Section~\ref{section:MLP} by linking them through the affinity matrix, showing that these seemingly different designs explore similar architectures. Built on this unified view, Section~\ref{section:pacnet} proposes \mbox{\sc{Container}}\xspace, a fast-converging and accurate network whose building block uses a mixture of dynamic and static affinity matrices. Finally, Section~\ref{section:paclight} presents \mbox{\sc{Container-Light}}\xspace, a simple recipe for efficient backbone design that combines the benefits of local and global information aggregation with lightweight computation on high resolution input.
\subsection{Revisit Transformer, Depthwise Convolution and MLP-Mixer}
\label{section:revist}
The Transformer consists of a multi-head self-attention module (MHSA) followed by a feed-forward module (FFN). For MHSA, given a flattened visual feature $x \in \mathbb{R}^{(HW) \times C}$, where $HW$ is the number of spatial positions and $C$ is the number of channels, $x$ is projected into $Q \in \mathbb{R}^{(HW) \times C}$, $K \in \mathbb{R}^{(HW) \times C}$ and $V \in \mathbb{R}^{(HW) \times C}$ using linear transformations. $Q$, $K$ and $V$ stand for the Query, Key and Value features. Key and Query are then multiplied and normalized by a softmax to generate an affinity matrix $A \in \mathbb{R}^{(HW) \times (HW)}$, where $A_{ij}$ represents the attention weight between the $i$-th Query $Q_i$ and the $j$-th Key $K_j$. The dot product is given by:
\begin{align}
A &= \operatorname{Softmax}(\frac{QK^{T}}{\sqrt{d}}), \label{eq:affinity}
\end{align}
where $d$ is the number of channels; $QK^T$ is divided by $\sqrt{d}$ to produce a better-behaved attention distribution. The affinity matrix in the Transformer is dynamically generated and reveals the relationship between each query and key. After obtaining the affinity matrix $A$, it is multiplied with $V$ to generate the updated feature for each query:
\begin{align}
x &= x + WAV, \label{eq:residual}
\end{align}
The self-attention discussed above is single-head self-attention. To increase attention diversity, MHSA splits $Q$, $K$ and $V$ into different groups, performs multiple self-attention operations in parallel, and concatenates the updated features of each head. For multi-head self-attention, the dynamically generated affinity matrix is $A \in \mathbb{R}^{h \times (HW) \times (HW)}$; compared with the single-head version, the affinity matrix is a three dimensional tensor whose first dimension $h$ is the number of heads. $A_i \in \mathbb{R}^{(HW) \times (HW)}$ denotes the dynamically generated affinity matrix of the $i$-th head. MHSA is given by the following equations:
\begin{align}
A_i &= \operatorname{Softmax}(\frac{Q_i K_i^{T}}{\sqrt{d_h}}), \\
x &= x + W \operatorname{Concat}(A_1 V_1, \ldots, A_h V_h)
\end{align}
Convolution is an important inductive bias in computer vision which uses a shift-invariant kernel to aggregate local information. Thanks to locality and weight sharing, CNNs have achieved success in many computer vision and NLP applications with high accuracy and good computational efficiency. In this work, we mainly focus on a special form of convolution, namely depthwise convolution. Naive convolution fuses information across both the local spatial dimension and the channel dimension, which incurs a large computational cost. Group convolution splits the channels into groups and applies multiple convolution kernels in parallel to reduce computation and parameters. Depthwise convolution can be seen as an extreme form of group convolution whose group size equals the input channel dimension. Given input $x \in \mathbb{R}^{(HW) \times C}$, we reshape $x$ into $x_r \in \mathbb{R}^{C \times H \times W}$ so that $x_r$ retains neighbourhood information. Given $x_r$, convolution uses a shift-invariant kernel $K \in \mathbb{R}^{C_{in} \times C_{out} \times k \times k}$ to convolve the feature and output a new feature map which contains local information. Group convolution uses a kernel $K \in \mathbb{R}^{C_{in} \times \frac{C_{out}}{G} \times k \times k}$, where $G$ is the number of groups. Depthwise convolution uses a kernel $K \in \mathbb{R}^{C_{in} \times 1 \times k \times k}$ to convolve with the input feature $x_r$. In this discussion, we set the padding to $\frac{k-1}{2}$ and the stride to 1 by default, so that the input and output feature maps share the same spatial resolution, and we connect the Transformer and depthwise convolution under this default setting. The convolution operator can be written as the following equation:
\begin{align}
x_r &= x_r + K \circledast x_r, \label{eq:conv1}
\end{align}
Replacing naive convolution with group convolution or depthwise convolution lets the network learn better features while reducing the computational burden.
MLP-Mixer proposes to use a transposed MLP for spatial information fusion. We omit discussion of the second MLP layer as it is the same as the FFN layer in the Transformer. The transposed MLP can be written as the following equation:
\begin{align}
x &= x + (x^T W_{MLP})^T, \label{eq:residual}
\end{align}
where $T$ denotes the transpose operator and $W_{MLP} \in \mathbb{R}^{HW \times HW}$ is the weight of the MLP layer.
\subsection{Unifying View for Transformer and Depthwise Convolution}
\label{section:depthwise}
Since the Transformer and convolution capture long range and short range information respectively, how to combine them for efficient learning has been a hot topic. Concurrent to our work, many studies naively cascade convolution and the Transformer and observe performance gains. Different from them, we provide a unified view of the Transformer and convolution and write them as a single equation.
Starting from depthwise convolution with default padding and stride described in previous section, we rewrite equation \ref{eq:conv1} into the following equation:
\begin{align}
x &= x + \operatorname{Concat}(B_1 x_1, \ldots, B_C x_C), \label{eq:conv2}
\end{align}
We describe the difference between Equations~\ref{eq:conv1} and~\ref{eq:conv2} below. The input and output tensors in Equations~\ref{eq:conv1} and~\ref{eq:conv2} are $x_r \in \mathbb{R}^{C \times H \times W}$ and $x \in \mathbb{R}^{(HW) \times C}$, respectively. Equation~\ref{eq:conv1} uses the convolution operator, while Equation~\ref{eq:conv2} uses matrix multiplication. $B \in \mathbb{R}^{C_{in} \times (HW) \times (HW)}$, where $C_{in}$ stands for the input channel dimension as in the original convolution definition. The matrix $B$ in Equation~\ref{eq:conv2} can be seen as an affinity matrix as defined in Equation~\ref{eq:affinity}: $B_{cij}$ is the weight with which information from node $j$ is propagated to node $i$ in channel $c$. Here $i_h, i_w, j_h, j_w$ denote the two dimensional indices of $i$ and $j$.
The weights of the affinity matrix $B$ can be written as:
\begin{align}
B_{cij} = \left\{
\begin{array}{ll}
K[c, 0, \lvert i_h-j_h \rvert, \lvert i_w-j_w \rvert] \\ \quad \lvert i_h-j_h \rvert \leq \frac{k-1}{2} \ \text{and} \ \lvert i_w-j_w \rvert \leq \frac{k-1}{2}\\
0 \\ \quad \lvert i_h-j_h \rvert > \frac{k-1}{2} \ \text{or} \ \lvert i_w-j_w \rvert > \frac{k-1}{2}
\end{array}
\right., \label{eq:static}
\end{align}
Equation~\ref{eq:static} turns the depthwise convolution operator with kernel $K$ into a pure matrix multiplication, as denoted in Equation~\ref{eq:conv2}, with affinity matrix $B$. Motivated by the design of the Transformer, we add two linear projections $V$ and $W$, resulting in the following equation:
\begin{align}
x &= x + WBVx, \label{eq:conv3}
\end{align}
Given an input feature $x$, a linear layer first transforms $x$ into the Value; the matrix $B$ then performs local information aggregation, after which $W$ applies another linear transformation to project the feature into a new space.
From Equation~\ref{eq:conv3} we see that depthwise convolution can be viewed as a special form of the affinity matrix in Equation~\ref{eq:residual}. Unlike the Transformer, which dynamically generates the affinity matrix from keys and queries, depthwise convolution corresponds to a statically generated affinity matrix in which only keys belonging to the neighbourhood of the query receive an attention weight; this weight is a predefined learnable parameter shared across key-query pairs with the same relative position.
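This equivalence can be verified numerically in one spatial dimension: a per-channel convolution with padding $\frac{k-1}{2}$ and stride 1 produces exactly the same output as multiplying by the banded matrix $B$. The sketch below uses the cross-correlation indexing standard in deep learning libraries.

```python
import numpy as np

rng = np.random.default_rng(5)
N, k = 9, 3                      # sequence length and kernel size
x = rng.standard_normal(N)
ker = rng.standard_normal(k)

# direct convolution (cross-correlation) with zero padding (k-1)/2, stride 1
pad = (k - 1) // 2
xp = np.concatenate([np.zeros(pad), x, np.zeros(pad)])
conv = np.array([np.dot(ker, xp[i:i + k]) for i in range(N)])

# equivalent static affinity matrix: B[i, j] = ker[j - i + pad] inside the band
B = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if abs(i - j) <= pad:
            B[i, j] = ker[j - i + pad]

assert np.allclose(conv, B @ x)   # convolution == banded matrix multiplication
```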
\subsection{Unifying View for Transformer and MLP-Mixer}
\label{section:MLP}
The Transformer uses a dynamically generated affinity matrix for information aggregation, while MLP-Mixer uses a transposed MLP for spatial information aggregation. These seemingly different approaches can be unified through the affinity matrix. We start with the single-head Transformer, shown below:
\begin{align}
x &= x + WAV, \label{eq:residual}
\end{align}
where $A \in \mathbb{R}^{HW \times HW}$.
We add two linear transformations to MLP-Mixer, which project the input $x$ into the Value, then perform the transposed MLP, and then fuse the result, as denoted below:
\begin{align}
x &= x + W(V^T W_{MLP})^T ,\\
x &= x + WW_{MLP}^T V , \label{eq:mixer}
\end{align}
By comparing Equations~\ref{eq:residual} and~\ref{eq:mixer}, we find that both $W_{MLP}$ and the dynamic affinity matrix $A$ can be seen as information aggregation modules: for the $i$-th query and $j$-th key, the aggregation weights are given by the corresponding entries of $W_{MLP}$ and $A$. Thus the transposed MLP in MLP-Mixer can be seen as a Transformer with a static affinity matrix and a single head of dimension $C$. Motivated by the multi-head Transformer design, we augment MLP-Mixer with a multi-head mechanism and name it Multi-head MLP-Mixer. Another difference is that the Transformer performs softmax normalization over the affinity matrix while MLP-Mixer does not. In the experiments section, we show the effects of softmax normalization and the multi-head mechanism on MLP-Mixer. From the analysis in the previous section, both MLP-Mixer and depthwise convolution use static affinity matrices for information aggregation; depthwise convolution shares attention weights across positions with the same relative offset, while MLP-Mixer has no weight sharing mechanism.
The weight matrix of the transposed MLP has shape $W_{MLP} \in \mathbb{R}^{HW \times HW}$. For high resolution images, an $HW \times HW$ matrix contains a huge number of parameters and causes over-fitting. We therefore decompose $W_{MLP}$ into two smaller matrices $W_{MLP1} \in \mathbb{R}^{HW \times \frac{HW}{Down}}$ and $W_{MLP2} \in \mathbb{R}^{\frac{HW}{Down}\times HW}$. To increase the representation power, we add a $\operatorname{GELU}$ non-linearity, which is popular in Transformer architectures. We denote our new MLP-Mixer below:
\begin{align}
x &= x + WW_{MLP1} \operatorname{GELU}(W_{MLP2}V) , \label{eq:mixer}
\end{align}
Borrowing the multi-head nomenclature of the Transformer, we call this Transformer-motivated MLP-Mixer the Multi-head MLP-Mixer. Given input $x$, it is split into $n$ heads. $W_{MLP}$ is not shared across heads. Each head multiplies its value vector with its affinity matrix, and the transformed features of the $n$ heads are concatenated.
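The resulting Multi-head MLP-Mixer update, with decomposed per-head weights and a GELU non-linearity, can be sketched as below. The outer projection $W$ is dropped (identity) for brevity, and the common tanh approximation of GELU is used.

```python
import numpy as np

def gelu(z):
    """Tanh approximation of the GELU non-linearity."""
    return 0.5 * z * (1 + np.tanh(np.sqrt(2 / np.pi) * (z + 0.044715 * z**3)))

def multi_head_mixer(V, W1_list, W2_list):
    """Sketch of the Multi-head MLP-Mixer update:
    per-head V + W_MLP1 GELU(W_MLP2 V), heads concatenated along channels.
    """
    N, C = V.shape
    M = len(W1_list)
    d = C // M
    heads = [W1_list[m] @ gelu(W2_list[m] @ V[:, m * d:(m + 1) * d])
             for m in range(M)]
    return V + np.concatenate(heads, axis=1)

rng = np.random.default_rng(6)
N, C, M, D = 8, 6, 2, 2          # D: down-projection factor for HW
V = rng.standard_normal((N, C))
W1 = [rng.standard_normal((N, N // D)) for _ in range(M)]   # per-head, not shared
W2 = [rng.standard_normal((N // D, N)) for _ in range(M)]
out = multi_head_mixer(V, W1, W2)
assert out.shape == (N, C)
```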
\subsection{Pseudo-code Implementation from the Perspective of Affinity Matrix Generation}
We show pseudo-code implementations of depthwise convolution, the transformer and MLP-Mixer under our unified view. We define $x \in \mathbb{R}^{HW \times C}$ to be the input feature. $x$ is transformed into $K,Q,V \in \mathbb{R}^{HW \times C}$ using linear transformations. Then $K,Q,V$ are split into $M$ heads, denoted by $K_m,Q_m,V_m \in \mathbb{R}^{HW \times C/M}$. Different algorithms generate different affinity matrices $A \in \mathbb{R}^{M \times HW \times HW}$, in which $M$ indexes the head. Features in each head communicate with each other using the corresponding affinity matrix $A_m$, and are then concatenated and fused with a linear transformation layer.
We illustrate the affinity matrix generators in the algorithm below:
\begin{algorithm}[H]
\SetAlgoLined
\KwResult{Affinity matrix $A$}
A = Parameter(M, HW, HW); $V_m \in \mathbb{R}^{HW \times C/M}$ \tcp*{Shared for all}
$K_m,Q_m \in \mathbb{R}^{HW \times C/M}$ \tcp*{Transformer}
$Kernel \in \mathbb{R}^{M \times 1 \times k \times k}$ \tcp*{Depthwise Convolution}
$W_{MLP1} \in \mathbb{R}^{M \times HW \times \frac{HW}{D}}$, $W_{MLP2} \in \mathbb{R}^{M \times \frac{HW}{D} \times HW}$ \tcp*{MLP-Based Methods}
\For{m in range(M)}{
\For{i in range(HW)}{
\For{j in range(HW)}{
\If{transformer}{
$A[m, i, j] = Q[m, i]^T K[m, j]$ \;
}
\If{convolution}{
$i_h, i_w, j_h, j_w = \lfloor {\frac{i}{H}} \rfloor,\ i - i_h H,\ \lfloor {\frac{j}{H}} \rfloor,\ j - j_h H$ \;
\eIf{$|i_h-j_h| \leq \frac{k-1}{2}$ \textbf{and} $|i_w-j_w| \leq \frac{k-1}{2}$}
{
$A[m, i, j] = Kernel[m, 0, i_h - j_h + \frac{k-1}{2}, i_w - j_w + \frac{k-1}{2}]$\;
}{
$A[m, i, j] = 0$\;
}
}
\If{mlp}{
$A[m, i, j] = (W_{MLP1}W_{MLP2})^T[m, i, j]$\;
}
}
}
}
\If{transformer}{
$A = \operatorname{Softmax}(A)$ \;
}
\caption{Affinity matrix generator. Transformer, depth-wise convolution and MLP-based methods differ only in how they generate the affinity matrix}
\end{algorithm}
After obtaining the affinity matrix $A$, each head performs its feature update according to:
\begin{align}
U_m &= A_m V_m , \label{eq:update}
\end{align}
The updated features are concatenated and fused using a linear transformation layer:
\begin{align}
x &= x + W\operatorname{Concat}(U_1, \ldots, U_M)
\end{align}
When the number of heads $M$ equals $C_{in}$ and we add the relative, shared, static affinity inductive bias, we recover the common depthwise convolution with kernel size $k$, padding $\lfloor \frac{k-1}{2} \rfloor$ and stride 1. When the number of heads equals 1, we add a GELU non-linearity between the two MLPs during the forward computation and use a static affinity matrix, we arrive at MLP-Mixer.
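The generator above can be sketched as a naive numpy reference (hypothetical names; we index the flattened grid row-major with width $W$ and shift the centered kernel offset into the $0..k{-}1$ range):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def affinity(mode, M, H, W, Q=None, K=None, kernel=None, W1=None, W2=None, k=3):
    """Naive O((HW)^2) reference for the three affinity generators (sketch)."""
    HW = H * W
    A = np.zeros((M, HW, HW))
    for m in range(M):
        for i in range(HW):
            for j in range(HW):
                if mode == "transformer":
                    A[m, i, j] = Q[m, i] @ K[m, j]
                elif mode == "convolution":
                    ih, iw, jh, jw = i // W, i % W, j // W, j % W
                    if abs(ih - jh) <= (k - 1) // 2 and abs(iw - jw) <= (k - 1) // 2:
                        # shift the centered offset into 0..k-1 kernel indices
                        A[m, i, j] = kernel[m, 0, ih - jh + (k - 1) // 2,
                                            iw - jw + (k - 1) // 2]
                elif mode == "mlp":
                    A[m, i, j] = (W1[m] @ W2[m]).T[i, j]
    if mode == "transformer":
        A = softmax(A)
    return A

M, H, W, C, k = 2, 4, 4, 8, 3
rng = np.random.default_rng(0)
Q = rng.standard_normal((M, H * W, C // M))
Kmat = rng.standard_normal((M, H * W, C // M))
A_sa = affinity("transformer", M, H, W, Q=Q, K=Kmat)
A_conv = affinity("convolution", M, H, W, kernel=rng.standard_normal((M, 1, k, k)), k=k)
```

The transformer affinity is row-stochastic after the softmax, while the convolutional affinity is sparse: entries for grid positions farther apart than $\frac{k-1}{2}$ are zero.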
\subsection{Template architecture}
\label{section:template}
Motivated by the network design in previous work \cite{ResNet, PVT Net}, our template architecture contains 4 blocks. Different from ViT/DeiT, which down-sample the image to a low resolution and keep it constant, each block in our template architecture down-samples the image resolution gradually. Gradual down-sampling keeps image details, which is important for downstream tasks like segmentation and object detection. Each block is a cascade of basic modules. Each module contains two sub-modules: the first propagates spatial information, so we call it the spatial propagation module; the second propagates channel information, so we call it the channel propagation module. In this paper, the channel propagation module is fixed to a 2-layer MLP, as proposed in the Transformer \cite{vaswani2017attention}. Designing a better spatial propagation module is the main focus of this paper. Given an image $I$, we extract $4 \times 4$ patches and pass them through block 1 for $B1$ layers, then extract $2 \times 2$ patches and pass through block 2 for $B2$ layers, then extract $2 \times 2$ patches and pass through block 3 for $B3$ layers, and finally extract $2 \times 2$ patches and pass through block 4 for $B4$ layers. The feature dimension within each block is fixed; the input feature dimensions of the four blocks are 128, 256, 320 and 512. For \mbox{\sc{Container}}\xspace, the four blocks contain 2, 3, 8 and 3 layers, resulting in a network with 22 million parameters. \mbox{\sc{Container-Light}}\xspace adopts the same architecture as \mbox{\sc{Container}}\xspace and only turns off the dynamic affinity matrix in the first 3 stages, so those stages can be efficiently implemented with the depthwise convolution operator offered by modern deep learning frameworks.
\subsection{\mbox{\sc{Container}}\xspace}
Motivated by the unified view presented in the previous section, we run a transformer and a depthwise convolution in parallel to aggregate local and global features, resulting in the following equation:
\begin{align}
x &= x + W_cBV_cx + W_sAV_sx,
\end{align}
$W_c, V_c$ are the weights of the linear projections in the convolution path, while $W_s, V_s$ represent the weights of the linear projections in the self-attention path. By sharing the weights of the linear projections, we can merge the previous equation into the compact form below:
\begin{align}
x &= x + W(B + A)Vx,
\end{align}
$B$ and $A$ stand for the affinity matrices generated from the depthwise convolution and the self-attention key-query product, respectively. $B + A$ can be seen as a new affinity matrix composed of the instance-level, dynamically predicted relationship $A$ and the dataset-level, shared relationship $B$. We then write the affinity matrix $A$ in key-query multiplication form:
\begin{align}
x &= x + W(B + \operatorname{Softmax}(\frac{QK^{T}}{\sqrt{d}}))Vx,
\end{align}
Motivated by linear transformers, which remove the softmax normalization, we remove the softmax normalization of the key-query product and add a new softmax after the mixture of the static and the dynamically generated affinity matrices, resulting in the following equation:
\begin{align}
x &= x + W\operatorname{Softmax}(\frac{QK^{T}}{\sqrt{d}} + B)Vx,
\end{align}
Motivated by the dynamic gates proposed in LSTMs, we use simple learnable parameters $\alpha$ and $\beta$ to differentiably mix the long-range and short-range terms. We name the following equation the unifying equation:
\begin{align}
x &= x + W\operatorname{Softmax}(\alpha \frac{QK^{T}}{\sqrt{d}} + \beta B)Vx,
\end{align}
The unifying equation defines a basic module that can simultaneously learn long-range and short-range information in one operator and learns the mixture factors $\alpha$ and $\beta$ in a data-driven manner. The Unifying Module can be a simple drop-in module inserted into a network; its input and output have the same dimension.
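A minimal single-head numpy sketch of the unifying equation (names, shapes and scales are hypothetical; a real implementation would use framework modules and multiple heads):

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def unify_block(x, Wq, Wk, Wv, Wo, B, alpha, beta):
    """x + Wo softmax(alpha * Q K^T / sqrt(d) + beta * B) V, single head."""
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    d = Q.shape[-1]
    A = softmax(alpha * (Q @ K.T) / np.sqrt(d) + beta * B)
    return x + (A @ V) @ Wo

N, C = 16, 8
rng = np.random.default_rng(1)
x = rng.standard_normal((N, C))
Wq, Wk, Wv, Wo = [0.1 * rng.standard_normal((C, C)) for _ in range(4)]
B = rng.standard_normal((N, N))          # static affinity, e.g. from a conv kernel
y_mixed = unify_block(x, Wq, Wk, Wv, Wo, B, alpha=0.5, beta=0.5)
y_dyn = unify_block(x, Wq, Wk, Wv, Wo, B, alpha=1.0, beta=0.0)
y_dyn0 = unify_block(x, Wq, Wk, Wv, Wo, np.zeros((N, N)), alpha=1.0, beta=0.0)
```

Setting $\beta = 0$ makes the static matrix $B$ irrelevant, recovering plain self-attention; setting $\alpha = 0$ removes the input-dependent term.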
\subsection{\mbox{\sc{Container}}\xspace}
\label{section:pacnet}
In this section, we start from DeiT and replace the MHSA module with the Unifying Module proposed in the previous section. DeiT extracts non-overlapping $16 \times 16$ patches at the beginning and then stacks multiple transformer layers for feature extraction. Down-sampling the image 16 times at the beginning can achieve high accuracy; however, the loss of fine details is detrimental for downstream tasks that require high-resolution input, such as object detection and semantic segmentation. Motivated by previous hierarchical designs in image processing, instead of down-sampling the image at the beginning and keeping the feature map resolution constant, we progressively down-sample the features, resulting in a pyramid of feature maps. We adopt a four-block approach: down-sampling by 4 in block 1, by 8 in block 2, by 16 in block 3 and by 32 in block 4. Besides, we replace all LayerNorm layers with BatchNorm, which is common practice in computer vision tasks. Finally, we replace the MHSA module with the Unifying Module, resulting in UnifyNet. UnifyNet can perform global and local information aggregation simultaneously and learn the mixture of local and global features from data.
\subsection{\mbox{\sc{Container-Light}}\xspace}
\label{section:paclight}
UnifyNet has quadratic computational complexity, similar to the original transformer, and thus needs unaffordable computation and memory for high-resolution downstream tasks like object detection and semantic segmentation. To tackle this, we provide a simple modification that makes UnifyNet better suited for downstream tasks. We observe that high-resolution feature maps usually focus on local information, while attention on low-resolution feature maps performs global reasoning. Thus, for the high-resolution feature maps in blocks 1, 2 and 3, we set $\alpha$ to 0 and $\beta$ to 1, while for block 4 we set $\alpha$ to 1 and $\beta$ to 0. By doing this, we observe a significant decrease in computation at the cost of a slight decrease in classification accuracy and slower convergence.
\section{Methodology}
In this section, we first propose a building block -- contextual aggregation with multi-head attention -- for deep neural networks. Then, we revisit three major architectures -- Transformer~\cite{}, depthwise convolution~\cite{} and the recently proposed MLP-Mixer~\cite{} -- and show that these architectures can be unified and represented as special cases of our proposed building block. In Sec.~\ref{approch:container}, we propose a variation of our building block that converges fast and achieves high accuracy by mixing self-attention and depthwise convolution. In Sec.~\ref{approch:base_container_light}, we present the base architecture and an efficient, lighter version of our building block, which achieves state-of-the-art performance on a variety of benchmarks.
\subsection{Contextual Aggregation for Vision}
\label{approch:context}
Consider an input image $X \in \mathbb{R}^{C \times H \times W}$, where $C$ and $H \times W$ denote the channel and spatial dimension of the input image, respectively. The input image is first flattened to a sequence of tokens $\{X_i \in \mathbb{R}^C \vert i = 1, \ldots, N\}$, where $N = HW$. Existing vision networks stack multiple building blocks with residual connections \cite{he2016deep}, which is defined as
\begin{equation}
\mathbf{Y} = \mathcal{F}(\mathbf{X},\{\mathbf{W}_i\}) + \mathbf{X}.
\end{equation}
Here, $\mathbf{X}$ and $\mathbf{Y}$ are the input and output vectors of the layers considered, and $\mathbf{W}_i$ are the learnable parameters. The core of $\mathcal{F}$ is how to aggregate contextual information into the current feature. We first define an affinity matrix $\mathcal{A} \in \mathbb{R}^{N \times N}$ that represents how contextual features are aggregated into each feature. Thus, the residual block above can be written as:
\begin{equation}
\mathbf{Y} = (\mathcal{A} \mathbf{V}) \mathbf{W}_1 + \mathbf{X}, \label{eq:single_affinity}
\end{equation}
where $\mathbf{V} \in \mathbb{R}^{N \times C}$ is transformed from $\mathbf{X}$ with linear projection $\mathbf{V} = \mathbf{X} \mathbf{W}_2$. $\mathbf{W}_1$ and $\mathbf{W}_2$ are the learnable parameters.
Here, $\mathcal{A}_{ij}$ is the affinity value between $X_i$ and $X_j$.
Multiplying the affinity matrix with $\mathbf{V}$ propagates information from all features according to the affinity values.
Multi-head attention \cite{vaswani2017attention} is widely used in transformer-based architectures \cite{devlin2018bert}. It allows the model to jointly attend to information from different representation subspaces at different positions. Following \cite{vaswani2017attention}, we can learn multiple affinity matrices to capture different relationships. Given the chunked features $\{\mathbf{V}^i \in \mathbb{R}^{N \times \frac{C}{M}} \vert i=1,\ldots,M\}$, where $M$ is the number of heads, the multi-head version of Eq.~\ref{eq:single_affinity} is
\begin{equation}
\mathbf{Y} = \operatorname{Concat}(A_1 \mathbf{V}_1, \ldots, A_M \mathbf{V}_M) \mathbf{W}_1 + \mathbf{X} \label{eq:multi_affinity}
\end{equation}
where $A_m$ stands for the affinity matrix of head $m$. Features in each head perform contextual aggregation according to the head-specific affinity matrix $A_m$. Different $A_m$ can capture the multiple relationships that exist in feature space, increasing the representation power of contextual aggregation compared with the single-head version.
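A numpy sketch of this multi-head aggregation (hypothetical helper; the value projection is folded into $X$ for brevity):

```python
import numpy as np

def multi_head_aggregate(X, A, W_out):
    """Y = Concat(A_1 V_1, ..., A_M V_M) W_out + X, with V_m a channel chunk of X."""
    M = A.shape[0]
    N, C = X.shape
    V = X.reshape(N, M, C // M).transpose(1, 0, 2)   # split channels into M heads
    U = A @ V                                        # (M, N, C/M) per-head aggregation
    U = U.transpose(1, 0, 2).reshape(N, C)           # concatenate heads
    return U @ W_out + X

N, C, M = 16, 8, 2
rng = np.random.default_rng(0)
X = rng.standard_normal((N, C))
A = rng.standard_normal((M, N, N))
W_out = 0.1 * rng.standard_normal((C, C))
Y = multi_head_aggregate(X, A, W_out)
eye = np.stack([np.eye(N)] * M)                      # identity affinity per head
Y_id = multi_head_aggregate(X, eye, W_out)
```

With identity affinities the aggregation is a no-op, so the block reduces to the residual projection $X W_{out} + X$.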
Note that only spatial information is propagated; no cross-channel information exchange happens during contextual aggregation with the affinity matrix. Besides, motivated by the self-attention design, there is no non-linear activation function inside contextual aggregation; all non-linear activations are placed in the feed-forward module.
In the following section, we show that classical operators such as self-attention, depthwise convolution and the recently proposed MLP-Mixer share a similar computation stack; the main difference between them is how the affinity matrix is generated.
\subsection{Connecting Self-Attention, Depthwise Convolution and MLP-Mixer}
Self-attention \cite{vaswani2017attention}, depthwise convolution \cite{kaiser2017depthwise} and the recently proposed MLP-Mixer \cite{tolstikhin2021mlpmixer} are three distinct building blocks that are (or have the potential to be) widely used in the vision community. Here, we show that all three can be represented by our proposed contextual aggregation block; the only difference lies in how their affinity matrices are defined.
\paragraph{Self-Attention.} In the self-attention mechanism, the affinity matrix is modelled by the similarity between the projected query-key pairs. With $M$ heads, the affinity matrix in head $m$, $\mathbf{A}_m^{sa}$, can be written as
\begin{equation}
\mathbf{A}_m^{sa} = \operatorname{Softmax}({\mathbf{Q}_m \mathbf{K}_m^{T}} / {\sqrt{{C} / {M}}}),
\end{equation}
where $\mathbf{K}_m, \mathbf{Q}_m$ are the corresponding key, query in head $m$, respectively.
The affinity matrix in self-attention is dynamically generated and can capture instance-level information. However, this introduces quadratic computational complexity, which becomes very expensive for high-resolution feature maps.
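A short numpy sketch of the self-attention affinity (hypothetical helper name); the $(M, N, N)$ score tensor is what makes the cost quadratic in the number of tokens:

```python
import numpy as np

def sa_affinity(Q, K):
    """A^{sa}_m = softmax(Q_m K_m^T / sqrt(C/M)), one (N, N) matrix per head."""
    d = Q.shape[-1]                              # head dimension C/M
    S = Q @ K.transpose(0, 2, 1) / np.sqrt(d)    # (M, N, N): quadratic in N
    e = np.exp(S - S.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

M, N, d = 2, 16, 4
rng = np.random.default_rng(0)
A = sa_affinity(rng.standard_normal((M, N, d)), rng.standard_normal((M, N, d)))
```

Each row of each head's matrix is a probability distribution over all $N$ tokens.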
\paragraph{Depthwise Convolution.}
The convolution operator fuses spatial and channel information in parallel, which differs from the contextual aggregation block defined above. However, depthwise convolution \cite{kaiser2017depthwise}, an extreme version of group convolution, performs a disentangled convolution on each channel. Once the number of heads in the contextual aggregation block equals the channel size $C$, we can define the convolutional affinity matrix given the 1-d kernel ${Ker \in \mathbb{R}^{C \times 1 \times k}}$:
\begin{align}
\mathbf{A}_{mij}^{conv} = \left\{
\begin{array}{ll}
Ker[m, 0, i - j + \lfloor \frac{k}{2} \rfloor] \quad &\lvert i - j \rvert \leq \lfloor \frac{k}{2} \rfloor\\
0 \quad &\lvert i - j \rvert > \lfloor \frac{k}{2} \rfloor
\end{array}
\right., \label{eq:static}
\end{align}
where $\mathbf{A}_{mij}$ is the affinity value between $X_i$ and $X_j$ in head $m$. Different from the affinity matrix obtained from self-attention, whose values are conditioned on the input features, the affinity values from convolution are static (they do not depend on the input features), sparse (only local connections are defined) and shared across the affinity matrix.
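A small numpy check of this construction for a single channel and a 1-d sequence (hypothetical helper; `np.convolve` with zero padding serves as the reference):

```python
import numpy as np

def conv_affinity_1d(ker, N):
    """Static, sparse, shared affinity from a 1-d depthwise kernel (one channel).

    A[i, j] = ker[i - j + k//2] if |i - j| <= k//2 else 0.
    """
    k = ker.shape[0]
    A = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if abs(i - j) <= k // 2:
                A[i, j] = ker[i - j + k // 2]
    return A

rng = np.random.default_rng(0)
ker = rng.standard_normal(3)                 # k = 3, odd
v = rng.standard_normal(10)
A = conv_affinity_1d(ker, 10)
out_affinity = A @ v                         # aggregation with the static affinity
out_conv = np.convolve(v, ker, mode="same")  # zero-padded convolution reference
```

Multiplying by the sparse affinity matrix reproduces the zero-padded depthwise convolution exactly, illustrating that the convolutional affinity is a banded matrix whose diagonals share the kernel weights.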
\paragraph{MLP-Mixer.}
The recently proposed MLP-Mixer \cite{tolstikhin2021mlpmixer} does not rely on any convolution or self-attention operators and achieves superior performance on image classification tasks.
The core of MLP-Mixer is the transposed MLP operation,
which can be denoted as $X = X + (V^T W_{MLP})^T$. We can define the affinity matrix as
\begin{align}
A^{mlp} = (W_{MLP})^T, \label{eq:residual}
\end{align}
where $W_{MLP}$ is a learnable parameter matrix. This simple equation shows that the transposed-MLP operator is a contextual aggregation operator on a single feature group with a dense affinity matrix. Compared with self-attention and depthwise convolution, the transposed-MLP affinity matrix is static, dense and has no parameter sharing.
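This equivalence can be checked in a few lines of numpy (assuming an identity value projection, so $V = X$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, C = 12, 6
X = rng.standard_normal((N, C))
W_mlp = rng.standard_normal((N, N))      # token-mixing weights over the N tokens

out_mixer = X + (X.T @ W_mlp).T          # transposed-MLP form: X + (V^T W_MLP)^T
A_mlp = W_mlp.T                          # the induced static, dense affinity
out_affinity = X + A_mlp @ X             # contextual-aggregation form
```

Both forms compute the same update, since $(V^T W_{MLP})^T = W_{MLP}^T V$.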
The above simple unification reveals the similarities and differences between self-attention, depthwise convolution and MLP-Mixer from the view of how the affinity matrix is constructed. This sheds light on how to create powerful and efficient building blocks for vision tasks.
\subsection{Container Block (New Version)}
From the previous analysis, we showed that different architectures perform a similar computation stack: the affinity matrix can be static or dynamically generated, and different inductive biases define different parameter sharing strategies. Motivated by this, we create a more powerful Container module by mixing dynamic and static affinity matrices with learnable parameters. Our Container module is defined below:
\begin{align}
x &= x + W_1\operatorname{Norm}\left(\alpha \overbrace{ A(x)}^{Dynamic} + \beta \overbrace{A}^{Static}\right)V,
\end{align}
$A(x)$ is an affinity matrix dynamically generated from the input $x$; the affinity matrix $A^{sa}$ defined above is the most classic dynamic contextual aggregation. $A$ is static and can capture dataset-level information; the affinity matrices $A^{conv}$ and $A^{mlp}$ defined above are the most classic static contextual aggregations.
\begin{itemize}
\item $\alpha = 1$, $\beta = 0$, $A(x)=A^{sa}$: the Container block becomes a Transformer block.
\item $\alpha = 0$, $\beta = 1$, $A = A^{conv}$, $M=C$: the Container block becomes a depthwise convolution block.
\item $\alpha, \beta$ learnable, $A(x)=A^{sa}$, $A = A^{mlp}$: the Container block fuses dynamic and static information. We call this module Pay Attention to MLP (PAM).
\item $\alpha, \beta$ learnable, $A(x)=A^{sa}$, $A = A^{conv}$: the Container block fuses dynamic and static information. Different from PAM, the static affinity matrix contains the locality constraint and shift invariance, which can better tackle vision tasks. We call this module Pay Attention to Convolution (PAC).
\end{itemize}
\subsection{Container Block}
\label{approch:container}
To create a more powerful Container block, we want the best of both worlds: we mix the dynamic context of self-attention and the locally constrained context of depthwise convolution with learnable parameters. Our proposed Container block is:
\begin{align}
x &= x + W_1\operatorname{Softmax}\left(\alpha \frac{QK^{T}}{\sqrt{d}} + \beta B\right)V,
\end{align}
where $Q$, $K$ and $V$ are the query, key and value, respectively. $W_1$ and $B$ are learnable parameters, and $\alpha$ and $\beta$ are learned mixture weights that can switch between dynamic context and static local context in a data-driven manner. The Container block can approximate the self-attention block, the local self-attention block and the depthwise convolution block with specific $\alpha$, $\beta$ and $B$.
\begin{itemize}
\item $\alpha = 1$ and $\beta = 0$: the Container block becomes a Transformer block.
\item $\alpha = 0$, $\beta = 1$ and $B = A^{conv}$: the Container block becomes a depthwise convolution block.
\item $\alpha = 0$, $\beta = 1$ and $B$ a local mask: the Container block becomes a local-transformer block.
\end{itemize}
\subsection{Base Architecture and Container-Light}
\label{approch:base_container_light}
\paragraph{Base Architecture}
To fairly verify the learning ability of different context modules, we use the same base network architecture and only replace the context module with affinities generated by self-attention, depthwise convolution, MLP, \mbox{\sc{Container}}\xspace and \mbox{\sc{Container-Light}}\xspace.
Motivated by the network design in previous work \cite{ResNet, PVT Net}, our base architecture contains 4 blocks. Different from ViT/DeiT, which down-sample the image to a low resolution and keep the feature map resolution constant, each block in our base architecture down-samples the image resolution gradually. Gradual down-sampling keeps image details, which is important for downstream tasks like segmentation and object detection. Each block is a cascade of basic modules. Each module contains two sub-modules: the first propagates spatial information, so we call it the spatial propagation module; the second propagates channel information, so we call it the channel propagation module. In this paper, the channel propagation module is fixed to a 2-layer MLP, as proposed in the Transformer \cite{vaswani2017attention}. Designing a better spatial propagation module is the main focus of this paper. Given an image $I$, we extract $4 \times 4$ patches and pass them through block 1 for $B1$ layers, then extract $2 \times 2$ patches and pass through block 2 for $B2$ layers, then extract $2 \times 2$ patches and pass through block 3 for $B3$ layers, and finally extract $2 \times 2$ patches and pass through block 4 for $B4$ layers. The feature dimension within each block is fixed; the input feature dimensions of the four blocks are 128, 256, 320 and 512. For \mbox{\sc{Container}}\xspace, the four blocks contain 2, 3, 8 and 3 layers, resulting in a network with 22 million parameters. \mbox{\sc{Container-Light}}\xspace adopts the same architecture as \mbox{\sc{Container}}\xspace and only turns off the dynamic affinity matrix in the first 3 stages, so those stages can be efficiently implemented with the depthwise convolution operator offered by modern deep learning frameworks.
\paragraph{Container-Light}
For computational efficiency in visual backbone design, we truncate the dynamic affinity of \mbox{\sc{Container}}\xspace at high-resolution features and only keep the mixture of the dynamic and the local static mechanisms in the low-resolution feature maps, resulting in \mbox{\sc{Container-Light}}\xspace.
\begin{align}
A_m^{\mbox{\sc{Container-Light}}\xspace} = \left\{
\begin{array}{ll}
A_m^{conv} \quad &Down = 4, 8, 16\\
\alpha A_m^{sa} + \beta A_m^{conv} \quad &Down = 32
\end{array}
\right., \label{eq:light}
\end{align}
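This stage-wise rule can be sketched as a simple selector (the function name and signature are hypothetical):

```python
import numpy as np

def container_light_affinity(down, A_conv, A_sa=None, alpha=1.0, beta=1.0):
    """Stage-wise affinity: static-only at high resolution, mixed at the last stage."""
    if down in (4, 8, 16):                   # blocks 1-3: cheap, local, static
        return A_conv
    return alpha * A_sa + beta * A_conv      # block 4 (down = 32): add dynamic term

N = 8
rng = np.random.default_rng(0)
A_conv = rng.standard_normal((N, N))
A_sa = rng.standard_normal((N, N))
A_stage1 = container_light_affinity(4, A_conv, A_sa)
A_stage4 = container_light_affinity(32, A_conv, A_sa, alpha=0.5, beta=0.5)
```

Only the last stage pays the quadratic cost of computing $A^{sa}$; the first three stages reduce to depthwise convolution.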
\begin{figure*}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/overall_graph.png}
\end{center}
\caption{The template architecture used to validate the effectiveness of different context learning modules. Our template architecture gradually down-samples the image and contains four stages with different down-sampling rates. Each stage contains several blocks. Inside each block, the input features are processed by a context module followed by a channel fusion module. The template architecture is pretrained on ImageNet for classification; the pretrained weights are then transferred to downstream tasks such as object detection and instance segmentation. }
\label{fig:vis}
\end{figure*}
\section{Experiments}
\label{sec:exp}
\input{tables/imagenet_top1}
We now present experiments with \mbox{\sc{Container}}\xspace for ImageNet and with \mbox{\sc{Container-Light}}\xspace for the tasks of object detection, instance segmentation and self-supervised learning. We also present appropriate baselines. Please see the appendix for details of the models, training and setup.
\subsection{ImageNet Classification}
\label{sec:exp_imagenet}
\textbf{Top-1 Accuracy.}
Table~\ref{tab:imagenet_top1} compares several highly performant models within the CNN, Transformer, MLP, Hybrid and our proposed \mbox{\sc{Container}}\xspace families. \mbox{\sc{Container}}\xspace and \mbox{\sc{Container-Light}}\xspace outperform the pure Transformer models ViT~\cite{dosovitskiy2020image} and DeiT~\cite{touvron2020training} despite far fewer parameters. They outperform PVT~\cite{wang2021pyramid}, which employs a hierarchical representation similar to our base architecture. They also outperform the recently published state-of-the-art SWIN~\cite{liu2021swin} (outperforming Swin-T, which has more parameters). The best performing models continue to be from the EfficientNet~\cite{tan2019efficientnet} family, but we note that EfficientNet~\cite{tan2019efficientnet} and RegNet~\cite{radosavovic2020designing} apply an extensive neural architecture search, which we do not. Finally note that \mbox{\sc{Container-Light}}\xspace not only achieves a high accuracy but does so at lower FLOPs and much faster throughput than models with comparable capacities.
The \mbox{\sc{Container}}\xspace framework allows us to easily reproduce past architectures and also to create effective extensions over past work (outlined in Sec.~\ref{approch:container}), several of which are compared in Table~\ref{tab:imagenet_ablation}. H-DeiT-S is a hierarchical version of DeiT-S obtained by simply using $\mathcal{A}^{sa}$ within our hierarchical architecture, and provides a 1.2 point gain. Conv-3 (naive convolution (conv) with a $3 \times 3$ kernel) aggregates spatial and channel information, whereas Group Conv-3 splits input features and performs convs using different kernels -- it is cheaper and more effective. When the group size equals the channel dimension, we get depthwise conv. DW-3 is a depthwise conv with a $3 \times 3$ kernel that only aggregates spatial information; channel information is fused using $1 \times 1$ convs. MH-DW-3 is a multi-head version of DW-3 that shares kernel parameters within the same group. With fewer kernels, MH-DW-3 achieves performance comparable to DW-3. MLP is an implementation of the transposed MLP for spatial propagation. MLP-LR stands for MLP with low-rank decomposition and provides better performance with fewer parameters. MH-MLP-LR adds a multi-head mechanism over MLP-LR and provides further improvements. In contrast to the original MLP-Mixer~\cite{tolstikhin2021mlpmixer}, we do not add any non-linearity like GELU into \mbox{\sc{Container}}\xspace, as specified in the contextual aggregation equation.
\input{tables/imagenet_ablation}
\textbf{Data Efficiency.}
\input{tables/data_efficiency}
\mbox{\sc{Container-Light}}\xspace has a built-in shift-invariance and parameter sharing mechanism. As a result it is more data efficient in comparison to DeiT~\cite{touvron2020training}. Table~\ref{tab:data_efficiency} shows that at the low data regime of 10\%, \mbox{\sc{Container-Light}}\xspace outperforms DeiT by a massive 22.5 points.
\textbf{Convergence Speed.}
Figure~\ref{fig:convergence_viz} (left) compares the convergence speeds of the two \mbox{\sc{Container}}\xspace variants with a CNN and Transformer (DeiT)~\cite{touvron2020training}. The inductive biases in the CNN enable it to converge faster than DeiT~\cite{touvron2020training}, but they eventually perform similarly at 300 epochs, suggesting that dynamic, long range context aggregation is powerful but slow to converge. \mbox{\sc{Container}}\xspace combines the best of both and provides accuracy improvements with fast convergence. \mbox{\sc{Container-Light}}\xspace converges as fast with a slight accuracy drop.
\begin{figure}[]
\begin{center}
\includegraphics[width=\linewidth]{figures/convergence_viz.pdf}
\end{center}
\caption{\textbf{(left)} Convergence speed comparison between \mbox{\sc{Container}}\xspace, \mbox{\sc{Container-Light}}\xspace, Depthwise conv and DeiT. \textbf{(right)} Visualization of the static affinity weights at different positions and layers. Layer 1 displays the emergence of local affinities (resembling convolutions).}
\label{fig:convergence_viz}
\end{figure}
\textbf{Emergence of locality.}
Within our \mbox{\sc{Container}}\xspace framework, we can easily add a static affinity matrix to the DeiT architecture. This simple change (a 1-line code addition) can provide a +0.5 Top-1 improvement from 79.9\% to 80.4\%. This suggests that static and dynamic affinity matrices provide complementary information. As noted in Sec.~\ref{approch:container}, we name this \mbox{\sc{Container-Pam}}\xspace.
It is interesting to visualize the learnt static affinities at different network layers. Figure~\ref{fig:convergence_viz} (right) displays these for 2 layers. Each matrix represents the static affinities for a single position, reshaped to a 2-d grid to resemble the landscape of the neighboring regions.
Within Layer 1, we interestingly observe the emergence of local operations via the enhancement of affinity values next to the source pixel (location). These are akin to convolution operations. Furthermore, the affinity value for the source pixel is very small, i.e. at each location, the context aggregator does not use its current feature. We hypothesize that this is a result of the residual connection~\cite{he2016deep}, thereby alleviating the need to include the source feature within the context. Note that in contrast to dynamic affinity, the learnt static matrix is shared for all input images. Notice that Layer 12 displays a more global affinity matrix without any specific interpretable local patterns.
\subsection{Detection with RetinaNet}
\input{tables/detection_segmentation}
Since the attention complexity for \mbox{\sc{Container-Light}}\xspace is linear at high image resolutions (initial layers) and then quadratic, it can be employed for downstream tasks such as object detection which usually require high resolution feature maps. Table~\ref{tab:detection_segmentation} compares several backbones applied to the RetinaNet detector~\cite{lin2017focal} on the COCO dataset~\cite{lin2014microsoft}. Compared to the popular ResNet-50~\cite{he2016deep}, \mbox{\sc{Container-Light}}\xspace achieves 43.8 mAP, an improvement of 7.0, 7.2 and 10.4 on $AP_S$, $AP_M$, and $AP_L$ with comparable parameters and cost. The significant increase for large objects shows the benefits of global attention via the dynamic global affinity matrix in our model. \mbox{\sc{Container-Light}}\xspace also surpasses the large convolution-based backbone X-101-64~\cite{xie2017aggregated} and pure Transformer models with similar number of parameters such as PVT-S~\cite{wang2021pyramid}, ViL-S~\cite{Zhang2021MultiScaleVL}, and SWIN-T~\cite{liu2021swin} by large margins. Compared to large Transformer backbones such as ViL-M~\cite{Zhang2021MultiScaleVL} and ViL-B~\cite{Zhang2021MultiScaleVL}, we achieve comparable performance with significantly fewer parameters and FLOPs.
\subsection{Detection and Segmentation with Mask-RCNN}
Table~\ref{tab:detection_segmentation} also compares several backbones for detection and instance segmentation using the Mask R-CNN network~\cite{he2017mask}. As with the findings for RetinaNet~\cite{lin2017focal}, \mbox{\sc{Container-Light}}\xspace outperforms convolution and Transformer based approaches such as ResNet~\cite{he2016deep}, X-101~\cite{xie2017aggregated}, PVT~\cite{wang2021pyramid}, ViL~\cite{Zhang2021MultiScaleVL} and recent state-of-the-art SWIN-T~\cite{liu2021swin} and the recent hybrid approach BoT~\cite{srinivas2021bottleneck}. It obtains comparable numbers to the much larger ViL-B~\cite{Zhang2021MultiScaleVL}.
\subsection{Detection with DETR}
\input{tables/detr}
Table~\ref{tab:detr} shows that our model can consistently improve object detection performance compared to a ResNet-50~\cite{he2016deep} backbone (comparable parameters and computation) on end-to-end object detection using DETR~\cite{carion2020end}. We demonstrate large improvements with DETR~\cite{carion2020end}, DDETR~\cite{zhu2020deformable} as well as SMCA-DETR~\cite{gao2021fast}. See appendix for $AP^S$, $AP^M$, and $AP^L$ numbers. All models in table ~\ref{tab:detr} are trained using a 50 epochs schedule.
\subsection{Self supervised learning}
\input{tables/dino}
We train DeiT~\cite{touvron2020training} and \mbox{\sc{Container-Light}}\xspace for 100 epochs at the self supervised task of visual representation learning using the DINO framework~\cite{caron2021emerging}. Table~\ref{tab:dino} compares top-10 kNN accuracy for both backbones at different epochs of training. \mbox{\sc{Container-Light}}\xspace significantly outperforms DeiT, with large improvements early in training, demonstrating more efficient learning.
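The kNN metric referenced above can be sketched as follows: each validation feature is classified by a majority vote among its $k$ most similar training features under cosine similarity. This is a minimal pure-Python illustration of the metric, not the authors' evaluation code (the function names and toy data are ours):

```python
from collections import Counter
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def knn_accuracy(train_feats, train_labels, val_feats, val_labels, k=10):
    """Fraction of val samples whose label matches the majority label
    of their k most-similar train samples (cosine similarity)."""
    correct = 0
    for f, y in zip(val_feats, val_labels):
        nearest = sorted(range(len(train_feats)),
                         key=lambda i: cosine(f, train_feats[i]),
                         reverse=True)[:k]
        votes = Counter(train_labels[i] for i in nearest)
        if votes.most_common(1)[0][0] == y:
            correct += 1
    return correct / len(val_feats)

# Toy check: two well-separated clusters.
train = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0), (0.1, 0.9)]
labels = [0, 0, 1, 1]
val = [(0.95, 0.05), (0.05, 0.95)]
print(knn_accuracy(train, labels, val, [0, 1], k=2))  # 1.0
```

In practice the features would be the frozen backbone embeddings of ImageNet images, with $k$ and the similarity choice following the DINO evaluation protocol.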
\section{Analysis}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we have shown that disparate architectures such as Transformers, depth-wise CNNs and MLP-based methods are closely related via an affinity matrix used for context aggregation. Using this view, we have proposed \mbox{\sc{Container}}\xspace, a generalized context aggregation building block that combines static and dynamic affinity matrices using learnable parameters. Our proposed networks, \mbox{\sc{Container}}\xspace and \mbox{\sc{Container-Light}}\xspace show superior performance at image classification, object detection, instance segmentation and self-supervised representation learning. We hope that this unified view can motivate future research in the design of effective and efficient visual backbones.
\textbf{Limitations}: \mbox{\sc{Container}}\xspace is very effective at image classification but cannot be directly applied to high resolution inputs. The efficient version \mbox{\sc{Container-Light}}\xspace, can be used for a variety of tasks. However, its limitation is that it is partially hand-crafted -- the dynamic affinity matrix is switched off in the first 3 stages. Future work will address how to learn this using the task at hand.
\textbf{Negative societal impact}: This research does not have a direct negative societal impact. However, we should be aware that powerful neural networks, particularly image classification networks can be used for harmful applications like face and gender recognition.
\textbf{Disclosure of Funding}
This work was partially supported by the Shanghai Committee of Science and Technology, China (Grant No. 21DZ1100100 and 20DZ1100800).
\section*{Checklist}
\begin{enumerate}
\item For all authors...
\begin{enumerate}
\item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?
\answerYes{}
\item Did you describe the limitations of your work?
\answerYes{Please see Sec~\ref{sec:conclusion}.}
\item Did you discuss any potential negative societal impacts of your work?
\answerYes{Please see Sec~\ref{sec:conclusion}.}
\item Have you read the ethics review guidelines and ensured that your paper conforms to them?
\answerYes{}
\end{enumerate}
\item If you are including theoretical results...
\begin{enumerate}
\item Did you state the full set of assumptions of all theoretical results?
\answerNA{}
\item Did you include complete proofs of all theoretical results?
\answerNA{}
\end{enumerate}
\item If you ran experiments...
\begin{enumerate}
\item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?
\answerYes{Provided in the supplementary material.}
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?
\answerYes{Provided in the supplementary material.}
\item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?
\answerNo{Training the large neural networks in this paper is very expensive. Given our limited computational resources, we are unfortunately unable to provide error bars for our experiments.}
\item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?
\answerYes{Provided in the supplementary material.}
\end{enumerate}
\item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...
\begin{enumerate}
\item If your work uses existing assets, did you cite the creators?
\answerYes{}
\item Did you mention the license of the assets?
\answerNA{}
\item Did you include any new assets either in the supplemental material or as a URL?
\answerNA{}
\item Did you discuss whether and how consent was obtained from people whose data you're using/curating?
\answerNA{}
\item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?
\answerNA{}
\end{enumerate}
\item If you used crowdsourcing or conducted research with human subjects...
\begin{enumerate}
\item Did you include the full text of instructions given to participants and screenshots, if applicable?
\answerNA{}
\item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?
\answerNA{}
\item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?
\answerNA{}
\end{enumerate}
\end{enumerate}
\section*{Appendix}
\setcounter{table}{0}
\renewcommand{\thetable}{\Alph{section}.\arabic{table}}
\setcounter{figure}{0}
\renewcommand{\thefigure}{\Alph{section}.\arabic{figure}}
\setcounter{algocf}{0}
\renewcommand{\thealgocf}{\Alph{section}.\arabic{algocf}}
\section{Experimental setups}
\subsection{ImageNet Classification}
ImageNet-1k is an image classification dataset with 1000 object categories. We use the basic architecture explained in Section~\ref{approach:container_net}.
All models are trained with the same settings as DeiT. The depthwise convolution, MLP, and \mbox{\sc{Container-Light}}\xspace models are trained on 8 16GB V100 GPUs, with each GPU processing 128 images. The Transformer and \mbox{\sc{Container}}\xspace models are trained on 8 80GB A100 GPUs, again with 128 images per GPU. Color jitter, random erasing, and mixup are used as data augmentation strategies. We use the AdamW optimizer. Learning rates are calculated using the following equation:
\begin{align}
lr = \frac{lr_{\mathrm{base}} \times \mathrm{Batch} \times N_{\mathrm{GPU}}}{512}
\end{align}
where the base learning rate is chosen to be $5 \times 10^{-4}$ and $\mathrm{Batch}$ is the per-GPU batch size. We use a cosine learning rate schedule, warm up the model during the first 5 epochs, and train for 300 epochs in total.
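The linear scaling rule above can be written as a one-line helper; with the values used in the paper (base learning rate $5\times10^{-4}$, 128 images per GPU, 8 GPUs) it yields a learning rate of $10^{-3}$. The function name and defaults are illustrative:

```python
def scaled_lr(base_lr=5e-4, batch_per_gpu=128, n_gpu=8, denom=512):
    """Linear learning-rate scaling: lr = base_lr * total_batch / 512."""
    return base_lr * batch_per_gpu * n_gpu / denom

print(scaled_lr())  # 0.001 for the 8-GPU, 128-images-per-GPU setup
```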
\subsection{Detection with RetinaNet}
RetinaNet is a one-stage dense object detector that uses a feature pyramid network and focal loss. It is trained for 12 epochs, starting with a learning rate of 0.0001, which is decreased by a factor of 10 at epochs 8 and 11. We use the AdamW optimizer and set the weight decay to 0.05. No gradient clipping is applied. We warm up for the first 500 iterations. Models are trained on 8 V100 GPUs with 2 images per GPU. We freeze the batch normalization parameters, similar to DETR.
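The schedule described above (500-iteration warmup, then 10x drops at epochs 8 and 11) can be sketched as below. The linear warmup ramp is a common convention and is assumed here, since the text does not specify the ramp shape:

```python
def retinanet_lr(iteration, epoch, base_lr=1e-4,
                 warmup_iters=500, decay_epochs=(8, 11), factor=0.1):
    """Warmup over the first 500 iterations (linear ramp, assumed),
    then step decay by a factor of 10 at epochs 8 and 11."""
    if epoch == 0 and iteration < warmup_iters:
        return base_lr * (iteration + 1) / warmup_iters
    drops = sum(1 for e in decay_epochs if epoch >= e)
    return base_lr * (factor ** drops)

print(retinanet_lr(1000, 5))  # 1e-4 (full learning rate)
print(retinanet_lr(0, 8))     # 1e-5 (after first drop)
print(retinanet_lr(0, 11))    # 1e-6 (after second drop)
```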
\subsection{Detection and Segmentation with Mask-RCNN}
Mask-RCNN is a multi-task framework for object detection and instance segmentation. Mask-RCNN models are trained on 8 GPUs with 2 images per GPU and are optimized by AdamW with a learning rate of 0.0001 and a weight decay of 0.05. We warm up for the first 500 iterations. BN parameters are frozen for all layers.
\subsection{Detection with DETR}
DETR is an encoder-decoder Transformer for end-to-end object detection. To improve the convergence speed and performance of DETR, SMCA-DETR proposes a spatially modulated co-attention mechanism, while Deformable DETR achieves fast convergence through deformable encoder and decoder layers. We compare \mbox{\sc{Container-Light}}\xspace with ResNet-50 on DETR without dilation, SMCA without multi-scale features, and Deformable DETR without multi-scale features. DETR and SMCA-DETR are optimized with 8 GPUs and 2 images per GPU, whereas Deformable DETR uses 8 GPUs and 4 images per GPU. All models are optimized with the AdamW optimizer and gradient clipping. DETR, SMCA-DETR, and Deformable DETR all use the default parameter settings from the original code releases.
\begin{table}[h!]
\setlength\tabcolsep{2pt}
\centering
\begin{tabular}{c|ccccccccc}
\toprule
Method & Backbone & mAP & AP$_S$ & AP$_M$ & AP$_L$\\
\midrule
DETR~\cite{carion2020end} &ResNet50 & 32.3 &10.7& 33.8&53.0\\
DETR~\cite{carion2020end} &\mbox{\sc{Container-Light}}\xspace & 38.9 &16.5& 42.2&60.3\\
\midrule
\makecell[c]{SMCA w/o\\ multi-scale~\cite{gao2021fast}} & ResNet50 & 41.0 &21.9& 44.3&59.1\\
\makecell[c]{SMCA w/o\\ multi-scale~\cite{gao2021fast}} & \mbox{\sc{Container-Light}}\xspace & 44.2 & 23.8& 47.9&63.1\\
\midrule
\makecell[c]{DDetr w/o\\ multi-scale~\cite{zhu2020deformable}} & ResNet50 & 39.3 & 19.8& 43.5&56.1\\
\makecell[c]{DDetr w/o\\ multi-scale~\cite{zhu2020deformable}} & \mbox{\sc{Container-Light}}\xspace & 43.0 & 23.3& 46.3&61.2\\
\bottomrule
\end{tabular}
\vspace{2pt}
\caption{Comparison of ResNet-50 and \mbox{\sc{Container-Light}}\xspace backbones on DETR variants, reporting mAP, AP$_S$, AP$_M$, and AP$_L$.}
\label{tab:detr_full}
\end{table}
\subsection{Self-supervised Learning DINO}
DINO is a recently proposed self-supervised learning framework. We adopt the default training setup in DINO to test the performance of \mbox{\sc{Container-Light}}\xspace on self-supervised learning.
We compare with the ViT-S/16 model using DINO. The baseline model and \mbox{\sc{Container-Light}}\xspace are trained for 100 epochs with cosine schedules for the learning rate and weight decay. The learning rate at the end of warmup is 0.0005, and the final weight decay is 0.4. Batch size per GPU is set to 64. We report kNN accuracy as the metric to evaluate the performance of the self-supervised models.
\section{1 line code change for Container-PAM}
\lstinputlisting[
style = Python,
caption = {With just 1 line of code change in the forward pass of the Attention module within ViT, one can implement \mbox{\sc{Container-Pam}}\xspace and obtain a +0.5 improvement on ImageNet top-1 accuracy.},
label = {get_flops.tex}
]{get_flops.tex}
The attention code is borrowed from the TIMM library~\footnote{\texttt{https://github.com/rwightman/pytorch-image-models/tree/master/timm}}. The one-line code addition in the forward pass for \mbox{\sc{Container-Pam}}\xspace is implemented (and commented) in red. This code also requires enabling an additional parameter (also shown in red).
\label{sec:1}
Cilia and flagella are micron-sized filamentous organelles found in eukaryotic cells that play a crucial role in biologically important processes such as locomotion, mucus clearance, embryogenesis and cell motility \cite{Ainsworth,sleigh}. While the biophysical and biochemical mechanisms governing and regulating the activity of these oscillations are still not well understood, there is a growing interest in biomimetic applications of these structures in the field of microfluidics and soft robotics. For example, elastically connected beads actuated by time-periodic magnetic fields \cite{Drey,baba} or external chemical gradient \cite{sasaki,patteson,chelakkot} enable directed transport of cargo.
An alternate mechanism, motivated by biological motor-filament assays, involves slender filaments subjected to follower forces; here, controllable oscillations are driven by mechanical instabilities. In passive settings, nonconservative follower loads, acting either as a point force or as a distributed load, play a crucial role in several contexts such as pipes conveying fluid \cite{pai1,pai2}, self-propelled structures \cite{wood} and flutter in rockets \cite{pai3}. Practical engineering applications of systems subjected to follower forces, as well as theoretical developments in the stability analysis of such systems, are presented in a number of review studies \cite{Elish,Inger,Bolo,Langthjem}. In biological contexts, follower forces are realized in motor-filament aggregates inside cells and in in-vitro assays wherein polar molecular motors attach to filaments and exert forces along their backbone \cite{MBC}.
Mechanical responses such as buckling are a common motif in biology
\cite{Ashkan,Robison,Gopinath11}, where filaments are typically constrained in some way by their surroundings. Similar constraints on unfettered motion are seen in animated filaments.
Building on these ideas, several researchers have effectively used continuum models for analyzing the post-buckling behavior of slender inextensible active or inactive filaments subjected to follower forces \cite{qin,de,kevin}. These studies have focused on the dynamics of free-free, fixed-free, and pinned-free filaments with the base state being a straight non-stressed filament. However, the role of pre-stress in emergent oscillations driven by distributed follower forces has received attention only recently despite the richness of its potential for bio-inspired applications \cite{sfb2,Bayly}.
In this paper, we study the effect of boundary constraints on the flapping oscillations of pre-stressed rods animated by follower forces. In particular, we focus on two statically indeterminate scenarios of boundary constraints: fixed-fixed (FF) and pinned-fixed (PF). By 'fixed-fixed', we refer to a rod clamped at both ends, and by 'pinned-fixed' we refer to a rod clamped at one end and attached to a pin joint at the other end allowing free rotation. In both scenarios, the rod is pre-stressed by decreasing the end-to-end distance, thereby generating a buckled shape, and then it is subjected to a uniformly distributed follower force along the centerline tangent. The lack of constraint at the free-end of a cantilever (fixed-free scenario) allows for either lateral oscillations or steady rotations to develop \cite{chelakkot}. In contrast, in statically indeterminate fixed-fixed (FF) and pinned-fixed (PF) scenarios, the slack generated by initial compression offers the necessary degree of freedom to allow for in-plane oscillations (flapping). The rod model presented here is three-dimensional, but due to planar perturbations and loads, the resulting oscillations remain planar. That said, we have observed that out-of-plane oscillations can emerge from two-dimensional base states. In this paper, however, we focus our attention exclusively on planar dynamics and will report our findings on three-dimensional oscillations in a follow-up publication. A broader impact of the results presented in this paper lies in recognizing how the interplay of geometry, elasticity, dissipation and activity unique to the pre-stressed scenarios can be harvested to move or manipulate fluid at various length scales.
\section{Model}
\label{sec:2}
\begin{figure*}
\begin{center}
\includegraphics[width=1.72\columnwidth]{Figure_1.png}
\caption{The graphics on the top show the schematic representations of a rod of unstressed length $L$ in fixed-fixed (FF) scenario (a) and in pinned-fixed (PF) scenario (b). The end-to-end distance in the buckled state is $L_{\mathrm{ee}} < L$. The corresponding shapes of the rod centerline in the buckled states are shown in the middle row for different values of $L_{\mathrm{ee}}/L$. The dashed line corresponds to the {unbuckled} case $L_{\mathrm{ee}}/L=1.0$. Pre-stress contributes to the tension along the filament, $f_3$ (bottom row). For the shapes we study in this paper, there is a one-to-one correspondence between the tension, $f_3$, and pre-stress.}
\label{fig:1}
\end{center}
\end{figure*}
The continuum rod model that we use follows the classical approach of Kirchhoff \cite{kirk}, which assumes each cross-section of the rod to be rigid. The model is described in detail in \cite{sfb2}. To briefly summarize, equilibrium equations (\ref{linear_momentum}) and (\ref{angular_momentum}), and the compatibility conditions (\ref{position_continuity}) and (\ref{orient_continuity}) are given below:
\begin{equation}
m(\frac{\partial \vec{v}}{\partial t} + \vec{\omega} \times \vec{v}) - (\frac{\partial \vec{f}}{\partial s} + \vec{\kappa} \times \vec{f}) - \vec{F} = \vec{0} \label{linear_momentum}
\end{equation}
\begin{equation}
\underline{\mathbf{I_m}}\frac{\partial \vec{\omega}}{\partial t} + \vec{\omega} \times {{ \underline{\mathbf{I_m}}}}\vec{\omega} -(\frac{\partial \vec{q}}{\partial s} + \vec{\kappa} \times \vec{q}) + \vec{f} \times \vec{r} - \vec{Q}= \vec{0} \label{angular_momentum}
\end{equation}
\begin{equation}
\frac{\partial \vec{r}}{\partial t} + \vec{\omega} \times \vec{r} - (\frac{\partial \vec{v}}{\partial s} + \vec{\kappa} \times \vec{v}) =\vec{0} \label{position_continuity}
\end{equation}
\begin{equation}
\frac{\partial \vec{\kappa}}{\partial t} - (\frac{\partial \vec{\omega}}{\partial s} + \vec{\kappa} \times \vec{\omega}) =\vec{0} \label{orient_continuity}
\end{equation}
Here $s$ is the cross-section location along the rod, $t$ is time, $m(s)$ is the mass of the rod per unit length, and the tensor $\underline{\mathbf{I_m}}(s)$ is the moment of inertia per unit length in the body-fixed frame of reference. Variation of the vector $\vec{r}(s, t)$ encodes shear and extension of the rod; in this paper, it is assumed constant to ensure inextensibility and unshearability. The vectors $\vec{F}$ and $\vec{Q}$ are the external distributed force and moment, respectively. They include the distributed follower force as well as interactions of the rod with the environment such as fluid drag. Note that the spatial and temporal derivatives in equations (\ref{linear_momentum}) - (\ref{orient_continuity}) are relative to the body-fixed frame, which obviates the need to transform the body-fixed follower forces and drag to the inertial frame.
The unknown variables that we need to solve for are: the vector $\vec{\kappa}(s,t)$ that captures two-axis bending and torsion, the vectors $\vec{v}(s,t)$ and $\vec{\omega}(s,t)$ that represent the translational and angular velocities of each cross-section, respectively, and the vector $\vec{f}(s,t)$ that represents the internal shear forces and tension. The internal moment vector $\vec{q}(s,t)$ in the angular momentum equation (\ref{angular_momentum}) is related to $\vec{\kappa}(s,t)$ by the linear constitutive law
\begin{equation}
\vec{q}(s,t) = \underline{\mathbf{B}} \vec{\kappa}, \label{const}
\end{equation}
where the tensor $\underline{\mathbf{B}}(s)$ represents the bending and torsional stiffness of the rod. In the body-fixed frame that coincides with principal torsion-flexure axes, the stiffness tensor $\underline{\mathbf{B}}$ is expressed as
\begin{eqnarray}
[\underline{\mathbf{B}}]=
\left[ {\begin{array}{ccc}
EI_1 & 0 & 0 \\
0 & EI_2 & 0 \\
0 & 0 & GI_3\\
\end{array} } \right],
\label{stiffness}
\end{eqnarray}
where $E$ is the Young's modulus, $G$ is the shear modulus, and $I_1$, $I_2$, and $I_3$
are the second moments of cross-section area about the principal torsion-flexure axes.
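As a concrete illustration of the constitutive law (\ref{const}) with the diagonal stiffness tensor (\ref{stiffness}), the following sketch evaluates $\vec{q} = \underline{\mathbf{B}}\vec{\kappa}$ using the property values listed in Table 1 (the function name is ours):

```python
# Rod properties from Table 1
E, G = 68.95e9, 27.58e9        # Young's and shear moduli, Pa
I1 = I2 = 4.24e-10             # second moments of area, m^4
I3 = 8.48e-10                  # polar moment of area, m^4

def internal_moment(kappa):
    """q = B kappa with diagonal stiffness B = diag(E I1, E I2, G I3)."""
    b = (E * I1, E * I2, G * I3)
    return tuple(bi * ki for bi, ki in zip(b, kappa))

# Pure bending about axis 1 with curvature 0.1 1/m:
q = internal_moment((0.1, 0.0, 0.0))
print(q[0])  # bending moment = E*I1*0.1, about 2.92 N m
```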
The \textit{Generalized-$\alpha$} method is adopted to compute
the numerical solution of this system, subjected to necessary
and sufficient initial and boundary conditions. A detailed description of
this numerical scheme applied to the rod formulation is given in \cite{sfb}. We have validated this scheme by comparing our results with the known results of Beck's column \cite{sfb2}.
To model fluid dissipation, we use either a Stokes-like linear drag [S] or the quadratic Morrison drag [M] in our simulations. These drags are given by equations (\ref{eq:dragS}) and (\ref{eq:dragM}), respectively \cite{sachin}:
\begin{equation}
\vec{F}_{\textrm{S}}=-\frac{1}{2}\rho_{\textrm{f}} d \Big( C_n \vec{t}\times(\vec{v}\times \vec{t}) + \pi C_t(\vec{v}\cdot \vec{t})\:\vec{t} \Big)
\label{eq:dragS}
\end{equation}
\begin{equation}
\vec{F}_{\textrm{M}}=-\frac{1}{2}\rho_{\textrm{f}} d \Big( C_n|\vec{v}\times \vec{t}|\vec{t}\times(\vec{v}\times \vec{t}) + \pi C_t(\vec{v}\cdot \vec{t})|\vec{v}\cdot \vec{t}|\:\vec{t} \Big)
\label{eq:dragM}
\end{equation}
Here, $\rho_{\textrm{f}}$ is the fluid density, $d$ is diameter of the rod, ${\bf t}$ is the unit tangent vector along the rod's centerline and $C_n$ and $C_t$ are drag coefficients in the normal and tangential directions, respectively. In this paper we primarily focus on results obtained for quadratic drag. However, we also comment on results obtained using the linear drag.
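A minimal implementation of the two drag laws (\ref{eq:dragS}) and (\ref{eq:dragM}) is sketched below with the parameter values from Table 1; it is a plain-Python illustration of the formulas rather than part of the solver:

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def drag(v, t, rho=1000.0, d=0.0096, Cn=0.1, Ct=0.01, quadratic=True):
    """Distributed drag per unit length for centerline velocity v and
    unit tangent t: Stokes-like (linear) or Morrison (quadratic) form."""
    vxt = cross(v, t)               # v x t
    normal = cross(t, vxt)          # t x (v x t): normal component of v
    vt = dot(v, t)                  # tangential speed
    if quadratic:
        speed_n = math.sqrt(dot(vxt, vxt))
        fn = tuple(-0.5 * rho * d * Cn * speed_n * c for c in normal)
        ft = tuple(-0.5 * rho * d * math.pi * Ct * vt * abs(vt) * c for c in t)
    else:
        fn = tuple(-0.5 * rho * d * Cn * c for c in normal)
        ft = tuple(-0.5 * rho * d * math.pi * Ct * vt * c for c in t)
    return tuple(a + b for a, b in zip(fn, ft))

# Purely normal motion: t along x, v = 1 m/s along y.
Fq = drag((0.0, 1.0, 0.0), (1.0, 0.0, 0.0))
print(Fq)  # quadratic drag opposes v: approximately (0, -0.48, 0)
```

For this purely normal motion the linear and quadratic forms coincide at unit speed, since $|\vec{v}\times\vec{t}| = 1$.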
\begin{table*}[htp]
\begin{center}
{\small
\begin{tabular}{ | l | l | l | l |}
\hline
{\bf Quantity} & {\bf Variable} & {\bf Value} & {\bf Units} \\ \hline\hline
Diameter & $d$ & $ 0.0096 $ & m \\ \hline
Length & $L$ & 8 & m \\ \hline
Mass per unit length& $m$ & 0.2019 & kg/m \\ \hline
Young's modulus & $E$ & 68.95 & GPa \\ \hline
Shear modulus & $G$ & 27.58 & GPa \\ \hline
Second moment of area & $I_1=I_2=I$ & 4.24 $\times$10$^{-10}$ & m$^4$ \\ \hline
Polar moment of area & $I_3$ & 8.48 $\times$10$^{-10}$ & m$^4$ \\ \hline
Normal drag coefficient & $C_n$ & 0.1 & s/m [S], none [M] \\ \hline
Tangential drag coefficient & $C_t$ & 0.01 & s/m [S], none [M] \\ \hline
Surrounding fluid density & $\rho_{\textrm{f}} $ & 1000 & kg/m$^3$ \\ \hline
\end{tabular}
\caption{Numerical values for the geometric and elastic properties of the rod and drag coefficients used in the simulations. {The ratio $C_n/\pi C_t = 3.18$ is comparable to the value 2 for the limit of purely viscous (Stokesian) drag ratio for a slender rod using resistivity theory} \cite{chelakkot}.}
\label{tab1}}
\end{center}
\end{table*}
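The entries in Table~\ref{tab1} can be cross-checked against standard formulas for a solid circular cross-section (the section shape is an assumption on our part; the tabulated $I$ differs from $\pi d^4/64$ by roughly two percent):

```python
import math

d, L, m = 0.0096, 8.0, 0.2019          # diameter, length, mass/length (Table 1)
I_table, I3_table = 4.24e-10, 8.48e-10
Cn, Ct = 0.1, 0.01

A = math.pi * d**2 / 4                 # cross-section area
I = math.pi * d**4 / 64                # second moment for a solid circle
print(I)                    # ~4.17e-10, within a few percent of Table 1
print(I3_table / I_table)   # polar moment = 2*I for a circular section
print(m / A)                # implied density ~2790 kg/m^3, which together
                            # with E = 68.95 GPa suggests an aluminum-like rod
print(Cn / (math.pi * Ct))  # drag ratio ~3.18 quoted in the caption
```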
\section{Results}
\label{sec:3}
In this section, we present and compare the simulation results for the post-buckling analysis of pre-stressed rods with fixed-fixed (FF) and pinned-fixed (PF) boundary conditions. The aim is to quantify the effect of pre-stress on the stability margin for both of these boundary conditions. We also compare the results with the cantilever (fixed-free) scenario to shed more light on how pre-stress can be used to manipulate the onset of oscillations (i.e., the critical point).
\begin{figure}
\begin{center}
\includegraphics[width=0.66\columnwidth]{Figure_5.png}
\caption{Configurations of the oscillating rods are depicted during one time-period when $F=20$ N/m, and the drag
force is quadratic [M] in the rod velocity.}
\label{fig:3a}
\end{center}
\end{figure}
In all the simulations an initially straight cylindrical rod is used with the properties given in Table 1. The pre-stress is generated by moving one end of the rod relative to, and towards, the other as shown in Figure 1. The pre-stress values are determined and controlled by the end-to-end distance, $L_{ee}$. We then apply a uniformly distributed follower load $F$ to the pre-stressed rod along the tangential direction of the rod's centerline. As the follower force exceeds a critical value $F_{cr}$, the buckled equilibrium is destabilized and flapping oscillations emerge. The simulation snapshots in Figure 2 show examples of how the shape of the rod evolves during one complete oscillation for all three boundary conditions (FF, PF and cantilever) and $F=20$ N/m $>F_{cr}$. For the pinned-fixed scenario, flapping oscillations emerge only when the follower force points from the pinned end towards the fixed end. Upon reversing the direction of the follower force, flapping oscillations disappear and stable equilibria evolve. The stable equilibrium shapes that evolve in this scenario with increasing follower force are shown in Figure 3. For the cantilever, the follower force causes flapping oscillations only when it points from the free end towards the fixed end. This is expected, as the instabilities are due to compressive stresses; tensile stresses do not lead to instabilities.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.92\columnwidth]{Figure_2.png}
\caption{In the pinned-fixed scenario, if the direction of the follower force is from the fixed end towards the pinned end, flapping oscillations do not occur. Instead, the stable equilibrium shape keeps evolving with increasing follower force.}
\label{fig:3b}
\end{center}
\end{figure}
In the next sub-section, we analyze the flapping oscillations by looking at the transfer of energy to and from the rod, and the variation of total energy stored in the form of strain and kinetic energies. This allows us to rationalize how the instability-driven flapping oscillations are sustained in steady state.
Then, we present results for the critical value of the follower force $F_{\mathrm{cr}}$ versus pre-stress measured by end-to-end distance $L_{\mathrm{ee}}/L$ for both fixed-fixed and pinned-fixed scenarios. Next, we examine how the planar beating frequency $\omega(F, L_{\mathrm{ee}}/L)$, both at the critical point and for values of the follower force $F > F_{\mathrm{cr}}$, depends on the pre-stress. To highlight the effect of pre-stress, we also report $F_{\mathrm{cr}}$ for a cantilever (fixed-free) rod, which has no pre-stress, and show its frequency response as well. Finally, we discuss the design implications of some of these observations.
\subsection{Energy Exchange During Flapping}
\label{sec:3_0}
Figure 4 shows how strain energy, kinetic energy, work done by follower force, and energy dissipated by fluid drag evolve as flapping oscillations emerge and reach a steady state in a pinned-fixed scenario for $F=F_{cr}$ and slack (related to end-to-end distance) $1 - L_{ee}/L = 0.3$. Snapshots of the shape show the rod flapping from the base state (A) to (C) via strain energy maxima (B) and symmetrically flapping back to configuration (E) via (D). In each cycle, the follower force does work to (i) increase the elastic energy stored in the rod via the strain field, (ii) increase the kinetic energy of the rod, and (iii) overcome the fluid dissipation (from A to B or C to D). In particular, we note the steep ramp-up in these intervals corresponding to an increase in strain energy. In contrast, between (B) and (C), or (D) and (E), the total mechanical energy stored in the system (strain energy and kinetic energy) continues to drive the oscillations, overcoming the negative work done by the follower force and, again, the fluid dissipation. Thus, the elastic and kinetic energies of the deforming rod mediate the transfer of energy from the active forces to the ambient fluid in each cycle.
When the rod shapes from (A) through (E), the complete cycle, are superimposed (top left graphic in Figure 4), it is evident that while flapping back the rod does not retrace its configuration. The graphic also shows trajectories of three points at $s=0.25L$, $s=0.50L$, and $s=0.75L$ on the rod. The points do not retrace their paths, but instead follow a figure-8-like loop. If the entire rod were to retrace its configuration while flapping back, i.e., if all points were to retrace their paths, the follower force could not have done any net work.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=1.88\columnwidth]{Figure_3.png}
\caption{Tracking strain energy, $E_B(t)= \int^L_0 \frac{1}{2}E\underline{\mathbf{I_m}}\vec \kappa \cdot \vec \kappa ds$, kinetic energy, $E_K(t)= \int^L_0 \frac{1}{2}m\vec v \cdot \vec v ds$, for $F=F_{cr}$ and slack $1 - L_{ee}/L=0.3$, dissipation energy, $W_{{\bf F}_M}(t)= \int^t_0 \int^L_0 {\bf F}_M \cdot \vec v ds d\tau $, and work done by follower force, $W_{F}(t)= \int^t_0 \int^L_0 F\vec t \cdot \vec v ds d\tau $ illustrates that intervals of positive and negative work done by follower force correspond to increase and decrease in strain energy, respectively, and that intervals of peak kinetic energy correspond to jumps in energy dissipation. Points on the rod are found to trace an 8-like shaped loop. On the right, strain energy, kinetic energy, as well as the snapshots of the rod shapes at the base state energy level (A, C, and E) and at the maximum of strain energy (B and D) show the exchange of energy during oscillations. }
\label{fig:4}
\end{center}
\end{figure*}
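The energy integrals defined in the caption of Figure 4 are evaluated along the discretized centerline. A planar sketch using the composite trapezoid rule (the quadrature choice and function names are ours) reads:

```python
def trapz(vals, ds):
    """Composite trapezoid rule on uniformly spaced nodal values."""
    return ds * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def strain_energy(kappa, EI, ds):
    """Planar specialization E_B = integral of 0.5*EI*kappa^2 ds."""
    return trapz([0.5 * EI * k * k for k in kappa], ds)

def kinetic_energy(speed, m, ds):
    """E_K = integral of 0.5*m*|v|^2 ds."""
    return trapz([0.5 * m * v * v for v in speed], ds)

# Toy check: uniform curvature kappa0 on a rod of length L gives
# E_B = 0.5*EI*kappa0^2*L exactly (trapezoid rule is exact for constants).
EI, mline, L, n = 29.23, 0.2019, 8.0, 101
ds = L / (n - 1)
print(strain_energy([0.05] * n, EI, ds))   # 0.5*29.23*0.05^2*8, about 0.292 J
```

The same bookkeeping, accumulated over time steps, yields the work and dissipation curves $W_F(t)$ and $W_{{\bf F}_M}(t)$ tracked in Figure 4.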
\subsection{Critical Force for Onset of Flapping}
\label{sec:3_1}
\begin{figure*} [t]
\begin{center}
\includegraphics[width=1.72\columnwidth]{Figure_4.png}
\caption{(a) Critical load for onset of oscillations, $F_{\mathrm{cr}}$ versus normalized decrease in end-to-end distance, $1- L_{\mathrm{ee}}/L$ for both pinned-fixed and fixed-fixed scenarios.
For $0.05 <1- L_{\mathrm{ee}}/L < 0.5$, the critical force $F_{\mathrm{cr}}$ increases as $1- L_{\mathrm{ee}}/L$, or pre-stress, increases. Normalized critical load for stress-free cantilever is $F_{\mathrm{cr}}L^3/(4\pi^2 EI)=0.0916$ (b) Strain energy of the base state, $E_B$ increases with pre-stress. It is also higher for fixed-fixed boundary condition than for the pinned-fixed condition.}
\label{fig:2}
\end{center}
\end{figure*}
The critical values of the distributed follower force (or critical follower force densities) are computed by numerically integrating the time-dependent equations (1)-(7) and seeking the point at which stable oscillations emerge as the follower load increases. Note that this is done using time integration and not via continuation methods. This procedure is repeated for several values of the pre-stress, $1-L_{\mathrm{ee}}/L$. Since our aim is to identify the parameter range that can be explored experimentally, we focus only on pre-stress values in the range $0.05 \leq 1- L_{\mathrm{ee}}/L \leq 0.5$.
As soon as the magnitude of the follower load is above the critical value, $F>F_{\mathrm{cr}}$, base states become unstable and oscillations emerge. Figure 5(a) shows the magnitudes of the critical follower load $F_{\mathrm{cr}}$ against the slack, $1-L_{\mathrm{ee}}/L$ for both fixed-fixed (FF) and pinned-fixed (PF). We find that (i) in both cases the critical follower force density increases in magnitude as the pre-stress in the rod increases (ii) for the same end-to-end distance, other things being equal, FF boundary condition has a larger critical point in comparison to PF, and (iii) the magnitude of critical follower load is nearly the same for linear drag, quadratic drag, or no fluid drag (discrepancies being $<2$\%). We can explain the first finding by looking at the strain energies of the base states shown in Figure 5(b). Larger slack corresponds to larger pre-stress which in turn implies that base state has a larger strain energy. Hence, as slack increases a larger follower force is required to overcome the larger barrier of elastic energy in order to initiate the flapping oscillations. Similarly, the second finding can also be explained by the fact that for a given pre-stress value, FF base states possess higher strain energy than do the PF base states. And finally, the third finding can be explained by the fact that critical point is governed by the linear stability of the system while undergoing small perturbations, therefore nonlinear [M] and linear [S] drags yield the same critical value. In addition, we surmise that the onset of oscillations and the onset of flutter are very close to one another for the parameter range investigated here since values of critical load in absence of drag are close to the values found with fluid dissipation.
Finally, by examining the slopes in Figure 5(a), we find that the rate at which the critical values increase with pre-stress is larger for PF than for FF; hence, the PF stability region is more sensitive to pre-stress than the FF stability region.
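For reference, the normalized cantilever critical load quoted in Figure 5(a) can be converted back to a dimensional force density using the rod properties in Table 1 (a quick check, not part of the solver):

```python
import math

E, I, L = 68.95e9, 4.24e-10, 8.0      # Young's modulus, second moment, length
f_norm = 0.0916                       # F_cr * L^3 / (4*pi^2*E*I), Fig. 5(a)

F_cr = f_norm * 4 * math.pi**2 * E * I / L**3
print(F_cr)   # about 0.21 N/m for the stress-free cantilever
```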
\subsection{Frequency of Flapping}
\label{sec:3_2}
We next examine the frequency of oscillations, i.e., the wave speed associated with the propagation of curvature along the arc-length as the rods cycle in configuration-time space. In all simulations we observe that for $F \geq F_{\mathrm{cr}}$ the oscillations eventually reach a stable state, implying that the rate of energy input into the system due to the action of the nonconservative follower forces balances the rate of energy dissipated by fluid drag. We track the oscillations for 40 seconds for each value of $F$, corresponding to a minimum of 8 and a maximum of 70 full oscillations once the stable state is attained.
The computational model described in Section 2 is used here to systematically investigate the effect of pre-stress and the follower force on the frequency of oscillations, $\omega(F, L_{\mathrm{ee}}/L)$, near the critical point as well as far from it, where $|F - F_{\mathrm{cr}}|/F_{\mathrm{cr}} > 1$. To better understand the results, we juxtapose the cantilever (stress-free) force-frequency curve with those of the fixed-fixed (FF) and pinned-fixed (PF) loading scenarios. Figure 6 illustrates the frequency of flapping oscillations for rods under various end-to-end distances and subjected to Morrison drag. The frequency values are cast on a log-log scale; shown alongside is the power law relating the frequency to the follower force, $\omega \sim F^{5/6}$, found theoretically using scaling arguments based on power input and dissipation rates \cite{sfb2}: in steady state, the rate at which energy enters the system through the work done by the nonconservative follower force balances the rate at which energy is dissipated by fluid drag.
Figure 6 illustrates that frequencies in both PF and FF cases converge to that of the cantilever as the pre-stress vanishes. Moreover, far from the critical point, it can be observed that force-frequency curves for all three loading scenarios and all pre-stress values collapse into one.
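The exponent of the collapsed force-frequency curves can be extracted by a least-squares fit in log-log space. The sketch below illustrates the procedure with synthetic data generated to follow $\omega \sim F^{5/6}$; the amplitude, force range, and noise level are hypothetical values, not taken from the simulations:

```python
import math
import random

# Hypothetical force-frequency data obeying omega = A * F^(5/6) with small
# multiplicative noise, standing in for the collapsed curves of Figure 6.
random.seed(0)
A = 0.3                                   # made-up amplitude
F = [10.0 * 2**k for k in range(8)]       # made-up force range
omega = [A * f**(5.0 / 6.0) * (1 + 0.01 * random.uniform(-1, 1)) for f in F]

# A least-squares slope in log-log space recovers the power-law exponent.
lx = [math.log(f) for f in F]
ly = [math.log(w) for w in omega]
mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
         / sum((a - mx)**2 for a in lx))
```

Applying the same fit to the measured frequencies of Figure 6 recovers the empirical exponent directly.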
\subsection{Implications for Design}
In this section we discuss some of the implications of this study for the design of synthetic active filaments in applications such as biomimetic soft robots or microfluidic devices. First, we comment on how pre-stress can potentially be used to control the oscillations; second, we discuss the role boundary conditions can play in regulating the dynamics of oscillations. All of these implications are grounded in, and hence relevant to, the range of parameters explored in this paper.
\subsubsection{Regulation by pre-stress}
Based on the results presented in Figure 5, we conclude that pre-stress can be effectively used to regulate the onset of oscillations and to control the stability margin in both FF and PF cases. Near the critical point, pre-stress can also be an effective parameter for controlling the frequency of the oscillations in both FF and PF scenarios; however, far from the critical point the frequency of oscillations becomes independent of both pre-stress and boundary conditions.
\subsubsection{Regulation from boundary constraints}
Another important observation is that in the PF scenario, if the follower force is directed from the fixed end towards the pinned end (F to P), no emergent oscillations are possible. Thus, all the PF results presented here can be produced only when the follower force is directed from the pinned end towards the fixed end (P to F). This feature can be used to manipulate the oscillations: altering the boundary conditions independently of the mechanism that generates the follower force can start or stop the flapping oscillations. For example, in FF scenarios, converting one clamped end into a pin joint can be used to suppress oscillations.
\begin{figure}
\includegraphics[width=8cm]{Figure_6.png}
\caption{Frequency for the Morrison [M] drag plotted in logarithmic scales as a function of the force density $F$ to illustrate two salient features: (i) as the follower force increases to values much larger than the critical values, the effect of the pre-stress diminishes--similar frequencies are observed for fixed-fixed (FF) and pinned-fixed (PF) scenarios far from criticality--and (ii) the frequencies in the limit $F \gg F_{\mathrm{cr}}$ scale roughly as $\omega \sim F^{5/6}$, consistent with our theoretical prediction.}
\label{fig:5}
\end{figure}
\section{Conclusions}
\label{sec:4}
In this paper we analyzed the post-buckling flapping oscillations of constrained slender structures subjected to tangential follower loads using a computational rod model. The scheme was benchmarked against established results for the critical buckling force of Beck's column \cite{sfb2}. We focused on slender rods that are straight in the stress-free state (i.e., having neither intrinsic curvature nor twist, and no axial tension), with boundary conditions being either both ends clamped, or one end clamped and the other pinned. By moving the two ends of the rod towards one another, the structure is forced to buckle. The effects of pre-stress and boundary conditions are thus systematically tested both on the emergence of buckling instabilities and on the post-buckling oscillations induced by the follower force. In these computations the inertia of the rod, the geometry, and the fluid drag coefficients are held fixed. We found that beyond a critical value of the distributed, compressive follower load the buckled shapes become unstable and oscillatory beating emerges. This critical value is larger for rods with fixed-fixed boundary conditions than for rods with pinned-fixed constraints. In both cases, the magnitude of the critical follower load increases as the magnitude of the pre-stress in the structure increases.
Far from criticality, i.e., for $F$ much greater than the critical value needed to initiate the oscillations, the response frequencies exhibit a power-law dependence on $F$ with an exponent of $5/6$. This exponent is explained by a power balance between the active energy pumped into the system by the nonconservative follower forces and the energy dissipated by fluid drag.
As mentioned earlier, we have found the critical force for the pinned-fixed condition to be smaller than that for the fixed-fixed boundary condition at the same value of the slack. This is consistent with previous work on animated filaments without pre-stress \cite{chelakkot}, where it is found that the critical load for the pinned-free scenario is smaller than that of the fixed-free (cantilever) one. A linear stability analysis for these two cases indicates that, for the fixed-free loading condition, non-trivial solutions emerge from the trivial state via a Hopf bifurcation with flapping (a pair of complex conjugate eigenvalues crossing the imaginary axis). For the pinned-free scenario, since there is a rotational degree of freedom at the pin and no energy penalty for free rotations about this point, linear stability suggests a simple global bifurcation (a single eigenvalue crossing zero), and the nonlinear stable state is a rotating coil. In our study, when the follower force is directed towards the pinned end while the other end is clamped, the strained rod cannot rotate about the pin; instead it deforms and reaches a state of static equilibrium in which the rod is highly curved in the vicinity of the pivot.
Our approach provides a platform to investigate the interplay between geometry, elasticity, dissipation, and activity, and to contribute towards designing bio-inspired, multi-functional synthetic structures to manipulate and control fluids at various length scales or to generate propulsion in soft robotics. Further extensions of this study should examine the stability margin and the dynamics of emergent oscillations subjected to three-dimensional perturbations. Moreover, the fluid-structure interaction model can be improved to incorporate two-way coupling and hence analyze, inter alia, an ensemble of filaments and their interactions. Finally, continuation and homotopy methods using Newton-GMRES \cite{Anwar}, or variants adapted to use time-steppers to trace both unstable and stable solution branches, will complement the analysis presented here.
\vspace{0.5cm}
\textbf{Conflict of Interest}: The authors declare that they have no conflict of interest.
\section{Introduction}
\label{sec:intro}
Ultra-intense laser interactions with dense targets represent an interesting regime, both from a fundamental and an applied perspective, that has not yet been exhaustively explored. One less explored phenomenon in this regime is the formation of electron and ion density peaks due to a laser pulse that strongly reflects from a dense target. There are papers that discuss this process -- sometimes called ponderomotive steepening -- going back to Estabrook et al.~1975\cite{estabrook1975two}. Figure~\ref{fig:sketch} provides a qualitative sketch of the physics involved in this laser-plasma interaction. First, a normally incident, linearly polarized laser makes a strong reflection from a dense plasma. The interference between the incident and reflected pulse produces a standing wave pattern (Fig.~\ref{fig:sketch}a). The ponderomotive force associated with this standing wave has a strong effect on the electron distribution (Fig.~\ref{fig:sketch}b) and, over time, peaks form in the density of both the electrons and ions (Fig.~\ref{fig:sketch}c). Readers who are familiar with Kruer's 1988 textbook \cite{kruerbook} will recall the discussion of this phenomenon there. Ponderomotive steepening also draws many parallels to theoretical and computational work that considers the standing electromagnetic (EM) wave formed by crossing two laser pulses to generate plasma optics such as plasma gratings\cite{plaja199diffraction, Sheng2003} and so-called transient plasma photonic crystals\cite{Lehmann2016,Lehmann2017,Lehmann2019} which are phenomena that may have useful applications in the future (see discussions in Refs.~\cite{Sheng2003,Lehmann2017}). From an experimental point of view, ponderomotive steepening only requires one laser pulse, and the high densities near the critical surface allow for larger transverse electric fields than with counter-propagating lasers in low density media.
\begin{figure*}
\includegraphics{SketchPonderomotive.pdf}
\caption{Sketch of the ponderomotive steepening process. As illustrated in (a), a normally incident laser pulse reflects at the critical density of a plasma and forms a standing electromagnetic wave. This causes the electrons to form peaks near the extrema (separated by $\approx \lambda/2$) of the standing wave via the ponderomotive force (b). The modification of the electron density creates a charge imbalance (sustained by the standing wave), which accelerates ions towards the electron peaks. In time, this modifies the density of the plasma as illustrated in~(c). Note that (b) and (c) only include the standing wave region from (a). } \label{fig:sketch}
\end{figure*}
We are motivated to return to this topic with fresh eyes in part due to the maturation of technologies to produce intense laser pulses at mid-infrared (IR) wavelengths ($2$~\si{\um} $\lesssim \lambda \lesssim 10$~\si{\um}) \cite{lasermag}. This presents an opportunity to examine the wavelength dependence of intense laser-matter interactions to see if theoretical models developed from studying laser interactions at shorter wavelengths remain valid at longer wavelengths (e.g.~Ref.~\cite{ngirmang2017particle}, and ongoing research efforts\cite{MURIresearch}). As discussed later, the density peaks that form with ponderomotive steepening are separated by approximately half the laser wavelength. It is therefore challenging to detect and resolve these density peaks in near-IR or shorter-wavelength laser interactions. There have been many experiments confirming that the ponderomotive force does steepen the plasma profile near the target as expected (e.g.~\citet{Fedosejevs_etal1977,Gong_etal2016}), and researchers have found evidence in experiments with counter-propagating near-IR laser pulses that the interference shapes the plasma distribution in a low density medium (e.g.~\citet{suntsov2009femtosecond}). However, multiply-peaked ponderomotive steepening has not yet been directly observed with interferometry or by other means. We aim to provide useful analytic insights for experimentalists working to demonstrate this effect.
A challenge for connecting theory to observation is that ponderomotive steepening is simplest to model and has larger longitudinal electric field strengths when the laser interactions are at normal incidence, whereas at the highest intensities normal incidence experiments are rare because of the potential damage that the reflected pulse could do to optical elements. There are, however, methods to protect optics from the reflected pulse. Normal incidence experiments were conducted, for example, at $\approx 10^{18}$~W~cm$^{-2}$ peak intensities with $\approx 3$~mJ pulses at a kHz repetition rate in Refs.~\cite{Morrison_etal2015,Feister_2017}. Although the present paper is not tied to modeling interactions from a particular laser system, it is important to note that normal incidence experiments can be performed. As will be discussed, ponderomotive steepening is not typically thought of as an ion acceleration process, but some ions do reach significant energies due to the charge separation caused by the ponderomotive force, and experiments could investigate this regime. According to estimates that agree with our 2D(3v) PIC simulations, under the right conditions and laser parameters these interactions have the potential to accelerate ions to energies exceeding 100~keV. Experiments of this kind would also be interesting as a new type of code validation experiment for high intensity laser-plasma interactions. Both during and after the laser interaction, ions move and the electron and ion density profiles change over time, which can be investigated with interferometry\cite{Feister_etal2014,Grava_etal2008, Filevich_etal2009} and measurements of escaping ion energies (e.g.~Ref.~\cite{morrison_etal2018}). The simplicity and symmetry of normal incidence interactions would be helpful for comparing experiment to simulation and theory in a straightforward way.
In Sec.~\ref{sec:theory}, we provide a brief review of the physics of ponderomotive steepening and identify the relevant timescales for ion motion using simple analytic models. In Sec.~\ref{sec:sims} we describe 2D(3$v$) PIC simulations that exhibit multiply-peaked ponderomotive steepening. In Sec.~\ref{sec:results} the simulation results are presented and compared to the analytic models discussed in Sec.~\ref{sec:theory}. Finally, we address implications of our results in the concluding sections.
\section{Ponderomotive Steepening and Ion Acceleration}\label{sec:theory}
The traditional analytic approach for ponderomotive steepening considers a steady state solution to the fluid equations, to which a term for the ponderomotive force is added. The electric field is then assumed to take a particular form based on the geometry of the problem and to allow for numerical solutions or approximate solutions~\cite{estabrook1975two,Lee_1977, Jones_1981,Estabrook_1983,wenda1988ponderomotive,kruerbook}. These approximations limit the validity of the conclusions, and the steady state solution provides little insight into the dynamics of the phenomenon. We investigate these dynamics by developing a simple model to estimate the longitudinal electric fields experienced by the ions and comparing the predictions to PIC simulations. Our simple model is similar in many ways to an analytic model described in a recent paper by \citet{Lehmann2019} that considers the dynamics of the electron motion for the case of lower intensity counter-propagating laser beams in a low density medium with 1D Vlasov simulations. Our paper is complementary to theirs because we consider standing waves that form from the normal incidence reflection of intense laser pulses from an overdense target preceded by a shelf at $\approx$1/20th of the critical density and, as just mentioned, we focus on the dynamics of the ions. Our 2D(3$v$) PIC simulations also include the focusing of the laser. Where appropriate, we provide comments for those wishing to compare our work to \citet{Lehmann2019}. The timescales and intensity thresholds we develop are very similar to their models.
\subsection{A simple model for ponderomotive and electrostatic forces in ponderomotive steepening}
As sketched in Fig.~\ref{fig:sketch}, the laser creates a charge imbalance due to the ponderomotive force on the electrons, which in turn creates a longitudinal electric field to accelerate the ions towards the electron peaks. We develop a simple model that balances the ponderomotive force with the Coulomb force associated with the charge separation.
A charged particle in an inhomogeneous EM field experiences the ponderomotive force, which is a cycle-averaged force that models the motion of these particles on timescales larger than the laser period. For a particle of mass $m$ with charge $e$ and an electric field with frequency $\omega$ and amplitude {\bf E}, the ponderomotive force is given by
\begin{equation}\label{eq:ponderEq}
{\bf{F}}_p = -\frac{e^2}{4 m \omega^2} \nabla {\bf E^2(x)},
\end{equation}
where ${\bf E}(x)$ is the slowly varying amplitude of the oscillating electric field; the factor of four (rather than two) in the denominator reflects the average over a laser cycle. While this effect is experienced by both electrons and ions, for the laser intensities we are concerned with here, the much more massive ions are hardly affected by the ponderomotive force.
We consider a linearly polarized plane electromagnetic wave propagating in the $+x$ direction and reflecting off a semi-infinite overdense plasma at $x>0$. Similar to Refs.~\cite{kemp2009hot,may2011mechanism}, we assume that the plasma is a perfect conductor and reflects 100\% of the light, resulting in a standing EM wave ($x<0$) with electric and magnetic fields described by
\begin{eqnarray}\label{eq:sw}
E_z &= 2 E_0 \sin \left(\frac{2\pi x}{\lambda} \right)\sin(\omega t) \\
B_y &= 2 B_0 \cos \left( \frac{2\pi x}{\lambda} \right)\cos(\omega t).
\end{eqnarray}
where $E_0$ is the electric field strength the laser would have if there were no reflection and $B_0 = E_0 / c$. In real experiments we do not expect 100\% reflectivity. Yet the reflectivity can be high for laser interactions near, but not significantly above, the threshold for relativistic effects: the high temperatures produce a nearly collisionless plasma, while relativistic absorption is not yet pronounced \cite{levy2014petawatt,Orban_etal2015}. Inserting Eq.~\ref{eq:sw} into Eq.~\ref{eq:ponderEq} yields the longitudinal ponderomotive force associated with the standing wave,
\begin{equation}\label{eq:pf}
{{F}}_p = -\frac{\lambda e^2 E_0^2}{2 \pi m_e c^2}\sin \left(\frac{4\pi x}{\lambda}\right),
\end{equation}
which will be compared to the Coulomb force associated with the charge separation.
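Equation~\ref{eq:pf} can be verified numerically by differentiating the squared standing-wave amplitude from Eq.~\ref{eq:sw} and applying Eq.~\ref{eq:ponderEq}. The sketch below does this with a finite-difference gradient; the wavelength and field amplitude are illustrative values, not parameters from our simulations:

```python
import math

# Physical constants (SI units)
e, m_e, c = 1.602176634e-19, 9.1093837015e-31, 299792458.0

# Assumed laser parameters for this check (illustrative only):
lam = 800e-9                     # wavelength, m
E0 = 2.7e12                      # free-space field amplitude, V/m
omega = 2 * math.pi * c / lam

def standing_wave_amplitude(x):
    """Amplitude of E_z in the standing wave, Eq. (2)."""
    return 2 * E0 * math.sin(2 * math.pi * x / lam)

def F_p_numeric(x, h=1e-12):
    """Eq. (1): ponderomotive force from a finite-difference gradient of E^2."""
    grad_E2 = (standing_wave_amplitude(x + h)**2
               - standing_wave_amplitude(x - h)**2) / (2 * h)
    return -e**2 / (4 * m_e * omega**2) * grad_E2

def F_p_closed(x):
    """Eq. (4): closed-form longitudinal ponderomotive force."""
    return (-lam * e**2 * E0**2 / (2 * math.pi * m_e * c**2)
            * math.sin(4 * math.pi * x / lam))
```

The two expressions agree to the accuracy of the finite difference at any $x$ away from the nodes.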
\subsubsection{Sinusoidal Density Variation Model}
\label{sec:sine}
We begin with a simple model that balances the ponderomotive force on the electrons with the Coulomb force associated with a sinusoidal density variation in the pre-plasma. We assume that before reaching the overdense plasma at $x>0$ the laser travels through a constant, sub-critical density shelf. We estimate the strength and spatial dependence of the electrostatic force in this shelf by choosing a distribution to perfectly balance the ponderomotive force when integrated with the one-dimensional Poisson equation. This produces a sinusoidal electron density modulation of the form
\begin{equation}
n_{\rm ele} = n_0 + n_e\cos \left(\frac{4\pi x }{ \lambda}\right), \label{eq:density}
\end{equation}
where $n_0$ is the average electron density in the plasma (i.e. the electron density at that location in the plasma before the laser pulse arrives) and $n_e$ describes the amplitude of the density modulation. Equation~\ref{eq:density} is useful for gaining qualitative insight into the ponderomotive steepening process. We remind the reader that the ponderomotive force is time averaged, so this simple model does not fully capture the physics involved. Moreover, as presented in the following sections, simulations indicate that the electron distribution is more strongly peaked than this.
Note that because the local electron density must always be greater than or equal to zero, $n_e$ in Eq.~\ref{eq:density} must not exceed $n_0$, as one cannot remove more electrons than are available in the plasma. Since the laser only travels in the sub-critical-density region of the plasma, $n_0$ must also be less than the critical density, $n_{\rm crit} = {4\pi^2\varepsilon_0 m_e c^2}/{\lambda^2e^2},$ and the maximum electron density is therefore limited. Integrating Eq.~\ref{eq:density} with the one-dimensional\footnote{For experimental beam profiles, we assume that the laser spot size is much larger than $ \lambda/2$. The laser focus is well into the target for our simulations.} Poisson equation results in a quasi-static electric field in the longitudinal (${x}$) direction of the form
\begin{equation}\label{eq:ef}
E = -\frac{e n_e \lambda}{4\pi \varepsilon_0}\sin \left(\frac{4\pi x}{ \lambda} \right).
\end{equation}
According to Eq.~\ref{eq:ef}, the peak longitudinal electric field is
\begin{equation}
E_{\rm max} = \frac{n_e}{n_{\rm crit}} \frac{\pi m_e c^2}{e \lambda}
\end{equation}
which is equivalent to
\begin{equation}\label{eq:emax1}
E_{\rm max} = (1.6 \times 10^{12} \, {\rm V/m}) \times \left( \frac{n_e}{n_{\rm crit} }\right) \left( \frac{1 \, \si{\um}}{\lambda}\right).
\end{equation}
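The prefactor in Eq.~\ref{eq:emax1} follows directly from the physical constants; a minimal numerical check, using the reference values $\lambda = 1$~\si{\um} and $n_e = n_{\rm crit}$, is:

```python
import math

# Physical constants (SI units)
e, m_e, c = 1.602176634e-19, 9.1093837015e-31, 299792458.0

lam = 1e-6                                          # 1 um reference wavelength
# Eq. (8) evaluated with n_e = n_crit: peak field pi * m_e c^2 / (e * lambda)
E_max_prefactor = math.pi * m_e * c**2 / (e * lam)  # V/m
```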
Note that $n_e$ depends on the intensity of the laser. Equating the peak ponderomotive force (Eq.~\ref{eq:pf}) to the peak electrostatic force from Eq.~\ref{eq:ef}, one finds that in this model the laser is limited to displacing electron densities up to
\begin{align}\label{eq:nmax}
n_{\rm e, max}&= \frac{4 I }{m_e c^3} \nonumber \\&= (1.6 \times 10^{21} \, {\rm cm^{-3}})\times \left( \frac{I}{10^{18} \, {\rm W~ cm^{-2}}} \right)
\end{align}
where the laser intensity $I= c \varepsilon_0 |E_0|^2/2$ has been used to simplify the expression, and $n_{\rm e, max}$ cannot exceed the initial electron density in the plasma. In this model, the density modulation saturates at a critical intensity of
\begin{align} \label{eq:critIntensity}
I_{\rm crit} &= \frac{m_e c^3 n_0}{4} \nonumber \\&= ( 6.8 \times 10^{17}~ {\rm W ~cm^{-2})} \times \left( \frac{n_0}{n_{\rm crit}}\right) \left(\frac{1 \si{\um}}{\lambda}\right)^2.
\end{align}
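The numerical coefficients in Eqs.~\ref{eq:nmax} and \ref{eq:critIntensity} can be reproduced from the physical constants; the sketch below evaluates both at the reference point $\lambda = 1$~\si{\um}, $I = 10^{18}$~W~cm$^{-2}$, and $n_0 = n_{\rm crit}$:

```python
import math

# Physical constants (SI units)
e, m_e, c = 1.602176634e-19, 9.1093837015e-31, 299792458.0
eps0 = 8.8541878128e-12

lam = 1e-6                                    # reference wavelength, m
I = 1e18 * 1e4                                # 1e18 W/cm^2 expressed in W/m^2

# Eq. (9): maximum electron density the laser can displace
n_e_max = 4 * I / (m_e * c**3)                # m^-3

# Critical density at 1 um, then Eq. (10) with n0 = n_crit
n_crit = 4 * math.pi**2 * eps0 * m_e * c**2 / (lam**2 * e**2)   # m^-3
I_crit = m_e * c**3 * n_crit / 4              # W/m^2
```

Converting units ($10^{-6}$ per m$^{-3}\to$cm$^{-3}$, $10^{-4}$ per W~m$^{-2}\to$W~cm$^{-2}$) recovers the quoted coefficients.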
For this critical intensity, the normalized vector potential $a_0$ for the laser ($a_0 = eE_0 / m_e \omega c$), is
\begin{equation}
a_{0,\rm{crit}} = \sqrt{\frac{n_0}{2n_{\rm crit}}},
\end{equation}
or in terms of the electron plasma frequency, $\omega_{pe} = \sqrt{n_e e^2 / m_e \varepsilon_0}$ (using $n_e = n_0$),
\begin{equation}
a_{0,\rm{crit}} = \sqrt{\frac{1}{2}} \frac{\omega_{pe}}{\omega}.
\end{equation}
Since $a_{0,\rm{crit}} \lesssim 0.7$, it is clear that the applicability of this model does not extend to the strongly relativistic regime. Intensities somewhat above this limit are considered in the next subsection. We note the similarity of this estimate to the wave-breaking limit in laser wake-field acceleration \cite{TajimaDawsonWakefield}; in \citet{Lehmann2019} this intensity threshold marks the transition between what they call the ``collective electron" regime and the ``single electron bouncing" regime. For high electron temperatures, this type of model could be extended by considering the Bohm-Gross frequency\cite{BohmGross} as in Ref.~\cite{Lehmann2019}. Laser-driven instabilities would also play a role in certain regimes\cite{kruerbook,drake2010high}.
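The relation $a_{0,\rm crit} = \sqrt{n_0/2n_{\rm crit}}$ can be checked by computing $a_0$ directly from $I_{\rm crit}$; a minimal sketch, taking $n_0 = n_{\rm crit}$ for concreteness:

```python
import math

# Physical constants (SI units)
e, m_e, c = 1.602176634e-19, 9.1093837015e-31, 299792458.0
eps0 = 8.8541878128e-12

lam = 1e-6
omega = 2 * math.pi * c / lam
n_crit = 4 * math.pi**2 * eps0 * m_e * c**2 / (lam**2 * e**2)
n0 = n_crit                               # densest shelf the laser can enter

I_crit = m_e * c**3 * n0 / 4              # Eq. (10)
E0 = math.sqrt(2 * I_crit / (c * eps0))   # from I = c * eps0 * |E0|^2 / 2
a0 = e * E0 / (m_e * omega * c)           # normalized vector potential
a0_pred = math.sqrt(n0 / (2 * n_crit))    # Eq. (11)
```

For $n_0 = n_{\rm crit}$ both expressions give $a_0 = 1/\sqrt{2} \approx 0.707$, the threshold quoted in the text.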
\subsubsection{Maximum Depletion Limiting Case}
\label{sec:maxdeplete}
At laser intensities significantly above the critical estimate derived in the previous subsection, the electrons are more strongly peaked than predicted by Eq.~\ref{eq:density} and our sinusoidal model breaks down, as demonstrated by simulation results that will be presented later. There is, however, a simple way to determine the maximum longitudinal electric fields in this limiting case. If all of the available electrons are evacuated to the peaks, the maximum electric field in this maximum depletion regime is a factor of $\pi$ greater than in the sinusoidal model. This comes from integrating the charge density over the depletion region ($e n_0 \times \lambda/4$), giving a maximum longitudinal electric field of
\begin{align}
E_{\rm max} &= \frac{e n_0 }{\varepsilon_0} \left(\frac{\lambda}{4} \right)= \frac{n_0}{n_{\rm crit}} \frac{\pi^2 m_e c^2}{e \lambda} \nonumber \\
&= (5 \times 10^{12} \, {\rm V/m}) \times \left( \frac{n_0}{n_{\rm crit} }\right) \left( \frac{1 \, \si{\um}}{\lambda}\right).
\end{align}
This result is notable simply in that it implies that the longitudinal electric field is enhanced (relative to Eq.~\ref{eq:emax1}) at intensities slightly exceeding $I_{\rm crit}$ from Eq.~\ref{eq:critIntensity}, rather than being suppressed.
\subsection{Timescale of the ion acceleration}\label{sec:timescale}
This subsection determines a timescale for ion motion (for ions to reach an electron peak), which will be useful for comparison to the duration of the laser pulse. If the timescale for ion motion is longer than the laser pulse duration, then we characterize this as the `short pulse' regime. If instead, the laser pulse duration is significantly longer than this timescale, we label this as the `long pulse' regime.
We assume the plasma to be an initially neutral mixture of electrons and ions with charge $+Ze$, where $Z$ is the average ionization. Now we consider the electrostatic force on an ion of mass $m_i$ between two of the electron peaks. Following the sinusoidal model, we focus on the ions at a distance of $\lambda/8$ or less from an electron peak, as they reach the electron peak most quickly (the ions farthest away are considered in Appendix~\ref{ap:maxIonE}), and we approximate the electric field as linear in this region (matching the slope of Eq.~\ref{eq:ef} near its root), or
\begin{equation}\label{eq:force}
F = -\frac{4Ze^2I}{m_e c^3 \varepsilon_0} x.
\end{equation}
This results in simple harmonic motion with an angular frequency of
\begin{equation}
\omega_{\rm ion} = \sqrt{\frac{4 Z e^2 I }{m_i m_e c^3 \varepsilon_0}}.
\end{equation}
We use this equation to compute the oscillation period of the ion motion. Since we are primarily interested in the dynamics of the ion density peak growth, we are concerned with the timescale for an ion to move from its initial location to the electron density peak. This timescale is equivalent to one-quarter of the ion oscillation period (Appendix~\ref{ap:freq}), which is
\begin{equation}\label{eq:timescale}
\tau_{\rm ion} =\frac{\pi}{4}\sqrt{\frac{m_i m_e \varepsilon_0 c^3}{Ze^2 I}},
\end{equation}
where we note this formula is only valid for $I\le I_{\rm crit}$ where $I_{\rm crit}$ is given by Eq~\ref{eq:critIntensity}. For higher intensities, as discussed in Sec.~\ref{sec:sine}, the maximum electron density that the laser displaces is limited by the number of available electrons and critical density. This results in a minimum timescale of
\begin{align}\label{eq:timescale_min}
\tau_{\rm ion, min} &=\frac{\pi}{2}\sqrt{\frac{m_i \varepsilon_0}{Ze^2 n_0}}\nonumber \\ &= (51 \, {\rm fs}) \left(\frac{n_{\rm crit}}{n_0} \right)^{1/2} \left( \frac{\lambda}{1~ \si{\um}}\right) \left( \frac{m_{\rm i}}{2 Z m_p} \right)^{1/2}
\end{align}
where in the approximation we have for convenience assumed $Z=1$ and $m_i \approx 2 m_p$ where $m_p$ is the mass of a proton.
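The 51~fs coefficient in Eq.~\ref{eq:timescale_min} can be reproduced numerically; the sketch below uses $Z = 1$, $m_i = 2m_p$, and $n_0 = n_{\rm crit}$ at $\lambda = 1$~\si{\um}, matching the approximation above:

```python
import math

# Physical constants (SI units)
e, m_e, m_p = 1.602176634e-19, 9.1093837015e-31, 1.67262192369e-27
c, eps0 = 299792458.0, 8.8541878128e-12

lam = 1e-6
Z, m_i = 1, 2 * m_p                       # Z = 1, m_i ~ 2 m_p as in the text
n0 = 4 * math.pi**2 * eps0 * m_e * c**2 / (lam**2 * e**2)   # n_crit at 1 um

# Eq. (15): minimum timescale for an ion to reach an electron peak
tau_min = (math.pi / 2) * math.sqrt(m_i * eps0 / (Z * e**2 * n0))   # seconds
```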
Equations~\ref{eq:timescale} and \ref{eq:timescale_min} are represented in Fig.~\ref{fig:estimate} which illustrates the division between the short pulse and long pulse regime as a function of laser intensity and wavelength. The laser wavelength does not appear in Eq.~\ref{eq:timescale}, which is why at low intensities in Fig.~\ref{fig:estimate} the timescale does not depend on wavelength.
At higher intensities the separation between the two regimes does depend on wavelength because our minimum timescale (Eq.~\ref{eq:timescale_min}) depends on the initial plasma density, where $n_{\rm crit}$ does depend on wavelength. Above this critical intensity, according to the sinusoidal model, the maximum electron density that the laser could displace exceeds the available number of electrons in the plasma near the electron peak. It should be noted that Eq.~\ref{eq:timescale} has the same scaling with parameters as the time estimate in \citet{Lehmann2019} for an ion ``grating" to develop in the standing wave.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{Timescale.pdf}
\vspace{-0.5cm}
\caption{The division between the short pulse and long pulse regime as a function of laser intensity and for a variety of wavelengths (for $n_0 \approx n_{\rm crit}$) for our simple model. The timescale on the vertical axis is the timescale of ion motion from electrostatic forces in ponderomotive steepening. The dashed line represents the timescale for a shelf density of $n_{\rm crit}/20$, pertaining to the simulations in this paper. The individual points on the graph represent the full pulse duration for our simulations (scaled by ion mass). } \label{fig:estimate}
\end{figure}
\subsubsection{Maximum ion velocities and energies}\label{sec:maxVelEnergy}
We assumed simple harmonic motion to obtain $\tau_{\rm ion}$ in Eq.~\ref{eq:timescale}. This approach also provides a characteristic ion energy which can be compared to our simulations. Assuming simple harmonic motion with an amplitude of $\lambda/8$ and an available electron density of $n_{\rm e}$ we have an ion velocity that increases with time as
\begin{equation}\label{eq:velion}
v_{\textnormal{ion}} \approx \frac{\pi}{4} \sqrt{\frac{Z m_e}{m_i}}\sqrt{\frac{n_{\rm e}}{n_{\rm crit}}}c ~\sin\left(\frac{\pi}{2}\frac{t_{\rm SW}}{\tau_{ \rm ion}}\right),
\end{equation}
where $t_{\rm SW}$ is time elapsed since the standing wave fields began. Although Eq.~\ref{eq:velion} does not explicitly depend on the laser wavelength, as mentioned earlier our expression for $\tau_{\rm ion}$ is only valid at laser intensities below the critical intensity (Eq.~\ref{eq:critIntensity}) which does depend on wavelength. Since shorter wavelength lasers have a higher critical intensity, one can reach much smaller values of $\tau_{\rm ion}$ as illustrated in Fig.~\ref{fig:estimate}, which would allow the ion velocity (Eq.~\ref{eq:velion}) to grow more quickly. But this growth is limited by the duration of the laser pulse if we are considering the short pulse regime ($t_{\rm SW} \ll \tau_{\rm ion}$).
For a sufficiently long duration laser pulse the standing wave fields will last long enough that $t_{\rm SW}$ approaches $\tau_{\rm ion}$. From Eq.~\ref{eq:velion}, it is straightforward to show that this implies a maximum kinetic energy exceeding 100~keV,
\begin{align}\label{eq:kemax}
{\rm{KE}}_{\rm max} &\approx \frac{\pi^2}{32}Zm_e\frac{n_{\rm e}}{n_{\rm crit}} c^2 \sin^2\left(\frac{\pi}{2}\frac{t_{\rm SW}}{\tau_{ \rm ion}}\right) \nonumber\\ &\approx 157.6\textnormal{~keV} \times Z\left(\frac{n_{\rm e}}{n_{\rm crit}}\right) \sin^2\left(\frac{\pi}{2}\frac{t_{\rm SW}}{\tau_{ \rm ion}}\right) .
\end{align}
Interestingly, this expression is independent of wavelength except through the wavelength dependence of $n_{\rm crit}$. We note that this model is limited in that it neglects the motion of ions initially farther than $\lambda/8$ from an electron peak, which require a longer pulse to reach maximum energy, and it approximates the field as linear. Alternatively, one could find the maximum energy from the work done by the electric field; this is included in Appendix~\ref{ap:maxIonE}.
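The 157.6~keV coefficient in Eq.~\ref{eq:kemax} is simply $(\pi^2/32)\,m_e c^2$ expressed in keV (for $Z = 1$, $n_e = n_{\rm crit}$, and $\sin^2 = 1$); a one-line numerical check:

```python
import math

m_e_c2_keV = 510.99895                      # electron rest energy, keV

# Eq. (18) prefactor: (pi^2 / 32) * m_e c^2, with Z = 1 and n_e = n_crit
KE_max = (math.pi**2 / 32) * m_e_c2_keV     # keV
```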
\section{Particle-In-Cell Simulations}
\label{sec:sims}
Multiply-peaked ponderomotive steepening is examined numerically with implicit 2D(3$v$) PIC simulations performed with the LSP PIC code\cite{Welch_etal2004}. The initial conditions are such that we are in the short pulse regime of our model and we have exceeded the critical intensity for our model (Fig.~\ref{fig:estimate}). For these simulations, an $x-z$ Cartesian geometry is used, where the laser propagates in the $+x$ direction and the polarization is in the $z$ direction. The simulations have a spatial resolution of 25~nm $\times$ 25~nm ($\lambda/32 \times \lambda/32$) and were run for 400~fs with a 0.1~fs time step.
To isolate the dynamics of the ion peak formation process we consider an idealized geometry of a rectangular target with an extended pre-plasma shelf. The plasma is assumed to be singly ionized with fixed ionization. This choice is made to prevent the critical surface from moving significantly due to ionization caused by the laser pulse. Ponderomotive steepening still occurs in simulations when the critical surface moves forward from ionization (e.g.~Refs.~\cite{Orban_etal2015,ngirmang2017particle}) but we ignore this effect in order to focus on the electron and ion dynamics. In the laser propagation direction, the target consists of a 7~\si{\um} long constant sub-critical density plasma shelf ($n = 8.594 \times 10^{19}$ cm$^{-3}$ $\approx n_{\rm crit}/20$) with a sharp interface to a 15~\si{\um} overdense target ($n = 10^{23}$ cm$^{-3}$ $\approx 60 n_{\rm crit}$) as illustrated in Fig.~\ref{fig:init_conditions}. In the polarization direction, the target is 30~\si{\um} wide. The ions are modeled as collisionless, which is discussed in Sec.~\ref{sec:ion_motion}.
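The quoted grid spacing and density ratios can be checked against the critical density for the 800~nm pulse used in these simulations; a short numerical sketch:

```python
import math

# Physical constants (SI units)
e, m_e, c = 1.602176634e-19, 9.1093837015e-31, 299792458.0
eps0 = 8.8541878128e-12

lam = 800e-9                                  # simulation laser wavelength
n_crit = (4 * math.pi**2 * eps0 * m_e * c**2
          / (lam**2 * e**2)) * 1e-6           # critical density, cm^-3

cell = lam / 32                               # quoted grid resolution (25 nm)
shelf_ratio = 8.594e19 / n_crit               # shelf density / n_crit (~1/20)
target_ratio = 1e23 / n_crit                  # overdense region / n_crit (~60)
```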
\begin{figure}
\includegraphics[width=1\linewidth]{init_dens.pdf}
\caption{Initial conditions for the 2D(3$v$) PIC simulations. The laser propagates in the $+x$ direction with a rectangular target composed of an extended constant under-dense pre-plasma shelf region preceding an overdense region. } \label{fig:init_conditions}
\end{figure}
We describe three different simulations with targets composed of fully ionized hydrogen, deuterium, and tritium ions in order to investigate different charge-to-mass ratios. These simulations all keep the laser intensity and initial target electron and ion number densities constant. The overdense region is given a number density similar to our group's previous work~\cite{Orban_etal2015}. The simulations were initialized with 9 particles per cell for the electrons and 7 particles per cell for the ions with initial thermal energies of 1~eV.
We consider an 800~nm wavelength, normally incident laser pulse propagating in the $+x$ direction that would reach a peak intensity of \Wcm{18} if no target were present. The pulse duration is 42~fs full width at half maximum (FWHM) with a sine-squared envelope and a Gaussian spot size of 1.5~\si{\um} (FWHM). These parameters are similar to those of the Ti:Sapphire kHz repetition rate laser system described in Refs.~\cite{Morrison_etal2015,Orban_etal2015,Feister_2017}. The laser focus is set at the back of the target, as shown in Fig.~\ref{fig:init_conditions}, in order to create a large region over which ponderomotive steepening can occur. We use these parameters to explore the short pulse regime of our model with $t_{\rm SW} < \tau_{\rm ion}.$
\begin{figure*}
\centering
\includegraphics{dens_diff_2d_plot.pdf}
\vspace{-1.5em}
\caption{Ion density near the reflection point for the deuterium simulation. The laser finishes reflecting around 130~fs from the beginning of the simulation (a), although the peaks continue to grow as shown at 260~fs (b) and then begin to dissipate as illustrated at 390~fs (c). The width of the box (in $z$) represents the region considered in Fig.~\ref{fig:growth_plt} and the entire box represents the region considered for ion trajectories in Fig.~\ref{fig:init_trajectories}. This density peak growth process including the electron density is highlighted in the supplemental video included with this article. } \label{fig:DensitySnapShot}
\end{figure*}
\section{Results}\label{sec:results}
\subsection{Peak Formation and Density Profile Modification}\label{peakFormation}
Figure~\ref{fig:DensitySnapShot} shows snapshots of the ion density from the deuterium simulation at three different times. The standing EM wave causes the electrons to form peaks which, over time, produce peaks in the ion density separated by approximately $\lambda/2$ throughout the under-dense region. The hydrogen and tritium simulations show similar behavior, with the growth of the ion peaks happening sooner for lighter ions and later for more massive ions. In all three simulations we observe more than 10 ion density peaks in the 7~\si{\um} long underdense region.
\begin{figure}
\includegraphics[width=1\linewidth]{FirstPeakGrowth.pdf}
\vspace{-0.9cm}
\caption{Change in density of the first peak in the ion density. The lines represent the maximum density of the first ion peak averaged over the width of the laser pulse. This is calculated by averaging the maximum density for each value of $z$ in the region -2 \si{\um} $< z < 2$ \si{\um}, where error bars represent the standard deviation. We note that the exact density at the peak depends on the cell size (and the sharpness of the peak), so this graph reflects densities in the region near the peak more than the peak value itself. } \label{fig:growth_plt}
\end{figure}
As mentioned, if no target were present, the laser pulse in this simulation would reach \Wcm{18}. Instead, the laser is focused many microns into the target, making the intensity near the sharp interface much lower than it would be in the vacuum case. In our simulations the intensity at the sharp interface is $\approx 2.6 \times$\Wcm{17}. According to Eq.~\ref{eq:critIntensity}, for our wavelength and plasma density $I_{\rm crit} \approx 5 \times$\Wcm{16}. Our simulations therefore explore the regime where the intensity is about five times larger than this threshold. Regarding the timescale of ion motion, for these simulations $\tau_{\rm ion} = 129$~fs $\times \sqrt{m_{\rm ion}/m_{p}}$. This timescale in all three simulations is longer than the 42~fs FWHM laser pulse (and even the full simulated 84~fs pulse with a sine-squared envelope), making these interactions well within the short pulse regime as illustrated in Fig.~\ref{fig:sketch}.
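The regime identification above can be cross-checked with a few lines of arithmetic. This sketch uses only values quoted in the text; the 1:2:3 ion mass ratios for H:D:T are an approximation.

```python
# Cross-check of the regime identification above, using only values quoted
# in the text; the 1:2:3 ion mass ratios for H:D:T are an approximation.
import math

I_interface = 2.6e17  # W/cm^2, intensity at the sharp interface
I_crit = 5e16         # W/cm^2, from Eq. (critIntensity) at 800 nm, n_crit/20
print(I_interface / I_crit)  # ~5.2: "about five times larger"

tau_proton = 129.0  # fs, tau_ion for a proton at these conditions
t_full = 84.0       # fs, full sine-squared pulse duration
for ion, mass_ratio in [("H", 1.0), ("D", 2.0), ("T", 3.0)]:
    tau_ion = tau_proton * math.sqrt(mass_ratio)
    assert tau_ion > t_full  # short pulse regime: t_SW < tau_ion
    print(ion, round(tau_ion), "fs")
```

All three ion species satisfy $t_{\rm SW} < \tau_{\rm ion}$ even for the full 84~fs pulse, confirming the short-pulse classification.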
As discussed in the next section, an examination of the ion trajectories confirms that ions accelerated from both sides of the peak are streaming past each other. Figure~\ref{fig:growth_plt} shows this happening in all three simulations, albeit on different timescales. For each simulation, the peak ion density increases to $\approx 2.5 \times 10^{20}$~cm${}^{-3}$ (approximately three times the initial density), persists for tens to hundreds of femtoseconds, and then begins to decrease. Multiply-peaked ponderomotive steepening in this short pulse regime is therefore a highly transient effect.
\subsection{Ion Motion}\label{sec:ion_motion}
To better understand the dynamics of the peak formation process, we consider the motion of the ion macroparticles in the simulation. In particular, if we consider the ion trajectories (Fig.~\ref{fig:init_trajectories}) we see that the ions are accelerated towards the electron peaks while the standing wave is present. Later, as the standing wave dissipates, the inertia of the ions allows them to continue to travel with a roughly constant velocity.
\begin{figure}
\includegraphics[width=1\linewidth]{positionPlot.pdf}
\caption{The average trajectories in $x$ for a sample of particles starting in the boxed regions in Fig.~\ref{fig:DensitySnapShot}, representing the first three peaks in the ion density. The white vertical lines represent approximately when the laser begins reflecting, reaches its half maxima, and stops reflecting. Shaded vertical lines correspond to the times represented in Fig.~\ref{fig:DensitySnapShot}. The ions continue to travel after the standing wave has dissipated, and the observed peaks are created by the crossing ions.} \label{fig:init_trajectories}
\end{figure}
We see from Fig.~\ref{fig:init_trajectories} that many of the ions travel through the peak before the end of the simulation, which produces the broadening observed in Fig.~\ref{fig:DensitySnapShot}. We note that the transverse movement of the ions is
negligible compared to the longitudinal motion.
The energy distribution of the ions is represented in Fig.~\ref{fig:ion_energy}, which highlights results from the deuterium simulation and overlays the average ion energies from the hydrogen and tritium simulations. In all three simulations, the ions are accelerated while electron density peaks from the standing wave are present, reaching keV energies. The average ion energy decreases slightly as the standing wave dissipates. The conversion efficiencies from laser energy to ($>100$~eV) ion energy were approximately $0.027\%,~0.016\%,$ and $0.011\%$ for the hydrogen, deuterium, and tritium simulations, respectively.
\begin{figure}
\includegraphics{EnergyHist.pdf}
\caption{Longitudinal ion energies for particles starting in the boxed regions in Fig.~\ref{fig:DensitySnapShot}. The average energies for each simulation are plotted in time and the distribution of ion energies in the background corresponds to the deuterium simulation (logarithmic grayscale). The energies increase while the charge separation caused by standing EM wave is present. } \label{fig:ion_energy}
\end{figure}
We did not include ion-ion collisions in these simulations, which could change the behavior of the ions and potentially lengthen the duration of the peak. However, the peak forms in a relatively low initial density ($8.5 \times 10^{19}$ cm$^{-3}\approx n_{\rm crit} / 20$) plasma shelf. In Appendix~\ref{ap:ionion} we determine that the mean free path of ion-ion collisions for our conditions is larger than the scale of the peak for the higher energy ions in the shelf region.
\subsection{Peak Electric Fields}
Figure~\ref{fig:EFields} shows a line out of the longitudinal electric field along the laser axis from the deuterium simulation compared to various models for context. As mentioned, the intensity of this standing wave exceeds $I_{\rm crit}$ by about a factor of 5, which means that we do not expect the sinusoidal model to be accurate in this case. As seen in Fig.~\ref{fig:EFields}, the peak sustained longitudinal electric fields in the simulation are close to $2\times 10^{11}$ V~m$^{-1}$, which is larger than one would expect from the sinusoidal model ($10^{11}$ V~m$^{-1}$) by about a factor of 2. This is still somewhat below the peak electric field of the ``maximum depletion" model shown in Fig.~\ref{fig:estimate}, which is near $3.1 \times 10^{11}$~V~m$^{-1}$. This model is described in Sec.~\ref{sec:maxdeplete}; as a limiting case, it concludes that the peak electric fields are up to a factor of $\pi$ larger than in the sinusoidal model. The results from the simulation lie between these two bounds. At the critical surface, where more electrons are available, larger fields are present as shown in Fig.~\ref{fig:EFields}, although the field oscillates there; oscillations in the longitudinal electric field also appear when moving farther from the laser axis.
\begin{figure}
\includegraphics[width=1\linewidth]{Ex.pdf}
\caption{The observed longitudinal component of the electric field at 70~fs after the beginning of the deuterium simulation near the center of the laser pulse (PIC) averaged over several cells, as compared to the simple sinusoidal density variation model (Sine), maximum depletion (Max), and the expected ponderomotive force (Eq.~\ref{eq:pf}) divided by $e$ for reference ($E_p$). The electric fields found in the simulation lie between the sinusoidal model and maximum depletion model as expected for this intensity and density. } \label{fig:EFields}
\end{figure}
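The bracketing of the simulated field by the two models can be stated compactly. The sketch below only restates the quoted field values; the factor of $\pi$ is the maximum depletion limit discussed above.

```python
# Compact restatement of the field bounds above (values from the text; the
# factor of pi is the "maximum depletion" limiting case).
import math

E_sine = 1.0e11           # V/m, sinusoidal density-variation model
E_max = math.pi * E_sine  # ~3.1e11 V/m, maximum depletion limiting case
E_sim = 2.0e11            # V/m, peak sustained field in the simulation

assert E_sine < E_sim < E_max  # simulation lies between the two models
print(f"{E_sine:.1e} < {E_sim:.1e} < {E_max:.1e} V/m")
```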
\subsection{Ion Energies}
Equation~\ref{eq:kemax} estimates the maximum ion energies from the interaction, which we compare to the PIC simulations; however, this estimate requires some assumption for how long the standing wave is in place ($t_{\rm SW}$). This is difficult to establish uniquely because the intensity envelope of the laser pulse is sine-squared, with no abrupt turn on and turn off of the standing wave. Judging from the results of Fig.~\ref{fig:ion_energy}, using the 42~fs FWHM of the laser pulse as the duration of the standing wave is too short, because the ion energies continue to grow even 42~fs after the field begins to rise. Using the 84~fs full pulse duration as the duration of the standing wave is too long, both empirically from Fig.~\ref{fig:ion_energy} and because the standing waves are created by the overlap of the forward and reflected laser pulse. In Tab.~\ref{tab:energies} we therefore use both of these timescales in our model in order to bracket the possible ion energies. We empirically find that choosing $t_{\rm SW}$ to be 76~fs yields particularly accurate estimates for the maximum ion energies in all three simulations.
\begin{table}
\caption{\label{tab:energies}Maximum ion energies reported in keV from the simulation shortly after the standing wave has dissipated. This is compared to the energies predicted with Eq.~\ref{eq:velion}. Because the laser pulse has a temporal profile that is sine squared (rather than square), the time-dependent maximum amplitude makes comparison to the model more ambiguous. We compare the simulation result to the model with three different assumptions for the duration of the longitudinal electric field caused by the charge separation from the standing wave ($t_{\rm sw}$).}
\begin{ruledtabular}
\begin{tabular}{lc|c cc}
& Simulation & & Model & \\
& & $t_{\rm sw}=42$~fs & $t_{\rm sw}=76$~fs&$t_{\rm sw}=84$~fs \\
\hline
$^1\rm H^+$ (keV)&4.9 &1.9& 5.0 & 5.7\\
$^2\rm H^+$ (keV)&3.0 &1.0& 2.9 & 3.4\\
$^3 \rm H^+$ (keV)&2.0 & 0.7& 2.0 & 2.4\\
\end{tabular}
\end{ruledtabular}
\end{table}
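As a consistency check on the table above (values copied from it), the model energies fall with ion mass at fixed $t_{\rm SW}$ and grow with $t_{\rm SW}$ for each ion, as the impulse picture behind Eq.~\ref{eq:velion} suggests, and the empirical $t_{\rm SW}=76$~fs column tracks the simulations closely.

```python
# Consistency checks on the table of maximum ion energies (values copied
# from the table; columns are t_SW = 42, 76, 84 fs).
model = {  # keV
    "H": [1.9, 5.0, 5.7],
    "D": [1.0, 2.9, 3.4],
    "T": [0.7, 2.0, 2.4],
}
sim = {"H": 4.9, "D": 3.0, "T": 2.0}  # keV, from the simulations

for ion, row in model.items():
    assert row == sorted(row)  # longer t_SW -> larger energy
for col in range(3):
    column = [model[ion][col] for ion in ("H", "D", "T")]
    assert column == sorted(column, reverse=True)  # heavier ion -> less energy
for ion, e_sim in sim.items():
    assert abs(model[ion][1] - e_sim) <= 0.2  # t_SW = 76 fs column
print("table trends consistent")
```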
\section{Discussion}
\label{sec:discuss}
A multiply-peaked density modulation is observed in our simulations throughout the under-dense shelf region for these initial conditions. The short pulse regime for ponderomotive steepening identified in this theoretical work shows large longitudinal electric fields (potentially up to $\approx~10^{12}$~V~m$^{-1}$ for 800~nm light near the critical density) that accelerate ions to tens to hundreds of keV in energy when the above conditions are satisfied. The consequences of these conditions seem to be overlooked in the literature. From a peak ion energy standpoint, this mechanism is not as appealing as conventional laser-based acceleration schemes such as Target Normal Sheath Acceleration (TNSA)\cite{hatchett2000tnsa}, but because the energies are still sufficient to produce fusion, experiments of this kind may be useful, for example, for producing neutrons with a very small source size.
Largely because the spacing between density peaks is close to $\lambda/2$, features like these have not yet been observed in optical interferometry. Using intense mid-IR laser systems to produce these modulations may make such observations possible, so long as one is careful to consider that the peaks are a highly transient effect. In our simulations with 800~nm wavelengths, 42~fs FWHM pulse durations, and peak intensities near $10^{18}$~W~cm$^{-2}$ the features persist for less than a picosecond. While there are interferometric systems that can operate at this short timescale (e.g.~Ref.~\cite{Feister_etal2014}), experiments at longer wavelengths, lower intensities, and with ions of lower charge-to-mass ratio can be designed to make the ion acceleration happen over a longer timescale in order to study the evolution of these peaks. The interferometric data would be useful as a novel validation test of kinetic plasma codes, especially if the experiment can be performed at normal incidence.
There are papers in the literature that study the presence of periodicity in the density distribution from the overlap of two crossed laser pulses (e.g.~\citet{suntsov2009femtosecond, Sheng2003}) because this produces a kind of transient ``plasma grating" that can be detected with probe light. The growth of this plasma grating is similar in many ways to the peaks that form via ponderomotive steepening and we outline a number of parallels in the present paper to recent work by \citet{Lehmann2019} who consider overlapping laser pulses through a low density medium. This phenomenon has interesting potential applications as discussed in Refs.~\cite{Lehmann2017, Sheng2003}.
Compared to approaches with counter-propagating laser pulses, there are some advantages to producing these density modulations through the reflection of laser light from an overdense target. Specifically, less total laser energy is required because the reflected laser pulse interferes with itself and there is no need to carefully time the overlap of the pulses since the laser naturally reflects from an overdense surface. The other advantage of overdense targets, as we have explored in this paper, is simply that the density of the shelf or medium the laser travels through can be significantly larger than counter-propagating laser experiments would allow. Larger densities allow for significantly larger longitudinal electric fields for accelerating ions. The density of the medium in experiments with overlapping laser pulses is typically a few orders of magnitude below critical density because of the need to avoid intensity dependent index of refraction effects. Experiments with overdense targets are not as constrained by this because irradiating an overdense target with an appropriate ``pre-pulse" produces a few-to-many-micron sub-critical density plasma in front of it. Besides increasing the peak ion energies, the other advantage of producing density modulations in a higher density medium is that the difference between the peak and minimum density will be larger, which should produce more easily detectable fringe shifts in efforts to perform interferometric imaging.
We have emphasized the novelty of performing experiments of this kind in the mid-IR ($2$~\si{\um} $\lesssim \lambda \lesssim 10$~\si{\um}). Our results also imply that it would be interesting to investigate ponderomotive steepening with shorter wavelengths as well. Shorter wavelength lasers are able to propagate into denser regions and, as previously discussed, denser plasmas produce larger peak electric fields which are advantageous for accelerating ions. This detail is important for the possibility of using experiments of this kind to create a neutron source with a very small source size because, as is well known, neutron yields increase significantly with ion energy\cite{davis2008angular}. In a suitably designed experiment, one could try to produce neutrons from the collision of counter-streaming ions in the density peaks. However, as considered in Appendix~\ref{ap:ionion}, the mean free path for these collisions is large compared to $\lambda/2$. Neutron-producing fusion reactions are more likely to come from ions that stream towards the first peak near the target and continue into the overdense region. This would be a ``pitcher-catcher" type configuration where the pitcher and catcher are separated by only $\approx \lambda / 2$.
A crucial assumption of this work is that the plasma remains highly reflective. This is certainly true of our simulations, but it is well known that the intensity and wavelength of the laser are important factors for the reflectivity. To make more reliable extrapolations to shorter and longer wavelengths and smaller and larger intensities than we consider in the simulations we present here, one would need to carefully consider the scaling of the reflectivity with various parameters (e.g.~\citet{levy2014petawatt}). While it is outside the scope of the present work, this remains an important priority for future investigations.
\section{Conclusions}
\label{sec:summary}
The formation of multiply-peaked density modulations associated with ponderomotive steepening is of fundamental interest as a basic plasma process and of practical interest as a means to modify the density profile of a target and to accelerate ions. Our PIC simulations indicate that these peaks are especially transient, lasting less than a picosecond after the end of a short-pulse laser interaction. This is important to factor into the design of future experiments to detect this phenomenon. We also find that the large longitudinal electric fields that are produced in these laser interactions accelerate ions to few keV energies in short pulse laser interactions, and potentially up to hundreds of keV energies in longer duration interactions. In our simulations these fields reach $2\times 10^{11}$~V~m$^{-1}$.
We outline a simple model to estimate the timescale of ion motion and peak energies of ions in these interactions. This model matches the peak ion energies in our simulations reasonably well. We also comment on extensions to this model that provide some insight even when the laser intensity exceeds a critical value. The model indicates that higher field strengths are achieved with shorter wavelength interactions due to the increased critical density. Ion acceleration should be much less pronounced in longer wavelength interactions, but this may still be an interesting regime to perform interferometric imaging as a novel validation test of plasma codes if the experiments are performed at normal incidence.
Multiply-peaked ponderomotive steepening has many parallels to studies of counter-propagating laser pulses which is a phenomenon with interesting potential applications for the field\cite{suntsov2009femtosecond,Sheng2003}. A key difference is that interference from reflection occurs at a comparatively higher density. As a result, the longitudinal electric field strengths are much larger, as just mentioned, and there are important subtleties to analytically modeling this phenomenon and challenges in experimentally probing it that we have outlined.
\begin{acknowledgments}
This research is supported by the Air Force Office of Scientific Research under LRIR Project 17RQCOR504 under the management of Dr.~Riq Parra. This project also benefited from a grant of time at the Onyx supercomputer (ERDC) and storage space at the Ohio Supercomputer Center. Support was also provided by the DOD HPCMP Internship Program and the AFOSR summer faculty program.
\end{acknowledgments}
\section{INTRODUCTION}
Two-particle non-relativistic bound states in QCD (QED) have at least three
well-separated scales: the mass $m$ (hard scale), the typical relative momentum $|\vec p|$
(soft scale) and the typical bound state energy $E$ (ultrasoft scale). This
allows one to introduce a hierarchy of effective field theories by sequentially
integrating out each of these scales. After integrating out the hard scale
$m$, QCD (QED) becomes a non-relativistic theory, the so-called NRQCD (NRQED)
\cite{Lepage}.
This effective theory is local in space and time and it is still naturally
written in terms of quark fields.
Integrating out the soft scale leads to a so far elusive effective field
theory which we call potential NRQCD (pNRQCD). We claim that this effective
theory is local in time but non-local in space, and that it is naturally
written in terms of wave function fields.
We present some original results for the above mentioned effective field theories.
In section 2
we give the matching coefficients for the four-quark operators at one loop for NRQED
and also for NRQCD in the case of different quark masses.
In section 3 we put forward our proposal for pNRQCD and give some results for
pNRQED.
\section{FROM QCD TO NRQCD}
The matching for NRQCD (and HQET) has been known at tree level for a long
time. It can be obtained by matching S-matrix elements at tree level or
simply by performing a Foldy-Wouthuysen (F-W) transformation of the QCD lagrangian.
Some results at one loop have also been known for HQET.
Nevertheless, attempts to perform the matching
beyond tree level using dimensional regularisation (DR)
in NRQCD did not appear until recently. The problem
was that in NRQCD, unlike in HQET, the kinetic term is
incorporated in the propagator. In DR the high modes are not explicitly suppressed
by a cut-off $\mu$ ($\mu \ll m$) and give non-vanishing contributions which break the
power counting rules.
Lately several people have addressed this problem \cite{res,otros,Manohar}
and recently the situation has been clarified \cite{Manohar}. There, it
is claimed that the matching should be performed just like in HQET. Let
us
make some comments in favor of this approach. The key point is that when
doing the matching it is not so important to know the power counting in the
effective theory as it is to know that the scales of the effective theory are
much lower than the mass. The power counting tells us the relative importance
of different operators, but it does not change the value of the matching
coefficients. That is, we only need
\begin{equation}
m \gg |\vec p|,\, E,\, \Lambda_{QCD}
\end{equation}
whatever the relation between $|\vec p|$, $E$ and $\Lambda_{QCD}$ is.
In ref. \cite{Manohar} DR was used
for both ultraviolet (UV) and infrared (IR) divergences in the full and the
effective theory. In fact, it is not so important how the UV
divergences of the full theory are regulated, since the comparison is
done between S-matrix elements, which are UV finite. Nevertheless, it is
essential to regulate the IR divergences of both theories in the same way
in order for them to cancel. This cancellation will happen since, by
construction, both theories have the same IR behavior. It is also very
important, from a practical point of view, to regulate the UV
divergences of the effective theory using DR.
In this way,
in ref. \cite{Manohar}
the calculation in the effective theory becomes
trivial, since there is no dimensionful parameter in the integrand, and
the matching coefficients for the bilinear terms in fermions
are calculated at one loop.
Nevertheless, the four-quark operators were not taken into
account. The way to deal with these operators is not obvious, since we are
faced with the computation of four-quark S-matrix elements in
QCD and HQET. It is in these S-matrix elements, which are
never calculated in traditional applications of HQET, where the
distinct
IR behavior
of two heavy quark bound states becomes apparent. Power-like IR divergences appear
in loops where a quark and an antiquark in HQET interact through a potential.
We call these divergences the Coulomb pole. They are naturally regulated once
the kinetic term is introduced.
This IR behavior should appear in both the full and the effective theory.
However,
it is important to bear in mind
that the matching coefficients are independent
of the IR behavior. Therefore, we do not need to regulate the IR singularity
with the introduction of the kinetic term and hence we can take advantage of
a more convenient regularisation. For instance, a regularisation such that
we could avoid the effort of computing this pole in
both theories (which can be very painful).
The procedure consists of computing the matrix elements on-shell and at
threshold
($|\vec p| =0$). In this way there is no scale in the effective theory, and hence
the diagrams in the effective theory are just trivially zero.
Therefore, only the diagrams in QCD need to be computed, in this peculiar
kinematical situation, which produces IR divergences that get canceled by
those of the effective theory. The computation is done in the
$\overline{MS}$ scheme.
We stress that the Coulomb pole does not appear at all
when the matching is done in this way. Let us explain what happens. In order to
define some integrals we have to
move to dimensions high enough for the IR Coulomb
singularity to be regulated. When coming back to four dimensions we can trace
back the IR Coulomb singularity as poles in dimensions different from four.
The point is that we have not introduced
the relative momentum, and hence DR has no
way to reproduce the Coulomb pole or any non-local behavior in the relative
momentum. This fact has already been observed
in ref.
\cite{nos2}, where it was noticed that in HQET with a quark and an antiquark
moving exactly at the same velocity no imaginary anomalous dimensions
occur when using DR.
The important thing when doing the matching is to take into account all the
non-analytical behavior which cannot be obtained in the effective theory. Here
we take into account all the non-analytical behavior due to the masses;
the remaining non-analytical behavior is encoded in the effective theory.
Let us now give the results for the four-quark effective lagrangian
(non-equal mass case)
\begin{eqnarray}
\label{lag1}
\nonumber
&&\delta {\cal L}_{NRQCD} =
{d_{ss} \over m_1 m_2} \psi_1^{\dag} \psi_1 \chi_2^{\dag} \chi_2
\\
&&
\nonumber
+
{d_{sv} \over m_1 m_2} \psi_1^{\dag} {\vec \sigma} \psi_1
\chi_2^{\dag} {\vec \sigma} \chi_2
+
{d_{vs} \over m_1 m_2} \psi_1^{\dag} {\rm T}^a \psi_1
\chi_2^{\dag} {\rm T}^a \chi_2
\\
&&
+
{d_{vv} \over m_1 m_2} \psi_1^{\dag} {\rm T}^a {\vec \sigma} \psi_1
\chi_2^{\dag} {\rm T}^a {\vec \sigma} \chi_2
\,,
\end{eqnarray}
\begin{eqnarray}
&&
\nonumber
d_{ss}=
- {N^2_c-1 \over 4N^2_c} {\alpha_s^2 \over m_1^2-m^2_2}
\Biggl\{m_1^2\left( \ln{m^2_2 \over \nu^2}
+ {1 \over 3} \right)
\\
&&
-
m^2_2\left( \ln{m^2_1 \over \nu^2}
+ {1 \over 3} \right)
\Biggr\}
\end{eqnarray}
\begin{equation}
d_{sv}=
{N^2_c-1 \over 4N^2_c} {\alpha_s^2 \over m_1^2-m^2_2}
m_1 m_2\ln{m^2_1 \over m^2_2}
\end{equation}
\begin{eqnarray}
\label{dvs}
\nonumber
&&d_{vs}=
- {2 C_f \alpha^2_s \over m_1^2-m^2_2}
\Biggl\{m_1^2\left( \ln{m^2_2 \over \nu^2}
+ {1 \over 3} \right)
\\
&&
-
m^2_2\left( \ln{m^2_1 \over \nu^2}
+ {1 \over 3} \right)
\Biggr\}
\\
&&
\nonumber
+ { C_A \alpha^2_s \over 4 (m_1^2-m^2_2)}
\Biggl[
3\Biggl\{m_1^2\left( \ln{m^2_2 \over \nu^2}
+ {1 \over 3} \right)
\\
&&
\nonumber
-
m^2_2\left( \ln{m^2_1 \over \nu^2}
+ {1 \over 3} \right)
\Biggr\}
\\
&&
\nonumber
+
{ 1 \over m_1m_2}
\Biggl\{m_1^4\left( \ln{m^2_2 \over \nu^2}
+ {10 \over 3} \right)
\\
&&
\nonumber
-
m^4_2\left( \ln{m^2_1 \over \nu^2}
+ {10 \over 3} \right)
\Biggr\}
\Biggr]
\end{eqnarray}
\begin{eqnarray}
\label{dvv}
&&d_{vv}=
{2 C_f \alpha^2_s \over m_1^2-m^2_2}
m_1 m_2\ln{m^2_1 \over m^2_2}
\\
&&
\nonumber
+
{ C_A \alpha^2_s \over 4 (m_1^2-m^2_2)}
\Biggl[
\Biggl\{m_1^2\left( \ln{m^2_2 \over \nu^2}
+ 3 \right)
\\
&&
\nonumber
-
m^2_2\left( \ln{m^2_1 \over \nu^2}
+ 3 \right)
\Biggr\}
-
3 m_1 m_2\ln{m^2_1 \over m^2_2}
\Biggr]
\,,
\end{eqnarray}
where
$$
C_f = {N^2_c-1 \over 2N_c} \quad \quad {\rm and} \quad \quad C_A=N_c \,.
$$
The QED coefficients are easily obtained from these results. We just have to
omit $d_{vs}$ and $d_{vv}$ and replace ${N^2_c-1 \over 4N^2_c}$ by $1$.
In the equal mass case annihilation processes are allowed and they should be
taken into account. For QED, joining all the contributions we get
\begin{equation}
\label{dssqed}
d_{ss}=
{3 \pi \alpha \over 2}
\Biggl\{ 1
- {2 \alpha \over 3 \pi}
\left( \ln{m^2 \over \nu^2}
+ {23 \over 3} - \ln2 + i {\pi \over 2} \right)
\Biggr\}
\end{equation}
\begin{equation}
\label{dsvqed}
d_{sv}=
-{ \pi \alpha \over 2}
\Biggl\{ 1
- {2 \alpha \over \pi}
\left( {22 \over 9} + \ln2 - i {\pi \over 2} \right)
\Biggr\} \, .
\end{equation}
These results are compatible with those found by Labelle et al. \cite{Labelle}
except for a finite piece in (\ref{dssqed}).
A more detailed explanation of the procedure and the full results at one
loop for the equal and non-equal mass case in QCD will be given in ref.
\cite{nos9}.
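As a cross-check on the unequal-mass coefficients above, their kinematic factors have finite equal-mass limits, so they connect smoothly to the equal-mass case before annihilation contributions are added. This is a numerical sketch; the test masses and $\nu$ are arbitrary values, not physical inputs.

```python
# Numerical probe of the equal-mass limit of the kinematic factors in the
# matching coefficients above (arbitrary test masses and nu).
import math

def f_sv(m1, m2):
    """Factor m1*m2*ln(m1^2/m2^2)/(m1^2 - m2^2) appearing in d_sv and d_vv."""
    return m1 * m2 * math.log(m1**2 / m2**2) / (m1**2 - m2**2)

def f_ss(m1, m2, nu):
    """Brace factor of d_ss (and the C_f piece of d_vs), over (m1^2 - m2^2)."""
    return (m1**2 * (math.log(m2**2 / nu**2) + 1 / 3)
            - m2**2 * (math.log(m1**2 / nu**2) + 1 / 3)) / (m1**2 - m2**2)

m, nu, eps = 2.0, 1.0, 1e-6
# f_sv -> 1 as m2 -> m1, so d_sv stays finite in the equal-mass limit
assert abs(f_sv(m, m * (1 + eps)) - 1.0) < 1e-5
# f_ss -> ln(m^2/nu^2) - 2/3, also finite
assert abs(f_ss(m, m * (1 + eps), nu) - (math.log(m**2 / nu**2) - 2 / 3)) < 1e-4
print("equal-mass limits finite")
```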
\section{FROM NRQCD TO pNRQCD}
In the last section we have integrated out the hard gluons. Here,
we integrate out soft gluons, with energies of the order of the relative
momentum.
Two point functions are insensitive to the relative momentum and hence the bilinear
terms in fermions in the NRQCD lagrangian and in the pNRQCD at quark level lagrangian
will read
exactly the same. However, one has to keep in mind that in the latter only
gluons with ultrasoft momenta are kept. On the contrary, four point functions
do know about relative
momentum and generate non-trivial terms in the pNRQCD lagrangian. These terms are
nothing but
the potential piece of the new lagrangian.
Due to the massless nature of the gluons the coefficients are going to be
non-local in the relative space coordinate although local in time. The
important point to be realized is that the appearance of a
potential can be understood as the effect of integrating out soft gluons. Hence the
potential can be calculated by matching NRQCD to pNRQCD.
We have carried out the matching to a given order in $1/m$ and
$\alpha$ using
HQET propagators and the Coulomb gauge. This produces a very strong simplification since the kinetic
term can be treated perturbatively when computing the potential. Now, it is
very easy to know how far we must go in the computation of the potential if
we want to compute, for instance, the energy up to order $m\alpha^n$. In this case, we must
compute the matching up to order $({1 \over m})^s\alpha^r$, with $s$, $r$ such
that $s+r \leq n-1$. The lowest order just gives the standard Schr\"odinger
equation.
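The counting rule just stated is easy to enumerate. In the illustrative sketch below, `required_orders` is our own helper, not notation from the text.

```python
# Enumeration of the counting rule: to obtain the energy at order
# m*alpha^n, match to orders (1/m)^s alpha^r with s + r <= n - 1.
def required_orders(n):
    return [(s, r) for s in range(n) for r in range(n) if s + r <= n - 1]

# n = 2: only the lowest orders contribute, reproducing the Schrodinger
# equation at leading order.
print(required_orders(2))  # [(0, 0), (0, 1), (1, 0)]
```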
In addition we use DR for
both UV and IR divergences, and hence any loop in pNRQCD gives zero because there is no
scale. The point is that on the one hand there is no way of
reproducing the Coulomb pole since the computation is done with HQET
propagators in both theories. On the other hand, any loop with
ultrasoft gluons in pNRQCD is zero since these are only sensitive to the incoming energy
and total momentum of the S-matrix element, which we set to zero (we remark that
on-shell quarks in HQET have zero energy).
Therefore, the calculation from
NRQCD leads directly to the potential.
Further terms in the pNRQCD lagrangian are obtained when matching four point
functions with an arbitrary number of ultrasoft gluon legs.
In order to make explicit the distinction between soft and ultrasoft gluons it is most
convenient to project NRQCD or pNRQCD to the two particle sector and promote the wave
function $\psi (\vec x_1 ,\vec x_2 , t) $ to a field ($\psi (\vec x_1 ,\vec x_2 , t) $
is a $3\times 3$ matrix
in color space and a $2\times 2$ matrix in spin space). Then the relative coordinate
$\vec x =\vec x_1 -\vec x_2$, whose typical size is the inverse of the soft scale, is
explicit and can be considered a small scale compared with the typical wavelength of the
ultrasoft gluon, which is of the order of the inverse of the energy $E$.
Gluon fields appear in this
formalism at
the points $\vec x_1$ and $\vec x_2$. Ultrasoft gluons are those for which $B_{\mu}
(\vec x_{i},\, t)$
can be expanded about the center of mass coordinate in a power series of derivatives
and relative coordinates, the so-called multipole expansion (refs. \cite{otros,mp}
also deal with ultrasoft momenta through the multipole expansion). Then the most natural
way of writing the pNRQCD lagrangian
is as a functional of $\psi (\vec x ,\vec X , t) $ and $B_{\mu}
(\vec X , \, t)$, which is local in $\vec\nabla_{\vec x}$,
$\vec\nabla_{\vec X}$ but non-local in $\vec x$. For definiteness we will
give $L_{pNRQCD}$ at the leading order in the potential.
\begin{eqnarray}
\label{lpnrqcd}
&&
{\cal L}_{pNRQCD} =
\int d^3{\vec x} d^3{\vec X} dt tr \biggl(\psi^{\dagger} (\vec x_1 ,\vec x_2 , t)
\\
&&
\nonumber
\Bigl\{
iD_0 +{\vec D_{\vec x_1 }^2\over 2m}+
{\vec D_{\vec x_2 }^2\over 2m}
\Bigr\}\psi (\vec x_1 ,\vec x_2 , t)\biggr)
\\
&&
\nonumber
+{\alpha \over \vert x_1 - x_2 \vert }tr \biggl(T^{a}
\psi (\vec x_1 ,\vec x_2 , t)T^{a}\psi^{\dagger} (\vec x_1 ,\vec x_2 , t)\biggr)
\,.
\end{eqnarray}
Recall that the gluon fields in the covariant derivatives are ultrasoft and
hence they must be multipole expanded which spoils the explicit gauge
invariance.
However, this can be recovered by writing $\psi (\vec x ,\vec X , t)
$ in terms of the scalar
$S (\vec x ,\vec X , t) $ and octet $O (\vec x ,\vec X , t) $ wave function fields,
whose local gauge transformations depend on the center of mass coordinate only. Namely,
\begin{eqnarray}
&&
\nonumber
\psi (\vec x_1 ,\vec x_2 , t)= P\bigl[e^{ig\int_{\vec x_2}^{\vec x_1} \vec B
d\vec x} \bigr]S({\vec x}, {\vec X}, t)
\\
&&
\nonumber
+P\bigl[e^{ig\int_{\vec X}^{\vec x_1} \vec B d\vec x}
\bigr]O (\vec x ,\vec X , t)P\bigl[e^{ig\int^{\vec X}_{\vec x_2} \vec B d\vec x}
\bigr]
\end{eqnarray}
$$\psi (\vec x_1 ,\vec x_2 , t)\rightarrow g(\vec x_1 ,t)\psi (\vec x_1 ,\vec x_2 , t)
g^{-1}(\vec x_2 ,t ) $$
$$S (\vec x ,\vec X , t)\rightarrow S (\vec x ,\vec X , t) $$
$$O (\vec x ,\vec X , t)\rightarrow g(\vec X ,t)O (\vec x ,\vec X , t)g^{-1}(\vec X
,t)
$$
In this way (\ref{lpnrqcd}) reads
\begin{eqnarray}
&&{\cal L}_{pNRQCD} =
\int d^3{\vec x} d^3{\vec X} dt tr \Biggl\{
\\
&&
\nonumber
S^{\dagger}
\Bigl\{
i\partial_0 - { {\vec p}^2 \over m} + {C_{f} \alpha \over |{\vec x}|}
\Bigr\} S
\\
&&
\nonumber
+ O^{\dagger}
\Bigl\{
iD_0 - { {\vec p}^2 \over m} - {1 \over 2N_c} {\alpha \over |{\vec x}|}
\Bigr\} O
\\
&&
\nonumber
+g\vec x O \vec E (\vec X , t)
S^{\dagger}
+g\vec x O^{\dagger} \vec E (\vec X , t)
S
\\
&&
\nonumber
+{g\over 2}\vec x O O^{\dagger} \vec E (\vec X , t)
+{g\over 2}\vec x O^{\dagger} O \vec E (\vec X , t) \Biggr\}
\,.
\end{eqnarray}
This lagrangian suffices to obtain
the leading non-perturbative
contributions to the two heavy quark bound states when $E \gg \Lambda_{QCD}$
\cite{VL} (see also \cite{yndnos}).
If we leave aside non-perturbative effects ($\sim \Lambda_{QCD}$), each term in
the lagrangian above has a well defined size, unlike in the NRQCD lagrangian. The relative
coordinate $\vec x $ and its associated momentum $\vec\nabla_{\vec x}$ must be
counted as
soft scales ($(\vec x )^{-1}\sim\vec\nabla_{\vec x}\sim m\alpha $). Gluon fields
$B_{\mu}
(\vec X ,t)$ and derivatives with respect to the center of mass coordinate
$\vec\nabla_{\vec X}$ must be counted as ultrasoft scales ($\sim m\alpha^2 $).
Then if we
wish to calculate a given observable to a given order in $\alpha$ we know immediately
which terms are to be kept in the pNRQCD lagrangian. As an example, let us present
$L_{pNRQED}$, from which one can obtain the next-to-next-to-leading
corrections to the energy ($\sim m\alpha^5 $).
\begin{eqnarray}
\label{lpnrqed}
\nonumber
&&{\cal L}_{pNRQED} =
\int d^3{\vec x} d^3{\vec X} dt S^{\dagger}({\vec x}, {\vec X}, t)
\\
&&
\nonumber
\Biggl\{
i\partial_0 - { {\vec p}^2 \over m} + { \alpha \over |{\vec x}|}
+ { {\vec p}^4 \over 4m^3}
\\
&&
\nonumber
- { \delta^{(3)}({\vec x}) \over m^2}
\left( \pi \alpha \left(c_D -2c_F^2 \right) +d_{ss}+3d_{sv} \right)
\\
&&
\nonumber
+ { \alpha \over 2 m^2} { 1 \over |{\vec x}|}
\left( {\vec p}^2 + { 1 \over |{\vec x}|^2} {\vec x}
({\vec x} \cdot {\vec p}){\vec p} \right)
\\
&&
\nonumber
- { \delta^{(3)}({\vec x}) \over m^2} {\vec S}^2
\left( \pi \alpha { 4 \over 3}c_F^2 -2 d_{sv} \right)
\\
&&
\nonumber
- { \alpha \over 4 m^2} { 1 \over |{\vec x}|^3} {\vec L} \cdot {\vec S}
\left( 2c_S+4c_F \right)
\\
&&
\nonumber
- { \alpha c_F^2 \over 4 m^2} { 1 \over |{\vec x}|^3}
S_{12} ({\vec x})
- \delta V ({\vec x})
+ e {\vec x} \cdot {\vec E} ({\vec X},t)
\Biggr\}
\\
&&
S ({\vec x}, {\vec X}, t)
\,.
\end{eqnarray}
The coefficients $c_{i}$ are given in \cite{Manohar} and $d_{ss}$, $d_{sv}$
in (\ref{dssqed}) and (\ref{dsvqed}).
$c_{i}=1+O(\alpha )$ and $d_{ij}=O(\alpha )+O(\alpha^2 ) $ are obtained from the
one loop matching between QED and NRQED. The last term corresponds
to the
ultrasoft photons which contribute at this order. The potential terms are
obtained upon
matching NRQED to pNRQED up to one loop. $\delta V$ encodes the computation
of the potential at one loop. It reads
\begin{eqnarray}
\nonumber
&&\delta V ({\vec x}) = - {\alpha^2 \over m^2}
\left(\delta^{(3)}({\vec x}) \ln\nu^2+ {1 \over 2\pi} {\rm reg}
{1 \over |{\vec x}|^3} \right)
\\
&&
\nonumber
- { 4 \alpha^2 \over 3 m^2}
\left(\delta^{(3)}({\vec x}) \ln\nu^2+ {1 \over 2\pi} {\rm reg}
{1 \over |{\vec x}|^3} \right)
\\
&&
- {\alpha^2 \over m^2} \delta^{(3)}({\vec x}) C
\end{eqnarray}
with unknown $C$. The first $\ln\nu$ (which was produced by a UV
divergence in the potential)
gets canceled by the $\ln\nu$ in $d_{ss}$. The second $\ln\nu$ (with IR
origin) cancels with a piece of the UV divergent contribution coming from
the ultrasoft photons. The remaining contribution from the ultrasoft photons
cancels the contribution coming from $c_D$. The net result is that the total
energy is scale independent, as it must be.
With (\ref{lpnrqed}) the binding energy at order $m\alpha^5$ for
arbitrary $n,l$ states can be calculated. We find agreement with
\cite{Labelle} for the hyperfine splittings. The $m \alpha^5\ln\alpha$ correction
has also been
calculated, finding agreement with known results \cite{Gupta}.
We have
also used these techniques with DR to reproduce the Lamb shift in the
simpler case of a hydrogen-like atom \cite{nos8}.
We remark that in this section we are assuming $|\vec p | \gg E$, which was
not needed in the
previous section. Since we are making the matching to some order in $\alpha$
(we are computing the potential perturbatively) we also assume $|\vec p |
\gg \Lambda_{QCD}$. Notice that the relative size between $E$ and
$\Lambda_{QCD}$ is left arbitrary. We can further distinguish between
two
situations: (i) $E \gg \Lambda_{QCD}$ and (ii) $\Lambda_{QCD} \gtrsim E$.
For
the situation (i), as we mentioned before, we can calculate perturbatively
from pNRQCD and
parametrise the non-perturbative contributions
by means of local condensates. For the situation (ii) the calculations in
pNRQCD
cannot be carried out perturbatively anymore.
If $\Lambda_{QCD} \gtrsim |{\vec p}|$ the matching to
pNRQCD cannot be carried out perturbatively. Even in this situation NRQCD is
extremely useful to parametrise
non-perturbative contributions in many processes \cite{Lepage}.
\section{Introduction}
The extremely small $K_{L}$-$K_{S}$ mass difference was measured accurately decades ago. The origin of this difference is $K^0$-$\overline{K}^{0}$ mixing via second order weak interactions. Conventionally, the mass difference is separated into a short-distance part and a long-distance part. While the short-distance contribution has been calculated to next-to-leading order \cite{Herrlich:1993yv}, the long-distance contribution can only be determined non-perturbatively; it contributes around $30\%$ of the mass difference \cite{Buchalla:1995vs}. Norman Christ suggested a lattice QCD method to compute the long-distance contribution \cite{Christ:2010zz}. This proceeding reports the first numerical test of the new method.
\section{Second order correlator}
To compute the $K_L$-$K_S$ mass difference on a Euclidean space lattice, we evaluate the time-integrated second-order product over a time interval $[t_a, t_b]$:
\begin{equation}
{\cal A}=\frac{1}{2}\sum_{t_1=t_a}^{t_b}\sum_{t_2=t_a}^{t_b}<\overline{K}^{0}(t_f)H_W(t_2)H_W(t_1)K^{0\dag}(t_i)>
\label{eq:amplitude}
\end{equation}
Here the Kaon is created at $t_i$, the two weak Hamiltonians act within the time interval $[t_a, t_b]$, and the outgoing anti-Kaon is annihilated at $t_f$. The amplitude is represented schematically in Figure \ref{fig:schematic}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{./figures/schematic.png}
\caption{Schematic of the second order correlator $\cal A$ in Equation \protect\ref{eq:amplitude}. The two four-quark operators are integrated over the shaded region.}
\label{fig:schematic}
\end{figure}
If we assume that $t_a-t_i$ and $t_f-t_b$ are large enough for the interpolating operators to project onto Kaon states, then after inserting a complete set of intermediate states, Equation \ref{eq:amplitude} becomes:
\begin{equation}
{\cal A} = |Z_K|^2e^{-M_K(t_f-t_i)}\sum_{n}\frac{<\overline{K}^0|H_W|n><n|H_W|K^0>}{(M_K-E_n)^2}\left\{e^{(M_K-E_n)T}-(M_K-E_n)T-1\right\}
\label{eq:integrated_amp}
\end{equation}
Here $T=t_b-t_a+1$ and $Z_K$ is the normalization factor of the Kaon interpolating operator. We assume that no intermediate state is degenerate with the kaon in this expression, which is true in this work. The term proportional to $T$ in Equation \ref{eq:integrated_amp} gives the finite volume approximation to the $K_L$-$K_S$ mass difference.
\begin{equation}
\Delta M_K^{FV} = 2\sum_{n}\frac{<\overline{K}^0|H_W|n><n|H_W|K^0>}{M_K-E_n}
\end{equation}
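The time structure of Equation \ref{eq:integrated_amp} can be made explicit in the continuum approximation of the double time sum: the intermediate state $n$ propagates between the two insertions with weight $e^{(M_K-E_n)|t_2-t_1|}$, so that

```latex
\begin{equation*}
\frac{1}{2}\int_{t_a}^{t_b}\! dt_1 \int_{t_a}^{t_b}\! dt_2\,
e^{(M_K-E_n)\,|t_2-t_1|}
=\frac{e^{(M_K-E_n)T}-(M_K-E_n)T-1}{(M_K-E_n)^2}\,,
\end{equation*}
```

with the factor $1/2$ of Equation \ref{eq:amplitude} compensating the two time orderings; the discrete sums replace $t_b-t_a$ by $T=t_b-t_a+1$.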
The remaining $T$-dependent terms in Equation \ref{eq:integrated_amp} can be classified into three categories: i) exponentially decreasing terms, with $E_n>M_K$, which can be neglected when $T$ is sufficiently large; ii) exponentially increasing terms, with $E_n<M_K$, which must be identified independently and subtracted from the result; iii) terms independent of $T$, which are trivial.
The full $\Delta S=1$ effective Hamiltonian consists of 7 independent four-quark operators \cite{Blum:2001xb}; we include only the current-current operator $Q_1$ in this work.
\begin{equation}
Q_1 = (\bar{s}_\alpha d_\alpha)_{V-A}(\bar{u}_\beta u_\beta)_{V-A}
\label{eq:operator}
\end{equation}
The four different types of contractions are listed in Figure \ref{fig:contraction}. In this work, we compute only the type 1 and type 2 contractions and drop type 3 and type 4. The type 3 contraction is dropped because it is a disconnected diagram. The type 4 contraction would require extra random source propagators, so we also drop it in this first numerical experiment.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{./figures/contraction.pdf}
\caption{Four types of contractions in the 4-point correlator, only type 1 and type 2 are included in the calculation, type 3 and type 4 are dropped.}
\label{fig:contraction}
\end{figure}
As we mentioned before, we must identify the exponentially increasing terms in Equation \ref{eq:integrated_amp}. These terms come from the intermediate states which are lighter than the Kaon; in this calculation such states are the $\pi^0$ state and the vacuum state. Since the disconnected diagrams are neglected, there is no vacuum intermediate state. We must then calculate $<\pi^0|Q_1|K^0>$ and subtract the exponentially increasing contribution from Equation \ref{eq:integrated_amp}. The contractions in this calculation are given in Figure \ref{fig:ktopi}. We drop type 2 to be consistent with the 4-point correlator calculation.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{./figures/ktopi_contraction.pdf}
\caption{Two types of contractions in $<\pi^0|Q_1|K^0>$ 3-point correlator, type 2 is dropped to be consistent with 4-point correlator in Figure \protect\ref{fig:contraction}.}
\label{fig:ktopi}
\end{figure}
\section{Short distance effect}
The computation was performed on a $N_f=2+1$ flavor $16^3\times32\times16$ lattice with DWF and the Iwasaki gauge action, $a^{-1}=1.73(3)$ GeV, $421$ MeV pion mass and $559$ MeV kaon mass. The two wall-source kaons are located at time slices $t_i=0$ and $t_f=27$. The two $\Delta S=1$ operators act between time slices $[4,23]$. We calculated 600 configurations separated by 10 Monte Carlo time units. The result is given in the first plot in Figure \ref{fig:cutoff}, which shows the integrated second order correlator as a function of the integration time interval $T$. For each given $T$, we calculate the integrated correlator in Equation \ref{eq:integrated_amp} over all possible time intervals $[t_a, t_a+T-1]$ and take the average as the final result. The two curves in the plot are the results before and after the subtraction of the $\pi^0$ exponential term.

The results contain both a long-distance part and a short-distance part, where short distance means that the two $\Delta S=1$ operators are close to each other on the lattice. We expect the short-distance part to be quadratically divergent because of the up quark loop in Figure \ref{fig:schematic}. To get a detailed understanding of the short-distance effect, we introduce an artificial cutoff, i.e., we require the separation between the two operators to satisfy $|x_2-x_1|> r$. This cutoff reduces the short-distance effect while leaving the long-distance part untouched. The second plot in Figure \ref{fig:cutoff} shows the results with cutoff radius 5. We can see that the amplitude of the result is reduced substantially after introducing the cutoff, and the contribution from the $\pi^0$ intermediate state becomes visible. We can measure the mass difference at different cutoffs. From Equation \ref{eq:integrated_amp}, the mass difference is given by the coefficient of the linear term, up to some factor, when $T$ is large. We choose to fit the slope of the integrated correlator in the range $T\in[11,20]$.
The mass differences are listed in Table \ref{tab:cutoff}. We can perform a naive inverse quadratic fit to the mass differences at different cutoffs; the result, shown in Figure \ref{fig:mass_cutoff}, suggests that the short-distance effect is quadratically divergent.
\begin{table}[ht]
\caption{Mass differences at different cutoff radius}
\label{tab:cutoff}
\begin{center}
\begin{tabular}{cccccc}
\hline
Cutoff Radius & 1 & 2 & 3 & 4 & 5\\
\hline
$\Delta M_K$ & 0.3342(80) & 0.1533(30) & 0.0796(17) & 0.0560(15) & 0.0455(14) \\
\hline
\end{tabular}
\end{center}
\end{table}
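The naive fit quoted above is easy to reproduce from the numbers in Table \ref{tab:cutoff}. The sketch below (an illustrative pure-Python least squares, not the analysis code used for Figure \ref{fig:mass_cutoff}) fits the two-parameter form $\Delta M_K(r)=a+b/r^2$, where $a$ estimates the cutoff-independent long-distance part and $b/r^2$ the quadratically divergent short-distance part:

```python
# Data from Table 1: Delta M_K (in the units of the table) at cutoff radii r = 1..5.
radii = [1.0, 2.0, 3.0, 4.0, 5.0]
dmk = [0.3342, 0.1533, 0.0796, 0.0560, 0.0455]

def fit_inverse_quadratic(r_vals, y_vals):
    """Least-squares fit of y = a + b / r**2, which is linear in x = 1/r**2."""
    xs = [1.0 / r ** 2 for r in r_vals]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(y_vals) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, y_vals))
    b = sxy / sxx        # size of the 1/r^2 (short-distance) piece
    a = ybar - b * xbar  # cutoff-independent (long-distance) piece
    return a, b

a, b = fit_inverse_quadratic(radii, dmk)
```

With these numbers the long-distance plateau comes out at $a\approx 0.05$, close to the cutoff radius 5 entry of the table, while the sizable $b$ coefficient shows that the uncut result is dominated by the divergent short-distance piece.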
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{./figures/nocutoff.pdf}
\includegraphics[width=0.6\textwidth]{./figures/cut5.pdf}
\caption{Integrated second order correlator as a function of the integration time interval. Red and blue curves show the results before and after the subtraction of the $\pi^0$ exponential term. The first plot is the result without any cutoff; the second shows the result with cutoff radius 5.}
\label{fig:cutoff}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{./figures/mass_cutoff.pdf}
\caption{The mass differences at different cutoff radii; the blue curve is a naive two-parameter fit.}
\label{fig:mass_cutoff}
\end{figure}
\vspace{-5mm}
\section{Charm quark and GIM}
In order to remove the short-distance divergence, we introduce a valence charm quark into the calculation. The GIM mechanism then reduces the quadratic divergence to a logarithmic one. To implement this in the lattice calculation, we replace all the up quark propagators in Figure \ref{fig:contraction} with the difference between the up quark propagator and the charm quark propagator. We use 5 different valence charm quark masses ranging from 200 MeV to 1000 MeV. The integrated correlators after the GIM subtraction are plotted in Figure \ref{fig:charm}. The mass differences for the different valence charm quark masses are listed in Table \ref{tab:charm}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.6\textwidth]{./figures/1000mevcharm.pdf}
\caption{The integrated correlator after the inclusion of a 1000 MeV charm quark. Red and blue curves show the results before and after the subtraction of the $\pi^0$ exponential term.}
\label{fig:charm}
\end{figure}
\section{Remaining short distance effect}
Even with the valence charm quark, some short distance lattice artifacts will still remain. The short distance part of Equation \ref{eq:amplitude} can be written as:
\begin{eqnarray}
{\cal A}_{SD}&=&\frac{1}{2}\sum_{t=t_a}^{t_b}<\overline{K}^0(t_f)C(\mu){\cal O}(t)K^{0\dag}(t_i)>\nonumber \\
&=&\frac{1}{2}|Z_K|^2e^{-M_K(t_f-t_i)}C(\mu)<\overline{K}^0|{\cal O}|K^0>T
\end{eqnarray}
Here $T=t_b-t_a+1$, ${\cal O}=(\bar{s}d)_{V-A}(\bar{s}d)_{V-A}$, and $C(\mu)$ is the conversion factor at a momentum scale $\mu$. To determine $C(\mu)$ using the RI/SMOM technique, we evaluate the off-shell, amputated, four-quark Green functions for the two diagrams in Figure \ref{fig:npr_demo} at some large external momentum scale $\mu$.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.7\textwidth]{./figures/npr.pdf}
\caption{Off-shell, amputated four quark green functions, left diagram is for two $\Delta S=1$ operators, right diagram is for one $\Delta S=2$ operator.}
\label{fig:npr_demo}
\end{figure}
The external momenta satisfy $p_1^2=p_2^2=(p_1-p_2)^2=\mu^2$. Suppose the results for the two diagrams are $G_{ijkl}$ and $F_{ijkl}$ respectively. We then project the Green functions onto the desired gamma structure $P_{ijkl}=((1-\gamma^5)\gamma_{\mu})_{ji}((1-\gamma^5)\gamma_{\mu})_{lk}$. The conversion factor is given by Equation \ref{eq:npr}.
\begin{equation}
G(\mu)=G_{ijkl}P_{ijkl} \quad F(\mu)=F_{ijkl}P_{ijkl} \quad C(\mu) = \frac{G(\mu)}{F(\mu)}
\label{eq:npr}
\end{equation}
In Figure \ref{fig:npr_result}, the left plot shows the dependence of $C(\mu)$ on the momentum scale $\mu$ for a 1 GeV valence charm quark. As expected, $C(\mu)$ decreases with increasing $\mu$, because the difference between the charm quark and the up quark decreases as the momentum scale gets larger. In the right plot, we fix the momentum scale to 2 GeV and plot $C$ as a function of the charm quark mass. As the charm quark mass gets smaller, the remaining short distance effect becomes smaller. In Table \ref{tab:charm}, we show the remaining short distance effect at different charm quark masses. We conclude that the remaining short distance effects are small enough to be neglected in this work.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./figures/npr1000mevcharm.pdf}
\hspace{-5mm}
\includegraphics[width=0.5\textwidth]{./figures/npr2gev.pdf}
\caption{The left plot shows the conversion factor $C(\mu)$ as a function of the momentum scale with a 1 GeV charm quark. The right plot shows $C(\mu)$ at different charm quark masses at fixed $\mu$ = 2 GeV.}
\label{fig:npr_result}
\end{figure}
\begin{table}[ht]
\caption{Mass differences at different valence charm quark masses}
\label{tab:charm}
\begin{center}
\begin{tabular}{cccccc}
\hline
$M_c$ (MeV) & 200 & 400 & 600 & 800 & 1000\\
\hline
$\Delta M_K$ & 0.0440(10) & 0.0455(12) & 0.0496(13) & 0.0556(14) & 0.0628(15) \\
\hline
$(\Delta M_K)_{SD}$ & 6.2e-5 & 2.4e-5 & 6.2e-4 & 0.0013 & 0.0023\\
\hline
\end{tabular}
\end{center}
\end{table}
\section{Conclusion}
We performed a numerical study of the long distance part of the $K_L$-$K_S$ mass difference. The short distance part can be reduced by the inclusion of a valence charm quark. The exponentially increasing term can be identified and subtracted. The remaining short distance effect can be computed using the RI/SMOM technique and removed from the results.
The author thanks all his RBC/UKQCD collaborators for discussions and suggestions, and especially Prof. Norman Christ for detailed instructions and discussions.
\section{Introduction}
Almost all conventional optical step-index waveguides are unsuitable for confining and supporting the low-loss propagation of acoustic waves. This is because the high refractive index materials making up the core of optical waveguides tend to support acoustic waves propagating at larger velocities than in the low-refractive index cladding layers \cite{eggleton2019brillouin}. Consequently, acoustic waves do not experience total internal reflection (TIR) at the core-cladding interface, and dissipate by free propagation into the cladding. Conversely, acoustic waveguides relying on a reversed design --- with the acoustically \textit{slow} material making up the core, and the \textit{fast} material the cladding --- usually do not guide optical waves,
due to a general association between refractive index and material density. Therefore, if we are to pursue systems implementing efficient interaction between propagating optical and acoustic waves, and particularly Brillouin scattering, we need to look beyond the simple physics of TIR.
A number of designs have been put forward to address this challenge \cite{eggleton2019brillouin,Safavi-Naeini:19,aspelmeyer2014cavity,eggleton2013inducing}. In some, the waveguides are suspended in air by either sparsely positioned \cite{shin2013tailorable,kittlaus2016large,van2015net} or specifically engineered supporting structures \cite{schmidt19}. In others, both light and sound are guided along line defects of phoxonic crystals \cite{doi:10.1063/1.2216885,zhang2017design,Yu:18}. Finally, a combination of the desired material properties --- high refractive index and low stiffness --- has been identified in chalcogenides, allowing researchers to revisit step-index architectures for optoacoustic waveguides \cite{Pant:11, morrison2017compact}.
In this work, we suggest a simple, novel class of waveguides capable of supporting the simultaneous propagation of co-localized optical and acoustic waves, based on the concept of Anti-Resonant Reflection Optical Waveguides (ARROWs) \cite{Litchinitser:03}. ARROWs were originally studied to enable low-loss optical guidance in the earliest integrated optical waveguides \cite{duguay1986antiresonant}. At that time, integrated photonic devices relied on a small contrast of refractive index inducing TIR between the doped silica medium making up the core, and the pure silica of the cladding. In ARROWs however, this design was inverted, allowing light to be guided in a low-refractive-index (fast) core, surrounded by a high-refractive-index ({slow}) cladding. This is achieved by engineering the cladding to behave like a Fabry-Perot layer operating at the anti-resonance condition \cite{archambault,Litchinitser:02}. Variations of ARROWs are now widely used in liquid core waveguides developed for biomolecular detection \cite{C0LC00496K,testa2016liquid,7282086}.
As we show here, the acoustic analogue of such waveguides --- Anti-Resonant Reflecting Acoustic Waveguides (ARRAWs), are capable of guiding acoustic waves through an acoustically fast core due to anti-resonances in the acoustically slow cladding. For example, in the particular designs of silicon/silica/silicon planar and cylindrical waveguides depicted in figure~\ref{Fig1}, the acoustic field of ARRAW modes would be predominantly localized to the silicon core.
Furthermore, such ARRAWs can simultaneously support the conventional TIR guidance of light in the high-refractive index core, and consequently amplify local optoacoustic interactions between the co-localized optical and acoustic waves, including Brillouin scattering. Out of the two interaction mechanisms previously identified as contributing to Brillouin effects: photoelasticity and radiation pressure \cite{rakichPRX,sipe2016hamiltonian,wolff2015stimulated}, the former relies on the acoustic field locally modifying the refractive index of the bulk of the medium, forming a moving grating for the optical fields. This effect necessarily relies on the spatial overlap between the optical and acoustic field, and the built-up amplitude of the induced acoustic field, quantified by the mechanical quality factor of the acoustic mode $Q_m$.
Here we will analyze in detail how ARRAWs can be optimized to support high-$Q_m$ acoustic, as well as optical modes, co-localized in the core of the waveguide, giving rise to strong backwards Brillouin scattering.
The paper is structured as follows: In the two following sections we discuss ARRAW behavior in planar and cylindrical waveguides. Since sound can propagate in a solid medium in the form of both transverse waves (referred to throughout as \textit{S} waves) and longitudinal waves (\textit{P} waves), characterized by different velocities, we expect that ARRAW waveguides will exhibit a richer structure of modes than their optical counterparts. For each structure we highlight special, optics-like cases in which shear waves become decoupled from the longitudinal components, and the system is an analogue of an anti-resonant \textit{optical} waveguide. Finally, we demonstrate how cylindrical ARRAW waveguides can be used to support simultaneous and co-localized optical and acoustic modes in the core of the waveguide, and discuss how such structures thus support efficient backward Brillouin scattering.
\begin{figure}[htbp!]
\begin{center}
\includegraphics[width=.5\columnwidth]{Fig1_2}
\caption{Schematics of (a) planar and (b) cylindrical waveguides. In both systems the acoustically \textit{fast} core is made of the same material as the outermost cladding, and supports both longitudinal ($P$) and shear ($S$) waves with velocities larger than the respective velocities in the \textit{slow} {inner claddings}. The propagation along the waveguide ($\hat{\mathbf{z}}$ axis) is characterized by the wavenumber $\beta$ which is constant throughout the structure, and the transverse wavenumbers $k_{s/p}^{(j)}$ in medium `j'.}
\label{Fig1}
\end{center}
\end{figure}
\section{ARRAW condition in planar structures}
In this section, we consider the simplest geometry of a planar waveguide, shown schematically in figure~\ref{Fig1}(a). The central section of the waveguide --- the \textit{core} (marked as `3') --- is made up of the same material as the semi-infinite {outermost cladding layers} (`1' and `5'), and supports \textit{S} waves with velocity $v_\text{s}^{(1)}$, larger than the respective velocity $v_\text{s}^{(2)}$ in the {inner cladding layers} `2' and `4' ($v_\text{s}^{(1)}>v_\text{s}^{(2)}$). Similar ordering is established for the velocities of \textit{P} waves in the core and cladding ($v_\text{p}^{(1)}>v_\text{p}^{(2)}$). While a more realistic design of the waveguide would include a finite outermost cladding layer surrounded by air, we focus on the semi-infinite model to stress that the acoustic guidance is induced solely by the anti-resonance in the cladding layer, rather than reflection from the solid-air interface. \revision{We also make two approximations about all the considered materials, neglecting their viscosities, and assuming isotropic elastic responses (see \ref{AppendixC} for the values of elastic parameters used here). These approximations allow us to develop tractable analytical models for the physics of ARRAW. Furthermore, for the application in Brillouin scattering \cite{eggleton2019brillouin}, we will only consider modes with larger quality factors $Q_m>100$, for which the small losses introduced by viscosity can be considered as additive to the radiative dissipation \cite{rakichPRX}}. Here we demonstrate the ARRAW behavior in the particular platform consisting of a silicon waveguide core and outer cladding, and a silica inner cladding. This choice is motivated by the advancement of the fabrication protocols for these materials, and the significant interest in implementing platforms for Brillouin interaction in silicon \cite{eggleton2019brillouin}.
The waveguiding modes of these structures are found by solving the elastic equation of motion in each layer $j$,
\begin{equation}\label{wave.equation}
\rho^{(j)} \frac{\textrm{d}^2}{\textrm{d}t^2}\mathbf{u}^{(j)} = \nabla \cdot \mathbf{T}^{(j)},
\end{equation}
relating the density $\rho^{(j)}$, displacement field $\mathbf{u}^{(j)}$, and stress tensor $\mathbf{T}^{(j)}$ \cite{auld1973acoustic}. Stress is related to strain $\mathbf{S}^{(j)}=\nabla_S\mathbf{u}^{(j)}$ --- the symmetrized gradient of the displacement field --- via Hooke's law and the rank 4 stiffness tensor $\mathbf{c}^{(j)}$, by the dyadic product $\mathbf{T}^{(j)}=\mathbf{c}^{(j)}:\mathbf{S}^{(j)}$. In isotropic media, this relationship can be expressed through the Lam\'e parameters $\lambda^{(j)}$ and $\mu^{(j)}$ as $T^{(j)}_{kl} = 2\mu^{(j)}{S}^{(j)}_{kl}+\lambda^{(j)} \delta_{kl}{S}^{(j)}_{nn}$. To obtain the solution, we use the ansatz
\begin{equation}\label{harmonicsolution}
\mathbf{u}(\mathbf{r}_{\perp},z,t) = \mathbf{U}(\mathbf{r}_{\perp}) \rme^{\rmi(\beta z - \Omega t)} + \text{c.c.},
\end{equation}
where c.c. denotes the complex conjugate, $\Omega$ and $\beta$ are the angular frequency and the longitudinal wavenumber of the mode, respectively, and $\mathbf{r}_{\perp}$ is the transverse coordinate. The fields in neighboring layers are related via the elastic boundary conditions, which require the continuity of the normal components of the adjacent stress tensors, and all the components of the adjacent displacement fields \cite{auld1973acoustic}. The waveguide modes are found by requiring that the amplitudes of incoming fields in the outer-most layers (`1' and `5' in the planar structure --- see figure~\ref{Fig1}(a)) vanish. For a planar structure, these modes are found in the basis of shear and longitudinal planewaves in each of the layers, with the transverse wavevectors in medium `j' denoted by \ensuremath{k_\text{s}^{(j)}} and \ensuremath{k_\text{p}^{(j)}}, respectively. Details of the calculations are given in \ref{AppendixA}.
\subsection{Out-of-plane polarization (pure shear)}
We first consider the case of \textit{pure shear waves}, with solely out-of-plane displacement fields ($u^{(j)}_x=u^{(j)}_z=0$). These are uncoupled from longitudinal modes throughout the structure (see the derivations in \ref{AppendixA} or \cite{auld1973acoustic}). The guidance condition (vanishing amplitude of incoming waves in the outermost layer) yields a rather complex transcendental equation relating $\Omega$, $\beta$ and $k_{s/p}^{(j)}$ (not shown here). However, it can be simplified by considering separately the modes which are symmetric and anti-symmetric with respect to the $\hat{\mathbf{y}}$-$\hat{\mathbf{z}}$ symmetry plane of the structure at $x=0$. The resulting transcendental equations for the symmetric and anti-symmetric modes can be expressed as
\begin{equation}\label{sym}
\left(-1+\rme^{\rmi c \ensuremath{k_\text{s}^{(1)}}}r_s\right)-\rme^{2 \rmi d \ensuremath{k_\text{s}^{(2)}}}r_s\left(-r_s+\rme^{\rmi c \ensuremath{k_\text{s}^{(1)}}}\right)=0,
\end{equation}
\begin{equation}\label{asym}
\left(1+\rme^{\rmi c \ensuremath{k_\text{s}^{(1)}}}r_s\right)-\rme^{2 \rmi d \ensuremath{k_\text{s}^{(2)}}}r_s\left(r_s+\rme^{\rmi c \ensuremath{k_\text{s}^{(1)}}}\right)=0,
\end{equation}
respectively, where as marked in figure~\ref{Fig1}, $c$ and $d$ are the thicknesses of the core and cladding layers, and
\begin{equation}
r_s = \frac{\ensuremath{k_\text{s}^{(1)}} \mu^{(1)}-\ensuremath{k_\text{s}^{(2)}} \mu^{(2)}}{\ensuremath{k_\text{s}^{(1)}} \mu^{(1)}+\ensuremath{k_\text{s}^{(2)}} \mu^{(2)}},
\end{equation}
is the acoustic reflection coefficient for a pure \textit{S} wave propagating in medium `1', reflecting off the interface with medium `2'.
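To get a numerical feel for $r_s$, the following sketch evaluates it near glancing incidence, where $\ensuremath{k_\text{s}^{(1)}}\to 0$ and $r_s\to -1$. The densities and shear velocities are assumed values, chosen only to be consistent with the velocity ratios quoted later in the text.

```python
import numpy as np

# Assumed isotropic parameters: Si-like core (1), silica-like cladding (2).
rho1, vs1 = 2329.0, 5325.0   # density (kg/m^3) and shear velocity (m/s)
rho2, vs2 = 2203.0, 3750.0
mu1 = rho1 * vs1**2          # shear modulus mu = rho * v_s^2
mu2 = rho2 * vs2**2

Omega = 2 * np.pi * 15e9     # angular frequency, 15 GHz

def ks(v, beta):
    """Transverse shear wavenumber k_s = sqrt((Omega/v)^2 - beta^2)."""
    return np.sqrt((Omega / v)**2 - beta**2 + 0j)

def r_s(beta):
    k1, k2 = ks(vs1, beta), ks(vs2, beta)
    return (k1 * mu1 - k2 * mu2) / (k1 * mu1 + k2 * mu2)

# At glancing incidence in the fast core (beta -> Omega/vs1, i.e. n_eff -> 1),
# k_s^(1) -> 0 and the reflection coefficient approaches -1:
beta_glancing = 0.9999 * Omega / vs1
print(r_s(beta_glancing))    # close to -1
```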
\subsubsection{Optics-like anti-resonance condition}
We note that equations \eqref{sym} and \eqref{asym} also describe the symmetric and antisymmetric \textit{s}-polarized \textit{optical} waveguiding modes, if we replace $r_s$ with the optical reflection coefficient for non-magnetic materials $r^\text{opt} = (k^{(1)}-k^{(2)})/(k^{(1)}+k^{(2)})$. This is thanks to the purely transverse nature of these modes, and the close mapping between acoustic and optical boundary conditions (with the continuity of tangential components of the electric field, and of normal components of the magnetic field, mirroring the continuity of acoustic displacement and normal stress components, respectively).
To explore further the analogy to optics for ARRAWs, we can simplify the conditions given in equations \eqref{sym} and \eqref{asym} by eliminating the dependence on the thickness of the core layer ($c$) from the transcendental equations. To this end, we choose the core width $c$ to correspond to the lowest order \textit{symmetric} modes supported by the core \cite{Litchinitser:03}:
\begin{equation}\label{sym.2}
\ensuremath{k_\text{s}^{(1)}} c = (2n+1)\pi,
\end{equation}
for $n=0,1,2,\dots$, and arrive at the simplified symmetric ARRAW condition from \eqref{sym}
\begin{equation}\label{condition.core}
\rme^{2 \rmi \ensuremath{k_\text{s}^{(2)}} d} = r_s.
\end{equation}
Similarly, the lowest order \textit{antisymmetric} modes \eqref{asym} are found for a core width satisfying
\begin{equation}\label{asym.2}
\ensuremath{k_\text{s}^{(1)}} c = 2n \pi,
\end{equation}
which simplifies \eqref{asym} to the ARRAW condition given in \eqref{condition.core}, identical to that for the symmetric ARRAW modes.
To simplify this condition even further, we can consider waves propagating in the core at a glancing incidence to the cladding interface, where the reflection coefficient $r_s\approx -1$. We then retrieve from \eqref{condition.core} the approximate relation:
\begin{equation}\label{arraw}
\ensuremath{k_\text{s}^{(2)}} d = (2m+1)\frac{\pi}{2},
\end{equation}
for $m=0,1,2,\dots$, which can be used to identify anti-resonant behavior in the dispersion of acoustic waveguides. This condition is analogous to that found for ARROWs, which reads $k^{(2)} d = (2m+1){\pi}/{2}$.
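The anti-resonance condition \eqref{arraw} can be inverted to predict the cladding thicknesses $d$ at which low-loss guidance is expected. A minimal sketch, with shear velocities assumed to match the ratios quoted in the text, evaluates $\ensuremath{k_\text{s}^{(2)}}$ at glancing incidence ($\ensuremath{n_\text{eff}}\to 1$) and lists the first few anti-resonant thicknesses:

```python
import numpy as np

# Illustrative shear velocities (assumed) for the fast core and slow cladding.
vs1, vs2 = 5325.0, 3750.0        # m/s
Omega = 2 * np.pi * 15e9         # 15 GHz

# At glancing incidence beta -> Omega/vs1, so the transverse shear
# wavenumber in the cladding is k_s^(2) = (Omega/vs2)*sqrt(1 - (vs2/vs1)^2).
ks2 = (Omega / vs2) * np.sqrt(1 - (vs2 / vs1)**2)

# Anti-resonant cladding thicknesses from k_s^(2) d = (2m+1) pi/2:
d = [(2 * m + 1) * np.pi / (2 * ks2) for m in range(3)]
print([f"{di*1e6:.3f} um" for di in d])   # roughly 0.088, 0.264, 0.440 um
```

These three values fall inside the $0$ to $0.5~\mu$m range of cladding widths scanned in figure~\ref{Fig2}, consistent with the $m=0,1,2$ anti-resonance lines shown there.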
\subsubsection{Exact dispersion relation}
\begin{figure*}[htbp!]
\includegraphics[width=.9\textwidth]{Fig2_6}
\caption{(a) Dispersion relation, (b) normalized loss, and (c) field distribution plots for out-of-plane polarized symmetric shear modes of a planar waveguiding structure \revision{with core width $c=1~\mu$m}. In (a) horizontal gray dotted lines correspond to the analytic conditions for odd core resonance \eqref{sym.2} with $n=0,~1$, and colored dashed lines are described by the cladding anti-resonance condition given in \eqref{arraw} with $m=0,1$ and $2$. Thicker black lines denote the dispersion of conventionally guided (A-B) and leaky shear (C-E) modes shown in (c).}
\label{Fig2}
\end{figure*}
The complete, exact dispersion relation of the symmetric, pure shear modes of the planar waveguide with arbitrary core width $c$ is found by solving \eqref{sym} numerically, and includes families of both conventionally guided and leaky modes. To differentiate between the two, and to establish a parallel between acoustic and optical frameworks, we arbitrarily choose the largest shear velocity of the system $v_\text{s}^{(1)}$ as a reference, and define an acoustic \textit{effective mode index}
\begin{equation}
n_{\text{eff}} = \frac{\mathrm{Re}(\beta)}{\ensuremath{k_{\text{s},0}^{(1)}}},
\end{equation}
where $\ensuremath{k_{\text{s},0}^{(1)}}=\Omega/v_\text{s}^{(1)}$. Conventionally guided modes, characterized by a real longitudinal wavenumber $\beta$ and an acoustic field propagating predominantly in the slow cladding layer, thus correspond to $1 < \ensuremath{n_\text{eff}} < v_\text{s}^{(1)}/v_\text{s}^{(2)}$. Conversely, leaky modes, found for $n_{\text{eff}} < 1$, have complex longitudinal wavenumbers $\beta$.
These features are demonstrated in figure~\ref{Fig2}, where we analyze the symmetric, pure shear modes of a Si(core)/SiO$_{2}$(inner cladding)/Si(outer cladding) planar waveguide. For this setup, the ratio of shear velocities $v_\text{s}^{(1)}/v_\text{s}^{(2)}\approx 1.42$. The core width is set to $c=1~\mu$m, and we vary the cladding width $d$ from approximately 0 to $d=0.5~\mu$m --- a range of sizes which, for acoustic modes propagating at frequency $\Omega/2\pi= 15$~GHz, spans $\ensuremath{k_{\text{s},0}^{(2)}} d\approx 0$ to $4\pi$. In figure~\ref{Fig2}(a) we plot the effective mode index $n_{\text{eff}}$, and in (b) the normalized attenuation parameter $\mathrm{Im}(\beta)/\ensuremath{k_{\text{s},0}^{(1)}}$.
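The quoted range of the dimensionless parameter $\ensuremath{k_{\text{s},0}^{(2)}} d$ follows directly from the cladding shear velocity. A one-line check, with the cladding velocity assumed as above:

```python
import numpy as np

vs2 = 3750.0                  # cladding shear velocity, m/s (assumed)
Omega = 2 * np.pi * 15e9      # 15 GHz
d_max = 0.5e-6                # maximum cladding width, m

ks0_2 = Omega / vs2           # free shear wavenumber in the cladding
print(ks0_2 * d_max / np.pi)  # -> 4.0, i.e. k_{s,0}^(2) d spans 0 to 4*pi
```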
In figure~\ref{Fig2}(a) we clearly identify the families of modes conventionally guiding the acoustic waves in the inner cladding layer, characterized by $1<\ensuremath{n_\text{eff}}<v_\text{s}^{(1)}/v_\text{s}^{(2)}$. These modes exhibit purely real transverse wavenumbers in the cladding (\ensuremath{k_\text{s}^{(2)}}), and purely imaginary transverse wavenumbers in the core (\ensuremath{k_\text{s}^{(1)}}). This indicates oscillatory behavior of the fields in the cladding, along the transverse direction $\hat{\mathbf{x}}$, and exponential localization of the fields to the interfaces in the core. We clearly identify these features in the two upper panels of figure~\ref{Fig2}(c), which show the displacement field profile of two modes depicted as A and B in figure~\ref{Fig2}(a,b), differentiated by the number of nodes of the displacement field in the cladding (with $\ensuremath{k_\text{s}^{(2)}} d \approx 3\pi/2$ in A and $5\pi/2$ in B).
Modes below the shear core sound line ($\ensuremath{n_\text{eff}}<1$) are characterized by complex wavenumbers $\beta$ and $\ensuremath{k_\text{s}^{(j)}}$, meaning that the field in the core does not simply decay exponentially with $x$ away from the core/cladding interface, but exhibits oscillations governed by $\mathrm{Re}(\ensuremath{k_\text{s}^{(1)}})$. Simultaneously, in a manner consistent with optical \textit{leaky} modes, the imaginary part of $\ensuremath{k_\text{s}^{(1)}}$ becomes negative, and the outgoing fields increase exponentially with $x$ outside the structure. Just below the $\ensuremath{n_\text{eff}}=1$ line, the glancing modes form flat-dispersion sections where acoustic waves are confined predominantly to the core (see panels C and E in figure~\ref{Fig2}(c); modes C, D and E were selected to match the local minima of the normalized loss $\mathrm{Im}(\beta)/\ensuremath{k_{\text{s},0}^{(1)}}$). This behavior is described approximately by the horizontal gray dotted lines depicting resonances of the core ($n=0$ in \eqref{sym.2}), which cross the colored dashed lines indicating the cladding anti-resonance ($m=0,1,2$ in \eqref{arraw}) near the minima of the loss function. This agreement breaks down for mode D, which lies far below the $\ensuremath{n_\text{eff}}=1$ line, and does not meet the glancing-incidence criterion. We also identify a prominent anti-crossing behavior near the crossings between the resonances of the core (horizontal dotted gray lines) and resonances of the cladding ($\ensuremath{k_\text{s}^{(2)}} d= m\pi$, anti-crossings marked with hollow circles). At these points the cladding transmission reaches a local maximum, suppressing the formation of a waveguiding mode.
This simple analysis allows us to identify the ARRAW modes as the leaky acoustic modes localized to the \textit{fast} medium, and found at the local minima of loss, right below the \textit{sound-line} corresponding to the velocity of waves in the fastest medium (core) --- here illustrated as modes C and E. As we show below, this definition naturally extends to polarizations and media supporting the propagation of \textit{P} waves.
\subsection{In-plane polarization}
In-plane polarization of the waveguiding modes ($u_y^{(i)}=0$) necessarily couples the in-plane shear (\textit{S}) and longitudinal (\textit{P}) waves.
Consequently, the transcendental equation for the modes is more complex (see derivation in \ref{AppendixA}), even if we focus on a particular symmetry of both components; nevertheless, it can be solved numerically. As in the out-of-plane polarization case, we focus on symmetric modes for simplicity. Furthermore, to observe anti-resonant behavior of longitudinal waves, characterized by substantially longer wavelengths (since $v_\text{p}^{(2)}/v_\text{s}^{(2)}\approx 1.6$), we consider claddings of larger thickness, comparable to the longitudinal wavelength in the cladding medium.
\begin{figure*}[htbp!]
\includegraphics[width=.9\textwidth]{Fig3_8}
\caption{(a) Dispersion relation, (b) normalized loss, and (c) field distribution plots for in-plane polarization symmetric modes of a planar waveguiding structure \revision{with core width $c=1~\mu$m}, with orange and blue lines respectively denoting shear and longitudinal contributions to the total (green lines) field, which is continuous at interfaces. Note that the thicknesses of cladding layers $d$ differ slightly for various modes. Characteristics of the selected modes are described in details in the text.}
\label{Fig3}
\end{figure*}
In the following discussion, it is useful to draw from the analytical framework presented in \ref{AppendixA}, and treat \textit{S} (shear) and \textit{P} (longitudinal) components of the acoustic wave (denoted as $\mathbf{u}_s$ and $\mathbf{u}_p$) as if they were independent, and characterize regimes in which these components behave as evanescent (exponentially decaying away from interfaces in either direction), conventionally guided (in the inner cladding) or leaky waves. While, as we show below, the coupling between the \textit{S} and \textit{P} waves necessarily blurs the characteristics of these regimes, this approach is instructive in developing intuition about the in-plane polarization acoustic guidance.
The dispersion relations and loss of the symmetric, in-plane modes are shown in figure~\ref{Fig3}(a) and (b), respectively. The field distributions (separated into \textit{S} and \textit{P} components, given as the norms of the $\mathbf{u}_s$ and $\mathbf{u}_p$ fields defined in \ref{AppendixA}) of the modes marked A-F are shown in (c), chosen to best represent four distinct regimes of acoustic guidance. The first, denoted in figure~\ref{Fig3}(a) as \textit{conventional S} and
\textit{evanescent P} ($1<\ensuremath{n_\text{eff}}<v_\text{s}^{(1)}/v_\text{s}^{(2)}$), is characterized by real $\beta$, \ensuremath{k_\text{s}^{(2)}}, and imaginary \ensuremath{k_\text{s}^{(1)}} (indicating conventional \textit{S} guidance in the cladding), as well as purely imaginary transverse \textit{P} wave $k_p^{(i)}$'s (indicating \textit{surface states} with fields exponentially decaying away from the interface in the core and the outer layer). Two examples of such modes are shown in panels A and B in figure~\ref{Fig3}(c).
For $\ensuremath{n_\text{eff}} < 1$, $\beta$ becomes complex, and the shear components of the fields diverge exponentially outside of the structure as in the out-of-plane case (see panels C-F), as expected for the leaky \textit{S} modes. In particular, as \ensuremath{n_\text{eff}}~is reduced below $v_\text{s}^{(1)}/v_\text{p}^{(2)}\approx 0.89$, we first find the regime ($\ensuremath{n_\text{eff}}>v_\text{s}^{(1)}/v_\text{p}^{(1)}\approx 0.6$) in which \textit{P} components become conventionally guided in the inner cladding (see modes D and E). Modes found in this regime exhibit very particular characteristics, as they mix the conventional-like localization of the \textit{P} waves with the leaky-like exponential increase of displacement fields in the outer cladding layer (see blue lines in D and E) due to the negative imaginary component of $\ensuremath{k_\text{p}^{(1)}}$ (resulting from complex $\beta$, or equivalently, coupling to the \textit{S} waves). Furthermore, for particular geometric parameters the normalized loss of these modes (figure~\ref{Fig3}(b)) appears to dip towards 0 --- a behavior which we identify with the onset of the simultaneous anti-resonant guidance of \textit{S} waves (note the localization of \textit{S} waves represented by orange lines to the core in D and E) and conventional \textit{P} guidance.
Finally, further reducing \ensuremath{n_\text{eff}}~below $v_\text{s}^{(1)}/v_\text{p}^{(1)}$ brings us to the leaky \textit{S} and \textit{P} regime,
in which the displacement fields of both the \textit{P} and \textit{S} contributions oscillate in the core and in the inner cladding layer. We can therefore expect to find modes for which both the \textit{P} and \textit{S} waves build up an anti-resonance in the inner cladding, by approximately meeting the ARRAW condition given for \textit{S} waves in \eqref{arraw}. These conditions will not be fulfilled exactly, since the transverse wavenumbers \ensuremath{k_\text{s}^{(2)}} and \ensuremath{k_\text{p}^{(2)}} are complex, and the two components are coupled. Nevertheless, we can find ARRAW modes, such as the one shown in panel F, which meet our previous definition: they are characterized by a local minimum of loss, lie immediately below the $\ensuremath{n_\text{eff}}=v_\text{s}^{(1)}/v_\text{p}^{(1)}$ sound line, and exhibit a strong localization of both the \textit{P} and \textit{S} components in the core.
We should also note that the combination of Si/SiO$_2$ materials forbids the formation of modes with both \textit{S} and \textit{P} components which are conventionally guided, since the regions of effective mode velocities $(v_\text{s}^{(2)},v_\text{s}^{(1)})$ and $(v_\text{p}^{(2)},v_\text{p}^{(1)})$ do not overlap. Such regions could be found for other combinations of materials, e.g. Si/As$_2$S$_3$ or SiO$_2$/As$_2$S$_3$.
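This interval-overlap argument is easy to check numerically. The sketch below uses illustrative isotropic velocities (assumed values; the actual numbers depend on crystal orientation and material quality) to test whether the effective-velocity windows for conventional \textit{S} and \textit{P} guidance overlap:

```python
def conventional_sp_possible(vs, vp):
    """Check whether the intervals (v_s2, v_s1) and (v_p2, v_p1) of effective
    mode velocities overlap, i.e. whether a mode with both S and P components
    conventionally guided can exist. vs, vp are (core, cladding) pairs."""
    lo = max(vs[1], vp[1])   # larger of the two cladding velocities
    hi = min(vs[0], vp[0])   # smaller of the two core velocities
    return lo < hi

# Illustrative velocities in km/s (assumed).
Si    = {'vs': 5.33, 'vp': 8.9}
SiO2  = {'vs': 3.75, 'vp': 5.97}
As2S3 = {'vs': 1.4,  'vp': 2.6}

# Si core / silica cladding: windows do not overlap.
print(conventional_sp_possible((Si['vs'], SiO2['vs']), (Si['vp'], SiO2['vp'])))
# Si core / As2S3 cladding: windows overlap.
print(conventional_sp_possible((Si['vs'], As2S3['vs']), (Si['vp'], As2S3['vp'])))
```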
\section{ARRAW modes in cylindrical waveguides}
In order to bring the concept of ARRAW closer to applications in nonlinear optical systems, we now investigate ARRAW behavior in cylindrical waveguides, as shown schematically in figure~\ref{Fig1}(b). In these designs, the core (medium `1') of radius $a$ is surrounded by a cladding `2' of thickness $d$ and, as for layered waveguides, the core and outer semi-infinite layer `3' are made of the same material. The mathematical formulation of this problem is discussed in detail in \ref{AppendixB}, where we expand the fields into a basis of torsional (\textit{S} wave components only) and dilatational modes. For simplicity, we will consider only the azimuthally symmetric case ($m=0$), in which the two are decoupled, and the torsional modes have the azimuthal component $\mathbf{u} = u_\theta\hat{\bm{\theta}}$ only \cite{auld1973acoustic} (see \ref{AppendixB}). We thus arrive at a system much like that found in planar waveguides, where one family of modes (torsional) depends on the shear velocities only, while the other family (dilatational) mixes \textit{S} and \textit{P} components. We therefore expect to recover the optics-like ARROW characteristics of the former, and the ARRAW-like, complex modes of the latter.
\subsection{Torsional modes}
\label{section31}
\begin{figure*}[htbp!]
\includegraphics[width=.9\textwidth]{Fig4_5}
\caption{(a) Dispersion, (b) loss, and (c) field distribution plots of azimuthally symmetric ($m=0$) torsional modes of a cylindrical waveguide. Radius of the silicon core is set to 0.5~$\mu$m, and the cladding thickness $d$ (cladding region is marked with gray background in (c)) is normalized by the wavelength of \textit{S}-waves in the cladding material (silica) $\lambda_s^{(2)}\approx 0.25~\mu$m at the angular frequency of $2\pi\times15$~GHz. Field distributions represent conventionally guided \textit{S} (A,B), and leaky (C-E) modes.} \label{Fig4}
\end{figure*}
In figure~\ref{Fig4}(a) we present the dispersion of torsional modes, and identify two families of modes: \textit{conventional S}, with $1<\ensuremath{n_\text{eff}}<v_\text{s}^{(1)}/v_\text{s}^{(2)}$, and \textit{leaky S}, with $\ensuremath{n_\text{eff}}<1$. The azimuthal (and only) component of the displacement field is shown, for a collection of modes, in figure~\ref{Fig4}(c). For A and B, the conventionally guided \textit{S} modes are localized to the cladding layer, and decay exponentially in the outermost layer due to the purely imaginary transverse wavenumber $k_s^{(3)}=\rmi\kappa_s^{(3)}$ (with $\kappa_s^{(3)}>0$). For the leaky modes C-E, we find that the effective mode index \ensuremath{n_\text{eff}}~of all the modes increases with the cladding thickness $d$, until the dispersion crosses into the \textit{conventional S} guidance regime. In particular, by tracing the evolution of the branch with modes C and D, we see that as the dispersion approaches the $v_\text{s}^{(1)}$ sound line, oscillations in the cladding layer become more pronounced, and the anti-resonant response becomes significantly stronger, leading to substantial quenching of losses, as observed previously in optical ARROW systems.
\subsection{Dilatational modes}
\begin{figure*}[htbp!]
\includegraphics[width=.9\textwidth]{Fig5_3}
\caption{(a) Dispersion, (b) loss and (c) field distribution plots of azimuthally symmetric ($m=0$) dilatational modes of a layered cylindrical waveguide (structure and parameters are identical as in figure~\ref{Fig4}).}
\label{Fig5}
\end{figure*}
Dilatational modes share the fundamental dispersion characteristics of in-plane modes in planar waveguides, with the four regimes marked in figure~\ref{Fig5}(a) and illustrated in figure~\ref{Fig5}(c), mixing evanescent, conventional, and leaky characteristics of the \textit{S} (orange lines) and \textit{P} (blue lines) waves of the displacement field. \revision{Parameters of the waveguides considered in this calculation are identical to those discussed in section \ref{section31}.}
In particular, for $v_\text{s}^{(1)}/v_\text{p}^{(1)}<\ensuremath{n_\text{eff}}<v_\text{s}^{(1)}/v_\text{p}^{(2)}$ we again find the peculiar modes (C and D) characterized by very small loss, for which the \textit{P} waves have the dual characteristic of conventional-like localization to the inner cladding, and leaky-like exponential growth in the outer cladding. For these modes, \textit{S} waves are clearly localized through the anti-resonance to the core of the waveguide.
Furthermore, as for the in-plane modes of the planar waveguide, we expect to find the ARRAW modes right below the fastest sound velocity $v_\text{p}^{(1)}$ line, in the leaky \textit{S} and \textit{P} regime. While these modes are not clearly defined by a dip in loss, we can nevertheless identify ARRAW behavior in the field distributions of the mode. For example, mode E carries the characteristics of both the \textit{S} and \textit{P} components localized to the core, and oscillatory behavior of the components in the cladding layer.
The above-identified ARRAW modes would clearly not constitute very good acoustic waveguiding channels, as their normalized loss barely reaches $10^{-2}$ (see point F in figure~\ref{Fig3} and point E in figure~\ref{Fig5}). This is mostly due to the fact that the core layers of both the planar and cylindrical systems investigated to this point are too narrow to fit multiple wavelengths of the longer, \textit{P} acoustic waves ($\ensuremath{k_\text{p}^{(1)}} a, \ensuremath{k_\text{p}^{(1)}} c \ll \pi$). While their geometric parameters were chosen to provide insights into the different regimes of operation of layered acoustic waveguides, we can now consider systems with larger core thicknesses that will provide much better examples of ARRAW guidance, characterized by lower losses and stronger localization of the field to the core, for application in nonlinear Brillouin scattering.
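To make the claim about poor waveguiding quantitative, one can convert the normalized loss into a propagation length: the mode energy decays as $\exp[-2\,\mathrm{Im}(\beta)z]$, so the $1/e$ decay length is $1/[2\,\mathrm{Im}(\beta)]$. A short sketch, with a loss value of $10^{-2}$ read off the figures and an assumed core shear velocity:

```python
import numpy as np

vs1 = 5325.0                    # core shear velocity, m/s (assumed)
Omega = 2 * np.pi * 15e9        # 15 GHz
norm_loss = 1e-2                # normalized loss Im(beta)/k_{s,0}^{(1)}

ks0_1 = Omega / vs1
im_beta = norm_loss * ks0_1
L_prop = 1 / (2 * im_beta)      # 1/e energy decay length
print(f"{L_prop*1e6:.1f} um")   # only a few micrometers of propagation
```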
\section{Stimulated Brillouin Scattering in cylindrical ARRAWs}
In the previous section, we considered the acoustic response of cylindrical silicon/silica ARRAWs. Below, we show that such systems, slightly modified to suppress acoustic losses, can simultaneously support propagation of conventionally guided optical waves, and --- thanks to the co-localization of optical and acoustic excitations --- enable nonlinear Brillouin scattering of light propagating through the waveguide. We will focus on the particular case of \textit{Backward Stimulated Brillouin Scattering} (BSBS) \cite{Safavi-Naeini:19,wolff2015stimulated}. In BSBS, two counter-propagating optical modes with wavenumbers $k_i$ and frequencies $\omega_i$ ($i=1,2$) couple via scattering with an acoustic wave characterized by $\beta$ and $\Omega$. The general phase- and frequency-matching conditions
\begin{equation}\label{phase_matching}
k_1+\mathrm{Re}(\beta) = k_2,~\quad \omega_1 + \Omega = \omega_2,
\end{equation}
can be simplified if we consider the special case of intra-modal BSBS \cite{sipe2016hamiltonian}, in which the counter-propagating optical fields occupy the same mode. Furthermore, since the acoustic frequencies (up to tens of GHz) are much smaller than the \revision{ones corresponding to optical waves in the visible and IR range ($\omega_1\approx \omega_2 \sim 2\pi \times 3\times 10^{14}~\text{Hz}\gg\Omega$)}, we find a simple relationship between the acoustic and optical wavenumbers: $\beta\approx 2|k_1|$.
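The phase-matching relation $\beta\approx 2|k_1| = 4\pi \ensuremath{n_\text{eff}}^{\text{opt}}/\lambda$ fixes the acoustic frequency once the optical mode is chosen: $\Omega/2\pi = 2\,\ensuremath{n_\text{eff}}^{\text{opt}} v_\text{ac}/\lambda$, where $v_\text{ac}=\Omega/\mathrm{Re}(\beta)$ is the acoustic phase velocity. A minimal numerical sketch, with an illustrative optical effective index and an acoustic phase velocity near the longitudinal velocity of the core (both assumed values):

```python
# Intra-modal BSBS phase matching: beta ~ 2|k1| = 4*pi*n_opt/lambda,
# hence Omega/2pi = beta*v_ac/(2*pi) = 2*n_opt*v_ac/lambda.
n_opt = 1.71        # effective optical mode index (assumed)
lam = 2.02e-6       # optical wavelength, m (assumed)
v_ac = 8875.0       # acoustic phase velocity Omega/Re(beta), m/s (assumed)

freq_acoustic = 2 * n_opt * v_ac / lam
print(f"{freq_acoustic/1e9:.1f} GHz")   # ~15 GHz, in the range studied here
```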
The nonlinear Brillouin interaction can be used to transfer energy between the two propagating optical fields, at the rate determined by the Brillouin gain $\Gamma$ \cite{Wolff:14,wolff2015stimulated,eggleton2019brillouin,rakichPRX}. In a typical realization, Brillouin interaction amplifies the flux of energy of a weak Stokes optical field $\mathcal{P}^{(S)}$ co-propagating (for Forward SBS) or counter-propagating (BSBS) with respect to a much stronger pump field $\mathcal{P}^{(p)}$ according to $\mathcal{P}^{(S)}(z) = \mathcal{P}^{(S)}(0) \exp\left(\Gamma \mathcal{P}^{(p)} z\right)$. The Brillouin gain coefficient can be expressed as
\begin{equation}\label{Bgain2_maintext}
\Gamma = 4\omega_{1} \frac{Q_m \left|\ensuremath{\mathcal{Q}_1^{\textrm{(PE)}}}+\ensuremath{\mathcal{Q}_1^{\textrm{(MB)}}}\right|^2}{\mathcal{E}_b \mathcal{P}^{(1)}\mathcal{P}^{(2)}},
\end{equation}
where $\mathcal{E}_b$ denotes the energy density of the acoustic wave, and $\mathcal{P}^{(i)}$ describes the energy flux of optical mode $i$. \ensuremath{\mathcal{Q}_1^{\textrm{(PE)}}} and \ensuremath{\mathcal{Q}_1^{\textrm{(MB)}}} describe two physical processes governing the Brillouin interaction: the photoelastic effect and radiation pressure (referred to as the Moving Boundary, or MB effect), respectively \cite{florez2016brillouin}, localized in the bulk and at the {boundary} of the waveguide. We expect, and verify, that the ARRAWs will primarily enhance the former effect, quantified by the transverse overlap integral between the electric ($\mathbf{e}^{(i)}$) and acoustic ($\mathbf{u}$) fields:
\begin{equation}\label{Q1}
\ensuremath{\mathcal{Q}_1^{\textrm{(PE)}}} = -\varepsilon_0 \int \text{d}^2\mathbf{r}~\varepsilon_a^2 \sum_{ijkl} [{e}_i^{(1)}]^*{e}_j^{(2)} p_{ijkl} \partial_k u_l^* =\int_0^{\infty} \text{d}r \ensuremath{\tilde{\mathcal{Q}}_1^{\textrm{(PE)}}}(r),
\end{equation}
and the Pockels tensor ($p_{ijkl}$). The interaction due to the radiation pressure is typically negligible for waveguides with transverse sizes over 1~$\mu$m \cite{rakichPRX}. Finally, $Q_m$ is the mechanical quality factor, typically used in lieu of the propagation loss, and here defined by the real and imaginary components of the longitudinal acoustic wavenumber $Q_m=\mathrm{Re}(\beta)/[2\mathrm{Im}(\beta)]$.
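The quantities just defined combine in a simple way: since $Q_m=\mathrm{Re}(\beta)/[2\,\mathrm{Im}(\beta)]$, it can be written as the ratio of the effective mode index to twice the normalized loss, and the Brillouin gain then sets the Stokes amplification over a given length. A sketch with an assumed operating point (the gain coefficient is of the order reported in Table~\ref{Table1}):

```python
import numpy as np

def mechanical_Q(n_eff, norm_loss):
    """Q_m = Re(beta)/(2 Im(beta)), rewritten via the effective mode index
    n_eff = Re(beta)/k_{s,0}^{(1)} and normalized loss Im(beta)/k_{s,0}^{(1)}."""
    return n_eff / (2 * norm_loss)

# Illustrative ARRAW operating point (assumed values): a mode just below
# the v_s^(1)/v_p^(1) sound line with a normalized loss of ~1.65e-4.
print(round(mechanical_Q(0.6, 1.65e-4)))   # Q_m of order 1800

# Stokes amplification over an interaction length z for a pump power P_pump:
Gamma, P_pump, z = 113.4, 0.1, 0.05        # 1/(W m), W, m (assumed)
print(np.exp(Gamma * P_pump * z))          # Stokes power grows by ~1.8x
```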
For simplicity, we focus on the interaction between conventionally guided optical TM, TE and hybrid (HE/EH) modes localized to the core of the waveguide \cite{snyder2012optical}.
\begin{figure*}[htbp!]
\includegraphics[width=.9\textwidth]{Fig6_7}
\caption{Brillouin interaction between conventionally guided optical and acoustic ARRAW modes of a layered cylindrical waveguide. (a) Dispersion relation (blue lines) and mechanical quality factors $Q_m$ (dashed orange lines) of the highest-$Q_m$ ARRAW modes for increasing cladding thickness $d$ for the cylindrical structure with core radius $a=2~\mu$m and operating at frequency $\Omega/2\pi= 15~$GHz. (b) SBS gain $\Gamma$ calculated for the ARRAW modes identified in (a) mediating interaction between optical TM modes (top panel), TE (central panel) and hybrid HE and EH modes (bottom panel) phase-matched to the acoustic mode \eqref{phase_matching} and characterized by the lowest effective optical mode index $\ensuremath{n_\text{eff}}^{\text{opt}}$. (c) SBS interaction between the acoustic mode identified as corresponding to the maximum of $Q_m$ in (a) as A and the HE$_{17}$ optical mode. The three panels represent the displacement field separated into \textit{S} and \textit{P} components, the axial Poynting vector $P_z$ of the optical mode, and the dominant, real part of the overlap function $ \ensuremath{\tilde{\mathcal{Q}}_1^{\textrm{(PE)}}}(r) $ originating from the PE interaction defined in \eqref{Q1}. (d) SBS gain between the acoustic mode marked as A in (a) and the phase-matched HE modes of decreasing order (and increasing effective optical mode index) HE$_{17}$ through HE$_{12}$.}
\label{Fig6}
\end{figure*}
To optimize the BSBS gain, we look for parameters of the system that simultaneously maximize the mechanical quality factor $Q_m$ and the overlap term \ensuremath{\mathcal{Q}_1^{\textrm{(PE)}}}.
\subsection{Optimizing the mechanical quality factor}
The mechanical quality factor $Q_m$ is optimized by tuning the geometric parameters of the structure (core radius $a$ and cladding thickness $d$), and the mechanical frequency $\Omega$. Since the underlying physics of acoustic systems is linear, the modes are invariant under simultaneous rescaling of the geometric parameters and wavelength (or inverse frequency), and consequently, we can consider two of these three parameters as independent. Here, we fix $\Omega / 2\pi= 15$ GHz, and optimize $a$ and $d$. For clarity, in figure~\ref{Fig6}(a) we show the dispersion relation and mechanical quality factors of the highest-$Q_m$ ARRAW modes lying right below the last sound line ($\ensuremath{n_\text{eff}} \leq v_\text{s}^{(1)}/v_\text{p}^{(1)}$), as a function of the cladding thickness $d$. \revision{For each of the ARRAW modes denoted with solid blue lines, we identify a resonant dependence of $Q_m$ on $d$. These features clearly point to the resonances in the reflection within the cladding layer, and the ARRAW nature of the guidance.}
Since we aim to optimize the waveguiding properties of the structure, we consider larger core radii $a=2~\mu$m, which offer mechanical quality factors over 1000 (structures with different core radii are analyzed in Table~\ref{Table1}). As the maximum $Q_m$ increases with the core radius (see results in Table~\ref{Table1}), it is tempting to favour the larger core geometries. However, we should note that our calculations do not account for the viscous losses in the material, which typically limit the $Q_m$ to the order of $\sim 1000$, and thus suppress the advantage of eliminating the acoustic radiative dissipation channels. Furthermore, it was shown in previous reports that the PE contribution to SBS gain exhibits an approximate $a^{-2}$ scaling with the transverse dimension of the waveguide \cite{rakichPRX}.
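The rescaling invariance invoked above can be checked directly: dimensionless products such as $(\Omega/v)\,d$, which fix the mode structure, are unchanged when the frequency is scaled up and the geometry scaled down by the same factor. A minimal sketch, with an assumed cladding shear velocity:

```python
import numpy as np

vs2 = 3750.0                       # cladding shear velocity, m/s (assumed)

def k_times_d(freq, d):
    """Dimensionless product (Omega/v)*d that fixes the mode structure."""
    return 2 * np.pi * freq / vs2 * d

s = 2.0                            # scale frequency up, geometry down
print(np.isclose(k_times_d(15e9, 0.5e-6),
                 k_times_d(s * 15e9, 0.5e-6 / s)))   # True
```

This is why, for instance, the $a=1~\mu$m structure at 30~GHz in Table~\ref{Table1} reproduces the $Q_m$ of the $a=2~\mu$m structure at 15~GHz.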
\subsection{Overlap term \ensuremath{\mathcal{Q}_1^{\textrm{(PE)}}}}
The particular choice of the optical mode which can undergo BSBS coupling via a selected ARRAW mode is predominantly limited by the phase-matching condition \eqref{phase_matching}, which dictates the magnitude of the longitudinal wavenumber $k_1$ of the optical mode. Therefore, the only parameters which we can optimize here are the type of optical mode (TE, TM or hybrid), and its effective mode index $\ensuremath{n_\text{eff}}^{\text{opt}}$. We first focus on the lowest $\ensuremath{n_\text{eff}}^{\text{opt}}$ (and high-order) modes supported by the core, and later (figure~\ref{Fig6}(d)) explore the dependence on the order of the optical mode. In particular, in figure~\ref{Fig6}(b) we analyze the BSBS gain coupling between these three types of modes (TM$_4$, TE$_4$ and EH$_{17}$/HE$_{17}$) as a function of cladding thickness --- a parameter which should have little effect on the optical guidance, but governs the ARRAW behavior. We find that the gain closely follows the dependence of the quality factor $Q_m$, suggesting that neither the overlap integral \ensuremath{\mathcal{Q}_1^{\textrm{(PE)}}} nor the normalization factors $\mathcal{E}_b$ and $\mathcal{P}^{(i)}$ change significantly with $d$.\\
\fulltable{\label{Table1}Backwards SBS gain between acoustic ARRAW modes and optical modes of a silicon/silica/silicon cylindrical waveguide shown in figure~\ref{Fig1}(b). For each waveguide geometry and acoustic frequency $\Omega$ we choose the mode with maximum mechanical quality factor $Q_m$ and calculate BSBS gain coefficient $\Gamma$ (given in \ensuremath{1/(\text{Wm})}) governing Brillouin coupling between TM$_l$, TE$_l$, HE$_{1l}$ and EH$_{1l}$ optical modes. Optical wavelengths $\lambda$ are given for reference, and setups for which the gain is smaller than 5~\ensuremath{1/(\text{Wm})} are dismissed.}
\br
geometry & \centre{2}{acoustics} & \centre{12}{optics} \\
\crule{1} & \crule{2} & \crule{12} \\
&& & \centre{3}{TM$_l$} & \centre{3}{TE$_l$} & \centre{3}{HE$_{1l}$}& \centre{3}{EH$_{1l}$} \\
$a/\revision{d}$ & $\Omega/2\pi$ & $Q_m$ & $l$ & $\lambda$ & $\Gamma$ & $l$ & $\lambda$ & $\Gamma$ & $l$ & $\lambda$ & $\Gamma$ & $l$ & $\lambda$ & $\Gamma$ \\
($\mu$m) & (GHz) && & ($\mu$m) & & & ($\mu$m)& & & ($\mu$m)& & & ($\mu$m)& \\
\mr
1.5/2.82 & 15 & 570 & 4 & 1.89 & 23.5 & 4--0 & -- & $<5$ & 7 & 2.03 & 42.6 & 7--1 & -- & $<5$ \\
& & & 3 & 2.19 & 44.3 & & & & 6 & 2.39 & 37.8 & & & \\
& & & 2 & 2.62 & 28.1 & & & & 5 & 3.50 & 5.2 & & & \\
& & & 1 & 3.17 & 10.8 & & & & 4--2 & -- & $<5$ & & & \\
& & & 0 & 3.80 & $<5$ & & & & & & & & & \\\mr
2/1.2 & 15 & 1815 & 6 & 1.83 & 46.5 & 6 & 1.85 & 7.6 & 7 & 1.92 & 90.2 & 7 & 1.96 & 8.5 \\
& & & 5 & 2.02 & 110.9 & 5 & 2.08 & 6.6 & 6 & 2.15 & 113.4 & 6 & 2.22 & 6.5 \\
& & & 4 & 2.29 & 99.1 & 4--0 & -- & $<5$ & 5 & 2.45 & 81.9 & 5--1 & -- & $<5$ \\
& & & 3 & 2.63 & 62.8 & & & & 4 & 2.83 & 43.3 & & & \\
& & & 2 & 3.04 & 30.8 & & & & 3 & 3.27 & 17.9 & & & \\
& & & 1 & 3.50 & 11.6 & & & & 2 & 3.74 & 5.4 & & & \\
& & & 0 & 3.94 & $<5$ & & & & 1 & -- & $<5$ & & & \\ \mr
2.5/1.74 & 15 & 4247 & 7 & 1.94 & 177.9 & 7 & 1.97 & 13.0 & 9 & 1.86 & 121.2 & 9 & 1.88 & 15.0 \\
& & & 6 & 2.13 & 195.8 & 6 & 2.18 & 10.5 & 8 & 2.03 & 200.5 & 8 & 2.07 & 12.5 \\
& & & 5 & 2.36 & 151.2 & 5 & 2.42 & 7.9 & 7 & 2.24 & 175.5 & 7 & 2.30 & 9.6 \\
& & & 4 & 2.64 & 98.7 & 4--0 & -- & $<5$ & 6 & 2.49 & 122.6 & 6 & 2.87 & 5.1 \\
& & & 3 & 2.96 & 55.6 & & & & 5 & 2.79 & 73.1 & 5--1 & -- & $<5$ \\
& & & 2 & 3.32 & 26.8 & & & & 4 & 3.14 & 37.5 & & & \\
& & & 1 & 3.70 & 10.8 & & & & 3 & 3.52 & 15.9 & & & \\
& & & 0 & 4.01 & $<5$ & & & & 2 & 3.87 & 5.6 & & & \\
& & & & & & & & & 1 & -- & $<5$ & & & \\ \mr
1/1.14 & 30 & 1815 & 6 & 0.91 & 294.7 & 6 & 0.92 & 58.7 & 7 & 0.96 & 704.8 & 7 & 0.98 & 65.2 \\
& & & 5 & 1.01 & 891.4 & 5 & 1.04 & 54.8 & 6 & 1.07 & 888.3 & 6 & 1.11 & 49.2 \\
& & & 4 & 1.15 & 793.4 & 4 & 1.18 & 40.0 & 5 & 1.23 & 628.6 & 5 & 1.27 & 34.3 \\
& & & 3 & 1.31 & 492.8 & 3 & 1.36 & 27.2 & 4 & 1.41 & 338.7 & 4 & 1.46 & 23.0 \\
& & & 2 & 1.52 & 240.8 & 2 & 1.56 & 18.0 & 3 & 1.64 & 140.2 & 3 & 1.68 & 15.1 \\
& & & 1 & 1.75 & 93.5 & 1 & 1.78 & 12.0 & 2 & 1.87 & 42.1 & 2 & 1.91 & 8.9 \\
& & & 0 & 1.97 & 25.2 & 0 & 1.98 & 8.1 & 1 & 2.04 & 16.6 & 1 & -- & $<5$ \\ \br
\endfulltable
The orange dot in the plots in figure~\ref{Fig6}(a,b) denotes the parameters of optical and acoustic modes analyzed in detail in figure~\ref{Fig6}(c). The radial profile of the acoustic field, shown in the top panel of figure~\ref{Fig6}(a), indicates significant localization to the core --- as expected for large-$Q_m$ ARRAW modes. Together with the core-localized optical mode (middle panel), this results in a strong localization of the \ensuremath{\tilde{\mathcal{Q}}_1^{\textrm{(PE)}}}(r) overlap integral kernel shown in the bottom panel.
To further enhance the Brillouin gain, we can consider changing the geometric parameters of the waveguide (e.g. core radius $a$), operating mechanical frequency $\Omega$, or explore coupling to different orders of the optical modes. We provide a comparison of selected BSBS ARRAW systems in Table~\ref{Table1}, and find a few guiding principles for designing such systems:
\begin{itemize}
\item as reported by Rakich \etal \cite{rakichPRX}, the smaller cross section waveguides yield larger BSBS gain --- however, this principle trades off against the sharp decrease in mechanical quality factor for small core radii; simultaneous increase of mechanical frequencies (e.g. towards $30~$GHz frequency) should allow us to retain high $Q_m$'s due to the linear nature of the acoustic physics, but the overall gain would likely become suppressed by the increased non-radiative acoustic losses at higher frequencies,
\item dependence on the order of optical mode (or effective optical mode index) is not monotonic, and in fact is the smallest for the most homogeneous field of the lowest order mode; this dependence is shown, for a number of hybrid modes from HE$_{17}$ to HE$_{12}$, in figure~\ref{Fig6}(d).
\end{itemize}
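The practical payoff of the tabulated gain coefficients can be estimated with the standard undepleted-pump relation for small-signal backward SBS, $G=\exp(\Gamma P_p L)$ (an assumed textbook relation, not derived in this text). A minimal sketch, using illustrative pump power and device length together with the largest entry of Table~\ref{Table1} (TM$_5$ at 30~GHz, $\Gamma\approx 891.4$~\ensuremath{1/(\text{Wm})}):

```python
import math

def sbs_gain_db(gamma_per_Wm, pump_power_W, length_m):
    """Small-signal backward-SBS amplification in dB, assuming the
    undepleted-pump exponential relation G = exp(Gamma * P_p * L)."""
    return 10.0 * math.log10(math.exp(gamma_per_Wm * pump_power_W * length_m))

# Illustrative operating point (assumed, not from the text):
# 25 mW pump over a 1 cm device, with Gamma = 891.4 1/(W m) from Table 1.
print(round(sbs_gain_db(891.4, 25e-3, 0.01), 2))  # ~1 dB of signal gain
```

Since the gain in dB is linear in $\Gamma$, $P_p$ and $L$, the tabulated coefficients translate directly into amplification figures for any assumed pump power and interaction length.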
\subsection{Comparison to other BSBS waveguides}
It is instructive to compare the nonlinear performance of the investigated ARRAW to other BSBS waveguiding systems. The mechanical quality factor $Q_m$ of our structure can easily reach $10^3$, which is a result comparable to that found in suspended waveguides, in which the radiative dissipation of acoustic waves is suppressed, and $Q_m$ becomes limited by the intrinsic viscous losses in silicon \cite{schmidt19,van2015interaction}. Furthermore, the maximum Brillouin gain found in our system (nearly 1000~\ensuremath{1/(\text{Wm})}) is of a similar order to that reported in GHz BSBS systems, including those based on sub-micron silicon slot waveguides where the optoacoustic interaction is dominated by radiation pressure \cite{doi:10.1063/1.4955002}, or relying on materials with different optoacoustic properties, such as chalcogenides \cite{Pant:11}.
\subsection{Outlook}
Our simple designs can be further modified to better suit integrated platforms by considering finite outer-cladding layers or multiple anti-resonant layers. The principles of ARRAW guidance can also be combined with other mechanisms, for example by using the anti-resonant reflection to suppress the acoustic dissipation from exposed cores of rib waveguides into the substrate. Alternatively, the entire design could be reversed to implement optical anti-resonant guidance and conventional acoustic TIR.
\section{Summary}
We have proposed a new type of multilayered optoacoustic Anti-Resonant Reflecting Acoustic Waveguide capable of supporting the simultaneous and co-localized guidance of GHz acoustic and near-IR optical signals. While the optical waves are TIR-guided in the high-refractive index core, the acoustic waves are localized to the core by anti-resonant reflection in the inner cladding layer. This mechanism can be harnessed to efficiently suppress the dissipation of acoustic waves into the outermost layers, and enable efficient Brillouin scattering between the counter-propagating optical waves. Our estimates indicate that silicon/silica ARRAWs can match the record performance of backward stimulated Brillouin scattering in silicon/silica platforms without relying on sub-micron confinement of fields or interactions induced by radiation pressure.
\ack
The authors acknowledge funding from the Australian Research Council (ARC) (Discovery Project DP160101691) and the Macquarie University Research Fellowship Scheme (MQRF0001036).
\section{Introduction}\label{sec1}
First, the absence of non-metrical nomenclature in our title is
somewhat misleading, and is intended not to discourage potential
readers cultivating a broader perspective. Some modest working
experience seems to indicate that non-metric manifolds are not
studied in autarchy from the metric ones, but rather as an
excrescence of them. Several
truths
transcend the metrical barrier with more ease than our
psychological apprehension. Much of this plasticity originates
from the metrical impulse ({\it big bang}), and proving a
universal statement oft requires working out its metric
version first (reminding somehow what Cherry calls the
vertical structure of mathematics).
By the way the jargon ``non-metric manifolds'' (like
non-Hausdorff manifolds, etc.) is often futile (at least
cumbersome), whenever causing an artificial subdivision of
statements holding true in
a unified setting.
\iffalse Thus we shall omit the adjective ``non-metric''
whenever the omission does not lead to a faulty statement. Now
let us come
concretely to our subject.
\fi
Trying to interpret some (existential?)
incantation (of D. Hilbert) ``{\it Wir m\"ussen wissen. Wir
werden wissen.}'' the ultimate verdict would probably involve
an
infiltration
of
non-metric manifolds into the real world (assuming its
existence, of course) via some geometric modelling (like
perhaps, quantum gravity of strings
or cytoplasmic
vibrations of living
beings). Needless-to-say we have
no serious idea on how to work this out concretely (yet see
Section~\ref{Gravitation:sec} for some toy examples). At any
rate mathematically, it
may look
puzzling that the continuum and the non-denumerable are well
tolerated in the small, yet not
very popular in the large.
This is surely a
too severe caricature, as non-metric manifolds enjoy a
respectable theory
(if not a drastic
{\it renouvellement des mat\'eriaux} in R.
Thom's prose) originating
with the
seminal discoveries of:
$\bullet$ Cantor 1883\footnote{For references regarding the
following chain of items, see the bibliographies of
\cite{BGG1}, \cite{GabGa_2011}.}, Hausdorff 1915, Vietoris
1921, Alexandroff 1924: {\it long ray} and {\it long line}
(the first found non-metric manifolds, yet maybe not the
simplest to visualize),
$\bullet$ Pr\"ufer--Rad\'o 1922--1925, R.\,L. Moore
1930--1942, Calabi-Rosenlicht 1953: construction of perfectly
geometric non-metric surfaces, whose prototype is the
so-called {\it Pr\"ufer surface} discovered near the end of
1922,
$\bullet$ H. Kneser and son M. Kneser: classification of
Hausdorff $1$-manifolds into 4 species (or 7 in the bordered
case) (1958),
real-analytic structures on the long 1-manifolds (1960), and a
3-manifold foliated by a {\it unique}
2D-leaf (1960, 1962),
$\bullet$ M.\,E. Rudin 1974, Zenor 1975: first set-theoretical
independence result involving the concept of {\it perfect
normality} (question of Alexandroff-Wilder),
$\bullet$ Nyikos:
bagpipe structures 1984, smoothings of 1-manifolds 1989, long
cytoplasmic expansions
of surfaces hybridizing
Cantor and Pr\"ufer
1990,
$\bullet$ Cannon 1969 \cite{Cannon_1969}: extension of Jordan
and Schoenflies to non-metric surfaces and an almost empty,
yet not completely nihilist, study of quasi-conformal
structures \`a la Gr\"otzsch 1928-Lavrentieff 1929-Ahlfors
1935-Teichm\"uller 1938,
$\bullet$ Gauld: independence result for powers, 125
equivalent criteria for the metrizability of a manifold,
phagocytosis principle \`a la Morton Brown: any countable
subset of a manifold is contained in a cell ($\approx$ chart
$\approx{\Bbb R}^n$),
$\bullet$ Baillif: homotopical aspects,
\noindent More modest recent contributions include:
$\bullet$ Baillif-Gabard-Gauld 2008 \cite{BGG1}: foliated
rigidity in some long manifolds with a cylindrical structure
``squat$\times$long ray'',
$\bullet$ Gabard-Gauld 2010 \cite{GaGa2010}: re-exposition of
the Jordan-Schoenflies aspects of Cannon 1969,
$\bullet$ Gabard-Gauld 2011 \cite{GabGa_2011}: elementary
study of dynamical flows on surfaces mostly, yet with many
loose ends.
\iffalse Besides there are many brilliant expository sources
like Spivak, 1970, Milnor, 1970, etc. whereas several articles
by Nyikos provide a more serious
genealogy of the subject. \fi
The present paper is essentially a foliated-dual to the latter
article \cite{GabGa_2011}. Whereas in the flow-case {\it
dichotomy} (every Jordan curve separates) is---since
Poincar\'e-Bendixson---a clear-cut barrier
to {\it transitivity} (dense trajectory), the foliated case
presents a more subtle landscape modulated by the ``size'' of
the fundamental group and its obstructive influence upon
foliated-transitivity (existence of a dense leaf). Precisely
when we navigate at low temperatures, say $\pi_1$ of
low rank ($0\le r\le 3$): when $0\le r\le 1$ the
situation is completely frozen (intransitive). As the rank
increases to $2$ or even $3$ the marmalade starts its
ebullition in the liquid phase (with pockets of intransitivity
still resisting, yet under progressively rarefying
circumstances controlled by the topology, cf.
Figure~\ref{monolith:fig}). Finally as the rank reaches values
$\ge 4$ then we live in the volatile-gaseous
regime, where {\it any metric surface is transitive}.
Non-metric extensions take the following form:
{\it frozen-intransitive
configurations remain frozen} when imbedded into the cosmic
freezer
of non-metric manifolds, whereas of
course the
reverse engineering fails, as putting something liquid or
gaseous in the non-metrical fridge may well create a frozen
lollypop. This happens for instance to the long plane ${\Bbb
L}^2$, which punctured as often as you please, still remains
intransitive, e.g. by the foliated rigidity previously
mentioned \cite{BGG1}.
The above metaphoric trichotomy (3 phases delineated by the
rank $r$ of $\pi_1$) quantifies somehow the well-known
principle that simple topology impedes complicated dynamics
both for {\it flows} (continuous ${\Bbb R}$-actions) as for
{\it foliations} (geometric structures
microscopically modelled
after the
slicing of a number-space ${\Bbb R}^n$ into parallel
$p$-planes). Of course the range of the principle is
primarily two-dimensional. For instance $S^3\times S^3$ admits
a {\it minimal} (=all orbits dense) smooth flow (probably
rather chaotic) furnished by a non-constructive Baire type
argument of Fathi-Herman (1977).
In an earlier paper \cite[p.\,5]{GabGa_2011},
we advanced the naive
speculation that positive curvature
obstructs the presence of a minimal flow on a closed manifold.
If true,
the impact is rather gigantic: first all spheres
lack
minimal flows (Gottschalk conjecture of 1958, still open) and
$S^3\times S^3$
lacks positive curvature (a still older
question of Heinz Hopf from the 1930's).
Back to the
more down-to-earth two-dimensionality, the
prototype for the above principle is the Poincar\'e-Bendixson
theory,
primarily based on the Jordan separation theorem. The latter
holds not merely in the plane ${\Bbb R}^2$, but in any planar
(schlichtartig) surface.
In fact Jordan separation holds true
non-metrically in simply-connected surfaces (see Gabard-Gauld
2010 \cite{GaGa2010}, and also R.\,J. Cannon 1969
\cite{Cannon_1969}).
To reach the ultimate generality one can
adopt Jordan separation as an ``axiom''
specifying the class of {\it dichotomic} surfaces and derive
the following (via the classical Poincar\'e-Bendixson
trapping-bag argument):
\begin{lemma}\label{Poinc-Bendix:flows} A dichotomic surface
is flow-intransitive
(no dense orbit).
\end{lemma}
This applies for instance to the {\it doubled Pr\"ufer
surface}\footnote{
In our opinion, the best thing to do is (following e.g.,
Nyikos 1984 \cite{Nyikos84}) to define first the {\it bordered
Pr\"ufer surface} $P$ through a purely geometric process (e.g.
like in \cite{Gabard_2008} and the references therein esp.
R.\,L. Moore (1942), and Bredon's book), and then deduce
various versions via the operation of collaring, doubling or
folding; yielding resp. the classical Pr\"ufer surface $P_{\rm
collar}$ (appearing first in print in Rad\'o 1925
\cite{Rado_1925}, yet discovered accidentally near the end of
1922 by H. Pr\"ufer), the Calabi-Rosenlicht surface $2P$ (1953
\cite{Calabi-Rosenlicht_1953}), and the Moore surface
$M=P_{\rm folded}$ (1942 in print, yet discovered earlier
$\approx 1930$, cf. the historiography in \cite{GabGa_2011}).
As pointed out by Daniel Asimov, the drawback of our
terminology is that it
boosts Pr\"ufer's credit vs. Calabi-Rosenlicht
contribution. Yet we feel that the bordered viewpoint is very
convenient, reducing to a single (geometric) process the
generating mode of all those manifolds. Maybe Pr\"ufer's short
life justifies anyway some little
distortion.} $2P$, considered in Calabi-Rosenlicht 1953
\cite{Calabi-Rosenlicht_1953} (cf. also Figure~\ref{Train:fig}
for an intuitive picture and \cite[5.5]{GabGa_2011} for a
proof of $2P$'s dichotomy). The same
intransitivity as in (\ref{Poinc-Bendix:flows})
fails
when it comes to foliations. Indeed a
noteworthy example of Dubois-Violette 1949 \cite[p.\,897,
Point 4.]{Dubois-Violette_1949}, smoothly rediscovered in
Franks 1976 \cite{Franks_1976}, or Rosenberg 1983
\cite[p.\,29, V.\,Rem.\,2)]{Rosenberg_1983}, foliates the
thrice-punctured plane
${\Bbb R}^2_{3*}$ by dense leaves.
This is manufactured from a foliated disc with two thorns
singularities glued with a replica
after an irrational
rotation (Figure~\ref{Dubois:fig}). Alternatively it can be
regarded as the quotient of
Kronecker's irrational winding of the torus divided by the
(hyper)-elliptic involution.
\begin{figure}[h]
\centering
\epsfig{figure=Dubois.eps,width=122mm}
\caption{\label{Dubois:fig}
Some labyrinths in quadruply-connected domains of the plane}
\end{figure}
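The density of leaves in such labyrinths ultimately rests on the classical fact that an irrational rotation of the circle has dense (indeed equidistributed) orbits, the phenomenon behind Kronecker's irrational winding. A quick numerical illustration (the golden-ratio rotation number is merely a convenient irrational choice):

```python
import math

def largest_gap(alpha, n):
    """Largest gap left on the circle R/Z by the first n points of the
    orbit k*alpha mod 1 of the rotation by alpha."""
    pts = sorted((k * alpha) % 1.0 for k in range(n))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(pts[0] + 1.0 - pts[-1])  # wrap-around gap
    return max(gaps)

alpha = (math.sqrt(5.0) - 1.0) / 2.0  # irrational rotation number
for n in (10, 100, 1000):
    print(n, largest_gap(alpha, n))
```

The largest uncovered gap shrinks like $1/n$, so the orbit visits every arc, i.e. is dense; for a rational rotation number $p/q$ the gap stalls at $1/q$, reflecting the periodic (non-dense) orbit.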
As recently observed by D. Gauld (in
BGG2~\cite[Prop.\,3.1]{BGG2}), the stronger
simple-connectivity impedes a dense leaf. Indeed a
localized perturbation of the foliation (akin to the closing
lemma paradigm)
creates a closed leaf (cf. Figure~\ref{David's_trick},
Case~1), which
bounds a disc by Schoenflies (an absurdity
as it is foliated). Here we use the universal (non-metric)
version of Schoenflies presented in \cite{GaGa2010} (also
implicit in Cannon 1969 \cite{Cannon_1969}).
In the light of this remark of Gauld, our immediate motivation
was two-fold:
(1) adapt the
Haefliger-Reeb theory (1957
\cite{Haefliger_Reeb_1957}) describing foliated structures on
the plane ${\Bbb R}^2$ to any simply-connected (non-metric)
surfaces by a purely formal repetition of their arguments,
(2)
exploit this general theory to deduce a somewhat rather
special result, saying that {\it surfaces whose (fundamental)
groups $\pi_1$ are infinite cyclic ${\Bbb Z}$ also lack
transitive foliations}. This should have involved the
universal cover, yet a more Poincar\'e-Bendixson like method
turned out to be more efficient.
Since the torus or punctured torus (with $\pi_1={\Bbb Z}^2$,
resp. $F_2$ free of rank 2) admit {\it minimal} foliations
(all leaves dense), this
exhibits ${\Bbb Z}$ as the largest possible (fundamental)
group impeding
a transitive
foliation.
During the process of aping non-metrically Haefliger-Reeb
(especially the issue that a leaf in a foliated plane divides)
we
encountered a separation theorem generalizing the separation
by a Jordan curve (embedded circle). Specifically any
hypersurface which is closed as a point-set (a {\it divisor}
for short) in a
simply-connected manifold
(of arbitrary dimension) separates the
manifold (\ref{Riemann-separation}). This
can be deduced from a trick \`a la Riemann attaching to any
divisor $H$ in a manifold a
double (unramified) cover polarized along $H$. Intuitively,
this covering
consists of electrically charged particles, switching their
charge signs whenever they cross the hypersurface.
({\it Warning:} This
does not reprove the classical Jordan curve theorem
as our
hypersurfaces satisfy a local flatness condition, not a priori
known for
``wild'' Jordan curves in the plane, but true a posteriori via
Schoenflies.)
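The sign-switching picture behind the polarized double cover can be mimicked in a toy computation, with the line $y=0$ in the plane standing in for the hypersurface $H$ (an illustrative stand-in, not the general construction): a path lifts to the cover by flipping a $\pm1$ charge at each transverse crossing, and two endpoints lie in different components of the complement exactly when every path between them ends with flipped charge.

```python
def end_charge(path):
    """Propagate a +/-1 "charge" along a polygonal path, flipping it at
    each transverse crossing of the hypersurface y = 0 (vertices are
    assumed to avoid y = 0, so every crossing is transverse)."""
    charge = 1
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        if y0 * y1 < 0:  # the segment crosses the hypersurface
            charge = -charge
    return charge

same_side = [(0, 1), (2, 2), (4, 1)]          # stays in the upper half-plane
across = [(0, 1), (2, -1), (4, 1), (5, -2)]   # ends on the opposite side
print(end_charge(same_side), end_charge(across))  # prints: 1 -1
```

In particular, every closed loop returns with charge $+1$, reflecting that the double cover is disconnected over the complement, i.e. that the line separates the plane; a non-separating hypersurface would admit a loop with odd crossing number.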
Then questions followed quite naturally, leading to a slightly
broader perspective which we shall now try to review. Of
course all results must be fairly classical in the metric case
(albeit as yet we were not very assiduous in locating
references). Indeed even at the metric level our exposition
contains some lacunae (maybe the most acute one being our
inaptitude to check the foliated-intransitivity of the
twice-punctured Klein bottle!), and we
hope to manufacture a sharper version in the near future
(after some editorial duties).
\subsection{Overview
and methods}
Methodically, we can distinguish two trends
relying either on the Schoenflies bounding disk property or on
the weaker Jordan separation. From the
qualitative viewpoint, the former forbids any recurrence to a
foliated chart whereas the latter permits only
moderate recurrences
for {\it oriented} foliations. Coupled with the
double cover
induced by a non-orientable foliation
(\ref{orienting:2-fold-covering}) this allows
in some favorable situations to draw general conclusions
regarding {\it all} foliations.
\iffalse
(A) Either one uses the full strength of simple-connectivity
(and the Schoenflies theorem) or one
(B) Uses only the weaker dichotomy (i.e.
separation by Jordan curves) which albeit weaker still restricts
the foliated dynamics in the case of oriented foliations which
behave similar to flows, and to which we may apply the
Poincar\'e-Bendixson trapping argument.
\fi
The first method (mostly suggested by Gauld) gives the
following
repetition of Haefliger-Reeb's
results:
(1) {\it In a (non-metric) simply-connected surface, each leaf
appears at most once in a fixed foliated chart}
(\ref{Alex_separation})\footnote{Cross-reference to the
main-body of the text.}. Like in the metric case, this issue
is the
pillar of a non-metric Haefliger-Reeb theory.
Consequently, the leaf-space is a (non-Hausdorff)
$1$-manifold
and leaves are closed as point-sets (but of course open as
manifolds). The complete absence of recurrence forbids
transitivity. Any leaf is locally flat and thus using
Riemann's covering trick (\ref{Rieman-polarized-cover}) it
divides the surface (\ref{Alex_separation}(d)). As a corollary
the leaf-space is a simply-connected $1$-manifold.
Via the second method based on the
weaker Jordan separation, we get the following results
using the Poincar\'e-Bendixson method:
(2) {\it In a dichotomic surface, an oriented foliation cannot
have a dense leaf, nor can a finite union of leaves be dense}
(\ref{Poinc-Bendixson_many}). This follows by examination of
the returns of a leaf to a foliated chart, which occur in an
orderly fashion. Since foliations of simply-connected
manifolds are orientable (\ref{orienting:2-fold-covering}),
this reproves the intransitivity of such surfaces (without
Schoenflies).
Besides,
{\it surfaces with infinite cyclic group lack transitive
foliations} (\ref{infinite-cyclic-group}). Our proof uses
Jordan
separation
in pseudo-cylinders, namely {\it orientable surfaces with
$\pi_1={\Bbb Z}$ are dichotomic}
(\ref{dichotomy:orient_plus_inf_cycl}). (This
fails without orientability as shown
by the M\"obius band.)
When
married with Riemann's trick of branched coverings (familiar
in complex function theory, yet pleasant to see at work in the
foliated context), the Poincar\'e-Bendixson method gains
more swing. For instance,
{\it
dichotomic
surfaces with $\pi_1$ free of rank $2$ lack transitive
foliations} (\ref{dichotomic-free-of-rank-2:prop}). This
shows the sharpness of Dubois-Violette's example:
$3$ is the minimal number of punctures in the plane
to manufacture a transitive foliation (labyrinth). To complete
the picture we also notice a non-orientable version: {\it a
non-orientable surface with $\pi_1$ free of rank $2$ is
foliated-intransitive}
(\ref{rank_two_non-orientable:intransitive}), plus some
sporadic obstructions in rank $3$
(\ref{intransitivity:new-obstruction}).
(Here is missing the issue with the Klein bottle twice
punctured, that we already confessed!)
All these results are first established metrically (with a
pivotal reliance on \Kerekjarto's cylindrical ends
(\ref{Kerkjarto:end}), as a recipe to
compactify
metric surfaces of finite-connectivity). The non-metrical
boosting involves a Lindel\"of exhaustion with calibrated
fundamental groups (\ref{calibrated-exhaustions:lemma}) (yet
another application of Schoenflies) amounting to fill in the
holes of a $\pi_1$-epimorphic subregion to adjust its group to
that of the ambient surface. The logical flattening of
the details frequently sidetracked us into purely topological
considerations, which were ultimately collected in the first
section.
Despite the abundance of details,
the underlying
metabolism remains rather
basic:
\begin{principle}[Freudo-Lindel\"ofian Anschauung transfer--FLAT]
What\-soever you are able to see of a non-metric manifold
(which is a sort of
quantum-plasma in ebullition) it is
(in first approximation) its Lindel\"of subregions (those
truly accessible to the ``Anschauung'') which govern both the
qualitative ``analysis situs'' (Jordan, Schoenflies,
orientability, dichotomy, fundamental group, etc.) as well as
the foliated (or dynamical) destiny of the whole.
\end{principle}
Little corrections are required when a truly non-metric
phenomenology is prompted by a particular
manifold. Yet this is really a second stratum of sophistication
not affecting
tremendously the generic
value of the first principle.
Beside those geometrical
methods, we have also
a point-set obstruction:
(3) A transitive one-dimensional foliation (abridged
$1$-foliation) of an $n$-manifold $M^n$ with $n\ge 2$ implies
{\it separability}\footnote{Existence of a countable dense
subset.} of the underlying $M^n$ (\ref{separability}). In fact
more is true:
any chaotic behaviour of a one-dimensional leaf is caused
by one of its metrical short ends, whereas long
sides of leaves (being sequentially-compact) are always
``decently'' {\it properly} embedded (the leaf-topology
matches with the relative topology) (\ref{long-semi-leaf}).
The above results reflect, so to speak,
qualitative
features of foliations on some classes of topologically
particularized surfaces. A dual aspect is the {\it
quantitative theory}, asking for a
classification of foliations (on a fixed manifold). This game,
which is almost always hopeless in the metric realm (e.g.,
${\Bbb R}^2$ is hard yet well-understood, $S^3$ hopeless),
turns out to be
much easier on some special non-metric surfaces like, e.g.,
the long plane ${\Bbb L}^2$ \cite{BGG1}. Here and on some
related surfaces one
experiences a rigidity in the large, imposing an asymptotic
leaves pattern
with freedom
left only on certain metric subregions, viz. squares
transversally foliated along two opposite sides and
tangentially on the remaining
two. By the theorem of \Kerekjarto-Whitney
\cite{Kerekjarto_1925}, \cite{Whitney33} creating for oriented
foliations compatible flows (valid only in the metric case),
such a square permits (up to homeomorphism) a unique foliated
extension of its boundary data. It followed in \cite{BGG1}
that ${\Bbb L}^2$ tolerates only 2 foliations up to
homeomorphism. (This is to be contrasted with the menagerie of
foliations grooving the plane ${\Bbb R}^2$.)
A plain consequence of this rigidity is the intransitivity of
the thrice-punctured long plane
${\Bbb L}^2_{3*}$, despite the fact that its group, $F_3$ (free of rank 3)
(\ref{puncturing}), is one
susceptible of complicated foliated dynamics (recall
Dubois-Violette). Hence, albeit the fundamental group has much
to say, it does not control completely the
situation, which depends ultimately upon
some finer granularity
(encoded in the geometry of the manifold). For an even simpler
example, the bagpipe $\Lambda_{0,4}$ with orientable bag of
genus 0 with 4 contours and pipes
modelled after the long cylinder $S^1\times {\Bbb L}_{\ge 0}$,
is a (dichotomic) surface with $\pi_1=F_3$, yet intransitive.
In fact $\Lambda_{0,4}$ cannot even be foliated \cite{BGG1},
because any pipe acts as a black hole aspirating leaves in a
purely vertical fashion or creating many horizontal circle
leaves $S^{1}\times \{\alpha\}$. Thus an appropriate surgery
reduces one to the
compact Euler-Poincar\'e obstruction. Both examples cited are
trivial,
inasmuch as their intransitivity also derives from their
non-separability (via (3) above).
The transitivity decision problem becomes more
perfidious if one
wonders about the transitivity of the separable, $M_{3*}$,
thrice-punctured Moore surface. (Recall that the Moore surface
is the folded Pr\"ufer surface, cf. Figure~\ref{Train:fig} for
an intuitive picture.) Albeit there is no universal algebraic
obstruction (recall again Dubois-Violette), we experiment a
geometric one related to the
granularity of the Moore
surface (\ref{Baillif:adapted_by_Gabard}). Indeed, by an argument
of M. Baillif (Buenos Aires era, near 2008--2009,
in press in \cite{BGG2} and reproduced below
(\ref{Baillif:nano-black-holes})), the ``thorns'' of the Moore
surface (i.e., the folded images of the boundaries of
Pr\"ufer)
act as a ``continuum'' series of miniature
black holes inveigling {\it almost all}
leaves. Precisely, {\it almost all thorns (all but at most
countably many exceptions) are semi-leaves of any foliated
Moore surface} (\ref{Baillif:nano-black-holes}). This
implements a scenario of gravitational collapse at the
microscopic scale
in sharp contrast---but somehow dual---to the macroscopic
scale at which lives the (super-massive) black hole
sitting at the long end of a Cantor cylinder $S^1\times {\Bbb
L}_+$ \cite{BGG1}. All these examples imaginatively suggest
contemplating foliated structures (like lignite distributions)
merely as a means of evidencing the magneto-gravitational
($\approx$geometric, since Newton-Euler vs.
Leibniz-Euler(!)-Riemann-Einstein\footnote{Compare, e.g., the
historiography in Speiser 1927 \cite{Speiser_1927}.})
anomalies of the underlying manifold.
The above results
do not
tell whether the doubled Pr\"ufer surface $2P$ accepts a
labyrinth (=transitive foliation). Yet, it probably does in
view of the toy:
\begin{exam} (Dubois-Violette Pr\"uferized) \label{Dubois}
{\rm Take Dubois-Violette's foliated disc
(Fig.\,\ref{Dubois:fig}, Rosenberg's version) and Pr\"uferize
the 2 leaved-arcs ending to the $2$ punctures to get a
bordered surface $\wp$, which glued with a copy after an
irrational twist, yields a separable surface $2\wp$
transitively foliated, which is non-metric, dichotomic, etc.
and very resemblant to $2P$.}
\end{exam}
It is not impossible (and indeed highly probable)
that this Pr\"uferized
surface $2\wp$ is homeomorphic to $2P$ (just ``magnify'' some
bridges). Yet this foliation
is not minimal, and it is natural to wonder if $2P$ is
minimally foliated (more about this soon!). For flows, an easy
argument of propagation \cite{GabGa_2011} showed that
minimality forces metrisability, raising some hope to classify
all flow-minimal surfaces. Presumably not so with foliated
surfaces, compare \cite{BGG2} for some minimally foliated
non-metric surfaces. The simplest example, depicted in
Figure~\ref{Venn} below, involves a punctured torus minimally
foliated \`a la Kronecker with one of the two leaves ending to
the puncture elongated up to reach the length of Cantor's long
ray. This Pinocchio expansion near the puncture exploits a
construction of Nyikos (cf. \cite{BGG2} for more details and
the original reference).
Now what about $2P$ being minimally foliated? Since the
fundamental group is extremely voluminous (free on a continuum
of generators), the rank is big,
pushing the surface in the very gaseous-volatile regime where
transitivity mutates into minimality. However the real answer is
quick and easy thanks to the gravitational clumping
of Baillif, to the effect that a (finally violent)
condensation of diffuse gas must occur along the ``bridges''
(i.e. the images of the
boundaries of $P$ in $2P$ via the canonical inclusion
$P\hookrightarrow 2P$).
Indeed arguing as for the Moore surface
(\ref{Baillif:nano-black-holes}), Baillif's method shows that
in any foliation of $2P$ {\it almost all} bridges are leaves
(as above this means all but countably many exceptions). Thus
we have with $2P$ (or better its Dubois-Violette model $2\wp$
(\ref{Dubois}))
a surface which is transitive, yet not minimal. (The author
does not know if such an example exists metrically.)
\subsection{Questions and
ramifications}
\iffalse In view of the aforementioned obstruction
(\ref{dichotomic-free-of-rank-2:prop}), any twice-punctured
simply-connected surface is intransitive under a foliation;
e.g., the twice-punctured Moore surface, $M_{2*}$. If we allow
three punctures in Moore $M$, then there is no algebraic
obstruction to transitivity, yet it is not
easy to construct a transitive foliation. \fi
\iffalse (A similar question arise if we puncture once the
M\"obius band, then $\pi_1\approx F_2$ but non-dichotomic
since non-orientable, and we cannot apply our obstruction
(\ref{dichotomic-free-of-rank-2:prop}) to transitivity.) \fi
Here we mention a short list of questions which are probably
not structurally hard, but rather unsolved due to the
incompetence of the writer.
(1) {\it Some metrical missing links}. As just noticed, what is
the simplest example of a metric surface which is transitively
foliated but not minimally? Also, is the twice-punctured Klein
bottle $\Klein_{2*}$ foliated-intransitive? This is actually
the only missing case to complete our picture
(Figure~\ref{monolith:fig}) classifying finitely-connected
surfaces according to their foliated-transitivity. Which
metric surfaces can be {\it biminimally foliated} (i.e. so
that all semi-leaves are dense)? Cf.
(\ref{biminimal-impeded-by-puncture}) for a partial answer.
(2) {\it Any pseudo-Moore surface foliates?}
In the case of flows, the
{\it phagocytosis lemma} (saying that any countable subset of
a manifold is contained in a chart)
found a nice application to what we called in
\cite{GabGa_2011} the {\it pseudo-Moore problem} (no
non-singular flow on a non-metric, simply-connected, separable
surface). Such surfaces are referred to as {\it pseudo-Moore},
with the Moore surface being the simplest prototype. In the
foliated case, it is not obvious to guess an applied avatar,
except for the over-optimistic option that all pseudo-Moore
surfaces foliate.
Recall that for flows, the Moore surface had no brush, and
this
turned out to be the
fate of any pseudo-Moore surface \cite{GabGa_2011}. But now
the Moore surface foliates, thus should we expect that any
pseudo-Moore surface foliates? We believe the answer is
negative, in view of Nyikos long cytoplasmic expansions (cf.
the discussion following Question
(\ref{corona-sun:question})).
(3) {\it Euler obstruction in the $\omega$-bounded case.}
Another
frustrating problem: what happens to the
Euler-Poincar\'e obstruction? Specifically we conjecture that
$\omega$-bounded\footnote{A space is {\it $\omega$-bounded}
if countable subsets have compact closures. In the
manifold-case, this amounts to Lindel\"of subregions having
compact closures. This concept is a non-metric avatar of
compactness, especially acute for surfaces in view of Nyikos'
bagpipe theorem.} surfaces with $\chi<0$ lack foliations
(independently of any
specification of the pipes).
In the case of flows, it was
comparatively easy to show \cite{Gab_2011_Hairiness} that a
non-vanishing Euler characteristic ($\chi\neq 0$) obstructs
non-singular flows (non-metric hairy-ball theorem).
(4) {\it Freeness of the fundamental group of
curves (=non-Hausdorff $1$-manifolds) and the Haefliger
twistor.}
Can somebody
prove the (hypothetical) Lemma~\ref{twistor} below, which
seems to be folklore since Haefliger 1955
\cite{Haefliger_1955}, and which could play a crucial r\^ole
in showing that all (non-Hausdorff) $1$-manifolds have a free
fundamental group. Prior to this we
show the $\pi_1$-freeness
of all open
(Hausdorff) surfaces by reduction to the metric case
(\ref{freeness-for-open-surfaces:prop}). (Hausdorffness is of
course essential, as seen by picturing flying-saucers, e.g.,
$S^1\times (\text{line with two origins})$ with $\pi_1={\Bbb
Z}^2$.)
In guise of provisory conclusion, we
diagnose that our
understanding of foliated structures (especially on surfaces)
is slightly less sharp than the corresponding one for
dynamical flows, where deeper paradigms entered effectively
into the arena (like phagocytosis or the Euler-Poincar\'e
obstruction).
\section{Topological
preparations}
This section collects
purely topological
results, independent of (yet related to) our foliated
investigations. The reader can skip
it
referring to individual results later if necessary.
The Leitfaden below
is supposed to help
navigating through the menagerie of
details.
\begin{figure}[h]
\hskip-20pt\epsfig{figure=leit.eps,width=139.2mm}
\end{figure}
\subsection{Foundations (Leibniz, Euler, Gauss, Listing,
M\"obius, Riemann, Klein, Dyck,
Schoenflies,
\Kerekjarto, Rad\'o)}
We first recall without proofs (but cross-references) the key
results in the topology of the plane and surfaces. A pillar of
the theory is the following theorem often attributed to
Schoenflies (1906), albeit there are serious
function-theoretical competitors building over the Riemann
mapping theorem and conformal representation (including Osgood
1900--1913, Carath\'eodory 1912, Hilbert-Courant, etc.), not
to mention the early attempt in M\"obius 1863
\cite{Moebius_1863}:
\begin{theorem} (Schoenflies 1906)\label{Schoenflies:thm-plane}
Any Jordan curve (embedded circle) in the plane ${\Bbb R}^2$
bounds a disc.
\end{theorem}
\begin{proof}
Compare e.g. Siebenmann 2005 \cite{Siebenmann_2005}.
\end{proof}
\begin{lemma} (e.g., Weyl, 1913)\label{Weyl:triangulation}
A triangulated surface is metric.
\end{lemma}
\begin{proof}
Aggregating simplices
by adjacency, the surface is expressible as a countable union
of compacta, hence Lindel\"of, so metric (Urysohn, 1925).
\end{proof}
\begin{theorem} (Rad\'o, 1925)\label{Rado:triangulation}
Any metric surface can be triangulated.
\end{theorem}
\begin{proof} The classical proof relies in principle on
Schoenflies (\ref{Schoenflies:thm-plane}), cf.
Rad\'o 1925 \cite{Rado_1925} or
Ahlfors-Sario 1960 \cite[pp.\,105--110]{Ahlfors-Sario_1960}.
For a proof
circumventing Schoenflies compare Moise 1977
\cite[p.\,60]{Moise_1977}.
\end{proof}
When specialized to compact (bordered) surfaces one gets, using
some combinatorial tricks, the following seminal classification
theorem (initiated by pre-Morse theoretical considerations in
M\"obius 1863 \cite{Moebius_1863}, and extended to the
non-orientable case in Klein's writings, partly motivated by
real algebraic curves and his paradigm of the ``Galois-Riemann
Verschmelzung'', plus apparently
some slight helping-hand from L. Schl\"afli). We can also
mention Klein's students
like Weichold 1883, and von Dyck 1888 (plus some earlier
works). Later the combinatorial
machine was
purified
in Dehn-Heegaard 1907 and
Brahana 1923.
\begin{theorem} (M\"obius 1860--63, Jordan, Klein, Dyck 1888,
Dehn-Heegaard, Brahana+Rad\'o)
\label{Moebius-Klein-classification} A compact bordered
surface is classified by
the Euler characteristic
$\chi$, the number of contours and the indicatrix
(=orientability character). When orientable the surface is a
sphere with $g$ handles $\Sigma_g$, and otherwise it is
homeomorphic to the sphere with $g\ge 1$ cross-caps, denoted
$N_g=S^2_{gc}$.
\end{theorem}
\begin{proof} It is probably fair to qualify all early proofs
as semi-intuitive
inasmuch as they required some geometric `structuration' lying
beyond the naked topological manifolds (those were
perhaps first defined in print in \Kerekjarto{
}1923~\cite{Kerekjarto_1923}, though the idea
is much older, e.g., Riemann, 1 March\footnote{Compare, e.g.,
Speiser 1927 \cite[p.\,107-8]{Speiser_1927}} 1853--1854,
Betti, Poincar\'e 1895, Tietze 1907, Brouwer, Weyl 1913,
etc.). Thus
one first triangulates the surface with Rad\'o
(\ref{Rado:triangulation}) and then applies the combinatorial
reduction to a normal form \`a la Dehn-Heegaard, say, or
alternatively does M\"obius-Morse theory. For a modern book
form, cf. e.g., Massey 1967 \cite{Massey_1967}.
\end{proof}
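For later bookkeeping (a standard by-product of the normal
forms, used for instance in the Riemann-Hurwitz computations
below), recall the Euler characteristics:
$$
\chi(\Sigma_g)=2-2g, \qquad \chi(N_g)=\chi(S^2_{gc})=2-g,
$$
each contour (or puncture) lowering $\chi$ by one; so $\chi$,
the indicatrix and the number of contours indeed pin down the
normal form.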
We now list several consequences, starting with the following,
historically perhaps first proved via the uniformization
theorem (Klein, Poincar\'e, Koebe 1882--1907). Recall also
an alternative proof via triangulations and the combinatorial
device of van der Waerden-Reichardt (ref. as in
\cite{GaGa2010} (arXiv version)):
\begin{prop}\label{uniformization} A metric simply-connected surface is either $S^2$
or the plane.
\end{prop}
\begin{proof} Cf. also Ahlfors-Sario \cite{Ahlfors-Sario_1960}
and Massey \cite{Massey_1967}
(in the exercise).
\end{proof}
This in turn implies first the metric-case of the following:
\begin{lemma}[Homotopic Schoenflies]
(Baer 1928, Cannon 1969) \label{Schoenflies-Baer} A
null-homotopic Jordan curve in a surface (metric or
not) bounds a disc. In particular any Jordan curve in a
simply-connected surface bounds a disc, which is unique
whenever the surface is open (equivalently not the sphere).
\end{lemma}
\begin{proof}
Via passage to the universal covering (still metric by
Poincar\'e-Volterra and the countability of the $\pi_1$
ensured by Rad\'o's triangulation (\ref{Rado:triangulation}),
or alternatively just lift the triangulation and use Weyl
(\ref{Weyl:triangulation})), we may apply in view of
(\ref{uniformization}) the classic Schoenflies theorem
(\ref{Schoenflies:thm-plane}). An argument of R. Baer, 1928
(compare e.g., \cite{GaGa2010}), shows that the bounding disc
for the lifted Jordan curve is homeomorphically projected down
in the original surface.
The non-metric case reduces
to the metric one, by covering the range of a null-homotopy by
a Lindel\"of subregion (as observed in Cannon 1969
\cite{Cannon_1969}).
\end{proof}
\subsection{Other gadgets: freeness of $\pi_1$
(Ahlfors-Sario) and Whitehead's spine}
\begin{lemma}\label{Ahlfors-Sario:freeness:lemma} The
fundamental group of an open metric surface is free on
countably many generators.
\end{lemma}
\begin{proof} Cf.
Ahlfors-Sario 1960 \cite[\S 44A., p.\,102]{Ahlfors-Sario_1960}
or Massey
1967 \cite{Massey_1967}.
\end{proof}
Using Whitehead's spine we get the stronger assertion:
\begin{lemma} \label{Whitehead-spine}
Any open metric surface retracts by deformation onto a
subgraph of the $1$-skeleton of any of its triangulation. In
particular it is homotopy equivalent to a (countable) graph.
\end{lemma}
\begin{proof} The theory of the spine originates in Whitehead
1939, cf. also Massey's book 1967 \cite{Massey_1967} for a
discussion.
\end{proof}
\subsection{Indicatrix and orientability (Gauss, Listing,
M\"obius, Klein, Schl\"afli, etc.)}
Those classical notions (originating with the discovery
(circa 1860) of the {\it M\"obius band}, involving a
well-documented ($\pm$)
question of priority between
close colleagues, namely Gauss and Listing) are clearly
independent of a metric and make sense for all manifolds.
Several viewpoints are possible (combinatorial vs. naked
TOP-manifolds). A first ``naked'' aspect is to define the {\it
indicatrix} (or even better the {\it orientation covering}):
\begin{lemma}\label{indicatrix-orient-covering} Given a manifold $M$, one can propagate ``local
orientations'' around loops to obtain a morphism $\pi_1(M)\to
\{\pm 1\}$ (called the indicatrix). The latter is in fact just
the monodromy of the double orientation
cover $M_{\circlearrowleft}\to M$, obtained by doubling each
point into its two possible local
orientations.
Being purely local, the construction works for all locally
Euclidean spaces even without Hausdorff proviso, and being
perfectly intrinsic it
has the following:
(Naturality) If $L\subset M$ is a subregion of the manifold
$M$, then its orientation covering $L_{\circlearrowleft}\to L$
is just the restriction of that of $M$ to $L$.
\end{lemma}
\begin{proof} It boils down to defining ``local orientations'',
cf. e.g. Dold's Algebraic Topology.
\end{proof}
A manifold is said to be {\it orientable} if its indicatrix is
trivial (equivalently, if its orientation covering is
trivial).
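A standard illustration (a little exercise, in the spirit of
the text): for the open M\"obius band one finds
$$
(\hbox{M\"obius band})_{\circlearrowleft}\approx S^1\times
{\Bbb R},
$$
i.e. the connected double cover, so the indicatrix is the
unique epimorphism ${\Bbb Z}\to\{\pm 1\}$ and the band is
non-orientable.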
\begin{lemma}\label{orientability:heredity}
$\bullet$ (Heredity) Any subregion of an orientable manifold
is orientable.
\noindent $\bullet$ (Transfer) A manifold all of whose
Lindel\"of subregions are orientable is orientable.
\end{lemma}
\begin{proof} The hereditary claim reduces to the fact
that triviality of a covering is preserved by restricting to
subregions of the base. The transfer claim requires a little
argument. If the given manifold $M$ is not orientable, it has a
non-trivial orientation covering, which is therefore
connected. Thus there is a path in $M_{\circlearrowleft}$
connecting the two points lying above the (arbitrarily) fixed
basepoint of $M$. This path, being compact, is contained in
some Lindel\"of subregion, which up to taking a Lindel\"of
exhaustion of the base $M$ can be assumed to be the inverse
image of a Lindel\"of subregion $L$ of $M$.
By naturality (\ref{indicatrix-orient-covering}) $L$ is
non-orientable, violating the assumption.
\end{proof}
Another common definition of orientability (of a manifold of
any dimensionality) is that any embedded circle has a trivial
tubular neighbourhood. Yet we probably want to exclude wild
knots. Several respectable theories (PL, DIFF, etc.) explain
how to tame
wildness, yet as we are primarily concerned with the 2D-case
there is an intrinsic weapon, namely Schoenflies
(\ref{Schoenflies:thm-plane}) and a resulting tubular
neighbourhood theory (cf. e.g., Siebenmann 2005
\cite{Siebenmann_2005}) permitting one to circumvent any
specialisation to such
structures (whose existence is rather
weak in the non-metric context and, as we know, even for compact
manifolds not universally available as soon as the dimension
is $\ge 4$).
\begin{lemma}\label{orientability-in-terms-of-Jordan-curves}
A surface is orientable iff any Jordan curve has a trivial
tubular neighbourhood. In particular puncturing finitely many
points in a surface does not affect the indicatrix.
\end{lemma}
\begin{proof} $[\Rightarrow]$ Let $J$ be a Jordan curve in the
surface $M$, and let $T$ be its tubular \nbhd, which is an
${\Bbb R}$-bundle over the circle $S^1$. By
classical bundle theory there are only two such bundles: the
trivial one and a twisted one (the open M\"obius band). The
latter option is precluded by heredity
(\ref{orientability:heredity}).
$[\Leftarrow]$ The converse looks more tricky, and we are only
able to perform a reduction to the metric case. Let $L$ be a
Lindel\"of subregion of $M$. Then clearly the assumption of
triviality of Jordan \nbhd{s} holds in $L$ as well, thus by
the metric case of the lemma, $L$ is orientable. By transfer
(\ref{orientability:heredity}) it follows that $M$ is
orientable.
{\it Metric case (outline).} The proof in the metric case
works maybe as follows: fix a triangulation and subdivide
barycentrically until all 2-simplexes lie in charts. Then
local orientations take a more down-to-earth interpretation
as the borders of those simplices. Now the Jordan triviality
assumption specialized to combinatorial loops ensures that
there is a coherent way to orient simplices in the
combinatorial sense, implying the topological sense of
(\ref{indicatrix-orient-covering}). (Exercise: find a
reference where this is properly done, e.g. M\"obius 1865,
Weyl 1913, etc.)
For the last clause, just observe that if the original surface
is non-orientable, then it contains a M\"obius band and the
punctures can be performed outside of it.
\end{proof}
\subsection{Dichotomy (Leibniz,
K\"astner, Bolzano, Jordan, Veblen)}
After a long series of precursors (and successors), Jordan
(1887) showed that any embedded circle in the plane
disconnects the plane in two components. This motivates the
following jargon (borrowed from O. H\'ajek \cite{Hajek_1968}):
\begin{defn} {\rm A surface is {\it dichotomic} if any Jordan
curve (=embedded circle) divides the surface.
More common
synonyms are {\it planar} (or {\it schlichtartig}), yet both
sound too restrictive when it comes to allow non-metric
surfaces.}
\end{defn}
Using homology and the five lemma, one can show (cf.
\cite[5.3, 5.4]{GabGa_2011}):
\begin{lemma}\label{dicho:hered-transfer}
$\bullet$ (Heredity) Any subregion of a dichotomic surface is
dichotomic.
\noindent $\bullet$ (Transfer) A surface all of whose
Lindel\"of subregions are dichotomic is dichotomic.
\end{lemma}
\begin{lemma}\label{dicho-implies-orientable}
A dichotomic surface is orientable.
\end{lemma}
\begin{proof} In view of the orientability criterion
(\ref{orientability-in-terms-of-Jordan-curves}),
let $J$ be a Jordan curve in the surface $M$, and let $T$ be a
tube around it. By heredity of dichotomy
(\ref{dicho:hered-transfer}) the latter is dichotomic, hence
cannot be the M\"obius band.
\end{proof}
\iffalse
\begin{proof} By definition, orientability of a manifold
(any dimensionality) means that any embedded circle has a
trivial tubular neighbourhood. So let $J$ be a Jordan curve in
$M^2$, and $T$ be its tubular neighbourhood, which is an
${\Bbb R}$-bundle over $J$. Since any (open) subregion of a
dichotomic surface is dichotomic
(\ref{dicho:hered-transfer}), the tube $T$ is dichotomic.
Hence $T$ cannot be the unique twisted bundle (M\"obius band)
and so is the trivial bundle.
\end{proof}
\fi
While the converse of (\ref{dicho-implies-orientable}) is not
true (e.g., torus),
it is sometimes:
\begin{lemma}\label{dichotomy:orient_plus_inf_cycl}
An orientable surface with infinite cyclic fundamental group
is dichotomic.
\end{lemma}
\def\Si{\Sigma}
\iffalse Let us first try geometrically albeit the argument is
as yet not perfectly convincing, and then we provide a more
algebraic argument.
\def\Si{\Sigma}
{\small
\begin{proof} (The following argument is homological, so rather
algebraic; for a more
geometric proof using Schoenflies, see
Remark~\ref{new-proof}.) Let $J$ be a Jordan curve in such a
surface $\Si$. We have to show that $J$ divides $\Si$. The
assertion is certainly true if $J$ is null-homotopic (apply
Schoenflies as in GaGa, 2010). If not then $J$ is non-trivial
in $\pi_1(\Si)\to H_1(\Si,{\Bbb Z})$ which is isomorphic,
since $\pi_1(\Si)$ is abelian. If $J$ represent the generator
(of either the fundamental or the first homology group of
$\Si$) then separation is easy to show. Indeed we can
construct a tubular neighbourhood $T$ of $J$, which since
$\Si$ is orientable has to be trivial $T\approx {\Bbb S}^1
\times {\Bbb R}$. Since $J$ divides $T$, and $H_1(T) \to
H_1(\Si)$ is onto (because of our assumption that $[J]$
generates $H_1(\Si)$), an application of the five lemma (cf.
GaGa 2001, Dyn, Lemma~4.13(ii) [at least in the numbering of
the post-arXiv version]) shows that $J$ divides $\Si$. q.e.d.
Yet it remains the case where $J$ is homologous to a multiple
of the generator of $H_1(\Si)$. Intuitively this situation
looks incompatible with the fact that $J$ has to be imbedded,
but it remains to find a convincing argument...
\end{proof}
}
Here is one, yet at the expense of the usual algebraic
formalism: \fi
\begin{proof}
(The following argument is homological, so rather algebraic;
for a more
geometric proof using Schoenflies, see
Remark~\ref{new-proof}.) Let $J$ be a Jordan curve in the
surface $\Si$. We can assume that $J$ is not null-homotopic,
since otherwise Jordan separation is obvious as $J$ bounds a
disc (\ref{Schoenflies-Baer}). We fix $T$ a tubular
neighbourhood of $J$, which is trivial, i.e. $T\approx {\Bbb
S}^1\times {\Bbb R}$ (since $\Si$ is orientable).
To show that $\Si-J$ is disconnected
we examine the homology exact sequence of the pair $(\Si,
\Si-J)$ written down as the third line of the diagram below.
Just above it we have the sequence of the tube pair $(T,T-J)$,
which we embed in the $2$-sphere (denoted $S$) as the
complement of the poles, while mapping $J$ to the equator.
This gives us the first line which is the sequence of the pair
$(S,S-J)$. By naturality all squares are commutative.
\iffalse
\def\ziehen{\hskip-10pt}
$$
\begin{matrix}
&H_2(T)\ziehen &\rightarrow\ziehen &H_2(T,T-J)\ziehen
&\rightarrow\ziehen& H_1(T-J)\ziehen &\rightarrow\ziehen
&H_1(T)\ziehen &\rightarrow\ziehen &H_1(T,T-J)\ziehen
&\rightarrow\ziehen& H_0(T-J)\ziehen &\rightarrow\ziehen
&H_0(T)\ziehen &\rightarrow\ziehen & H_0(T,T-J)=0 \cr
&\downarrow & &\downarrow & &\downarrow& &\downarrow&
&\downarrow & &\downarrow & &\downarrow & &\downarrow \cr
&H_2(\Si)\ziehen &\rightarrow\ziehen &H_2(\Si,\Si-J)\ziehen
&\rightarrow\ziehen & H_1(\Si-J)\ziehen &\rightarrow\ziehen
&H_1(\Si)\ziehen &\rightarrow\ziehen &H_1(\Si,\Si-J)\ziehen
&\rightarrow\ziehen & H_0(\Si-J)\ziehen &\rightarrow\ziehen
&H_0(\Si)\ziehen &\rightarrow\ziehen & H_0(\Si,\Si-J)=0
\end{matrix}
$$
\fi
\begin{figure}[h]
\hskip-22pt\epsfig{figure=diag.eps,width=135mm}
\vskip-5pt\penalty0
\end{figure}
\noindent The excision isomorphisms are denoted by
vertical equivalence symbols. Boldface ``{\bf 0}''
symbols indicate trivial groups, while other bold indices
indicate the rank of the corresponding abelian group. Looking
at the first line we find that $t=1$ and $u=1$, which values
propagate downstairs by the excision isomorphisms. Next the
group $H_2(\Si)=0$ is trivial, because $\Si$ is an open
$2$-manifold (the postulated fundamental group ${\Bbb Z}$
does not occur among the fundamental groups of closed
surfaces). (We used
implicitly the vanishing of the top-dimensional homology $H_n$
of Hausdorff $n$-manifolds, compare e.g. Samelson 1965
\cite{Samelson_1965-homology}.) Thus by exactness of the
bottom line, we have $1-s+1-1+r-1=0$, i.e. $r=s$, provided all
ranks are finite. For this we apply the {\it five lemma}
saying that if the diagram of abelian groups has exact rows
and each square is commutative:
\def\ziehen{\hskip-6pt}
$$
\begin{matrix}
&C_1\ziehen &\rightarrow\ziehen&C_2\ziehen
&\rightarrow\ziehen& C_3\ziehen &\rightarrow\ziehen&
C_4\ziehen &\rightarrow\ziehen& C_5 \cr
&\quad\;\;\downarrow f_1 &&\quad\;\;\downarrow f_2 &
&\quad\;\;\downarrow f_3 & &\quad\;\;\downarrow f_4 &
&\quad\downarrow f_5 \cr
&D_1\ziehen &\rightarrow\ziehen &D_2\ziehen
&\rightarrow\ziehen & D_3\ziehen &\rightarrow\ziehen&
D_4\ziehen &
\rightarrow\ziehen& D_5
\end{matrix}
$$
\vskip-10pt \noindent Then
(1) if $f_2$ and $f_4$ are onto and $f_5$ injective, then
$f_3$ is onto.
(2) if $f_2$ and $f_4$ are injective and $f_1$ is onto, then
$f_3$ is injective.
\noindent Part (1) does not apply to the $f_i$ (we do not know
$f_4$ to be onto), but it applies to the $g_i$ showing that
$g_3$ is onto, so $r\le 2$. Since the group indexed by $s$ is
squeezed in an exact sequence with zero extremities, it has
finite rank as well. Now part (2)
applies to the $f_i$ (but not to the $g_i$!), thus $f_3$ is
injective and $s\ge 2$,
so $r\ge 2$ as we knew $r=s$. This completes the proof.
\iffalse
\begin{rem}
{\rm In fact above we used that $f_4$ is onto since $J$
is not null-homotopic, to get the finiteness of $s$. However}
\end{rem}
\fi
\end{proof}
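In summary (a recap of the rank bookkeeping just performed,
with $t=u=1$ propagated by excision and $H_2(\Si)=0$), the
bottom row reads
$$
0\to H_2(\Si,\Si-J)\to H_1(\Si-J)\to H_1(\Si)\to
H_1(\Si,\Si-J)\to H_0(\Si-J)\to H_0(\Si)\to 0,
$$
with ranks $1,s,1,1,r,1$; exactness forces the alternating sum
$1-s+1-1+r-1$ to vanish, whence $r=s$ (granted the finiteness
of the ranks, as checked via the five lemma).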
\iffalse
\begin{rem}
{\rm Maybe pushing the argument forward, one can show that
if $J$ is not null-homotopic then it has to represent the
generator of $H_1(\Si)$. This amounts to show that $f_4$ is onto.
This follows if $f_3$ is onto and $g_3$ injective. The latter
follows as $g_3$ has free abelian source and range. But the former
ontoness of $f_3$ looks not obvious... }
\end{rem}
\fi
\subsection{Riemann's branched coverings}
The following
mechanism originating in complex
function theory (Riemann's Thesis 1851) will later find a
pleasant application to foliated structures:
\begin{lemma}\label{Riemann:branched-cover}
Given a finite $d$-sheeted covering $\Sigma\to F_{n*}$ of a
punctured surface $F$, there is a canonical recipe to fill
over the punctures to deduce a branched covering $\Sigma^{*}
\to F$ whose total space is a surface. Moreover if the
(unpunctured) surface $F$ is compact, then so is $\Sigma^*$,
and their Euler characteristics are related by the so-called
Riemann-Hurwitz formula:
\begin{equation}\label{Riemann-Hurwitz:gnal-case}
\chi(\Sigma^*)=d\,\chi(F)-\deg(R),
\end{equation}
where $\deg(R)$ is the ramification counted with multiplicity.
Further, orientability of $F$ transfers to $\Sigma^{*}$.
\end{lemma}
\begin{proof} If we look at a ``pierced neighbourhood''
$U$ of a puncture $p\in F-F_{n*}$ topologically like ${\Bbb
C}^*$ (punctured complex plane) we obtain a covering
$p^{-1}(U)\to U$. Since $\pi_1(U)$ is ${\Bbb Z}$, the
coverings of $U$ are completely classified,
being the mappings $z\mapsto z^k$ (from ${\Bbb C}^*$ to
itself) for some integer $k\ge 1$. So there is a natural way
to fill over the punctures (Riemann's trick) to obtain
$\Sigma^*\to F$ a branched covering of degree $d$ whose total
space $\Sigma^{*}$ is a surface.
The Riemann-Hurwitz formula follows by an Euler characteristic
count. Triangulate $F$
so that punctures are vertices, lift simplices to
$\Sigma^{*}$ and count their alternating sum, which
behaves multiplicatively up to the correction effected by
ramification.
The assertion regarding orientability can be checked
combinatorially, or by noticing that puncturing does not
affect orientability. Hence $F$ orientable implies $F_{n*}$
orientable (\ref{orientability:heredity}), and in turn the
covering $\Sigma$ is orientable, and finally $\Sigma^*$ is
orientable. (Little exercises.)
\end{proof}
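As a quick sanity check of the Riemann-Hurwitz formula (on the
classical example also appearing in the Remark after
(\ref{dicho-covering-is-dicho})): for the double covering
$T^2\to S^2$ branched at $4$ points (quotient by the elliptic
involution $z\mapsto -z$) one gets
$$
\chi(T^2)=2\,\chi(S^2)-\deg(R)=2\cdot 2-4=0,
$$
as it must be.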
\subsection{Dichotomic coverings (via branched coverings)}
The following specialization of (\ref{Riemann:branched-cover})
will
be useful
for the sharpness of Dubois-Violette's labyrinths (i.e., 3 is
the minimal number of punctures required in the plane to
construct a transitive foliation):
\begin{lemma}\label{dicho-covering-is-dicho}
The total space of a
double covering $p\colon \Sigma\to M$ of a dichotomic surface
$M$ with $\pi_1(M)=F_2$ is itself dichotomic.
\end{lemma}
\begin{rem}{\rm The result is sharp as
shown by the standard branched covering $T^2\to S^2$ ramified
at $4$ points: divide the torus by the (holomorphic)
involution $z\mapsto-z$, or rotate by $180^{\circ}$ a Euclidean
model of the torus of revolution.}
\end{rem}
\begin{proof} We
first establish the metric case and then
boost the result beyond the metrical barrier via the
usual exhaustion method. The metric case
involves the trick of
branched coverings (\ref{Riemann:branched-cover}).
{\bf Metric case.} By (a special case (\ref{Kerekjarto:dicho})
of) \Kerekjarto's classification (\ref{Kerkjarto:end}), $M$ is
homeomorphic to
$S^2_{3*}$ (sphere with $3$ punctures), and we compactify $M$
to the sphere $S^2$ by adding 3 points.
\iffalse
If we look at a ``neighbourhood'' $U$ of a puncture
$p\in S^2-M$ topologically like ${\Bbb C}^*$ (punctured
complex plane) we obtain a covering $p^{-1}(U)\to U$. Since
the $\pi_1(U)$ is ${\Bbb Z}$ its coverings are completely
classified,
being the mappings $z\mapsto z^k$ (from ${\Bbb C}^*$ to
itself) for some integer $k\ge 1$. So there is a natural way
to \fi By filling over the punctures (Riemann's trick) we
obtain $\Sigma^*\to S^2$ a branched covering of degree 2. The
space $\Sigma^{*}$ is a surface which is compact, borderless
and orientable ($M$ being dichotomic, hence orientable
(\ref{dicho-implies-orientable}), thus so
is $\Sigma^{*}$). By Riemann-Hurwitz
we have
\begin{equation}\label{Riemann-Hurwitz}
\chi(\Sigma^*)=2\chi(S^2)-\deg(R),
\end{equation}
where $\deg(R)$ is the ramification counted with multiplicity.
Since the degree of the map is 2 there is only simple
ramification, so that $\deg(R)$ is just the number of
branch points. In our situation, $\deg(R)\le 3$, and since
$\chi(\Sigma^*)=2-2g$ where $g$ is the genus, we get
$2-2g=4-\deg(R)\ge 1$, forcing
$g=0$. By classification (\ref{Moebius-Klein-classification})
$\Sigma^*$ is the sphere, which is dichotomic by the Jordan
curve theorem. Thus $\Sigma$ is dichotomic as well by heredity
(\ref{dicho:hered-transfer}).
{\bf Non-metric case.} We choose an exhaustion
$M=\bigcup_{\alpha<\omega_1} M_{\alpha}$ by Lindel\"of
subregions $M_{\alpha}$ and we may arrange
$\pi_1(M_{\alpha}) \approx F_2$. Such a ``calibration'' of the
fundamental group is
justified in
Lemma~\ref{calibrated-exhaustions:lemma} below. Since $M$ is
dichotomic, the $M_\alpha$ are also dichotomic
(\ref{dicho:hered-transfer}). Thus $\Sigma_{\alpha}
:=p^{-1}(M_{\alpha})\to M_{\alpha}$ is dichotomic as well by
the metric case. \iffalse Now recall from
\cite[5.4]{GabGa_2011} that dichotomy is
anti-hereditary in the sense that a surface all of whose
Lindel\"of subregions are dichotomic is dichotomic. \fi Now
given $L$ a Lindel\"of subregion of $\Sigma$, there is some
$\alpha$ such that $L\subset \Sigma_{\alpha}$. By heredity $L$
is dichotomic, and the
transfer~(\ref{dicho:hered-transfer}) completes the proof.
\end{proof}
\begin{rem}\label{new-proof} {\rm This argument reproves
(\ref{dichotomy:orient_plus_inf_cycl}), i.e. {\it dichotomy of
orientable surfaces
with infinite cyclic fundamental group}. Let us carry out this
simple exercise.
\smallskip
\begin{proof} [Another proof of~\ref{dichotomy:orient_plus_inf_cycl}] By
(\ref{calibrated-exhaustions:lemma}) we have an exhaustion
$M=\bigcup_{\alpha<\omega_1} M_{\alpha}$ by Lindel\"of (hence
metric) subregions with $\pi_1(M_\alpha)\approx{\Bbb Z}$.
Since $M$ is orientable, so are the $M_\alpha$, which are
therefore open cylinders (again by an appropriate special case
of \Kerekjarto~(\ref{Kerkjarto:end})), hence in particular
dichotomic. By
transfer
(\ref{dicho:hered-transfer}) it is enough to show that any
Lindel\"of subregion $L$ of $M$ is dichotomic, and so is the
case by heredity
(\ref{dicho:hered-transfer}) because $L$ is contained in some
$M_\alpha$, which is dichotomic.
\end{proof}
}
\end{rem}
\subsection{Calibrating the fundamental group (Cannon)}
We now check the pivotal lemma about exhaustions respecting
the fundamental group (which is yet another consequence of
Schoenflies going back to R.\,J. Cannon 1969
\cite[p.\,98]{Cannon_1969}, ``fill in the holes'' argument).
First we show a kernel killing procedure.
{\it Warning:} our clumsy(?) proof uses, besides Schoenflies,
some other gadgets like the freeness of the fundamental group
of open metric surfaces ({\ref{Ahlfors-Sario:freeness:lemma}})
plus the stronger theory of J.\,H.\,C. Whitehead's spine
(\ref{Whitehead-spine}) telling that such surfaces retract by
deformation to a countable graph---referred to as `the' {\it
spine}.
\begin{lemma}\label{killing:kernel} (Kernel killing procedure)
Given a Lindel\"of subregion $L$ in a surface $M$ so that the
natural map $\pi_1(L)\to \pi_1(M)$ is epimorphic, there is a
larger Lindel\"of subregion $L'\supset L$ such that
$\pi_1(L')\to \pi_1(M)$ is isomorphic.
\end{lemma}
\begin{proof}
If the natural morphism $j\colon\pi_1(L)\to \pi_1(M)$ is not
injective, then for any element of the kernel we have a
shrinking homotopy whose compact range may be covered by
finitely many charts which aggregated to $L$ gives some
$L^{*}$. Since $L$ is metric, its group $\pi_1(L)$ is
countable, and we need only iterate countably many times the
procedure, thereby conserving Lindel\"ofness for the enlarged
$L^{*}$. It may seem that $\pi_1(L^{*})\to\pi_1(M)$ is now
isomorphic. However, killing an element of the kernel may
well accidentally create a
parasitic ``handle'' or ``connectivity'', jeopardizing the
desideratum.
\iffalse Maybe the difficulty is the same as the one
encountered by Mathieu in the bridge paper and which he could
ultimately solve thanks to the Schoenflies technology. \fi
The trick is to
take advantage of some geometric topology \`a la Schoenflies,
to kill (or better plumb) the holes in a
surgical way, without generating new ones by inadvertence.
Suppose first that
an element in $\ker j$ is represented by a Jordan curve, which
being null-homotopic in $M$ bounds a disc in $M$
(\ref{Schoenflies-Baer}); aggregating this disc to $L$
kills one hole without creating new ones.
Unfortunately, not all elements of the $\pi_1$ of a surface are
representable by Jordan curves (e.g., the
generator squared
in the group of a punctured plane is not
Jordan-representable). Carefully selecting whom to kill in
the kernel, namely the primitive elements
(yet not their proper powers),
which admit Jordan representatives (cf.
(\ref{Jordan-representant}) below), completes the
procedure. As countably many discs are aggregated,
Lindel\"ofness is preserved.
Also,
killing the primitive elements of the kernel
suffices to kill the whole kernel. Indeed the latter is a
subgroup of $\pi_1(L)$, which is known to be free when $L$ is
an open surface, hence free as well and therefore generated by
its primitive elements. Of course assuming $L$ open is not
expensive, since otherwise $L$ is compact, hence clopen, so
identical to $M$, and the lemma is trivially true.
\end{proof}
\begin{lemma}\label{calibrated-exhaustions:lemma}
A surface $M$ with
finitely (or countably) generated fundamental group has an
exhaustion by Lindel\"of
subregions $M_\alpha$ such that the morphisms
$\pi_1(M_{\alpha})\to \pi_1(M)$ induced by inclusion are
isomorphic for all $\alpha$.
\end{lemma}
\begin{proof}
Choose
a finite (or countable)
generating system of $\pi_1(M)$, and representing loops $c_i$.
Each $c_i\colon [0,1] \to M$ is a continuous map with
$c_i(0)=c_i(1)= \star$ the basepoint of $M$. Cover randomly
the range of the $c_i$ by charts to get a Lindel\"of
subregion $L_0$. By construction $\pi_1(L_0)\to \pi_1(M)$ is
epimorphic, and by kernel killing (\ref{killing:kernel}) we
find $M_0:=L_0'$ with the required properties of
Lindel\"ofness and incompressibility. Then aggregate randomly
countably many new charts to get $L_1\supset M_0$ and again
kernel killing $\pi_1(L_1)\to \pi_1(M)$ gives $M_1$.
Transfinite induction
completes the proof by defining $M_{\lambda}$ to be a kernel
killing enlargement of $L_{\lambda}=\bigcup_{\alpha<\lambda}
M_{\alpha}$ whenever $\lambda$ is a limit ordinal.
\end{proof}
\begin{lemma}\label{Jordan-representant} In the
fundamental group of
an (open) metric surface $M$ any primitive element
(i.e., not a proper power)
is representable by a Jordan curve.
\end{lemma}
\begin{proof}
Our argument is not very intrinsic, relying on
combinatorial methods. (Is there an argument via the
universal covering?)
\iffalse With Rad\'o (1925) (\ref{Rado:triangulation}), first
triangulate the surface. \iffalse (Rad\'o 1925
\cite{Rado_1925}, also Ahlfors-Sario
\cite[p.\,105--110]{Ahlfors-Sario_1960}, or Moise
\cite[p.\,60]{Moise_1977}). \fi\fi
Via Whitehead's
spine
(\ref{Whitehead-spine}),
$M$ retracts by deformation $M\to \Gamma$ to a countable graph
$\Gamma$. By the primitivity assumption, the loop pushed in
the spine
is homotopic to a simple loop (imagine an edge in the bouquet
of circles resulting by collapse of a maximal tree), hence
representable by a Jordan curve in the graph, so a fortiori in
the surface $M$.
\iffalse Alternatively argue with the universal covering which
is the plane. The given element $\gamma\in \pi_1(M)$ induces a
deck-translation of $\widetilde{M}$. \fi
\end{proof}
\iffalse
\begin{lemma}\label{calibrated-exhaustions:lemma} Given a surface $M$ with free fundamental group
$F_r$ of finite rank $r$, there is an exhaustion of $M$ by
Lindel\"of
subregions $M_\alpha$ such that the morphism
$\pi_1(M_{\alpha})\to \pi_1(M)$ induced-by-inclusion is
isomorphic for any $\alpha$.
\end{lemma}
\begin{proof}
Fix $a_1, \dots ,a_r$ a basis of $\pi_1(M)$ and choose
representing loops $c_1,\dots c_r$ with $c_i\in a_i$. Each
$c_i\colon [0,1] \to M$ is a continuous map with
$c_i(0)=c_i(1)= \star$ the basepoint of $M$. Cover randomly
the range of the $c_i$ by charts to get a Lindel\"of connected
open set $L_0$. If the natural morphism $j\colon\pi_1(L_0)\to
\pi_1(M)$ is not injective, then for any element of the kernel
we have a shrinking homotopy whose compact range may be
covered by finitely many charts which we aggregate to $L_0$ to
get $M_0$. Since $L_0$ is metric, its group $\pi_1(L_0)$ is
countable, and we need only iterate countably many times the
procedure, thereby conserving the Lindel\"ofness of $M_0$. By
construction it seems that $\pi_1(M_0)\to\pi_1(M)$ is now
isomorphic. However, killing an element of the kernel may
well accidentally create a
parasite ``handle'' or ``connectivity'', jeopardizing the
whole process.
\iffalse Maybe the difficulty is the same as the one
encountered by Mathieu in the bridge paper and which he could
ultimately solve thanks to the Schoenflies technology. \fi
The idea is that
taking advantage of some geometric topology \`a la
Schoenflies, one can kill the holes in a clever way, without
generating new ones by inadvertence. Imagine first that
an element in $\ker j$ is represented by a Jordan curve, which
being null-homotopic in $M$ bounds a disc in $M$, which
aggregated to $L_0$
kills one hole without creating new ones.
Unfortunately, not all elements of the $\pi_1$ of a surface are
representable by Jordan curves (e.g., the
generator squared
in the group of a punctured plane is not
Jordan-representable). By carefully selecting who to kill in
the kernel, namely the primitive elements
(yet not their proper powers),
which admit Jordan representatives (cf.
Lemma~\ref{Jordan-representant} below), one completes the kernel
killing procedure.
Once $M_0$ is constructed with the required properties,
aggregate randomly countably many new charts to get $L_1$ and
kill the kernel of $\pi_1(L_1)\to \pi_1(M)$ by the
above procedure.
Transfinite induction
completes the proof.
\end{proof}
\begin{lemma}\label{Jordan-representant} In the fundamental group of
an (open) metric surface $M$ any primitive element
(i.e., not a proper power)
is representable by a Jordan curve.
\end{lemma}
\begin{proof}
Our argument is not perfectly intrinsic relying on
some combinatorial methods.
Alternatively there is perhaps an argument with the universal
covering. With Rad\'o (1925) (\ref{Rado:triangulation}), first
triangulate the surface. \iffalse (Rad\'o 1925
\cite{Rado_1925}, also Ahlfors-Sario
\cite[p.\,105--110]{Ahlfors-Sario_1960}, or Moise
\cite[p.\,60]{Moise_1977}). \fi Then by simplicial
approximation find a simplicial representant of the homotopy
class of the given loop. Finally use Whitehead's spine to show
that $M$ is homotopy equivalent to a countable graph, in the
strong sense that there is a retraction by deformation $M\to
\Gamma$ to the spine. Then the assertion is more-or-less
evident, for the primitivity assumption implies that the loop
pushed in the spine graph is homotopic to a simple loop (think
of it as an edge in the bouquet of circles resulting from the
collapse of a maximal tree), hence representable by a Jordan
curve in the graph, so a fortiori in the surface $M$.
\iffalse Alternatively argue with the universal covering which
is the plane. The given element $\gamma\in \pi_1(M)$ induces a
deck-translation of $\widetilde{M}$. \fi
\end{proof}
\fi
\subsection{Puncturing and cross-capping (Cro-Magnon, von Dyck)}
This section gives algebraic arguments
for two intuitively obvious issues:
\begin{lemma}\label{puncturing}
Puncturing an open surface adds one free generator to
the fundamental group.
\end{lemma}
\begin{proof}
If $S$ is open, any puncture increases by one the rank of the
$H_1$. This follows e.g. by writing the exact sequences of the
pairs $(S,S_{*})$ and $(U,U_{*})$, where $S_{*}=S-\{{\rm pt}\}$
and $U$ is a chart containing the puncture:
\iffalse {\small
$$
H_2(S)\to H_2(S,S_{*})\to H_1(S_*) \to H_1(S)\to
H_1(S,S_{*})\to H_0(S_*) \to H_0(S)\to H_0(S,S_{*})=0
$$}\fi
\begin{figure}[h]
\hskip-5pt\epsfig{figure=diag2.eps,width=122mm}
\vskip-10pt\penalty0
\end{figure}
\noindent Hence if $\pi_1(S)$ is free of rank $r$, then
$\pi_1(S_*)$ being free by
({\ref{freeness-for-open-surfaces:prop}}) is free of rank
$r+1$. In particular if $S$ is $1$-connected and open, then
$\pi_1(S-k\, pts)$ is $F_k$, free of rank $k$.
\iffalse {\small
One could hope that Seifert-van Kampen give a more direct
proof of the assertion: \iffalse that a single puncture in an
open surface increases the rank of the $\pi_1$ by one unit.
More precisely\fi if $S$ is an open surface with
$\pi_1(S)=F_r$, then the once punctured surface $S_*$ has
$\pi_1(S_{*})=F_{r+1}$. Choose $U$ a chart about the puncture;
then $S=S_*\cup U$ and $S_*\cap U=U_*$.
By Seifert-van Kampen $\pi_1(S)\approx
\pi_1(S_*)\ast_{\pi_1(U_*)} \pi_1(U)$ (amalgamated product),
from where the assertion
looks more-or-less clear. However
this argument does not capture the assumption that $S$ is
open. This explains why we opted for the first homological
argument.
}\fi
\end{proof}
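The rank bookkeeping in this proof can be checked mechanically. The following is a minimal sketch (function names are ours, not part of the text), assuming exactness of the displayed pair sequence with $H_2(S)=0$ for $S$ open and $H_2(S,S_*)\approx{\Bbb Z}$ by excision: in a finite exact sequence of finitely generated free abelian groups the alternating sum of the ranks vanishes, which forces ${\rm rk}\,H_1(S_*)=r+1$.

```python
# Sanity check of the rank bookkeeping in the puncturing lemma
# (function names ours; not part of the proof).

def alternating_rank_sum(ranks):
    """Alternating sum of ranks; zero for an exact sequence of
    finitely generated free abelian groups."""
    return sum((-1) ** i * r for i, r in enumerate(ranks))

def rank_after_punctures(r, k):
    """Rank of H_1 (hence of the free pi_1) after k punctures of an
    open surface with rk H_1 = r: each puncture adds one generator."""
    for _ in range(k):
        r_new = r + 1
        # truncated pair sequence:
        #   0 -> H_2(S, S_*) = Z -> H_1(S_*) -> H_1(S) -> 0
        assert alternating_rank_sum([0, 1, r_new, r, 0]) == 0
        r = r_new
    return r
```

In particular `rank_after_punctures(0, k) == k`, matching the statement that $\pi_1(S-k\,pts)=F_k$ for $S$ open and $1$-connected.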
Likewise cross-capping has the same impact on the fundamental
group:
\begin{lemma}\label{cross-capping} Cross-capping an
open surface adds one free generator to
the fundamental group.
\end{lemma}
\begin{proof} Recall
that the cross-capping operation (von Dyck, 1888) of a surface $M$
amounts to identifying diametrically opposite points on the
boundary of an embedded compact disc $D\subset M$. Denote by $M_c$
the cross-capped surface. Choose $U$ a \nbhd{ }of the
cross-cap, which is homeomorphic to an (open) M\"obius band.
We have $M_c=U \cup M_{\star}$, where $M_{\star}=M-D$. The
Mayer-Vietoris sequence: {\small
$$
\underbrace{H_2(M_c)}_{0}\to \underbrace{H_1(U \cap
M_{\star})}_{\Bbb Z}\to H_1(U)\oplus H_1( M_{\star})\to
H_1(M_c) \to H_0(U \cap M_{\star}) \to H_0(U) \oplus H_0(
M_{\star}),
$$}
\!\!whose last arrow is injective, truncates as $H_1(M_c)\to
0$. The first group is trivial since $M_c$ is open. It follows
that the rank of $H_1(M_c)$ equals that of $H_1(M_{\star})$,
which is one more than that
of $H_1(M)$ (removing a compact disc has the same effect as
puncturing a point; use (\ref{puncturing})). The claim now follows by
the freeness of the $\pi_1$
(\ref{freeness-for-open-surfaces:prop}).
\end{proof}
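The Mayer-Vietoris rank count can likewise be made explicit; here is a sketch of ours (names and function not from the text): the truncated sequence $0\to{\Bbb Z}\to H_1(U)\oplus H_1(M_\star)\to H_1(M_c)\to 0$ forces ${\rm rk}\,H_1(M_c)={\rm rk}\,H_1(U)+{\rm rk}\,H_1(M_\star)-1$.

```python
# Rank bookkeeping for the Mayer-Vietoris argument in the
# cross-capping lemma (a sanity check; names ours).

def rank_after_crosscap(r):
    """Rank of H_1 after cross-capping an open surface with rk H_1 = r."""
    rk_U = 1            # U is an open Moebius band: H_1(U) = Z
    rk_M_star = r + 1   # M_star = M minus a disc: one more than rk H_1(M)
    # exactness of 0 -> Z -> H_1(U) (+) H_1(M_star) -> H_1(M_c) -> 0
    return rk_U + rk_M_star - 1
```

For instance the cross-capped plane (an open M\"obius band) gets rank $1$, as expected.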
\subsection{Deleting a closed long ray
(indicatrix and $\pi_1$-invariance)}
The {\it closed long ray} ${\Bbb L}_{\ge 0}$ is the unique
bordered non-metric (Hausdorff) 1-manifold.
\begin{lemma}\label{deleting:a_closed_long_ray} Given a closed long ray $L$ embedded in a surface
$M$, the surface $M-L$ obtained by deleting $L$ has the same $\pi_1$.
In fact the natural morphism $\pi_1(M-L)\to \pi_1(M)$ is
isomorphic. Further the orientability characters (indicatrix)
of $M$ and $M-L$
are the same. Finally the same holds for the ends-number.
\end{lemma}
\begin{proof} If $M$ is orientable, then so is $M-L$ by (\ref{orientability:heredity}).
Conversely if $M$ is not orientable, there is a one-sided
Jordan curve $J$ in $M$
(\ref{orientability-in-terms-of-Jordan-curves}). We can find a
tube neighbourhood $T$ for a sub-arc $A\approx [0,1]$ of $L$
such that $A\supset J \cap L$. $T$ is homeomorphic to a
rectangle and we have a homeomorphism of triads $(T,L\cap T,
A)\approx ([-1,2]\times[-1,1], [0,2]\times \{0\},[0,1]\times
\{0\})$. Using a self-homeomorphism of the rectangle which is
the identity on the boundary and pushing the arc outside
itself (and outside $L\cap T$),
its extension to $M$ (by the identity outside $T$) yields a
``finger move'' pushing $J$ (plus its M\"obius tubular
\nbhd{} $N$) outside $L$ (enlarge $A$ if necessary so that
$A\supset N \cap L$). So the configuration $(N,J)$ is pushed
into $M-L$ showing its non-orientability.
The assertion regarding the $\pi_1$ is proved by the same
``finger move'' trick. Indeed given a loop in $M$ we may push
it into $M-L$ (noticing that the finger move homeomorphism is
isotopic to the identity). Thus the natural morphism
$\pi_1(M-L)\to \pi_1(M)$ is onto. To get its injectivity,
assume that $[c]$ is in the kernel. So $c$ is a loop in $M-L$
which is null-homotopic in $M$. Since the range of the
homotopy is a compactum $K$, we can find a subarc $A$ of $L$
large enough to contain $K\cap L$, and a finger move pushes
this outside $L$, producing a null-homotopy for $c$ ranging
through $M-L$.
\end{proof}
\subsection{Finitely-connected surfaces, cylinder ends (\Kerekjarto)}
With
loose conventions, we could define the {\it connectivity} of a
surface as the rank of its $H_1$ with integer coefficients.
This conflicts slightly with the classical Riemann-Betti
convention, where simple-connectivity really corresponds to
rank zero (not one!). So eventually just
employ:
\begin{defn} {\rm The {\it rank} of a surface (metric or not)
is its first Betti number, i.e. the rank of the first
(singular) homology group $H_1(M,{\Bbb Z})$. When finite,
say the surface to be {\it of finite-connectivity}.}
\end{defn}
The following trick of \Kerekjarto{ }1923
\cite{Kerekjarto_1923} (only a baby case of his more general
classification of all open metric surfaces)
is quite foundational (equivalent to the classification of
$1$-connected metric surfaces) and pivotal subsequently:
\begin{theorem} (\Kerekjarto{ }1923)\label{Kerkjarto:end}
A metric surface of finite-connectivity is homeomorphic to a
finitely-punctured closed surface. The latter closed model is
uniquely defined, and consequently
finitely-connected surfaces are classified by the connectivity
(=rank of $\pi_1$), the number of ends $\varepsilon$ and the
indicatrix.
In particular, open metric surfaces of finite-connectivity
possess an end neighbourhood homeomorphic to a punctured
plane.
\end{theorem}
\def\ende{\varepsilon}
\def\rk{{\rm rk}}
\begin{proof} (Via the classification of $1$-connected
surfaces (\ref{uniformization}), and some dirty
tricks.---Hausdorff would say: {\it Ich mache Komplexe mit
Komplexen!}) If the surface $M$ is compact there is nothing to
prove (\ref{Moebius-Klein-classification}).
Otherwise, $\pi_1(M)$ is free
(\ref{Ahlfors-Sario:freeness:lemma}), and $H_1$ has finite
rank. Fix a finite generating system $a_1,\dots,a_r $ of
$\pi_1(M)$. By regular \nbhd{ }theory, any compactum in a
PL-manifold is contained in a finite bordered sub-manifold.
(This goes back to Whitehead, cf. e.g. Rourke-Sanderson as
quoted in Nyikos \cite{Nyikos84}.) This applies to metric
surfaces by Rad\'o's
triangulations (\ref{Rado:triangulation}). Representing the
$a_i$ by loops $c_i$, we may cover the ranges of the $c_i$ by
a compact bordered subsurface $W\subset M$. By construction
$\varphi\colon\pi_1(W)\to \pi_1(M)$ is epimorphic. Using the
kernel killing procedure (\ref{killing:kernel}), one can
arrange $\varphi$ to be isomorphic (controlling compactness as
we only need to kill finitely many primitive elements of the
kernel). (Following Nyikos \cite{Nyikos84}, we could say that
$W$ is a bag for $M$.)
\begin{claim}\label{claim:incompressible-implies-countour_equal-ends} Let $W$ have $n$ contours (=boundary components), then
we claim (and prove clumsily below) that $M-W$ has also $n$
components $\ende_i$, whose closures $\overline{\ende_i}$ are
non-compact bordered surfaces with one contour.
\end{claim}
\noindent{\bf Proof of Claim.} Indeed since $W$ is a bordered
surface, each contour of $\partial W$ has a collar and
therefore is two-sided in $M$. Choose the collar-sides lying
outside $W$. If two contours get connected outside $W$ in $M$,
then we can construct (by aggregating an outer connection with
an inner connection inside $W$) a loop in $M$ whose
intersection number (in homology mod 2) with both contours is
$1$, and therefore which cannot be homotoped into $W$
(violating the surjectivity of $\varphi$). Thus $M-W$ has at
least $n$ components (and of course cannot have more). Further
each residual piece $\overline{\ende_i}$ cannot be compact,
otherwise by classification
(\ref{Moebius-Klein-classification}) jointly with Seifert-van
Kampen some alteration at the $\pi_1$-level would be detected.
For instance if $\overline{\ende_i}$ is a disc, some loop in
$W$ trivializes in $M$, violating the injectivity of
$\varphi$, and if $\overline{\ende_i}$ is a complicated
pretzel with one contour, then again looking at an
appropriate intersection number (with a fixed curve) gives a
loop in $M$ which cannot be homotoped into $W$.
\smallskip
\noindent{\bf Back to the proof of \ref{Kerkjarto:end}.} When
capping-off $\overline{\ende_i}$ by a disc we obtain the open
surface $\overline{\ende_i}_{cap}$, which punctured is
homeomorphic to $\ende_i$. Thus by (\ref{puncturing}), the
rank $\rk H_1(\ende_i)= \rk H_1(\overline{\ende_i}_{cap})+1$.
Aggregating just one piece, say $\overline{\varepsilon}$, of
the decomposition $M=W\cup (\bigcup_{i=1}^n
\overline{\ende_i}_{cap})$, the Mayer-Vietoris sequence
{\small
$$
\underbrace{H_2(W\cup \overline{\ende})}_{0}\to
\underbrace{H_1(W \cap \overline{\ende})}_{\Bbb Z}\to
H_1(W)\oplus H_1( \overline{\ende})\to H_1(W\cup
\overline{\ende}) \to H_0(W \cap \overline{\ende}) \to H_0(W)
\oplus H_0( \overline{\ende}),
$$}
\!\!whose last arrow is injective, truncates as
$H_1(W\cup \overline{\ende})\to 0$. So $\rk H_1(W\cup
\overline{\ende})= \rk H_1(W) + \rk H_1(\overline{\ende})-1$.
Hence by induction, $\rk H_1(W\cup
\bigcup_{i}\overline{\ende_{i}})= \rk H_1(W) + \sum_{i=1}^n\rk
H_1(\overline{\ende_{i}})-n$.
By construction $\rk H_1(W)= \rk H_1(M)$,
so $\sum_{i=1}^n\rk H_1(\overline{\ende_{i}})=n$. Since a
bordered surface with $H_1=0$ having a unique compact contour
is
compact (cf. \cite[Lemma~10]{GaGa2010}) it follows from
(\ref{claim:incompressible-implies-countour_equal-ends}) that
$\rk H_1(\overline{\ende_{i}})\neq 0$. Hence $\rk
H_1(\overline{\ende_{i}})=1$ for all $i$, and $\rk
H_1(\overline{\ende_i}_{cap})=0$. Thus by freeness
(\ref{Ahlfors-Sario:freeness:lemma}),
$\pi_1(\overline{\ende_i}_{cap})=0$, and by the classification
(\ref{uniformization}) it follows that
$\overline{\ende_i}_{cap}\approx{\Bbb R}^2$. It remains now
only to compactify each end $\varepsilon_i$ by adding the
point at infinity, giving us the sought-for closed surface. This
shows the first clause.
The third (last) clause is a trivial consequence of the first.
Finally, the second clause follows from the classification of
closed surfaces (\ref{Moebius-Klein-classification}). Indeed
if $M$ has $n$ ends it is---by the first clause---homeomorphic
to
$F_{n*}$, a closed model $F$ affected by $n$ punctures.
Comparing the
characteristic of $M$ with that of ``its'' closed model $F$,
we have the following; e.g., remove from $F$ small discs about
the $n$ punctures to get a bordered surface $W$, to which $M$
retracts by deformation (hence $\chi(M)=\chi(W)$), and which
has lost $n$ $2$-simplices
w.r.t. $F$ (hence $\chi(W)=\chi(F)-n$):
\begin{equation}
\chi(M)=1-b_1+b_2=\chi(F)-n.
\end{equation}
Now assuming (an unfortunate)
general collapse of our optical
systems, with our brain-memories only able to recall the
numerical invariants of $M$: connectivity $b_1$ (rank),
indicatrix and ends-number $n$,
the above formula (where $b_2=0$ as soon as $M$ is open)
determines
$\chi(F)$ (of ``the'' compact model) uniquely. Since
puncturing finitely many points does not affect the indicatrix
of a surface
(\ref{orientability-in-terms-of-Jordan-curves})---though it
may well do so for a non-Hausdorff curve (e.g., {\it
lasso}!)---the indicatrix of $F$ is
prescribed by that of $M$. Thus by the compact classification
(\ref{Moebius-Klein-classification}) the topology of $F$ is
unambiguously determined, and so is the topological type of
$M$. (Recall that the group of self-homeomorphisms of a
manifold acts transitively on finite configurations of any
prescribed cardinality.)
\end{proof}
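The uniqueness argument ultimately boils down to arithmetic: for $M$ open of finite-connectivity, $b_2=0$, so $\chi(M)=1-b_1$ and $\chi(F)=\chi(M)+n$. A sketch of ours (function names are not from the text), using the standard values $\chi(\Sigma_g)=2-2g$ and $\chi(S^2_{gc})=2-g$, recovers the closed model from the invariants $(b_1, n, \textrm{indicatrix})$:

```python
# Recovering the closed model F of an open finitely-connected surface
# from its invariants (sketch; names ours).

def closed_model(b1, ends, orientable):
    """Genus (handles, resp. cross-caps) of the closed model F of an
    open surface with first Betti number b1 and `ends` punctures."""
    chi_F = (1 - b1) + ends          # chi(M) = 1 - b_1 = chi(F) - n
    if orientable:                   # chi(F) = 2 - 2g
        g, rem = divmod(2 - chi_F, 2)
    else:                            # chi(F) = 2 - g  (g cross-caps)
        g, rem = (2 - chi_F), 0
    assert rem == 0 and g >= 0
    return g
```

E.g. `closed_model(2, 3, True)` returns $0$: the thrice-punctured sphere, as in the next lemma.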
\iffalse We now list several consequences, starting with the
following (which was historically perhaps first proved via the
uniformization theorem (Klein, Poincar\'e, Koebe, 1882--1907).
Recall also (to avoid a fatal vicious circle!) an alternative
proof via triangulation and the combinatorial scheme of van
der Waerden-Reichardt (ref. as in \cite{GaGa2010}):
\begin{prop}\label{uniformization} A metric simply-connected surface is either $S^2$
or the plane.
\end{prop}
\begin{proof} If compact, then we have $S^2$ by the
classification \ref{Moebius-Klein-classification}
If not then there is a cylindrical end, and we may compactify
the end by adding the puncture to obtain a new surface $M^*$.
If $M^*$ is open, then since $M^*$ punctured is $M$ it follows
from (\ref{puncturing}) that $\pi_1(M)$ would be non-trivial.
Thus $M^*$ is compact. Now inspecting what happens to the
$\pi_1$ after puncturing any of the closed surfaces, the only
case generating a trivial group is $S^2$. Thus $M^*$ is $S^2$
and we are finished. Indeed if $M^*$ is oriented of genus $g$
then when punctured it becomes homotopy equivalent to the
bouquet of $2g$ circles, so that $\pi_1=F_{2g}$. When $M^*$ is
non-orientable then $M^*=S^2_{gc}$ (sphere with $g\ge 1$
cross-caps). So $M=S^2_{*, gc}$ which by (\ref{puncturing})
has $\pi_1=F_g$.
\end{proof}
This in turn implies the following, via passage to the
universal covering and the classic Schoenflies theorem
(argument of R. Baer, 1928; compare \cite{GaGa2010}, as well
as the first hand sources quoted therein esp. Epstein 1966,
Cannon, 1969 \cite{Cannon_1969}):
\begin{lemma} (Schoenflies-Baer-Cannon) \label{Schoenflies-Baer}A null-homotopic Jordan curve in a surface
(metric or not) bounds a disc. In particular a Jordan curve in
a simply-connected surface bounds a disc, which is unique
whenever the surface is open (equivalently not the sphere).
\end{lemma}
\fi
Here are two examples that will play a special r\^ole in the
foliated sequel:
\begin{lemma}\label{Kerekjarto:dicho}
A dichotomic metric surface with $\pi_1=F_2$ is $S^2_{3*}$
(thrice punctured sphere).
\end{lemma}
\begin{proof}
Having finite-connectivity, the surface $M$ is, by
\Kerekjarto{ }(\ref{Kerkjarto:end}), a punctured closed
surface $F_{n*}$. Since dichotomic implies orientable
(\ref{dicho-implies-orientable}), the closed model $F$ is
orientable
(\ref{orientability-in-terms-of-Jordan-curves}) hence
$F\approx \Sigma_g$ (sphere with $g$ handles). Since
$\pi_1=F_2$ is not the group of a closed surface, $M$ is open,
and so $b_2=0$. Thus $\chi=1-b_1=2-2g-n$. Since $b_1=2$, we
have $2g+n=3$ implying (as $g,n\ge 0$) that $(g,n)=(0,3)$ or
$(1,1)$; the latter option being precluded by dichotomy.
\end{proof}
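The endgame of this proof is a tiny Diophantine enumeration, which can be spelled out as follows (a sketch; the variable names are ours):

```python
# Enumerate (g, n) with 2g + n = 3, g >= 0 handles, n >= 1 punctures
# (n >= 1 since M is open).
solutions = [(g, n) for g in range(4) for n in range(1, 4)
             if 2 * g + n == 3]
# Dichotomy then excludes the once-punctured torus (1, 1),
# leaving the thrice-punctured sphere (0, 3).
```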
\iffalse
\begin{proof}
\Kerekjarto's lemma (\ref{Kerkjarto:end}) applies to our
surface, $M$, since $F_2$ is not the group of a closed
surface. By aggregating the puncture we get a new surface
$M^*$ which punctured gives $M$.
$\bullet$ If $M^*$ is compact, then $M^*$ is either
homeomorphic to $\Sigma_g$ (orientable of genus $g$) or $N_g$,
$g\ge 0$, the sphere with $g$ cross-caps, denoted $S^2_{gc}$.
Puncturing back gives resp. $\Sigma_{g,*}$ or $M=S^2_{*,gc}$
(one puncture, $g$ cross-caps). In the first case
$\Sigma_{g,*}$ is homotopy equivalent to a bouquet of $2g$
circles, so $g=1$, but this violates the dichotomy of $M$. In
the second case,
using (\ref{cross-capping}),
$\pi_1(M)\approx F_g$ (free of rank $g$). Thus $g=2$, and so
$M^*=N_2=\Klein$, again against the dichotomy of $M$.
$\bullet$ When $M^*$ is open, so its group is free
(\ref{freeness-for-open-surfaces:prop}) and of rank one less
than that of $M$ (\ref{puncturing}). Hence
$\pi_1(M^*)=F_1\approx {\Bbb Z}$. So $M^*$ has still
finite-connectivity, and we may apply once more \Kerekjarto's
end lemma (\ref{Kerkjarto:end}) to produce a new surface,
$M^{2*}$, which punctured once gives back $M^*$.
---If $M^{2*}$ is compact, then it is either $\Sigma_g$ or
$N_g=S^2_{gc}$, for some $g\ge 1$. The first case is
impossible since when punctured it generated the group
$\pi_1=F_{2g}$ which is never $F_1$. In the second case
puncturing $M^{2*}_{*}=S^2_{*,gc}$ and since this leads back
to $M^*$ with $\pi_1=F_1$, it follows from (\ref{puncturing})
that $g=1$. Thus $M^{2*}=N_1={\Bbb R}P^2$, violating again the
dichotomy of $M$.
---When $M^{2*}$ is open, then its group is free
(\ref{freeness-for-open-surfaces:prop}) and is now of rank
zero (\ref{puncturing}), hence it is trivial. Hence $M^{2*}$
is the plane (\ref{uniformization}). This completes the proof.
\end{proof}
\fi
\begin{lemma}\label{Kerekjarto:non-orient}
A non-orientable metric surface with $\pi_1=F_2$ is
$\proj_{**}$ or $\Klein_*$ (twice-punctured projective plane
or once-punctured Klein bottle).
\end{lemma}
\begin{proof}
Being of finite-connectivity, the surface $M$ is, by
\Kerekjarto{ }(\ref{Kerkjarto:end}), a punctured closed
surface $F_{n*}$. Since orientability is hereditary to
subregions (\ref{orientability:heredity}), the closed model
$F$ is non-orientable, hence $F\approx S_{gc}$ (sphere with
$g\ge 1$ cross-caps). Since $\pi_1=F_2$ is not the group of a
closed surface, $M$ is open, and so $b_2=0$. Thus
$\chi=1-b_1=2-g-n$. Since $b_1=2$, we have $g+n=3$ implying
(as $g\ge 1$, $n\ge 1$) that $(g,n)=(1,2)$ or $(2,1)$.
\end{proof}
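Again the last step is pure arithmetic; sketched out (our own variable names):

```python
# Enumerate (g, n) with g + n = 3, g >= 1 cross-caps (F non-orientable),
# n >= 1 punctures (M open).
solutions = [(g, n) for g in range(1, 4) for n in range(1, 4)
             if g + n == 3]
# (1, 2): twice-punctured projective plane; (2, 1): punctured Klein bottle.
```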
\iffalse
\begin{proof}
\Kerekjarto's lemma (\ref{Kerkjarto:end}) applies to our
surface, $M$, since $F_2$ is not the group of a closed
surface. So by aggregating the puncture we get a new surface
$M^*$ which punctured gives $M$.
$\bullet$ If $M^*$ is compact, then as it is non-orientable
so $M^*$ is homeomorphic to $N_g$, $g\ge 1$, the sphere with
$g$ cross-caps, denoted $S^2_{gc}$. Puncturing back gives
$M=S^2_{*,gc}$ (one puncture, $g$ cross-caps). By
(\ref{cross-capping}), $\pi_1(M)\approx F_g$ (free of rank
$g$). Thus $g=2$, and so $M^*=N_2=\Klein$. q.e.d.
$\bullet$ If not, i.e. $M^*$ is open, so its group is free
(\ref{freeness-for-open-surfaces:prop}) and of rank one less
than that of $M$ (\ref{puncturing}). Hence
$\pi_1(M^*)=F_1\approx {\Bbb Z}$. So $M^*$ has still
finite-connectivity, and we may apply once more \Kerekjarto's
end lemma to produce a new surface, $M^{2*}$, which punctured
once gives back $M^*$.
---If $M^{2*}$ is compact, then by the classification it is
$N_g=S^2_{gc}$, for some $g\ge 1$. So puncturing
$M^{2*}_{*}=S^2_{*,gc}$ and since this leads back to $M^*$
with $\pi_1=F_1$, it follows from (\ref{puncturing}) that
$g=1$. Thus $M^{2*}=N_1={\Bbb R}P^2$. q.e.d.
---When $M^{2*}$ is open, then its group is free
(\ref{freeness-for-open-surfaces:prop}) and is now of rank
zero (\ref{puncturing}), hence it is trivial. So $M^{2*}$ is
simply-connected, hence orientable, violating the
non-orientability of $M$. This completes the proof.
\end{proof}
\fi
\subsection{The soul of a
non-metric finitely-connected surface
(\Kerekjarto, Nyikos)}
One can imagine that any surface of finite-connectivity has a
metric soul capturing its salient topological features and
outside which nothing more happens. This reminds
the
phraseology ``{\it the garbage must cease}'' coined in Nyikos
1984 \cite{Nyikos84}.
In the $\omega$-bounded case (which
implies finite-connectivity cf. e.g.,
\cite{Gab_2011_Hairiness}) the above
desideratum is a weak form of the bagpipe theorem of Nyikos
1984 \cite{Nyikos84}. Thus
the present soul is
merely a non-metric version of \Kerekjarto{ }cylindrical ends
theorem (\ref{Kerkjarto:end}) as well as an extension of
Nyikos' bagpipe (at any rate a typically Hungarian
endeavor).
As we are doing 2D-topology, the God-given recipe to capture
metrically the whole topology is to impose
``incompressibility'' at the fundamental group level:
\begin{defn}\label{soul:def} A soul
$S$ for a (non-metric) surface $M$ of
finite-connectivity is a metric
subregion $S\subset M$ such that the morphism induced by
inclusion $\varphi\colon\pi_1(S)\to \pi_1(M)$ is isomorphic.
\end{defn}
{\it Existence} of a soul is immediate from the kernel killing
procedure (\ref{killing:kernel}), and the interesting issue is
{\it uniqueness} (up to homeomorphism):
\begin{theorem}\label{soul:uniqueness} Let $M$ be a (non-metric)
surface of finite-connectivity. Then the three characteristic
invariants $(\chi, \varepsilon, a)$ (viz. Euler
characteristic, number of ends and indicatrix $a=0,1$ whether
orientable or not) of a soul are uniquely defined by the whole
surface $M$,
coinciding with
those of $M$. Consequently:
{\rm (a)} The topological type of a soul is uniquely defined
and
referred to as the soul of the finitely-connected surface $M$
(apply \Kerekjarto{ }{\rm (\ref{Kerkjarto:end})}).
{\rm (b)} Any finitely-connected surface has a finite number
of ends (an issue not completely obvious a priori).
\end{theorem}
\begin{proof} If $M$ is compact this adds nothing new to the
classical classification (\ref{Moebius-Klein-classification}).
So assume $M$ open and then $b_2=0$ (by vanishing of the
top-dimensional homology, cf. e.g., Samelson
\cite{Samelson_1965-homology}), so that $\chi=1-b_1$. Hence
the knowledge of $\chi$ is equivalent to that of the
connectivity $b_1$. Hence the matching of $\chi$ is immediate
from the {\it soul}-definition (\ref{soul:def}).
The equality of the indicatrix (telling us orientability) is
evident as well. For instance one can use the canonical
group-morphism $\mu_M\colon\pi_1(M) \to \{\pm 1\}$ obtained by
propagating local orientation around loops
(\ref{indicatrix-orient-covering}). Orientability ($a=0$)
amounts to the triviality of $\mu_M$. Since we naturally have
$\mu_M \varphi = \mu_S$, equality of the indicatrix follows
since $\varphi$ of (\ref{soul:def}) is isomorphic.
\def\endus{ends-number}
It remains only to check the equality of the ends-number. We
recall its:
\begin{defn} {\rm The {\it\endus}{ }of a space $X$ is the supremum,
over compacta $K\subset X$, of the cardinality of the set of
non-relatively-compact residual components of $X-K$:
$$
\varepsilon(X)=\sup_{K\,{\rm cpct}\,\subset X } {\rm card} \{ C\in
\pi_0(X-K) \colon \overline{C} \textrm{ is non-compact} \}.
$$
}
\end{defn}
{\small
---{\it Example: } Consider a letter ``Y'' with 3
branches going to infinity. Choosing as compactum a point right
below the branching, we count 2 residual components, but
enlarging it we get 3 residual components (and never more!).
The space has 3 ends.
}
\smallskip
{\bf Step~1 (Deriving from a soul a weak bag-pipe
decomposition).} Given a soul $S$ of $M$ (hence of
finite-connectivity), we know that it is homeomorphic to
$F_{n*}$ a finitely $n$ times punctured closed surface $F$
(\ref{Kerkjarto:end}). Thus there is a compact bordered
subsurface $B \subset S$ obtained by removing from $F$ the
interior of little discs centered at the punctures. We call
$B$ a bag. It is
a retract-by-deformation of the soul $S$, thus having the same
$\chi$, a number of contours equal to $n$
and the same indicatrix.
If we remove the interior of the bag $B$ from $S$ we
have $n$ residual components. Thus removing ${\rm int} B$ from
$M$ gives $k\le n$ components $P_1, \dots, P_k$ which are
bordered surfaces with $d_i\ge 1$ contours. Note that $\sum_i
d_i=n$.
First we claim that $k=n$ and that all $P_i$ are non-compact,
for otherwise
arguing as in
Claim~\ref{claim:incompressible-implies-countour_equal-ends}
violates the isomorphy of $\varphi'\colon \pi_1(B)\to
\pi_1(M)$ (incompressibility condition). It follows that
$d_i=1$ for all $i$ (all $P_i$ have a single contour).
Next using the Mayer-Vietoris sequence we have
the following additivity relation
(intuitively the overlapping occurs along circles not
contributing to $\chi$):
\begin{equation}\label{Mayer-Vietoris:char}
\chi(M)=\chi(B)+\textstyle\sum_{i=1}^n \chi(P_i).
\end{equation}
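As a numerical sanity check of the additivity relation \eqref{Mayer-Vietoris:char} (a toy computation of ours, not part of the proof): take $M=S^2_{3*}$, whose bag $B$ is a pair of pants with $\chi(B)=-1$ and whose three pipes each have $\chi=0$.

```python
# Toy check of chi(M) = chi(B) + sum_i chi(P_i) for M the thrice-
# punctured sphere: bag B = pair of pants, three pipes of chi = 0.
chi_B, chi_pipes = -1, [0, 0, 0]
chi_M = chi_B + sum(chi_pipes)   # additivity along the contour circles
b1 = 1 - chi_M                   # chi(M) = 1 - b_1 for an open surface
# consistent with pi_1(S^2_{3*}) = F_2
```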
Since each $P_i$ is bordered (and connected), $b_2(P_i)=0$,
and so $\chi(P_i)=1-b_1(P_i)\le 1$. If $b_1(P_i)=0$, then
as $P_i$ has a single contour it follows that $P_i$ is compact
(cf. \cite{GaGa2010}, Lemma 10), an absurdity.
\iffalse and therefore the disc (by
classification (\ref{Moebius-Klein-classification})). But
then the corresponding contour $\Gamma_i$ of $B$ is
trivialized in $\pi_1(M)$, violating the incompressibility,
excepted when $B$ is a disc, in which case $M$ would be the
sphere (which was ruled out at the start). \fi
Hence
$\chi(P_i)\le 0$, and since $\chi(M)=\chi(B)$ it follows from
\eqref{Mayer-Vietoris:char} that $\chi(P_i)=0$ for all $i$.
Thus
the filled $P_i$, denoted $P_{i,filled}$ (defined by gluing a
disc
to its unique contour),
is a
$1$-connected surface (as its $\chi=\chi(P_i)+1=1$, so its
$b_1=0$, and as $\pi_1$ is free
(\ref{freeness-for-open-surfaces:prop})
its $\pi_1=0$). (In Nyikos' jargon the $P_i$ now truly deserve
the name of {\it pipes}, yet not necessarily {\it long pipes},
a terminology which might
be reserved for the $\omega$-bounded case).
{\bf Step~2 (Computing the \endus).} Since $M$ contains the bag
$B$ (as a compactum)
leaving $n$ residual (non-relatively-compact) components (cf.
Step~1), we have $\varepsilon(M)\ge n$. Conversely given a
compactum $K\subset M$, we may decompose it according to the
bagpipe decomposition $M=B\cup \bigcup_{i=1}^n P_i$ to obtain
a fragmentation $K=K_B\cup \bigcup_{i=1}^n K_i$, where
$K_B=K\cap B$ and $K_i=K\cap P_i$ which are all compacta
(recall the bag and the pipes to be bordered hence closed as
point-sets). Regarding each $K_i$ in the filled pipe
$P_{i,filled}$,
we can trace a Jordan curve $J_i$ containing $K_i$ in its
interior, cf. (\ref{enclosing-trick:lemma}) below. Since the
interior $U_i$ of $J_i$ is homeomorphic to an open 2-cell (i.e.,
the plane), which is one-ended, $U_i-K_i$ has exactly one
component which is not relatively-compact. Reconstructing the
manifold $M$ from its bagpipe structure it follows that $M-K$
has {\it at most} $n$ components, which are not
relatively-compact. (Some ``percolation'' of the connectedness
may of course occur within the bag.) This shows that
$\varepsilon(M)\le n$, completing the proof.
\end{proof}
\begin{lemma}\label{enclosing-trick:lemma}
Any compactum $K$ of an open simply-connected surface $M$ can
be ``enclosed'' in a Jordan curve $J$, in the sense that the
bounding disc for $J$ (given by Schoenflies {\rm
(\ref{Schoenflies-Baer})}) contains $K$ in its interior.
\end{lemma}
\begin{proof}
By
calibration
(\ref{calibrated-exhaustions:lemma}), we have an exhaustion of
$M$ by Lindel\"of subregions $M_{\alpha}$ with trivial groups
$\pi_1(M_{\alpha})\approx\pi_1(M)=0$. Thus $M_{\alpha}$ is
$S^2$ or ${\Bbb R}^2$ (\ref{uniformization}). In the sphere
case $M_\alpha$ is both closed and open, hence equal to $M$
(connectedness), violating the openness assumption. Thus
$M_{\alpha}$ is the plane for all $\alpha$. Since $K$ is
compact, there is $\beta<\omega_1$ such that $M_{\beta}\supset
K$. By Heine-Borel, $K$ is closed and bounded in $M_\beta\approx
{\Bbb R}^2$, hence contained in a round disc of large radius, whose
boundary circle is the required enclosing Jordan curve $J$.
\end{proof}
\section{Algebraic distractions (Freiheitss\"atze)}
\subsection{Freeness of the fundamental group of open surfaces
(Ahlfors-Sario)}
It is well known that the fundamental group of an open metric
surface is free on countably many generators
(\ref{Ahlfors-Sario:freeness:lemma}).
\iffalse (cf.
Ahlfors-Sario, 1960 \cite[\S 44A.,
p.\,102]{Ahlfors-Sario_1960} or Massey's book 1967
\cite{Massey_1967}). \fi
Using Whitehead's spine
(\ref{Whitehead-spine}) we get the stronger assertion that
such surfaces
are homotopy equivalent to a countable graph. In general, a
{\it non-metric} (Hausdorff) surface may well
deliver a free fundamental group requiring uncountably many
generators, as for the doubled Pr\"ufer surface $2P$ (cf.
Calabi-Rosenlicht \cite[p.\,339--40]{Calabi-Rosenlicht_1953}
for a complicated(?) proof of the non-denumerability of
$\pi_1(2P)$ or Gabard 2008 \cite[Prop.\,3]{Gabard_2008} for an
easy computation via Seifert-van Kampen, which was suggested
by M. Baillif). It puzzled us, over a long period of time,
whether the fundamental group of an arbitrary (non-metric)
open surface is free (e.g., both
\cite[p.\,272]{Gabard_2008} and Baillif 2011
\cite{Baillif_2011} raise this question), yet it
is probably a trivial exercise. The basic idea
is that if there is a relation in the $\pi_1$ of the (big)
non-metric surface then, covering by charts the range of a
null-homotopy materializing this relation, we get a Lindel\"of
subregion where this relation holds already, violating the
freeness in the metric case. The trick looks theological,
as it does {\it not} exhibit a basis for the
fundamental group.
Let us see whether this naive idea can be completed into a serious
argument. \iffalse The following is rather loose in this
respect and should be seriously improved (if possible?).
Let us take a closer look. \fi
\begin{prop}\label{freeness-for-open-surfaces:prop} The
fundamental group of any open surface is a free group.
\end{prop}
\begin{proof} Let $M$ be any open surface.
If $G:=\pi_1(M)$ is not free, then there is a non-empty
reduced word $w=w(x_1,\dots,x_k)$ in some variables $x_i$
which, purely specialized to elements $g_i\in G$,
yields the equation $w(g_1,\dots,g_k)=1$
in $G$ (cf. Lemma~\ref{non-free} below and the definition
after it for the meaning of pureness). Choose $c_i$ some loops
representing the $g_i$. Cover the range of a null-homotopy
shrinking the concatenation $w(c_1,\dots,c_k)$ to the
basepoint by a finite number of charts to obtain a Lindel\"of
subregion $L$. Of course $L$ contains the $c_i$ (their ranges
to be accurate), and so the $c_i$ define elements in
$\pi_1(L)$, say $\gamma_i$. Of course the relation
$w(\gamma_1,\dots,\gamma_k)=1\in \pi_1(L)$ continues to hold
and the $\gamma_i$ are all non-trivial, since they
map to the $g_i\neq 1$ under the
morphism $\pi_1(L)\to
\pi_1(M)$ induced by the inclusion. Notice
that
the specialisation of $w$ via the assignment
$x_i\to \gamma_i$ is pure. By the reverse
implication of Lemma~\ref{non-free} we deduce that $\pi_1(L)$ is
not free, violating the classical (metric) case of
the proposition (\ref{Ahlfors-Sario:freeness:lemma}).
\end{proof}
The next lemma
sounds tautological: {\it a group is not free iff there is a
relation},
and just amounts to the interplay between the universal
description of free groups and the more concrete model in
terms of words spelled in an alphabet:
\iffalse yet since it was formulated by the great Alexander
with very roosted algebraic reminiscences, it is not
impossible that it is quite false. \fi
\begin{lemma}\label{non-free}
A group $G$ is \emph{not} free if and only if there is
a non-empty reduced word $w=w(x_1,\dots,x_k)$ in $k\ge 1$
letters $x_1, \dots, x_k$ and a pure specialization $x_i\to
g_i$ to elements $g_i\in G$
(cf. definition below) such that $w(g_1, \dots, g_k)=1$.
\end{lemma}
\begin{defn} {\rm A specialisation of a word $w$ in a group
$G$ is {\it pure} if whenever two letters $x,y$ of $w$ are
adjacent they do not specialize to $g\in G$ and $g^{-1}$, and
if $xy^{-1}$ or $x^{-1}y$ appears in the word $w$, then $x$ and $y$
do not specialize to the same element $g\in G$. We also demand
that no letter of $w$ specialize to $1\in G$.}
\end{defn}
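To see why pureness is needed in Lemma~\ref{non-free}, observe
that without it the lemma would fail: for any $g\in G$ the
reduced word $w=x_1x_2^{-1}$ under the impure specialization
$x_1\to g$, $x_2\to g$ yields
$$
w(g,g)=gg^{-1}=1,
$$
even when $G$ is free.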
\begin{proof} $[\Rightarrow]$
If $G$ is not free, then $G$ is still the quotient of a free
group $\varphi\colon F \to G$ with non-trivial kernel
$\ker\varphi$. Pick a non-trivial element $1\neq x \in
\ker\varphi$. Let the set $X$ be a basis for the free group
$F$.
As is well-known $X$ generates $F$ and
$x\in F$ can be written as a non-empty reduced word
$w=w(x_1,\dots,x_k)$ involving finitely many $x_i\in X$. Let
$g_i=\varphi (x_i)$. As $\varphi(x)=1$, we have
$w(g_1,\dots,g_k)=1$.
Furthermore, the specialisation $x_i\to g_i$ is pure,
provided we take care to choose the element $x$ of minimal
word length among all
non-trivial elements of the kernel $\ker\varphi$.
[$\Leftarrow$]
Assume that $G$ is free, say with basis $X\subset G$. Let
$w=w(x_1,\dots,x_k)$ be a non-empty reduced word in some
abstract symbols $x_i$ and let $x_i\to g_i\in G$ be a pure
specialization;
we must show $w(g_1,\dots,g_k)\neq 1$. Each $g_i$ can be
written as a non-empty reduced word $w_i$ involving finitely
many letters of the alphabet $X$ (non-empty because $g_i\neq
1$ by pureness). Substitute these expressions in $w$ to obtain
the big word $W=w(w_1,\dots,w_k)$, a priori not reduced. If
$W$ collapsed completely under reduction, this would force two
adjacent subwords $w_i$, $w_j$ (or their inverses) to cancel
each other out entirely, violating the pureness of the
specialization. Hence $W$ is non-empty after reduction, i.e.,
$w(g_1,\dots,g_k)\neq 1$.
\end{proof}
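The combinatorics of this proof can be tested on a toy
computation. The following Python sketch (our own
illustration, not part of the mathematical argument) encodes
free-group words as lists of (letter, $\pm 1$) pairs, freely
reduces them, and confirms that the impure specialization
$x_1,x_2\to g$ of $w=x_1x_2^{-1}$ collapses, while a pure
specialization of the same word survives reduction.

```python
# A minimal sketch illustrating Lemma "non-free": free-group words as
# lists of (symbol, exponent) pairs with exponent +1 or -1; free
# reduction cancels adjacent inverse pairs, and a pure specialization
# of a reduced word never reduces to the empty word.

def reduce_word(word):
    """Freely reduce a word given as a list of (letter, ±1) pairs."""
    out = []
    for letter, exp in word:
        if out and out[-1][0] == letter and out[-1][1] == -exp:
            out.pop()          # cancel an adjacent inverse pair
        else:
            out.append((letter, exp))
    return out

def specialize(word, assignment):
    """Substitute each abstract letter by a word in a free group F(X)."""
    big = []
    for letter, exp in word:
        w = assignment[letter]
        big.extend(w if exp == 1 else [(a, -e) for (a, e) in reversed(w)])
    return reduce_word(big)

# Impure specialization: w = x1 * x2^{-1} with x1, x2 -> g collapses:
w = [("x1", 1), ("x2", -1)]
g = [("a", 1)]
assert specialize(w, {"x1": g, "x2": g}) == []   # relation w(g, g) = 1

# A pure specialization (x1 -> a, x2 -> b distinct) stays non-trivial:
assert specialize(w, {"x1": [("a", 1)], "x2": [("b", 1)]}) != []
```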
\subsection{Freedom for non-Hausdorff 1-manifolds
by reduction to
surfaces
}
Another ``mystical'' question
(also eluding us for a long time, and indeed still eluding us
slightly) is whether the fundamental group of a
(non-Hausdorff) $1$-manifold is always free. (For Hausdorff
$1$-manifolds, we have a classification into 4 specimens,
all of which have trivial fundamental groups, except the circle.)
Of
course the same Lindel\"of reduction as we just did for
Hausdorff surfaces is formally possible, yet not very
effective unless the Lindel\"of case is
settled.
Maybe
the royal road (suggested by a discussion with A. Haefliger)
to this freeness
curiosity is a geometric
construction
exhibiting any (non-Hausdorff) $1$-manifold $M^1$
as the base of a fibration of a Hausdorff surface $M^2$
by real-lines (recovering via the leaf-space the given $M^1$).
The exact homotopy sequence of a fibration\footnote{This
foolhardy idea
was suggested
orally by Haefliger (circa 2006). One
has to check that the classical proof does not use the
Hausdorffness of the base; compare Hopf-Eckmann,
Ehresmann-Feldbau, Steenrod, etc., yet a detailed write-up is
perhaps desirable.}
reads (denoting by $F\approx {\Bbb R}$ the fibre):
\begin{equation}\label{homotopy-sequence}
\{1\}=\pi_1(F)\to \pi_1(M^2)\to \pi_1(M^1) \to \pi_0(F),
\end{equation}
from which, since $F\approx {\Bbb R}$ has trivial $\pi_1$ and
$\pi_0$, the map $\pi_1(M^2)\to \pi_1(M^1)$ is an
isomorphism; the required freeness of $\pi_1(M^1)$ follows via
(\ref{freeness-for-open-surfaces:prop}). In the simplest case
where $M^1$ is a branching line or a line with two origins it
is clear how to construct such a ``thickened'' fibration
$M^2\to M^1$, essentially like for {\it train-tracks} (\`a la
Thurston-Penner), cf. Figure~\ref{Train:fig}.
The above idea originates in Haefliger, 1955 \cite[p.\,8,
point {\bf 2.}]{Haefliger_1955}, where
we read (in translation): {\small
\begin{quote}
One can show that every one-dimensional manifold with a
finite number of boundary components, and whose fundamental
group has a finite number of generators\footnote{Of course we
may wonder if this proviso is really required. We believe it
is not.}, is the leaf-space of a foliated structure, and even
the base of a fibration by lines defined on a Hausdorff
two-dimensional manifold.
\end{quote}}
\noindent It also reappears in Haefliger-Reeb 1957
\cite[p.\,125, last sentence]{Haefliger_Reeb_1957}, where
it is asserted (again without proof) that any second-countable
$1$-connected (non-Hausdorff) \hbox{$1$-manifold} can be
realised as the leaf-space of a suitable foliation of the
plane.
\begin{figure}[h]
\centering
\epsfig{figure=traint.eps,width=122mm}
\caption{\label{Train:fig}
The Haefliger
twistor of a
(non-Hausdorff) 1-manifold}
\vskip-5pt\penalty0
\end{figure}
\subsection{Twistor or train-tracks (Haefliger, Thurston, Penner)}
Can somebody prove the following {\it hypothetical} lemma
(strongly inspired by Haefliger and by Thurston-Penner's
train-tracks):
\begin{lemma} \label{twistor} (Twistor trick or train-tracks.)
Any (non-Hausdorff) $1$-manifold can (non-canonically) be
materialized
as the base of a
fibration $p\colon M^2\to
M^1$
of a Hausdorff surface
fibred by real-lines ${\Bbb R}$. In particular, the projection
$p$ induces an isomorphism on the $\pi_1$.
\end{lemma}
We hope the result is true, being implicitly used in
Haefliger-Reeb \cite{Haefliger_Reeb_1957} at least when the
manifold $M^1$ is $1$-connected and second-countable
(equivalently Lindel\"of, because manifolds are locally
second-countable). Another little piece of evidence is that a
(Morse theoretical) reverse engineering seems to hold
metrically: {\it any open metric surface can be fibred by
lines so that the quotient is a non-Hausdorff curve}
(\ref{Morse-Thom:surfaces}). Even if
(\ref{twistor}) should work only in the Lindel\"of case, this
would be
punchy enough to settle the general freeness
question (in view of the Lindel\"of reduction trick used in
(\ref{freeness-for-open-surfaces:prop})).
\begin{exam}[Non-Lindel\"of twistors] {\rm It is worth noticing
that Pr\"ufer's construction (and its
derived products like Moore or Calabi-Rosenlicht) provides
twistors for several toy-examples of non-Lindel\"of
$1$-manifolds (cf. Figure~\ref{Train:fig}, bottom part). For
instance the horizontally foliated (classical) Pr\"ufer
surface
twistorizes the line with continuously ($\frak c={\rm card}\,
({\Bbb R})$) many branches. (Hence there is at least no
visceral
incompatibility between the twistor desideratum
(\ref{twistor})
and the non-Lindel\"of context.) Likewise the leaf-space of
the horizontally-foliated doubled Pr\"ufer surface $2P$ is the
line with $\frak c$-many origins. The vertical foliation on
the Moore surface punctured along the folded points admits as
leaf-space (and therefore is a twistor for) the {\it
everywhere doubled line} (described in Baillif-Gabard 2008
\cite[\S 3]{BG_2008_PAMS}). Finally, the horizontally-foliated
Moore surface punctured at all thorn singularities is a
twistor for the ``{\it lasso \'etrangl\'e}'' with continuously
many (infinitesimal) loops. }
\end{exam}
We now
tabulate
formal consequences of this geometric construction
(\ref{twistor}):
\begin{cor} (Haefliger-Reeb 1957
{\rm \cite{Haefliger_Reeb_1957}}) All simply-connected
second-countable (non-Hausdorff) $1$-manifolds occur as the
leaf-space of a suitable foliation of the plane. Relaxing
second-countability of $M^1$, the same is true for a suitably
foliated simply-connected surface.
\end{cor}
\begin{proof} Given such a $1$-manifold $M^1$, we consider its
twistor $\tau M^1\to M^1$ given by (\ref{twistor}). By the
isomorphism \eqref{homotopy-sequence}, the total space
$M^2:=\tau M^1$ is simply-connected and Lindel\"of (by the
general topology version of Poincar\'e-Volterra, cf. Bourbaki
or Guenot-Narasimhan as referenced in \cite{GabGa_2011}).
Since $M^2$ is non-compact (containing lines as closed
subsets), it is homeomorphic to ${\Bbb R}^2$ by classification
of 1-connected surfaces
(\ref{uniformization}).
\end{proof}
More generally we have the following (with relaxable
parenthetical provisos):
\begin{cor}
Any (second-countable) $1$-manifold is the leaf-space of a
foliation by lines of a (metric) surface with the same
fundamental group.
\end{cor}
Finally regarding the fundamental group structure we have:
\begin{cor}
All
(non-Hausdorff) $1$-manifolds have free fundamental groups.
\end{cor}
\begin{proof}
Consider again the twistor $M^2\to M^1$ of the $1$-manifold.
By \eqref{homotopy-sequence} again, $\pi_1(M^2)$ is isomorphic
to $\pi_1(M^1)$, and the former is free by
(\ref{freeness-for-open-surfaces:prop}). In case the twistor
trick (\ref{twistor}) should hold only for Lindel\"of
$1$-manifolds, then first establish the corollary in that
case, and next extend universally by the Lindel\"of reduction
trick used in the proof of
(\ref{freeness-for-open-surfaces:prop}).
\end{proof}
\section{Foliated foundations}
Before coming to our main object, we recall some
classical facts for later reference. Below, {\it
$1$-foliation}
abbreviates ``one-dimensional foliation''.
\subsection{Orienting
double cover (Haefliger, Hector-Hirsch, etc.)}
\begin{prop}\label{orienting:2-fold-covering}
Given a $1$-foliation of a manifold, there is a
double cover such that the lifted foliation is orientable. In
particular, any $1$-foliation of a simply-connected manifold
is orientable.
\end{prop}
\begin{proof} If no smoothness is postulated,
some tricks with germs act as a substitute for the tangent line
bundle of the foliation (cf. Haefliger 1962 \cite{Haefliger62}
or Hector-Hirsch 1981-83 \cite{HectorHirschA, HectorHirschB}).
Since the construction is purely local, there is no hindrance
in implementing it in the globalized world of non-metric
manifolds.
\end{proof}
\subsection{Compatible flows (\Kerekjarto, Whitney)}
In contrast, the following paradigm is much more {\it
metric}\,-sensitive
(indeed false without a metric, as amply discussed in
\cite{GabGa_2011}):
\begin{theorem}\label{Kerek-Whitney:thm}
(\Kerekjarto{ }1925, Whitney 1933) Given an oriented
$1$-foliation of a metric manifold, there is a compatible
flow, whose trajectories are the leaves.
\end{theorem}
\begin{proof} The 2D-case is due to
\Kerekjarto{ }1925 \cite{Kerekjarto_1925}, and the general one
to Whitney 1933 \cite{Whitney33}.
\end{proof}
\begin{cor}\label{disc-cannot-be-foliated}
The $2$-disc cannot be foliated (tangentially).
\end{cor}
\begin{proof} Recall two classical arguments:
$\bullet$ {\it Via Brouwer.} Assuming it could, then as the
disc is 1-connected the foliation is orientable
(\ref{orienting:2-fold-covering}), hence admits a compatible
flow (\ref{Kerek-Whitney:thm}). The fixed-point sets of the
time-$t_n$ maps of the flow, for dyadic times $t_n=1/2^n$,
form a nested sequence of closed sets, non-empty by Brouwer's
fixed point theorem, whose common intersection is non-void by
compactness. Thus a rest-point for all times is created,
violating the compatibility of the flow with the foliation.
$\bullet$ {\it Via H. Kneser.} Double the foliated disc to get
a foliated sphere with $\chi=2$, violating Kneser's
combinatorial proof of the Euler obstruction
(\ref{Kneser:Poincare-Dyck:Euler obstruction}) below.
\end{proof}
\begin{cor}\label{Euler:classical-obstruction}
More generally, a closed topological manifold
foliated by curves has zero Euler characteristic.
\end{cor}
\begin{proof} If not, then passing to the orienting cover
(\ref{orienting:2-fold-covering}), we may assume the foliation
oriented. Consider a compatible flow
(\ref{Kerek-Whitney:thm}), which by Lefschetz's fixed point
theorem \cite{Lefschetz_1937} (version for ANR's) has a fixed
point, an absurdity. In the surface case one can also argue
in an elementary fashion \`a la Kneser via
(\ref{Kneser:Poincare-Dyck:Euler obstruction}).
\end{proof}
\subsection{Beck's technique (plasticity of flows)}
\label{Beck's_technique:section}
Although we are primarily interested in foliations, some facts
concerning flows will be useful in the sequel. A basic
desideratum, when dealing with flows, is a two-fold yoga of
``restriction'' and ``extension'':
(1) {\it Given a flow on a space $X$ and an open subset
$U\subset X$, find a flow on $U$ whose phase-portrait is the
trace of the original one}; and conversely:
(2) {\it Given a flow on $U$, find a flow on $X\supset U$
whose phase-portrait restricts to the given one.}
\smallskip
Thus, one expects that any open set of a
brushing\footnote{That is, a space admitting a fixed-point-free
flow.} is a brushing, and that any separable super-space of a
transitive space is
transitive, provided the sub-space is dense (or becomes so,
after a suitable inflation).
Problem (1) is solved in Beck~\cite{Beck_1958}, when $X$ is
metric (via passage to the induced foliation this also derives
from \Kerekjarto-Whitney (\ref{Kerek-Whitney:thm})). (An
example in \cite{GabGa_2011} indicates a non-metric
disruption.) The same technique of Beck (clever time-changes
afforded by suitable integrations) solves Problem~(2) in the
metric case (compare \cite[Lemma 2.3]{Jimenez_2004}):
\begin{lemma}\label{Beck's_technique:extension:lemma}
Let $X$ be a locally compact metric
space and $U$ an open set of $X$. Given a flow $f$ on $U$,
there is a new flow $f^{\star}$ on $X$ whose orbits in $U$ are
identical to those under $f$.
\end{lemma}
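A one-dimensional toy model may convey the flavour of the
time-change device behind this lemma (the formulas below are
our own simplification, not Beck's actual construction):
rescaling the unit translation field by the distance to the
complement of $U=(0,1)$ yields a genuine flow on all of ${\Bbb
R}$ which fixes the complement pointwise, while its orbit
inside $U$ is still the whole interval.

```python
# Toy illustration of the time-change idea: rescale the unit-speed
# field on U = (0,1) by the distance to the complement of U.  The
# rescaled field extends continuously by zero, giving a flow on all of
# R that fixes R \ U pointwise yet keeps the same orbit inside U.

def speed(x):
    """Rescaled field: distance from x to the complement of U = (0,1)."""
    return max(0.0, min(x, 1.0 - x))

def flow(x, t, dt=1e-3):
    """Crude forward-Euler integration of x' = speed(x) up to time t."""
    for _ in range(int(t / dt)):
        x += dt * speed(x)
    return x

assert flow(1.0, 10.0) == 1.0        # a frontier point rests
assert flow(-2.0, 10.0) == -2.0      # the complement is fixed pointwise
assert 0.5 < flow(0.5, 50.0) < 1.0   # inside U: moves forward, stays in U
```

Roughly speaking, such distance-rescalings, suitably
integrated, are what produce Beck's flow $f^{\star}$ in the
metric setting.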
\subsection{Foliated triangulations (H. Kneser)}
It is hard to resist recalling Hellmuth Kneser's combinatorial
approach to the Euler obstruction. We admit the following,
referring for clean proofs to Kneser 1924 \cite{Kneser24} or
Hector-Hirsch \cite{HectorHirschA}.
\begin{lemma}\label{Kneser:triangulation-compatible-foliation}
(Kneser 1924) Any metric foliated surface has a
``generic'' triangulation where each $2$-simplex is
transversely foliated as depicted on Fig.\,\ref{Kneser:fig}.b.
\iffalse (no edge lyes in a leaf), say as the standard simplex
$\Delta=\{(x_1,x_2)\in {\Bbb R}^2: x_1+x_2\le 1, x_i\ge 0 \}$
foliated by horizontal lines $x_1-x_2=\text{constant}$. \fi
\end{lemma}
\begin{proof} (Dirty outline)
By definition of a foliated structure it is rather clear that
we have a tessellation by foliated boxes, which are squares.
Then we add diagonals to get triangles and whenever two of
them
are adjacent along a piece of leaf, we perform Kneser's flip
depicted on Fig.\,\ref{Kneser:fig}.e (gaining transversality).
\end{proof}
\begin{figure}[h]
\centering
\epsfig{figure=akneser.eps,width=122mm}
\vskip-105pt\penalty0
\caption{\label{Kneser:fig}
Kneser's proof of the Euler-Poincar\'e-Dyck obstruction}
\end{figure}
Since any open metric surface can be foliated
(\ref{Morse-Thom:surfaces}) (=Morse theoretical trick), this
suggests another proof of Rad\'o's triangulation theorem
(\ref{Rado:triangulation}) at least for open surfaces. (Of
course Rad\'o was well aware of
Kneser's paper, cf. his article \cite{Rado_1925}, but probably
not of the Morse theoretical trick.)
\begin{cor}\label{Kneser:Poincare-Dyck:Euler obstruction}
(Poincar\'e 1885, Dyck 1888, Kneser 1924) A closed surface
which is foliated has vanishing Euler characteristic $\chi=0$.
\end{cor}
\begin{proof} (Kneser). By
(\ref{Kneser:triangulation-compatible-foliation}) there is a
triangulation transverse to the foliated structure, which
is finite by compactness. So we
may compute the characteristic as the alternating sum of the
cardinalities $e_i$ of the set $\sigma_i$ of simplices of
dimensionality $i=0,1,2$:
\begin{equation}\label{Kneser:char}
\chi=e_0-e_1+e_2.
\end{equation}
First ignoring the foliation, recall
the relation $2e_1=3e_2$, cf.
(\ref{Kneser-Descartes-etc:lemma}) right below. Besides, any
(transversely foliated) $2$-simplex has a distinguished vertex
through which a piece of leaf
traverses the 2-simplex (Fig.\,\ref{Kneser:fig}.b). So we have
a map $\sigma_2\to \sigma_0$ which is onto and
2-to-1 as the leaf extends in 2 directions. Hence $e_2=2e_0$.
Plugging those relations in \eqref{Kneser:char} gives:
$
\chi=e_0-e_1+e_2=\textstyle\frac{1}{2}e_2-\frac{3}{2}e_2+e_2=0.
$
\end{proof}
\begin{lemma}\label{Kneser-Descartes-etc:lemma} (Descartes, Euler 1750, L'Huilier 1811, who else?)
In any finite triangulation of a closed (compact non-bordered)
surface the relation $2e_1=3e_2$ holds true between the
numbers
$e_1$, $e_2$ of edges, respectively triangles.
\end{lemma}
\begin{proof}
Let
$\sigma_i$ be the set of simplices of dimension $i$ and
consider the incidence relation {\it two triangles have a
common edge} (adjacent triangles, Fig.\,\ref{Kneser:fig}.c):
$$
I=\{(\Delta_1,\Delta_2)\in \sigma_2 \times \sigma_2: \Delta_1
\cap \Delta_2=\text{one edge} \}
$$
Mapping such a pair to its common edge yields a map $I\to
\sigma_1$ which is onto (the surface being non-bordered) and
2-to-1 as the pair-order is permutable. Thus the cardinality
of $I$ is $\# I = 2 e_1$.
Besides, projecting
on the first factor (say) gives a map
$I\to \sigma_2$ which is onto and 3-to-1
(Fig.\,\ref{Kneser:fig}.d), whence
$\# I = 3 e_2$.
\end{proof}
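As a sanity check (our own, not Kneser's), both the relation
$2e_1=3e_2$ and the vanishing of $\chi$ can be verified on the
classical $7$-vertex triangulation of the torus, with
triangles $\{i,i+1,i+3\}$ and $\{i,i+2,i+3\}$ taken mod $7$:

```python
# Incidence counts on the classical 7-vertex torus triangulation
# (triangles {i,i+1,i+3} and {i,i+2,i+3} mod 7): a concrete closed
# surface on which to verify 2*e1 = 3*e2 and chi = e0 - e1 + e2 = 0.

from itertools import combinations

triangles = [frozenset({i, (i + 1) % 7, (i + 3) % 7}) for i in range(7)] \
          + [frozenset({i, (i + 2) % 7, (i + 3) % 7}) for i in range(7)]

edges = {frozenset(e) for t in triangles for e in combinations(sorted(t), 2)}
vertices = {v for t in triangles for v in t}

e0, e1, e2 = len(vertices), len(edges), len(triangles)

# Closed surface: every edge lies in exactly two triangles.
assert all(sum(e <= t for t in triangles) == 2 for e in edges)

assert 2 * e1 == 3 * e2        # the incidence count of the lemma
assert e0 - e1 + e2 == 0       # Euler characteristic of the torus
```

Here $e_0=7$, $e_1=21$, $e_2=14$, so indeed $2\cdot 21=3\cdot
14$ and $\chi=0$.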
\subsection{Open metric surfaces fibrate (Morse, Thom, etc.)}
As is well-known, metric differentiable manifolds admit {\it
Morse functions}, which, as a reaction to the complicated
topology (or rather the compactness),
generally exhibit critical points. (In the late 60's, Morse as
well as Kirby-Siebenmann explained how to get rid of the
differentiable proviso
using so-called {\it topological} Morse functions.)
When the manifold is open, one can (in principle)
eliminate critical points by pushing them off to $\infty$:
\begin{theorem}\label{Morse-Thom-Whitehead-Hirsch}
Any open metric (topological) manifold has a
critical-point-free Morse function. The latter, being a submersion, defines
a codimension-one foliation, whose transverse ``line-field''
gives a $1$-foliation whose leaves are lines.
\end{theorem}
\begin{proof} {\it (Heuristic outline).} Choose any Morse
function $f$, i.e. locally resembling a quadratic
non-degenerate form $x_1^2+\dots+x_p^2-x_{p+1}^2-\dots-x_n^2$.
In particular critical points are isolated. In every open
metric manifolds $M$, one can---starting from any
point---trace a {\it ventilator} (jargon borrowed from L.
Siebenmann), i.e., an arc $A$ homeomorphic to a semi-line
$[0,\infty)$ such that $M$ slitted along $A$ is homeomorphic
to $M$, i.e., $M-A\approx M$. Since $M$ is metric, the
critical-set of $f$, being discrete, is countable. Thus, by a
(hazardous?)
infinite repetition, we may inductively remove ventilators
emanating from the critical points, reaching the first claim.
The addendum follows by aping ``the'' gradient flow of the
Morse function via a technique of
Siebenmann~\cite{Siebenmann72}. (In the smooth case just
integrate the transverse line-field, w.r.t. an auxiliary
Riemannian metric.)
{\it (Variant of proof in the PL-case)} Compare Hirsch
1961~\cite{Hirsch_1961}, using the theory of Whitehead's
spine.
\end{proof}
\begin{prop}\label{Morse-Thom:surfaces} Any metric open
surface
foliates. More is true: it can be foliated by lines, probably
even in the following hygienic way:
{\it (Hypothetical addendum).}---Any such surface can be
regarded as the total space of a fibration by lines whose base
is a (non-Hausdorff) $1$-manifold.
\end{prop}
\begin{proof} {\it (High-brow proof)} This follows by
specializing the above (\ref{Morse-Thom-Whitehead-Hirsch}),
taking (optionally) advantage of the smoothability of metric
surfaces. Smoothing(s) can be deduced from Rad\'o
(\ref{Rado:triangulation}) using
possibly the trick of Stoilow-Heins to introduce a
(stronger) Riemann surface (${\Bbb C}$-analytic) structure in
the orientable case (and by adapting a Klein(=di-analytic)
surface structure) in the non-orientable case.
{\it (Elementary proof?)} Start from a triangulation given by
Rad\'o (\ref{Rado:triangulation}) and try to find some clever
combinatorial procedure to
propagate a
foliated texture \`a la Kneser.
(Details left to the imaginative
readers.)
---{\it Outlined addendum.} Integrating the vector
field orthogonal to the level curves of a critical-point-free
Morse function $f$ (and having speed one w.r.t. a complete
Riemannian metric), we obtain a (fixed-point-free) flow
$\varphi\colon {\Bbb R}\times M \to M$ without
``recurrences''. Let $\cal F$ be the underlying foliation.
The projection on the leaf-space $p\colon M \to M/{\cal F}$ is
a fibration (by lines). Indeed, given
a trajectory of $\varphi$ (say that of the point $x$), one can
let evolve in time a small 1D-chart $V$ (selected) in the
$f$-level-curve, $f^{-1}(f(x))$, through $x$
to manufacture a
2-cell $U:=\varphi({\Bbb R}\times V)\approx {\Bbb R}\times V$
(via $(t,v)\mapsto\varphi(t,v)$) which is (trivially)
fibred by lines (the trajectories). The projection of $U$ in
the leaf-space is open (its inverse image being $U$ which is
open) and
homeomorphic to $V\approx {\Bbb R}$ (restrict $p$ to $V$).
Hence the leaf-space is a $1$-manifold, and $p$ is a fibration
(trivial over $p(V)$).
\end{proof}
This addendum looks
somewhat dual to the engineering of Haefliger (\ref{twistor})
permitting one to
conceive any (non-Hausdorff) 1-manifold as the base of a
fibration by lines of a Hausdorff surface.
---{\it Baby example.} Consider a ``Y''-shaped surface
resembling a tree in usual 3-space ${\Bbb R}^3$ with three
trunks going to infinity (Fig.\,\ref{ababy:fig}.a). The height
function ``$z$'' (third coordinate) has a critical point where
the two branches of the tree ``Y'' bifurcate. The latter can
be eliminated just by
deforming one of the branches horizontally and letting it
disappear to $\infty$ like a ``cusp''
(Fig.\,\ref{ababy:fig}.b). Those surfaces are just
diffeomorphic to a punctured cylinder that can be imagined
endowed with a complete Riemannian metric
whose line-elements diminish in size (w.r.t. the Euclidean
element) as the puncture is approached
(Fig.\,\ref{ababy:fig}.c). The height-function is
critical-point-free and the orthogonal trajectories are
vertical lines on the cylinder-model (with a sole interruption
at the puncture), yet drastically slowed-down (w.r.t. the
Euclidean perception) as we approach the dark-matter
concentrated near the puncture. The leaf-space (of the
transverse foliation) is a circle with two origins (just
identify two transverse circles lying above resp. below the
puncture, whenever they are intercepted by a same leaf)
(Fig.\,\ref{ababy:fig}.d).
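---{\it Concrete metric (optional).} Purely for illustration, one possible choice (among many) of such a complete Riemannian metric on the punctured cylinder $C-\{p\}$ is the conformal rescaling
$$
g=\frac{g_{\rm eucl}}{d(q,p)^2},
$$
where $d(q,p)$ is the Euclidean distance from $q$ to the puncture $p$. Any path approaching $p$ acquires infinite length (the integral $\int_0^{\varepsilon} dr/r$ diverges), and likewise for paths escaping to either end of the cylinder, whence completeness; being conformal, the rescaling leaves the orthogonality between the height-levels and their transverse trajectories untouched.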
\begin{figure}[h]
\centering
\epsfig{figure=ababy.eps,width=108mm}
\vskip-10pt\penalty0
\caption{\label{ababy:fig}
Foliating open surfaces by lines}
\vskip-5pt\penalty0
\end{figure}
\section{Haefliger-Reeb theory for non-metric
simply-connected
surfaces}
The starting point for this section (and actually of the whole
paper) was the observation by D. Gauld that a foliated avatar
of the ``closing lemma'' (Figure~\ref{David's_trick}, Case~1)
shows that a simply-connected surface (even when non-metric)
cannot be transitively foliated. Exploiting this remark allows
one to extend the Haefliger-Reeb theory to all 1-connected
surfaces, pushing its validity outside any metrical
predisposition, evidencing a rather robust character of their
theory.
\subsection{Haefliger-Reeb derives from Schoenflies (Gauld)}
The game in this section is to ape
non-metrically the Haefliger-Reeb theory of 1957
\cite{Haefliger_Reeb_1957} describing
foliations on the plane ${\Bbb R}^2$.
(This was also studied earlier in 1940 by W. Kaplan.)
Replacing the plane by an arbitrary (non-metric)
simply-connected surface, we passively observe that most of
the classical theory remains valid in this broader context.
(The only minor divergence is that the projection on the
leaf-space can now cease to be a fibration, an issue only
emphasised in subsequent papers, e.g. of Godbillon and Reeb.)
The {\it raison d'\^etre} for this extension is
the
non-metric availability of the Schoenflies theorem which is
implied by, and indeed equivalent to, simple-connectivity (see
Gabard-Gauld 2010 \cite{GaGa2010} or
(\ref{Schoenflies-Baer})).
\begin{prop} \label{Alex_separation} A
foliated simply-connected surface satisfies:
{\rm (a)} Any leaf is open as a manifold (i.e., no compact
circle leaf).
{\rm (b)} A leaf
appears at most once in any foliated chart. More precisely if
a leaf intersects a foliated chart then this intersection
reduces to a single line
(plaque).
{\rm (c)} From {\rm (b)}, it follows that the leaf-space is a
$1$-manifold, in particular any leaf is closed as a point-set.
Also leaves are proper, i.e. the leaf topology matches
the relative
topology. Still from {\rm (b)} leaves cannot be dense.
{\rm (d)} By {\rm (a)} any leaf has two ends and runs to
infinity in both directions while dividing the surface into
two components (called halves).
\end{prop}
\begin{rem} {\rm This statement
may well be empty when the surface lacks any foliation. This
is the case of the 2-sphere, but can also occur to non-compact
surfaces, e.g. the long glass ${\Bbb S}^1\times {\Bbb L}_{\ge
0}$ capped off by a $2$-disc (compare \cite{BGG1}). Point (b)
is exactly Th\'eor\`eme 1 in Haefliger-Reeb
\cite[p.\,120]{Haefliger_Reeb_1957}, and a direct consequence
is that the leaf-space is a (generally non-Hausdorff)
1-manifold. }
\end{rem}
\begin{proof} (a) is obvious, for a circle leaf would bound a
foliated disc by Schoenflies (see \cite{GaGa2010} or
(\ref{Schoenflies-Baer})), which is an absurdity
(\ref{disc-cannot-be-foliated}).
The proof of (b) is a similar Schoenflies obstruction modulo
some tricks reminiscent of the Poincar\'e-Bendixson trapping
argument or rather the closing lemma (for dynamical flows).
Assume that a leaf returns to a foliated chart. Orient the
foliated box as well as the leaf. Then one distinguishes two
cases depending on whether the first-return to the box matches
or reverses the orientation (cf. Figure~\ref{David's_trick}).
In fact
since the foliation is orientable
(\ref{orienting:2-fold-covering}), only the first case needs
attention.
\begin{figure}[h]
\centering
\epsfig{figure=closing.eps,width=108mm}
\caption{\label{David's_trick}
Absence of recurrences for a leaf in a simply-connected surface}
\vskip-5pt\penalty0
\end{figure}
In the first case one can perturb the foliation within the box
(e.g., piecewise linearly) while creating a circle leaf (impossible
by Schoenflies).
\smallskip
{\footnotesize {\bf Very optional over-exhaustive case
distinctions.} In the other case one has a ``tongue shape''
whose double produces a foliated disc.
In fact
in the second case there is
a tricky subcase (Case 2bis on
Figure~\ref{David's_trick}) corresponding to the situation,
where
the bounding disc for the Jordan curve starting from the first
escapement $e$, say, of the oriented leaf $L$ from the
foliated-box $B$ and extended until its first impact, say $b$,
on the box $B$ and closed-up by the unique arc in $\partial B$
from $b$ back to $e$ (transverse to ${\cal F}$) contains the
foliated-box $B$. In this case we start by pushing the side
$\overline{eb}$ into the box up to position $\overline{cd}$.
Then we flatten the boundary of the bounding disc $D$ for the
Jordan curve $J$ (through $e,b,c,d,e$) near the critical
region (compare the figure), and finally we double $D$ with a
replica $D'$ yielding a foliated $2$-sphere whose unique
singularities are two ``tripod'' singularities located at the
points $c$ and $d$. Such tripod singularities have an index
$j=-\frac{1}{2}$ each, yielding a total sum of $-1$,
disagreeing with the Euler characteristic of $S^2$. This
violates the Poincar\'e-{\Kerekjarto}-Hopf index formula for
line fields (cf. e.g., H. Hopf \cite[p.\,109 and Theorem II,
p.\,113]{Hopf_1946_1956}).
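Explicitly, granting the normalisation $j=-\frac{1}{2}$ for a tripod, the count reads
$$
j(c)+j(d)=-\frac{1}{2}-\frac{1}{2}=-1\neq 2=\chi(S^2),
$$
whereas the index formula requires the indices of a line field with isolated singularities to add up to the Euler characteristic.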
}
(c) The properness of leaves is clear in view of (b). That
leaves are closed sets can be derived from the fact that the
leaf-space is a (non-Hausdorff) \hbox{1-manifold}, which
follows directly from (b), as we shall recall later
(\ref{leaf-space-one-manifold}). Of course closedness can be
deduced also directly from (b): given a point $p$ not on the
leaf $L$, choose a foliated chart $U$ about $p$. If $U\cap L$
is empty we are done. If not then by (b) we see only a single
plaque of $L$ in $U$ so that we easily find an open set
$V\subset U$ containing $p$ but not intersecting $L$.
(d) As we shall not really need it, we leave as an exercise
the task of clarifying the meaning of running to infinity
(probably in terms of evasion from any compactum).
The last claim of (d) is somewhat harder to establish
(especially if one tries to delineate the broadest generality
in which such a separation holds true). Thus we reserve the
next section to a detailed discussion.
\end{proof}
\smallskip
{\small {\bf Optional semi-historical digression.} The sequel
may lead to an interpretation of the following prose of
Haefliger-Reeb~\cite[p.\,120]{Haefliger_Reeb_1957}: {\it ``Le
th\'eor\`eme 1 qui suit est classique; sa d\'emonstration
repose sur le th\'eor\`eme de Jordan (dans une version
particuli\`erement facile \`a \'etablir); ...''} Of course
Jordan is here somehow blended with Schoenflies. Recall
incidentally that the nomenclature ``Schoenflies theorem'' for
the bounding disc property is a rather recent coinage (perhaps
first appearing in Wilder 1949, as noticed in Siebenmann 2005
\cite[p.\,651]{Siebenmann_2005}). At any rate what is relevant
to the sequel is that point (b) of Prop.~\ref{Alex_separation}
provides a local flatness allowing one to prove a version of
Jordan separation using only covering space theory. Thus we
match slightly with the version particularly easy to establish
mentioned in Haefliger-Reeb, albeit they probably rather had
in mind a mod 2 homology argument, as the sequel of
their text shows: {\it ``...elle utilise donc essentiellement le fait
que le plan ${\Bbb R}^2$ est simplement connexe (ou plus
pr\'ecis\'ement que son premier nombre de Betti modulo 2 est
nul).''} However their sketched proof of their Th\'eor\`eme 1
uses in fact Schoenflies and not merely Jordan separation
(recall Dubois-Violette's example).
}
\subsection{Polarized covering
\`a la Riemann
and Jordan separation in the large}
Given a hypersurface $H$ in a simply-connected manifold $M$,
it is intuitively clear that $H$ divides $M$, provided the
hypersurface is closed as a point-set. A possible strategy is
that any such hypersurface in a manifold
induces naturally a
double cover of $M$. When $H$ does not divide $M$ this
covering is connected, violating the simple-connectivity of
$M$.
This section details the above idea. First, a definition:
\begin{defn}\label{hypersurface} {\rm A {\it (locally flat)
hypersurface} in a manifold is a (non-empty) subset $H$ such
that for any point $p\in H$ there is an open neighbourhood $U$
in $M$ and a homeomorphism of triad $h:(U, U\cap H,
p)\approx({\Bbb R}^n, {\Bbb R}^{n-1}\times\{0\},0)$. Since
$U\cap H$ divides $U$, we call $U$ a polarised chart. One has
a splitting $U=U_+\cup U_{-}$ in two local halves defined as
the closures in $U$ of the components of $U-(U\cap H)$. In the
sequel we shall refer to $U_{\pm}$ as being semi-charts.}
\end{defn}
For instance the ``open'' straight line $H=]-1,+1[\times\{0\}$
in ${\Bbb R}^2$ is a hypersurface,
but does not separate the plane. This is why we restrict
attention to hypersurfaces, which are closed as point-sets. As
the terminology ``closed hypersurfaces''
conflicts with the classical nomenclature ``closed manifolds''
(referring to compact borderless manifolds), some {\it ad hoc}
jargon is coined to disambiguate the double usage of
``closed'' in point-set vs. combinatorial topology:
\begin{defn}\label{divisor:def} {\rm A {\it divisor} in a manifold
is a hypersurface in the sense of (\ref{hypersurface}), whose
underlying set is closed as a point-set.}
\end{defn}
Now ``our'' polarization trick is the following mechanism:
\begin{prop}\label{Rieman-polarized-cover}
Given a divisor $H$ in a manifold $M$, there is a naturally
defined
double cover $M_H \to M$ (called the polarization of $M$ along
$H$), with the distinctive property that $M_H$ is disconnected
if and only if $H$ divides $M$.
\end{prop}
\begin{proof}
{\bf (1) Intuitive idea.}
First we can imagine that we cut $M$ along $H$ to obtain a
bordered manifold $W$ with an involution $\sigma$ on the
boundary $\partial W$ telling one how to reglue the points to
remanufacture the manifold $M$ out of $W$. (We use here the
magic scissor of combinatorial topology, which instead of
deleting points rather duplicates them!) In particular one has
an {\it assembly} map $\alpha \colon W \to M$, which is
one-to-one except over $H$ where the fibers are two points
exchanged by $\sigma$. (Call $\sigma p$ the opposite of $p$.)
Then take $W'$ a replica of $W$, and denote by $p'\in W'$ the
twin copy of the point $p\in W$. In the disjoint union
$W\sqcup W'$ identify the point $p\in
\partial W$ with the opposite of its twin, i.e. $\sigma p'$
(where for simplicity we still denote by $\sigma$ the
involution on $\partial W'$). We define $M_H$ as the resulting
quotient space. It is not hard to show that the assembly maps
$\alpha \cup \alpha' \colon W\sqcup W' \to M$ induce a map
$M_H \to M$ which is a covering projection.
\begin{figure}[h]
\centering
\epsfig{figure=riemann.eps,width=122mm}
\caption{\label{Riemann's_trick}
The
double cover of a manifold $M$
polarized along a hypersurface $H$}
\vskip-5pt\penalty0
\end{figure}
{\bf (2) Another viewpoint.} The above construction requires a
cleaner description of the cutting process. We can take a
slightly different approach. First take a copy $M'$ of $M$,
and define a new topology by splicing any polarized chart
$U=U_+ \cup U_{-}$ (cf. Def.~\ref{hypersurface}) into the two
``spliced'' sets $U_+\sqcup U_{-}'$ and $U_+'\sqcup U_{-}$.
The ``primes'' indicate that we push alternately one of the
two halves of $U$ into the second layer $M'$. Further we would
like to identify the points $p$ in $U\cap H=U_+\cap U_{-}$
with their twins $p'$ so as to restore the locally Euclidean
character. Note that we are not merely redefining a new
topology on the (static) point-set $M\sqcup M'$, but really
doing a gluing on the two spliced charts which is easy
locally, yet maybe
problematic (at the non-metric scale).
{\bf (3) Finding a way out.} Maybe the trick is as follows,
closer to the approach (1). We would like to formalise the
idea of cutting along a hypersurface. Thus we need first to
enrich $M$ by creating a replica for each point lying on $H$.
We try to think of such a point as a pair $(p,U_{\pm})$
consisting of a (classical) point $p$ of $H$ plus a preferred
half $U_{\pm}$ of a polarized chart $U$ about $p$. Since $H$
is closed (as a point-set), we may fix an atlas for $M$ such
that any chart meeting $H$ is a polarised chart (first cover
the hypersurface by polarized charts and then
aggregate charts of the manifold $M-H$). Say that such an
atlas is polarised w.r.t. $H$. Given a polarised atlas $\cal
A$ (say a maximal one to kill any dependence upon anodyne
choice from the beginning) we define a new point-set $W$ as
consisting of all {\it filters}, in the following two senses:
\begin{defn}\label{filters} A filter $F$ of charts
(resp. of semi-charts) is a nested sequence of charts
$U_i\supset U_{i+1}$ of $\cal A$ (resp. semi-charts, i.e.
halves of polarized charts of $\cal A$) whose common
intersection $\bigcap_{i\ge 0} U_i$ is a unique point
(called the center of the filter).
\end{defn}
Declare two filters $F_1, F_2$ as {\it equivalent} if for any
member of the first $U_i\in F_1$ there is an element of the
second $V_j\in F_2$ such that $V_j\subset U_i$. It is easy to
check that this is an equivalence relation. Now define $W$ as
the set of equivalence classes of filters. Notice that there
are two equivalence classes of filters converging to a point
$p\in H$, whereas there is a unique class converging to a
point not on $H$. We have a map $\alpha\colon W \to M$
assigning to a filter its center and we endow $W$ with the
most economical topology making $\alpha$ continuous. Then it
looks easy to check that $W$ is a bordered manifold.
As the map $\alpha$ is two-to-one above $H$, it gives a
mapping $\sigma\colon
\partial W \to \partial W$ exchanging these two points. Now we
have all the necessary ingredients to conclude as in the first
step (1).
\smallskip
{\bf Proof of the distinctive property in
(\ref{Rieman-polarized-cover}).} [$\Rightarrow $] (NB: this is
the sense really needed for the corollary below). If $H$ does
not divide, then the bordered manifold $W$ is connected, and
so is a fortiori $M_H$ which is obtained by identifying $W$
with a replica $W'$. (Here and below, we use implicitly that
the interior of $W$ is naturally homeomorphic to $M-H$, and
the general fact that a bordered manifold is connected iff its
interior is.)
[$\Leftarrow$] Assume that $H$ divides $M$. Then $W$ is
disconnected, and then $M_H$ is disconnected as follows from
the construction. Indeed assume for (psychological) simplicity
that $W$ has two components $W_+$, $W_-$. Then $M_H$ results
from $W\sqcup W'=(W_+\sqcup W_-)\sqcup(W_+'\sqcup W_-')$ by
attaching $W_+$ with $W_-'$ and $W_-$ with $W_+'$ and
therefore $M_H$ has two components.
\end{proof}
This gives our sought-for:
\begin{cor}\label{Riemann-separation} A divisor $H$ (i.e., a
hypersurface closed as a point-set, cf. (\ref{divisor:def})) in a
simply-connected manifold $M$ divides the manifold $M$.
\end{cor}
\begin{proof} If $H$ would not divide $M$, then the polarized
covering $M_H\to M$ is connected, violating the assumption
that $\pi_1(M)=0$.
\end{proof}
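---{\it Remark (covering-space bookkeeping; only a sketch).} One way to phrase the mechanism behind (\ref{Rieman-polarized-cover}) is that the polarized cover $M_H\to M$ is classified by the ``mod 2 intersection'' homomorphism
$$
w_H\colon \pi_1(M)\to {\Bbb Z}/2, \qquad [\gamma]\mapsto \#(\gamma\cap H)\ \ {\rm mod}\ 2
$$
(for loops $\gamma$ crossing $H$ transversally): a loop lifts to a loop in $M_H$ precisely when it conserves the charge, i.e. crosses $H$ an even number of times. Hence $M_H$ is connected iff $w_H$ is onto, which is impossible when $\pi_1(M)=0$.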
In particular this implies what we really wanted in
Prop.~\ref{Alex_separation}(d):
\begin{cor} Any leaf of a foliated simply-connected
surface divides the surface.
\end{cor}
\begin{proof} Point (b) of (\ref{Alex_separation}) implies
that the leaf is a hypersurface in the sense of
(\ref{hypersurface}), whereas point (c) of the same
(\ref{Alex_separation}) ensures that the leaf is closed as a
point-set. Thus we conclude with (\ref{Riemann-separation}).
\end{proof}
\iffalse
\subsection{Discussion and related literature}
Corollary~\ref{Riemann-separation} shows that any
simply-connected manifold splits (i.e., is divided by any
divisor). As we shall discuss later the converse does not
hold (consider e.g., the Poincar\'e homology sphere). Thus
Corollary~\ref{Riemann-separation} is not completely sharp,
and the optimal result would perhaps be the following
hypothetical characterization:
\begin{conj} A manifold $M$ splits (i.e. is
divided by any divisor) iff $\beta_1=0$, where $\beta_1$ is
the first Betti number with coefficients mod $2$.
\end{conj}
The corollary generalises (and reproves,
unfortunately only under the local flatness assumption
implicit in our definition of hypersurfaces) many classical
results, e.g.:
(1) The Jordan curve theorem (JCT) (separation of the plane
${\Bbb R}^2$ by any embedded circle). (Leibniz, Bolzano,
Jordan, Veblen, etc., including Hales for the first computer
assisted proof of (JCT).)
(2) The Jordan-Brouwer separation (separation of ${\Bbb R}^n$
by any embedded ${\Bbb S}^{n-1}$). (Brouwer, Lebesgue,
Schoenflies, Alexander)
(3) The separation of ${\Bbb R}^n$ by any hypersurfaces which
is closed as a point-set (cf. e.g., Lima 1988, Amer. Math.
Monthly \cite{Lima_1988}).
Of course it seems that this is related to Alexander's
duality, and it would be of interest to know what exactly
can be deduced from it, maybe the above sharp conjectural
characterisation.
Albeit our corollary above is not sharp, it is
methodologically interesting for it avoids completely homology
theory, while focusing only on the fundamental group and the
allied covering space theory.
As a foliated application we have:
\begin{cor} A foliation of codimension one of a
simply-connected manifold containing a leaf which is a divisor
cannot be transitive (i.e. has no dense leaf).
\end{cor}
\fi
\iffalse
\subsection{Discussing older methods}
{\bf Method (A) fails against target (1).} One writes the
homology sequence of the pair $(S,S-L)$:
$$
0=H_1(S)\to H_1(S,S-L)\to H_0(S-L)\to H_0(S)\to H_0(S,S-L)=0.
$$
The two extremal groups are trivial. Now the critical step
would be to claim that $L$ (despite its potential longness)
admits a tubular neighborhood $T$, at least in a suitable weak
(homological) sense (which will be soon apparent). Then the
similar sequence for the pair $(T,T-L)$:
$$
H_1(T)\to H_1(T,T-L)\to H_0(T-L)\to H_0(T)\to H_0(T,T-L)=0.
$$
shows that under the desideratum $H_1(T)=0$ and $H_0(T-L)$ of
rank $2$ (probably due to the simple-connectivity of $L$), the
relative group $H_1(T,T-L)$ has rank one. Since it is
isomorphic by excision to $H_1(S,S-L)$ we deduce that
$H_0(S-L)$ has rank two, yielding the asserted separation.
{\bf Method B attempts to swallow the big target (3)} Maybe a
variant would be to use the five lemma as in GaGa 2011,
Dynamics, yet it is not straightforward... Let us see why?
Recall from loc. cit. Lemma 4.13(ii) stating that {\it if $J$
is a closed set in a space $M$, and if $J$ is strictly
contained in $U$ an open set of $M$. Then if $J$ divides $U$,
and $H_1(U) \to H_1(M)$ is onto then $J$ divides $M$.} To
apply this to the situation (c), we choose $J:=H$, $M:=M$ but
need to construct $U$. The obvious idea is again to take a
sort of tubular neighborhood, by aggregating distinguished
charts $(U, H\cap U)\approx({\Bbb R}^n,{\Bbb R}^{n-1}\times
\{0\})$. Here we would define properness $H\subset M $ by the
requirement that
\iffalse (i) for any point $p$ of $M$ there is a chart $U\ni
p$ of $M$ such that
(ii) whenever a chart $V$ of $M$ meets $H$, there is a smaller
chart $U\subset V$ such that $(U, H\cap U)\approx({\Bbb
R}^n,{\Bbb R}^{n-1}\times \{0\})$ for a suitable homeomorphism
of pairs. \fi
(PROPER) for any point $p$ of $M$ and any chart $V$ of $M$
containing the point $p$ and meeting $H$, there is a smaller
chart $U\subset V$ still containing $p$ such that $(U, H\cap
U)\approx({\Bbb R}^n,{\Bbb R}^{n-1}\times \{0\})$ for a
suitable homeomorphism of pairs.
Then if we let $p$ run through $H$ we attempt to aggregate
such chart to construct $U$, yet the obvious difficulty is how
to ascertain that $U$ can be chosen so that $H$ divides it?
\iffalse In fact, it could well be the case that the assertion
follows from a very general separation theorem (akin to the
``Zerlegungssatz'' of H. Kneser about 1928), compare the
conjecture/question below. \fi
{\bf Method C} Now we discuss the specific two-dimensional
method trying to take advantage of the Schoenflies theorem.
Here again the method could apply either to the case of a
single curve or to a regular family of curves (i.e., a
foliation):
\begin{lemma}
(i) Let $L$ be a proper (closed) curve in the simply-connected
surface $S$. Then $L$ divides $S$.
(ii) In particular the assertion holds for $L$ a leaf of a
foliation on $S$.
\end{lemma}
\begin{proof} (i) If $L$
is compact, then this follows from the JCT established in
GaGa, 2010, NZJM. Otherwise assume $L$ non-dividing curve in
the surface $S$, then choose any point $p\in L$ plus two
perturbed points $p^{+}$ and $p^{-}$ lying on both sides of
$L$ (locally thinking). Since $S-L$ is connected (and a
manifold, we assume $L$ closed), there is an arc $\gamma$
joining $p^+$ to $p^{-}$ traced on $S-L$. Now link $p^{+}$ to
$p^{-}$ by a small arc $\delta$ meeting $L$ only once at $p$.
Then the Jordan curve $\gamma+\delta$ would bound a disc $D$
(Schoenflies), which contains one side of the curve $L$ in it.
This violates the properness of $L$, for $D$ is a compactum
whose trace on $L$ is non-compact.
\begin{figure}[h]
\centering
\epsfig{figure=psycho.eps,width=68mm}
\caption{\label{Schoenflies_trap}
Jordan separation in a simply-connected surface via a
Schoenflies trapping}
\vskip-5pt\penalty0
\end{figure}
Thinking seriously about this argument one may still have some
doubt (for it is not completely clear that $\gamma$ can avoid
$\delta$ except at the extremities...).
Perhaps the argument is easier if we do not forget the
foliation. In this case we choose near $p\in L$ a foliated box
$B$. We know that the leaf $L$ cannot reappear in the box $B$
(by point (a)). We choose two points $p^{+}$, $p^{-}$ in B
lying on the cross-section through $p$. Since $S-L$ is
connected one can maybe show that $L$ union the open arc $A$
from $p^+$ to $p^-$ still does not disconnect $S$. If so link
$p^+$ to $p^{-}$ in $S-(L\cup A)$ by an arc $\gamma$. Then
$\gamma \cup A$ is a Jordan curve, and we conclude as above.
\end{proof}
\iffalse
\begin{ques} Let $M^n$ be a simply-connected (non-metric)
manifold of dimension $n$. Then $M^n$ is divided by any proper
hypersurface $H$ (submanifold of codimension-one).
\end{ques}
\fi
\fi
\subsection{More analogies and divergences
from Haefliger-Reeb}
Albeit we shall not use it, we can push forward the analogy
with Haefliger-Reeb's theory. If $\cal F$ is a foliation on a
simply-connected surface $S$, then
(A) the leaf-space $S/ \cal F$
is still a 1-manifold (generally non-Hausdorff), cf.
(\ref{leaf-space-one-manifold}) below. In our setting the
leaf-space need not be second-countable (equivalently
Lindel\"of, as manifolds are locally second-countable). So the
leaf-space is a non-Hausdorff 1-manifold with possibly long
``branches'' or also with possibly uncountably many branches
(consider e.g., the leaf-space of ${\Bbb L}^2$ slitted along
the closed set ${\Bbb L}_{\ge 0} \times {\omega_1}$ and
foliated vertically or the Pr\"ufer type example depicted on
Figure~\ref{Train:fig}).
(B) However there is a little divergence with the metric case,
for now the projection $S\to S/ \cal F$ need not be a
(locally trivial) fibration. Indeed it is enough to consider
slitted long planes ${\Bbb L}^2-(\{0\}\times {\Bbb L}_{\ge
0})$ foliated vertically to see that the leaf-type can jump
erratically between the three open 1-manifolds (real-line,
long ray and long line). \iffalse Thus the issue that
any leaf of a foliation of the plane admits a saturated
neighbourhood with a product structure breaks down in our
context. \fi
Another
perverse example is provided by the vertical foliation of the
Moore surface (cf. Figure~\ref{Train:fig}), where there is no
jump in the topological type of the leaves, yet the projection
$M\to M/{\cal F}$ is not a fibration. If it were, then since
the base ${\Bbb R}$ is contractible the fibration
would be trivial (Feldbau-Ehresmann-Steenrod), and so the
total space would be ${\Bbb R}^2$ violating the non-metric
nature of the Moore surface $M$.
\iffalse
Claim (A) requires a little justification based
by the following copied version of Th\'eor\`eme 1 in
Haefliger-Reeb \cite[p.\,120]{Haefliger_Reeb_1957} attributed
to Poincar\'e-Bendixson and justified there (quite loosely)
via Jordan rather than Schoenflies(!), which is somewhat
abrupt in view of the Dubois-Violette example):
\begin{prop} \label{Haefli_Reeb_Thm1} ($\approx$ Haefliger-Reeb, Th\'eor\`eme~1) Given any foliated chart of a foliation of a
simply-connected surface the intersection with any leaf
reduces to the empty set or to a line.
\end{prop}
\begin{proof} The proof is the same as the one provided by
Figure~\ref{David's_trick} (plus the neighbouring
phraseology). Indeed if the leaf appears twice in the foliated
chart we can always produce a foliated disc.
\end{proof}
\fi
As in Haefliger-Reeb \cite[p.\,122]{Haefliger_Reeb_1957} the
fact that a leaf appears at most once in a foliated chart
(Prop.~\ref{Alex_separation}(b)) implies the:
\begin{cor}\label{leaf-space-one-manifold} The
leaf-space $V=S/ \cal F$ of a foliated simply-connected
surface $S$ is a one-dimensional manifold (generally
non-Hausdorff), which
is simply-connected (i.e., $\pi_1(V)$ is trivial, or
equivalently $V$ is divided by any puncture).
\end{cor}
\begin{proof} (Just a translation of
Haefliger-Reeb's argument.) To show that $V=S/ \cal F$ is a
$1$-manifold, it is enough to check that any point $z\in V$
admits an open neighborhood homeomorphic to the
number-line ${\Bbb R}$. Let $\pi\colon S \to V$ be the canonical
projection (associated to the equivalence relation $\rho$ of
belonging to the same leaf); the leaf $\pi^{-1}(z)$ meets
at least one foliated chart $O_i$. The equivalence relation
induced by $\rho$ on $O_i$ is, by
Prop.~\ref{Alex_separation}(b), the relation $\rho_i$
corresponding to the partition in parallel lines. Thus
$\pi(O_i)$, which is an open neighbourhood of $z$ (since $\rho$
is an open equivalence relation\footnote{This means that the
saturation of any open set is open, or what amounts to the
same that the canonical projection is open.}), is homeomorphic
to $O_i/\rho_i$, that is, to the number-line ${\Bbb R}$.
Regarding the second assertion (simple-connectivity of the
leaf-space) we again follow Haefliger-Reeb. The complement of
each leaf $L$ (a closed subset of $S$) has two components
(Prop.~\ref{Alex_separation}(c)(d)); hence the complement of
any point of $V$ also has two components.
This
is equivalent to the simple-connectivity of $V$ (compare lemma
p.\,113 in Haefliger-Reeb \cite{Haefliger_Reeb_1957} which is
a special case of (\ref{Rieman-polarized-cover}), or formulate
an appropriate exercise in algebraic topology using
Seifert-van Kampen, or Mayer-Vietoris).
\end{proof}
\subsection{Hausdorffness of the leaf-space in the $\omega$-bounded case}
By the preceding section,
the leaf-space of a foliated simply-connected surface is
a 1-manifold. In the metric case, the
non-Hausdorff\-ness of the quotient
is mostly catalyzed
by Reeb components. Heuristically it is rather evident that
there are no long Reeb components. More precisely if one
assumes that there is a long transversal, then it is easy to
deduce a continuous map from the long ray ${\Bbb L}_+$ to the
reals ${\Bbb R}$ which is not eventually constant (by looking
at how the leaves emanating from a point on the transversal
intercept a cross-section of a foliated chart). This gives
some weight to the:
\begin{conj}
The leaf-space of any foliated simply-connected
$\omega$-bounded surface is Hausdorff (which is probably
always the long-line).
\end{conj}
Here is an outline of the difficulty appearing in an attempted
proof. Given two leaves $L_1, L_2$ one would like to
separate them. Of course if there is a leaf $L$ which divides
$L_1$ from $L_2$ in the sense of ``Jordan'', that is, the $L_i$
belong to two distinct components of $M-L$, then those
components (projected in the
leaf-space) will
separate $L_1$ from $L_2$ in the sense of Hausdorff, and we
are finished. Now we would like to show that under the
$\omega$-boundedness condition, there is such a leaf $L$.
\iffalse
Such a leaf should exist by David's result (appearing
in BGG2) that in a Type I manifold, a doubly long leaf has a
foliated neighborhood which is a long tube $D \times L$, where
$D$ is a
$1$-disc (interval). An $\omega$-bounded surface being Type I
and sequentially compact, if you take out one of the
neighboring long lines, then it should cut the surface in two,
using the fact that the surface is 1-connected (I guess ?).
OKAY this is true by Prop.~\ref{Alex_separation}(d), but this
does not imply that both leaves lye in different components of
the removed one. \fi
Probably more is true. Recall that a divisor is a (locally
flat) hypersurface which is closed as a point set. In a
simply-connected foliated surface any leaf is a divisor which
is not a circle (\ref{Alex_separation}). Let us call {\it
pseudo-line} a connected divisor, other than the circle, in a
simply-connected surface (say an {\it absolute}, for short).
(It can be the real-line, the long ray or the long line.)
Since any divisor
in a simply-connected $M^2$ divides
(\ref{Riemann-separation}), given 3 pseudo-lines in an
absolute, either one of them divides the other two or none
of them separates the remaining two. Call
the first configuration {\it parallel}, and the second an
{\it amoeba}. In the latter case the 3 pseudo-lines bound a
bordered subregion, namely the triple intersection of those
halves of the $L_i$ containing the remaining two pseudo-lines
$L_j, L_k$ ($j,k \neq i$). Then we have the following
strengthening of the conjecture:
\begin{conj}
Any $3$ leaves of an $\omega$-bounded foliated absolute are
parallel.
\end{conj}
Here is a somewhat more theological argument supporting this
conjecture, which is perhaps not the most elementary, yet
adumbrating a broader perspective. If not, then the three
lines $L_1,L_2,L_3$ are in the configuration of an amoeba.
Then one can double the ``amoeba'' domain bounded by the three
curves $L_i$ to get a sort of long pant. It is easy to show
that the latter pant is $\omega$-bounded and of Euler
characteristic $-1$ (for instance with Mayer-Vietoris or by
using the fact that the characteristic of a bagpipe is equal
to that of the bag, cf. \cite[Lemma 4.4]{Gab_2011_Hairiness}).
Then conclude with the following conjecture
(\ref{Euler-obstruction}) which has probably some independent
interest (to be compared to the hairiness
note \cite{Gab_2011_Hairiness} for flows).
\subsection{Missing Euler obstruction}
\begin{conj}\label{Euler-obstruction}
An $\omega$-bounded surface with negative Euler characteristic
$\chi<0$ cannot be foliated.
\end{conj}
This
is an intriguing version of the Euler-Poincar\'e obstruction.
We think by experience that it must be true, yet the proof
looks more involved than in the flow case (where the
hypothesis was slightly different, namely non-zero $\chi$, cf.
\cite{Gab_2011_Hairiness}). The example of ${\Bbb L}^2$ with
$\chi=1$ shows that the condition $\chi\neq 0$ is not enough
to obstruct foliability.
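{\footnotesize For closed (compact) surfaces the corresponding statement is classical: a foliation is tangent to a singularity-free line field $\ell$, and the Poincar\'e-Hopf index relation for line fields gives
$$
\chi(S)=\sum_{p\in{\rm Sing}(\ell)}{\rm ind}_p(\ell)=0,
$$
so among closed surfaces only the torus and the Klein bottle are foliable. The conjecture asks whether some avatar of this index obstruction survives for $\omega$-bounded surfaces with $\chi<0$.}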
\section{Poincar\'e-Bendixson arguments}
In this section, we derive from Poincar\'e-Bendixson's
trapping argument under dichotomy (alias Jordan separation)
several {\it universal} obstructions to transitivity
not confined to the metric case. The complexity (of the
proofs) rises with the topology, quantified by the rank of the
$\pi_1$. The method is basically a reduction to the dichotomic
case by passing to
double covers, with Poincar\'e-Bendixson's method acquiring
more punch when combined with Riemann's branched covers.
When the total space fails to be dichotomic, some deeper
versions of Poincar\'e-Bendixson (like those of Kneser,
Markley, etc.) describing the dynamics on the Klein bottle
enter into the arena.
Ultimately we derive an almost complete classification of
finitely-connected metric surfaces
which are transitively-foliated. Besides, intransitivity
transfers non-metrically, being preserved under any non-metric
degeneracy of a finitely-connected metric surface provided its
invariants (Euler characteristic, ends-number and indicatrix)
are kept unaltered.
The
soul concept formalizes this idea while unifying all
results under a single perspective.
\subsection{Dynamics on the bottle (Kneser, Peixoto, Markley,
Aranson, Guti\'errez)}
Beside the basic Poincar\'e-Bendixson obstruction, we require
several other classic theorems describing
the
dynamics on the Klein bottle. Those
rely on some
magic arguments
close to the Poincar\'e-Bendixson trapping, yet deviating from
it inasmuch as they exploit a
global cross-section.
\begin{lemma} (Kneser 1924 {\rm \cite{Kneser24}})\label{Kneser}
Any foliated Klein bottle has a circle leaf.
\end{lemma}
\iffalse
\begin{proof}
Compare Kn
\end{proof}
\fi
\begin{cor}\label{Klein-foliated-intransitive}
The Klein bottle $\Klein$ is foliated-intransitive.
\end{cor}
\begin{proof} By Kneser (\ref{Kneser})
there is a circle leaf $K$. If it divides the bottle ${\Bbb
K}$ we are finished. Else cut the surface along $K$ to get a
{\it connected} compact bordered surface with $\chi=0$ and
either one or two contours. By classification
(\ref{Moebius-Klein-classification}) these are resp. a
(compact) M\"obius band or an annulus. Deleting the boundary
gives in both cases surfaces with $\pi_1\approx {\Bbb Z}$, and
we conclude with (\ref{infinite-cyclic-group}) below.
\end{proof}
\begin{lemma}\label{Markley}
(Markley 1969, Aranson 1969, Guti\'errez 1977) The Klein
bottle $\Klein$ is flow-intransitive.
\end{lemma}
\begin{proof} The intransitivity of ${\Bbb K}$
was first established by Markley 1969 \cite{Markley_1969}
(independently Aranson 1969), yet the argument of Guti\'errez
1978~\cite[Thm~2, p.\,314--5]{Gutierrez_1978_TAMS} seems to be
the ultimate simplification. We recall it for
completeness.
By a lemma of Peixoto there is a global cross-section $C$ to
the flow (transverse circle). This circle is two-sided (its
tubular \nbhd{ }being oriented by the flow-lines is an annulus
not a M\"obius band). Also $C$ is not dividing (a separation
impeding transitivity). Cutting $\Klein$ along $C$ yields a
connected bordered surface $W$ with 2 contours with $\chi$
unchanged equal to $0$. By classification
(\ref{Moebius-Klein-classification}), $W$ is an annulus.
Orient its 2 contours $C_1, C_2$ as the boundary of $W$, and
the original surface is recovered by an orientation-preserving
homeomorphism $h\colon C_1\to C_2$ (which we may assume, in
reference to a planar model say $W=\{z\in {\Bbb C}: 1 \le
\vert z \vert \le 2 \}$, to be a reflection about the vertical
axis on $C_1$ followed by a radial map $C_1\to C_2$). We
denote $h(p)=p'$, just by a prime.
\begin{figure}[h]
\centering
\epsfig{figure=Gutierr.eps,width=122mm}
\vskip-10pt\penalty0
\caption{\label{Gutierrez:fig}
Flow intransitivity of the Klein bottle (Guti\'errez's proof)}
\vskip-5pt\penalty0
\end{figure}
Assume the flow inward on the outer contour $C_2$ and outward
on the inner contour $C_1$. A dense orbit must cross $C$, and
w.l.o.g. we may suppose that the forward-orbit
is dense. Let $0$ be a point on $C_2$ whose forward-orbit is
dense in $\Klein$. Since the inner contour $C_1$ has a
foliated collar where the flow is outward,
the dense orbit must eventually reach this collar and so
intercepts $C_1$ at some point, say $1$. Then reflect
vertically $1$ and map it radially to get $1'\in C_2$. As
before (denseness) the subsequent trajectory must again
intercept $C_1$, at some position say $2$. Consider $h(2)=2'$,
and notice that the subsequent orbit is trapped inside the
dark subregion of Figure~\ref{Gutierrez:fig}, violating
denseness. Indeed, the future of $2'$ will be an interception
with the arc $A=\overline{1,2}\subset C_1$ determined such
that the Jordan circuit $0,1,A,2,1',2',0$ [=flowing forwardly
from $0$ to $1$, then following the arc $A$, next flowing
backwardly from 2 to $1'$ and finally moving injectively on
the circle along the orientation specified by the triple
$1',2',0$] is null-homotopic in the annulus $W$, so bounds a
disc $D$ in $W$, which is the required trapping region, since
$h(A)\subset D$.
\end{proof}
\subsection{Dichotomy obstructs oriented transitivity}
The cornerstone
is an oriented foliated
avatar of Poincar\'e-Bendixson:
\begin{lemma}\label{Poinc-Bendixson_many} An oriented
foliation on a dichotomic surface has no dense leaf. A further
addendum is that no finite collection of leaves can be dense.
\end{lemma}
\begin{proof}
This is the
trapping argument of Poincar\'e-Bendixson, best understood by
drawing a figure:
\begin{figure}[h]
\centering
\epsfig{figure=poinben.eps,width=122mm}
\vskip-15pt\penalty0
\caption{\label{poinben:fig}
Foliated Poincar\'e-Bendixson argument}
\vskip-5pt\penalty0
\end{figure}
Assume that $L$ is a dense leaf. Choose on it a point (called
$0$) and about $0$ a foliated box $B$. Since $L$ is dense it
must reappear in the box $B$. W.l.o.g. assume this to be a
forward interception w.r.t. the orientation (else reverse it).
Call the first return to the cross-section $1$. The piece of
leaf from 0 to 1 closed by the cross-sectional arc from $1$
back to $0$ is a Jordan curve $J$. By dichotomy $J$ divides
the surface, trapping the future of the trajectory. In
particular the next return to the cross-section, call it $2$,
occurs to the right of $1$. By induction it follows that the
successive returns occur in an order-preserving fashion. On
that picture one can safely superpose several trajectories
(=oriented leaves) so as to deduce the addendum.
\end{proof}
\begin{rem} {\rm Lemma~\ref{Poinc-Bendixson_many}
reproves that a 1-connected surface lacks dense leaves (a
foliation on a 1-connected manifold being automatically
orientable (\ref{orienting:2-fold-covering})).
\subsection{Foliated surfaces with infinite
cyclic group}
Our initial intention was to apply the Haefliger-Reeb theory
to the following proposition, by passing to the universal
cover while arguing that the lifts of the dense leaf cannot be
dense. (We were not able to present a decent proof and, crudely
put, met some difficulties in showing that the generating
deck-translation acts as translations or gliding reflections
of ${\Bbb R}^2$.)
\begin{prop} \label{infinite-cyclic-group} A (non-metric) surface
whose fundamental group is infinite cyclic
lacks a transitive foliation (i.e., with at least one dense
leaf).
\end{prop}
\iffalse
\begin{proof} Assume the contrary, and let $(S, \cal F)$ denote a
foliated surface with group $\pi={\Bbb Z}$ (the group of
relative integers) and with dense leaf $L$. We construct as
usual $p\colon \widetilde{S}\to S$ the universal covering
surface and let $\pi$ act as the group of deck-translations.
Of course we lift as well the foliation to get
$\widetilde{\cal F}$ a foliation on $\widetilde{S}$, which is
invariant under the group action $\pi$.
Thus the set $p^{-1}(L)$ splits as a disjoint union of
$\widetilde{\cal F}$-leaves. Of course since the covering
projection is a local homeomorphism this set has to be dense
in $\widetilde{S}$. Furthermore it is not hard to show that
$p^{-1}(L)$ splits into countably many leaves $L_i$ ($i\in
{\Bbb Z}$) acted upon simply-transitively by the group $\pi$.
(Indeed a non-trivial isotropy would imply that some power
$\tau^{i}$ of a fixed generating deck translation, $\tau$,
preserves a certain leaf, say $L_0$. Yet since $L_0$ is not a
circle it is one of the three open one-manifolds, but the two
non-metric ones are ruled out since they have the f.p.p
(fixed-point property), leaving ${\Bbb R}$ as the unique
option, which can however only project down to a circle when
divided by a (free) ${\Bbb Z}$-action, violating the denseness
of $L$).
Now we shall appeal to Prop.~\ref{Alex_separation}. Thus all
$L_i$'s are closed (as point-set), open as manifold and
dividing $\widetilde{S}$ in two halves. Of course we may label
the leaves $L_i$ so that $\tau^{i}(L_0)={L_i}$.
Then we maintain the following:
{\bf Claim}: $L_2$ lies in the same half of $L_0$ as does
$L_1$.
or better
{\bf Claim}: $L_2$ lies in the half of $L_1$ not containing
$L_0$.
If so applying $\tau$ we see by induction that $L_{i+1}$ lies
in the half of $L_i$ not containing $L_{i-1}$. By this sort of
``combinatorics'' (or rather a sharper version of it) we would
like to deduce that $\{ L_0, L_1 \}$ the region between $L_0$
and $L_1$ (defined as the intersection of the two respective
halves containing the other one) would be an open set not
meeting $p^{-1}(L)$, which would be the required
contradiction...
\end{proof}
\fi
\iffalse
Here is an alternative approach to show
Prop.~\ref{infinite-cyclic-group}.
The idea is the following: \fi
\iffalse
\begin{proof}
If $M$ is not orientable, construct the $2$-fold orientation
covering $M_{or} \to M $ of the manifold $M$, whose total
space is orientable. Of course the group of $M_{or}$ is still
isomorphic to ${\Bbb Z}$.
Assume $M$ foliated by $\cal F$,
and lift the foliation to $M_{or}$ to get ${\cal F}_{or}$. If
this foliation is oriented, we are finished by
(\ref{Poinc-Bendixson_many}).
Otherwise we pass to (an additional) 2-fold cover $p\colon
\Sigma\to M_{or}$ rendering the foliation $p^{-1}({\cal
F}_{or})$ orientable. Of course the surface $\Sigma$ continues
to be orientable and to have infinite cyclic group. Thus by
Lemma~\ref{dichotomy:orient_plus_inf_cycl}, it is dichotomic.
So Lemma~\ref{Poinc-Bendixson_many} applies, and impedes the
(at most) four leaves lying above a (hypothetical) dense leaf
$L$ of the original $2$-manifold $M$ to be dense.
\end{proof}
\fi
\begin{proof} If orientable, the surface is
dichotomic (\ref{dichotomy:orient_plus_inf_cycl}) and we
conclude with (\ref{Poinc-Bendixson_many}) after orienting the
foliation, up to passing to the
double cover (\ref{orienting:2-fold-covering}). As the group
is ${\Bbb Z}$, it stays so under finite covering, which also
preserves orientability. If not orientable, pass to the
orientation double cover
(\ref{indicatrix-orient-covering}) and we are reduced to the
previous
case.
\end{proof}
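{\footnotesize The covering-space bookkeeping invoked in the proof reduces to the structure of subgroups of ${\Bbb Z}$: a finite-index subgroup satisfies
$$
[{\Bbb Z}:H]=d<\infty \;\Longrightarrow\; H=d\,{\Bbb Z}\approx{\Bbb Z},
$$
so the total space of any finite covering of a surface with infinite cyclic fundamental group again has infinite cyclic fundamental group.}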
\iffalse
\begin{rem} {\rm Lemma~\ref{Poinc-Bendixson_many}
reproves that a simply-connected surface cannot have a dense
leaf (a foliation on a simply-connected
manifold is automatically orientable).\fi \iffalse And then we
can either:
(1) use Jordan separation and the above Poincar\'e-Bendixson
lemma.
(2) or alternatively apply the closing argument (of BGG2 \cite{BGG2}, cf.
also the simple case $1$ of Figure~\ref{David's_trick})
without having to worry about the more complicated Cases~2 and
2bis, which are impossible for an oriented foliation. [So at
this stage I understood that David's proof in BGG2 is indeed
complete, yet one has to insist that the foliation can be
globally oriented. Thus with this preliminary we can simplify
somewhat the proof surrounding Figure~\ref{David's_trick},
while in particular avoiding the index formula for line
fields.]
\fi
}
\end{rem}
\subsection{Free groups of rank two under dichotomy}
Recall as a motivation Dubois-Violette's transitive
(indeed minimal) foliation on the thrice punctured plane with
fundamental group $F_3$ (free
on three letters).
Also the Kronecker torus punctured once shows a minimal foliation
on a surface with $\pi_1\approx F_2$ (free of rank 2). The
following result
(surely well-known in the metric case, albeit we did not
check
the literature carefully)
shows the sharpness of those examples as
having the minimum complexity for the
fundamental group permitting a
dense leaf.
In particular $3$ is the minimal number of punctures of the
plane required to manufacture a {\it labyrinth}
(=foliation
with a dense leaf).
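{\footnotesize Recall indeed that the fundamental group of the $k$-times punctured plane is free of rank $k$:
$$
\pi_1\bigl({\Bbb R}^2-\{p_1,\dots,p_k\}\bigr)\approx F_k.
$$
Thus two punctures give $F_2$ (and planar surfaces being dichotomic, the proposition below applies), while the three punctures of Dubois-Violette's labyrinth give $F_3$.}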
\begin{prop}\label{dichotomic-free-of-rank-2:prop}
A dichotomic surface
with fundamental group $\pi_1$ free of rank $2$ is
foliated-intransitive.
\end{prop}
This applies
to the twice-punctured Moore
surface, and more generally to any
twice-punctured simply-connected surface,
in view of
(\ref{puncturing}).
\smallskip
\begin{proof}
If the foliation is orientable, then the foliated version of
Poincar\'e-Bendixson (\ref{Poinc-Bendixson_many}) concludes.
Otherwise, pass to the 2-fold orienting cover
(\ref{orienting:2-fold-covering}), which by the branched
covering argument of (\ref{dicho-covering-is-dicho}) is still
dichotomic, reducing again to the Poincar\'e-Bendixson
obstruction (\ref{Poinc-Bendixson_many}).
\end{proof}
\subsection{Free groups of rank two (non-orientable cases)}
The above (\ref{dichotomic-free-of-rank-2:prop}) does not
apply to $\Moe_*$ (punctured M\"obius band), which is ${\Bbb
R}P^2_{**}$ (twice punctured projective plane), which has
$\pi_1\approx F_2$ (\ref{puncturing}), but
is not dichotomic
by (\ref{dicho-implies-orientable}).
Yet the
method of branched covers still applies:
\begin{prop}\label{Klein-Weichold} The twice-punctured
projective plane
${\Bbb R}P^2_{**}=\Moe_*$ is foliated-intransitive.
\end{prop}
\begin{proof} {\it Orientable case.} If the foliation
is orientable, take a compatible flow on ${\Bbb R}P^2_{**}$
(\ref{Kerek-Whitney:thm}). By a standard method \`a la Beck
(\ref{Beck's_technique:extension:lemma}) the flow extends to
${\Bbb R}P^2$. Passing to the universal cover $S^2$,
Poincar\'e-Bendixson obstructs transitivity.
{\it Non-orientable case.} If not, the foliation determines a
double orienting cover $p\colon \Sigma \to {\Bbb R}P^2_{**}$
(\ref{orienting:2-fold-covering}). Looking around the
punctures we can by Riemann's trick
(\ref{Riemann:branched-cover}) compactify this map to a
branched covering $\Sigma^*\to {\Bbb R}P^2$ (ramified at $R$ a
sublocus of the punctures). Thus
$$
\chi(\Sigma^*)=2\chi({\Bbb R}P^2)-\deg(R).
$$
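{\footnotesize This is the usual Riemann-Hurwitz count for a double cover branched over $\deg(R)$ simple points: deleting the branch locus leaves a genuine $2$-sheeted covering, whence
$$
\chi(\Sigma^*)-\deg(R)=2\,\bigl(\chi({\Bbb R}P^2)-\deg(R)\bigr),
$$
each branch point having a single preimage upstairs.}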
Since $\deg(R)\le 2$, $\chi(\Sigma^*)\ge 0$. Since $\Sigma^*$
is connected we have also $\chi(\Sigma^*)\le 2$.
If $\chi(\Sigma^*)=2$, then $\Sigma^*\approx S^2$ and
Poincar\'e-Bendixson
concludes.
If $\chi(\Sigma^*)=1$, then $\Sigma^*\approx{\Bbb R}P^2$ and
we argue as above (orientable case).
If $\chi(\Sigma^*)=0$, then we have either the Klein bottle
${\Bbb K}$ or the torus ${\Bbb T}^2$. In the first case, lift
the foliation to $\Sigma$, take a compatible flow
(\ref{Kerek-Whitney:thm}) and ``extend'' it to $\Sigma^{*}$
(\ref{Beck's_technique:extension:lemma}), violating the
flow-intransitivity of Klein ${\Bbb K}$
(\ref{Markley}).
The toric case requires a separate argument.
We have the
branched covering ${\Bbb T}^2=\Sigma^*\to \proj$ ramified at
two places, as $\deg(R)= 2$.
Exchanging sheets induces an
involution $\sigma$ of the torus which is orientation
reversing (the quotient being non-orientable) with
two fixed points.
One obstruction to this is geometric,
amounting essentially to the Klein-Weichold (1876--1883)
classification of orientation reversing involutions on
oriented closed surfaces (relevant to the real algebraic
geometry of curves). In fact we merely need the very
basic fact
that the linearization in the small of such an
involution near a fixed point is a symmetry about a line,
violating the isolated nature of the above fixed points.
(Formal proof of this in the topological case came slightly
later with the era of Schoenflies, Brouwer, \Kerekjarto.)
\iffalse
{\footnotesize Alternatively there is
a clumsy algebraic way using the Lefschetz trace formula:
$$
2=\chi({\rm Fix}(\sigma))=\sum_{i} (-1)^i Tr (H_i(\sigma)).
$$
The second trace is $-1$, as $\sigma$ reverses orientation.
Writing the matrix of $H_1(\sigma)$ w.r.t. the canonical basis
$e_1, e_2$ of $H_1(T^2)$ as $M=\begin{pmatrix} a &b \cr c & d
\end{pmatrix}$, with coefficients in ${\Bbb Z}$. The Lefschetz
relation writes $a+d=-2$ (L). Since $\sigma $ is involutive
$M^2$ is the identity; hence (1) $a^2+bc=1$, (2)
$ab+bd=0=b(a+d)$, (3) $ac+cd=0=c(a+d)$, (4) $cb+d^2=1$.
From (2),(3) and (L) it follows that $b=c=0$. So by (1),(4)
$a^2=1$ and $d^2=1$. By (L) it follows $a=d=-1$.
But that is not all. Using the Lie group structure of the
torus we have the so-called Pontrjagin product. The latter can
be used to show that what $\sigma$ does on the $H_1$
determines its action on the $H_2$. In our situation we have
$$
H_2(\sigma)(e_1\star e_2)=H_1(\sigma) (e_1) \star H_1(\sigma)
(e_2)=(-e_1)\star(-e_2)=e_1\star e_2,
$$
by bi-linearity. Since $e_1\star e_2=[T^2]$ is the fundamental
class, this violates the orientation reversion. } \fi
\end{proof}
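The Euler-characteristic case analysis of the preceding proof can be tabulated mechanically (a minimal sketch; the surface names merely label the pairs $(\chi,\textrm{orientability})$ under the classification):

```python
# Riemann-Hurwitz for the compactified orienting double cover of RP^2_{**}:
# chi(Sigma*) = 2*chi(RP^2) - deg(R), with deg(R) in {0, 1, 2}
# (at most one ramification point over each of the two punctures).
chi_RP2 = 1
cases = {deg_R: 2 * chi_RP2 - deg_R for deg_R in (0, 1, 2)}
print(cases)  # {0: 2, 1: 1, 2: 0}
# chi = 2 -> S^2 (Poincare-Bendixson), chi = 1 -> RP^2 (orientable case),
# chi = 0 -> Klein bottle or torus (the two delicate cases of the proof).
assert set(cases.values()) == {2, 1, 0}
```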
Besides ${\Bbb R}P^2_{**}=\Moe_*$, another
specimen
with $\pi_1=F_2$ is $\Klein_*$ (the punctured {\it Klein
bottle}; the Klein bottle being the non-orientable closed
surface with $\chi=0$, ${\Bbb K}=S_{2c}$ is also the sphere
with $2$ cross-caps). Again we
expect a similar result:
\begin{lemma}
The punctured Klein bottle $\Klein_*$ is
foliated-intransitive.
\end{lemma}
\begin{proof} We break the argument into two parts; the first
being a repetition of the method employed so far, which
in the present case seems sterile:
\smallskip
{\footnotesize
{\bf The usual method fails.} If the foliation is orientable
we are done by the flow-intransitivity of Klein $\Klein$
(\ref{Markley}). If not the foliation defines its oriented
2-fold covering $\Sigma \to \Klein_*$
(\ref{orienting:2-fold-covering}), which we compactify into a
branched covering $\Sigma^* \to \Klein$. In particular:
$$
\chi(\Sigma^*)=2\chi(\Klein)-\deg(R).
$$
As $0\le \deg(R)\le 1$, it follows $-1\le \chi(\Sigma^*)\le
0$. Hence $\Sigma^*$ is either $\Klein$ or $T^2$ if $\chi=0$
or $N_3$ the sphere with $3$ cross-caps when $\chi=-1$.
The case of $\Klein$ is easily ruled out by Markley's
intransitivity (\ref{Markley}).
The other cases are harder. In the toric case $\deg(R)=0$, so
that the compactifying covering is unramified. This means that
about the puncture the foliation is oriented, etc.
}
\smallskip
{\bf A new trick is required.} Maybe a more efficient argument
is to use the index formula for line-fields (Poincar\'e,
Bendixson, \Kerekjarto, Hopf, etc.). Since there is a unique
singularity and $\chi(\Klein)=0$, the index at the puncture is
zero. Thus the
foliation extends to $\Klein$. (If the singularity looks like
a letter ``X'' with opposite hyperbolic sectors and opposite
focus-type sectors with leaves converging to the puncture,
then there is no such extension! Yet such a scenario impedes
transitivity due to the focusing sectors.) Once the foliation
is extended, conclude with Kneser 1924 (\ref{Kneser}) or
rather its corollary (\ref{Klein-foliated-intransitive}).
\end{proof}
Uniting
the forces of the two previous lemmas we
deduce:
\begin{lemma}\label{rank_two_non-orientable:intransitive}
A non-orientable metric surface with $\pi_1=F_2$ is
intransitive.
\end{lemma}
\begin{proof} Such a surface is homeomorphic
either to $\proj_{**}$ or $\Klein_*$ by
(\ref{Kerekjarto:non-orient}).
\end{proof}
\iffalse
\begin{proof}
Using the \Kerekjarto classification of open metric surfaces,
such a surface is either $\proj_{**}$ or $\Klein_*$.
Indeed a classical result of Kerekjarto (1923) (also in
Richards, 1966) says that an open metric surface of
finite-connectivity possesses an end neighbourhood
homeomorphic to a punctured plane. This applies to our
surface, $M$, since $F_2$ is not the group of a closed
surface. So by aggregating the puncture we get a new surface
$M^*$ which punctured gives $M$.
$\bullet$ If $M^*$ is compact, then as it is non-orientable
so $M^*$ is homeomorphic to $N_g$, $g\ge 1$, the sphere with
$g$ cross-caps, denoted $S^2_{gc}$. Puncturing back gives
$M=S^2_{*,gc}$ (one puncture, $g$ cross-caps). By
(\ref{cross-capping}), $\pi_1(M)\approx F_g$ (free of rank
$g$). Thus $g=2$, and so $M^*=N_2=\Klein$. q.e.d.
$\bullet$ If not, i.e. $M^*$ is open, so its group is free
(\ref{freeness-for-open-surfaces:prop}) and of rank one less
than that of $M$ (\ref{puncturing}). Hence
$\pi_1(M^*)=F_1\approx {\Bbb Z}$. So $M^*$ has still
finite-connectivity, and we may apply once more \Kerekjarto's
end lemma to produce a new surface, $M^{2*}$, which punctured
once gives back $M^*$.
---If $M^{2*}$ is compact, then by the classification it is
$N_g=S^2_{gc}$, for some $g\ge 1$. So puncturing
$M^{2*}_{*}=S^2_{*,gc}$ and since this leads back to $M^*$
with $\pi_1=F_1$, it follows from (\ref{puncturing}) that
$g=1$. Thus $M^{2*}=N_1={\Bbb R}P^2$. q.e.d.
---When $M^{2*}$ is open, then its group is free
(\ref{freeness-for-open-surfaces:prop}) and is now of rank
zero (\ref{puncturing}), hence it is trivial. So $M^{2*}$ is
simply-connected, hence orientable, violating the
non-orientability of $M$. This completes the proof.
\end{proof}
\fi
The ultimate generality is to
relax the metric proviso:
\begin{theorem}\label{intransitivity:non-orient_rank_2}
A non-orientable surface with $\pi_1=F_2$ is intransitive.
\end{theorem}
\begin{proof} The idea is
to reduce to the metric case via an appropriate exhaustion. If
$L$ is a dense leaf then $L$ is either ${\Bbb R}$ or the
long-ray ${\Bbb L}_+$ (cf. (\ref{separability}) below). Up to
deleting the long-side of $L$ (which does not affect the
assumption made on our surface $M$ by
(\ref{deleting:a_closed_long_ray})) we may assume that $L$ is
the real-line. Choose now a $\pi_1$-calibrated exhaustion
(\ref{calibrated-exhaustions:lemma})
$M=\bigcup_{\alpha<\omega_1} M_{\alpha}$ by Lindel\"of
subregions with $\pi_1(M_{\alpha})\approx\pi_1(M)=F_2$. Since
$L$ is Lindel\"of, there is $\beta<\omega_1$ such that
$M_{\beta}\supset L$; and $L$ being dense in $M$ it is a
fortiori so in $M_{\beta}$. Since $M$ is non-orientable, it
contains a one-sided Jordan curve $J$
(\ref{orientability-in-terms-of-Jordan-curves}) (whose tubular
neighbourhood $T$ is a M\"obius band). Since $T\supset J$ is
Lindel\"of we may assume that $M_{\beta}\supset T$ as well,
violating the metric case
(\ref{rank_two_non-orientable:intransitive}) of the theorem.
\end{proof}
This applies to the Moore surface $M$ with two cross-caps,
denoted $M_{2c}$, as well as to $M_{*,c}$ the Moore surface
punctured once and cross-capped once (apply
(\ref{cross-capping}) and (\ref{puncturing})).
The next section studies the sharpness of those results, while
giving some sporadic extensions to groups of rank 3.
\iffalse It is easy to check that the above results are sharp
by modifying the Dubois-Violette example. More precisely one
can inflate the punctured point of the thorn-singularity to a
punctured disc and then cross-cap the border (cf.
Figure~\ref{cross-cap:fig} below). (Note that a tripod
singularity is generated, which has a saddle connection.) Thus
we see that the following non-orientable surfaces with
$\pi_1=F_3$ are transitive: $S^2_{3*,1c}$ (3 punctures, 1
cross-cap), $S^2_{2*,2c}$ (2 punctures, 2 cross-caps),
$S^2_{1*,3c}$ (1 punctures, 3 cross-caps).
Intuitively, a cross-cap has exactly the same dynamical effect
as a puncture.
Doing more punctures or cross-caps only improves the
transitivity issue (just puncture outside a dense leaf, which
cannot fill the whole manifold).
\begin{figure}[h]
\centering
\epsfig{figure=crosscap.eps,width=82mm}
\caption{\label{cross-cap:fig}
Non-orientable avatars of Dubois-Violette}
\vskip-5pt\penalty0
\end{figure}
\fi
\subsection{The monolith of finitely-connected metric surfaces}
This section aims to classify
those
finitely-connected metric surfaces which are transitive (and
those which are not). In the subsequent section we deduce
non-metrical transfers of intransitivity.
In view of \Kerekjarto{ }(\ref{Kerkjarto:end}) any
finitely-connected metric surface is homeomorphic to a
finitely-punctured closed surface. Hence by the M\"obius
{\it et al.} classification
(\ref{Moebius-Klein-classification}) any such surface derives
from the sphere $S:=S^2$ through iteration of the three
operations (1) handle surgery, (2) cross-capping (3)
puncturing. Thus we can tabulate a ``monolith'' for all such
surfaces (Figure~\ref{monolith:fig} below), where
right-arrows are {\it puncturing}
(denoted by starred subscripts ``$_{*}$'', e.g. $S_*={\Bbb
R}^2$), up double-arrows are {\it handle attachments}
($\Sigma_g$ denoting the orientable closed surface of genus
$g$), and left-squig-arrows are {\it cross-caps} (denoted by
subscripts ``$_{c}$''). Boldface fonts denote the rank of the
(fundamental) group when it is free. (Given a rank there are
only finitely many
metric surfaces with the prescribed group.) The exotic arrows
(not fitting with the hexagonal lattice) arise from the
well-known relations in the monoid of closed surfaces
(under connected sum) inherent to the classification theorem
(\ref{Moebius-Klein-classification}) in terms of $\chi$, and
the orientability character. (For instance attaching a handle
to a non-orientable surface amounts to 2 cross-caps, both
decreasing $\chi$ by 2 units.)
\begin{figure}[h]
\vskip-35pt\penalty0 \centering
\epsfig{figure=monolit.eps,width=132mm}
\vskip-65pt\penalty0
\caption{\label{monolith:fig}
The monolith of finitely-connected metric surfaces}
\end{figure}
{\it Squares} indicate those surfaces which cannot be foliated
(Euler-Poincar\'e obstruction $\chi\neq 0$). {\it Hexagons}
show surfaces which are foliated-intransitive in view of
previously listed obstructions (\ref{Poinc-Bendixson_many}),
(\ref{infinite-cyclic-group}),
(\ref{dichotomic-free-of-rank-2:prop}),
(\ref{rank_two_non-orientable:intransitive}) and
(\ref{intransitivity:new-obstruction}) below. {\it Stars} show
surfaces which are foliated-transitive
(as discussed below).
\def\wood{woodpecker}
\begin{lemma} If a surface is transitive, then so is its punctured
version (just puncture outside a dense leaf).
\end{lemma}
Thus we need only to establish the transitivity of ``minimal''
models with respect to puncturing. For instance $\Sigma_1$ the
torus is transitive (Kronecker foliation) and this propagates
right-down (on Figure~\ref{monolith:fig}).
\subsection{Transitive examples via surgery (Peixoto, Blohin)}
To construct transitive foliations, we can use a surgical
device (due e.g., to Peixoto 1962 \cite{Peixoto_1962}, Blohin
1972 \cite{Blohin_1972})
\begin{lemma}\label{Peixoto-Blohin:lemma} The following surfaces are transitively foliated:
{\rm (1)} $\Sigma_{g,*}$ the once punctured orientable surface
of genus $g\ge 1$;
{\rm (2)} $S_{*,gc}$ the once punctured
sphere with $g$ cross-caps $g\ge 3$.
\end{lemma}
\begin{proof} Start with a (Kronecker) irrational
foliation of the torus $\torus$. Pick two foliated boxes and
apply the \wood-surgery (Fig.\,\ref{wood:fig}.a). Connecting
by a handle the two contours, and deleting the arc (saddle
connection) shows that $\Sigma_{2,*}$ is transitive (the arc
deletion amounts to a single puncturing as the handle is
thought of as infinitesimal so that the two depicted arcs are
in reality just one). For higher genera,
consider an alignment of such flow-boxes (cf.
Fig.\,\ref{wood:fig}.b), and delete the thick arc (3 pieces,
but connected!) proving (1). Regarding (2) we cross-cap the
contours (cf. Fig.\,\ref{wood:fig}.c) and delete the thick
arc. Since the torus with one cross-cap is $\approx$ $S_{3c}$
this proves (2).
\begin{figure}[h]
\centering
\epsfig{figure=wood.eps,width=122mm}
\caption{\label{wood:fig}
The \wood-surgery of Peixoto-Blohin}
\vskip-5pt\penalty0
\end{figure}
\end{proof}
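The identification ``torus with one cross-cap $\approx S_{3c}$'' used in step (2) is just $\chi$-bookkeeping plus the classification (both surfaces being non-orientable); a quick check:

```python
# Cross-capping drops chi by 1; the sphere with g cross-caps has chi = 2 - g.
chi_torus = 0
chi_torus_one_crosscap = chi_torus - 1
chi_sphere_3_crosscaps = 2 - 3
# Equal chi and both non-orientable => homeomorphic, by the classification.
assert chi_torus_one_crosscap == chi_sphere_3_crosscaps == -1
```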
Doing the same surgery in the Dubois-Violette foliation
(Fig.\,\ref{wood:fig}.d), shows:
\begin{lemma} The sphere $S_{n*,k c }$ with $n$ punctures and
$k$ cross-caps is transitive for $n= 4$ and $0\le k\le 4$.
\end{lemma}
Together with (\ref{Peixoto-Blohin:lemma}) (and keeping a view
over the monolithic Figure~\ref{monolith:fig}), this gives a
complete knowledge of which $S_{n*,k c }$ are transitive,
except when $(n,k)$ takes the values $(3,2)$, $(2,2)$,
$(3,1)$.
The first case $(3,2)$ is transitive by gluing
Fig.\,\ref{wood:fig}.e with Dubois-Violette's disc
(Figure~\ref{Dubois:fig}, Rosenberg's version). This piece of
the puzzle suffices, in view of the combinatorics
of Figure~\ref{monolith:fig}, to establish the:
\begin{prop}\label{rank-4-or-more-is-volatile-transitive} An open metric surface of finite-connectivity
(=rank of the $\pi_1$) $\ge 4$ is transitive.
\end{prop}
\subsection{Sporadic obstruction in rank $3$}
The last case $(n,k)=(3,1)$ (of the previous section) is
intransitive by the following (using again Riemann's branched
coverings conjointly with the Poincar\'e-\Kerekjarto-Hopf
index formula):
\begin{lemma}\label{intransitivity:new-obstruction}
The thrice-punctured projective plane $S_{3*,c}$ is
foliated-intransitive.
\end{lemma}
\begin{proof} If the foliation is orientable, then we are
reduced to the flow-intransitivity of the projective plane
$\proj$ (which boils down to that of $S^2$ prompted by
Poincar\'e-Bendixson). Otherwise we construct the
double cover $\Si\to M:=S_{3*,c}$ rendering the foliation
oriented (\ref{orienting:2-fold-covering}). Compactify this to
a branched covering $\Si^{*}\to M^*=\proj$ via Riemann's trick
(\ref{Riemann:branched-cover}). By Riemann-Hurwitz
$\chi(\Si^*)=2\chi(\proj)-\deg(R)$. As there are $3$
punctures, $0\le \deg(R) \le 3$. Hence $-1\le \chi(\Si^*) \le
2$. If $\chi(\Si^*)=2$, then we have $\sphere$ or $\proj$ both
precluded by Poincar\'e-Bendixson. If $\chi(\Si^*)=0$, then we
have $\torus$ or $\Klein$. The former is excluded since the
sheet exchange involution must be orientation reversing hence
cannot fix isolated points (Klein-Weichold argument already
used in (\ref{Klein-Weichold})). The Klein option $\Klein$ is
precluded by Markley's flow-intransitivity of the Klein bottle
(\ref{Markley}).
Finally when $\chi(\Si^*)=-1$, we have $\deg(R)=3$.
This means by construction that all three punctures are
non-orientably foliated. Thus each of them carries a
semi-integral index (one lying in $\frac{1}{2}+{\Bbb Z}$). By
the index formula these indices sum up to $\chi(\proj)=1$,
violating arithmetic modulo one:
$\frac{1}{2}+\frac{1}{2}+\frac{1}{2}=\frac{3}{2}\equiv
\frac{1}{2}\not\equiv 0\equiv 1 \pmod 1$.
\end{proof}
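The closing arithmetic can be double-checked with exact rationals (a sanity check): a semi-integral index lies in $\frac12+{\Bbb Z}$, so three of them sum again into $\frac12+{\Bbb Z}$, never hitting the integer $\chi({\Bbb R}P^2)=1$.

```python
from fractions import Fraction

half = Fraction(1, 2)
# Three semi-integral indices: each of the form 1/2 + k, k an integer.
# Sample a window of integers; every sum is 3/2 + (integer).
sums = {3 * half + k1 + k2 + k3
        for k1 in range(-2, 3) for k2 in range(-2, 3) for k3 in range(-2, 3)}
# Every such sum has denominator 2, hence is never the integer 1.
assert all(s.denominator == 2 for s in sums)
assert Fraction(1) not in sums
```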
At this stage the only remaining bastion of resistance is the
case $S_{2*,2c}$ (twice punctured Klein bottle), of which it
is not completely trivial to decide the transitivity issue.
(We do not know the answer yet.)
This is surely well known, yet we confess that presently we
have failed to present a decent proof.
\iffalse
\begin{proof} We start at $S_{4*}$ with the Dubois-Violette
foliation $\cal D$. We apply the lollypop construction
(Figure~\ref{cross-cap:fig}). This surgery trades one puncture
against one cross-cap plus a puncture (required to delete the
tripod singularity). Thus it really amounts to one cross-cap
in the underlying surface. (One can also argue with $\chi$:
inflating the puncture up to reach the bordered surface is a
homotopy equivalence (no $\chi$-variation), then cross-capping
the border also keeps $\chi$ unchanged and finally the
puncture diminishes $\chi$ by one. Hence the lollypop surgery
acts on the invariant as $(\chi, n=\# ends,
orientability)\mapsto (\chi-1, the same, non-orientable)$.
In other words this effect a motion left-down in the hexagonal
lattice, transforming $S_{4*}$ into $S_{4*,c}$. This foliated
surgery clearly preserve the transitivity (but destroy the
minimality for a saddle connection is created). Iterating the
lollypop surgery we get the transitivity of $S_{4*,2c}$, and
then $S_{4*,3c}$.
\end{proof}
\fi
\iffalse
\subsection{Non-metric applications via the metric soul}
It is natural to imagine that any (non-metric) surface of
finite-connectivity (i.e. with fundamental group of finite
rank) has a metric {\it soul} capturing the salient invariants
of connectivity, indicatrix (=orientable or not) and the ends
number which in the metric case (of finite-connectivity)
determine completely the topological type. We shall attempt to
define formally the soul below, prove its existence and show
in fact that any calibrated exhaustion is a model for the
soul. A certain analogy with the bag pipe decomposition of
Nyikos (1984),
for $\omega$-bounded surfaces which are a restricted class of
surfaces of finite-connectivity (yet in some sense those of
the purest type).
As a foliated application, we see that intransitivity
transcends from the metrical soul to the whole manifold.
More precisely, given a finitely-connected (non-metric)
surface $M$, there is by (\ref{calibrated-exhaustions:lemma})
a calibrated exhaustion by Lindel\"of subregions $M_{\alpha}$
such that $\pi_1(M_{\alpha})\to \pi_1(M)$ is isomorphic for
all $\alpha$.
Further the typical example of the Moorization of a compact
bordered surface (cf. \cite{GabGa_2011} for a geometrical
definition) suggests that the $M_{\alpha}$ can be chosen to
belong to the same topological type.
So for instance the Moorization of the bordered surface given
by the sphere with 3 holes and 1 cross-cap (i.e. 3-holed
$\proj$) is intransitive (via
(\ref{intransitivity:new-obstruction})).
Maybe we can arrange more general statements, if we can polish
the idea of the (metrical) soul. We know that the rank of the
fundamental group determine the $M_{\alpha}$ up to finite
ambiguity, and if we specify the number of ends and the
orientability character then the metric surface is completely
determined (\ref{Kerkjarto:end}).
\begin{lemma}\label{soul} Given a finitely-connected (non-metric) surface
$M$, there is a calibrated exhaustion $M_\alpha$ by Lindel\"of
subregions such that each $M_\alpha$ have the same number of
ends and the same orientation character as $M$. The
topological type of $M_\alpha$ which is uniquely defined is
called the soul of $M$.
\end{lemma}
\begin{proof} By (\ref{calibrated-exhaustions:lemma}) we know
already that there is a $\pi_1$-calibrated exhaustion. The
assertion regarding orientability is clear since if $M$ is
orientable then so are all its subregions, whereas if it is
not then there is a one sided Jordan curve (with tubular
\nbhd{ }a M\"obius band) which will be absorbed by some member
of the exhaustion, and we just need to initialize the
indexation from that non-orientable stage. Hence the only
non-trivial assertion concerns the number ends; and this
requires some work. Recall that the ends number is
the maximal number of non-relatively compact residual
components to a compactum. So given a compactum $K$, choose
$M_{\alpha}$ containing $K$. Since $M_{\alpha}$ has
finite-connectivity (the rank of the $\pi_1$) it has a finite
number of ends $\varepsilon_{\alpha}$. Now if $\varepsilon$
the ends cardinality of $M$ is different from
$\varepsilon_{\alpha}$ then intuitively there will be further
ramifications or collapses which will cause fluctuation of the
$\pi_1$, violating the calibration assumption.
\end{proof}
Then we have as foliated application:
\begin{prop}\label{soul:intrans}
If the soul of a finitely-connected (non-metric) surface is
intransitive, then so is the whole surface.
\end{prop}
\begin{proof} The argument is similar to
(\ref{intransitivity:non-orient_rank_2}). If $L$ is a dense
leaf then $L$ is either ${\Bbb R}$ or the long-ray ${\Bbb
L}_+$ (cf. (\ref{separability}) below). Up to deleting the
long-side of $L$ we may assume that $L$ is the real-line,
since the topology of the soul (hence its transitivity) is
unaffected under this excision. Indeed the soul is determined
by the connectivity, the orientability character and the
number of ends of $M$ which are unchanged under a closed
long-ray slitting (\ref{deleting:a_closed_long_ray}).
Choose now a $\pi_1$-calibrated exhaustion
(\ref{calibrated-exhaustions:lemma})
$M=\bigcup_{\alpha<\omega_1} M_{\alpha}$ by Lindel\"of
subregions with $\pi_1(M_{\alpha})\approx\pi_1(M)=F_r$. Since
$L$ is Lindel\"of, there is $\beta<\omega_1$ such that
$M_{\beta}\supset L$; and $L$ being dense in $M$ it is a
fortiori so in $M_{\beta}$ (violating the intransitivity of
the soul).
\end{proof}
Applying (\ref{intransitivity:new-obstruction}) we find:
\begin{cor}
(Non-metric) surfaces with $\pi_1=F_3$ and $3$ ends are
intransitive.
\end{cor}
\fi
\subsection{Intransitivity transfer from the metric soul}
As seen in (\ref{soul:uniqueness}) it is legitimate to
think of any (non-metric) surface of finite-connectivity (i.e.
with fundamental group of finite rank) as having a metric
{\it soul} capturing the salient invariants $(\chi=1-b_1,
\varepsilon, a)$ of connectivity, the ends number and
indicatrix (=orientable or not) which in the metric case
constitutes a complete system of topological invariants
(\ref{Kerkjarto:end}).
A foliated application of this soul-method
(very akin to Nyikos' bagpipes) is the transfer of
intransitivity from the metrical soul to the whole manifold:
\begin{prop}\label{soul:intrans}
If the soul of a finitely-connected (non-metric) surface is
intransitive, then so is the whole surface.
\end{prop}
\begin{proof} The argument is similar to
(\ref{intransitivity:non-orient_rank_2}). Assume $M$
transitive with dense leaf $L$. We know that $L$ is either
${\Bbb R}$ or the long-ray ${\Bbb L}_+$ (cf.
(\ref{separability}) below). Deleting from $M$ the long-side
of $L$ does not
change the
invariants $(\chi,\varepsilon, a)$
by virtue of (\ref{deleting:a_closed_long_ray}), hence
keeps invariant the soul type by (\ref{soul:uniqueness}).
After this long slit, we have a new leaf $L\approx {\Bbb R}$
which is still dense. As $L$ is Lindel\"of, it is contained in
a Lindel\"of subregion $U\supset L$ (random amalgam of charts
covering $L$). By the kernel killing procedure
(\ref{killing:kernel}) we can enlarge $U$ into a soul
$S\supset U$ so that $\pi_1(S)\to \pi_1(M)$ is isomorphic. $L$
being dense in $M$, it is a fortiori
in
$S$, violating the soul intransitivity.
\end{proof}
Applying (\ref{intransitivity:new-obstruction}) we find:
\begin{cor}
(Non-metric) surfaces with $\pi_1=F_3$ and $3$ ends are
intransitive.
\end{cor}
This applies for instance to the Moorization (cf. e.g.
\cite{GabGa_2011}) of the bordered surface given by the sphere
with 3 holes and 1 cross-cap (i.e. 3-holed $\proj$). Notice
that the full theory of the soul is not truly required for
such simple-minded Moore type surfaces whose geometric
structure is sufficiently explicit so as to replace the soul
by a calibrated exhaustion by subregions having the same
topological type (just add successively the thorns).
\subsection{Biminimal foliations (bidirectional denseness)}
In this section we address a somewhat specialized question,
involving
methods of Bendixson,
\Kerekjarto{ }and Mather.
Albeit Dubois-Violette's foliation of $S^2_{4*}$ (4 punctures
in $S^2$) is minimal, the 4 leaves converging to the punctures
(separatrices) fail to be dense in one direction.
In contrast, all leaves of the Kronecker irrational foliation
of the torus are bidirectionally dense.
\begin{defn}
{\rm A
$1$-foliation is {\it biminimal} if all leaves are
bidirectionally dense, i.e. both semi-leaves emanating from
any point are dense.
}
\end{defn}
By the sequential-compactness argument
(\ref{long-semi-leaf}) all leaves of a biminimal foliation are
real-lines ${\Bbb R}$ (provided ambient dimension $\ge 2$).
We may wonder if $S^2_{4*}$ (four-punctured sphere) is
biminimal. The negative answer is
supplied by the following
Bendixson alternative:
\begin{lemma} In a foliated punctured plane ${\Bbb
R}^2_*$, there is either a circle leaf enclosing the puncture
or there is a leaf converging to the puncture. Moreover the
two alternatives
are mutually exclusive if the first option occurs countably
many times, with a nested collection of circle leaves
shrinking to the puncture.
\end{lemma}
\begin{proof} Mather 1982 \cite[p.\,246, \S 7]{Mather_1982}
refers to Bendixson's original paper or to \Kerekjarto{ }1923
\cite[p.\,256]{Kerekjarto_1923}. Here is a brief outline of
the argument as modernized by Birkhoff, Nemytskii-Stepanov,
Reeb, etc.
The classical Poincar\'e-Bendixson argument shows that
dichotomy implies properness in the flow case, to which we may
reduce as $\pi_1={\Bbb Z}$ by passing to the double cover
(\ref{orienting:2-fold-covering}). Then
a standard Zorn
lemma argument
creates a compact leaf, provided there is a leaf with compact
semi-leaf closure. The compact circle leaf encloses the
puncture, since otherwise it bounds a disc (recall that a
proper power of the generator of $\pi_1$ cannot be realised by
a Jordan curve).
\end{proof}
\begin{cor}\label{biminimal-impeded-by-puncture} Any punctured surface (metric or not)
lacks a biminimal foliation.
\end{cor}
Since any finitely-connected open metric surface possesses an
end
neighbourhood
homeomorphic to a punctured plane
(\ref{Kerkjarto:end}), we have the special case:
\begin{cor} A finitely-connected metric surface
cannot be biminimally foliated, except the torus.
\end{cor}
\begin{proof} If compact, the surface has $\chi=0$
(\ref{Euler:classical-obstruction}), so is the torus or the
Klein bottle $\Klein$ (\ref{Moebius-Klein-classification}).
The latter option is precluded by Kneser 1924 (\ref{Kneser}).
In the open case, by \Kerekjarto's end theorem
(\ref{Kerkjarto:end}), the surface is a punctured one
and (\ref{biminimal-impeded-by-puncture}) concludes.
\end{proof}
These results fail to
tell whether the infinite connected sum of tori has a
(bi)minimal foliation.
Also which metric surfaces are biminimally foliated? Besides,
does the last corollary extend to non-metric surfaces? It
seems that the Lindel\"of
exhaustion trick does not work well. Note that the doubled
Pr\"ufer surface $2P$ is not biminimal (indeed not even
minimally foliated by (\ref{Baillif:nano-black-holes})).
Baillif's example in BGG2 \cite{BGG2} of a minimally foliated
non-metric surface foliated by short leaves is
not biminimal, raising the:
\begin{ques} Can we
find a non-metric biminimally foliated surface?
\end{ques}
\section{Gravitational
effects (quantum radiation at the microscopic scale)}
\label{Gravitation:sec}
\subsection{Long semi-leaves are tame}
Given a point in a manifold foliated by curves, after
fixing one of the two possible directions there is a unique
motion starting from the point prescribed by the foliation.
Such a ``trajectory'', referred to as a {\it semi-leaf}, is a
bordered $1$-manifold (under the leaf-topology).
\begin{prop}\label{long-semi-leaf}
Long semi-leaves in
$1$-foliated manifolds are properly embedded.
\end{prop}
\begin{proof} By
classification of bordered 1-manifolds (in $3$ species:
$[0,1]$,
$[0,\infty)$ and the {\it closed long ray} ${\Bbb L}_{\ge
0}=[0, \omega_1)$), our non-metric semi-leaf is the closed
long ray.
The latter being
sequentially-compact, we conclude with (\ref{sekt}) below.
\end{proof}
\begin{lemma}\label{sekt} (i) Let
$f\colon X \to Y$ be a continuous map, where $X$ is
sequentially-compact (sekt for short), and $Y$ Hausdorff and
first-countable. Then the mapping $f$ is closed.
(ii) In particular $f(X)$ is closed and if furthermore $f$ is
injective, then $f\colon X \to f(X)$ is a homeomorphism onto
its image.
\end{lemma}
\begin{proof} First recall that a closed subset
of a space is sequentially-closed, and conversely if the space
is first-countable.
Thus for (i), it is enough to show that $f(F)$ is
sequentially-closed, whenever $F$ is closed. So let $y_n\in
f(F)$ be a sequence converging to $y\in Y$. Choose $x_n\in F$
such that $f(x_n)=y_n$. Since $X$ is sekt, we may extract a
subsequence $x_{n_k}$ converging to some $x \in F$. By
continuity $f(x_{n_k})\to f(x)$. Since $Y$ is Hausdorff, it
follows $y=f(x)\in f(F)$. q.e.d.
(ii)
The continuity of the inverse follows from $f$ being a closed
mapping.
\end{proof}
\subsection{Transitivity implies separability}
When a manifold (indeed a space) has a transitive flow (one
dense orbit) then the phase-space is separable (rational times
of a dense orbit).
If a manifold is transitively foliated (one dense leaf), then
as the latter can be long it is not obvious that the ambient
manifold has to be separable. It is even trivially false as
exemplified by a long 1-manifold trivially foliated by itself.
Ruling out this trivial exception we have:
\begin{prop}\label{separability}
A dense leaf of a $1$-foliation on a manifold $M^n$ of
dimensionality $n\ge 2$ is homeomorphic (w.r.t. the leaf
topology) to the real-line ${\Bbb R}$ or the long ray ${\Bbb
L}_+$. Furthermore $M^n$ is separable.
\end{prop}
\begin{proof} By
classification of 1-manifolds the leaf belongs to one of the
following type: circle ${\Bbb S}^1$, real-line ${\Bbb R}$,
long ray ${\Bbb L}_+$ and long line ${\Bbb L}$. The two
extreme items in this list are sequentially-compact,
thus always embedded and closed as point-sets by (\ref{sekt}).
Thus by (the elementary case of) invariance of
dimension (Brouwer {\it et al.} in general, yet not required
presently), the dense leaf $L$ cannot be of those two types as
$n\ge 2$, whence the first assertion. Regarding the
separability clause, it is plain when $L\approx {\Bbb R}$.
Assuming $L\approx {\Bbb L}_{+}$, we may at any point $p$ of
$L$ split the leaf in a short $L_{\le p}$ and a long $L_{\ge
p}$ semi-leaf, resp. homeomorphic to ${\Bbb R}_{\ge 0}$ and
${\Bbb L}_{\ge 0}$ (closed long ray). The latter being
sequentially-compact, it fails to be dense. Thus only the
short
semi-leaf
can contribute to denseness,
and separability of $M$ follows.
\end{proof}
\begin{rem} {\rm For foliations of higher dimensionality,
the above (\ref{separability}) fails drastically. An example
of Martin Kneser shows how to foliate with a {\it unique}
surface-leaf a non-metric 3-manifold of the Pr\"ufer type (cf.
for a picture, \cite{BGG1}, arXiv version). Kneser's example
can easily be
`stretched' so as to
render it non-separable.}
\end{rem}
\begin{cor}\label{separability:cor} An
$\omega$-bounded surface is intransitive,
except if it is the torus.
\end{cor}
\begin{proof} If transitively foliated, the surface is
separable (\ref{separability}). Being also $\omega$-bounded it
is compact. Since it is foliated, $\chi=0$
(\ref{Euler:classical-obstruction}). So by classification
(\ref{Moebius-Klein-classification}), it is either the torus
or the Klein bottle, which is intransitive
(\ref{Klein-foliated-intransitive}).
\end{proof}
\subsection{Miniature black holes (Pr\"ufer, R.\,L. Moore, Baillif)}
This and the subsequent section exemplify a
geometric obstruction to foliated-transitivity lying beyond
the algebraic obstruction encoded in the fundamental group or
better in the soul (\ref{soul:intrans}), as well as beyond the
point-set obstruction of non-separability
(\ref{separability}). Thus the present obstruction (due to M.
Baillif) is the first (within the
scope of this paper) that is truly non-metrical, yet still of a
geometric nature prompted by the
granularity of particular non-metric manifolds, namely those
of the Pr\"ufer type (including the Moore and
Calabi-Rosenlicht surfaces, plus of course many other
specimens having a similar morphology). Using the
phase-transition metaphor, this amounts to a volatile-gaseous
(=transitive) configuration which embedded into the
non-metrical fridge becomes frozen-intransitive.
We already know that Cantor's long ray is responsible for some
black hole phenomenology at the
macroscopic scale \cite{BGG1}. For instance the long cylinder
$S^1\times {\Bbb L}_{+}$ forces each
foliated structure to be either ultimately vertical (foliated
by straight long rays) or asymptotically horizontal (with
slices \hbox{$S^1\times \{\alpha\}$} occurring as leaf for a
{\it closed unbounded} ({\it club}) subset of $\alpha$'s
running in the long-ray factor).
So one can essentially imagine
a super-massive black hole
hiddenly sitting at the
long end of the
cylinder and dictating the destiny of any foliated structure,
thought of as an {\it ether} ($\approx${\it substrat
physico-chimique} in R. Thom's jargon) evidencing the
gravitational features of the manifold.
We are dealing here
with a purely naked-topological
form of
gravitation without metric (and the allied
Riemann curvature tensor), yet
still reasonably
qualifiable as ``geometric''.
Apart from Cantor's long ray (of dimension 1) the other
charismatic prototype of non-metric manifold (requiring two
dimensions)
is the
Pr\"ufer construction. Especially
we have $P$, the bordered Pr\"ufer surface (constructed by
aggregating rays to an open half-plane, very akin to
projective geometry, esp. the blow-up operation), which---by
folding the contours---produces the Moore surface. This can be
thought of as an upper half-plane with many (infinitesimal)
`teats' hanging down the boundary (horizontal line), compare
Figure~\ref{Train:fig} for a poor depiction.
Clearly something
`exotic' must happen near the `boundary'
(or rather what remains thereof---that is, nothing!),
and
by analogy with Cantor's super-massive black hole scenario, we
now
imagine a `continuous' series of
nano-black holes materialised by the folded contours of the
Pr\"ufer surface $P$, called the {\it thorns} of the
Moorization. (If you prefer, imagine the little black holes
located at each teat's extremity.) The latter effects a
quantum radiation at the microscopic scale as shown by the
following technique of M. Baillif ($\approx 2008-9$):
\begin{theorem}\label{Baillif:nano-black-holes}
In any foliation of the Moore surface, almost all (=all but
countably many exceptions) thorns are semi-leaves. Similarly,
in any foliation of the Calabi-Rosenlicht surface (=doubled
Pr\"ufer surface $2P$) almost all bridges are leaves. (Here
bridges refer to the contours of $\partial P$ viewed in
$2P$.)
\end{theorem}
\begin{proof} Let $P\to M$ be the contour-folding projection
from Pr\"ufer-to-Moore. The boundary $\partial P$ decomposes
as a continuum of real-line contours, whose respective images
are the thorns denoted $T_x$ (homeomorphic to a semi-line
${\Bbb R}_{\ge 0}$) indexed by $x\in{\Bbb R}$. The complement
of all thorns is called the {\it core} (of the Moore surface).
The core $U$ being a cell $\approx {\Bbb R}^2$, hence
separable, we may fix a countable dense set $D\subset U$. Let
${\cal F}_D$ be the
set of leaves of the foliation (denoted $\cal F$) passing
through the points of $D$.
If the thorn $T_x$ is not a semi-leaf of $\cal F$, then there
is a point $y\in T_x$ such that $L_y$ (=the leaf through $y$)
deviates into the core. Then we can find nearby a leaf $L$ in
the collection ${\cal F}_D$ intercepting $T_x$
(Fig.\,\ref{Baillif:fig}.a).
\begin{figure}[h]
\centering
\epsfig{figure=Baillif.eps,width=122mm}
\caption{\label{Baillif:fig}
Interception of thorns by leaves and black thorns sandwiches}
\vskip-5pt\penalty0
\end{figure}
Choosing for each such $x$
an intercepting leaf defines a
map
$$
\varphi \colon \text{ White thorns}:=\{x\in {\Bbb R} : T_x
\text{ is not a semi-leaf}\} \longrightarrow {\cal F}_D .
$$
Now a leaf can intersect at most countably many thorns. This
follows from the squatness of the Moore surface $M$ (i.e. any
continuous map from the long ray to $M$ is eventually
constant) and the Pr\"ufer topology, which shows that any
subset of the Moore surface having at most one point on each
thorn is discrete. Therefore long leaves are precluded and
short ones cannot contain an uncountable discrete subset.
Hence the map $\varphi$ is at most $\omega$-to-1 (denumerable
fibres), whence the countability of the set of ``White
thorns''.
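{\footnotesize In cardinality terms (merely restating the count): ${\cal F}_D$ is at most countable and each fibre of $\varphi$ is at most countable, whence
$$
\#\{\text{White thorns}\} \;=\; \sum_{L\in {\cal F}_D} \#\,\varphi^{-1}(L)
\;\le\; \aleph_0\cdot\aleph_0 \;=\; \aleph_0 .
$$}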
The
argument for the doubled Pr\"ufer is
nearly identical, hence left as an
exercise.
\end{proof}
\subsection{Intransitivity of the thrice-punctured Moore surface}
The Moore surface $M$ itself, being simply-connected
(Seifert-van Kampen applied to the canonic open cover by
individual thorns added to the core), is
intransitive, as deduced either via Schoenflies
(\ref{Alex_separation}) or
Poincar\'e-Bendixson (\ref{Poinc-Bendixson_many}).
After one or two punctures there is a universal obstruction
dictated by the fundamental group
(\ref{infinite-cyclic-group}) resp.
(\ref{dichotomic-free-of-rank-2:prop}). In the Moore case, the
argument can be done by hand:
\medskip
{\footnotesize {\bf Optional easy argument by hand.} One adds
successively
the thorns of $N=M_{n*}$, the Moore surface with $0\le n\le 2$
punctures: i.e., $N=(\text{core } U) \cup \bigcup_{x\in {\Bbb
R}} T_x$ which is covered by the $M_x:=U\cup T_x$. Assuming
$N$ transitive with dense leaf $L$, we know that $L$ is short
(squatness of Moore). Thus $L$ is Lindel\"of and therefore
contained in $M_D:=\bigcup_{x\in D} M_x$ a countable union of
$M_x$ ($D\subset {\Bbb R}$ denumerable). Using either Morton
Brown's monotone union theorem (or the classification of
metric $1$-connected surfaces (\ref{uniformization}) or even
better a home-made homeomorphism) it is easy to see that $M_D$
is homeomorphic to the core $U$. Since $U$ contains $L$
densely, this violates the intransitivity of the core.
}
\medskip
Doing $n\ge 3$ punctures in Moore there is no
universal algebraic obstruction (recall Dubois-Violette), but
a gravitational one (recall Baillif):
\begin{prop}\label{Baillif:adapted_by_Gabard}
The Moore surface with $n\ge 3$ (hence all $n$) punctures
$M_{n*}$ is foliated-intransitive.
\end{prop}
\begin{proof} (with some little bluff, but hopefully
convincing enough!) Truncate the punctured surface $N:=M_{n*}$
(foliated by ${\cal F}$) by looking at the subregion $U$ lying
below a line above which all punctures are lying, and which
therefore is homeomorphic to the Moore surface $M$. By
(\ref{Baillif:nano-black-holes}) applied to $(U,{\cal F}_U)$
almost all thorns are ``black'', i.e. semi-leaves of the
foliation. Fix any black thorn $T_x$, and about its extremity
$\partial T_x$ a foliated chart $B$. Assuming that $L$ is a
dense leaf of ${\cal F}$, it will certainly appear on both
sides of $T_x$ regarded in the foliated-box $B$ as 2 plaques
$P_1, P_2$ separated by the plaque $P$ of $B$ containing
$T_x$. Since the set of white thorns is countable, its
complementary set of black thorns is dense (Baire). Thus we
can squeeze in sandwich the 2 plaques $P_1, P_2$ by 2 new
black thorns $T_{x'}$ and $T_{x''}$
(Fig.\,\ref{Baillif:fig}.b) such that within the box $B$,
$P_1$ separates $T_{x'}$ from $T_{x}$ and $P_2$ separates
$T_{x}$ from $T_{x''}$. As the two $P_i$ belong to the same
leaf they must be somehow connected.
Agreeing that the box is vertically foliated, both plaques
$P_i$ will be either connected through their bottoms
(down-down), in a mixed fashion (down-up) or via their tops
(up-up).
Since $T_x$ goes down to infinity, a down-down connection is
precluded, except of course if the leaf re-traverses the box
$B$, but then a foliated disc is created, by doubling the first
return to the cross-section of $B$
(Fig.\,\ref{Baillif:fig}.c). We use here Schoenflies
(\ref{Schoenflies-Baer}) (or a simple form thereof) in the
1-connected Moore surface $U\approx M$ (taking advantage of
the canonical open cover
by thorns aggregation). The
same argument excludes the possibility of a down-up
connection. Thus we have an ``up-up'' connection and then
the sides of the leaf $L$ are squeezed between the three thorns
$T_{x'},T_{x}, T_{x''}$ and this without the possibility of
coming up again (else a foliated disc is created by the same
Schoenflies mechanism). This triple squeezing clearly impedes
the denseness of $L$, proving our assertion.
\end{proof}
\subsection{Experimental data:
prescribing topology and foliated dynamics}
Now we try to draw a
picture showing
the
interplay between the topology and the possible foliated
dynamics for surfaces. This involves a Venn diagram with the
following topological versus
foliated attributes:
(1)
Combinatorial topology: simply-connected $\Rightarrow$
dichotomic;
(2) Point-set topology: metric $\Rightarrow$ separable;
(3)
Foliated dynamics:
minimal $\Rightarrow$ transitive $\Rightarrow$ foliated.
Recall also some
`transverse' implications: transitive implies separable
(\ref{separability}), and the mutual exclusion of 1-connected
and transitive (\ref{Alex_separation}) or
(\ref{Poinc-Bendixson_many}). By way of examples the following
diagram (Figure~\ref{Venn}) shows that this is a
reasonably exhaustive list of obstructions. A closer look
aids in guessing new empirical obstructions or, more
neutrally, in asking the right questions. For instance
it looks rather hard to exhibit a separable, 1-connected,
non-metric surface lacking a foliation.
Recall however that it is possible (yet not very easy) to
locate separable non-metric surfaces lacking foliations (cf.
the mixed Pr\"ufer-Moore surfaces in BGG2 \cite{BGG2}).
Let us
describe the various regions of the diagram (the following
enumeration refers to the labels (1), (2), (3), etc. fixed on
Figure~\ref{Venn} in a spiral-like fashion):
(1) and (2) contain respectively only the plane and the
sphere, which are the only simply-connected metric surfaces
(\ref{uniformization}).
(3) is a region where it is difficult to exhibit a specimen.
(For a vague candidate cf. Section~\ref{long-sun}.)
\begin{figure}[h]
\hskip-25pt \epsfig{figure=Venn.eps,width=142mm}
\vskip-15pt\penalty0
\caption{\label{Venn}
Some foliated geography in the non-metric realm}
\vskip-5pt\penalty0
\end{figure}
(4) contains the long-glass $\Lambda_{0,1}$ which is the
long-cylinder ${\Bbb S}^1\times{\Bbb L}_{\ge 0}$ capped-off by
a 2-disc. This surface cannot be foliated by BGG1 \cite{BGG1}.
(5) has a plethora of examples including the long-plane ${\Bbb
L}^2$, the long-quadrant ${\Bbb L}^2_+$ and the original
1-connected Pr\"ufer surface $P_{\rm collar}$.
(6) contains the Moore surface $M$ which has a foliation
induced by the vertical foliation of the bordered Pr\"ufer
surface $P$. The latter has semi-saddle singularities
disappearing during the folding process $P\to M$. Also in (6)
we have the more exotic Maungakiekie surface, which is the
result of a long Nyikos expansion effected to an open 2-cell.
(7) contains naively speaking $2P$ the doubled Pr\"ufer (alias
Calabi-Rosenlicht manifold) which has a horizontal foliation.
This does not preclude the possibility that $2P$ endowed with
a more exotic foliation is transitive. Thus here we ignore
the question of the sharp positioning of the manifold $2P$,
within the diagram. However, in view of (\ref{Dubois}) it
seems that $2P$ is transitive, but not minimal by
(\ref{Baillif:nano-black-holes}), so $2P$ belongs in reality
to (8). Yet, class (7) contains the Moorized annulus $M(A)$
with the radial foliation. Since $M(A)$ has a $\pi_1$
isomorphic to ${\Bbb Z}$, it is intransitive
by (\ref{infinite-cyclic-group}). For the same reason this
class also contains the Moore surface punctured once $M_{*}$.
The twice-punctured Moore surface $M_{2*}$ is also here, being
dichotomic with $\pi_1=F_2$, hence intransitive by
(\ref{dichotomic-free-of-rank-2:prop}).
(8) is non-empty by Pr\"uferising Dubois-Violette's example
(\ref{Dubois}).
(9) contains a Nyikos long expansion performed near one of
the 4 punctures of Dubois-Violette (elongate the separatrix).
(10) contains the Dubois-Violette foliation (on ${\Bbb
R}^2_{3*}$) punctured twice on the same leaf to generate
artificially a non-dense leaf. In reality the underlying
manifold ${\Bbb R}^2_{5*}$ is minimally foliated (puncture
twice on different leaves of Dubois-Violette).
(11) contains the punctured plane ${\Bbb R}^2_{*}$ (foliated
e.g. by concentric circles) which is intransitive by
(\ref{infinite-cyclic-group}). Likewise ${\Bbb R}^2_{2*}$ is
intransitive by (\ref{dichotomic-free-of-rank-2:prop}).
(12) is an empty region, because any metric open surface has a
Morse function without critical points
(\ref{Morse-Thom:surfaces}), thus a foliation. So a surface in
(12) has to be compact, and dichotomy allows only the sphere
(by classification (\ref{Moebius-Klein-classification})),
which has already been positioned in a deeper nest of the
diagram.
(13) could contain the Moorization $M(G)$ of a
multiply-connected domain $G=\Sigma_{0,n}$ with at least $n\ge
3$ contours (starting with the ``pant''). A vague idea could
be that the Moorization forces a vertical behavior near the
cytoplasmic expansions present in the Moorization, thus
``shaving'' the ``hairs'' gives a compact subregion
(homeomorphic to $G$) along the boundary of which the
foliation is transverse, and then we are done by the
Euler-Poincar\'e obstruction. But this
hasty intuition is wrong as shown in
Section~\ref{Moorized-disc:sec}. In
particular this argument would imply that the Moorized disc
cannot be foliated,
but (\ref{Moorized-disc:prop}) below shows the contrary. The
same construction (cf. Fig.\,\ref{Moorized-disc}g) also shows
that the $M(\Sigma_{0,n})$ for $n\ge 3$ admit foliations, and
so belongs to region (7), but not to (8) in view of
(\ref{Baillif:adapted_by_Gabard}), the case $n=3$ following
also from (\ref{dichotomic-free-of-rank-2:prop}). So region
(13) looks rather deserted, except if we remind the
construction in BGG2 \cite[Sect.\,4.2]{BGG2} of mixed
Pr\"ufer-Moore surfaces which produces a bunch of separable,
non-simply-connected surfaces lacking foliations. Furthermore
if we accept the operation of {\it full} Nyikosization $N$,
which produces long hairs at all points of the boundary, then
$N(\Sigma_{0,n})$ for $n \ge 2$ would belong to (13), compare
Section~\ref{long-sun}.
(14) has the surfaces $\Lambda_{0,n}$ of genus $0$ with $n\ge
3$ long cylinder-pipes, which lack foliations by BGG1
\cite{BGG1} (super-massive black hole scenario).
(15) admits a plethora of examples with most of them arising
indeed from a non-singular flow (cf. \cite{GabGa_2011} and
recall optionally the theorem of Whitney \cite{Whitney33}
(building over Hausdorff) telling that non-singular 2D-flows
induce foliations). Of our pictured examples the only one
which is not induced by a flow is the ``wormhole'' double
long-plane, i.e. the connected sum ${\Bbb L}^2 \# {\Bbb L}^2$
which is however
foliated by circles.
(16) contains simple examples using variants of the Pr\"ufer
construction. For instance we can Pr\"uferize an annulus and
glue radially opposite boundaries by long bridges (cf.
figure).
(17) The same construction as in (16) with short bridges
yields a specimen.
(18) We lack a serious example.
However with the operation of full Nyikosization we can take
$N(\Sigma_{g,n})$ where the genus $g\ge 1$ and with $n\ge 1$
contours.
(19) Take a Kronecker torus Pr\"uferized along an arc.
By the scenario of nano-black holes
(\ref{Baillif:nano-black-holes}) this surface is not minimal,
giving a sharp positioning.
(20) Puncturing the Kronecker torus and making a long Nyikos
expansion
of one of the two separatrices converging to the puncture gives
a minimal foliation on a non-metric surface (trick due to M.
Baillif, more details in BGG2 \cite{BGG2}).
(21) Take a Kronecker torus. This is the unique compact
specimen as follows from Kneser (\ref{Kneser}), but there is
of course a menagerie of non-compact examples (e.g., Kronecker
torus punctured once).
(22) Take the example of Dubois-Violette on $S^2_{4*}={\Bbb
R}^2_{3*}$.
(23) contains as a fake specimen the Kronecker torus punctured
twice on the same leaf. (In reality the twice punctured torus
also carries a minimal foliation, so it is not a sharp
example.)
(24) Take the torus with the trivial foliation by circles. Of
course this is fake, since the torus really lives in (21). Yet
(24) contains the M\"obius band (=twisted ${\Bbb R}$-bundle
over $S^1$) which is intransitive by
(\ref{infinite-cyclic-group}). After one (or even two)
punctures the M\"obius band
is still intransitive by
(\ref{Klein-Weichold}) resp.
(\ref{intransitivity:new-obstruction}). For a compact example
we have
the Klein
bottle ${\Bbb K}$, which by Kneser (\ref{Kneser}) is not
minimal, and in fact foliated-intransitive
(\ref{Klein-foliated-intransitive}).
\iffalse We would like to show that Klein is intransitive.
This is well-known for flows (Markley, 1968). For foliations
one can cut the Klein bottle along the Kneser circle, which is
not null-homotopic (else it would bound a disc) and we get a
bordered surface of characteristic zero which is either an
annulus or a M\"obius band. Both have $\pi_1={\Bbb Z}$ so
there is an obstruction to a dense leaf by
(\ref{infinite-cyclic-group}). \fi
(25) contains
all closed surfaces except those having already been
positioned, namely the sphere, and the 2 surfaces with Euler
characteristic zero. These split into orientable surfaces of
genus $\ge 2$ and non-orientable surfaces (spheres with $g\ge
1$ cross-caps) for all values of $g$ except $g=2$, which is the
Klein bottle. Since open metric surfaces always foliate (Morse
function argument (\ref{Morse-Thom:surfaces})), this is a
complete
tabulation of the birds in class (25) in view of the
classification (\ref{Moebius-Klein-classification}).
\subsection{Razor principle foiled}\label{Moorized-disc:sec}
Given the Moorized disc $M(D^2)$, which looks like a hairy
disc with hairs emanating transverse to the boundary
(Fig.\,\ref{Moorized-disc}a), one could expect that a foliated
structure has to be compatible with the hairs, and thus
`shaving' the hairs gives an impossible foliation of the
(compact) disc. This would imply that $M(D^2)$ lacks a
foliation. This naive principle is erroneous.
Basically, it is
faulty because the {\it semi-saddle} (the level curves of the
function $xy$ restricted to $y\ge 0$) is not the unique
singularity inducing a regular foliation after Moorization.
Indeed the {\it half-saddle} defined by $x^2-y^2$ (cf.
Fig.\,\ref{Moorized-disc}c) also behaves well under folding.
This suggests how to foliate $M(D)$, compare
Figure~\ref{Moorized-disc}, which is commented upon in the
picture-assisted proof below.
\begin{figure}[h]
\centering
\epsfig{figure=Moore.eps,width=122mm}
\caption{\label{Moorized-disc}
Foliating the Moorized disc}
\vskip-5pt\penalty0
\end{figure}
\begin{prop}\label{Moorized-disc:prop}
The Moorized disc can be foliated.
\end{prop}
\begin{proof} Consider the singular foliation of the disc
$D^2$ depicted on Fig.\,\ref{Moorized-disc}d: this is
everywhere orthogonal to the boundary except at 2 singular
points resembling a `spiderman'. Recall the
interpretation of Pr\"ufer's construction in terms of rays,
and the synthetic description of the (Pr\"ufer)-charts. This
can be thought of as the map
$\sigma$ pictured above (Fig.\,\ref{Moorized-disc}b). It is
easy to
see that if the foliation is vertical then its image by the
Pr\"ufer chart is a semi-saddle, whereas the spiderman
singularity transforms to a half-saddle. Since both types of
singularities disappear during the folding process
(Fig.\,\ref{Moorized-disc}e), we get a genuine foliation on
the Moorization $M(D^2)$ (depicted on
Fig.\,\ref{Moorized-disc}f).
\end{proof}
The same method shows that the Moorization of the
multiply-connected domains (any number of contours) can be
foliated (Fig.\,\ref{Moorized-disc}g).
\subsection{Razor principle for long hairs, supernovas
and science fiction}\label{long-sun}
It is rather hard to find non-metric separable surfaces
lacking foliations. However, in \cite[Sect.\,4.2]{BGG2},
Baillif showed that suitable mixed Pr\"ufer-Moore surfaces
(which are separable) lack foliations. The idea is the
following. Start with $P$ the bordered Pr\"ufer surface. Each
contour (=boundary component) of $P$ can either be folded
(Moorization) or left unaltered. Since Moorization is well
behaved w.r.t. the vertical foliation
(Fig.\,\ref{Moorized-disc}c-e), but not w.r.t. the horizontal
foliation, which develops thorn singularities when folded, a
violent mixture of both processes (folding and
`nothing') produces bordered surfaces whose doubles are
separable, yet without foliations. Being doubled, such surfaces
fail to be 1-connected, and it would be interesting to answer
the:
\iffalse if only countably many collars are required. However
it can be observed:
(1) the precise argument of BGG2 is somewhat involved, and
(2) it uses the double $2 P_{A,B}$ (notation of loc. cit.),
thereby not answering the question of the existence of a
simply-connected example lacking a foliation.
Thus it is natural (at least legitimate) to ask: \fi
\begin{ques}\label{corona-sun:question} Is there a simply-connected non-metric separable surface without
foliations? If yes, then more subjectively, what is the
simplest
such example?
\end{ques}
In the second subjective question, ``simplest'' seems to have
a double fragrance, namely
simplest to construct or
simplest to show its non-foliability. Along the second
interpretation, we have perhaps the following (very
hypothetical) answer, involving another black hole
scenario---this time at the `mesoscopic' scale! {\it Warning:}
the next paragraph is maybe only pure science fiction.
Recall Nyikos' long (cytoplasmic) expansion of a cell (cf.
\cite{BGG2}). \iffalse As explained in BGG2 \cite{BGG2}, we
can think of the plane trivially foliated by lines, and
elongated one leaf to get a surface $N$ which is separable,
$1$-connected and foliated. If we can show a sort of
black-hole behaviour, namely that foliations of this surface
must contain ultimately the long hair (only available as
Gauld's hand-written notes of the parc Bertrand, Geneva 2010).
\fi
Assuming such long expansions can be performed as often as we
please, doing them in all directions of a disc yields a
$1$-connected separable surface with many long hairs emanating
in all directions (naively imagined as orthogonal to the
circumference). Call this manifold the {\it supernova} (with
long hairs and complicated corona). Another more tangible
generating mode for the supernova is to do first a Moorization
(of the disc), and then make Nyikos' expansions to the thorns
(conceived now as independent processes). It is conceivable
that any foliation of the supernova contains all hairs as
semi-leaves, by a variant of (\ref{Baillif:nano-black-holes}).
(I~think that this was verified in David Gauld's hand-written
notes from the parc Bertrand, Geneva 2010, unfortunately
unpublished as yet.) Then one might (via a razor principle)
deduce a foliation of the compact disc (transverse to the
boundary). The supernova would thereby answer positively our
naive question (\ref{corona-sun:question}). Albeit not the
simplest to construct, it is perhaps the one whose lack of a
foliated structure is easiest to show.
\section{Miscellaneous}
This section collects miscellaneous topics not directly
relevant to the main text:
\subsection{Jordan separation}
Maybe first a question related to Jordan separation approached
via the Riemann polarization trick. Recall that we defined a
divisor in a manifold as a
locally-flat hypersurface
which is closed as a point-set (\ref{divisor:def}). Taking for
granted the universality (i.e., non-metrical validity) of the
Riemann trick (\ref{Rieman-polarized-cover}) we deduced that
{\it in a simply-connected manifold any divisor is dividing}
(\ref{Riemann-separation}). The converse does not hold, for
instance the Poincar\'e (homology) sphere is divided by any
divisor, but is not simply-connected. The former assertion
follows from the well-known:
\begin{lemma}\label{intersection-pairing} Let $M$ be a closed (topological) manifold whose first homology
modulo 2, $H_1(M,{\Bbb Z}_2)$, is trivial (equivalently, the
first Betti number $\beta_1$ with mod 2 coefficients is
zero). Assume furthermore that the manifold has a reasonable
``intersection theory'', which is certainly the case whenever
$M$ is smoothable (Weyl 1923, Lefschetz 1930, de Rham 1931,
etc.). Then $M$
is divided by any divisor.
\end{lemma}
\begin{proof} By contradiction, let
$H$ be a non-dividing divisor. Since $H$ is closed as a
point-set in the compact manifold $M$, it is compact. Thus it
carries a
fundamental class mod 2, denoted $[H]$. Choose any point $p\in
H$ and a locally flat chart $U$ with $(U,U\cap
H,p)\approx({\Bbb R}^n,{\Bbb R}^{n-1}\times \{0 \},0)$. Fix a
little arc $a$ transverse to $H$ and meeting it in exactly one
point. Since $M-H$ is connected, choose a path $b$ in $M-H$
joining both extremities of $a$. Thus $a+b=:c$ defines a
1-cycle (mod 2) intersecting $H$ in exactly one point
transversally. Thus the intersection number $[c]\cdot [H]=1$.
This implies that $[c]\neq 0$, and so $\beta_1\neq 0$.
\end{proof}
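{\footnotesize In summary (nothing new, just the pairing displayed): since $b$ avoids $H$, the mod 2 intersection number is
$$
[c]\cdot [H] \;\equiv\; \#(c\cap H) \;=\; \#(a\cap H) \;=\; 1 \pmod 2,
$$
whence $[c]\neq 0$ in $H_1(M,{\Bbb Z}_2)$ and $\beta_1\neq 0$, contradicting the hypothesis.}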
Consequently the Poincar\'e sphere, $M^3$, splits because
$H_1(M^3, {\Bbb Z})=0$ implies $H_1(M^3, {\Bbb Z}_2)=0$ (true
in any space, for a 1-cycle mod 2 can always be oriented). We
do not know if the lemma holds in full generality:
\begin{ques}
Assume the manifold $M$ to have $\beta_1=0$, then is $M$
separated by any divisor? And what about the converse?
\end{ques}
\iffalse Of course this is a somewhat metaphysical question
perhaps out of reach within the usual axiomatic ZFC,
especially if there exist an extraterrestrial manifold without
any divisor, yet non-simply connected. (Can such a crumpled
manifold be constructed as a variant of Rudin-Zenor???) Is
the answer positive at least for metric manifolds? (Of course
in view of the existence of topological Morse functions, any
metric manifold admits at least one and indeed many divisors.)
\fi
\iffalse
\subsection{Transitive foliation}
It is obvious that if a manifold (indeed a space) admits a
transitive flow (one dense orbit) then the phase-space has to
be separable. What about transitive foliation? If a manifold
has a transitive foliation (one dense orbit) then as the
latter can be long it is not obvious that the ambient manifold
has to be separable.
\begin{ques}
Can we construct a non-separable manifold with a transitive
foliation?
\end{ques}
Of course the formulation is quite loose. Trivial answer:
consider the trivial (top-dimensional) foliation of the
manifold by itself. Even if we demand the foliation to be of
codimension-one, then there is the example of Martin Kneser
(as described in BGGI \cite{BGG1}, arXiv version) which can be
``stretched'' so to
render it non-separable. In this example there is a single
surface leaf filling completely a 3-manifold (of the Pr\"ufer
type). So the question becomes more interesting if we demand a
foliation by curves on a surface.
\begin{ques}
Can we construct a non-separable surface with a transitive
foliation (of dimension one)?
\end{ques}
Even more specifically. Let $P$ be the bordered Pr\"ufer
surface, and $2P$ be its double (that is the Calabi-Rosenlicht
manifold). Consider also an elongated version with long
bridges, i.e. $2P'$, where $P'$ is $P$ with a closed collar
added, i.e. $P'=P\cup (\partial P \times [0,1])$.
\begin{ques}
Can we construct on the doubled Pr\"ufer surface $2P$ (alias
the CaRo-manifold) a transitive foliation? Same question for
$2P'$, the version with long-bridges. Is it even possible to
construct minimal foliations on those surfaces?
\end{ques}
It can be remarked that the answer is negative if the
foliation is orientable. This follows from the observation
that the Poincar\'e-Bendixson bag argument (for flows) holds
more generally for such foliations in a dichotomic surface.
\begin{prop} A dichotomic surface lacks a transitive oriented
foliation.
\end{prop}
Recall also from DYN that it is easy to check that both $2P$
and $2P'$ are dichotomic.
\fi
\iffalse (C) Modulo some details in the previous section, we
have that a surface with $\pi_1$ infinite cyclic lacks a
transitive foliation. By Dubois-Violette the thrice-punctured
plane has a minimal foliation. If I remember well, in an old
version of BGG1, I had a proof that three is the minimal
number of puncture required to the plane so as to admit a
minimal or transitive foliation. This suggests perhaps that a
(non-metric) surface with $\pi_1=F_2$ (free on two generators)
and which is dichotomic also lacks a transitive foliation. \fi
\iffalse
\subsection{Transitivity implies separability}
In this section we establish the following result, which
answers some of the above questions (e.g., it shows that $2P'$
is not labyrinthic)
\begin{prop}\label{separability}
Assume that $L$ is a dense leaf of a $1$-foliation on a
manifold $M^n$ of dimension $n\ge 2$. Then with the leaf
topology $L$ is homeomorphic to the real-line ${\Bbb R}$ or
the long ray ${\Bbb L}_+$. Furthermore $M^n$ is separable.
\end{prop}
\begin{proof} The proof uses the classification of
one-manifold into the four well-known type: circle ${\Bbb
S}^1$, real-line ${\Bbb R}$, long ray ${\Bbb L}_+$ and long
line ${\Bbb L}$. The two extreme items in this list are
sequentially-compact (sekt for short), thus always embedded
and closed as point-sets (compare Lemma~\ref{sekt} below).
Thus by invariance of the dimension, the dense leaf $L$
cannot be of those two type as $n\ge 2$. This proves the first
assertion. Regarding the separability of $M$, it is plain when
$L\approx {\Bbb R}$. So assume that $L\approx {\Bbb L}_{+}$ is
the long ray. To any point $p$ of $L$ one can split $L$ in a
short $L_{<p}$ and a long side $L_{\ge p}$, resp. homeomorphic
to ${\Bbb R}$ and ${\Bbb L}_{\ge 0}$ the closed long ray. The
latter space is also sequentially-compact, and so fails to be
dense. Thus the short side has to be dense in $M$, and the
separability of $M$ follows.
\end{proof}
The completion of the above proof involves the following:
\iffalse
\begin{lemma}\label{sekt} (i) Let $f\colon X \to Y$ be a continuous map, where
$X$ is sekt, and $Y$ Hausdorff and first-countable. Then
$f(X)$ is closed.
(ii) If furthermore $f$ is injective, then $f\colon X \to
f(X)$ is a homeomorphism onto its image.
\end{lemma}
{\footnotesize \begin{proof} First recall that a closed subset
of a space is sequentially-closed, and conversely if the space
is first-countable.
Thus for (i), it is enough to show that $f(X)$ is
sequentially-closed. So let $y_n\in f(X)$ be a sequence
converging to $y\in Y$. Choose $x_n\in X$ such that
$f(x_n)=y_n$. Since $X$ is sekt, we may extract a subsequence
$x_{n_k} \to x \in X$, converging to $x$, say. By continuity
$f(x_{n_k})\to f(x)$. Since $Y$ is Hausdorff, it follows
$y=f(x)$. q.e.d.
(ii) Let $g\colon f(X) \to X$ be the inverse mapping of $f$.
The continuity of $g$ amounts to show that $f$ is a closed
mapping. So let $F\subset X$ be a closed set, to show that
$f(F)$ is closed. Let $y_n\in f(F)$ such that $y_n\to y\in Y$.
Let $x_n\in F$ be such that $f(x_n)=y_n$. Since $X$ is sekt,
we may extract a subsequence $x_{n_k} \to x \in F$. By
continuity $f(x_{n_k})\to f(x)$. Since $Y$ is Hausdorff, it
follows $y=f(x)\in f(F)$.
\end{proof}
}
{\bf Remark:} In fact this proof should be optimised by
showing the following statement: \fi
\begin{lemma}\label{sekt} (i) Let $f\colon X \to Y$ be a continuous map, where
$X$ is sekt, and $Y$ Hausdorff and first-countable. Then the
mapping $f$ is closed.
(ii) In particular $f(X)$ is closed and if furthermore $f$ is
injective, then $f\colon X \to f(X)$ is a homeomorphism onto
its image.
\end{lemma}
{\footnotesize \begin{proof} First recall that a closed subset
of a space is sequentially-closed, and conversely if the space
is first-countable.
Thus for (i), it is enough to show that $f(F)$ is
sequentially-closed, whenever $F$ is closed. So let $y_n\in
f(F)$ be a sequence converging to $y\in Y$. Choose $x_n\in F$
such that $f(x_n)=y_n$. Since $X$ is sekt, we may extract a
subsequence $x_{n_k} \to x \in F$, converging to $x$, say. By
continuity $f(x_{n_k})\to f(x)$. Since $Y$ is Hausdorff, it
follows $y=f(x)\in f(F)$. q.e.d.
(ii)
The continuity of the inverse
follows from $f$ being a closed mapping.
\end{proof}
}
\fi
\iffalse
\begin{cor}\label{separability:cor} An $\omega$-bounded surface is intransitive,
except if it is the torus.
\end{cor}
\begin{proof} If transitively foliated, the surface $M$ is
separable (\ref{separability}). Being also $\omega$-bounded it
is compact. Since $M$ is foliated, $\chi(M)=0$. So by
classification, $M$ is either the torus or the Klein bottle
${\Bbb K}$. Yet the latter is intransitive. Indeed by a
classical result of Hellmuth Kneser (1924) any foliation of
${\Bbb K}$ has a circle leaf $K$.
Under transitivity the leaf $K$ cannot divide ${\Bbb K}$.
Cutting the surface along $K$ we get a connected compact
(bordered) surface with $\chi=0$ having either one or two
contours. By classification these are resp. a (compact)
M\"obius band or an annulus. Deleting the boundary, we get in
both cases surfaces with $\pi_1\approx {\Bbb Z}$, and we
conclude with (\ref{infinite-cyclic-group}).
\end{proof}
\fi
\iffalse
Following Rosenberg \cite{Rosenberg_1983} introduce the:
\begin{defn} A labyrinth is a one-dimensional foliation with a
dense leaf. By extension we
call a manifold
a labyrinth if it admits a labyrinth.
\end{defn}
A manifold which is a labyrinth has to be separable (except if
it is one-dimensional). Thus apart from the trivial case of
curves, we know four obstructions to being a labyrinth:
(1) non-separability (Prop.~\ref{separability});
(2) simple-connectivity in the two-dimensional case (by
Prop.~\ref{Alex_separation}(b))
(3) $\pi_1={\Bbb Z}$ in the 2D-case
(\ref{infinite-cyclic-group})
(4) $\pi_1=F_2$ in the dichotomic 2D-case
(\ref{dichotomic-free-of-rank-2:prop}).
(5) $\pi_1=F_2$ in the non-orientable 2D-case
(\ref{rank_two_non-orientable:intransitive}).
{\small \begin{rem} {\rm Simple-connectivity alone is not
enough. Indeed Harrison-Pugh 1989 (building over previous work
of Katok) showed how to construct an ergodic (hence
transitive) flow on ${\Bbb R}^3$ free from any singular point.
By a result of Whitney 1939, or more economically just because
it is smooth, this flows induces a foliation with a dense
leaf, i.e. a labyrinth on ${\Bbb R}^3$. Alternatively use the
Fathi-Herman minimal smooth flow on $S^3\times S^3$. }
\end{rem}
}
\fi
\iffalse
\begin{exam} {\rm So for instance the doubled Pr\"ufer surface with long
bridges, $2P'$, is not a labyrinth (because it is not
separable). For the same reason, the original Pr\"ufer surface
$P_{\rm collar}=P\cup (\partial P \times [0,\infty))$ is not a
labyrinth. (One can alternatively argue with (2)). Regarding
the Moore surface, it is not a labyrinth in view of the second
obstruction (2). Note however that the case of $2P$ (the
Calabi-Rosenlicht surface) is left unanswered.} \end{exam}
\fi
\iffalse
\begin{exam}\label{Dubois}
{\rm If we consider Dubois-Violette's labyrinth
on
${\Bbb R}^2_{3*}$ (three punctures) and Pr\"uferize along an
arc contained in a leaf we can certainly obtain a dichotomic
non-metric surface which is a labyrinth, and which is somehow
close to $2P$, yet not exactly homeomorphic to it for the
punctures give isolated ends (separable from the other ends
via a Jordan curve).
The question of deciding if $2P$ itself is a labyrinth remains
unsettled.}
\end{exam}
\fi
\iffalse
\subsection{Other point-set properties of 1-foliations}
Here we recall some basic properties of $1$-foliations, that
were already studied in BGG1, yet maybe under a different
angle.
Let us start with the following obvious property:
\begin{lemma}
None of the connected one-dimensional manifolds contains a
discrete uncountable subset.
\end{lemma}
\begin{prop} A leaf of a $1$-foliation on a manifold can
appear at most countably many times in any foliated chart
\end{prop}
\begin{proof} If not, then we have a foliated chart where
the leaf appears uncountably many times, but then we can
exhibit in $L$ a discrete uncountable subset.
\end{proof}
\begin{cor} A $1$-foliation of a manifold $M^n$ with $n\ge 2$
has at least uncountable many leaves.
\end{cor}
\fi
\iffalse
\subsection{Euler-Venn diagram of surfaces
with prescribed topology and foliated dynamics}
Now we attempt to draw a big picture showing
the mutual interactions between the
topology and the
possible foliated dynamics for surfaces. More precisely we
draw a Venn diagram with the following topological versus
dynamical foliated attributes:
(1)
Combinatorial topology: simply-connected $\Rightarrow$
dichotomic;
(2) Point-set topology: metric $\Rightarrow$ separable;
(3)
Foliated dynamics:
minimal $\Rightarrow$
transitive $\Rightarrow$ foliated.
Recall also the
interdisciplinary implications: transitive implies separable
(\ref{separability}), and the mutual exclusion of 1-connected
and transitive (\ref{Alex_separation}) or
(\ref{Poinc-Bendixson_many}). By way of examples the following
diagram shows that this is an essentially exhaustive list of
obstructions. However a closer look
aids discovering some new
obstructions at least at the
empirical level. For instance
it looks rather hard to exhibit a separable, 1-connected,
non-metric surface lacking a foliation. Thus the following
conjecture is a rather metaphysical existence question (i.e.,
potentially at the borderline of the usual axiomatic ZFC
(Zermelo-Fraenkel-Choice):
\begin{conj} Any pseudo-Moore surface (i.e.,
separable, 1-connected, but non-metric) admits a foliation.
\end{conj}
\begin{rem} {\rm Recall however that it is possible
(yet not very easy) to locate separable non-metric surfaces
lacking foliations (cf. the mixed Pr\"ufer-Moore surfaces in
BGG2 \cite{BGG2})}
\end{rem}
Let us
describe the various regions of the diagram:
(1) and (2) contains respectively only the plane and the
sphere which are the only simply-connected metric surfaces
(\ref{uniformization}).
(3) is a region where it is difficult to exhibit a specimen.
(For a candidate cf. Section~\ref{long-sun})
\begin{figure}[h]
\hskip-25pt \epsfig{figure=Venn.eps,width=142mm}
\vskip-15pt\penalty0
\caption{\label{Venn}
Some foliated geography in the non-metric realm}
\vskip-5pt\penalty0
\end{figure}
(4) contains the long-glass $\Lambda_{0,1}$ which is the
long-cylinder ${\Bbb S}^1\times{\Bbb L}_{\ge 0}$ capped-off by
a 2-disc. This surface cannot be foliated by BGG1 \cite{BGG1}.
(5) has a plethora of examples including the long-plane ${\Bbb
L}^2$, the long-quadrant ${\Bbb L}^2_+$ and the original
1-connected Pr\"ufer surface $P_{\rm collar}$.
(6) contains the Moore surface $M$ which has a foliation
induced by the vertical foliation of the bordered Pr\"ufer
surface $P$. The latter has semi-saddle singularities
disappearing during the folding process $P\to M$. Also in (6)
we have the more exotic Maungakiekie surface, which is the
result of a long Nyikos expansion effected to an open 2-cell.
(7) contains naively speaking $2P$ the doubled Pr\"ufer (alias
Calabi-Rosenlicht manifold) which has a horizontal foliation.
This does not preclude the possibility that $2P$ endowed with
a more exotic foliation is transitive. Thus here
we ignore the question of the sharp positioning
of the manifold $2P$, within the diagram. In view of
(\ref{Dubois}) it seems that $2P$ is transitive, yet not
minimal by (\ref{Baillif:nano-black-holes}), so $2P$ belongs
in reality to (8). Yet, class (7) contains the Moorized
annulus $M(A)$ with the radial foliation. Since $M(A)$ has a
$\pi_1$ isomorphic to ${\Bbb Z}$, it is intransitive
by (\ref{infinite-cyclic-group}). For the same reason this
class also contains the Moore surface punctured once $M_{*}$.
The twice-punctured Moore surface $M_{2*}$ is also here, being
dichotomic with $\pi_1=F_2$, hence intransitive by
(\ref{dichotomic-free-of-rank-2:prop}).
(8) is non-empty by Pr\"uferising Dubois-Violette's example
(\ref{Dubois}).
(9) contains a Nyikos long expansion performed near one of
the 4 punctures of Dubois-Violette.
(10) contains the Dubois-Violette foliation (on ${\Bbb
R}^2_{3*}$) punctured twice on the same leaf to generate
artificially a non-dense leaf. In reality the underlying
manifold ${\Bbb R}^2_{5*}$ is minimally foliated (puncture
twice on different leaves of Dubois-Violette).
(11) contains the punctured plane ${\Bbb R}^2_{*}$ (foliated
e.g. by concentric circles) which is intransitive by
(\ref{infinite-cyclic-group}). Likewise ${\Bbb R}^2_{2*}$ is
intransitive by (\ref{dichotomic-free-of-rank-2:prop}).
(12) is an empty region, because any metric open surface has a
Morse function without critical points, thus a foliation. So a
surface in (9) has to be compact, and the dichotomy allows
only the sphere (by classification
(\ref{Moebius-Klein-classification})), which has already been
positioned in a deeper nest of the diagram.
(13) could contain the Moorization $M(G)$ of a
multiply-connected domain $G=\Sigma_{0,n}$ with at least $n\ge
3$ contours (starting with the ``pant''). The vague idea would
be that the Moorization forces a vertical behavior near the
cytoplasmic expansions present in the Moorization, thus
``shaving'' the ``hairs'' we could find a compact subregion
(homeomorphic to $G$) along the boundary of which the
foliation is transverse, and then we would be done by the
Euler-Poincar\'e obstruction. But this
hasty intuition is totally wrong as shown in
Section~\ref{Moorized-disc:sec}. In
particular this argument would imply that the Moorized disc
cannot be foliated,
but Proposition~\ref{Moorized-disc:prop} shows the contrary.
In fact the same construction (cf. Fig.\,\ref{Moorized-disc}g)
it also shows that the $M(\Sigma_{0,n})$ for $n\ge 3$ also
admits foliation, and so belongs to region (7) or perhaps
(7+). So region (10) looks rather deserted, except if we
remind the construction in BGG2 \cite[Sect.\,4.2]{BGG2} of
mixed Pr\"ufer-Moore surfaces which produces a bunch of
separable, non-simply-connected surface lacking foliations.
Furthermore if we accept the operation of {\it total}
Nyikosization $N$, which produces long hairs at all point of
the boundary, then $N(\Sigma_{0,n})$ for $n \ge 2$ would
belong to (10), compare Section~\ref{long-sun}.
(14) has the surfaces $\Lambda_{0,n}$ of genus $0$ with $n\ge
3$ long cylinder-pipes (which lacks a foliation by BGG1
\cite{BGG1}).
(15) admits a plethora of examples and most of them arising
indeed from a non-singular flow (recall a theorem of Whitney
(building over Hausdorff) implying that such $2D$-flows induce
foliations.) Of our pictured examples the only one which is
not induced by a flow is the ``wormhole'' double long-plane,
i.e. the connected sum ${\Bbb L}^2 \# {\Bbb L}^2$ which is
however
foliated by circles.
(16) contains simple examples using variants of the Pr\"ufer
construction. For instance we can Pr\"uferize an annulus and
glue radially opposite boundaries by long bridges (cf.
figure).
(17) The same construction as in (16) with short bridges
yields a specimen.
(18) We lack any serious example.
However with the operation of full Nyikosization we can take
$N(\Sigma_{g,n})$ where the genus $g\ge 1$ and with $n\ge 1$
contours.
(19) Take a Kronecker torus Pr\"uferized along an arc. Note
that by the scenario of nano-black holes
(\ref{Baillif:nano-black-holes}) this surface is not minimal,
giving a sharp positioning.
(20) Take a Kronecker torus, with two punctures on the same
leaf so as to create a non-dense leaf. (Of course in reality
the twice punctured torus also carries a minimal foliation, so
it is not a sharp example. Yet puncturing the Kronecker torus,
and making a long Nyikos expansion near the puncture in the
direction of the foliation gives a minimal foliation on a
non-metric surface. (This trick is due to M. Baillif.)
(21) Take a Kronecker torus. This is the unique compact
specimen (as follows from Kneser), but there is of course a
menagerie of non-compact examples.
(22) Take the example of Dubois-Violette on $S^2_{4*}={\Bbb
R}^2_{3*}$.
(23) contains as fake specimen the Kronecker torus punctured
twice on the same leaf.
(24) Take the torus with the trivial foliation by circles. Of
course this is fake, since the torus really lives in (21). Yet
(24) contains the M\"obius band (=twisted ${\Bbb R}$-bundle
over $S^1$) which is intransitive by
(\ref{infinite-cyclic-group}). If we make one (or even 2)
punctures in M\"obius
the surface is still intransitive by
(\ref{intransitivity:new-obstruction}). For a compact example
we have
the Klein
bottle ${\Bbb K}$.
By Hellmuth Kneser (1924) (\ref{Kneser}) any foliation on the
Klein bottle has a circle leaf, hence ${\Bbb K}$ is surely not
minimal. We would like to show that Klein is intransitive.
This is well-known for flows (Markley, 1968). For foliations
one can cut the Klein bottle along the Kneser circle, which is
not null-homotopic (else it would bound a disc) and we get a
bordered surface of characteristic zero which is either an
annulus or a M\"obius band. Both have $\pi_1={\Bbb Z}$ so
there is an obstruction to a dense leaf by
(\ref{infinite-cyclic-group}).
(25) contains
all compact (closed) surfaces except those which have already
been positioned, namely the sphere, and the two surfaces with
Euler characteristic zero. These includes orientable surfaces
of genus $\ge 2$ and non-orientable surfaces (cross-capped
spheres with $g\ge 1$ cross-cap) for all value of $g$ except
$g=2$ which is the Klein bottle. Since an open metric surfaces
can always be foliated (Morse function argument), this is a
complete classification for the birds in class (25).
\fi
\subsection{Gr\"otzsch-Teichm\"uller theory for
non-metric surfaces?}
Since Rad\'o 1925 \cite{Rado_1925}, it is known that a
complex-analytic structure on a $2$-manifold implies second
countability, hence metrizability (Urysohn). Now such a
structure also known as a Riemann surface structure is
essentially the same as a conformal structure allowing one to
measure angles. In fact this can be achieved in the more
general category of di-analytic or Klein surfaces, where
non-orientable surfaces are permitted. Rad\'o's theorem
generalizes directly to Klein surfaces, by passing to the
orientation double cover which is a Riemann surface (with an
anti-holomorphic involution):
\begin{lemma} Any Klein surface is
metric.
\end{lemma}
\begin{proof} Above the Klein surface there is a Riemann
surface, which by Rad\'o \cite{Rado_1925} is metric, thus
Lindel\"of and the latter property pushes down to the Klein
surface, which being locally second countable is then second
countable, and Urysohn concludes.
\end{proof}
Since conformal structures are lacking on non-metric surfaces,
and recalling the quasi-conformal trend (initiated in the now
classical works of Gr\"otzsch, 1928, Lavrentief 1929, Ahlfors
1935, Teichm\"uller 1938, etc.) it is rather natural to wonder
about quasi-conformal structures. This was addressed by R.\,J.
Cannon 1969 \cite{Cannon_1969}, who found rather surprising
answer(s). More on this soon, yet let us first dream a little.
If a diffeomorphism (between regions) of the plane (say of
class $C^1$) is not conformal, then it will distort
infinitesimal circles into ellipses, whose eccentricity $Q\ge
1$ (long axis divided by the short axis) provides a dilatation
quotient measuring the deviation from conformality. The
diffeomorphism is {\it $K$-quasiconformal} if the distortion
$Q$ is bounded by a finite constant $1\le K<\infty$ throughout
its domain of definition.
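For the reader's convenience, the distortion just described admits the classical complex-analytic expression (a standard formula recalled here as an aside, under the assumption that $f$ is $C^1$ and orientation-preserving):
\[
Q(z)\;=\;\frac{|f_z(z)|+|f_{\bar z}(z)|}{|f_z(z)|-|f_{\bar z}(z)|},
\qquad
f_z=\tfrac{1}{2}\,(f_x-if_y),\quad
f_{\bar z}=\tfrac{1}{2}\,(f_x+if_y),
\]
so that $f$ is conformal at $z$ precisely when $f_{\bar z}(z)=0$, i.e. when $Q(z)=1$.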
A {\it $K$-quasiconformal structure} is defined by an atlas
with transition maps being $K$-qc. Of course a $1$-qc
structure is nothing else than a Klein structure, or a
conformal structure (non-orientable surfaces are welcome, at
least permitted).
Now the dream would be that given any (non-metric) surface
(say with a differentiable structure, although there is a way to
speak of quasi-conformality without regularity assumption),
then (following Gr\"otzsch's idea and phraseology) we could
look for the ``{\it m\"oglichst Konform}'' atlas minimizing
the angular distortion $Q$. Thus there is a way of assigning
to every surface $M$ a number $1\le Q(M)\le +\infty$ which is
the infimum of $K$ over all $K$-qc atlases. Of course we set
$Q(M)=+\infty$ if there is no such atlas, and $Q(M)=1$ holds
precisely when $M$ is metric (Rad\'o's theorem). This provides
a continuous numerical invariant quantifying how violently
non-metric the surface is. Unfortunately it seems that there
is no such fine quantification, for Cannon's main result is
that any $K$-quasi-conformal structure on a surface forces
metrizability so that $Q(M)$ can take only the values $1$ or
$+\infty$.
However Cannon also shows that some reasonable surfaces (of
the Pr\"ufer type) as well as surfaces deduced from the long
ray (Cantor type) allow quasi-conformal structures in the
weak sense that there is no uniform bound on the distortion
valid for all transition maps. This raises the question of whether
such (weak) quasi-conformal structures exist for all
surfaces. (Maybe this would follow from the conjectural
smoothability of surfaces.)
\medskip
{\small {\bf Acknowledgements.} The
author
wishes to thank Daniel Asimov, Andr\'e Haefliger, Claude
Weber
for conversations
on topics lying in or outside the scope of the present
note. Last but not least, the
experts David Gauld and Mathieu Baillif are
acknowledged for generous
e-mail exchanges.
}
\section{Introduction}
\label{introduction}
\hspace{8pt}Seismic interpretation requires detailed understanding of seismic acquisition, processing, and data models to infer geological meaning. The process of seismic data preprocessing and migration involves geometric transformation and analysis to produce an accurate image of Earth's subsurface. Prestack and poststack preprocessing can introduce artifacts or coherent noise into seismic data, leading to false-positive identification of seismic structures during interpretation. Hence, consistent accurate interpretation of seismic data requires many years of experience.
Despite improvement in the quality of 3-D migrated seismic data, thorough interpretation of certain geologic elements remains subjective. Subjective interpretation occurs due to the presence of many `valid' interpretations or weak amplitude reflections. One cause of weak amplitude reflection is the absence of bottom simulating reflectors in the subsurface during acquisition \cite[]{bedle2019seismic}. In addition, interpreted seismic data are considered intellectual property by energy industries. Consequently, publicly available annotated data are scarce. All these challenges call for a model-based framework that is objective with respect to defined constraints, requires little or no human-assisted labeling, and is powerful enough to learn deep, diverse patterns in seismic data.
Recent deep learning success stories have further motivated research into automated seismic interpretation using machine learning. However, the limited-label problem is a challenge to training deep learning models imported from computer vision applications, where labeled examples number in the millions. Consequently, deep learning models designed for seismic applications must be trained with this label scarcity in mind.
\hspace{8pt}To address these challenges, we propose a \emph{self-supervised} learning framework that does not require any labels from interpreters. Rather, the model is trained on the physics of seismic patterns, from which homogeneous patterns are separated out using constraints.
Various computer-assisted frameworks have been proposed in the literature, which we group into two broad categories: attributes-based interpretation and machine learning-based interpretation. Seismic attributes-based methods rely on mathematical computations to identify distinctive patterns of seismic amplitudes. These patterns are mapped to a database of previously successful patterns to guide interpretation \cite[]{chopra2005seismic}. Because the computation of geometric attributes can be automated, many seismic attributes have been introduced to the geoscience community \cite[]{taner1994seismic, barnes1992calculation, chen1997seismic, shafiq2015detection, shafiq2017salt, shafiq2018role, shafiq2018novel}. However, seismic attributes are usually designed to identify specific patterns of interest to an interpreter. Hence, patterns that deviate from the specified target go undetected. This implies that a suite of complementary attributes must be selected by an interpreter interested in identifying important events. Our proposed model, however, learns patterns without such specification bias. In addition, it contains millions of parameters, which makes it powerful enough to learn very complex patterns that would be missed by simpler algorithms such as attributes.
\hspace{8pt}Early adoption of machine learning models in seismic research began with supervised methods, in which the model has access to labeled data \cite[]{di2018patch, wu2018convolutional, xiong2018seismic, araya2017automated, dramsch2018deep}. Several notable works have also attempted to overcome the effect of limited annotated data by employing semi-supervised and weakly supervised techniques \cite[]{alaudah2016weakly, alaudah2018structure, di2019semi,babakhin2019semi, alfarraj2019semi, wu2019faultseg3d, liu20193d, di2018deep}. In these frameworks, there is less dependence on fully labeled data. In semi-supervised frameworks, for instance, researchers use fewer annotations augmented with pseudo-labels. Weakly supervised learning models use weaker labels, such as image-level labels only \cite[]{alaudah2018learning}. Weak labels are easier to generate in large quantities compared to the pixel-level annotations required in supervised frameworks.
\hspace{8pt}In contrast, our method does not require any annotated labels from interpreters. This ranks it at the same ease of use as attributes, with the exception that we apply a more powerful learnable model. Another relevant body of literature explores unsupervised learning, which also addresses the limited-data problem. These frameworks are mostly based on K-Means clustering and Kohonen's self-organizing maps (SOMs) \cite[]{barnes2002investigation}. The basic workflow includes extracting attributes from seismic data and using a dimension-reduction algorithm, mostly principal component analysis (PCA), to identify the most important features. The principal features are then clustered around a specified number of centroids. Although these methods have produced great results, the ground-truth centroids of the attributes are unknown, and manually initialized centroids do not converge to the ground-truth centroids. Secondly, PCA leads to information loss, and the distance metric used in clustering algorithms is usually Euclidean-based, which affects the clustering accuracy. Our proposed clustering framework does not require the number of centroids to be predefined; rather, our algorithm explores multi-scaled, directional spectral information in the images to extract high-dimensional coefficients before clustering them using a custom distance metric.
\hspace{8pt}Lastly, we group other relevant literature by target application. For instance, detection of faults \cite[]{araya2017automated, di2019improving, di2019semi, shafiq2018novel,wu2019faultseg3d, xiong2018seismic}, delineation of salt bodies \cite[]{di2018multi, di2018deep, shafiq2015detection}, classification of facies \cite[]{liu20193d, dramsch2018deep, qian2017seismic, alaudah2019machine, alaudah2019facies}, prediction of rock lithology from well logs \cite[]{alfarraj2019semi, das2018convolutional, das2019effect}, and segmentation of seismic layers are a few relevant areas of application of deep learning to seismic data. In this literature, labels of varying depth are used to train the networks. Our proposed method eliminates this challenge by learning image labels using an unsupervised framework. \cite{dubrovina2019composite} propose using the latent space of an encoder-decoder architecture to split and rearrange various parts of a 3D object. The latent space is projected onto summable orthogonal subspaces. Each orthogonal subspace of the latent variable retains low-level features of various parts of the 3D object. However, their dataset includes ground-truth labels of the 3D parts, whereas our method does not use any ground truth for the partitioned parts. \cite{li2019orthogonal} add an orthogonality penalty to latent variables and show that, by using SOMs on the orthogonal features, more separability of pixel-space features is realized. The methodology presented in \cite{li2019orthogonal} is similar to ours in applying orthogonality to latent variables. In our self-supervised framework, we introduce projection matrices to factorize input images, eliminating the need for an SOM on the features.
\hspace{8pt}In this work, we use the F3 block as our dataset. The F3 block is an offshore block in the Dutch sector of the North Sea. The dataset is preprocessed using a dip-median filter to remove random noise and to enhance the edges of the seismic reflections. Seismic traces in the volume are clipped above and below 4.0 times the standard deviation of the volume. All amplitude values are normalized to the range [-1, 1]. The volume is split into $120\times120$ overlapping patches. We propose a new hierarchical clustering model to group these patches into $K$ classes. Four of these classes are passed to a deep encoder-decoder model, augmented with two discriminators. The latent space of the encoder-decoder model is projected onto two learned subspaces. We add constraints to guide the factorization of each input patch into two images. Each synthesized image corresponds to a learned subspace.
The subspaces are further constrained to be orthogonal. Though we use discriminators, our method is self-supervised because the adversarial part of our model is used in a multi-task setting, separate from the learning of the factorized latent space, which is done without supervision. Lastly, we conclude by evaluating our proposed method against related research. We further show that our deep encoder-decoder model learns attributes that are qualitatively better than traditional attributes for structural delineation.
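To make the preprocessing pipeline concrete, the clipping, normalization, and patch-extraction steps described above can be sketched as follows (a minimal sketch assuming the volume is a 3-D NumPy array; the stride, i.e. the amount of patch overlap, and the function names are illustrative assumptions, since the text only states that the patches overlap):

```python
import numpy as np

def preprocess_volume(vol, clip_sigma=4.0):
    """Clip amplitudes at +/- clip_sigma standard deviations of the
    volume and rescale linearly so all values lie in [-1, 1]."""
    s = clip_sigma * vol.std()
    return np.clip(vol, -s, s) / s

def extract_patches(section, size=120, stride=60):
    """Split one 2-D seismic section into overlapping size x size
    patches. The stride of 60 (50 percent overlap) is an assumption
    of this sketch, not a value taken from the paper."""
    h, w = section.shape
    patches = [section[i:i + size, j:j + size]
               for i in range(0, h - size + 1, stride)
               for j in range(0, w - size + 1, stride)]
    return np.stack(patches)
```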
\section{Hierarchical Clustering Framework}
\hspace{8pt}We propose a hierarchical clustering framework to group images into $K$ clusters, where $K < 20$. The volume is subdivided into subset blocks. Each subset block contains 15 vertical sections in the inline or crossline direction. We treat inline and crossline subset blocks separately. The clustering framework consists of several layers. The first layer is initialized using a density-based algorithm, density-based spatial clustering of applications with noise (DBSCAN) \cite[]{ester1996density}, to produce contiguous clusters. All other layers consist of hierarchical merges of clusters from the preceding layers. Three sections, taken five sections apart, are extracted from each subset block. Such a set of three sections is called a category. Hence, each subset block contains three categories.
\hspace{8pt}We do not randomize the images across the entire volume (which would make them independent and identically distributed), in order to avoid losing domain-correlation information between sections. However, within each category, we randomize the order of all images. Each category contains $270$ images for inline and $165$ images for crossline. Inline categories are clustered separately from crossline categories due to differences in feature representation between the two sets.
\subsection{Clustering with DBSCAN}
\hspace{8pt}DBSCAN uses density-based metrics for cluster discovery. Two parameters are usually defined for DBSCAN: $m$ and $\epsilon$. $m$ is the minimum number of points that can form an independent cluster, while $\epsilon$ is the maximum distance between two core points. Given a group of points, DBSCAN determines whether a point is a core or a border point. The former belongs to a cluster of at least $m$ points within $\epsilon$ distance of each other. The latter is within $\epsilon$ distance of a core point but not within $\epsilon$ distance of $m$ points. At initialization, one point is randomly chosen from the dataset and neighboring points are checked to determine whether they are core or border points. If $m$ points are core points, a cluster is established, and other core points and border points within $\epsilon$ distance are added to that cluster. Points that are neither core nor border points are considered noise. In Figure \ref{fig:dbscan}, we show a toy example of DBSCAN with $m=3$. The red points are core points because they have $m\ge3$ neighboring points within $\epsilon$ distance. The yellow points are border points because they are within $\epsilon$ distance of fewer than $m$ points but are in the neighborhood of a core point. All core points are considered density-reachable from other core points. The blue points are considered noise because they are not within $\epsilon$ distance of any point in the group.
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.34]{Figure1.pdf}
\caption{A toy example of DBSCAN algorithm, $m=3$. The radius of the circles represent $\epsilon$ distance from each point. The red points are core points because they are within $m$ neighbors of each other. Yellows points are border points because they have less than $m$ points in their $\epsilon$ neighborhoods. The blue point is considered noise because it is not within $\epsilon$ distance to any other point.}
\label{fig:dbscan}
\end{figure}
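The core/border/noise trichotomy of Figure \ref{fig:dbscan} can be sketched directly from a pairwise distance matrix (a toy illustration only; whether a point counts itself among its $m$ neighbors varies between DBSCAN conventions, and this sketch includes it):

```python
import numpy as np

def classify_points(D, eps, m):
    """Label each point as 'core', 'border', or 'noise' given an
    n x n pairwise distance matrix D. A core point has at least m
    neighbors (itself included) within eps; a border point is not
    core but lies within eps of some core point."""
    n = len(D)
    neighbors = [np.where(D[i] <= eps)[0] for i in range(n)]
    core = {i for i in range(n) if len(neighbors[i]) >= m}
    labels = []
    for i in range(n):
        if i in core:
            labels.append("core")
        elif any(j in core for j in neighbors[i]):
            labels.append("border")
        else:
            labels.append("noise")
    return labels
```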
\hspace{8pt}To specify a distance metric for DBSCAN between two seismic images, we use a similarity (SIM) metric proposed by \cite{alfarraj2016content}. The authors showed that SIM compares curvelet coefficients from two images. Curvelet coefficients \cite[]{starck2002curvelet} are multi-scaled decompositions of an image obtained by tiling the frequency domain with trapezoidal-shaped tiles \cite[]{alfarraj2016content}. Additionally, SIM outperforms many other similarity measures for texture images such as S-SSIM \cite[]{wang2004image}, CW-SSIM \cite[]{gao2011cw}, and STSIM \cite[]{zujovic2013structural}. The tiled frequencies, collected at various scales of the image (here, 32) and in various directions, make the algorithm a good edge detector and hence suitable for seismic images. Singular value decomposition (SVD) is applied to the curvelet coefficients extracted from the images to select the best coefficients. The reduced coefficients are then compared using Czekanowski's similarity to obtain a score in the range $[0, 1]$ \cite[]{alfarraj2016content}:
\begin{equation}
SIM(I_1, I_2) = \frac{||v_1 - v_2 ||_1}{||v_1 + v_2||_1}.
\end{equation}
\noindent $SIM(I_1, I_2)$ returns a value in $[0, 1]$ for each image pair, where $I_1$ and $I_2$ are images and $v_1$ and $v_2$ are their SVD-reduced curvelet coefficients. Hence, a score of $0$ implies perfectly similar images, while $1$ denotes perfectly dissimilar ones.
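Given the SVD-reduced coefficient vectors $v_1$ and $v_2$, the score itself is a one-line computation (a sketch assuming, as is typical for Czekanowski-type measures, non-negative entries such as coefficient magnitudes; the curvelet and SVD stages are omitted):

```python
import numpy as np

def sim_distance(v1, v2):
    """Czekanowski-style dissimilarity between two reduced curvelet
    coefficient vectors: 0 means identical, 1 means maximally
    dissimilar. Assumes non-negative entries (e.g. magnitudes)."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    return np.abs(v1 - v2).sum() / np.abs(v1 + v2).sum()
```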
Inline and crossline images are fed sequentially into the DBSCAN algorithm. By experimenting with several hyperparameter values, we set $\epsilon=0.10$ for inline images and $\epsilon=0.09$ for crosslines, while $m=3$ for both inline and crossline images. Thus, for any image occurring at any location within any section, if that image is not within $\epsilon$ distance of any other image in the same category, it is marked as noise. We keep $\epsilon$ small to ensure thorough discrimination between clusters. The drawback of a small $\epsilon$ is that there will be many $(\sim 30)$ small clusters per category. However, this remains manageable since a hierarchical algorithm is applied to merge contiguous clusters.
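A run of this step might look as follows (a sketch using scikit-learn's DBSCAN with a precomputed distance matrix; the stand-in random distances replace the actual pairwise SIM computations, which are omitted here):

```python
import numpy as np
from sklearn.cluster import DBSCAN

# D stands in for the n x n matrix of pairwise SIM distances of one
# category; here we fabricate a small symmetric matrix for illustration.
rng = np.random.default_rng(0)
D = rng.random((6, 6)) * 0.05   # stand-in distances in [0, 0.05)
D = (D + D.T) / 2               # symmetrize
np.fill_diagonal(D, 0.0)

# eps = 0.10 for inline categories (0.09 for crossline) and m = 3,
# matching the values chosen in the text.
labels = DBSCAN(eps=0.10, min_samples=3, metric="precomputed").fit_predict(D)
# label -1 would mark images that DBSCAN leaves unclustered, i.e. noise
```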
\subsubsection{Algorithm 1}
\hspace{8pt}In the first step, we merge intra-category clusters. As shown in Figure \ref{fig:algorithm1}, we arrange the clusters in order of cluster size. Each circle represents a cluster. The blue column represents one category. We select a handful of images from the smallest cluster and compare these images to the remaining clusters inside the category. The smallest cluster is then merged with the most similar cluster, in this case, cluster three. The algorithm is iterated until the number of clusters reduces from $\sim 30$ to a predefined small number (e.g., $8$).
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.4]{Figure2.pdf}
\caption{Algorithm 1 procedure for intra-category merging of clusters. The blue circles are clusters arranged in ascending order of magnitude. For one merge instance, we compare cluster 1 with all other clusters till we find the closest match, e.g., cluster 3. In which case, we merge clusters 1 and 3 and re-sort all clusters again.}
\label{fig:algorithm1}
\end{figure}
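A minimal sketch of Algorithm 1, assuming a generic `dist` callable in place of the SIM metric and simple lists of image ids in place of image clusters:

```python
import random

def merge_intra(clusters, dist, target=8, n_samples=5, seed=0):
    """Algorithm 1 sketch: repeatedly merge the smallest cluster into its
    closest cluster within the same category until `target` clusters remain.
    `clusters` is a list of lists of image ids; `dist(a, b)` is any distance
    between two images (the SIM metric in the paper)."""
    rng = random.Random(seed)
    while len(clusters) > target:
        clusters.sort(key=len)                        # ascending by size
        smallest = clusters[0]
        sample = rng.sample(smallest, min(n_samples, len(smallest)))

        # average sample-to-cluster distance against every other cluster
        def avg(j):
            c = clusters[j]
            return sum(dist(a, b) for a in sample for b in c) / (len(sample) * len(c))

        best = min(range(1, len(clusters)), key=avg)
        clusters[best].extend(smallest)               # merge, then re-sort next pass
        clusters.pop(0)
    return clusters

# Toy run: image ids are numbers and the distance is the absolute difference,
# so the singleton [10] should merge into the nearby cluster [11, 12, 13].
out = merge_intra([[10], [0, 1, 2], [11, 12, 13]],
                  dist=lambda a, b: abs(a - b), target=2)
print(out)
```
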
\subsubsection{Algorithm 2}
\hspace{8pt}In the second step, we merge inter-category clusters. In Figure \ref{fig:algorithm2}, the blue column is category $A$ and the yellow column is category $B$. A few images are randomly sampled from each cluster in $A$ and compared with all clusters in $B$. All clusters in $A$ with a one-to-one mapping, in best proximity to clusters in $B$, are merged. For instance, because clusters 1, 2, and 4 in $A$ all select cluster 2 in $B$ as their closest cluster (a many-to-one mapping), we do not merge any of these clusters. However, we merge clusters 3-to-1 and N-to-N. After merging similar clusters, the remaining clusters in $B$ are transferred to $A$.
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.40]{Figure3.pdf}
\caption{Algorithm 2 procedure for inter-category merging of clusters. The blue and yellow columns are two categories: $A$ and $B$, whose clusters are to be merged. We compare each cluster in $A$ with every other cluster in $B$ to find the closest match. In this illustration, clusters 1, 2 and 4 in $A$ all select cluster 2 in $B$ as their closest match. Cluster 3 in $A$ maps to cluster 1 in $B$. Assuming for all other clusters in $A$, we obtain a one-to-one mapping in close match to clusters in $B$, we only match clusters with one-to-one matching between $A$ and $B$. We then copy over clusters with many-to-one $(1,2 \mbox{ and } 4)$ mapping from $A$ to $B$.}
\label{fig:algorithm2}
\end{figure}
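Algorithm 2 can be sketched in the same style; the one-to-one test on the nearest-cluster mapping is the key step, and `dist` again stands in for the SIM metric:

```python
from collections import Counter

def merge_inter(A, B, dist):
    """Algorithm 2 sketch: merge clusters across categories A and B only when
    the nearest-cluster mapping from A to B is one-to-one. Clusters in B that
    were not merged are transferred into the result."""
    def avg(a, b):
        return sum(dist(x, y) for x in a for y in b) / (len(a) * len(b))

    # nearest B-cluster for each A-cluster
    match = [min(range(len(B)), key=lambda j: avg(a, B[j])) for a in A]
    counts = Counter(match)
    merged, out = set(), []
    for a, j in zip(A, match):
        if counts[j] == 1:            # one-to-one mapping: merge a with B[j]
            out.append(a + B[j])
            merged.add(j)
        else:                         # many-to-one: keep the A cluster as-is
            out.append(a)
    # transfer the unmerged B clusters into the result
    out.extend(B[j] for j in range(len(B)) if j not in merged)
    return out

# Toy run with scalar "images" and absolute-difference distance.
A = [[1.0, 2.0], [9.0, 10.0]]
B = [[1.5], [9.5], [50.0]]
out = merge_inter(A, B, dist=lambda x, y: abs(x - y))
print(out)
```

Here both mappings are one-to-one, so both merges happen, and the unmatched cluster `[50.0]` is carried over unchanged.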
Figure \ref{fig:hierarchy_merging} illustrates how the hierarchical clustering module works. Both algorithms are applied alternately on the data until the total number of categories is reduced to $8$ and the total number of clusters inside all categories is reduced to $14$.
\begin{figure}[htb!]
\centering
\includegraphics[scale=0.40]{Figure4.pdf}
\caption{Hierarchical cluster merging. $L_0$ is the first layer and contains clusters from DBSCAN. Subsequent layers are obtained by iterating algorithm 1 and 2. $L_1$ is the second layer of clusters after the first iteration of both algorithms. $L_2$ is the third layer and so on.}
\label{fig:hierarchy_merging}
\end{figure}
Lastly, we inspect all 14 clusters. The total number of clustered images was 6386. There are eight visually unique clusters, including noise. Hence, we combine repetitive clusters. For instance, dipping amplitudes and non-dipping amplitudes can be combined into the `horizons' cluster.
\hspace{8pt}In Figure \ref{fig:clustering_result}, Cluster 0 is labeled as `chaotic' because the dominant geologic component highlighted by the clustering algorithm is the chaotic facies. Cluster 1 contains images from the salt dome region of the volume. In this cluster, it is interesting that although the patterns of the reflection amplitudes are diverse, our algorithm clusters them accurately. Cluster 2 identifies parallel reflectors, which we label as `horizons'. The horizons cluster contains the largest number of images of all clustered images, and the clustering algorithm also attains its highest accuracy in identifying this cluster.
\begin{figure*}[!thb]
\centering
\includegraphics[scale=0.45]{Figure5.pdf}
\caption{Result showing eight clusters at the final layer.}
\label{fig:clustering_result}
\end{figure*}
\hspace{8pt}Cluster 3 captures faulted regions of the volume and is labeled as faults. Notice that some discontinuities are labeled as faults. Cluster 4 captures another variant of the chaotic class with some non-conformity in seismic reflections. Cluster 5 contains images taken from the bright-spot region of the seismic volume. Interestingly, our algorithm differentiated this class of images from the horizon class in Cluster 2. Cluster 6 highlights images from the top of the salt dome; they were clustered into a different class by our algorithm because they did not fully represent structures captured inside the salt dome. Cluster 7 shows irregular seismic reflections; two of them contain chaotic regions while others do not fit into any of the other classes. There are several classes, like clusters 5 and 7, that would not be used in training our deep learning framework but are left for further study in future research. Figure \ref{fig:clustering_result} contains the output of our clustering framework.
\hspace{8pt}Four clusters were selected from Figure \ref{fig:clustering_result} for training the deep learning model; we refer to these clusters as classes hereafter. From the result in Figure \ref{fig:clustering_result}, the horizons (class 2), faults (class 3), salt dome (class 1), and chaotic (class 0) labels are representative of the selected images. The horizons cluster is the largest class with 1258 images, and the chaotic class is the smallest with 389 images. The fault and salt dome classes had 720 and 500 images, respectively. We discarded images clustered as noise. Hence, for balanced classes, we randomly selected 389 images from each class, making a total of 1556 training images.
\section{Deep Learning Framework}
\label{lt_fact}
\begin{figure}[!thb]
\centering
\includegraphics[scale=0.46]{Figure6.pdf}
\caption{A sample section image from inline 400 (a) showing four samples of images clustered by our hierarchical model, (b) shows manually labeled structural components versus background. Structural components are annotated in green and the background in red.}
\label{fig:sample_patches}
\end{figure}
\hspace{8pt}We have a set of clustered seismic images in four classes, each with an equal number of images. The assignment of image labels has been done by our clustering module, similar to the example in Figure~\ref{fig:sample_patches}a. We propose an encoder-decoder model to learn annotations as shown in Figure~\ref{fig:sample_patches}b. Clustered images were not pixel-annotated; Figure~\ref{fig:sample_patches}b is only for illustration. The architecture of the proposed model is shown in Figure \ref{fig:model}. The set of all training images is designated as $\mathbf{X}: \{\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_N\}$. The encoder is designated $q_\phi(\mathbf{x}_i)$ and the decoder is designated $p_\theta(\mathbf{z}_i)$. Additionally, $\mathbf{z}_i \in \mathbf{Z}$ is the latent vector corresponding to $\mathbf{x}_i$. The learning parameters in the encoder and decoder are $\phi$ and $\theta$, respectively, and $p_\theta(\mathbf{X})$ is the probability distribution of all seismic images over parameter $\theta$. Thus, the log-probability of the data is: $\log p_\theta(\mathbf{X})= \sum_i \log p_\theta(\mathbf{x}_i)$. Now, $p_\theta(\mathbf{x}_i)$ is intractable because the posterior $p_\theta(\mathbf{z}_i|\mathbf{x}_i)$ is intractable \cite[]{kingma2013auto}. For notational convenience, we will drop the subscript $i$ on $\mathbf{x}_i$ and $\mathbf{z}_i$, as all future references to these variables refer to a single data instance during iterative training or inference.
\hspace{8pt}We introduce two projection matrices: $\mathbf{P}_1$ and $\mathbf{P}_2$. Both matrices are fully connected layers of size $1024 \times 1024$. The matrices project $\mathbf{z}$ to orthogonal subspaces $\mathbf{z}_{1}$ and $\mathbf{z}_{2}$, respectively. In Figure~\ref{fig:sample_patches}b, $\mathbf{z}_1$ and $\mathbf{z}_2$ are mapped to the green and red annotations in each input image. Note that we do not assume our clusters are free of images with overlapping class representations. In one forward pass, operators $\mathbf{P}_1$ and $\mathbf{P}_2$ are applied to $\mathbf{z}$ thus:
\begin{equation}
\mathbf{z}_1 = \mathbf{P}_1 \mathbf{z}, \; \; \; \mathbf{z}_2 = \mathbf{P}_2 \mathbf{z}.
\end{equation}
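As a small numeric illustration of this factorization (with hand-built $4 \times 4$ projectors standing in for the learned $1024 \times 1024$ layers):

```python
import numpy as np

# Two exact, mutually orthogonal projectors onto complementary subspaces of
# R^4. The learned P1 and P2 in the model are only encouraged toward these
# properties through the projection loss; here they hold exactly.
P1 = np.diag([1.0, 1.0, 0.0, 0.0])   # keeps the first two coordinates
P2 = np.eye(4) - P1                   # complementary projector

z = np.array([0.5, -1.2, 2.0, 0.3])   # a toy latent code
z1, z2 = P1 @ z, P2 @ z               # the two factorized codes

print(np.allclose(P1 @ P1, P1))               # idempotent
print(np.allclose(P1 @ P2, np.zeros((4, 4)))) # mutually orthogonal
print(np.allclose(z1 + z2, z))                # the two factors recompose z
```

The last check mirrors the role of the reconstruction $\mathbf{r} = \hat{\mathbf{x}}_1 + \hat{\mathbf{x}}_2$ in image space: complementary projections of the latent code recompose the original.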
\noindent$\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$ are synthesized from the decoder, $p_\theta(\cdot)$. The decoder tries to learn some properties of each seismic image and use that knowledge to reconstruct both images.
\begin{equation}
\mathbf{\hat{x}}_1 \sim p_\theta(\mathbf{z}_1), \; \; \;
\mathbf{\hat{x}}_2 \sim p_\theta(\mathbf{z}_2).
\end{equation}
\noindent We desire the reconstructed image $\mathbf{r}$ to be as similar to the input image as possible. Note that $\mathbf{r}$ is usually blurry in encoder-decoder networks. Hence, we use a discriminator, $D_1$, to ensure $\mathbf{r}$ is sharp. A second discriminator, $D_2$, is applied to the encoder to ensure the latent variables of $\mathbf{z}$ are spread out. In the original variational auto-encoder architecture, \cite{kingma2013auto} derive the evidence lower bound for reconstructing $\mathbf{x}$. Because we modify the behavior of the latent space by projecting it to orthogonal latent spaces, we re-derive the evidence lower bound in the context of projecting the latent space. The decoder distribution $p_\theta(\mathbf{X})$ is intractable because the posterior is intractable. Hence, we sample from an auxiliary distribution, $q_\phi(\mathbf{z})$. We drop $\theta$ and $\phi$ for notational convenience, knowing $p$ and $q$ are distributions over these parameters, respectively. The derivation of the evidence lower bound (ELBO) on the log-likelihood of $\mathbf{X}$ is as follows:
\noindent $\mathbb{E}_{\mathbf{x} \sim p_{\mathbf{x}}} \; \log \left(p_\theta(\mathbf{x}) \right)$ is the log-likelihood of generating the data.
Next, we introduce $\mathbf{z}_1$ and $\mathbf{z}_2$ and sum over their marginals:
\begin{equation}
\begin{aligned}
\mathbb{E}_{\mathbf{x} \sim p_{\mathbf{x}}} \; (\log \left(p_\theta(\mathbf{x})) \right) \\ = \mathbb{E}_{\mathbf{x} \sim p_{\mathbf{x}}} \left[ \log \left( \sum_{\mathbf{z}_1} \sum_{\mathbf{z}_2} p_\theta(\mathbf{x}, \mathbf{z}_1, \mathbf{z}_2) \right) \right].
\end{aligned}
\label{marginals}
\end{equation}
\noindent We sample from $q(\mathbf{z|x})$ and take the expectation over its conditional distribution because we cannot sample from the posterior $p(\mathbf{z|x})$:
\begin{equation}
\begin{aligned}
(\ref{marginals})
\ge \mathbb{E}_{\mathbf{x} \sim p_{\mathbf{x}}} \left[ \mathbb{E}_{\mathbf{z}_1 \sim q(\mathbf{z}_1|\mathbf{x})} \; \log \left(\frac{p(\mathbf{z}_1)}{q(\mathbf{z}_1|\mathbf{x})} \right) \right.\\ + \left. \mathbb{E}_{\mathbf{z}_2 \sim q(\mathbf{z}_2|\mathbf{x})} \; \log \left(\frac{p(\mathbf{z}_2)}{q(\mathbf{z}_2|\mathbf{x})} \right) \right]\\ +
\mathbb{E}_{\mathbf{x} \sim p_{\mathbf{x}}} \left[ \mathbb{E}_{q(\mathbf{z}_1, \mathbf{z}_2|\mathbf{x})} \; \log \left(p(\mathbf{x}|\mathbf{z}_1, \mathbf{z}_2) \right) \right ]. \\
\end{aligned}
\label{auxillary}
\end{equation}
Rewriting the expressions in (\ref{auxillary}) as $KL$ divergence terms gives:
\begin{equation}
\begin{aligned}
\mathbb{E}_{\mathbf{x} \sim p_{\mathbf{x}}} \left[ -KL(q(\mathbf{z}_1|\mathbf{x})||p(\mathbf{z}_1)) -KL(q(\mathbf{z}_2|\mathbf{x})||p(\mathbf{z}_2)) \right] \\
+ \mathbb{E}_{q_{({\mathbf{z_1, z_2|x}})}} \log(p(\mathbf{x|z_1,z_2})). \\
\end{aligned}
\label{KL_div}
\end{equation}
We can write the $KL$ terms in (\ref{KL_div}) in terms of entropies:
\begin{equation}
\begin{aligned}
\mathbb{E}_{\mathbf{x} \sim p_{\mathbf{data}}} \left[ \mathbb{E}_{q_\phi(\mathbf{z_1, z_2|x})} \log \left( p_\theta(\mathbf{x|z_1, z_2}) \right) \right.\\ +
\left. H(q(\mathbf{z_1|x})) + H(q(\mathbf{z_2|x})) - H \left(p(\mathbf{z_1})\right) - H \left(p(\mathbf{z_2})
\right)\right],
\end{aligned}
\label{entropy}
\end{equation}
\noindent where $KL(\mathbf{a} || \mathbf{b})$ is the Kullback-Leibler divergence between distributions $\mathbf{a}$ and $\mathbf{b}$.
From equation \ref{entropy}, we conclude that: evidence $\ge$ reconstruction $+$ entropy of projected latent codes $-$ entropy of actual latent priors.
In equation \ref{entropy}, the entropy of the encoder is $H(q(\mathbf{z|x}))$, and the conditional entropies of the encoder due to the projected priors are $H(q(\mathbf{z_1|x}))$ and $H(q( \mathbf{z_2|x}))$. The entropies of the priors in the orthogonal subspaces are $H(p(\mathbf{z_1}))$ and $H(p(\mathbf{z_2}))$. If the conditional entropies of the encoder after projecting the latent space match the entropies of the projected priors, i.e., $H(q(\mathbf{z_1|x})) - H(p(\mathbf{z_1}))=0$ and $H(q(\mathbf{z_2|x})) - H(p(\mathbf{z_2}))=0$, then the reconstructed image $\mathbf{r}$ will be a perfect reconstruction of $\mathbf{x}$. This would only happen if the projected priors are perfectly sampled from the actual priors of the distributions representing the foreground and background of each $\mathbf{x}$. Two images are reconstructed by the decoder: $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$. Here, we assume that the encoder learns not only to generate the latent codes, but also that it needs to generate two latent codes that match the two virtual distributions in the decoder. Hence, it must generate one latent code that encapsulates two virtual distributions in the decoder. The decoder learns parameters $\theta$ such that each reconstructed seismic image has two factors inherent in it. It remains that the decoder must be constrained to understand both factors.
\begin{figure*}
\centering
\includegraphics[scale=0.5]{Figure7.pdf}
\caption{Diagram of proposed deep learning module. $\mathbf{X}$ is the set of all input images. $q_{\phi}(.)$ is the encoder. $\mathbf{z}$ is the latent code factorized into $\mathbf{z}_1, \mathbf{z}_2$. $p_{\theta}(.)$ is the decoder that outputs the factorized images: $\hat{\mathbf{X}}_1$ and $\hat{\mathbf{X}}_2$.}
\label{fig:model}
\end{figure*}
\subsection{Latent space factorization using projection matrices}
\label{proj_matrix}
\hspace{8pt}A projection matrix $\mathbf{P}$ is an $n \times n$ square matrix that maps $\mathbb{R}^n$ onto a vector subspace $T$. Two properties are relevant here:
\begin{enumerate}
\item{$\mathbf{P}^2 = \mathbf{P}$ (idempotent property), which any valid projection matrix must satisfy}.
\item{$\mathbf{P} = \mathbf{P}^*$ ($\mathbf{P}^*$ is the adjoint of $\mathbf{P}$), which additionally makes the projection orthogonal}.
\end{enumerate}
Figure \ref{fig:model} shows two projection matrices $\mathbf{P}_1$ and $\mathbf{P}_2$ designed to be mutually orthogonal.
However, we impose no constraint to make either of them an orthogonal projection matrix, which would require $\mathbf{P} = \mathbf{P}^T$ for a real $\mathbf{P}$.
To impose a projection matrix behavior on both matrices, we create a fully connected layer in the model and initialize it using a uniform distribution. Now, both matrices are learnable and both projection and orthogonality constraints can be imposed on them by solving an optimization objective.
We formulate the following projection loss function:
\begin{equation}
L_{proj} = \sum_{i,j, \; \; i \neq j}^2 \mathbf{P}_i^T \mathbf{P}_j + \sum_{i=1}^2 (\mathbf{P}^2_i - \mathbf{P}_i).
\label{lproj}
\end{equation}
\noindent where $L_{proj}$ is the projection loss, which is added to the adversarial losses. We discuss the use of discriminators and their corresponding losses below.
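A sketch of how $L_{proj}$ might be evaluated; reducing the matrix-valued terms to scalars via squared Frobenius norms is our implementation choice, since a loss must be a scalar:

```python
import numpy as np

def projection_loss(P1, P2):
    """Sketch of the projection loss defined above: penalize deviation from
    mutual orthogonality (P_i^T P_j = 0 for i != j) and from idempotency
    (P^2 = P), each term reduced to a scalar via a squared Frobenius norm."""
    ortho = (np.linalg.norm(P1.T @ P2, 'fro') ** 2
             + np.linalg.norm(P2.T @ P1, 'fro') ** 2)
    idem = sum(np.linalg.norm(P @ P - P, 'fro') ** 2 for P in (P1, P2))
    return ortho + idem

# Exact complementary projectors incur zero loss; a perturbed pair does not.
P1, P2 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
print(projection_loss(P1, P2))            # 0.0
print(projection_loss(P1 + 0.1, P2) > 0)  # True
```

Minimizing this quantity during training drives the learnable fully connected layers toward genuine, mutually orthogonal projectors.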
\subsection{Adversarial Training}
\label{adversarial}
\hspace{8pt}\cite{goodfellow2014generative} introduced Generative Adversarial Networks (GANs). A GAN sets up an adversarial min-max game between a generator $G$ and a discriminator $D$. During training, the generator attempts to generate an image that lies in the dataset distribution space such that the discriminator is unable to distinguish whether the image is real or generated, while the discriminator gets better at separating real images from generated (fake) ones. This adversarial setup constrains the generator to produce sharp images, almost at the quality of the input image. GANs are superior to basic encoder-decoder models in the quality of the images they generate. Images reconstructed from encoder-decoder models are usually blurry because the mean squared error (MSE) loss between the reconstructed and original image implies the error between both images is Gaussian noise, which is not the case in seismic images \cite[]{zhao2017towards}.
For notational purposes, our decoder serves as the generator $G$ in this section, while our encoder is represented as $E$ in the adversarial setting. The two discriminators specified in Figure \ref{fig:model} retain their notations, $D_1$ and $D_2$. $D_1$ ensures a sharp reconstruction, $\mathbf{r}$, while $D_2$ helps spread out the latent variables, $\mathbf{z}$. As seen in Figure \ref{fig:model}, $\mathbf{r} = \hat{\mathbf{x}}_1 + \hat{\mathbf{x}}_2$. We introduce an $L_1$ loss between $ \hat{\mathbf{x}}_1 \mbox{ and } \hat{\mathbf{x}}_2$ to enforce structural difference:
\begin{equation}
L_{diff} = -1 \times \sum_{n=1}^N |\hat{\mathbf{x}}_1 - \hat{\mathbf{x}}_2|.
\label{ldiff}
\end{equation}
\noindent where $L_{diff}$ is the $L_1$ penalty. Since discriminator $D_1$ takes $\mathbf{r}$ as input, the adversarial loss further ensures $\mathbf{r}$ is similar to $\mathbf{x}$. The adversarial loss for $D_1 \mbox{ versus } G$ becomes:
\begin{equation}
\begin{aligned}
\min_{G} \max_{D_1} &
E_{\mathbf{x} \sim p_{data}} \left[\log (D_1(\mathbf{x})) \right] \\ + E_{\mathbf{z} \sim q(\mathbf{z})} \left[ \log(1-D_1(G(\mathbf{z}))) \right].
\end{aligned}
\label{ldav1}
\end{equation}
\noindent Similarly, we define an adversarial loss on the latent space $\mathbf{z}$.
Training a GAN is very challenging. One experimental problem with training GANs is mode collapse. Mode collapse occurs when $G$ reconstructs the same output image for varying latent variable $\mathbf{z}$. In this case, $\mathbf{r}$ converges to a local minimum of one or a few images without generalizing properly over the whole data distribution. To solve this problem, we apply discriminator $D_2$ to $\mathbf{z}$. $D_2$ helps to impose a uniform distribution on $\mathbf{z}$ and to flatten out the mixed Gaussian space of $\mathbf{z}$.
\begin{equation}
\begin{aligned}
\min_{G} \max_{D_2} &
E_{\mathbf{z} \sim q(\mathbf{z})} \left[\log(D_2(\mathbf{z})) \right] + E_{\mathbf{u} \sim U_{[0,1]}} \left[\log(1-D_2(\mathbf{u})) \right]. \\
\end{aligned}
\label{ldav2}
\end{equation}
Equation \ref{ldav2} defines the loss on $D_2$, which flattens the mode of the Gaussian distribution in the latent space. Thus, the mode-collapse problem in the latent space is diminished. The losses in equations \ref{ldav1} and \ref{ldav2} are denoted $L_{D_1} \mbox{ and } L_{D_2}$, respectively.
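Both adversarial objectives reduce to binary cross-entropy terms on the discriminator outputs. The sketch below evaluates them on hypothetical discriminator scores (all numbers are assumptions for illustration, not model outputs):

```python
import numpy as np

def bce(pred, target):
    # binary cross-entropy for sigmoid outputs in (0, 1)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# Hypothetical discriminator scores for a batch of two samples.
d1_real, d1_fake = np.array([0.9, 0.8]), np.array([0.2, 0.1])       # D1 on x and r
d2_latent, d2_uniform = np.array([0.8, 0.7]), np.array([0.3, 0.2])  # D2 on z and u

# D1 objective (first min-max game): real images labeled 1, reconstructions 0.
L_D1 = bce(d1_real, np.ones(2)) + bce(d1_fake, np.zeros(2))
# D2 objective (second game): latent codes labeled 1, uniform samples labeled 0,
# which pushes q(z) toward the uniform prior and counters mode collapse.
L_D2 = bce(d2_latent, np.ones(2)) + bce(d2_uniform, np.zeros(2))
print(L_D1, L_D2)
```
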
\subsection{Reconstruction Loss}
\hspace{8pt}In equation \ref{ldiff}, we impose a structural difference loss between $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$. We need a reconstruction loss to ensure both images do not become a trivial solution while minimizing $L_{diff}$ in equation \ref{ldiff}. A trivial solution to $L_{diff}$ could be $\hat{\mathbf{x}}_1=\mathbf{1}$, a matrix of all 1s, and $\hat{\mathbf{x}}_2 =\mathbf{0}$, a matrix of all 0s. To avoid this, we define a reconstruction loss on $\mathbf{r}$ as follows:
\begin{equation}
L_{reconst.} = \frac{1}{N} \sum_{n=1}^N |{\mathbf{x}}_n - \mathbf{r}_n|^2.
\end{equation}
\noindent But $\mathbf{r} = \hat{\mathbf{x}}_1 + \hat{\mathbf{x}}_2$. Hence,
\begin{equation}
\label{reconstruction}
L_{reconst.} = \frac{1}{N} \sum_{n=1}^N |{\mathbf{x}}_n - (\hat{\mathbf{x}}_{1_n} + \hat{\mathbf{x}}_{2_n})|^2.
\end{equation}
\noindent Equation \ref{reconstruction} ensures both $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$ are valid seismic images and ensures a cycle consistency with the input image.
Lastly, the composite model is trained with losses (\ref{lproj}-\ref{reconstruction}) over 300 epochs. After each forward pass, we back-propagate $L_{reconst}$ to update $E$, $G$, $\mathbf{P}_1$, and $\mathbf{P}_2$. $L_{D_1}$ is back-propagated to update models $D_1$ and $G$, while $L_{D_2}$ is back-propagated to update models $D_2$ and $E$. Finally, the $L_{proj}$ and $L_{diff}$ losses are used to update $\mathbf{P}_1$ and $\mathbf{P}_2$ simultaneously.
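The update schedule can be summarized in the following pseudocode; the helper names (`step`, `uniform_like`, `loader`) are illustrative and not part of the paper:

```
for epoch in range(300):
    for x in loader:
        z = E(x)                          # encode
        z1, z2 = P1(z), P2(z)             # factorize the latent code
        x1_hat, x2_hat = G(z1), G(z2)     # decode both factors
        r = x1_hat + x2_hat               # recompose the input

        step(L_reconst(x, r),                 update=[E, G, P1, P2])
        step(L_D1(x, r),                      update=[D1, G])
        step(L_D2(z, uniform_like(z)),        update=[D2, E])
        step(L_proj(P1, P2) + L_diff(x1_hat, x2_hat), update=[P1, P2])
```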
\section{Results}
\label{results}
\begin{figure*}[htb!]
\centering
\begin{tabular}{cc}
\includegraphics[scale=0.54]{Figure8a.pdf} & \includegraphics[scale=0.54]{Figure8b.pdf} \\
(a) Chaotic Region Delineation & (b) Salt Region Delineation \\
\includegraphics[scale=0.54]{Figure8c.pdf} & \includegraphics[scale=0.54]{Figure8d.pdf} \\
(c) Horizon Lines Delineation & (d) Fault Region Delineation \\
\end{tabular}
\caption{Output images from our proposed latent space factorization model trained on $120 \times 120$ patches. In each sub-figure, there are six images. $\mathbf{x}$ is the input image. $\hat{\mathbf{x}}_1$ is the image synthesized from projection matrix $\mathbf{P}_1$, and likewise $\hat{\mathbf{x}}_2$ is synthesized from $\mathbf{P}_2$. $\mathbf{r}$ is the reconstructed image. $|\hat{\mathbf{x}}_1-\hat{\mathbf{x}}_2|$ is the $L_1$-norm sparse output obtained from minimizing $L_{diff}$ in equation \ref{ldiff}. (a) shows the delineation of chaotic facies in the image. For each output image, we threshold out pixel values below 0.15. (b) shows a patch from a salt region; notice that the sparse pixel labels identify interesting structures in the image. (c) shows parallel reflectors delineated differently from the regions of smoothed amplitude reflections. (d) shows step-wise amplitude reflectors in the faulted region. Our method does not delineate faults along the faulting planes because it is unsupervised, but this regional delineation is sufficient to extend our patch model to sectional volume fault region delineation. Lastly, $\hat{\mathbf{x}}_1$ in (a), (c), (d) and $\hat{\mathbf{x}}_2$ in (b) were synthetically generated by our algorithm to satisfy our factorization objective because they are dissimilar to images in our training set.}
\label{patch-delineation}
\end{figure*}
\hspace{8pt}We show qualitative results on the trained images and generalize our predictions on the output images to make predictions on full vertical seismic sections. In addition, we show how our method can be applied to attribute extraction. In our attribute extraction process, we analyze two complementary attributes and compare them to six existing attributes from the literature.
\hspace{8pt}First, each output image from the entire workflow has both an assigned class and a pixel label corresponding to a region of interest. In Figure~\ref{patch-delineation}, the input image $\mathbf{x}$ obtained from our clustering module is passed to the latent space model to obtain $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$. In each sub-figure of Figure~\ref{patch-delineation}, $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$ are synthesized by the model to be structurally complementary to each other such that their combination reconstructs $\mathbf{x}$, and the $L_1$ norm of their difference reveals a geologically meaningful region. In each sub-figure, we show sparse inference regions using confidence values. Each pixel value is a probability in $[0, 1]$, where $0$ means the pixel does not belong to a geological class and $1$ means it does. For improved performance, we threshold out probability values below a cut-off to obtain a cleaner binary image at the bottom right. Evidence that both $\hat{\mathbf{x}}_1$ and $\hat{\mathbf{x}}_2$ are factorized from $\mathbf{x}$, as we hypothesized, is given by their mean-squared error (MSE) against the input image $\mathbf{x}$.
In all four sub-figures, both synthesized images ($\hat{\mathbf{x}}_1$, $\hat{\mathbf{x}}_2$) have higher MSEs than the reconstruction $\mathbf{r}$. This implies that both images are not as similar to the input image as the reconstructed one, because they are factorized components of the input image. Further evidence is observed in $\hat{\mathbf{x}}_1$ in Figures \ref{patch-delineation}(a), \ref{patch-delineation}(c), \ref{patch-delineation}(d) and $\hat{\mathbf{x}}_2$ in Figure \ref{patch-delineation}(b).
In all these instances, the model generates an image that is unlike any image in our training set. The orthogonal counterparts of these images are similar to our input images, with more contrast. Hence, $\hat{\mathbf{x}}_1$ \mbox{ and } $\hat{\mathbf{x}}_2$ are equivalent representations of the orthogonal subspaces onto which their latent counterparts were projected.
The image label and pixel annotation obtained in Figure~\ref{patch-delineation} can be generalized to label sections of the F3 block to assist with seismic interpretation. We trained a segmentation model, DeepLab V3+ \cite[]{chen2017deeplab}, on all output images from our proposed pipeline and applied a soft-max classification layer to assign each reflection amplitude to one of the four pre-defined classes. Furthermore, we ran another experiment to determine whether each amplitude in the seismic section belongs to a structure class. The latter experiment fits into the attribute extraction framework discussed later. In all predicted pixel labels, we threshold out small probability values to improve our confidence in the pixel labels. Figure~\ref{fig:thresholds} shows four threshold values: $0.25, 0.35, 0.45$, and $0.55$.
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{Figure9.pdf}
\caption{Effect of four cut-off thresholds on predicted sections. a) With the threshold set to 0.25, the prediction is quite noisy. b) When the threshold is set to 0.35, the prediction is less noisy. c) shows further progressive improvement in the prediction at a threshold of 0.45. d) shows clear structural prediction when the threshold is set to 0.55.}
\label{fig:thresholds}
\end{figure*}
As shown in Figure~\ref{fig:thresholds}, images with a cut-off of $0.55$ performed best on structural classification of the seismic amplitudes when trained on the DeepLab V3+ model. Hence, we report results based on a threshold of $0.55$. Note that the colorbars in Figure~\ref{fig:thresholds} do not reflect the thresholds; the thresholds are applied to the output images of the latent space model, which are then used to train the segmentation network.
\hspace{8pt}Figure~\ref{fig:2d} shows the corresponding results for a test section at inline 350 of the F3 block. Figure~\ref{fig:2d}a identifies parallel reflectors with dipping amplitude reflections. In Figure~\ref{fig:2d}b, an approximate blob is highlighted on the chaotic sedimentary strip of the section. The chaotic region to the right is not as well delineated as the region on the left. A possible explanation for this observation is that most of the images clustered into the chaotic class were taken from regions on the left of the F3 block. If the chaotic facies on the right differs from the one on the left, then this partial delineation is a derived anomaly that may be corrected by including images from the right chaotic region in our training set. However, since our method of model training is self-supervised, we report our result as is.
Figure~\ref{fig:2d}c identifies the region containing fault structures on the left of the salt dome in the highlighted section. The highlighted region on the right of the salt dome is a false positive. However, the left identified region is delineated correctly. Note that the model delineates regions with faults without tracking the fault lines. Delineating regions of faults helps interpreters locate fault regions from which manual tracking of the faults may be done. Although salt domes are challenging to delineate, we show that our method identifies the salt dome boundary with significant accuracy in Figure~\ref{fig:2d}d. Other non-salt regions to the left and right of the salt are lightly highlighted. They are also false positives but we can dismiss them due to low probability values ($\sim 0.25$) in those regions.
\begin{figure*}[htb!]
\includegraphics[width=\textwidth]{Figure10.pdf}
\caption{Factorized 2D sections of the F3 Block showing a) horizons, b) chaotic, c) faults, and d) salt dome structures in inline 350 of the F3 block. The white arrows indicate regions delineated correctly by our framework.}
\label{fig:2d}
\end{figure*}
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{Figure11.pdf}
\caption{3D delineation of structures in the F3 Block. The $z$-axis is the time axis, the $x$-axis is the inline direction, and the $y$-axis points in the crossline direction. a) shows parallel reflector lines delineated, b) shows a vertical section across the chaotic strip delineated, c) shows regions with fault structures delineated, and d) shows the salt-dome region delineated by our framework.}
\label{fig:3d}
\end{figure*}
We extended our section-based structural delineation in Figure~\ref{fig:2d} to 3D volume factorization of the F3 block. Figure~\ref{fig:3d} shows four facies in four separate classification instances in the F3 block. The delineated facies are marked using white arrows. Evidence that factorization of the volume occurs can be seen in the relative absence of structures other than the classified one. For instance, in Figure~\ref{fig:3d}a, only parallel reflectors from the horizon class are shown, while the salt dome, fault region, and chaotic facies are factorized out. In Figure~\ref{fig:3d}b, the chaotic facies is delineated and the parallel reflectors shown in Figure~\ref{fig:3d}a are absent. In the fault regions shown in Figure~\ref{fig:3d}c, the delineated region on the right is a false positive corresponding to the false-positive region identified on the right of the salt dome in Figure~\ref{fig:2d}. However, the left white arrows identify the fault region. Note that in Figure~\ref{fig:3d}d, the salt dome is delineated, but the horizons and chaotic regions are factorized out. The 3D delineation applies mostly to the boundary of the salt dome structure and corresponds to the regions delineated in Figure~\ref{fig:2d}d.
\subsection{Attribute Extraction}
\label{attributes}
\hspace{8pt}We further demonstrate that our deep learning model can factorize the F3 block into foreground and background attributes, each acting as a feature to guide the seismic interpreter in making informed decisions. The DeepLab segmentation model is modified to classify each amplitude in a section into two classes: foreground and background. Hence, every amplitude is mapped to a binary class. For each section, we compare the two predicted classes to six attributes from the literature for qualitative assessment. Gradient of texture (GoT) is a recent seismic attribute based on the gradients of seismic amplitudes in a moving window. In the 3D version, the gradients in the $x$, $y$, and $z$ directions are measured and combined into one value per pixel. The 2D version calculates gradients only in the $x$ and $y$ coordinates. The GoT attribute has been applied to delineate the boundaries of salt domes in 2D sections and 3D seismic volumes \cite[]{shafiq2015detection}. We compared our proposed attributes to 2D and 3D GoT.
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{Figure12.pdf}
\caption{Eight attributes extracted from the same F3 block volume, all shown for inline section 200. a) is the 3D GoT attribute. b) is the human-visual-system-based saliency map for the same section. c) shows phase congruency; note that phase congruency is an improvement on 3D GoT. d) is texture-based GLCM, which performs poorly for structural delineation in this application. e) is the instantaneous amplitude attribute; it identifies the boundary of the salt dome and highlights a few horizon lines. For completeness, we include 2D GoT in f); 2D GoT does not reveal many features aside from the edges of the salt dome. g) and h) are our proposed background- and foreground-based attributes. Notice that in h), most features we seek are represented: the salt dome boundary, the chaotic strip, and the horizon lines. The fault region is lightly delineated. g) is complementary to h); it shows the background regions that do not belong to features identified as structures.}
\label{fig:attributes}
\end{figure*}
Phase congruency \cite[]{shafiq2017salt} is another recent seismic attribute that improves on GoT. Phase congruency is a dimensionless quantity and, unlike GoT, it is unaffected by changes in image illumination and contrast.
\hspace{8pt}In addition, we compared our method against a saliency-based seismic attribute \cite[]{shafiq2018role}, which views the features in the seismic volume in a manner analogous to human visual perception. The component features of saliency attributes consist of fast Fourier transform (FFT) coefficients. Similar to 3D GoT, saliency-based attributes can be computed in 2D and 3D variants. Lastly, we compare against gray-level co-occurrence matrix (GLCM) texture attributes \cite[]{chopra2006applications} and instantaneous amplitude \cite[]{white1991properties} attributes.
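For reference, a GLCM texture attribute reduces to counting co-occurrences of quantized amplitudes at a fixed offset and summarizing the matrix with a statistic. A minimal sketch, assuming a single horizontal offset and the contrast statistic (the cited work may use different offsets, window sizes, and statistics):

```python
import numpy as np

def glcm_contrast(img, levels=8, dx=1, dy=0):
    """Quantize amplitudes into `levels` gray levels, count co-occurrences
    at offset (dy, dx), and return the contrast of the normalized GLCM."""
    edges = np.linspace(img.min(), img.max(), levels + 1)[1:-1]
    q = np.digitize(img, edges)                # gray levels 0 .. levels-1
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    glcm /= glcm.sum()
    ii, jj = np.indices((levels, levels))
    return float(((ii - jj) ** 2 * glcm).sum())  # high for abrupt textures
```

A chaotic or checkerboard-like texture yields a much larger contrast than a smoothly varying one, which is the property GLCM attributes exploit.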
Figures~\ref{fig:attributes}g and \ref{fig:attributes}h are our proposed attributes, while Figures~\ref{fig:attributes}a - \ref{fig:attributes}f are the attributes we compare against. Figure~\ref{fig:attributes}g is the proposed background attribute and Figure~\ref{fig:attributes}h is the proposed foreground attribute. GoT and phase congruency, in Figures~\ref{fig:attributes}a and \ref{fig:attributes}c, show better delineation of parallel reflectors than our foreground attribute in Figure~\ref{fig:attributes}h. However, our method shows a less noisy delineation of the parallel reflectors and highlights the most important ones. Saliency, instantaneous amplitude, and 2D GoT, in Figures~\ref{fig:attributes}b, \ref{fig:attributes}e, and \ref{fig:attributes}f, delineate mostly the salt boundary and the fault regions, which implies our framework performs better in delineating parallel reflectors. GLCM, in Figure~\ref{fig:attributes}d, performed worst among all eight attributes at delineating structural features.
\subsection{Comparison with a Machine Learning Framework by \cite{alaudah2018structure}}
\label{compare_aludah}
\hspace{8pt}Our deep learning framework could also be applied to solve a label-mapping problem similar to the one solved by \cite{alaudah2018structure}. The author applied a non-negative matrix factorization (NNMF) algorithm to predict pixel labels from image labels. The NNMF factorizes an input matrix $X$ into features and label assignments: $$X = WH,$$ where $X$ is a matrix with images as column entries, $W$ is a matrix of features, and $H$ gives the assignment of the features in $W$ to their respective classes. The features learned during factorization were mapped to corresponding images to delineate geological structures. In Figure~\ref{fig:alaudahvsus}, we attempt to label pixels by mapping image labels learned from our clustering framework to pixel predictions made by our deep learning model, and we compare the result with \cite{alaudah2018structure}'s framework.
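The factorization $X = WH$ can be sketched with the classical Lee-Seung multiplicative updates for the Frobenius loss; this is a generic NNMF, not necessarily the exact algorithm of \cite{alaudah2018structure}, and the rank and iteration count below are illustrative:

```python
import numpy as np

def nnmf(X, k, iters=500, eps=1e-9):
    """Non-negative factorization X ~ W H via Lee-Seung multiplicative
    updates (Frobenius loss). Columns of X are vectorized images; W holds
    non-negative features and H their assignment weights."""
    rng = np.random.default_rng(0)
    m, n = X.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update assignments
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update features
    return W, H
```

Because the updates are multiplicative, $W$ and $H$ stay non-negative throughout, which is what makes the learned features interpretable as additive parts.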
\begin{figure*}[htb!]
\centering
\includegraphics[width=\textwidth]{Figure13.pdf}
\caption{We make a class-by-class comparison of our unsupervised model to \cite{alaudah2018structure}'s weakly supervised model.}
\label{fig:alaudahvsus}
\end{figure*}
The author used four classes: chaotic, salt dome, faults, and others, where the latter includes images that do not belong to the previous three classes. There are two major differences between our framework and the author's. First, in the latter, an interpreter assigns image labels to a few examples and uses an image-retrieval technique to obtain labels for the remaining images, which are then fed to the weakly supervised framework. Second, \cite{alaudah2018structure}'s method is not based on a deep learning framework. In contrast, we do not use image labels, and we train a deep learning framework for both pixel and image labels. Figure~\ref{fig:alaudahvsus} shows four classes of images. Note that in labeling pixels in all the classes, the accuracy of our framework is highest along edges, while \cite{alaudah2018structure}'s labeling is more region oriented. For instance, our framework assigns the highest confidence values of delineated geologic features to peak-to-trough edges in all four classes. This is because we used an $L_1$-norm sparsity loss in generating the probability maps for predicting structures. In the chaotic features delineated, the performance of both frameworks is close. Because \cite{alaudah2018structure}'s method was not trained on the horizons class specifically, the delineation of structures in this class of images was left out by the algorithm. However, in the same class, we are able to delineate lines of parallel reflectors.
\cite{alaudah2018structure}'s labeling captured the fault region more elegantly than ours, but our method highlights the salt regions more thoroughly than \cite{alaudah2018structure}'s method, which mostly labeled the edges or boundary of the salt structure. Salt imaging is challenging due to the rugose structure of salt domes, leading to complex amplitude patterns in salt facies. These complex patterns can conflict with other structural patterns in the feature factorization of \cite{alaudah2018structure}'s framework, leading to relatively poorer performance. Our method attempts to separate interesting features from the surrounding background; hence, it captures complex patterns better than the previous framework.
\section{Conclusion}
\hspace{8pt}We proposed two frameworks based on hierarchical clustering and a deep adversarial model. We showed that our hierarchical clustering model can be used for unsupervised clustering of seismic images. We also showed that our self-supervised deep learning framework can segment geological structures using orthogonal latent space projection. One limitation of our self-supervised model is that the delineated structures are partial, which leaves room for improvement. Furthermore, the hierarchical clustering part and the self-supervised part are separate modules. In future work, we could create a comprehensive framework that does not need a pre-clustering module. The adversarial training method could also be improved for more stable training. Lastly, the latent space factorization methodology could be extended to delineating multiple geological components in each image, compared to the current foreground-and-background methodology.
\section{Appendix - A Proof of Equation \ref{entropy}}
\hspace{8pt}Here, we provide a detailed proof of equation \ref{entropy}.
Let $\mathbf{X}$ be our input data of images, $\mathbf{X} = [\mathbf{x}_1, \mathbf{x}_2, ..., \mathbf{x}_N ]$, and assume each $\mathbf{x}_i$ is independent. Let the distribution of $\mathbf{X}$ be $p_{data}$. Assume there exists a decoder/generator from which we can generate $\mathbf{x} \in \mathbf{X}$, and let the distribution of this generator, over some learned parameter $\theta$, be $p_{\theta}(\mathbf{x})$. The log-likelihood of generating $\mathbf{X}$ is $\mathbb{E}_{\mathbf{x} \sim p_{data}}[\log(p_\theta(\mathbf{x}))]$. We introduce an auxiliary distribution $q_{\phi}(\mathbf{z|x})$, over another learned parameter $\phi$, to approximate the intractable posterior; we need it because, to reconstruct each $\mathbf{x}$ from the likelihood function, we must know the underlying latent distribution. We can re-write the log-likelihood as:
\begin{equation}
\begin{aligned}
\mathbb{E}_{x \sim p_{data}} \left[ \log
\left( p_\theta( \mathbf{x}) \right) \right] \\ = \mathbb{E}_{x \sim p_{data}} \left[ \log
\left(\sum_\mathbf{z} p_\theta( \mathbf{x, z}) \right) \right] \\
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
& = \mathbb{E}_{x \sim p_{data}} \left[\log \left( \sum_\mathbf{z} p_\theta(\mathbf{x|z}) . p_\theta(\mathbf{z}) \right) \right],
\end{aligned}
\label{eqn:appdx}
\end{equation}
\noindent but the posterior $p_\theta(\mathbf{z|x})$ is intractable, hence we use the auxiliary distribution $q_{\phi}(\mathbf{z|x})$ to approximate it. Note that we introduce a prior $\mathbf{z}$ into equation \ref{eqn:appdx}. From an encoder-decoder framework, the prior $\mathbf{z}$ is the latent space variable passed into the decoder/generator. We know the distribution of $\mathbf{z}$ is Gaussian with zero mean and unit variance due to batch normalization at the end of the encoder. Hence, we re-write equation \ref{eqn:appdx} as:
\begin{equation}
\begin{aligned}
\mathbb{E}_{\mathbf{x} \sim p_{data}} \left[\log \sum_\mathbf{z} (p_\theta( \mathbf{x, z})) \right]
\end{aligned}
\end{equation}
We can condition the likelihood on $\mathbf{z_1, z_2}$ without loss of generality:
\begin{equation}
\begin{aligned}
& = \mathbb{E}_{\mathbf{x} \sim p_{data}}\left[ \log \left(\sum_\mathbf{z_1} \sum_\mathbf{z_2} (p_\theta(\mathbf{x|z_1, z_2})) . p_\theta(\mathbf{z_1, z_2}) \right) \right].
\end{aligned}
\label{lgen}
\end{equation}
Next, we introduce $q_\phi(\mathbf{z_1}|\mathbf{x})$, $q_\phi(\mathbf{z_2}|\mathbf{x})$ into (\ref{lgen}) as follows:
\begin{equation}
\begin{aligned}
= \mathbb{E}_{\mathbf{x} \sim p_{data}} \left[\log \left( \sum_\mathbf{z_1} \sum_\mathbf{z_2} q_{\phi}(\mathbf{z_1|x}).q_{\phi}(\mathbf{z_2|x}) \right. \right. \\ \times
\left. \left. \frac{p_\theta(\mathbf{z_1}). p_\theta(\mathbf{z_2})}{q_{\phi}(\mathbf{z_1|x}). q_{\phi}(\mathbf{z_2|x})} . p_\theta(\mathbf{x|z_1, z_2}) \right) \right]
\end{aligned}
\label{intro_aux}
\end{equation}
Simplifying,
\begin{equation}
\begin{aligned}
(\ref{intro_aux}) = \mathbb{E}_{\mathbf{x} \sim p_{data}} \left[ \log \left( \mathbb{E}_{\mathbf{z_1} \sim q_\phi(\mathbf{z_1|x})} \mathbb{E}_{\mathbf{z_2} \sim q_\phi(\mathbf{z_2|x})} \frac{p_\theta(\mathbf{z_1})}{q_\phi(\mathbf{z_1|x})} \right. \right. \\ \left. \left. \times
\frac{p_\theta(\mathbf{z_2})}{q_\phi(\mathbf{z_2|x})} . p_\theta(\mathbf{x|z_1, z_2}) \right) \right]
\end{aligned}
\label{simplifying}
\end{equation}
Applying Jensen's inequality,
\begin{equation}
\begin{aligned}
(\ref{simplifying}) \ge \mathbb{E}_{\mathbf{x} \sim p_{data}} \left[ \mathbb{E}_{\mathbf{z_1} \sim q_\phi(\mathbf{z_1|x})} \log \left( \frac{p_\theta(\mathbf{z_1})}{q_{\phi}(\mathbf{z_1|x})} \right) \right. \\ \left. +
\mathbb{E}_{\mathbf{z_2} \sim q_\phi(\mathbf{z_2|x})} \log \left( \frac{p_\theta(\mathbf{z_2})}{q_{\phi}(\mathbf{z_2|x})} \right) \right. \\ \left. + \mathbb{E}_{q_\phi(\mathbf{z_1, z_2|x})} \log(p_\theta (\mathbf{x|z_1, z_2})) \right]
\end{aligned}
\label{jensen}
\end{equation}
Now we can write each term in its $KL$-divergence equivalent.
\begin{equation}
\begin{aligned}
(\ref{jensen}) = \mathbb{E}_{\mathbf{x} \sim p_{data}} \left[-KL(q_\phi(\mathbf{z_{1}|x})||p_\theta(\mathbf{z_1})) \right. \\ \left. - KL(q_\phi(\mathbf{z_{2}|x})||p_\theta(\mathbf{z_2})) \right. \\ + \left.
\mathbb{E}_{q_\phi(\mathbf{z_1, z_2|x})} \log(p_\theta(\mathbf{x|z_1, z_2})) \right]
\end{aligned}
\label{kl_equiv}
\end{equation}
We can re-arrange equation \ref{kl_equiv} in terms of entropies:
\begin{equation}
\begin{aligned}
= \mathbb{E}_{\mathbf{x} \sim p_{\mathbf{data}}} \left[ \mathbb{E}_{q_\phi(\mathbf{z_1, z_2|x})} \log \left( p_\theta(\mathbf{x|z_1, z_2}) \right) \right. \\ \left. + H(q_\phi(\mathbf{z_1|x})) + H(q_\phi(\mathbf{z_2|x})) \right. \\ \left. + \mathbb{E}_{q_\phi(\mathbf{z_1|x})} \log \left(p_\theta(\mathbf{z_1})\right) \right. \\
\left. + \mathbb{E}_{q_\phi(\mathbf{z_2|x})} \log \left(p_\theta(\mathbf{z_2})
\right)\right]\\
\end{aligned}
\end{equation}
Further re-arranging, we arrive at the same form as equation \ref{entropy}:
\begin{equation}
\begin{aligned}
= \mathbb{E}_{\mathbf{x} \sim p_{\mathbf{data}}} \left[ \mathbb{E}_{q_\phi(\mathbf{z_1, z_2|x})} \log \left( p_\theta(\mathbf{x|z_1, z_2}) \right) \right. \\ \left. + H(q_\phi(\mathbf{z_1|x})) + H(q_\phi(\mathbf{z_2|x})) \right. \\ \left. - H \left(p_\theta(\mathbf{z_1})\right) - H \left(p_{\theta}(\mathbf{z_2})
\right)\right].\\
\end{aligned}
\label{proof}
\end{equation}
This proves equation (\ref{entropy}).
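The chain of steps above can be checked numerically on a toy discrete model with a single binary latent (standing in for $\mathbf{z_1}, \mathbf{z_2}$; all probability tables below are illustrative): the log-evidence dominates the bound, and rewriting the KL terms as entropies leaves the bound unchanged.

```python
import numpy as np

# Toy check of  log p(x) >= E_q[log p(x|z)] - KL(q(z|x) || p(z)),
# and of the KL-to-entropy rearrangement used in this appendix.
pz = np.array([0.5, 0.5])             # prior over a binary latent z
px_z = np.array([[0.9, 0.1],          # p(x | z = 0)
                 [0.2, 0.8]])         # p(x | z = 1)
qz_x = np.array([[0.7, 0.3],          # q(z | x = 0)
                 [0.4, 0.6]])         # q(z | x = 1)

for x in (0, 1):
    log_px = np.log(np.sum(pz * px_z[:, x]))          # exact log-evidence
    q = qz_x[x]
    # ELBO in KL form: E_q[log p(x|z)] - KL(q || p)
    elbo = np.sum(q * np.log(px_z[:, x])) - np.sum(q * np.log(q / pz))
    # Same bound in entropy form: E_q[log p(x|z)] + H(q) + E_q[log p(z)]
    elbo_entropy = (np.sum(q * np.log(px_z[:, x]))
                    - np.sum(q * np.log(q)) + np.sum(q * np.log(pz)))
    assert abs(elbo - elbo_entropy) < 1e-12   # the rearrangement is exact
    assert log_px >= elbo                      # Jensen's inequality holds
```

The equality of the two ELBO expressions mirrors the rearrangement from equation (\ref{kl_equiv}) to the entropy form, while the inequality mirrors the Jensen step.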
\bibliographystyle{seg}
\section{Introduction}
The idea of a `condensed vacuum' is generally accepted in modern
elementary particle physics. Indeed, in many different contexts one
introduces a set of elementary quanta whose perturbative empty
vacuum state $|o\rangle$ is not the true ground state of the theory.
For instance, in the physically relevant case of the Standard Model
of electroweak interactions, the situation can be summarized by
saying that "What we experience as empty space is nothing but the
configuration of the Higgs field that has the lowest possible
energy. If we move from field jargon to particle jargon, this means
that empty space is actually filled with Higgs particles. They have
Bose condensed" \cite{thooft}. The translation from field jargon to
particle jargon can be obtained, for instance, along the lines of
Ref.\cite{mech} where the substantial equivalence between the
effective potential of quantum field theory and the energy density
of a dilute particle system was established.
For this reason, it becomes natural to ask \cite{pagano} if Bose
condensation, i.e. the spontaneous creation from the empty vacuum of
elementary spinless quanta and their macroscopic occupation of the
same quantum state, say ${\bf{k}}=0$ in some reference frame
$\Sigma$, might represent the operative construction of a "quantum
ether". This would characterize the {\it physically realized} form
of relativity and could play the role of preferred frame in a modern
Lorentzian approach.
Usually this possibility is not considered with the motivation,
perhaps, that the average properties of the condensed phase are
summarized into a single quantity that transforms as a world scalar
under the Lorentz group. For instance, in the Standard Model, the
vacuum expectation value $\langle\Phi\rangle$ of the Higgs field.
However, this does not imply that the vacuum state itself has to be
{\it Lorentz invariant}. Namely, Lorentz transformation operators
$\hat{U}'$, $\hat{U}''$,.. might transform non-trivially the
reference vacuum state $|\Psi^{(0)}\rangle$ (appropriate to an
observer at rest in $\Sigma$) into $| \Psi'\rangle$, $|
\Psi''\rangle$,.. (appropriate to moving observers S', S'',..) and
still, for any Lorentz-invariant operator $\hat{G}$, one would find
\begin{equation} \langle \hat{G}\rangle_{\Psi^{(0)}}=\langle
\hat{G}\rangle_{\Psi'}=\langle
\hat{G}\rangle_{\Psi''}=..\end{equation} The possibility of a
non-Lorentz-invariant vacuum state was addressed in Ref.\cite{epjc}
by considering two basically different approaches. In a first
description, by following the axiomatic approach to quantum field
theory \cite{cpt}, the vacuum is described as an eigenstate of the
energy-momentum vector. Therefore, by observing that (with the
exception of unbroken supersymmetries) there are no known
interacting theories with a vanishing vacuum energy, and using the
Poincar\'e algebra of the boost and energy-momentum operators, one
deduces that the physical vacuum cannot be a Lorentz-invariant state
and that, in any moving frame, there should be a non-zero vacuum
spatial momentum $\langle {\hat{P}_i}\rangle_{\Psi'}\neq 0$ along
the direction of motion. In this way, for a moving observer S' the
physical vacuum would look like some kind of ethereal medium for
which, in general, one can introduce a momentum density $\langle
\hat{W}_{0i}\rangle_{\Psi'}$ through the relation (i=1,2,3)
\begin{equation} \label{density} \langle {\hat{P}_i}\rangle_{\Psi'}\equiv \int
d^3x~\langle \hat{W}_{0i}\rangle_{\Psi'} \neq 0 \end{equation} On
the other hand, in an alternative picture where one assumes the
following form of the vacuum energy-momentum tensor
\cite{zeldovich,weinberg}
\begin{equation}\label{zeld} \langle \hat{W}_{\mu\nu}\rangle_
{\Psi^{(0)}}=\rho_v ~\eta_{\mu\nu}\end{equation} ($\rho_v$ being a
space-time independent constant and $\eta_{\mu\nu}={\rm
diag}(1,-1,-1,-1)$), one is driven to completely different
conclusions. In fact, by introducing the Lorentz transformation
matrices $\Lambda^\mu_\nu$ to any moving frame S', defining $\langle
\hat{W}_{\mu\nu}\rangle_{\Psi'}$ through the relation
\begin{equation} \label{cov} \langle
\hat{W}_{\mu\nu}\rangle_{\Psi'}=\Lambda^{\sigma}{_\mu}\Lambda^{\rho}{_\nu}
~\langle\hat{W}_{\sigma\rho}\rangle_{\Psi^{(0)}}\end{equation} and
using Eq.(\ref{zeld}),
it follows that the expectation value of $\hat{W}_{0i}$ in any
boosted vacuum state $| \Psi'\rangle$ vanishes, just as it vanishes
in $|\Psi^{(0)}\rangle$, i.e. \begin{equation} \label{density1} \int
d^3x~ \langle \hat{W}_{0i}\rangle_{\Psi'} \equiv \langle
{\hat{P}_i}\rangle_{\Psi'}= 0 \end{equation} As discussed in
Ref.\cite{epjc}, both approaches have their own good motivations and
it is not so obvious to decide between Eq.(\ref{density}) and
Eq.(\ref{density1}) on pure theoretical grounds.
At the same time, checking the Lorentz invariance of the physical
vacuum by an explicit microscopic calculation, in the realistic case
of the Standard Model, seems to go beyond the present possibilities.
To this end, in fact, one should construct the transformed vacuum
state $|\Psi'\rangle$ by acting with the appropriate boost generator
on the reference condensed vacuum state $|\Psi^{(0)}\rangle$. Even
disposing, at least in the simplified case of spontaneous symmetry
breaking in a pure scalar theory \cite{ciancitto}, of a
non-perturbative ansatz for $|\Psi^{(0)}\rangle$, as a coherent
state expressed in terms of the creation and annihilations operators
$a^{\dagger}_{\bf p}$ and $a_{\bf p}$ of the trivial empty vacuum
state $|o\rangle$, one is faced with a serious problem: the standard
second-quantized form of the boost generators \begin{equation} \hat{M}_{0i}=i\int
{{d^3{\bf p} }\over{(2\pi)^3}} ~a^{\dagger}_{\bf p}~ \omega({\bf
p}){{\partial}\over{\partial p_i}}~ a_{\bf p} \end{equation} is only valid for
a free-field theory. For an interacting theory, the explicit
construction of the boost generators is only known in perturbation
theory (see e.g. \cite{poincare1,poincare2} and references quoted
therein) and thus this type of approximation could hardly be trusted
in the presence of non-perturbative phenomena such as vacuum
condensation. In addition, even in perturbation theory, the
elimination of ultraviolet divergences in global operators
represents a delicate task so that only very simple theories or
low-dimensionality cases have been worked out so far. For these
reasons, deciding on the Lorentz-invariance of the condensed vacuum
of present particle physics represents a highly non-trivial problem.
Alternatively, one might argue that a satisfactory solution of the
vacuum-energy problem lies definitely beyond flat space. A non-zero
$\rho_v$, in fact, will induce a cosmological term in Einstein's
field equations and a non-vanishing space-time curvature which
anyhow dynamically breaks global Lorentz symmetry.
Nevertheless, in our opinion, in the absence of a consistent quantum
theory of gravity, physical models of the vacuum in flat space can
be useful to clarify a crucial point that, so far, remains obscure:
the huge renormalization effect that is seen when comparing the
typical vacuum-energy scales of modern particle physics with the
experimental value of the cosmological term needed in Einstein's
equations to fit the observations. For instance, the picture of the
vacuum as a superfluid explains in a natural way why there might be
no non-trivial macroscopic curvature in the equilibrium state where
any liquid is self-sustaining \cite{volovik}. In this framework, the
condensation energy of the medium plays no observable role so that
the relevant curvature effects may be orders of magnitude smaller
than those expected by solving Einstein's equations with the full
$\langle \hat{W}_{\mu\nu}\rangle_ {\Psi^{(0)}}$ as a source term. In
this perspective, ``induced-gravity'' \cite{adler} approaches,
where gravity somehow arises from the excitations of the quantum
vacuum itself, may become natural and, to find the appropriate form
of the energy-momentum tensor in Einstein's equations, we are led
to sharpen our understanding of the vacuum structure and of its
excitation mechanisms by starting from the physical picture of a
superfluid medium.
By following this approach, in Ref.\cite{epjc}, to explore the
possible effects of the energy-momentum flow expected in a moving
frame according to Eq.(\ref{density}), a phenomenological two-fluid
model was adopted in which the quantum vacuum, in
addition to the main zero-entropy superfluid component, contains a
small fraction of ``normal'' fluid. This is responsible for a
non-zero $\langle \hat{W}_{0i}\rangle_{\Psi'}$ and gives rise to a
small heat flow and to an effective thermal gradient
\begin{equation} \label{gradient}
{{\partial T }\over {\partial x^i}}\equiv-{{\langle
W_{0i}\rangle_{\Psi'} }\over{\kappa_0}}
\end{equation} Here $\kappa_0$ is an unknown parameter, introduced for
dimensional reasons, that plays the role of thermal conductivity of
the vacuum. Since its value is unknown, the effective thermal
gradient is left as an entirely free quantity whose magnitude should
be constrained by experiments.
In principle, this effective gradient could induce small convective
currents in a loosely bound system such as a gaseous medium (placed in
container at rest in the laboratory frame) and produce a slight
anisotropy of the speed of light in the gas. On the other hand, for
a strongly bound system, such as a solid or liquid transparent
medium, the small energy flow generated by the motion with respect
to the vacuum condensate should dissipate mainly by heat conduction
with no particle flow and no light anisotropy in the rest frame of
the medium, in agreement with the classical experiments in glass and
water.
For this reason, one should design a new class of ether-drift
experiments where two optical cavities are filled with a gas and
study the frequency shift $\Delta \nu$ between the two resonators
that gives a measure of the possible anisotropy of the two-way speed
of light. Such a type of "non-vacuum" experiment would be along the
lines of Ref.\cite{holger} where just the use of optical cavities
filled with different materials was considered as a useful tool to
study possible deviations from Lorentz invariance.
The aim of this paper is to give a set of precise predictions for
this new class of ether-drift experiments. In Sect.2 we shall
provide a definite model for the two-way speed of light. In Sect.3,
we shall discuss various experimental set up and the expected form
of the signal. Finally, in Sect.4 we shall present our summary and
conclusions.
\section{The two-way speed of
light in a gaseous medium}
Rigorous treatments of light propagation in dielectric media are
based on the extinction theory \cite{born}. This was originally
formulated for continuous media where the interparticle distance is
smaller than the light wavelength. In the opposite case of an
isotropic, dilute random medium \cite{weber}, it is relatively easy
to compute the scattered wave in the forward direction and obtain
the refractive index. However, if there are convective currents,
taking into account the motion of the molecules that make up the gas
is a non-trivial problem. If solved, one expects an angular
dependence of the refractive index and an anisotropy of the phase
speed of the refracted light.
This expectation derives from a much simpler, semi-quantitative
approach where one introduces from scratch the refractive index
${\cal N}$ of the gas and the time $t$ spent by refracted light to
cover some given distance $L$ within the medium. By assuming
isotropy, one would find $t={\cal N}L/c$. This can be expressed as
the sum of $t_0=L/c$ and $t_1=({\cal N}-1)L/c$ where $t_0$ is the
same time as in the vacuum and $t_1$ represents the additional,
average time by which the refracted light is ``slowed'' down by the
presence of matter. If there are convective currents, so that $t_1$
is different in different directions, one can deduce an anisotropy
of the speed of light proportional to $({\cal N}-1)$. To see this,
let us consider light propagating in a 2-dimensional plane and
express $t_1$ as
\begin{equation} t_1={{L}\over{c}}f({\cal N}, \theta, \beta)
\end{equation} with $\beta=V/c$, $V$ being the velocity of the
laboratory with respect to the preferred frame $\Sigma$ where the
isotropic form
\begin{equation}
\label{boundary} f({\cal N}, \theta, 0)={\cal N}-1
\end{equation}
is assumed. By expanding around ${\cal N}=1$ where, whatever
$\beta$, $f$ vanishes by definition, one finds for gaseous systems
(where ${\cal N}-1 \ll 1$) the universal trend
\begin{equation} f({\cal N}, \theta,\beta)\sim ({\cal N}-1)F(\theta,\beta) \end{equation}
with
\begin{equation}
F(\theta,\beta)\equiv (\partial f/\partial {\cal N})|_{ {\cal N}=1}
\end{equation} and $F(\theta,0)=1$.
Therefore, from \begin{equation} t({\cal N},\theta,\beta)=
{{L}\over{c({\cal N},\theta,\beta)}}\sim {{L}\over{c}}+
{{L}\over{c}}({\cal N}-1)~F(\theta,\beta)
\end{equation} one gets
\begin{equation}
c({\cal N},\theta,\beta)\sim {{c}\over{ {\cal N} }}~ \left[1-
({\cal N}-1) ~(F(\theta,\beta) -1)\right]
\end{equation}
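As a quick numerical sanity check of this first-order expansion (the values of ${\cal N}$ and $F$ below are illustrative), the exact speed $L/t$ and the expanded form agree to ${\cal O}(({\cal N}-1)^2)$:

```python
# With t = (L/c) [1 + (N - 1) F], the exact speed L/t should agree with
# the first-order form (c/N) [1 - (N - 1)(F - 1)] up to O((N - 1)^2).
c = 1.0            # work in units where c = 1
N = 1.000293       # air-like refractivity
F = 1.5            # illustrative value of F(theta, beta)

exact = c / (1.0 + (N - 1.0) * F)
approx = (c / N) * (1.0 - (N - 1.0) * (F - 1.0))

# relative difference must be of second order in (N - 1)
assert abs(exact - approx) / exact < ((N - 1.0) ** 2) * 10.0
```

The residual scales as $({\cal N}-1)^2 \sim 10^{-7}$ for air, far below the refractivity itself, which justifies keeping only the leading term for gaseous media.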
Analogous relations hold for the two-way speed of light $
\bar{c}({\cal N},\theta,\beta)$ \begin{equation} \bar{c}({\cal
N},\theta,\beta)={{2~c({\cal N},\theta,\beta)c({\cal N},\pi
+\theta,\beta)}\over{c({\cal N},\theta,\beta) +c ({\cal N},\pi
+\theta,\beta)}} \sim {{c}\over{ {\cal N} }} \left[1- ({\cal N}-1)
~\left( {{F(\theta,\beta) + F(\pi+\theta,\beta)}\over{2}} -1\right) \right]
\end{equation} that is commonly measured in optical resonators. In
this case, one predicts a non-zero anisotropy
\begin{equation} {{\Delta \bar{c}_\theta}\over{c}} \equiv {{{\bar{c}}({\cal N},\pi/2,\beta)-
{\bar{c}}({\cal N},0,\beta)}\over{c}}\sim ({\cal N}-1)~{{\Delta
F}\over{2}}
\end{equation} with $\Delta F= F(0,\beta)+F(\pi,\beta)
-F(\pi/2,\beta)-F(3\pi/2,\beta)$ and the characteristic scaling law
\begin{equation} \label{scale} {{
\Delta\bar{c}_\theta( {\cal N} ) } \over{ \Delta \bar{c}_\theta(
{\cal N}') }} \sim {{ {\cal N}-1 }\over{ {\cal N}'-1 }}
\end{equation} More quantitative estimates can be obtained by exploring
some general properties of the function
$F(\theta,\beta)$. By expanding in powers of $\beta$
\begin{equation}
F(\theta,\beta)-1 = \beta F_1(\theta) + \beta^2 F_2(\theta)+...
\end{equation}
and taking into account that, by the very definition of two-way
speed, $\bar{c}({\cal N},\theta,\beta)= \bar{c}({\cal
N},\theta,-\beta)$, it follows that $F_1(\theta)=-F_1(\pi +
\theta)$. Therefore, we get the general structure of the two-way
speed of light to ${\cal O}(\beta^2)$
\begin{equation}
\label{legendre} \bar{c}({\cal N},\theta,\beta) \sim {{c}\over{
{\cal N} }} \left[1- ({\cal N}-1)~\beta^2
\sum^\infty_{n=1}\zeta_{2n}P_{2n}(\cos\theta)
\right]
\end{equation}
in which we have expressed the combination $F_2(\theta) + F_2(\pi
+\theta)$ as an infinite expansion of even-order Legendre
polynomials with unknown coefficients $\zeta_{2n}={\cal O}(1)$.
This general structure can be compared with the corresponding result
\cite{pla} obtained by using Lorentz transformations to connect S'
to the preferred frame
\begin{equation} \label{twoway} \bar{c}({\cal N},\theta,\beta)\sim {{c}\over{ {\cal N}
}}~[1-\beta^2~(A+B\sin^2\theta)] \end{equation} with
\begin{equation} \label{lorentz} A\sim 2({\cal N}-1)
~~~~~~~~~~~~B\sim -3({\cal N}-1)\end{equation} that corresponds to
setting $\zeta_2=2$ in Eq.(\ref{legendre}) and all $\zeta_{2n}=0$ for $n
> 1$. Eqs.(\ref{twoway})-(\ref{lorentz}), that represent a definite realization
of the general structure in (\ref{legendre}), provide a partial
answer to the problems posed by our limited knowledge of the
electromagnetic properties of gaseous systems and will be adopted in
the following as our basic model for the two-way speed of light.
Notice that Eqs.(\ref{twoway})-(\ref{lorentz}) lead to
\begin{equation} \label{eq1}
{{\Delta\bar{c}_\theta( {\cal N})}\over{c}} \sim 3 ({\cal N}
-1)~{{V^2}\over{c^2}}\end{equation} and thus Eq.(\ref{scale}) is
identically satisfied. At the same time, one gets agreement with the
pattern observed in the classical and modern ether-drift
experiments, as illustrated in Ref.\cite{pla}, which suggests (for
gaseous media {\it only}) a relation of the type in Eq.(\ref{eq1}).
In fact, in the classical experiments performed in air at
atmospheric pressure, where ${\cal N}\sim 1.000293$, the observed
anisotropy was ${{\Delta\bar{c}_\theta}\over{c}}\lesssim 10^{-9}$
thus providing a typical value $V/c\sim 10^{-3}$, as that associated
with most cosmic motions. Analogously, in the classical experiments
performed in helium at atmospheric pressure, where ${\cal N}\sim
1.000035$ (and in a modern experiment with He-Ne lasers where ${\cal
N}\sim 1.00004$), the observed effect was
${{\Delta\bar{c}_\theta}\over{c}}\lesssim 10^{-10}$ so that again
$V/c\sim 10^{-3}$.
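The magnitudes quoted above follow directly from Eq.(\ref{eq1}); a minimal numerical check with the nominal $V/c = 10^{-3}$:

```python
# Order-of-magnitude check of Delta c / c ~ 3 (N - 1) (V/c)^2
beta = 1.0e-3                       # V/c ~ 10^-3, typical of cosmic motions
for name, N in [("air", 1.000293), ("helium", 1.000035)]:
    aniso = 3.0 * (N - 1.0) * beta ** 2
    print(name, aniso)
# air    gives ~ 8.8e-10, consistent with the observed <~ 1e-9
# helium gives ~ 1.1e-10, consistent with the observed <~ 1e-10
```

Both values land on the observed upper bounds for the same $V/c$, which is the internal consistency the text points out.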
Notice also that, although originating from a different theoretical
framework, Eq.(\ref{twoway}) is formally analogous to the expression
of the two-way speed of light in the RMS formalism
\cite{robertson,ms} where $A$ and $B$ are taken as free parameters.
One conceptual detail concerns the gas refractive index whose
reported values are experimentally measured on the earth by two-way
measurements. For instance for the air, the most precise
determinations are at the level $10^{-7}$, say ${\cal N}_{\rm
air}=1.0002926..$ for yellow light at STP (Standard Temperature and
Pressure). By assuming a non-zero anisotropy in the earth's frame,
one should interpret the isotropical value $c/{\cal N_{\rm air}}$ as
an angular average of Eq.(\ref{twoway}), i.e.
\begin{equation} \label{nair} {{c}\over{ {\cal N}_{\rm air} }}\equiv
\langle\bar{c}(\bar{\cal N}_{\rm air},\theta,\beta)\rangle=
{{c}\over{ \bar {\cal N} _{\rm air} }} ~[1-{{1}\over{2}} (\bar{\cal
N}_{\rm air} -1){{V^2}\over{c^2}}]
\end{equation} From this relation, one can determine the unknown value $\bar
{\cal N} _{\rm air} \equiv {\cal N}(\Sigma)$ (as if the gas were at
rest in $\Sigma$), in terms of the experimentally known quantity
${\cal N}_{\rm air}\equiv{\cal N}(earth)$ and of $V$. In practice,
for the standard velocity values involved in most cosmic motions,
say 200 km/s $\leq V \leq $ 400 km/s, the difference between ${\cal
N}(\Sigma)$ and ${\cal N}(earth)$ is well below $10^{-9}$ and thus
completely negligible. The same holds true for the other gaseous
systems at STP (say nitrogen, carbon dioxide, helium,..) for which
the present experimental accuracy in the refractive index is, at
best, at the level $10^{-6}$. Finally, the isotropic two-way speed
of light is better determined in the low-pressure limit where
$({\cal N}-1)\to 0$. In the same limit, for any given value of $V$,
the approximation ${\cal N}(\Sigma)={\cal N}(earth)$ becomes better
and better.
\section{Ether-drift experiments in gaseous media}
From the point of view of ether-drift experiments, the crucial
ingredient, that might indicate the existence of a preferred frame,
consists in detecting the characteristic modulations of the signal
due to the earth's rotation. Descriptions of this important effect
are already available in the literature. For instance, within the
SME model \cite{sme} the relevant formulas are given in the appendix
of Ref.\cite{mewes} and for the RMS test theory \cite{robertson,ms}
one can look at Ref.\cite{applied}. However, either due to the great
number of free parameters (19 in the SME model) and/or to the
restriction to a definite experimental set up, it is not always easy
to adapt these papers to the actual conditions needed for our
experimental test. For this reason, in the following, we will
present a set of compact formulas that can be immediately used by
the reader to evaluate the signal when two arbitrary gaseous media
fill the resonating cavities. The formalism covers most experimental
set up including the very recent type of experiment proposed in
Ref.\cite{luiten} to perform tests of the Standard Model.
The main point is that the earth's rotation enters only through two
quantities, $v=v(t)$ and $\theta_0=\theta_0(t)$, respectively the
magnitude and the angle associated with the projection of the
unknown cosmic earth's velocity ${\bf{V}}$ in the plane of the
interferometer.
Once the angle $\theta_0$ is conventionally defined when one of the
arms of the interferometer is oriented to the North point in the
laboratory (counting $\theta_0$ from North through East so that
North is $\theta_0=0$ and East is $\theta_0=\pi/2$), we can
immediately use the formulas given by Nassau and Morse
\cite{nassau}. These are valid for short-term observations, say 3-4
days, where there are no appreciable changes in the cosmic velocity
due to changes in the earth's orbital velocity around the Sun and
the only time dependence is due to the earth's rotation.
In this approximation, introducing the magnitude $V$ of the full
earth's velocity with respect to a hypothetic preferred frame
$\Sigma$, its right ascension $\alpha$ and angular declination
$\gamma$, we get
\begin{equation} \label{cosine}
\cos z(t)= \sin\gamma\sin \phi + \cos\gamma
\cos\phi \cos(\tau-\alpha)
\end{equation} \begin{equation}
\sin z(t)\cos\theta_0(t)= \sin\gamma\cos \phi -\cos\gamma
\sin\phi \cos(\tau-\alpha)
\end{equation} \begin{equation}
\sin z(t)\sin\theta_0(t)= \cos\gamma\sin(\tau-\alpha) \end{equation}
\begin{equation} \label{projection}
v(t)=V \sin z(t) .
\end{equation}
Here $z=z(t)$ is the zenithal distance of ${\bf{V}}$. Namely, $z=0$
corresponds to a ${\bf{V}}$ which is perpendicular to the plane of
the interferometer and $z=\pi/2$ to a ${\bf{V}}$ that lies entirely
in that plane. Further, $\phi$ is the latitude of the laboratory and
$\tau=\omega_{\rm sid}t$ is the sidereal time of the observation in
degrees ($\omega_{\rm sid}\sim {{2\pi}\over{23^{h}56'}}$).
Let us now consider two orthogonal cavities oriented for simplicity
North-South (cavity 1) and East-West (cavity 2) in the laboratory
frame. They are filled with two different gaseous media with
refractive indices ${\cal N}_i$ (i=1,2) such that ${\cal
N}_i=1+\epsilon_i$, and $0\leq \epsilon_i \ll 1$. The frequency in
each cavity is \begin{equation} \nu_i(\theta_i)=\bar{c}_i({\cal
N}_i,\theta_i,\beta)k_i \end{equation} and the frequency shift is
\begin{equation} \Delta\nu=\nu_1(\theta_1)-\nu_2(\theta_2)
\end{equation} In the above relations we have introduced the
parameters $k_i$
\begin{equation} k_i={{m_i}\over{2L_i}}\end{equation} where $m_i$ are integers
fixing the cavity modes and $L_i$ are the cavity lengths. Finally,
$\theta_i$ is the angle
between ${\bf{V}}$ and the axis of the i-th cavity and
$\bar{c}_i({\cal N}_i,\theta_i,\beta)$ denote the two-way speeds of
light in (\ref{twoway}).
We observe that, in the presence of an effective vacuum thermal
gradient, one might also consider pure thermal conduction effects in
the solid parts of the apparatus. Even by using cavities with an
ultra-low thermal expansion coefficient, these conduction effects
could induce tiny differences of the cavity lengths (and thus of the
cavity frequencies) upon active rotations of the apparatus or under
the earth's rotation. However, this effect does not depend on the
gas that fills the cavity and therefore can be preliminarily
evaluated and subtracted out by first running the experiment in the
vacuum mode, i.e. at the same room temperature but when no gas is
present inside the cavities. The precise experimental limits of
Ref.\cite{herrmann} (obtained with vacuum cavities at room
temperature) show that any such effect can be reduced to the level
$10^{-15}-10^{-16}$ and thus would be irrelevant for our purpose. In
fact, as we shall show in a moment, the typical magnitudes of the
signal, expected by running the experiments in the gaseous mode,
should be larger by 4-5 orders of magnitude.
By introducing the unit vectors $\hat{\bf n}_i$ that fix the
direction of the two cavities and the projection ${\bf{v}}$ of the
full ${\bf{V}}$ in the interferometer's plane, one finds
\begin{equation}
V^2\sin^2\theta_i=V^2(1-\cos^2\theta_i)=V^2-(\hat{\bf n}_i\cdot
{\bf{v}} )^2 \end{equation} so that ($v=|{\bf{v}}|$)
\begin{equation} V^2\sin^2\theta_1=V^2- v^2\cos^2\theta_0 \end{equation} and
\begin{equation} V^2\sin^2\theta_2=V^2- v^2\sin^2\theta_0\end{equation}
Therefore, by defining the reference frequency $\nu_0={{c
k_1}\over{{\cal N}_1}}$ and introducing the parameter $\xi$ through
\begin{equation} \xi={{ {\cal N}_1 k_2}\over{{\cal N}_2 k_1 }} \end{equation}
one finds the relative frequency shift \begin{equation}
\label{general} {{\Delta \nu(t)}\over{\nu_0}}=1- \xi +
{{V^2}\over{c^2}}[\xi(A_2 +B_2) -(A_1+B_1)] +
{{v^2(t)}\over{c^2}}[B_1\cos^2\theta_0(t) - \xi
B_2\sin^2\theta_0(t)] \end{equation} For a symmetric apparatus
where ${\cal N}_1={\cal N}_2$, $A_1=A_2$, $B_1=B_2=B$ and $\xi=1$,
one finds
\begin{equation} \label{symm}{{\Delta \nu(t)_{\rm
symm}}\over{\nu_0}} = B {{v^2(t)}\over{c^2}} \cos2\theta_0(t)
\end{equation} On the other hand for a non-symmetric apparatus of
the type considered in Ref.\cite{luiten} with $L_1=L_2=L$, but where
one can conveniently arrange ${\cal N}_1=1$ (up to negligible terms)
so that $A_1\sim B_1 \sim 0$, denoting ${\cal N}_2={\cal N}$,
$A_2=A$, $B_2=B$, ${{m_2}\over{m_1}}={\cal P}$, we find
\begin{equation} \label{asymm} {{\Delta \nu(t)}\over{\nu_0}}=1-
{{{\cal P}\over{\cal N}}}+ {{{\cal P}\over{\cal
N}}}{{V^2}\over{c^2}}(A +B) - B{{{\cal P}\over{\cal
N}}}{{v^2(t)}\over{c^2}} \sin^2\theta_0(t) \end{equation} To
consider experiments where one or both resonators are placed in a
state of active rotation (at a frequency $\omega_{\rm rot} \gg
\omega_{\rm sid}$), it is convenient to modify Eq.(\ref{general}) by
rotating the resonator 1 by an angle $\delta_1$ and the resonator 2
by an angle $\delta_2$ so that the last term in Eq.(\ref{general})
becomes
\begin{equation} {{v^2(t)}\over{
c^2}}[B_1\cos^2(\delta_1-\theta_0(t)) - \xi
B_2\sin^2(\delta_2-\theta_0(t))] \end{equation} Therefore, in a
fully symmetric apparatus where ${\cal N}_1={\cal N}_2$, $A_1=A_2$,
$B_1=B_2=B$ and $\xi=1$ and both resonators rotate, as in
Ref.\cite{schiller}, setting
\begin{equation}\delta_1=\delta_2=\omega_{\rm rot}t \end{equation} one obtains
\begin{equation} \label{symm2} {{\Delta \nu(t)_{\rm
symm}}\over{\nu_0}}= B {{v^2(t)}\over{c^2}} \cos2( \omega_{\rm rot}t
-\theta_0(t)) \end{equation} On the other hand, if only one
resonator rotates, as in Ref.\cite{herrmann}, setting $\delta_1=0$
and $\delta_2=\omega_{\rm rot}t$ one obtains the alternative result
\begin{equation} \label{asymm2} {{\Delta \nu(t)}\over{\nu_0}}= B
{{v^2(t)}\over{2 c^2}}[\cos2\theta_0(t) + \cos2( \omega_{\rm rot}t
-\theta_0(t))] \end{equation} When the signal is first filtered at
the frequency $\omega=\omega_{\rm rot} \gg \omega_{\rm sid}$, the
main difference between the two expressions is an overall factor of two.
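The factor of two can be verified by direct demodulation. In the sketch below (an illustration with an arbitrary overall scale; $\theta_0$ is frozen over one rotation period, which is legitimate since $\omega_{\rm rot} \gg \omega_{\rm sid}$), the signals of Eqs.(\ref{symm2}) and (\ref{asymm2}) are projected onto the quadratures at twice the rotation frequency, where the modulation lives, and the extracted amplitudes are compared:

```python
import numpy as np

B_v2_c2 = 1.0e-10        # assumed overall scale B v^2/c^2 (arbitrary here)
theta0 = 0.7             # projection angle, frozen over one rotation period
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)  # omega_rot * t

sig_symm = B_v2_c2 * np.cos(2.0 * (t - theta0))                                  # Eq. (symm2)
sig_asym = 0.5 * B_v2_c2 * (np.cos(2.0 * theta0) + np.cos(2.0 * (t - theta0)))  # Eq. (asymm2)

def amp_2w(s, t):
    """Amplitude of the 2*omega_rot Fourier component of s(t)."""
    c = 2.0 * np.mean(s * np.cos(2.0 * t))
    d = 2.0 * np.mean(s * np.sin(2.0 * t))
    return np.hypot(c, d)

ratio = amp_2w(sig_symm, t) / amp_2w(sig_asym, t)
assert abs(ratio - 2.0) < 1e-6
```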
Let us now return to the general case of a non-rotating setup,
Eq.(\ref{general}). Using Eqs.(\ref{cosine})-(\ref{projection}) we
obtain the simple Fourier expansion \begin{equation} {{\Delta
\nu(t)}\over{\nu_0}}=1-\xi + (g_0+g_1\sin\tau +g_2\cos\tau +g_3\sin
2\tau+ g_4\cos2\tau )\end{equation} where \begin{equation}
\label{g0} g_0={{V^2}\over{c^2}}[ \xi(A_2 +B_2) -(A_1+B_1) +
B_1(\sin^2\gamma\cos^2\phi+ {{1}\over{2}}\cos^2\gamma\sin^2\phi)
-{{1}\over{2}}\xi B_2 \cos^2\gamma ]\end{equation} \begin{equation}
\label{f12} g_1=-{{1}\over{2}}{{V^2}\over{c^2}}B_1\sin 2\gamma\sin
2\phi \sin \alpha
~~~~~~~~~~~~~~~~~~~~~~~g_2=-{{1}\over{2}}{{V^2}\over{c^2}}B_1\sin
2\gamma\sin 2\phi \cos \alpha \end{equation} \begin{equation}
\label{f34} g_3={{1}\over{2}}{{V^2}\over{c^2}}(B_1\sin^2\phi +\xi
B_2)\cos^2\gamma\sin 2\alpha
~~~~~~~~~~g_4={{1}\over{2}}{{V^2}\over{c^2}}(B_1\sin^2\phi +\xi
B_2)\cos^2\gamma\cos 2\alpha~~\end{equation} Since the mean signal
is most likely affected by systematic effects, one usually
concentrates on the daily modulation. In this case, assuming that
$g_1$, $g_2$, $g_3$ and $g_4$ can be extracted to good accuracy from
the experimental data, one can try to obtain a pair of angular
variables through the two independent determinations of $\alpha$
\begin{equation} \label{alpha} \tan \alpha=
{{g_1}\over{g_2}}~~~~~~~~~~~~~~~\tan 2\alpha=
{{g_3}\over{g_4}}\end{equation} and the relation \begin{equation}
\tan |\gamma| ={{|B_1\sin^2\phi +\xi B_2|}\over{|2 B_1\sin 2\phi|}}~
\sqrt{ {{g^2_1+g^2_2}\over{g^2_3+g^2_4}}} \end{equation} Notice that
Eqs.(\ref{g0})-(\ref{f34}) remain unchanged under the replacement
$(\alpha,\gamma) \to (\alpha+\pi,-\gamma)$. Also, two dynamical
models that predict the same anisotropy parameters up to an overall
re-scaling $B_i \to \lambda B_i$ would produce the same $|\gamma|$
from the experimental data.
Finally for a symmetric apparatus, where $B_1=B_2=B$ and $\xi=1$,
one obtains the simpler relation \begin{equation} \label{gamma} \tan
|\gamma| ={{1+\sin^2\phi }\over{|2 \sin 2\phi|}}~ \sqrt{
{{g^2_1+g^2_2}\over{g^2_3+g^2_4}}} \end{equation} where any
reference to the anisotropy parameters drops out.
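The Fourier amplitudes Eqs.(\ref{g0})-(\ref{f34}) and the reconstruction formulas Eqs.(\ref{alpha}) and (\ref{gamma}) can be cross-checked by generating the symmetric-apparatus signal Eq.(\ref{symm}) directly from Eqs.(\ref{cosine})-(\ref{projection}) and projecting out its sidereal harmonics. A sketch, with arbitrary illustrative values of $B$, $V^2/c^2$, $\gamma$, $\phi$, $\alpha$ (assumptions, not data):

```python
import numpy as np

# Illustrative (assumed) inputs for a symmetric apparatus, B1 = B2 = B, xi = 1
B, V2_c2 = -2.0e-4, 1.0e-6
gamma, phi, alpha = np.radians(-20.0), np.radians(45.0), np.radians(70.0)

tau = np.linspace(0.0, 2.0 * np.pi, 8000, endpoint=False)
sz_ct = np.sin(gamma) * np.cos(phi) - np.cos(gamma) * np.sin(phi) * np.cos(tau - alpha)
sz_st = np.cos(gamma) * np.sin(tau - alpha)

# Eq. (symm): Delta nu / nu_0 = B (v^2/c^2) cos(2 theta0)
signal = B * V2_c2 * (sz_ct**2 - sz_st**2)

def proj(f):
    # Fourier projection onto a unit-amplitude harmonic
    return 2.0 * np.mean(signal * f)

g1, g2 = proj(np.sin(tau)), proj(np.cos(tau))
g3, g4 = proj(np.sin(2.0 * tau)), proj(np.cos(2.0 * tau))

# Compare with Eqs. (f12)-(f34) for B1 = B2 = B, xi = 1
assert np.isclose(g1, -0.5 * V2_c2 * B * np.sin(2 * gamma) * np.sin(2 * phi) * np.sin(alpha),
                  rtol=1e-6, atol=0.0)
assert np.isclose(g3, 0.5 * V2_c2 * B * (np.sin(phi)**2 + 1.0) * np.cos(gamma)**2 * np.sin(2 * alpha),
                  rtol=1e-6, atol=0.0)

# Angle reconstruction, Eqs. (alpha) and (gamma); note the
# (alpha, gamma) -> (alpha + pi, -gamma) ambiguity discussed in the text
alpha_rec = np.arctan2(g1, g2)
tan_abs_gamma = (1.0 + np.sin(phi)**2) / abs(2.0 * np.sin(2 * phi)) \
    * np.sqrt((g1**2 + g2**2) / (g3**2 + g4**2))
assert np.isclose(np.tan(alpha_rec), np.tan(alpha))
assert np.isclose(abs(np.tan(gamma)), tan_abs_gamma)
```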
To obtain some order of magnitude estimate, let us consider the
amplitude of the modulation of the signal at the sidereal frequency
for a typical latitude of the laboratory $|\phi |\sim 45^\circ$. This is
given by \begin{equation} g_{\omega_{\rm sid}}= \sqrt{ g^2_1 +
g^2_2}= {{1}\over{2}}{{V^2}\over{c^2}}~|B_1\sin 2\gamma|
\end{equation} By assuming the cavity 1 to be filled with carbon
dioxide (whose refractive index at atmospheric pressure is ${\cal
N}_1 \sim 1.00045$) and the typical value ${{V^2}\over{c^2}} \sim
10^{-6}$ (associated with most cosmic motions) one expects a typical
modulation of the relative frequency shift $ g_{\omega_{\rm
sid}}\sim 10^{-10}$. Analogously, for helium at atmospheric pressure
(where ${\cal N}_1 \sim 1.000035$) one expects $g_{\omega_{\rm
sid}}\sim 10^{-11}$. As anticipated, these values would be 4$-$5
orders of magnitude larger than the limit $10^{-15}-10^{-16}$ placed
by the present ether-drift experiments in vacuum.
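These order-of-magnitude estimates follow from a one-line computation, sketched below under the rough assumption $|B_1 \sin 2\gamma| \sim {\cal N}_1 - 1$ (the precise anisotropy parameter is model dependent; this scaling is only meant to reproduce the quoted magnitudes):

```python
import math

V2_c2 = 1.0e-6  # (V/c)^2 for typical cosmic velocities, V ~ 300 km/s

def g_sid(n_minus_1, V2_c2=V2_c2):
    """Rough sidereal modulation amplitude, assuming |B sin(2 gamma)| ~ N - 1."""
    return 0.5 * V2_c2 * n_minus_1

g_co2 = g_sid(0.00045)   # carbon dioxide at atmospheric pressure, N - 1 ~ 4.5e-4
g_he = g_sid(0.000035)   # helium at atmospheric pressure, N - 1 ~ 3.5e-5

assert 1e-10 < g_co2 < 1e-9    # ~ 10^-10, as quoted in the text
assert 1e-11 < g_he < 1e-10    # ~ 10^-11
```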
\section{Summary and conclusions}
In principle, on the basis of very general arguments related to a
non-zero vacuum energy, the physical condensed vacuum of present
particle physics might represent a preferred reference frame. In
this case, in any moving frame there might be a non-zero vacuum
energy-momentum flow along the direction of motion. By treating the
quantum vacuum as a relativistic medium, this non-zero
energy-momentum flow should behave as an effective thermal gradient.
As such, it could induce small convective currents in a loosely
bound system such as a gas, and thereby an anisotropy of the speed of light.
For this reason, we have considered in this paper a new class of
ether-drift experiments in which optical resonators are filled by
gaseous media. The existence of convective currents leads to the
general structure of the two-way speed in Eq.(\ref{legendre}) that
admits Eqs.(\ref{twoway})-(\ref{lorentz}) as a special case.
In this particular limit, by using the basic relations
(\ref{cosine})-(\ref{projection}) to take into account the effect of
the earth's rotation, we have derived a set of definite predictions
that cover most experimental setups. For the typical velocities
involved in most cosmic motions, the expected relative frequency
shift between the two resonators should be about 4$-$5 orders of
magnitude larger than the limit $10^{-15}-10^{-16}$ placed by the
present ether-drift experiments in vacuum.
We want to emphasize that the limited precision characterizing our
knowledge of the electromagnetic properties of gaseous media, which
forces us to restrict to relations
(\ref{twoway})-(\ref{lorentz}) for the two-way speed, prevents us
from excluding the existence of other competing mechanisms that, while
physically different from our proposed drift of the vacuum energy,
may simulate the same effects. For instance, a similar direction
dependence of the refractive index might also arise if the
molecules in the gas exhibit no net motion but a suitable
non-isotropic local interaction of the incoming radiation with the
medium is assumed, for example within the more general framework
of the SME model \cite{sme}. In this case, there might be
non-equivalent ways to obtain the same characteristic experimental
signatures.
Still, we believe that our picture of light anisotropy, as arising
from the convective currents that can be established in dilute
systems, provides a simple theoretical framework to understand why
Eq.(\ref{eq1}), while being consistent with the pattern observed in
gaseous systems, does {\it not} apply to Michelson-Morley
experiments performed in solid transparent media \cite{fox} such as
perspex (where ${\cal N}\sim 1.5$).
In any case, exploring the class of scenarios consistent with
Eqs.(\ref{twoway})-(\ref{lorentz}) leads to consider the following
experimental checks:
~~i) for a symmetric apparatus one should try to extract from the
data the product $H=B{{V^2}\over{c^2}}$ and, by using
Eqs.(\ref{alpha}) and (\ref{gamma}), two pairs of conjugate angular
variables $(\alpha,\gamma)$ and $(\alpha+\pi,-\gamma)$. Also, by
suitably changing the gaseous medium (and its pressure) within the
cavities, one should try to check the precise trend predicted in
Eqs.(\ref{scale}) and (\ref{lorentz}), namely
\begin{equation} {{H'}\over{H''}}\sim {{ {\cal N}'-1}\over{ {\cal
N}''-1 }}\end{equation}
ii)~~for a non-symmetric apparatus of the type proposed in
Ref.\cite{luiten}, where one can conveniently fix the cavity
oriented North-South to have ${\cal N}_1=1$ (up to negligible
terms), by using Eq.(\ref{lorentz}) one predicts $B_1\sim 0$ in
Eqs.(\ref{f12}) and (\ref{f34}) so that all time dependence should
be due to $B_2$. Thus the modulation of the signal should be a pure
$\omega=2\omega_{\rm sid}$ effect with no appreciable contribution
at $\omega=\omega_{\rm sid}$.
iii)~~for a deeper analysis, one should keep in mind that, in each
single session, the direction $(\alpha,\gamma)$ cannot be
distinguished from the opposite direction $(\alpha+\pi,-\gamma)$.
For this reason, a whole set $j=1,2,\dots,M$ of short-term experimental
sessions should be performed in different periods along the earth's
orbit to obtain an overall consistency check. Notice that, for a
complete description of the observations over a one-year period, it
is not necessary to modify the simple formulas
Eqs.(\ref{g0})-(\ref{f34}) and introduce explicitly the further
modulations associated with the orbital frequency $\Omega_{\rm
orb}\sim {{2\pi}\over{1 ~{\rm year}}}$. Rather, by plotting on the
celestial sphere all directions defined by the $(\alpha_j,\gamma_j)$
pairs obtained in the various short-term observations, one can try
to reconstruct the earth's ``aberration circle''. If this shows
up, by using the formulas of the spherical triangles, one will be
able to determine the mean cosmic velocity $\langle V\rangle $ from
the angular opening of the circle and the known value of the earth's
orbital
velocity $\sim 30 $ km/s. In this way, given the value of
$\langle H \rangle$, one will be able to disentangle $\langle
V\rangle $ from $B$ and estimate the absolute magnitude of the
anisotropy parameter.
\vskip 50 pt
A variety of strategies have been explored and proposed to directly detect dark matter (DM) via the interactions that it may have with the Standard Model (SM) particles. One possibility is to design detectors that look for excitations of a material as the DM scatters off or is absorbed in it. The sensitivity of such experiments at low DM masses is fundamentally limited by the minimum energy required to create an excitation in the material. Currently, several leading constraints on DM with masses below a GeV are set by detectors that look for DM-electron interactions using semiconductor targets, where thresholds are set by the ionization energy of the valence-band electrons (the bandgap), which is typically of order $1$~eV~\cite{Essig:2011nj,Tiffenberg:2017aac,Crisler:2018gci,Agnese:2018col,Abramoff:2019dfb,Aguilar-Arevalo:2019wdi,SENSEI:2020dpa,Arnaud:2020svb,Amaral:2020ryn,DAMIC-M:2022aks}. This is sensitive to DM scattering or absorption for DM masses exceeding $\sim 1$~MeV or $\sim 1$~eV, respectively.
We propose to extend the reach of semiconductor detectors towards lower DM masses through a simple innovation: adding shallow impurities to the semiconductor.
Shallow impurities, also called dopants, are atoms that introduce new energy levels close to the conduction band (n-type dopant) or valence band (p-type dopant), which are populated by electrons or holes contributed by the n- or p-type atoms.
By emitting these charges into the conduction or valence bands, dopants can be ionized with sub-bandgap energy depositions, as schematically shown in Fig.~\ref{fig:Schematic}. Since the dopant's electrons or holes are weakly bound, their orbits lie far from the impurity centers, so the ionization energies are largely independent of the details of the dopant and are instead mostly set by the macroscopic properties of the underlying semiconductor~\cite{PhysRev.97.869,PhysRev.98.915,kohn1957shallow,shklovskii2013electronic}.
In this simplified picture the smallness of the energy required to ionize a dopant can be quantified by accounting for the screening of the impurity potential due to the semiconductor's dielectric function $\epsilon\sim 10$, and for the smallness of the effective electron or hole masses compared to the electron mass in vacuum, typically $m_*/m_e \sim 0.1-1$.
These two factors suppress
the ionization energy
by roughly $m_* / (m_e \epsilon^2)\sim 10^{-3} -10^{-2}$ with respect to that of a free atom, resulting in energies of order $10-100$~meV. Such low thresholds allow designing ionization detectors that can be used to probe sub-MeV DM scattering or sub-eV DM absorption. In doped semiconductors, DM can also ionize valence-band electrons, so a doped target retains the signatures of pure semiconductor targets for $\gtrsim$ eV energy depositions.
\begin{figure}[t!]
\includegraphics[width=0.5\textwidth]{Schematic_plot_new.pdf}
\caption{Pure and doped semiconductor ionization signals: electron-hole pair for pure semiconductors (left), electron-only for n-type (middle) and hole-only for p-type (right) dopants. The bottom panels depict the corresponding energy levels. $E_g\sim 1$ eV is the bandgap and $E_I \sim 10$ meV is the dopant ionization energy.
\label{fig:Schematic}
}
\end{figure}
For models of DM scattering off electrons via a light mediator, we find that a doped target with an exposure of 100 g-day could probe the
entire currently unconstrained sub-MeV mass region for DM produced via freeze-in~\cite{Essig:2011nj,Chu:2011be,Dvorkin:2019zdi}, in the absence of dark counts (DC). For sub-eV dark-photon DM absorbed by electrons, an exposure as small as 1 gram-day with DC at the level of existing undoped detectors could probe currently unconstrained absorption cross sections.
Our proposal has the advantage that it could largely rely on existing technology. Doped semiconductor detectors have been fabricated for decades for infrared (IR) light detection~\cite{rogalski2002infrared,rogalski2019infrared}. Due to the nature of their applications (\textit{e.g.}, infrared astronomy) the DC requirements on existing detectors are above what is needed for DM detection~\cite{rieke2007infrared}.
However, single-electron sensitivity with small DC has been demonstrated in pure semiconductor detectors that collect charge, such as SENSEI ($\sim 450$~DC/g-day)~\cite{SENSEI:2020dpa,SENSEI:2021hcn}, or phonons, such as SuperCDMS HVeV~\cite{SuperCDMS:2020ymb}, and the DC could be further reduced with detector improvements~\cite{SENSEI:2020dpa}.
Thus, our proposal could be realized by combining the design of existing doped semiconductor detectors with the technologies used to obtain low DC in undoped detectors.
While in the body of this work we focus on studying ionization signals, for energy depositions below the dopant's ionization threshold, doped targets allow for transitions between the ground and excited electron or hole bound states. Upon relaxation to the ground state, phonons are emitted, which could be potentially detected with future single-phonon detectors. We leave the discussion of phonon signals to an appendix.
We organize this work as follows. We begin by reviewing the theory of electrons in doped semiconductors, focusing on n-type dopants (the situation for p-type dopants is analogous). We then compute the energy loss function due to ionization in these materials, obtain DM scattering and absorption rates, and project the detector's reach for a silicon target doped with phosphorus. In the Supplemental Materials, we discuss detector design, backgrounds, other low-threshold detection technologies, the expected phonon signals, and calculational details.
\section{Electronics of N-type Doped Semiconductors}
The electronic wavefunctions in a perfect crystal potential are given by Bloch wavefunctions $e^{i \mathbf{k} \cdot \mathbf{r} } u_{n\mathbf{k}}(\mathbf{r})$, where $\mathbf{k}$ labels the crystal momenta, $n$ is a band index, and $u_{n\mathbf{k}}$ are functions with the periodicity of the lattice.
For donor electrons in n-type semiconductors the spectrum differs from the perfect lattice solution due to the impurities. In this case, the wavefunctions can be expressed as a Bloch-state superposition~\cite{PhysRev.97.869,PhysRev.98.915,kohn1957shallow,shklovskii2013electronic},
\begin{equation}
\psi= \frac{1}{\sqrt{V}} \sum_{\mathbf{k},n} A_n(\mathbf{k}) e^{i \mathbf{k} \cdot \mathbf{r} } u_{n\mathbf{k}}(\mathbf{r}) \quad ,
\label{eq:exactsol}
\end{equation}
where $V$ is the semiconductor's volume, and the Fourier coefficients $A_n(\mathbf{k})$ are found by solving Schr\"odinger's equation for $\psi$ in the impurity and lattice potentials. The impurity potential can be approximated by that of an ionic charge screened by the lattice,
\begin{equation}
U(r)= -\frac{\alpha}{\epsilon r} \quad ,
\label{eq:impuritypotential}
\end{equation}
where $\alpha$ is the fine-structure constant, and $\epsilon$ is the crystal's dielectric function. The crystal potential is more complex and leads to a dependence of the wavefunctions on the band structure. Full knowledge of the bands, however, is not required to obtain approximate wavefunctions, as donor electrons bind only weakly to the impurity, so their typical momentum lies close to the bottom of the conduction band. This leads to two simplifications. First, the wavefunctions $u_{n\mathbf{k}}$ in Eq.~\eqref{eq:exactsol} can be approximated to be those at the (possibly degenerate) conduction band minima.
Fixing the band index $n$ to correspond to the conduction band and dropping it, and taking the momentum coordinate of the $\xi$-th degenerate conduction-band minimum to be $\mathbf{k_\xi}$, this corresponds to approximating $u_{n\mathbf{k}} \approx u_{\mathbf{k_\xi}}$, so the wavefunction Eq.~\eqref{eq:exactsol} for an electron with momentum close to the $\xi$-th minimum simplifies to
\begin{equation}
\psi_\xi \approx e^{i \mathbf{k_\xi \cdot \mathbf{r} }} u_{\mathbf{k_\xi}}(\mathbf{r}) F(\mathbf{r}) \quad ,
\label{eq:effmass}
\end{equation}
where $F(\mathbf{r})\equiv \sum_\mathbf{k} A(\mathbf{k}) e^{i \mathbf{k} \cdot{\mathbf{r}}}/\sqrt{V}$ and $\mathbf{k}$ is now the momentum relative to the band minimum. Eq.~\eqref{eq:effmass} indicates that donor electrons are described by bottom-of-band Bloch wavefunctions modulated by an ``envelope'' $F(\mathbf{r})$, which is the same for all degenerate minima $\xi$.
The second simplification is that near the band minima, band energies can be approximated by the leading term in a momentum expansion,
\begin{equation}
E(k)=\frac{\mathbf{k}^2}{2m_*} \quad ,
\label{eq:effectivemass}
\end{equation}
where we have assumed isotropy in momentum space for simplicity (``spherical'' band approximation), and $m_*$ is the electron's effective mass.\footnote{The spherical-band approximation is not precise in indirect bandgap semiconductors such as Si and Ge where the bands are anisotropic. However, as discussed in the appendix, treating the bands as spherical will suffice for our purposes.} With these approximations, the envelope functions and energy eigenvalues can be shown to be solutions of the Schr\"odinger equation with a Hamiltonian set by the screened impurity potential Eq.~\eqref{eq:impuritypotential} and the kinetic term Eq.~\eqref{eq:effectivemass}~\cite{PhysRev.97.869,PhysRev.98.915,kohn1957shallow,shklovskii2013electronic},
\begin{equation}
-\frac{\nabla^2}{2m_*}F(\mathbf{r}) + U(r) F(\mathbf{r})=E\, F(\mathbf{r}) \quad .
\label{eq:schr}
\end{equation}
The solutions to Eq.~\eqref{eq:schr} are Hydrogenic, so the energies (relative to the conduction band), Bohr radius of the bound electrons, and $1s$ ground-state envelope function are
\begin{equation}
E_n = -\frac{\alpha^2}{2n^2} \frac{m_*}{\epsilon^2} \quad , \quad a_* = \frac{\epsilon}{\alpha m_*} \quad , \quad F_{1s}(\mathbf{r})=\frac{e^{-r/a_*}}{\sqrt{\pi a_*^3}}\,,
\label{eq:En}
\end{equation}
where $n$ is the principal quantum number. For typical values of $m_*/m_e\sim 0.1-1$ and $\epsilon\sim 10$, the ionization energies $ E_I\equiv E_{n=1}$ are of order $10-100$~meV. The Bohr radius $a_*$ of a dopant electron is $\epsilon(m_e/m_*)\sim 10-100$ times larger than typical lattice spacings $a$, so its typical momentum lies near the origin of the first Brillouin zone, $|\mathbf{k}| \ll 1/a$, validating the bottom-of-band approximation. With our approximations both the ionization energy and Bohr radius are independent of the dopant atom, as anticipated in the introduction.
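As a concrete illustration, the sketch below evaluates Eq.~\eqref{eq:En} for Si:P, taking $\epsilon \simeq 11.7$ and an isotropic effective mass $m_*/m_e \simeq 0.26$ (an assumed representative value; in Si the bands are anisotropic and this choice is not unique, and central-cell corrections, neglected in the effective-mass method, raise the measured Si:P ionization energy to about 45 meV):

```python
RYDBERG_EV = 13.6057       # hydrogen Rydberg energy in eV
BOHR_ANGSTROM = 0.529177   # hydrogen Bohr radius in Angstrom

def dopant_levels(m_star_ratio, eps):
    """Hydrogenic ionization energy (eV) and Bohr radius (Angstrom), Eq. (En)."""
    E_I = RYDBERG_EV * m_star_ratio / eps**2
    a_star = BOHR_ANGSTROM * eps / m_star_ratio
    return E_I, a_star

# Assumed Si parameters: m*/m_e ~ 0.26, epsilon ~ 11.7
E_I, a_star = dopant_levels(0.26, 11.7)
assert 0.02 < E_I < 0.03       # ~ 26 meV, in the 10-100 meV range quoted
assert 20.0 < a_star < 30.0    # ~ 24 Angstrom >> lattice spacing ~ 5.4 Angstrom
```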
The model described above is the simplest version of the ``effective-mass method''~\cite{PhysRev.97.869,PhysRev.98.915}.
Corrections to the model arise from wavefunction overlap of electrons localized at different impurities, electron-electron interactions, and differences in the impurity potential at different sites. These corrections lead to dispersion in the discrete energies Eq.~\eqref{eq:En} and thus to ``impurity bands''. For high doping ($n_D\gtrsim 3\cdot10^{18}/\mathrm{cm}^3$ for uncompensated Si:P~\cite{swartz1960low,yamanouchi1967electric,lohneysen1990metal}), these bands allow for electric conduction as in a metal, resulting in a metal-insulator Mott-Anderson transition~\cite{mott1949basis,anderson1958absence} and eliminating the energy gap that is required for the design of ionization detectors. Here we only consider semiconductors with doping densities below the Mott value. Additional corrections to the effective-mass method come from short-distance modifications to the impurity potential Eq.~\eqref{eq:impuritypotential} that break the degeneracy of the band minima~\cite{aggarwal1965optical,shklovskii2013electronic}. For Si,
these corrections lead to a ground-state that is set by a superposition of the Bloch wavefunctions at the degenerate minima, modulated by the common 1s envelope of Eq.~\eqref{eq:En} (see appendix).
While up to now we have only considered donor electron bound states, the presence of impurities also affects the conduction-band electron wavefunctions, which are relevant as final states for computing ionization probabilities. The modified conduction-band electron wavefunctions are simply given by Eq.~\eqref{eq:effmass}, with envelope functions set by the positive-energy solutions of the Schr\"odinger Eq.~\eqref{eq:schr}; they are given by~\cite{holt1969matrix}
\begin{eqnarray}
\nonumber
F_{\mathbf{k}}(\mathbf{r})&=&
\frac{e^{\frac{\pi}{2ka_*}-i\mathbf{k}\cdot \mathbf{r}} }{\sqrt{V}}\,
\Gamma\Big(1-\frac{i}{ka_*}\Big)\,
{}_1F_1\Big[\frac{i}{ka_*},1,i(kr+\mathbf{k}\cdot \mathbf{r})\Big] ,
\label{eq:freesol}
\end{eqnarray}
where $\Gamma$ and ${}_1F_1$ are the Gamma and confluent hypergeometric functions.
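A simple consistency check of Eq.~\eqref{eq:freesol} is its value at the impurity site: since ${}_1F_1[\,\cdot\,,\,\cdot\,,0]=1$, one has $V|F_{\mathbf{k}}(0)|^2 = e^{\pi\eta}\,|\Gamma(1-i\eta)|^2$ with $\eta = 1/(k a_*)$, which should coincide with the familiar attractive-Coulomb (Sommerfeld) enhancement $2\pi\eta/(1-e^{-2\pi\eta})$. A quick numerical verification of this identity:

```python
import math
from scipy.special import gamma  # supports complex arguments

def coulomb_enhancement(eta):
    """V |F_k(0)|^2 from Eq. (freesol): exp(pi*eta) * |Gamma(1 - i*eta)|^2."""
    return math.exp(math.pi * eta) * abs(gamma(1.0 - 1j * eta))**2

# Compare with the Sommerfeld factor 2*pi*eta / (1 - exp(-2*pi*eta))
for eta in (0.1, 1.0, 5.0):
    expected = 2.0 * math.pi * eta / (1.0 - math.exp(-2.0 * math.pi * eta))
    assert abs(coulomb_enhancement(eta) / expected - 1.0) < 1e-8
```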
\section{Dark matter detection using doped semiconductors}
In a doped semiconductor, DM can interact with both valence band and dopant electrons. Since the typical momentum and energy transfer relevant for ionizing dopant electrons
are well separated from those for ionizing valence-band electrons, we can treat these two processes independently. The interaction rate on valence-band electrons proceeds as in pure semiconductors and has been computed in~\cite{Essig:2011nj,Graham:2012su,Lee:2015qva,Essig:2015cda,Derenzo:2016fse,Griffin:2019mvc,Trickle:2019nya,Griffin:2021znd,Hochberg:2021pkt,Knapen:2021run,Knapen:2021bwg,Trickle:2022fwt,An:2014twa,Bloch:2016sjj,Hochberg:2016sqx}. Here we focus instead on obtaining the single-ionization rate of DM interactions with the dopant electrons. Secondary ionization of other dopants by the excited electron is unlikely given the large separation between dopants for doping densities below the Mott transition, so it is not computed here.
We consider two example DM models. First, we take a DM particle $\chi$ scattering with electrons via an ultralight vector mediator $A'$ (``dark photon'') that kinetically mixes with the SM photon with mixing parameter $\kappa$~\cite{Holdom:1985ag,Galison:1983pa}, so that the momentum-space low-energy potential coupling $\chi$ with electrons is
\begin{equation}
V(\mathbf{q})=\frac{ g_\chi \kappa e}{q^2} \,,
\label{eq:potential}
\end{equation}
where $g_\chi$ is the coupling between $\chi$ and the mediator, $e$ the elementary charge, and the mediator mass $m_{A'}$ has been neglected with respect to the momentum transfer. Second, we consider a model where the kinetically-mixed vector field itself is the DM, which can be detected by absorption on electrons.
\begin{figure*}[t!]
\includegraphics[width=0.45\textwidth]{Reach_ionization.pdf}\includegraphics[width=0.45\textwidth]{Dark_photon_abs_reach.pdf}
\caption{\textit{Solid colored lines}: projected $90\%$ C.L. reach (single-charge ionization signal) for DM-electron scattering via a light mediator (\textbf{left}) and for dark photon DM absorption (\textbf{right}), for a Si:P target with doping density $n_D=1\times 10^{18}\textrm{cm}^{-3}$. Two contributions to the reach are shown, at low and high DM masses coming from P-dopant and Si ionization, respectively. The reach is computed for three levels of exposure and dark counts (DC): 1 kg-yr (blue) and 100 g-day (orange) exposure with zero DC, and 1 g-day exposure (red) with 450 g-day DC. \textit{Shaded grey}: exclusion regions from existing direct detection experiments~\cite{PhysRevLett.125.171802,Chiles:2021gxk, Hochberg:2021yud,PhysRevD.102.042001,PhysRevLett.107.051301}. \textit{Light gray}: excluded by stellar cooling~\cite{Vogel:2013raa}, solar reflection~\cite{An:2017ojc,An:2021qdl} and SN1987A~\cite{Chang:2018rso} (\textbf{left}), or from the solar emission of dark photons~\cite{PhysRevD.102.115022} (\textbf{right}). \textit{Light blue line (left panel):} cross section required for DM being produced by freeze-in ~\cite{Essig:2011nj,Dvorkin:2019zdi}. \textit{Grey-dashed lines:}
reach of other proposed targets for 1 kg-year exposure and no DC: superconductors~\cite{Hochberg:2015pha} (Al), polar~\cite{Knapen:2017ekk} (GaAs), Dirac~\cite{Hochberg:2017wce} and Fermi materials~\cite{Hochberg:2021pkt}.
\label{fig:scattering}
}
\end{figure*}
\textbf{Dark matter scattering:} the DM scattering rate per unit target mass is~\cite{Hochberg:2021pkt,Kahn:2021ttr}
\begin{equation}
R = \frac{\rho_{{\chi}}}{\rho_T m_{{\chi}}} \int d^3 \mathbf{\mathbf{v_{\chi}}} f(\mathbf{\mathbf{v_{\chi}}}) \Gamma(\mathbf{v_{\chi}}) \quad ,
\end{equation}
where $\rho_\chi=0.4 \, \mathrm{GeV}/\mathrm{cm}^3$, and $\rho_T$ are the DM and target mass densities (for Si $\rho_T=2.3\, \mathrm{g}/\mathrm{cm}^3$), $f(\mathbf{\mathbf{v_{\chi}}})$ is the DM velocity distribution in the halo~\cite{Drukier:1986tm}, with dispersion, escape, and earth velocities $v_0= 220\,\textrm{km/s}$, $v_{\rm esc}= 500\,\textrm{km/s}$, and $v_E= 240\,\textrm{km/s}$ (in the galactic frame). $ \Gamma(\mathbf{v}_\chi)$ is the scattering rate of a single DM particle with velocity $\mathbf{v_{\chi}}$ in the whole target, given by
\begin{equation}
\Gamma(\mathbf{v}_\chi)= \int \frac{d^3 \mathbf{q}}{(2\pi)^3} \left| V(\mathbf{q})\right|^2 \frac{q^2}{2\pi\alpha} \mathcal W(\mathbf{q},\omega_\mathbf{q})\,,
\end{equation}
with $\omega_\mathbf{q}= \mathbf{q}\cdot\mathbf{v_\chi}-q^2/(2m_\chi)$ being the energy transfer, $V(\mathbf{q})$ is given by Eq.~\eqref{eq:potential}, and the energy loss function (ELF) $\mathcal W$ is
\begin{equation}
\mathcal W(\mathbf{q},\omega) = \frac{(2\pi)^2 \alpha n_D}{{q}^2 |\epsilon(\mathbf{q},\omega)|^2} \! \sum_{\xi,\mathbf{k}} \delta(E_\mathbf{k}\!-\!E_{i}\!-\!\omega) \left|\left<\xi \mathbf{k} \left| \mathbf{\hat{\rho}}(\mathbf{q}) \right| i \right>\right|^2 \,,
\label{eq:ELF}
\end{equation}
where $n_D$ is the number density of dopants and $\mathbf{\hat{\rho}}(\mathbf{q})=e^{-i \mathbf{q}\cdot \hat{\mathbf{r}}}$ is the momentum-space electron-density operator. In Eq.~\eqref{eq:ELF}, we have already performed the sum over target electrons, so there is a factor of $1/|\epsilon(\mathbf{q},\omega)|^2$ from multi-particle screening~\cite{Trickle:2019nya,Hochberg:2015fth,Kahn:2021ttr}, and the term in brackets is the single-particle form factor between the initial (ground) state and free conduction-band electrons with momentum close to the $\xi$-th minimum. This form factor is obtained in the appendix, where we show that within the effective mass method described previously, and
for momentum transfers less than the inverse lattice spacing, $|\mathbf{q}|\ll 1/a$, it is given by the form factor between the initial (bound) and free Hydrogenic envelope functions $F_i$ and $F_{\mathbf{k}}$
\begin{equation}
\sum_{\xi} {\left|\left<\xi \mathbf{k} \left| e^{-i \mathbf{q}\cdot \hat{\mathbf{r}}} \right| i \right>\right|^2} = \bigg|\int d^3\mathbf{r} \,
F_{\mathbf{k}}(r)^* F_{i}(r) e^{-i \mathbf{q}\cdot {\mathbf{r}}} \bigg|^2 \quad .
\label{eq:hydrogenic}
\end{equation}
For ionization from the 1s ground state Eq.~\eqref{eq:En} into the free states Eq.~\eqref{eq:freesol}, Eq.~\eqref{eq:hydrogenic} has been analytically computed in~\cite{holt1969matrix,Chen:2015pha}. Using the result of~\cite{holt1969matrix} in Eq.~\eqref{eq:ELF} we obtain the ELF for n-type doped semiconductors
\begin{eqnarray}
\label{eq:ELFfinal}
\mathcal W(\mathbf{q},\omega) &=&\bigg(\frac{E_{\mathrm{eff}}}{E_0} \bigg)^2 \frac{2^{10}\pi^2\alpha m_* n_D a_*^4}{3 |\epsilon(\mathbf{q},\omega)|^2} \\ \nonumber
&& \frac{(3\tilde{q}^2+\tilde{k}^2+1) \exp\big[-\frac{2}{\tilde{k}} \tan^{-1}\big( \frac{2\tilde{k}}{\tilde{q}^2-\tilde{k}^2+1} \big) \big]}
{[(\tilde{q}+\tilde{k})^2+1]^3 [(\tilde{q}-\tilde{k})^2+1]^3 [1-\exp(-2\pi/\tilde{k})]}
\end{eqnarray}
where $\tilde{q}=q a_*$ and $\tilde{k}\equiv \sqrt{2 m_* (\omega-E_I)} a_*$ is the momentum of the final state electron for an energy transfer $\omega>E_I$. In Eq.~\eqref{eq:ELFfinal}, we have heuristically added a normalization prefactor $({E_{\mathrm{eff}}}/{E_0})^2$ to account for the ratio of local electric field at the donor center and the average field in the crystal, as done in photoionization calculations~\cite{dexter1958theory,anderson1975shallow,sclar1984properties}.
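For numerical evaluation, Eq.~\eqref{eq:ELFfinal} can be transcribed directly into a few lines of Python. The sketch below is our own illustration, not part of the original analysis: all quantities are in atomic units, the Si static dielectric constant $\epsilon(0,0)\simeq 11.7$ and the dopant density are illustrative inputs, the first numerator term is taken quadratic in $\tilde q$ ($3\tilde q^2$, as an s-state form factor must be even in $q$), and the inverse tangent is evaluated with \texttt{atan2} so the Coulomb phase stays on the continuous branch $(0,\pi)$.

```python
import math

HARTREE_EV = 27.2114
BOHR_CM = 5.29177e-9

# Si:P parameters from the text (atomic units unless noted)
E_I   = 0.045 / HARTREE_EV     # ionization energy, 45 meV
MSTAR = 0.3                    # effective mass, in units of m_e
ASTAR = 23.0                   # effective Bohr radius
EPS0  = 11.7                   # assumed static dielectric constant of Si
RATIO = 2.2                    # E_eff / E_0
ALPHA = 1.0 / 137.036
N_D   = 1e18 * BOHR_CM**3      # illustrative dopant density, converted to a.u.

def elf_hydrogenic(q, omega):
    """Hydrogenic ELF of Eq. (ELFfinal); q and omega in atomic units."""
    if omega <= E_I:
        return 0.0                       # below the ionization threshold
    qt = q * ASTAR
    kt = math.sqrt(2.0 * MSTAR * (omega - E_I)) * ASTAR
    pref = RATIO**2 * 2**10 * math.pi**2 * ALPHA * MSTAR * N_D * ASTAR**4 \
           / (3.0 * EPS0**2)
    # atan2 keeps the Coulomb phase continuous across qt^2 - kt^2 + 1 = 0
    phase = math.atan2(2.0 * kt, qt*qt - kt*kt + 1.0)
    num = (3.0*qt*qt + kt*kt + 1.0) * math.exp(-2.0 * phase / kt)
    den = ((qt + kt)**2 + 1.0)**3 * ((qt - kt)**2 + 1.0)**3 \
          * (1.0 - math.exp(-2.0 * math.pi / kt))
    return pref * num / den
```

As simple sanity checks, the ELF vanishes below threshold and, at fixed momentum transfer, falls off at energy transfers well above $E_I$.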
\textbf{Dark matter absorption:} the rate per unit mass for the absorption of kinetically-mixed vector DM is simply $R_{\chi} = \kappa^2 {\rho_{\chi}}/{\rho_T}\,\mathcal{W}(\mathbf{q=0}, m_\chi)$~\cite{Mitridate:2021ctr}, where the ELF is given by Eq.~\eqref{eq:ELFfinal}.
\section{Projected reach}
We now compute the projected DM reach taking for concreteness silicon doped with phosphorus as the target.
For this material, the parameters entering in the ELF Eq.~\eqref{eq:ELFfinal} are set to $E_I=45$~meV, $m_*=0.3\,m_e$, $a_*=23$~atomic units, and ${E_{\mathrm{eff}}}/{E_0}=2.2$, as discussed in the appendix. The resulting projections for DM scattering and absorption are presented in Fig.~\ref{fig:scattering}, where in both cases the phosphorus density is $n_D=10^{18}/\mathrm{cm^3}$. This density is chosen to maximize the number of target electrons while staying below the metal-insulator transition. For DM scattering, the bounds are presented as a function of the reference cross section on electrons $\bar{\sigma}_e \equiv { \mu_{\chi e}^2}/{\pi} |V(q=\alpha m_e)|^2$,
where $\mu_{\chi e}$ is the DM-electron reduced mass. For DM absorption, the bounds are presented as a function of the photon-dark photon mixing parameter $\kappa$. Bounds are presented for three assumptions regarding dark counts (DC) and exposure. In blue and orange we present limits for kg-year and 100 g-day exposures that consider only Poisson statistical uncertainties and assume no DC are observed. In red we project more conservative sensitivities with exposure and DC in line with SENSEI at MINOS~\cite{SENSEI:2020dpa}, that is, a 1 g-day exposure and 450 DC per g-day. Each projected exclusion curve is broken into two pieces to highlight the reach due to the ionization of dopants (masses below $\sim 1$~MeV for scattering or $\sim 1$~eV for absorption) or due to the ionization of valence-band electrons (at larger masses).\footnote{The Si ionization sensitivity projections include screening, and hence the 1 g-day curve assuming a DC of 450/g-day is weaker than the SENSEI limit shown in gray~\cite{SENSEI:2020dpa}.}
Our projections show that doped semiconductor targets have a significant discovery potential, and compare favorably against other proposed targets. From the figures we clearly see how introducing doping into the material extends the reach of semiconductor detectors to lower DM masses. In Fig.~\ref{fig:scattering}, we see that in the absence of DC, doped semiconductors have the potential to probe scattering cross sections that lead to DM being produced by freeze-in
down to the smallest allowed masses, $m_{\chi}\simeq 30$ keV. The excellent reach to DM scattering is in part explained by kinematic matching: for the typical momentum transfers $q_{\mathrm{typ}}\approx m_{\chi} v_\mathrm{rel}\approx m_{\chi} v_e \approx 100\,\mathrm{eV}\, (m_{\chi}/100\,\mathrm{keV})$, with $v_e\approx \alpha/\epsilon\sim 10^{-3}$ ($\approx v_{\chi}$) being the dopant electron velocity, the energy transfers are precisely of the order of dopant ionization energies, $q_{\mathrm{typ}} v_{\chi}\approx 100$~meV. The kinematics are further studied in the appendix, where we present differential scattering rates. The right panel of Fig.~\ref{fig:scattering} indicates that Si:P also has excellent reach to dark-photon absorption for DM masses $m_{A'}\lesssim 1$~eV. For DM absorption, even with a small 1 g-day exposure and with DC included in our estimates at the level currently observed by SENSEI, our proposal could probe currently unconstrained parameter space.
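The astrophysical inputs entering these projections are fixed by the truncated Maxwell-Boltzmann halo distribution $f(\mathbf{v}_\chi)$ introduced above. As a numerical aid (ours, not part of the original analysis), the short Python sketch below builds that distribution with the quoted parameters $v_0$, $v_{\rm esc}$, $v_E$ and checks its normalization and mean speed by direct quadrature; the grid sizes are arbitrary choices.

```python
import math

# Halo parameters quoted in the text (km/s)
V0, VESC, VE = 220.0, 500.0, 240.0

# Normalization of the truncated Maxwell-Boltzmann distribution:
# N = pi^(3/2) v0^3 [erf(z) - 2 z exp(-z^2)/sqrt(pi)],  z = vesc/v0
Z = VESC / V0
NORM = math.pi**1.5 * V0**3 * (math.erf(Z) - 2*Z*math.exp(-Z*Z)/math.sqrt(math.pi))

def f_halo(v, cth):
    """f(v_chi) in the Earth frame at speed v and cos(theta)=cth w.r.t. v_E."""
    u2 = v*v + VE*VE + 2.0*v*VE*cth      # galactic-frame speed squared
    return math.exp(-u2/V0**2)/NORM if u2 < VESC**2 else 0.0

def moment(power, nv=1000, nc=200):
    """2*pi * int v^(2+power) dv dcth f(v, cth), trapezoidal quadrature."""
    vmax = VESC + VE
    total = 0.0
    for i in range(nv + 1):
        v = vmax*i/nv
        wv = 0.5 if i in (0, nv) else 1.0
        inner = sum((0.5 if j in (0, nc) else 1.0)*f_halo(v, -1.0 + 2.0*j/nc)
                    for j in range(nc + 1)) * 2.0/nc
        total += wv * v**(2 + power) * inner
    return 2.0*math.pi*total*vmax/nv

norm_check = moment(0)   # should be close to 1
mean_speed = moment(1)   # mean DM speed seen on Earth, in km/s
```

Any halo moment entering the rate $R$, such as the mean inverse speed, can be obtained the same way by changing the power of $v$.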
\section{Discussion}
We have proposed a new DM detection strategy based on looking for DM interactions on dopant atoms. By computing DM interaction rates, we have shown that doped semiconductor targets have the potential to explore large regions of parameter space of two benchmark DM models, sub-MeV DM coupling to the SM via a light mediator, and kinetically-mixed sub-eV dark-photon DM.
This work begins the exploration of doped sub-MeV DM detectors. From the theoretical side, interaction rates on other doped targets and calculations of the Migdal effect on dopants will be required, while from an experimental perspective developments for the detector design are needed, as discussed in the appendix. We conclude that the development of doped semiconductor detectors with low dark counts is both scientifically and technologically motivated, and may lead to the discovery of DM.
\section{Acknowledgments}
We would like to thank Steve Holland, Junwu Huang, Noah Kurinsky, Guillermo Moroni, Sae Woo Nam, Roger Romani, Miguel Sofo Haro, Javier Tiffenberg, and Sho Uemura for useful discussions. The work of P.D.~is supported by the US Department of Energy under grant DE-SC0010008. D.E.U.~is supported by
Perimeter Institute for Theoretical Physics and by the Simons Foundation. Research at Perimeter Institute is supported
in part by the Government of Canada through the Department of Innovation, Science and
Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. R.E.~acknowledges support from DoE Grant DE-SC0009854, Simons Investigator in Physics Award 623940, the US-Israel Binational Science Foundation Grant No.~2016153, and the Heising-Simons Foundation Grant No.~79921.
M.S.~acknowledges support from Department of Energy Grants DE-SC0009919 and DE-SC0022104.
\newpage
\section{Supplementary Material}
\subsection{Detector design and backgrounds}
To realize the discovery potential of doped semiconductor targets, a scalable single-electron detection technology with low dark counts is required. Here we discuss one possible implementation, based on the Skipper-CCD detectors used by the SENSEI experiment~\cite{SENSEI:2020dpa}. Skipper-CCDs are imaging detectors that use high-resistivity n-type Si as the bulk absorber to detect ionization signals. Even though the target is n-type, these CCDs are not doped ionization detectors, since their doping density is extremely small, on the order of $10^{11}/\textrm{cm}^3$, and since they are operated at temperatures where the dopants are already ionized. Skipper-CCDs have demonstrated single electron-hole-pair resolution and dark counts as low as $450$ events per gram-day~\cite{SENSEI:2020dpa}. Building upon this technology, we envision designing a Skipper-CCD with a large level of n-type doping in the bulk.\footnote{A detector that instead measures ionization signals by reading out phonons, such as SuperCDMS HVeV, could also be considered, but the current single-electron dark current in such detectors is larger than at SENSEI by a factor of $10^3$~\cite{SuperCDMS:2020ymb}. In addition, SuperCDMS HVeV, at least in its current setup, would reject a single-electron or single-hole signal from an ionized dopant, as its single-electron bin corresponds only to events that are electron-hole pairs.}
The design of a doped Skipper-CCD would require several technological developments. For concreteness, we discuss these developments in the context of n-type targets (the situation for p-type detectors is analogous). First and foremost, current n-type Skipper-CCDs used for detecting DM only collect \textit{holes}, and they do so in a ``buried channel'' located right below the detector's frontside where charges are stored until readout. For our proposal to be realized, a doped n-type Skipper-CCD needs to collect ionized electrons from n-type dopants. A detailed design that satisfies this requirement will be presented in future work.
In order to collect the charge signals, and to ensure low levels of charge trapping, the detector would need to be placed in an electric field (as a conventional CCD), which would be provided by a bias voltage applied to detector contacts. Since the active area would be doped, this electric field would also induce currents within the ``impurity band'' formed by the dopant's energy levels, without electrons being excited into the conduction band (``hopping conductivity''~\cite{PhysRev.79.726}), which would constitute a dark current. A technology to eliminate this current exists, which is used in a type of doped semiconductor detector called ``Blocked Impurity Band'' detectors~\cite{petroff1986blocked} (BIBs). The idea is to introduce a layer of undoped semiconductor between the active detection region and the charge readout stages, which in our case are the buried channels, so that impurity band conduction is blocked. BIBs that operate with high levels of doping have quantum efficiencies of the order of $80\%$ for $\sim 1$~V of applied voltage across an absorber that is tens of $\mu$m wide~\cite{rogalski2019infrared}, which also indicates that charge trapping in these devices is under control. This also suggests that charge transfer across pixels (which is required for readout in a CCD-like setup) could be done efficiently even if the target is doped.
In order to suppress thermal generation of dark currents due to the ionization of dopants, the detector would need to be operated at cryogenic temperatures. The rate for the thermal generation of electrons from neutral donors can be estimated to be $\langle\sigma_{eD^+}v_e\rangle N_c e^{-E_I/T}$ from detailed balance~\cite{MARTINI1972181}. Here $N_c$ is the effective density of states of electrons in the conduction band, $\sigma_{eD^+}$ is the cross section for electron capture by charged donors, and $\langle\cdot\rangle$ denotes the thermal average. From measurements, we obtain $\langle\sigma_{eD^+}v_e\rangle\approx 7\times10^{-6}\, \textrm{cm}^3/\textrm{s}$ for Si:P~\cite{PhysRev.144.781}. Therefore, thermal dark currents would be kept at the level of $1\, e^-/\mathrm{kg}/\mathrm{yr}$ for $n_{D}=10^{18}/\textrm{cm}^3$ by operating the detector at a temperature $T\approx 5$ K ($\approx 0.43\,$meV).
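The detailed-balance estimate above is easy to reproduce numerically. The sketch below is our own: it uses the quoted $\langle\sigma_{eD^+}v_e\rangle$ and $E_I$, while the room-temperature effective density of states for Si, $N_c(300\,\mathrm{K})\simeq 2.8\times 10^{19}\,\mathrm{cm}^{-3}$, scaled as $T^{3/2}$, is an input taken from standard semiconductor references rather than from the text.

```python
import math

KB_EV = 8.617e-5            # Boltzmann constant (eV/K)
E_I   = 0.045               # P-donor ionization energy in Si (eV), from text
SV    = 7e-6                # <sigma v> for e- capture on D+ (cm^3/s), from text
NC300 = 2.8e19              # assumed Si conduction-band DOS at 300 K (cm^-3)
N_D   = 1e18                # dopant density (cm^-3)
RHO_T = 2.3                 # Si mass density (g/cm^3)
SEC_PER_YR = 3.156e7

def dark_counts_per_kg_yr(T):
    """Thermal electron generation from neutral donors, counts per kg-yr."""
    Nc = NC300 * (T / 300.0) ** 1.5                      # DOS at temperature T
    per_donor = SV * Nc * math.exp(-E_I / (KB_EV * T))   # detailed balance (1/s)
    donors_per_kg = N_D / RHO_T * 1000.0
    return per_donor * donors_per_kg * SEC_PER_YR
```

With these inputs the thermal rate crosses $1\,e^-/\mathrm{kg}/\mathrm{yr}$ between roughly 5 and 7 K, consistent with the $T\approx 5$ K operating point quoted above.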
The low operating temperatures represent a challenge for a doped detector based on a CCD design, since conventional CCDs cannot operate at temperatures below $\sim 70$ K due to carrier freeze-out ~\cite{janesick1987scientific}, where the gates become non-conducting. This issue could be solved by replacing the standard polysilicon gates with metal gates, as done in some metal oxide semiconductor (MOS) devices.
In addition to thermally generated dark counts, it is likely that a doped Skipper-CCD would be affected by the backgrounds that are observed in undoped Skipper-CCDs. The origin of the currently observed backgrounds in Skipper-CCDs is unknown, although a fraction of them has been shown to arise from secondary radiation of high-energy tracks~\cite{Du:2020ldo,SENSEI-radiative-to-appear}. Track-induced ``extrinsic'' backgrounds can be reduced by working in a radio-pure environment and improving shielding. The ``intrinsic'' detector backgrounds in Skipper-CCDs, on the other hand, could arise from charge leakage into the detector contacts, or from slow release of electrons from unidentified traps. Both of these effects could in principle be reduced by improving the insulating layers and pre-filling empty traps. In contrast, the intrinsic dark counts observed in doped blocked-impurity-band detectors possibly come from conduction across the blocking layers into the contacts~\cite{wang2016analysis,marozas2018surface,pan2021dark}, either via tunneling or due to impurities. Such dark currents could be suppressed by increasing the thickness of the insulating layers~\cite{wang2016analysis}. Yet another source of dark counts in doped detectors arises due to conduction within the impurity band by hopping into neutral (occupied) dopants, a process referred to as $\epsilon_2$ hopping conductivity~\cite{1971JETPL..14..185G,shklovskii2013electronic}. This can be exponentially suppressed by reducing the doping density, but measurements are required to find an optimal doping value.
We conclude this section by pointing out that several other detector targets with sub-eV thresholds exist beyond doped semiconductors; they are used to detect infrared (IR) radiation but can also be used to detect sub-MeV DM scattering or sub-eV dark photon absorption. We provide an incomplete list of available IR detectors in Table~\ref{tab:IR_detectors}, which includes superconducting nanowire single-photon detectors (SNSPDs), single-photon avalanche diodes (SPADs), Mercury-Cadmium-Telluride (HgCdTe) detectors, the already mentioned BIBs, Quantum Dot and Quantum Well Infrared Photodetectors (QDIP and QWIP), and Transition Edge Sensors (TES). These detectors can be designed and used as targets for DM absorption or scattering, or in some cases can be coupled to read out excitations from an external absorber that acts as the target.
SNSPDs were proposed as sub-MeV dark-matter targets in~\cite{Hochberg:2019cyy}, but a $\sim 8$ order-of-magnitude increase in detector exposure (without a corresponding increase in dark currents) would be needed to probe regions of parameter space that are not currently excluded by astrophysical searches~\cite{Hochberg:2021yud}. Current nanogram-scale SNSPD detectors have dark counts of the order of $10^{-6}$ Hz. The origin of dark counts in SNSPDs is currently unknown, but it is possible that they arise from secondary emission from environmental high-energy radiation~\cite{Du:2020ldo}, or due to micro-fractures upon detector cooling~\cite{Anthony-Petersen:2022ujw}. Both of these dark count sources scale with detector exposure. SNSPDs have also been used as detectors to measure interaction events of DM on an external dielectric stack target~\cite{Baryakhtar:2018doz,Chiles:2021gxk}, and have also been proposed as photodetectors, to detect the photons that arise from DM with masses $\gtrsim$1~MeV interacting in a nearby scintillating target~\cite{Derenzo:2016fse,Essig:2019kfe,Blanco:2019lrf,Blanco:2022cel}.
Another possibility is to consider detectors based on low-bandgap compounds of the III-V groups. One of the most mature single-photon detection technologies using these compounds is the single-photon avalanche diode, or SPAD. These detectors have the advantage that they can be operated at significantly higher temperatures than SNSPDs, but suffer from larger dark currents~\cite{dello2022advances}, likely due to tunneling~\cite{huang2020high}. One type of SPAD (``Silicon Photomultipliers,'' or SiPMs) has been proposed to study DM scattering with an energy threshold of $150$\,eV \cite{SIPM}, but to our knowledge the potential of SPADs as sub-MeV DM detectors has not been explored.
\begin{table}[t]
\begin{tabular}{|c|c|}
\hline
{Detector type} &{Energy gap (eV)} \\
\hline
SNSPD~\cite{Chiles:2021gxk} & $ 10^{-3}$ \\
\hline
III-V SPAD~\cite{zhang2015advances} & $ 0.1$ \\
\hline
\multirow{2}{*}{Hg$_{1-x}$Cd$_{x}$Te~\cite{doi:10.1146/annurev.astro.44.051905.092436}} & 0.5 ($x=0.44$) \\
&0.1 ($x=0.194$) \\
\hline
Doped semiconductor& $5 \times 10^{-2}$ (Si:P, Si:As) \\
BIB~\cite{doi:10.1146/annurev.astro.44.051905.092436} &$10^{-2}$ (Ge:Ga) \\
&$6\times 10^{-3}$(GaAs:Te) \\
\hline
QDIP~\cite{Campbell20071815}&$ 0.1$ (InAs/InGaAs) \\
\hline
QWIP~\cite{ROGALSKI1997295} &$ 0.1$ (GaAs/AlGaAs) \\
\hline
TES~\cite{ROGALSKI1997295} &$ 10^{-3}$ \\
\hline
\end{tabular}
\caption{Examples of existing IR photon detectors and the approximate energy gaps required to create a measurable target excitation.
}
\label{tab:IR_detectors}
\end{table}
Yet another option is to consider detectors based on Mercury-Cadmium-Telluride (HgCdTe), a compound that has the advantage that its bandgap can be tuned, in principle down to the metallic transition~\cite{rogalski2019infrared}. HgCdTe detectors are widely used in astronomy for near and mid-infrared detection, but to the best of our knowledge no HgCdTe-based detector has demonstrated single-photon detection, which would be a requirement for searching for sub-MeV DM scattering or sub-eV DM absorption. Significant progress, however, is being made to achieve IR single-photon detection using photodiodes based on HgCdTe~\cite{dello2022advances}.
Going towards the deep infrared, doped semiconductor BIBs offer some of the leading sensitivities. That said, and as already discussed in the body of this work, existing BIBs suffer from dark counts that are too large for the purposes of detecting DM.
Other more recently developed technologies are quantum well and quantum dot detectors. Regarding these detectors, we only point out here that quantum dots have been proposed as targets for DM scattering~\cite{Blanco:2022cel}, but to our knowledge their potential as photodetectors to search for photons from DM interactions in a nearby target has not been explored.
Finally, TES have been proposed as sub-MeV DM detectors in a setup where DM interacts with an external absorber creating phonons, which are then read out by TES located on the absorber's surface~\cite{Hochberg:2015fth,Hochberg:2015pha,Hochberg:2016ajh,SPICE}, but order-of-magnitude improvements in the TES's energy thresholds are required to probe such light DM models~\cite{ren2021design}. Like SNSPDs, TES have also been proposed as photodetectors to detect photons from DM with masses $\gtrsim$1~MeV interacting in a nearby scintillating target~\cite{Derenzo:2016fse,Essig:2019kfe,Blanco:2019lrf,Blanco:2022cel}. Dark currents for TES are briefly discussed in the section ``Sub-ionization energy depositions and phonon signals''.
\subsection{Phosphorus-doped Silicon ELF parameters}
In this appendix, we discuss our choices for the numerical values of the parameters entering into the ELF Eq.~\eqref{eq:ELFfinal} for the specific case of silicon doped with phosphorus, Si:P.
First, we set the ionization energy $E_I$ to the experimentally measured value, $E_I=45$~meV~\cite{aggarwal1965optical,ning1971multivalley}. For the Coulomb impurity potential and spherical band approximations used to obtain the ELF,
Eq.~\eqref{eq:En} sets the relations that allow the effective mass and Bohr radius to be calculated from the ionization energy. These relations (and more generally the effective mass method used in this work) must be regarded only as a rough approximation for Si, as for this material the band structure around the minimum is anisotropic, the impurity potential deviates from the Coulomb form near the impurity ion, and intervalley couplings modify the spectrum~\cite{aggarwal1965optical,ning1971multivalley}. In order to account for these corrections, we use the phenomenological prescription of~\cite{ning1971multivalley}, which has been demonstrated to correctly describe the experimentally measured energy levels in Si:P. The prescription retains the spherical band approximation and the Hydrogenic form of the envelope functions (so we may retain the form of the ELF Eq.~\eqref{eq:ELFfinal}), but treats $m_*$ as a free parameter that is determined by matching the ground state energy levels obtained with the spheroidal and realistic ellipsoidal band case in Si,
and $a_*$ as a variational parameter that is chosen to minimize the full Hamiltonian including short-distance and intervalley corrections. This results in $m_*=0.3\,m_e$ and $a_*=23$~atomic units~\cite{ning1971multivalley}.
For the screening prefactor in the denominator of Eq.~\eqref{eq:ELFfinal}, $1/|\epsilon(\mathbf{q},\omega)|^2$, we simply approximate $|\epsilon(\mathbf{q},\omega)|\approx|\epsilon(0,0)|$, which leads to a small error of order $O((q/\alpha m_e)^2)$ in the calculations~\cite{doi:10.1143/JPSJ.20.778,doi:10.1143/JPSJ.21.1852}.
Finally, we set the normalization prefactor to ${E_{\mathrm{eff}}}/{E_0}=2.2$ to match the normalization of measurements of the dielectric function in Si:P at $\mathbf{q}=0$ (\textit{i.e.}, photoabsorption measurements) reported in~\cite{thomas1981optical,gaymann1995temperature}, by using the relation
\begin{equation}
\mathcal W(\mathbf{q},\omega) = \mathrm{Im}\bigg[-\frac{1}{\epsilon(\mathbf{q},\omega)}\bigg] \quad ,
\label{eq:elfrelation}
\end{equation}
evaluated at $\mathbf{q}=0$. In order to validate our calculation of the ELF, we have compared our results with the frequency-dependent photoabsorption data of~\cite{thomas1981optical,gaymann1995temperature} and confirmed that our calculation correctly reproduces the data (see section below on ELF validation). Note that by using Eq.~\eqref{eq:elfrelation}, it is in principle possible to obtain the ELF directly from measurements of the material's dielectric constant instead of using our computation, Eq.~\eqref{eq:ELFfinal}. To our knowledge, however, no measurements of $\epsilon(\mathbf{q},\omega)$ in Si:P away from $\mathbf{q}= 0$ have been performed,
and even at $\mathbf{q}=0$ the dielectric function has only been measured over a limited range of frequencies, so in what follows we rely on Eq.~\eqref{eq:ELFfinal} for obtaining DM interaction rates and calculating projections.
\subsection{Form factor derivation}
In this section, we calculate the form factor for dopant electron transitions between the ground state and free conduction-band energy levels. We begin by defining Bloch wavefunctions for conduction-band electrons as
\begin{equation}
\phi_{\xi \mathbf{k}}=\frac{1}{\sqrt{V}}e^{i \mathbf{k} \cdot \mathbf{r} } e^{i \mathbf{k}_\xi \cdot \mathbf{r} } u_\mathbf{k}(\mathbf{r}) \,,
\end{equation}
where the index $\xi$ specifies one of the possibly degenerate conduction-band minima and $\mathbf{k}_\xi$ its position in momentum space, $\mathbf{k}$ labels crystal momenta measured from this minimum, and $V$ is the semiconductor's volume. The periodic functions $u_\mathbf{k}$ are set to be unit-normalized in $V$,
\begin{equation}
\int_{V} d^3\mathbf{r} |u_\mathbf{k}(\mathbf{r})|^2 =1 \,,
\end{equation}
and Bloch wave-functions are orthonormal,
\begin{equation}
\left<\phi_{\eta \mathbf{k'}}|\phi_{\xi \mathbf{k}}\right>=\delta_{\xi \eta} \delta_\mathbf{k-k'} \quad .
\end{equation}
As discussed in the section Electronics of Doped Semiconductors, the initial (ground) state and final single-particle states are given by a sum of conduction-band Bloch wavefunctions,
\begin{eqnarray}
\left| i \right> &=& \sum_{\eta,\mathbf{k}} \alpha_{\eta} A_i(\mathbf{k}) \left| \phi_{\eta \mathbf{k}} \right> \quad , \\
\left| f \right> &=& \sum_{\eta',\mathbf{k'}} \beta_{\eta'} A_f(\mathbf{k}') \left| \phi_{\eta' \mathbf{k'}} \right> \quad ,
\end{eqnarray}
where the coefficients setting the components of the wavefunctions at the different minima are normalized as
\begin{equation}
\sum_\eta |\alpha_\eta|^2 =1 \quad .
\label{eq:norm2} \quad\end{equation}
As an example, for Si, which is relevant for our limit projections, the initial (ground) state is a singlet s-wave state formed by an equal-weight superposition of the wavefunctions at the six conduction-band minima, modulated by the 1s envelope function of Eq.~\eqref{eq:En},
\begin{equation}
\psi_{1s}^{\mathrm{Si}}=F_{1s}(\mathbf{r}) \sum_{\xi=1}^{6}\,\frac{1}{\sqrt{6}}e^{i \mathbf{k}_\xi \cdot \mathbf{r} } u(\mathbf{k_\xi},\mathbf{r}) \quad ,
\label{eq:initialstate}
\end{equation}
so for Si, $\alpha_{\eta}=1/\sqrt{6}, \eta=1,2,..., 6$.
Now, the form factor for transitions between the initial and final states is given by
\begin{eqnarray}
\nonumber
{\left|\left<f \left| \hat{\rho}(\mathbf{q}) \right| i \right> \right|^2} &=&\bigg| \sum_{\eta \eta'\mathbf{k}\mathbf{k'}} \beta_{\eta'}^* \alpha_{\eta}
\\ \nonumber &&A_f^*(\mathbf{k'}) A_i(\mathbf{k}) \left<\phi_{\eta' \mathbf{k'}} \left| e^{-i \mathbf{q}\cdot \hat{\mathbf{r}}} \right| \phi_{\eta \mathbf{k}} \right> \bigg|^2 \quad , \\
\label{eq:form1}
\end{eqnarray}
where $\hat{\rho}(\mathbf{q})= e^{-i \mathbf{q}\cdot \hat{\mathbf{r}}} $ is the momentum-space electron density operator.
Given that the different conduction band minima in typical semiconductors such as Si and Ge are separated by distances of order $1/a\sim 1$~keV in momentum space, with $a$ being the lattice spacing, for momentum transfers $q \ll 1$~keV the generator of momentum translations $e^{-i \mathbf{q}\cdot \hat{\mathbf{r}}} $ does not connect different conduction band minima, and we obtain the simplification
\begin{equation}
\left<\phi_{\eta' \mathbf{k'}} \left|e^{-i \mathbf{q}\cdot \hat{\mathbf{r}}} \right| \phi_{\eta \mathbf{k}} \right> = \delta_{\eta' \eta} \delta_{\mathbf{k'}-\mathbf{k}-\mathbf{q}} \quad .
\label{eq:simpl1}
\end{equation}
This simplification is valid for sub-eV absorption in the semiconductor, where $q\lesssim 1$~eV, and for sub-MeV DM scattering via a light mediator, where we have checked that the typical momentum transfer is $q_{\mathrm{typ}}\approx m_{\chi} v_\mathrm{rel}\approx m_{\chi} v_e \approx 100\,\mathrm{eV} (m_{\chi}/100\,\mathrm{keV})$, with $v_e\approx \alpha/\epsilon\sim 10^{-3}$. Inserting Eq.~\eqref{eq:simpl1} in Eq.~\eqref{eq:form1} we obtain
\begin{eqnarray}
\nonumber
{\left|\left<f \left| \hat{\rho}(\mathbf{q}) \right| i \right> \right|^2} &=& \bigg| \sum_{\eta\mathbf{k}} \beta_{\eta}^* \alpha_{\eta} \,
A_f^*(\mathbf{k+q}) A_i(\mathbf{k}) \bigg|^2 \quad ,\\
\label{eq:form2}
\end{eqnarray}
and using Parseval's theorem on Eq.~\eqref{eq:form2}, we get
\begin{eqnarray}
\nonumber
{\left|\left<f \left| \hat{\rho}(\mathbf{q}) \right| i \right> \right|^2} &=& \bigg|\sum_{\eta}\beta_{\eta}^* \alpha_{\eta} \bigg|^2 \bigg| \int d^3\mathbf{r} \,
F_f(\mathbf{r})^* F_i(\mathbf{r}) e^{-i \mathbf{q}\cdot {\mathbf{r}}} \bigg|^2\,,\\
\label{eq:form3}
\end{eqnarray}
where we defined the Fourier transforms $F(\mathbf{r})\equiv \sum_\mathbf{k} A(\mathbf{k}) e^{i \mathbf{k}\cdot\mathbf{r}}/\sqrt{V}$, which are unit-normalized in the target volume. Now, take the final state to be $\left|f\right>=\left |\xi \mathbf{k}\right>$ so $\beta_\eta = \delta_{\eta \xi}$. Then Eq.~\eqref{eq:form3} simplifies to
\begin{eqnarray}
\nonumber
{\left|\left<\xi \mathbf{k} \left| \hat{\rho}(\mathbf{q}) \right| i \right> \right|^2} &=& | \alpha_{\xi} |^2 \bigg| \int d^3\mathbf{r} \,
F_{\mathbf{k}}(\mathbf{r})^* F_i(\mathbf{r}) e^{-i \mathbf{q}\cdot {\mathbf{r}}} \bigg|^2 \quad .\\
\end{eqnarray}
Finally, summing over the final-state minima $\xi$ and using the normalization condition Eq.~\eqref{eq:norm2} we obtain
\begin{eqnarray}
\nonumber
\sum_{\xi} {\left|\left<\xi \mathbf{k} \left| \hat{\rho}(\mathbf{q}) \right| i \right> \right|^2} &=& \bigg|\int d^3\mathbf{r} \,
F_{\mathbf{k}}(r)^* F_{i}(r) e^{-i \mathbf{q}\cdot {\mathbf{r}}} \bigg|^2 \quad ,
\label{eq:finalresult}\\
\end{eqnarray}
which is the result quoted in Eq.~\eqref{eq:hydrogenic}.
\subsection{ELF validation and comparison}
In this section, we validate our analytical approximation of the ELF in Si:P (Eq.(\ref{eq:ELFfinal})) by comparing it to experimental data and alternative analytic ELF computations. To our knowledge, only optical data is available for Si:P, meaning only $\mathcal W(0,\omega)$ is determined by experiments (within a range of measured frequencies).
In Fig.~\ref{fig:ELF_validation}, we show $\mathcal W(0,\omega)$ for Si:P calculated from the measurement of optical reflectance with two doping densities~\cite{PhysRevB.52.16486,PhysRevLett.71.3681}, and we also present our analytical hydrogenic ELF Eq.~(\ref{eq:ELFfinal}). For both doping densities, the hydrogenic ELF matches the data very well for energies above the ionization threshold $E_I=45$ meV.
\begin{figure}[t!]
\includegraphics[width=0.45\textwidth]{ELF_validation.pdf}
\caption{The ELF in the optical limit $\mathcal W(0,\omega)$ as a function of the photon energy $\omega$ for Si:P at 10~K. The solid lines denote the ELF derived from the measurement of optical reflectance with $n_D=1.8\times 10^{18}\textrm{cm}^{-3}$ (blue) and $n_D=3.4\times 10^{17}\textrm{cm}^{-3}$ (orange)~\cite{PhysRevB.52.16486,PhysRevLett.71.3681}. The dashed lines show the analytical results based on the hydrogenic ELF shown in Eq.~(\ref{eq:ELFfinal}).
\label{fig:ELF_validation}
}
\end{figure}
To check the robustness of the hydrogenic ELF at finite $\mathbf{q}$, we compare it with an alternative analytical ELF based on the Mermin dielectric function, in which an empirical form of the dielectric function (or, equivalently, the ELF) is fitted to data. The Mermin ELF relevant for dopants in Si has the following form:
\begin{eqnarray}\label{eq:Mermin_ELF}
\mathcal W_{\rm Mer}(\mathbf{q},\omega)=A\,\textrm{Im}\left[\frac{-1}{\epsilon_{\rm Mer}(\mathbf{q},\omega)}\right],
\end{eqnarray}
where $\epsilon_{\rm Mer}$ is the Mermin dielectric function defined as~\cite{Mermin:1970zz}
\begin{eqnarray}\label{eq:Mermin_epsilon}
\epsilon_{\rm Mer}(\mathbf{q},\omega)=1+\frac{(1+i\Gamma/\omega)(\epsilon_{\rm Lin}(q,\omega+i\Gamma)-1)}{1+(i\Gamma/\omega)\frac{\epsilon_{\rm Lin}(q,\omega+i\Gamma)-1}{\epsilon_{\rm Lin}(q,0)-1}}.
\end{eqnarray}
Here $\epsilon_{\rm Lin}(q,\omega)$ is the Lindhard dielectric function of the free electron gas,
\begin{eqnarray}
\epsilon_{\rm Lin}(q,\omega)=1+\frac{3\omega_p^2}{q^2v_F^2}\lim_{\Gamma\to0}f\left(\frac{\omega+i\Gamma}{qv_F},\frac{q}{2m_e v_F}\right),
\end{eqnarray}
with $v_F=\left(\frac{3\pi\omega_p^2}{4\alpha m_e^2}\right)^{1/3}$ and
\begin{eqnarray}
f(u,z)&=&\frac{1}{2}+\frac{1}{8z}[g(z-u)+g(z+u)]\nonumber\\
g(x)&=&(1-x^2)\textrm{log}\left(\frac{1+x}{1-x}\right).
\end{eqnarray}
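As a sanity check of these expressions, the auxiliary functions $f$ and $g$ can be evaluated numerically (a minimal sketch; complex arguments are allowed, as required by the shift $\omega\to\omega+i\Gamma$ in the Mermin construction, and the small-$z$ static limit $f(0,z)\to 1$ reproduces the Thomas-Fermi screening limit of $\epsilon_{\rm Lin}$):

```python
import cmath

def g(x):
    # g(x) = (1 - x^2) log((1 + x)/(1 - x)); cmath.log handles complex x
    return (1 - x**2) * cmath.log((1 + x) / (1 - x))

def f(u, z):
    # f(u, z) = 1/2 + (1/8z) [g(z - u) + g(z + u)]
    return 0.5 + (g(z - u) + g(z + u)) / (8 * z)

# static, long-wavelength limit: f(0, z) -> 1 as z -> 0, so that
# eps_Lin(q, 0) -> 1 + 3 omega_p^2 / (q^2 v_F^2) (Thomas-Fermi screening)
```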
Taking the $q\to 0$ limit of Eq.~(\ref{eq:Mermin_ELF}), one gets
\begin{eqnarray}\label{eq:Mermin_ELF_k_0}
\mathcal W_{\rm Mer}(\mathbf{q},\omega)\bigg|_{\mathbf{q}\to 0}=A\, \textrm{Im}\left[\frac{-1}{1-\omega_p^2/(\omega^2+i\Gamma\omega)}\right].
\end{eqnarray}
In Eq.~(\ref{eq:Mermin_ELF_k_0}), $A$, $\omega_p$, and $\Gamma$ are fitting parameters that are determined by matching to data. After fitting to optical data for Si:P with $n_{D}=1.8\times 10^{18}\textrm{cm}^{-3}$ above the ionization energy $E_I=45$\,meV, we get $A=0.065$, $\omega_p=45$\,meV, and $\Gamma=70$\,meV.
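The equivalence of Eq.~(\ref{eq:Mermin_ELF_k_0}) with its real closed form, $A\,\Gamma\omega\,\omega_p^2/[(\omega^2-\omega_p^2)^2+\Gamma^2\omega^2]$, can be checked numerically with the fitted parameters (a minimal sketch in natural units of eV):

```python
# Fitted Drude parameters for Si:P with n_D = 1.8e18 cm^-3, as quoted above (in eV)
A, omega_p, Gamma = 0.065, 45e-3, 70e-3

def W_drude(omega):
    # q -> 0 limit of the Mermin ELF, evaluated via complex arithmetic
    eps = 1 - omega_p**2 / (omega**2 + 1j * Gamma * omega)
    return A * (-1 / eps).imag

def W_closed(omega):
    # equivalent real closed form of the same expression
    return A * Gamma * omega * omega_p**2 / ((omega**2 - omega_p**2)**2 + (Gamma * omega)**2)
```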
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{ELF_rate_compare.pdf}
\caption{ Projected $90\%$ C.L. reach of DM-electron scattering in doped semiconductors (Si:P with $n_D=1.8\times 10^{18}\,\textrm{cm}^{-3}$) with a light dark photon mediator, assuming 1~kg-yr exposure and zero background. The blue line shows the result using $\mathcal W(0,\omega)$ obtained purely from optical data. The orange and green lines are calculated from the Mermin ELF (Eq.~(\ref{eq:Mermin_ELF})) and the hydrogenic ELF (Eq.~(\ref{eq:ELFfinal})), respectively.
\label{fig:rate_comparison}
}
\end{figure}
\begin{figure*}[t!]
\centering
\includegraphics[width=1\textwidth]{ELF_Hydrogen_Mermin.pdf}
\caption{Contour plot of the hydrogenic ELF (left) and Mermin ELF (right) as a function of $q$ and $\omega$. The legend shows the value of $\textrm{Log}_{10}[\mathcal W(q,\omega)]$. The black lines show the relation $\omega=q\, (v_{\rm esc}+v_E)-q^2/(2m_{\chi})$ for $m_{\chi}=0.1$ MeV (solid) and $m_{\chi}=1$ MeV (dashed), where $v_{\rm esc}$ is the escape velocity and $v_E$ is the Earth velocity in our galactic frame. The region below the black line is kinematically allowed for a given $m_{\chi}$.
\label{fig:ELFs}
}
\end{figure*}
We then evaluate the impact of the different ELFs on our computations in Fig.~\ref{fig:rate_comparison}, where we compare the DM rate for dopant ionization in the same material as above (Si:P with $n_{D}=1.8\times 10^{18}\textrm{cm}^{-3}$), calculated from the hydrogenic and Mermin ELFs. In the figure we have assumed that scattering occurs via a light dark photon mediator. Our results indicate that the rates obtained from the hydrogenic and Mermin ELFs agree well with each other in the whole sub-MeV region of DM masses. The two approaches are further compared in Fig.~\ref{fig:ELFs}, where we show contours of the ELFs as a function of momentum and frequency. From the figure we observe that at momentum transfers small compared with the characteristic momentum scale of the ELFs, $q\lesssim 1/a_*\sim 100\, \textrm{eV}$, both ELFs are similar, and they start to differ only at larger momentum transfers. This feature explains in part the broad agreement of the rates obtained using the two ELFs, as for sub-MeV DM scattering via a light mediator the typical momentum transfer indeed satisfies $q\lesssim 100\, \textrm{eV}$, except for masses near an MeV. Nevertheless, the momentum dependence of the ELF becomes relevant at large DM masses. This is shown in Fig.~\ref{fig:rate_comparison}, where we also present in blue the rate obtained from $\mathcal W(0,\omega)$, taken purely from an interpolation of optical data. We see that this approach starts to deviate from the momentum-dependent hydrogenic and Mermin ELFs for $m_\chi$ near 1~MeV, which indicates that momentum-dependent terms in the ELFs become relevant for such masses.
\subsection{Differential rate of DM scattering on dopant electrons}
In this section, we study the differential rate $\frac{d R}{d\omega}$ for DM ionizing dopant electrons. The general expression for $\frac{d R}{d\omega}$ for DM scattering with electrons in a target reads:
\begin{equation}
\frac{d R}{d\omega}=\frac{\rho_\chi}{(2\pi)^3\alpha\rho_{T}m_\chi}\int dq\,q^3\eta(v_{\rm min}(q,\omega)) \left| V(q)\right|^2 \mathcal W(q,\omega),
\end{equation}
where $\eta(v_{\rm min})\equiv \int_{v_{\rm min}}d^3\mathbf{v}_\chi f(\mathbf{v}_\chi)/v_\chi$, and $v_{\rm min}(q,\omega)=\omega/q+q/(2m_\chi)$. As mentioned in the main text, we use the hydrogenic ELF in Eq.~(\ref{eq:ELFfinal}) to describe single-electron ionization from DM scattering with dopants.
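The kinematics entering this expression can be made explicit in a short numerical sketch: minimizing $v_{\rm min}(q,\omega)$ over $q$ gives $q_*=\sqrt{2m_\chi\omega}$ and $v_{\rm min}=\sqrt{2\omega/m_\chi}$, which in turn yields the maximal energy transfer $\omega_{\rm max}=\frac{1}{2}m_\chi v^2$ at DM speed $v$ (the benchmark values below are illustrative):

```python
import math

def v_min(q, omega, m_chi):
    # minimal DM speed able to deposit momentum q and energy omega
    return omega / q + q / (2 * m_chi)

def omega_max(m_chi, v):
    # largest energy transfer accessible at DM speed v
    return 0.5 * m_chi * v**2

# illustrative benchmark (natural units, eV): m_chi = 0.1 MeV, omega = E_I = 45 meV
m_chi, omega = 0.1e6, 45e-3
q_star = math.sqrt(2 * m_chi * omega)   # value of q that minimizes v_min
```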
In Fig.~\ref{fig:energyspectrum}, we show the differential rate for DM scattering in a Si:P target with a light dark photon mediator for several DM masses. In all cases, the spectra peak at the ionization threshold $E_I=45$~meV and rapidly decrease for larger $\omega$. This feature results from the combination of the kinematics of light mediator scattering and the shape of the ELF; the dominant contribution to the total rate therefore comes from low energy transfers. There is also a sharp cut-off of the spectrum at $\omega_{\rm max}=\frac{1}{2}m_\chi (v_{\rm esc}+v_E)^2$, the maximum energy transfer allowed for a given DM mass.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{Energy_spectrum_scattering.pdf}
\caption{The differential rate $\frac{dR}{d\omega}$ for DM ionizing a single dopant electron in Si:P with a light dark photon mediator. Here we choose $n_D=1.8\times 10^{18}/\textrm{cm}^3$ and $\bar\sigma_e=10^{-40}\,\textrm{cm}^2$. The solid lines show the results for three different DM masses: $m_\chi=0.05\,\textrm{MeV}$ (blue), $m_\chi=0.1\,\textrm{MeV}$ (orange), and $m_\chi=0.5\,\textrm{MeV}$ (green).
\label{fig:energyspectrum}
}
\end{figure}
\subsection{Sub-ionization energy depositions and phonon signals}
When the energy transfer to the dopant atom falls below the ionization threshold, dopant electrons are excited into higher-energy bound states that relax back to the ground state by emitting acoustic phonons, which can potentially be measured with calorimetry. To evaluate the DM discovery potential of these sub-ionizing signals, we compute here the contribution of bound-to-bound transitions to DM scattering and absorption rates.
\begin{figure*}[t!]
\includegraphics[width=0.45\textwidth]{phonon_scattering.pdf}\,\includegraphics[width=0.45\textwidth]{phonon_dark_photon.pdf}
\caption{\textit{Solid lines:} projected $90\%$ C.L. reach (pure phonon signal from bound-to-bound transitions) of DM-electron scattering with a light dark photon mediator (\textbf{left}) and dark photon DM absorption (\textbf{right}) in Si:P, assuming 1~kg-yr exposure with zero background. We present results for two doping densities: $n_D=1.8\times 10^{18}\textrm{cm}^{-3}$ (blue) and $n_D=3.4\times 10^{17}\textrm{cm}^{-3}$ (orange). Bound-to-bound transitions occur for energy depositions $\omega<E_I=45$ meV, and we have additionally imposed a hard cutoff on the minimum energy transfer of 25 meV due to the lack of reliable optical data to compute the ELF for energies below this value. \textit{Dashed lines:} analogous reach for a single ionization signal, \textit{i.e.}, for bound-to-free electron transitions that require energy depositions above $E_I$. Note that depending on the detection strategy, ionization events can potentially be detected either by measuring the \textit{phonons} that are released as the ionized electron relaxes to the bottom of the conduction band, or by directly collecting the ionized \textit{electron}, as in standard imaging detectors such as CCDs and BIBs.
\textit{Gray-shaded region (left panel):} constraint from stellar cooling.
\label{fig:phonon}
}
\end{figure*}
DM interaction rates below the ionization threshold $\omega< E_I$ are not captured by the hydrogenic ELF of Eq.~(\ref{eq:ELFfinal}), which only includes contributions from ionization into free Bloch states. Given the uncertainty in the theoretical description of dopant bound states~\cite{shklovskii2013electronic}, we avoid performing a first-principles computation of the bound-to-bound transitions, and instead simply compute their contribution to the ELF by taking it directly from optical absorption data for photon energies below the ionization threshold. This amounts to approximating $\mathcal W(\mathbf{q},\omega)\approx\mathcal W (0,\omega)$. Since bound-to-bound transitions are expected for momentum transfers that in Si:P are at most of order $q\lesssim E_I/v_{\chi}\approx 50$~eV for DM scattering, and $q\lesssim 50$~meV for DM absorption, this approximation is appropriate, as in both cases the momentum transfer lies below the characteristic scale of the dopant atoms which determines the momentum dependence of the ELF, $qa_*<1$. Using the optical ELF, we obtain rates for DM-induced bound-to-bound transitions for both DM scattering via a light mediator and DM absorption. Assuming that each bound state de-excitation can be measured by collecting the emitted phonons, we show the DM reach for both scattering and absorption in Si:P in Fig.~\ref{fig:phonon}. We also compare it with the ionization signals discussed in the main text. Thanks to the lower threshold and large bound-to-bound ELF, phonon signals from bound-state de-excitation potentially have a larger sensitivity than ionization signals (by factors of a few), and can probe lighter DM masses. The comparison of the phonon de-excitation and ionization projected reach, however, should be taken with a grain of salt, as we will see below that the technological capabilities and backgrounds of existing single-charge and calorimetric detectors differ substantially.
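The momentum scales quoted above follow from simple order-of-magnitude arithmetic (a sketch in natural units of eV; the benchmark halo velocity $v_\chi\sim 10^{-3}$ in units of $c$ is an assumption of the estimate):

```python
E_I = 45e-3        # eV, Si:P ionization threshold
v_chi = 1e-3       # typical DM halo velocity in units of c (assumed benchmark)
q_char = 100.0     # eV, characteristic dopant scale 1/a_* quoted above

q_scatter = E_I / v_chi   # momentum scale for DM scattering, ~ 45 eV
q_absorb = E_I            # for absorption, q ~ omega ~ 45 meV
```

Both scales indeed lie below the characteristic dopant scale, so the optical approximation $\mathcal W(\mathbf{q},\omega)\approx\mathcal W(0,\omega)$ is justified.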
Note that the reach to the phonon signals presented in Fig.~\ref{fig:phonon} is \textit{inclusive}, in the sense that the de-excitation of the bound states likely results in multiple phonons. Here we do not specify the corresponding phonon multiplicity nor spectrum.
While searching for a small phonon signal may be possible in the future, no detector currently provides sensitivity to single phonons with energies of order $\mathcal O(10)$~meV. In the future, however, two types of sensors may lead to $\mathcal O(10)$~meV single-phonon detection~\cite{Essig:2022dfa}: transition edge sensors (TES) and microwave kinetic inductance detectors (MKID). For these detectors to work, the emitted phonons need to be athermal in the target material, \textit{i.e.}, they must not down-convert into multiple lower energy phonons and must instead travel ballistically within the target, possibly reflecting on its surfaces multiple times before being collected. To our knowledge, no measurement of the phonon lifetime in a doped semiconductor target (with doping densities below the Mott transition) has been made. It is possible, however, that phonons are indeed athermal in doped targets. A doped target always has an irreducible gap, which is the energy required to excite the ground state into the lowest-lying excited state. In Si:P, for instance, the lowest-lying states are the approximately-degenerate 1s states, with the ground state having an ionization energy of order $45$~meV, and the next approximately degenerate state an ionization energy of order $34$~meV~\cite{shklovskii2013electronic}. Thus, the two lowest-lying states have an energy gap $\approx 10$~meV. As a consequence, in this material we would expect that a phonon emitted from bound-state de-excitations could be reabsorbed in the target if its energy is $\gtrsim 10$~meV, but would travel ballistically for energies below this threshold. The existence of such gaps has been experimentally observed in photoabsorption data. In~\cite{thomas1981optical,PhysRevB.52.16486} it is shown that at low doping densities $n_D \lesssim 10^{16}/\textrm{cm}^3$ and small temperatures $\lesssim 10$~K, doped semiconductors are transparent to photons with energies below $\approx 30$~meV.
This stems from the fact that above these energies photons can excite the ground state into the 2p bound state (photonic excitations between the approximately degenerate 1s states are unlikely due to momentum mismatch). At larger doping densities and temperatures the absorption lines broaden significantly due to dispersion of the dopant's energy levels, and the energy gap disappears. While this suggests that lightly-doped semiconductors at cryogenic temperatures may allow for athermal phonons, more experimental and theoretical efforts are required to determine the phonon lifetime on doped targets.
Regarding backgrounds, calorimetric detectors currently show a large number of events of unknown origin at low energies, which would need to be strongly mitigated for phonon signals to be a viable sub-MeV DM search channel. For instance, the SuperCDMS CPD experiment~\cite{SuperCDMS:2020aus} shows a DC rate of approximately $10^4$ events per g-day at low energies, which is a factor of $100$ larger than that of the single-charge detectors discussed previously. While SuperCDMS CPD works in a different energy range than the ones relevant for our proposal (its energy thresholds are of order $10$~eV), backgrounds in different types of calorimetric detectors, including CPD, further increase towards lower energies~\cite{SuperCDMS:2020aus,Fuss:2022fxe}. In~\cite{Anthony-Petersen:2022ujw} it was shown that at least part of the events at TES-based calorimetric detectors are related to stress-induced energy release from auxiliary materials around the detector, but a common background component to detectors subject to large and small levels of stress remains unexplained. Other phonon backgrounds in pure semiconductors, which are at present subleading but could be relevant in the future, have been calculated in~\cite{Berghaus:2021wrp}.
\bibliographystyle{utphys}
\section{Introduction}
\label{sec:intro}
Calabi-Yau compactifications of type II string theory (in)famously exhibit a moduli space of vacua. A vast amount of work has been invested towards devising mechanisms of inducing a potential on this space, with the aim of obtaining phenomenologically more realistic string theory models. A complementary approach towards lifting the degeneracy of vacua is to search for points on moduli space distinguished geometrically and then ask for physical implications of these distinguishing features. Supersymmetric flux vacua of type IIB were not historically, but could have been discovered following this strategy. In this paper, we revisit the realization of the discrete symmetries $T$ (time reversal) and $CP$ (charge conjugation and parity) in type II Calabi-Yau compactifications from this vantage point.
The question of time reversal invariance was recently discussed in the context of rigid Calabi-Yau compactifications in \cite{Cecotti:2018ufg}. There, a single $\theta$ angle appears, the coefficient of the topological term involving the graviphoton field strength. This coefficient is a field-independent constant, and the authors of reference \cite{Cecotti:2018ufg} ask, in the spirit of the swampland program \cite{Vafa:2005ui}, whether it takes a distinguished value preserving time reversal invariance in rigid Calabi-Yau compactifications. Our analysis, in the broader context of general Calabi-Yau compactifications, differs qualitatively as the multiple $\theta$ angles that appear are field dependent. We address three questions: first, we show that the perturbative and non-perturbative corrections to the tree-level prepotential around the large radius point, which can be determined precisely via mirror symmetry, do not break time reversal invariance. Surprisingly, it is not the infinitely many instanton corrections which pose the bigger challenge, but subtle quadratic terms in the prepotential which are needed to ensure integral monodromy. We show that the quantized value of the coefficients of these terms lead precisely to the two values of $\theta$ angles which are compatible with time reversal symmetry. Secondly, we argue that the monodromy action on the period vector associated to the Calabi-Yau compactification extends the set of vacuum expectation values of non-invariant fields compatible with time reversal invariance away from zero. Finally, inspired by \cite{Cecotti:2018ufg} and in the spirit of the opening paragraph of this introduction, we ask whether the $\theta$ angles take interesting values at other distinguished points in moduli space. We note that rank~2 attractor points have the distinguishing feature that, with an important caveat that we discuss, the gauge coupling matrix decouples the graviphoton from the remaining vector fields. 
Encouraged by this result, we considered the explicit value of the complex graviphoton coupling at such a point in an example: the value is mathematically distinguished by its relation to periods of modular forms, but its physical relevance remains elusive.
Considerations of $CP$ symmetry in the context of string theory date back to the early days of the field \cite{Strominger:1985it,Dine:1986bg,Dine:1992ya}. $CP$ must of course be broken in any realistic string model in order to reproduce the weak sector of the Standard Model. One focal point of the body of work on $CP$ in string theory is how to mitigate this breaking in the strong sector, i.e.\ how to solve (or incorporate solutions to) the strong $CP$ problem. A typical approach is to assume that $CP$ is broken in the underlying theory, and to attempt to calculate (or, in recent works, determine the statistics of) the instanton generated potential for the Peccei-Quinn axion (an incomplete selection of such works is \cite{Conlon:2006tq,Svrcek:2006yi,Cicoli:2012sz,Broeckel:2021dpz, Demirtas:2021gsq}). In this work, we ask, in the context of flux compactifications of type IIB string theory, to what extent ingredients in these models {\it preserve} $CP$. After generalizing to multi-parameter models an old proposal of Strominger and Witten \cite{Strominger:1985it} for defining $CP$ transformations in the context of Calabi-Yau compactifications, we argue that $CP$ is trivially realized on the locus of moduli space on which it can be defined, as it is induced by an orientation preserving diffeomorphism of the 10 dimensional theory (this is similar to an argument presented in \cite{Dine:1992ya}). We next discuss the vector and hyperscalar VEVs compatible with $CP$ invariance. For the supersymmetric vacua which preserve $CP$, we ask whether $CP$ invariant fluxes can be chosen which stabilize the moduli at these points. 
Similar questions have also been pursued in \cite{Kobayashi:2020uaj,Ishiguro:2020nuf,Ishiguro:2021ccl}, though we arrive at somewhat different conclusions: we argue that the fully corrected prepotential preserves $CP$ invariance, and we derive a condition on the third cohomology of the Calabi-Yau manifold which determines whether a supersymmetric flux vacuum preserves $CP$ symmetry. In the case of one-parameter models, we show that the condition is always satisfied.
From a four dimensional quantum field theory point of view, studying $T$ and $CP$ invariance separately is redundant, as based either on arguments relying on analytical continuation to Euclidean spacetime (see e.g.\ \cite{Streater:1989vi}) or on a detailed analysis of the types of interactions which can occur in a Lorentz invariant Lagrangian theory (see e.g.\ \cite{Weinberg:1995mt}), such theories enjoy $CPT$ invariance. In the larger context of higher dimensional quantum gravity theories, the two transformations appear on different footing: while the $T$ transformation can always be formulated, the existence of a natural candidate for a higher dimensional $CP$ transformation depends on the details of the compactification manifold. It thus makes sense to discuss the two independently.
The paper is organized as follows. In section \ref{sec:discrete_symmetries}, we discuss the basic structure of operators associated to discrete Lorentz symmetries, and identify two classes of terms in the action based on their transformation behavior under such symmetries. Section \ref{sec:time_reversal} discusses the action of time reversal in Calabi-Yau compactifications. As time reversal is orientation reversing, type IIA is the natural setting for this discussion. We identify a choice of intrinsic phases that renders the 10d action invariant in subsection \ref{subsec:T_in_10d}, before turning to the compactified theory in four dimensions in subsection \ref{subsec:T_in_4d}. Given the 10d result, the tree level theory must satisfy time reversal symmetry, as we check explicitly in \ref{subsubsec:T_in_4d_pert}. We incorporate non-perturbative $\alpha'$ corrections into our discussion in \ref{subsubsec:T_in_4d_inst}, and show that these respect time reversal invariance. In addition to worldsheet instanton contributions, mirror symmetry requires a constant term and quadratic contributions to the prepotential. The coefficients of the latter are quantized and map to field independent $\theta$ angles. We work out the normalization of the action and show that the values that these coefficients may take are precisely those at which time reversal invariance holds. In subsection \ref{subsec:spontaneous_breaking_of_T}, we argue that while time reversal invariance seemingly requires the vacuum expectation value of the scalars in vector multiplets to vanish, VEVs equal to integers or half integers also preserve time reversal invariance as a consequence of monodromy symmetry. Finally, in subsection \ref{subsec:T_away_from_large_radius}, we discuss our implicit assumption that the compactification takes place in a vicinity of the large radius point, and touch upon issues that arise when moving away from this point. We turn to the discussion of $CP$ invariance in section \ref{sec:CP}. 
Unlike time reversal, it is natural to define $CP$ so that it acts on the internal manifold. We discuss this action in section \ref{subsec:CP_internal}. As the combined action of $CP$ on spacetime and the internal manifold is orientation preserving, the invariance of the 10d theory follows from the analysis of section \ref{subsec:trans_p_forms} without the need to introduce intrinsic phases. It is however possible to introduce intrinsic phases, and this will prove useful in discussing flux vacua. This is discussed in section \ref{subsec:CP_10d_action}. The question of spontaneous breaking of $CP$ invariance is treated in section \ref{subsec:spontaneously_breaking_CP}. Finally, in section \ref{subsec:CP_flux_vacua}, we analyze the invariance of supersymmetric type IIB flux vacua under $CP$ transformations. A series of appendices complement the text. In appendix \ref{app:4d_SUGRA}, we review two aspects of 4d $\mathcal{N}=2$ supergravity theories: the gauge coupling matrix $\mathcal{N}$ in light of special geometry, and the symplectic invariance and monodromy symmetry of such theories. Appendix \ref{app:specialkaehler} reviews in some detail the special K\"ahler geometry of the complex structure moduli space of Calabi-Yau manifolds. We review supersymmetric flux vacua in the context of type IIB flux compactifications in appendix \ref{appendix:SUSY_vacua}. Appendix \ref{appendix:Finding rank 2 attractors} finally discusses how to explicitly find rank 2 attractor points, which are equivalent to supersymmetric vacua in one-parameter models. We provide a list of such points in table~\ref{tab:attractor}.
\section{Discrete Lorentz symmetries} \label{sec:discrete_symmetries}
The Lorentz group in arbitrary dimensions $d$ exhibits four connected components. The component containing the identity is called the proper orthochronous Lorentz group. The other three components are obtained by acting by time reversal $\mathcal{T}$,
\begin{equation}
t \xmapsto{\mathcal{T}} -t \,, \quad x^i \xmapsto{\mathcal{T}} x^i \,,
\end{equation}
space inversion $\mathcal{P}_d$\,,
\begin{equation}
t \xmapsto{\mathcal{P}_d} t \,, \quad x^i \xmapsto{\mathcal{P}_d} -x^i \,,
\end{equation}
and their composition $\mathcal{T} \mathcal{P}_d$.
Quantum field theory already in four spacetime dimensions does not allow us to distinguish between the action of a discrete Lorentz symmetry such as $\mathcal{P}$ or $\mathcal{T}$ and a product of this action with a global internal symmetry (i.e.\ one not involving an action on spacetime), see e.g.\ the discussion in \cite{Weinberg:1995mt}. When descending from higher dimensions, we have even more freedom to define the action of these symmetries, as we can couple them with an involutive action of our choice on the internal dimensions.\footnote{Note that the action $x^i \mapsto -x^i$ cannot be defined generically in the internal dimensions; indeed, generically, global coordinates $x^i$ do not exist, and we may or may not be able to define an involution on the manifold. More on this later.} The composition of any such action with the reversal of time which is a symmetry of the theory merits the name $\mathcal{T}$, just as the composition with the inversion of the three spatial dimensions which yields a symmetry merits the name $\mathcal{P}$. We denote the corresponding operators on the Hilbert space of the theory as $T$ and $P$ respectively.
In canonical quantization, the construction of quantum fields relies on imposing proper transformation properties under proper orthochronous Lorentz transformations. The transformation under $P$ and $T$ can involve intrinsic phases whose relative values can be partially worked out by analyzing the structure of the fields. The textbook \cite{Weinberg:1995mt} is an excellent reference on such matters. This analysis leads e.g.\ to the statement that a fermion and an anti-fermion have opposite intrinsic parity, implying that mesons that are S-wave bound states, such as pions, are pseudo-scalars.
\subsection[Transformation of $p$-form fields under discrete Lorentz symmetries]{Transformation of \boldmath{$p$}-form fields under discrete Lorentz symmetries} \label{subsec:trans_p_forms}
The bosonic fields arising in 10d supergravities are the metric, the dilaton, and $p$-form fields.
We will assume that $\mathcal{P}$ and $\mathcal{T}$ are isometries of the metric. We will also assume that they leave the dilaton invariant, given that we do not expect their application to result in strong-weak dualities. We hence turn to the study of the transformation properties of $p$-forms. In physics, we often have the coefficients of a differential $p$-form in mind when we speak of a $p$-form field. For example, we think of the four components of the photon field as the coefficients of a 1-form, transforming under parity as
\begin{equation}
A_0(x) \xmapsto{P} A_0(\mathcal{P} x) \,, \quad A_i(x) \xmapsto{P} - A_i(\mathcal{P} x) \,.
\end{equation}
The transformation properties of $p$-form fields under $P$ and $T$ are however most succinctly described if we consider the $p$-form as a whole. Writing $A = A_\mu \mathrm{d} x^\mu$, the above transformation becomes
\begin{equation}
A(x) \xmapsto{P} (\mathcal{P}^* A)(x) = A_\mu(\mathcal{P} x) \mathcal{P}^* \mathrm{d} x^\mu \,.
\end{equation}
More generally, any $p$-form field $C$ can carry an intrinsic sign, in addition to the pullback action,
\begin{equation} \label{eq:C_under_T_and_P}
C(x) \xmapsto{P} \pm (\mathcal{P}^*C)(x) \,, \quad C(x) \xmapsto{T} \pm (\mathcal{T}^*C)(x) \,.
\end{equation}
Contributions of $p$-form fields to the action fall into two categories: kinetic terms which are metric dependent via the occurrence of the Hodge star and metric independent topological terms. We can subsume the discussion of the action of $\mathcal{P}$ and $\mathcal{T}$ on these terms under the study of their fate under the action of a general diffeomorphism $\phi: M \rightarrow M$ on spacetime $M$. In the case of topological terms, the transformation under pullback of the fields via the diffeomorphism $\phi$ is given by
\begin{eqnarray} \label{eq:action_top_term}
\lefteqn{S_{\text{top}}[C_i] = \int_M \omega_{i_1} \wedge \ldots \wedge \omega_{i_n} }\\
&&\mapsto S_{\text{top}}[\phi^*C_i] = \int_M \phi^*\omega_{i_1} \wedge \ldots \wedge \phi^*\omega_{i_n} = \int_M \phi^* (\omega_{i_1} \wedge \ldots \wedge \omega_{i_n} ) = \pm S_{\text{top}}[C_i] \,,\nonumber
\end{eqnarray}
where the forms $\omega_{i}$ denote either $p$-form potentials $C_i$ or the associated field strengths $F_i$. The final sign is positive for orientation preserving and negative for orientation reversing maps $\phi$. The second type of contribution takes the form
\begin{equation}
S_{\text{kin}}[g,C] = \int_M \mathrm{d} C \wedge * \mathrm{d} C = \int_M \langle \mathrm{d} C, \mathrm{d} C \rangle_g {\rm vol}_g \,.
\end{equation}
Assuming $\phi$ to be an isometry of the metric, such contributions transform as
\begin{eqnarray}
\lefteqn{S_{\text{kin}}[\phi^*g, \phi^*C ]=S_{\text{kin}}[g, \phi^*C ] = }\\
&&\int_M \langle \mathrm{d} \phi^*C, \mathrm{d} \phi^*C \rangle_g {\rm vol}_g = \pm \int_M \phi^* \big(\langle \mathrm{d} C, \mathrm{d} C \rangle_g {\rm vol}_g \big) = S_{\text{kin}}[g,C] \,.
\end{eqnarray}
In the penultimate step, we have invoked
\begin{equation}
\phi^* {\rm vol}_g = \pm {\rm vol}_g \,,
\end{equation}
with the sign depending on whether $\phi$ is orientation preserving (plus sign) or reversing (negative sign).
We conclude that kinetic energy type contributions are invariant under any isometry (orientation preserving or not), whereas topological terms are invariant under any orientation preserving diffeomorphism.
In the simple case of electromagnetism, the kinetic term $F\wedge*F$ is thus invariant under any isometry, while the topological term $F \wedge F$ breaks the symmetry under orientation reversing transformations. Note that both terms are insensitive to the choice of the intrinsic sign displayed in \eqref{eq:C_under_T_and_P}.
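To illustrate, in flat four dimensional electromagnetism one has, with convention-dependent proportionality constants,
\begin{equation}
F \wedge * F \,\propto\, \big( E^2 - B^2 \big) \, {\rm vol}_g \,, \qquad F \wedge F \,\propto\, \vec{E} \cdot \vec{B} \; {\rm vol}_g \,.
\end{equation}
Under time reversal, $\vec{E} \mapsto \vec{E}$ and $\vec{B} \mapsto -\vec{B}$, while under parity, $\vec{E} \mapsto -\vec{E}$ and $\vec{B} \mapsto \vec{B}$. In both cases, $E^2 - B^2$ is invariant while $\vec{E} \cdot \vec{B}$ flips sign, in accord with the transformation behavior of kinetic and topological terms derived above.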
The analysis becomes sensitive to this sign when couplings between different $C$-form fields exist, or in the presence of sources. A 1-form field coupled via a covariant derivative
\begin{equation}
D = \mathrm{d} + i A
\end{equation}
will preserve $P$ if it transforms without sign (as does $\mathrm{d}$), and it will preserve $T$ if it transforms with sign (i.e.\ with opposite parity compared to $\mathrm{d}$),
\begin{equation}
A(x) \xmapsto{P} (\mathcal{P}^*A)(x) \,, \quad A(x) \xmapsto{T} - (\mathcal{T}^*A)(x) \,.
\end{equation}
(Recall that only if $T$ is realized as an anti-linear operator can it relate two theories which both exhibit a spectrum bounded from below, as a linear $T$ would map $H \mapsto -H$.) On the other hand, both signs are compatible with matter charged under shift symmetries, as occurs e.g.\ in flux compactifications.
Note that self-duality conditions such as
\begin{equation}
F_5 = * F_5
\end{equation}
of type IIB supergravity are not compatible with orientation reversing isometries, as by
\begin{eqnarray}
\lefteqn{\phi^*(\eta \wedge * \omega) = \phi^*\eta \wedge \phi^* (*\omega) =} \\
&&\phi^* \big( \langle \eta, \omega \rangle_g {\rm vol}_g \big) = \langle \phi^* \eta, \phi^* \omega \rangle_g \phi^* {\rm vol}_g = \pm \phi^* \eta\wedge * \phi^* \omega \,,
\end{eqnarray}
an orientation reversing isometry $\phi$ anti-commutes with the Hodge star
\begin{equation} \label{eq:PhionHodgeStar}
\phi^* (* \omega) = - * \phi^* \omega \,.
\end{equation}
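Explicitly, pulling the self-duality condition back along an orientation reversing isometry $\phi$ and using \eqref{eq:PhionHodgeStar} yields
\begin{equation}
\phi^* F_5 = \phi^* (* F_5) = - * \phi^* F_5 \,,
\end{equation}
i.e.\ the transformed field strength would have to be anti-self-dual, which is inconsistent with self-duality for any non-vanishing $F_5$.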
\section{Time reversal} \label{sec:time_reversal}
In this section, we will explore the time reversal symmetry of type II string theory compactifications on Calabi-Yau manifolds, which lead to 4d theories with $\mathcal{N}=2$ supersymmetry. The action of time reversal in 4d spacetime is orientation reversing. Unlike the case of parity to which we shall turn below, it does not appear natural to compose this action with an action on the internal dimensions in defining $T$. By the argument at the end of section \ref{subsec:trans_p_forms}, it is therefore difficult to take type IIB supergravity as a starting point for our considerations, and we anchor our discussion in type IIA theory instead. Note that by mirror symmetry, both 10d vantage points should ultimately give rise to the same conclusions in 4d.
\subsection{The action of time reversal in 10d supergravity} \label{subsec:T_in_10d}
The bosonic action of type IIA supergravity is given by
\begin{eqnarray}
S^{\text{IIA}} &=& \frac{1}{2\kappa^2} \int \Big[ e^{-2\phi} \left( R *1 + 4 \mathrm{d} \phi \wedge * \mathrm{d} \phi - \frac{1}{2} H_3 \wedge * H_3 \right) \label{eq:10d_action}\\
&& - \frac{1}{2} \left( F_2 \wedge * F_2 + F_4 \wedge * F_4 \right) - \frac{1}{2} \left(B_2 \wedge \mathrm{d} C_3 \wedge \mathrm{d} C_3 \right) \Big] \,, \nonumber
\end{eqnarray}
where
\begin{equation}
F_2 = \mathrm{d} C_1 \,, \quad F_4 = \mathrm{d} C_3 - B_2 \wedge \mathrm{d} C_1 \,, \quad H_3 = \mathrm{d} B_2 \,.
\end{equation}
By the discussion in section \ref{subsec:trans_p_forms}, we need to introduce intrinsic phases under time reversal to render the topological terms in this action time reversal invariant. As the term
\begin{equation}
B_2 \wedge \mathrm{d} C_3 \wedge \mathrm{d} C_3
\end{equation}
is quadratic in $\mathrm{d} C_3$, it fixes the required transformation property
\begin{equation} \label{eq:TonB}
B_2(x) \xmapsto{T} - \mathcal{T}^*(B_2)(x)
\end{equation}
of $B_2$ uniquely. But then, for $F_4$ to transform with a definite overall sign under time reversal, we need to require that $C_1$ and $C_3$ transform with opposite signs,
\begin{equation}
C_1(x) \xmapsto{T} \pm (\mathcal{T}^*C_1)(x) \,, \quad C_3(x) \xmapsto{T} \mp (\mathcal{T}^* C_3)(x) \,. \label{eq:TonAandC}
\end{equation}
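The sign assignments just derived can be summarized in a small bookkeeping exercise. In the following sketch (plain Python; the encoding of intrinsic phases as signs $\pm 1$ is ours), each term of the action contributes one orientation flip from the pullback along $\mathcal{T}$ and one extra sign per Hodge star, cf.\ \eqref{eq:PhionHodgeStar}:

```python
# Sign bookkeeping for time reversal in the IIA action: only the intrinsic
# phases s_B, s_C1, s_C3 = +/-1 and the orientation factor are tracked.
# Under T: (i) the integral of a top form flips sign (T is orientation
# reversing), (ii) each Hodge star contributes an extra -1.

def kinetic_sign(s_field):
    # s^2 from the two field strengths, -1 from the star, -1 from the integral
    return s_field**2 * (-1) * (-1)

def topological_sign(s_B, s_C3):
    # B_2 ^ dC_3 ^ dC_3: no Hodge star, one orientation flip
    return s_B * s_C3**2 * (-1)

# Kinetic terms are invariant for either intrinsic sign:
assert all(kinetic_sign(s) == +1 for s in (+1, -1))

# The topological term forces s_B = -1, independently of s_C3:
solutions = [(s_B, s_C3) for s_B in (+1, -1) for s_C3 in (+1, -1)
             if topological_sign(s_B, s_C3) == +1]
assert {s_B for s_B, _ in solutions} == {-1}

# F_4 = dC_3 - B_2 ^ dC_1 transforms uniformly iff s_C3 = -s_C1,
# since the B_2 ^ dC_1 piece carries the sign s_B * s_C1 = -s_C1:
uniform = [(s1, s3) for s1 in (+1, -1) for s3 in (+1, -1) if s3 == -s1]
assert uniform == [(+1, -1), (-1, +1)]
```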
The $p$-form fields occurring in type II string theory are sourced by D-branes. The coupling occurs via a term
\begin{equation} \label{eq:WZW_coupling}
\mu \int_V \,\text{Tr}\, \left[ \exp[2\pi \alpha' \mathcal{F}_2 + B_2] \sum_q C_q \right]
\end{equation}
in the D-brane worldvolume action. Here, $V$ denotes the worldvolume of the brane, $\mu$ its charge and $\mathcal{F}_2$ the field strength (which can be non-abelian, hence the trace) of the gauge field on the brane. From the form of the coupling \eqref{eq:WZW_coupling}, we can read off that in order to preserve $T$,
\begin{itemize}
\item the field strength on the brane must transform as \eqref{eq:TonB},
\item $C_{i}$ and $C_{i+2}$ must transform with opposite sign. This condition is consistent with \eqref{eq:TonAandC}. It also implies that $\mathrm{d} C$ and $*\mathrm{d} C$ transform with equal sign (as $C_1$ and $C_7$ are electric-magnetic duals, as are $C_3$ and $C_5$).
\end{itemize}
\subsection{The action of time reversal on the 4d theory}
\label{subsec:T_in_4d}
Since the 10d supergravity action is invariant under time reversal, the invariance of the 4d theory obtained from it upon compactification is automatic. By invoking mirror symmetry, $\alpha'$ corrections to the theory can be computed and elegantly packaged at the level of the 4d theory. We will set the stage in the next subsection by verifying the time reversal invariance of the tree level 4d action, before turning to the $\alpha'$ corrected action in section \ref{subsubsec:T_in_4d_inst}.
\subsubsection{The theory at tree level} \label{subsubsec:T_in_4d_pert}
The 4d supergravity action obtained from type IIA upon Calabi-Yau compactification will inherit time reversal symmetry. We can see this explicitly. The bosonic action is equal to
\begin{equation} \label{eq:4d_action}
S^{4\text{d}} = \int \Big[ \frac{1}{2} R*1 - g_{i\bar{\jmath}} \mathrm{d} t^i \wedge * \mathrm{d}\bar{t}^{\bar{\jmath}} - h_{uv}\mathrm{d} q^u \wedge * \mathrm{d} q^v + \frac{1}{2} \imP \mathcal{N}_{IJ} F^I \wedge * F^J + \frac{1}{2} \reP \mathcal{N}_{IJ} F^I \wedge F^J \Big]
\end{equation}
with the metric on the hypermultiplet moduli space given by
\begin{eqnarray}
h_{uv} \mathrm{d} q^u \wedge * \mathrm{d} q^v &=& \mathrm{d} \phi \wedge * \mathrm{d} \phi + g_{a\bar{b}} \mathrm{d} z^a \wedge * \mathrm{d} \bar{z}^{\bar{b}} + \\
&& +\frac{e^{4\phi}}{4} \left(\mathrm{d} a + \frac{1}{2} (\tilde{\xi}_A \mathrm{d} \xi^A - \xi^A \mathrm{d} \tilde{\xi}_A )\right) \wedge * \left(\mathrm{d} a + \frac{1}{2}(\tilde{\xi}_A \mathrm{d} \xi^A - \xi^A \mathrm{d} \tilde{\xi}_A) \right) + \nonumber \\
&& -\frac{e^{2\phi}}{2} (\imP \mathcal{M}^{-1})^{AB} \left( \mathrm{d}\tilde{\xi}_A + \mathcal{M}_{AC}\mathrm{d} \xi^C \right) \wedge * \left( \mathrm{d}\tilde{\xi}_B + \overline{\mathcal{M}}_{BD}\mathrm{d}\xi^D \right) \nonumber \,.
\end{eqnarray}
The index $i$ (as well as its alphabetic neighbors\footnote{This qualifier will also apply to all ensuing index attributions $I,a,A, \ldots$.}) enumerates vector multiplets, each containing one complex scalar field $t^i$ and a vector field whose field strength is denoted $F^i$. The index $I$ runs over the range of $i$ with $0$ adjoined. $F^0$ is the field strength of the graviphoton, which resides in the $\mathcal{N}=2$ gravity multiplet, together with the metric. The special geometry relations governing the vector multiplet sector are summarized in appendix \ref{app:4d_SUGRA}. The hypermultiplets are indexed by $A$, which runs over the range of $a$ with $0$ adjoined. The dilaton $\phi$, the axion $a$ and the real pair of scalars $(\xi^0 ,\tilde{\xi}_0)$ reside in the so-called universal hypermultiplet, while each remaining hypermultiplet combines a complex scalar field $z^a$ with a pair of real scalars $(\xi^a ,\tilde{\xi}_a)$.
The matrix ${\mathcal{M}}$ is the mirror dual to the gauge coupling matrix $\mathcal{N}$: in type IIA compactifications on a Calabi-Yau manifold $X$, its expression is given by \eqref{eq:NIJ}, with the prepotential occurring in this definition determined by the special geometry of the complex structure moduli space of $X$. Likewise, the K\"ahler metric $g_{a\bar{b}}$ on the special K\"ahler base of the hypermultiplet moduli space is given by \eqref{eq:sigma_model_metric}, based on the same prepotential.
We shall first consider the transformation behavior of the hyperscalars under time reversal. We can let $z^a$ and the dilaton transform trivially. The matrix $\mathcal{M}$ as a function of $z^a$ is therefore also invariant. The transformation behavior of the axion $a$ is determined by that of the 10d $B$-field: it is related to the space-time components $h_3$ of the field strength of $B$ via
\begin{equation} \label{eq:dual_h3}
\mathrm{d} a = * h_3 + \ldots \,.
\end{equation}
By \eqref{eq:PhionHodgeStar} and \eqref{eq:TonB},
\begin{equation}
*h_3(x) \xmapsto{T} - * (\mathcal{T}^* h_3)(x) = \mathcal{T}^*(*h_3)(x) \,.
\end{equation}
Hence, the intrinsic phase of $a$ under time reversal is $+1$. Finally, the hyperscalars $\xi^A$ and $\tilde{\xi}_A$ in a type IIA compactification on $X$ arise as the expansion coefficients of $C_3$ in a symplectic basis of $H^3(X,\mathbb{Z})$ and therefore both transform with the same sign under time reversal.
We conclude that the hypermultiplet sector preserves time reversal invariance at tree level, no matter which sign we choose in \eqref{eq:TonAandC}.
The hypermultiplet sector generically receives $g_s$ corrections, yet is protected in type IIA compactifications against $\alpha'$ corrections. As one choice of sign in \eqref{eq:TonAandC} leaves the hypermultiplet sector untouched, we can rule out time reversal breaking contributions in the fully quantum corrected action as long as hyper- and vector multiplet contributions do not mix, i.e.\ up to two derivative level. We are tempted to conjecture that the quantum corrected action will retain the symmetry under $(\xi^A, \tilde{\xi}_A) \mapsto (-\xi^A, -\tilde{\xi}_A)$, to render our argument independent of the choice of sign in \eqref{eq:TonAandC}.
Turning now to the more interesting vector multiplet sector, recall that the 10d origin of the graviphoton $A^0$ is the gauge potential $C_1$, and that the real part of the complex scalar fields $t^i$ residing in vector multiplets descend from internal modes of the 10d $B_2$ field, while the imaginary parts encode K\"ahler moduli of the internal metric,
\begin{equation}
t^i = b^i + i v^i \,.
\end{equation}
The behavior of $b^i$ under time reversal follows from \eqref{eq:TonB}:
\begin{equation} \label{eq:time_reversal}
b^i(x) \xmapsto{T} - b^i(\mathcal{T} x) \,, \quad \mathrm{i.e.}\quad t^i(x) \xmapsto{T} -\overline{t^i}(\mathcal{T} x) \,,
\end{equation}
as $T$ acts as an isometry on the metric. We lift the action of time reversal to projective coordinates on the vector multiplet moduli space via
\begin{equation} \label{eq:lift_of_T_to_X}
X^0(x) \xmapsto{T} \pm \overline{X^0}(\mathcal{T} x) \,, \quad X^i(x) \xmapsto{T} \mp \overline{X^i}(\mathcal{T} x) \,.
\end{equation}
Dimensional reduction of the 10d action \eqref{eq:10d_action} leads to the 4d action \eqref{eq:4d_action} with the $\sigma$-model metric $g_{i \bar{\jmath}}$ and the gauge coupling matrix $\mathcal{N}_{IJ}$ obtained from the cubic prepotential
\begin{equation} \label{eq:tree_level_prepotential}
F^{\text{tree}} = - \frac{1}{3!} \frac{\kappa_{ijk} X^i X^j X^k}{X^0} \,.
\end{equation}
Here, $\kappa_{ijk}$ denote the triple intersection numbers, see \eqref{eq:triple_intersection}. The gauge coupling matrix which follows from this prepotential via equation \eqref{eq:NIJ} has components $\reP \mathcal{N}_{00}$, $\reP \mathcal{N}_{ij}$, $\imP \mathcal{N}_{i0}$ which are odd in the fields $b^i$, and complementary components that are even. As $\reP \mathcal{N}$ is the coefficient matrix of the topological term, and $\imP \mathcal{N}$ the coefficient matrix of the gauge kinetic term, this is the 4d manifestation of the 10d argument leading to \eqref{eq:TonAandC}: the graviphoton must transform with opposite sign relative to all other gauge fields (which belong to vector multiplets) in order for time reversal to be a symmetry of the action.
\subsubsection[The $\alpha'$ corrected theory]{The \boldmath{$\alpha'$} corrected theory} \label{subsubsec:T_in_4d_inst}
The above discussion was for the tree level action obtained from dimensional reduction of the type IIA action. The vector multiplet sector is protected against $g_s$ corrections, but does receive $\alpha'$ corrections in type IIA compactifications. These are completely captured via mirror symmetry. We discuss these corrections in this subsection.
Note first that, as the $\kappa_{ijk}$ occurring in \eqref{eq:tree_level_prepotential} are real, the transformation of the tree level prepotential $F^{\text{tree}}$ under time reversal is given by
\begin{equation} \label{eq:T_transformation_of_prepotential}
t^i(x) \xmapsto{T} - \overline{t^i}(\mathcal{T} x) \quad \Rightarrow \quad F(x) \xmapsto{T} - \overline{F}(\mathcal{T} x) \,,
\end{equation}
where we have written $F$ for $F^{\text{tree}}$. In fact, independently of the precise form of the prepotential $F$, the behavior \eqref{eq:T_transformation_of_prepotential} alone guarantees invariance of the action under time reversal, as it implies that the components $\reP \mathcal{N}_{00}$, $\reP \mathcal{N}_{ij}$, $\imP \mathcal{N}_{i0}$ change sign under time reversal, while the complementary components remain invariant. We can see this directly from the presentation \eqref{eq:NIJ} of the gauge coupling matrix in terms of the prepotential: writing
\begin{equation}
F_I = \frac{\partial F}{\partial X^I} \,, \quad F_{IJ} = \frac{\partial^2 F}{\partial X^I \partial X^J} \,,
\end{equation}
note that \eqref{eq:T_transformation_of_prepotential} together with \eqref{eq:lift_of_T_to_X} imply
\begin{equation}
F_0(x) \xmapsto{T} \mp \overline{F_0}(\mathcal{T} x) \,, \quad F_i(x) \xmapsto{T} \pm \overline{F_i}(\mathcal{T} x)
\end{equation}
and
\begin{equation}
F_{00}(x) \xmapsto{T} -\overline{F_{00}}(\mathcal{T} x) \,, \quad F_{i0}(x) \xmapsto{T} \overline{F_{i0}}(\mathcal{T} x) \,, \quad F_{ij}(x) \xmapsto{T} - \overline{F_{ij}}(\mathcal{T} x) \,,
\end{equation}
from which the claim easily follows. Alternatively, we can begin by considering one of the defining relations for $\mathcal{N}_{IJ}$ given in \eqref{eq:NIJ_bis1},
\begin{equation}
\mathcal{N}_{IJ} X^J = F_I \,.
\end{equation}
Then
\begin{equation}
\begin{tikzcd}
&\mathcal{N}_{0J} X^J/X^0 (x)\arrow[mapsto]{d}[swap]{T} \arrow[r, equal]
& F_0/X^0 (x)\arrow[mapsto]{d}[swap]{T}\\
& (\tilde{\mathcal{N}}_{00} - \tilde{\mathcal{N}}_{0j} \bar{t}^j)(\mathcal{T} x) \arrow[r,equal]
& -\overline{F_0}/\overline{X^0} (\mathcal{T} x)\arrow[r, equal]
&- (\overline{\mathcal{N}}_{00} + \overline{\mathcal{N}}_{0j}\bar{t}^j)(\mathcal{T} x)
\end{tikzcd}
\end{equation}
and
\begin{equation}
\begin{tikzcd}
&\mathcal{N}_{iJ} X^J/X^0 \arrow[mapsto]{d}[swap]{T} \arrow[r, equal]
& F_i/X^0 \arrow[mapsto]{d}[swap]{T}\\
& (\tilde{\mathcal{N}}_{i0} - \tilde{\mathcal{N}}_{ij} \bar{t}^j)(\mathcal{T} x) \arrow[r,equal]
& \overline{F_i}/\overline{X^0} (\mathcal{T} x)\arrow[r, equal]
& (\overline{\mathcal{N}}_{i0} + \overline{\mathcal{N}}_{ij}\bar{t}^j)(\mathcal{T} x)
\end{tikzcd}
\end{equation}
where $\tilde{\mathcal{N}}_{IJ}(\mathcal{T} x)$ indicates the image of $\mathcal{N}_{IJ}(x)$ under time reversal. Comparing the constant terms and coefficients of $\bar{t}^i$ yields the result.
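The transformation rules for the $F_{IJ}$ derived above can also be spot-checked numerically. The following sketch evaluates them for the one-modulus cubic prepotential $F = -\kappa (X^1)^3/(6 X^0)$; the value of $\kappa$ and the sample point are arbitrary choices for illustration:

```python
# Numerical spot-check of the T transformation of the second derivatives
# F_IJ for the cubic one-modulus prepotential F = -kappa * X1**3 / (6 * X0).
kappa = 5.0
X0, X1 = 1.3 - 0.4j, 0.7 + 2.1j

# Second derivatives of F with respect to (X0, X1):
def F00(X0, X1): return -kappa * X1**3 / (3 * X0**3)
def F01(X0, X1): return kappa * X1**2 / (2 * X0**2)
def F11(X0, X1): return -kappa * X1 / X0

# Upper sign in (eq:lift_of_T_to_X): X0 -> +conj(X0), X1 -> -conj(X1)
T0, T1 = X0.conjugate(), -X1.conjugate()

close = lambda a, b: abs(a - b) < 1e-12
assert close(F00(T0, T1), -F00(X0, X1).conjugate())  # F_00 -> -conj F_00
assert close(F01(T0, T1), F01(X0, X1).conjugate())   # F_i0 -> +conj F_i0
assert close(F11(T0, T1), -F11(X0, X1).conjugate())  # F_ij -> -conj F_ij
```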
This generalization away from $F^{\text{tree}}$ is important, as in 4d, non-perturbative quantum corrections can be elegantly packaged at the level of the action in terms of corrections to the tree-level prepotential. Based on the foregoing discussion, we conclude that these corrections will not break time reversal invariance if the corrected prepotential still transforms according to \eqref{eq:T_transformation_of_prepotential}. In the vicinity of the large radius point, the exact prepotential has the form
\begin{equation} \label{eq:exact_prepotential}
\mathcal{F}(\boldsymbol{t}) =-\frac{\kappa_{ijk}}{6} t^i t^j t^k -\frac{\sigma_{ij}}{2}t^i t^j+ \gamma_j t^j + \frac{\zeta(3) \chi }{2 (2 \pi i)^3}- \frac{1}{(2\pi i)^3} \sum_{\boldsymbol{n}} a_{\boldsymbol{n}} e^{2\pi i \boldsymbol{n\cdot t}} \,, \quad a_{\boldsymbol{n}} \in \mathbb{Q} \,,
\end{equation}
where $(X^0)^2\mathcal{F}(\boldsymbol{t}) = F(\boldsymbol{X})$. The coefficients appearing in this expansion are explained in the appendix following equation \eqref{eq:large_radius_F_multi_parameter}. The non-perturbative contribution satisfies \eqref{eq:T_transformation_of_prepotential} by reality of the coefficients $a_{\boldsymbol{n}}$. The perturbative contribution, polynomial in $t^i$, satisfies \eqref{eq:T_transformation_of_prepotential} if the coefficients of all odd order terms in the variables $t^i$ are real, and the coefficients of all even order terms imaginary. This is the case for all models with $\sigma_{ij} = 0$. Before concluding that models with some entries $\sigma_{ij} \in \frac{1}{2}\mathbb{Z} - \{0\}$ (the only other values that $\sigma_{ij}$ can take, see appendix~\ref{app:specialkaehler}) break time reversal invariance, we should note that by reality of the coefficients $\sigma_{ij}$, the order 2 terms in the prepotential contribute only to the real part of the gauge coupling matrix \eqref{eq:NIJ}, linearly in $\sigma_{ij}$, giving rise to a term
\begin{equation} \label{eq:bare_theta}
-\frac{1}{2} \sigma_{ij}F^i \wedge F^j
\end{equation}
in the action, up to a normalization constant we have not been keeping track of up to this point. As the values of $\sigma_{ij}$ can shift by integral amounts under monodromy, this coupling is only well-defined if we can identify $2\pi \sigma_{ij}$ with the $\theta_{ij}$ angle of periodicity $2\pi$. If this is true, $\sigma_{ij} \in \frac{1}{2} \mathbb{Z}}\newcommand{\Iz}{\mathbb{z}$ is exactly the constraint which ensures that $\int F^i \wedge F^j \mapsto - \int F^i \wedge F^j$ is a symmetry of the action.
To check this claim, we need to work out the correct normalization of the term \eqref{eq:bare_theta} by reinstating all dimensionful constants and keeping track of the integrality properties of the gauge fields. To keep constants such as $\kappa_{ijk}$ dimensionless, it will be convenient, deviating from standard practice in 4d, not to assign length dimension to coordinates $x^i$ and differentials $\mathrm{d} x^i$. The correct length dimension $\ell^8$ of the topological term in the 10d action \eqref{eq:10d_action}, which requires
\begin{equation}
[B_2] = [H_3] = \ell^2 \,, \quad[C_p] = [F_{p+1}] = \ell^p \,,
\end{equation}
then follows from assigning the appropriate length dimension to the fields $(B_2)_{\mu \nu}$ and $(C_p)_{\mu_1 \cdots \mu_p}$ rather than to the differentials $\mathrm{d} x^{\mu_i}$. To ensure the correct dimension of the kinetic terms, we must assign
\begin{equation}
[g_{\mu \nu} ] = \ell^2
\end{equation}
such that
\begin{equation}
[*1] = [\sqrt{-g}] = \ell^{10}
\end{equation}
and
\begin{equation}
[F_p \wedge * F_p] = [\frac{\sqrt{-g}}{p!} F_{\mu_1 \ldots \mu_p} F_{\nu_1 \ldots \nu_p} g^{\mu_1 \nu_1} \cdots g^{\mu_p \nu_p}] = \ell^{10} \ell^{2(p-1)} \ell^{-2p} = \ell^8 \,.
\end{equation}
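This bookkeeping can be tabulated mechanically. The following sketch (plain Python; recording powers of $\ell$ as integers is our own convention) confirms that every kinetic term, as well as the topological term, lands on $\ell^8$:

```python
# Dimension bookkeeping: coordinates and differentials are dimensionless,
# [g_mn] = l^2, [C_p] = [F_{p+1}] = l^p; we track the integer power of l.
dim_sqrt_g = 10            # [sqrt(-g)] = l^10 in 10d

def dim_F_components(p):   # [F_{m1...mp}] for a p-form field strength
    return p - 1

def dim_kinetic(p):        # [F_p wedge * F_p]: sqrt(-g), two F's, p inverse metrics
    return dim_sqrt_g + 2 * dim_F_components(p) - 2 * p

# Every kinetic term lands on l^8, as required for a 10d Lagrangian density
# (p = 2, 4 for the RR field strengths, p = 3 for H_3):
assert all(dim_kinetic(p) == 8 for p in (2, 3, 4))

# The topological term carries no metric and checks out directly:
dim_topological = 2 + 3 + 3   # [B_2] + 2 [dC_3] = l^2 * l^3 * l^3
assert dim_topological == 8
```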
After these preliminaries, we turn to the integrality properties of the form fields. The relation between the field strengths $F_p$ and integral cohomology follows from the Dirac quantization condition associated to the D-brane source term for these fields (see e.g.\ \cite{Blumenhagen:2013fgp}, also for the following statements relating couplings to the string length $l_s$):
\begin{equation} \label{eq:integrality_fluxes}
\mu_{p-1} \int_{\Sigma_{p+1}} F_{p+1} \in 2 \pi \mathbb{Z} \,.
\end{equation}
The BPS condition implies that the charge $\mu_{p-1}$ of a D$(p-1)$ brane equals its tension $T_{p-1}$. Assuming that the tension of the fundamental string and a D$1$ brane coincide, a worldsheet calculation yields
\begin{equation}
T_p = \frac{2\pi}{l_s^{p+1}} \,,
\end{equation}
where $l_s^2 = 4 \pi^2 \alpha'$. It follows that (with apologies for the multiple uses of the bracket $[\cdot]$)
\begin{equation} \label{eq:flux_quantization}
\left[\frac{F_{p+1}}{l_s^p} \right] \in H^{p+1}(X,\mathbb{Z}) \,.
\end{equation}
A similar worldsheet calculation also yields the 10d gravitational coupling $\kappa^2$ in terms of the string length,
\begin{equation}
\kappa^2 = \frac{1}{4\pi} l_s^8 \,.
\end{equation}
Next, we reinstate the $\alpha'$ dependence in the relation between the field $B_2$ appearing in \eqref{eq:10d_action} and the variable $b^i$ on which the prepotential \eqref{eq:exact_prepotential} depends via $t^i = b^i + i v^i$. The exponentials
\begin{equation}
e^{2 \pi i \boldsymbol{n \cdot t}}
\end{equation}
arise from worldsheet instantons. The $b^i$ dependence stems from the worldsheet integrals
\begin{equation}
\exp\left({\frac{i}{4\pi \alpha'} \int_\Sigma B}\right) \,.
\end{equation}
Introducing the notation $b^i_s$ for the modes of $B$ (in the string worldsheet normalization), we conclude
\begin{equation}
b^i = \frac{1}{8 \pi^2 \alpha'} b^i_s = \frac{b^i_s}{2l_s^2} \,.
\end{equation}
We are now ready to perform the reduction of the topological term
\begin{equation}
-\frac{1}{4\kappa^2} \int B_2 \wedge \mathrm{d} C_3 \wedge \mathrm{d} C_3
\end{equation}
in the action \eqref{eq:10d_action}, which leads to the perturbative contribution proportional to $\reP \mathcal{N}_{ij}$ in the 4d action: choosing a basis of representatives $\{\omega_i\}$ of the cohomology $H^2(X,\mathbb{Z})$ normalized as
\begin{equation}
\int_X \omega_i \wedge \omega_j \wedge \omega_k = \kappa_{ijk} \,,
\end{equation}
we obtain
\begin{eqnarray}
-\frac{1}{4\kappa^2} \int_{M_4 \times X} B_2 \wedge \mathrm{d} C_3 \wedge \mathrm{d} C_3 &=& - \frac{\pi}{l_s^8} \int \kappa_{ijk} b^i_s \,\mathrm{d} A_s^j \wedge \mathrm{d} A_s^k \\
&=& - \frac{2\pi}{4\pi^2} \int \kappa_{ijk} \,\frac{b^i_s}{2 l_s^2}\, \mathrm{d}\,\frac{2 \pi A_s^j}{l_s^3} \wedge \mathrm{d}\frac{2\pi A_s^k}{l_s^3} \\
&=& - \frac{2\pi}{4\pi^2} \int \kappa_{ijk} \,b^i \,\mathrm{d} A^j \wedge \mathrm{d} A^k \,.
\end{eqnarray}
To accompany $b^i_s$, we have here introduced the modes $C_3 = A^i_s \omega_i + \ldots$, which by \eqref{eq:integrality_fluxes} are related to gauge fields with the conventional 4d normalization $\int F \in 2 \pi \mathbb{Z}$ via
\begin{equation}
A^i = \frac{2\pi}{l_s^3} A_s^i \,.
\end{equation}
Comparing to the corresponding term in the 4d action \eqref{eq:4d_action},
\begin{equation}
\frac{1}{2} \int \reP \mathcal{N}_{ij} F^i \wedge F^j \,,
\end{equation}
we conclude
\begin{equation}
\frac{1}{2} \reP \mathcal{N}_{ij}^{\text{tree}} = - \frac{2 \pi}{4 \pi^2} \, \kappa_{ijk}b^k \,.
\end{equation}
With the correct normalization thus fixed, $\sigma_{ij}$ hence contributes
\begin{equation}
\frac{1}{2}\reP \mathcal{N}_{ij}^{\sigma} = -\frac{2\pi}{4\pi^2} \sigma_{ij}
\label{eq:sigmaoneparameter}
\end{equation}
to the gauge kinetic term, as we wished to show.
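The chain of substitutions leading to \eqref{eq:sigmaoneparameter} can be cross-checked by tracking rational prefactors together with the powers of $\pi$ and $l_s$. A sketch (plain Python; the tuple encoding of monomials $c\,\pi^a l_s^b$ is our own bookkeeping device):

```python
from fractions import Fraction as Fr

# Represent a monomial c * pi**a * ls**b by the tuple (c, a, b); multiplying
# monomials multiplies the rational coefficients and adds the exponents.
def mul(*ms):
    c, a, b = Fr(1), 0, 0
    for ci, ai, bi in ms:
        c, a, b = c * ci, a + ai, b + bi
    return (c, a, b)

prefactor = (Fr(-1), 1, -8)       # -1/(4 kappa^2) = -pi / ls^8
b_subst   = (Fr(2), 0, 2)         # b_s = 2 ls^2 b
A_subst   = (Fr(1, 2), -1, 3)     # A_s = ls^3 A / (2 pi), one per gauge field

# Coefficient of kappa_ijk b^i dA^j ^ dA^k after substituting b_s and A_s:
coeff = mul(prefactor, b_subst, A_subst, A_subst)
assert coeff == (Fr(-1, 2), -1, 0)  # = -1/(2 pi) = -(2 pi)/(4 pi^2)

# Consistency of b = b_s/(8 pi^2 alpha') with b = b_s/(2 ls^2):
alpha_prime = (Fr(1, 4), -2, 2)     # alpha' = ls^2 / (4 pi^2)
assert mul((Fr(8), 2, 0), alpha_prime) == (Fr(2), 0, 2)  # 8 pi^2 alpha' = 2 ls^2
```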
Note that in non-abelian gauge theories whose gauge group exhibits a non-trivial center, the physics at $\theta=0$ and $\theta = \pi$ is markedly different \cite{Gaiotto:2017yup}. Such theories are accessible via Calabi-Yau compactifications of type II string theory via the process of geometric engineering \cite{Katz:1996fh}, and thus fit into the framework just described. As an example of this setup, we consider the engineering of gauge theories with the same gauge group SU(2) and vanishing matter content but different $\theta$ angle. The engineering geometries are Calabi-Yau threefolds which can be presented both as K3 fibrations and as elliptic fibrations over the Hirzebruch surfaces $F_n$, $n=0,1$.\footnote{Over the base $F_1$, the K3 and the elliptic fibration are associated to different K\"ahler cones in the extended K\"ahler cone of the Calabi-Yau manifold.} These compact Calabi-Yau manifolds exhibit
three K\"ahler moduli, traditionally labelled by $s$, $t$, and $u$: the first corresponds to the base $[B]$ of the rationally fibered Hirzebruch surface, the second to its rational fiber $[F]$, and the third corresponds to the class $[F]+[E]$, with $[E]$ the class of the
elliptic fiber. In terms of these parameters, the prepotential reads (in the K\"ahler cone of the K3 fibration)
\begin{eqnarray}
{\cal F}&=&-\frac{1}{6}(8\, u^3+3(2+n)\, u^2 s+ 6\, u^2 t+3n \,u t^2 +6 \,stu) \nonumber\\
&&- \frac{ n }{2} u t +\frac{1}{24}(92\, u+12(2+n)\, t +24\,s) + \frac{480 \zeta(3)}{2(2\pi i) ^3}+{\cal F}_{\text{inst}}(Q)\ .
\label{eq:Fstu}
\end{eqnarray}
The form of the coefficient of the quadratic term $ut$ follows upon imposing integrality of the transformation matrix implementing the monodromies of the period vector under $u\mapsto u+1$ and $s\mapsto s+1$, as we argue in appendix \ref{app:computing_omega} after equation \eqref{eq:TshiftsatMUM}. In both cases,
we can decouple gravity by taking the volume of the elliptic curve
$[E]$ to infinity, while performing a double scaling limit on the remaining two K\"ahler classes in which we also take
the volume of the base $[B]$ to infinity, while the volume of the fiber
$[F]$, which governs the masses of the $W^\pm$ bosons of the SU(2) gauge theory, becomes hierarchically
small~\cite{Katz:1996fh}. Note that this is the weak coupling limit of the dual heterotic
string, as the volume $s\sim 1/g^2_{\text{het}}$ of the base $[B]$ of the Hirzebruch surfaces can be identified with the base of the K3 fibration. Following the
analysis of~\cite{Katz:1996fh} we see from the classical terms in \eqref{eq:Fstu} that in the decoupling
limit the real part of the gauge kinetic function evaluates to $\frac{1}{2} {\rm Re } {\cal N}^\sigma_{tt}=-
\frac{1}{4 \pi} n $, where $t$ corresponds to the scalar vacuum expectation value of the U(1) vector multiplet inside the SU(2). This indicates that we can engineer SU(2) Seiberg-Witten theory with $\theta=0$ for $n=0$ and $\theta = \pi$ for $n=1$.
\subsection{Spontaneously breaking time reversal invariance} \label{subsec:spontaneous_breaking_of_T}
In the previous subsection, we concluded that time reversal acts on the field $\boldsymbol{t}(x)$ as
\begin{equation}
\boldsymbol{t}(x) \xmapsto{T} - \overline{\boldsymbol{t}}(\mathcal{T} x) \,.
\end{equation}
The vacuum expectation value $\boldsymbol{t}_0$ of the field $\boldsymbol{t}$ is invariant under this transformation only if (assuming $\boldsymbol{t}_0$ constant)
\begin{equation}
\re \boldsymbol{t}_0 = 0 \,.
\end{equation}
Before concluding that all other VEVs break time reversal symmetry, we recall that $\mathcal{N}=2$ supergravity permits a symplectic action in the vector multiplet sector, as we review in appendix \ref{app:sympl_vs_monodromy}. A subgroup of the symplectic group acts as a symmetry on the theory. In type IIA Calabi-Yau compactifications, this symmetry group can be identified with the monodromy group acting on the middle dimensional homology of the mirror Calabi-Yau manifold. As long as we do not consider non-trivial gauge field backgrounds, we can identify all VEVs of $\boldsymbol{t}$ related by this symmetry as describing the same theory.\footnote{In the presence of non-trivial gauge field backgrounds, the identification would also require transforming this background.} The monodromy around the MUM point of a Calabi-Yau manifold can be written down in terms of its topological invariants, see \eqref{eq:TshiftsatMUM}; it induces the following shift symmetries on the variable $\boldsymbol{t}$:
\begin{equation}
\boldsymbol{t} \rightarrow \boldsymbol{t} + \sum_i n_i \boldsymbol{e}_i \,, \quad n_i \in \mathbb{Z} \,.
\end{equation}
Given $\boldsymbol{t}_0$ such that $\re t_0^i \in \{0,\pm \frac{1}{2}\}$, the choice
\begin{equation}
n_i =
\begin{cases}
-1 \quad &\text{for } \re t_0^i = \frac{1}{2} \\
\hphantom{-}0 \quad &\text{for } \re t_0^i = 0 \\
\hphantom{-}1 \quad &\text{for } \re t_0^i = -\frac{1}{2} \\
\end{cases}
\end{equation}
maps $\boldsymbol{t}_0$ precisely to $-\overline{\boldsymbol{t}}_0$, i.e.\ the time reversal image of the VEV is monodromy equivalent to the VEV itself. We conclude that all VEVs with $\re t_0^i \in \{0,\pm \frac{1}{2}\}$ are compatible with the conservation of time reversal invariance.
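The interplay between time reversal and the monodromy shift can be made explicit: for $\re t_0^i \in \{0, \pm\frac{1}{2}\}$, the shift by the $n_i$ chosen above lands exactly on the time reversal image $-\overline{t^i_0}$. A sketch (plain Python; the sample imaginary parts are arbitrary):

```python
# For Re(t0) in {0, +1/2, -1/2}, the monodromy shift by the integer n chosen
# in the case distinction above maps t0 to its time reversal image -conj(t0).
def n_shift(re_t0):
    return {0.5: -1, 0.0: 0, -0.5: 1}[re_t0]

for re in (0.5, 0.0, -0.5):
    t0 = complex(re, 0.83)                    # the imaginary part is arbitrary
    assert -t0.conjugate() == t0 + n_shift(re)

# A generic real part is not repaired by any integral shift:
t0 = complex(0.25, 0.83)
assert all(-t0.conjugate() != t0 + n for n in range(-5, 6))
```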
\subsection{Time reversal symmetry away from the large radius point} \label{subsec:T_away_from_large_radius}
Up to now, we have implicitly considered time reversal symmetry in a vicinity of the large radius point. It is in this region that the 4d action \eqref{eq:4d_action}, with $h^{1,1}(X)$ vector multiplets, $h^{2,1}(X)+1$ hypermultiplets, and the prepotential given by \eqref{eq:exact_prepotential}, is a valid approximation of the theory. In particular, we have full knowledge of the massless spectrum of the theory here. Moving away from the large radius point, two phenomena may occur which invalidate the action \eqref{eq:4d_action}: additional states may become light, and a non-perturbative symplectic transformation may be required which changes which entries in the period vector \eqref{eq:period_vector} can be chosen to define coordinates on the scalar manifold of the vector multiplet sector (see the discussion in appendix \ref{app:sympl_vs_monodromy}). As the computation of the period vector takes place on the mirror manifold, it is convenient to continue the discussion from the vantage point of the mirror, and we will do so for the rest of this section.
Regarding the question of additional light states, while the absence of additional singularities in the prepotential is a suggestive criterion for the absence of such states in a given region, it is not necessarily fully reliable. E.g., consider a family of Calabi-Yau manifolds $\mathcal{X}$ and a point $z$ in moduli space at which the lattice
\begin{equation} \label{eq:intersection_with_H21}
\left(H^{2,1}(\mathcal{X}_{z}) \oplus H^{1,2}(\mathcal{X}_{z})\right) \cap H^{3}(\mathcal{X}_z,\mathbb{Z})
\end{equation}
is at least of rank 2. At such a point, infinitely many non-proportional D-brane charges lead to vanishing central charge, yielding an infinite number of candidates for massless states. Points on moduli space satisfying this constraint on the cohomology lattice of the associated manifold exist, and are indeed very special. As we discuss in appendix \ref{appendix:SUSY_vacua}, if the lattice \eqref{eq:intersection_with_H21} has a rank 2 sublattice whose complexification has a Hodge decomposition, such points coincide with supersymmetric vacua of type IIB flux compactifications. We leave the investigation of the intriguing question of additional massless states at such points for future study.
In the remaining part of this section, we make some preliminary remarks regarding the choice of a distinguished symplectic frame away from large radius, and the relation to time reversal invariance. At a generic point on moduli space, such a choice will not exist. At conifold points, a family of distinguished frames is well-motivated in \cite{Huang:2006hq} in the context of imposing the so-called gap condition on topological string amplitudes. In the same reference, a tentative criterion is also put forth for orbifold points. Here, we would like to make an observation regarding a more subtle class of distinguished points on moduli space, so-called attractor points of rank 2, defined by the condition that the lattice
\begin{equation}
H^{3}(\mathcal{X}_z,\mathbb{Z}) \cap \left(H^{3,0}(\mathcal{X}_{z}) \oplus H^{0,3}(\mathcal{X}_{z})\right)
\end{equation}
have rank 2. As we discuss in appendix \ref{appendix:SUSY_vacua}, these points coincide with the supersymmetric vacua discussed above in one-parameter models.
Our simple observation is the following: at such points, the gauge coupling matrix $\mathcal{N}$ can be put in block diagonal form via a rational symplectic transformation.
Identifying the graviphoton and its magnetic dual with the modes of $C_4$ (recall that we are considering type IIB compactification)\footnote{As the back and forth between IIA, IIB, the compactification manifold $X$, and its mirror $\check X$ can be mind-bending, let us restate the situation: we are discussing the gauge coupling matrix $\mathcal{N}$ as obtained at a distinguished point on the K\"ahler moduli space of $X$ upon type IIA compactification. We perform our computation by considering the mirror image $z$ of the point, which is a distinguished point on the complex structure moduli space of $\check X$, and obtain $\mathcal{N}$ via type IIB compactification on $\check X$.} of Hodge type $(3,0) \oplus (0,3)$ \cite{Billo:1995ge}, one block of unit size determines the gauge coupling and theta angle for the graviphoton, and the other of size $b_2(\mathcal{X}_{\boldsymbol{\cdot}}) \times b_2(\mathcal{X}_{\boldsymbol{\cdot}})$ determines the couplings for the remaining vector fields. This form of $\mathcal{N}$ would indicate the decoupling of the graviphoton from the vector multiplets at rank 2 attractor points.
To argue for the form of $\mathcal{N}$, we first introduce the two lattices
\begin{eqnarray}
\Lambda = H^3(\mathcal{X}_z,\mathbb{Z}) \cap \left(H^{3,0}(\mathcal{X}_z) \oplus H^{0,3}(\mathcal{X}_z) \right)\,, \label{eq:Lambda}\\
\Lambda^\perp = H^3(\mathcal{X}_z,\mathbb{Z}) \cap \left(H^{2,1}(\mathcal{X}_z) \oplus H^{1,2}(\mathcal{X}_z) \right)\,,
\end{eqnarray}
as well as the notation $\Lambda_{\mathbb{Q}} = \Lambda \otimes \mathbb{Q}$ and $\Lambda^\perp_{\mathbb{Q}} = \Lambda^\perp \otimes \mathbb{Q}$. When $\Lambda$ is of rank 2, we can choose two elements $\alpha^0, \beta_0$ of a symplectic basis satisfying \eqref{eq:dual_basis} and underlying both the mode expansion of $C_4$ and the definition of the period vector associated to $\Omega$ (see \eqref{eq:Omega_expansion}) to lie in $\Lambda_{\mathbb{Q}}$, and the remaining basis elements to lie in $\Lambda^\perp_{\mathbb{Q}}$. With this choice, $X^i = F_i = 0$ and $\nabla_i X^0 = \nabla_i F_0 = 0$ for $i\neq 0$. We can immediately conclude from the presentation \eqref{eq:sol_N} of the gauge coupling matrix that with this choice of representatives of a basis of $H^3(X, \mathbb{Q})$,
\begin{equation} \label{eq:diagonal_N}
\mathcal{N}_{\mathbb{Q}} = \begin{pmatrix}
F_0/X^0 & 0 \\
0 & \overline{\nabla_{\boldsymbol{\cdot}} F_{\boldsymbol{\cdot}}}(\overline{\nabla_{\boldsymbol{\cdot}} X^{\boldsymbol{\cdot}}})^{-1}
\end{pmatrix} \,,
\end{equation}
with $\nabla_{\boldsymbol{\cdot}} F_{\boldsymbol{\cdot}}$ and $\nabla_{\boldsymbol{\cdot}} X^{\boldsymbol{\cdot}}$ denoting the matrices with entries $\nabla_i F_j$ and $\nabla_i X^j$ respectively.
The astute reader has undoubtedly remarked on the unsettling multiple appearance of the field $\mathbb{Q}$ in the preceding two paragraphs. The necessity of tensoring by $\mathbb{Q}$ arises because the lattice $\Lambda \oplus \Lambda^\perp$ is generically of finite index in $H^3(\mathcal{X}_z,\mathbb{Z})$. To permit the normalization $\int \alpha_I \wedge \beta^I = 1$, we will hence generically require recourse to a rational normalization of elements in $\Lambda$. Put differently, to reach the form \eqref{eq:diagonal_N} from a properly normalized symplectic basis for $H^3(\mathcal{X}_z,\mathbb{Z})$ requires acting with an element of the rational symplectic group $\mathrm{Sp}(2(b_2+1),\mathbb{Q})$. This is troubling because, as outlined in the opening paragraph of appendix \ref{app:sympl_vs_monodromy}, the same symplectic transformation that acts on the period vector also acts on the vector $(G^-, F^-)$ of field strengths. A rational symplectic transformation of the form
\begin{equation}
S = \frac{1}{r} \tilde{S} \,, \quad \tilde{S} \in \text{Mat}_{n \times n}(\mathbb{Z})
\end{equation}
with $n=2(b_2+1)$ and $r \in \mathbb{N}$ chosen minimally would be permissible in the case of a charge lattice populated only in $r$-fold multiples of the elementary charges. We leave the investigation of this question, together with the more pressing conundrum regarding the spectrum of light states at rank 2 attractor points, for future study.
If a legitimate symplectic frame in which $\mathcal{N}$ takes the form \eqref{eq:diagonal_N} exists, it is tempting, particularly in the one-parameter case, to explicitly compute the diagonal entries of $\mathcal{N}$ to see whether they are in any way distinguished. In the context of this paper, a natural question is whether they yield $\theta$ angles that are either $0$ or $\pi$, i.e.\ that preserve time reversal invariance without recourse to a monodromy symmetry. This turns out not to be the case. As we feel that the computation of these diagonal entries itself is interesting, we will discuss one example despite this negative result.
Consider the family of Calabi-Yau manifolds associated to the Picard-Fuchs equation AESZ 34 \cite{almkvist2005} which has an attractor point of rank two at $z=-\frac{1}{7}$ \cite{Candelas:2019llw}.\footnote{There are two Calabi-Yau threefolds described in \cite{almkvist2005}, one of which is a free $\mathbb{Z}/10\mathbb{Z}$ quotient of the other. Here, for simplicity, we will only consider the quotient manifold. This corresponds to the case $\kappa=1$ in the notation of \cite{Candelas:2019llw}.} This Picard-Fuchs equation has the Riemann symbol
\begin{equation}
\label{eq:RiemannSymbolOfAESZ34}
\mathcal{P} \left\{~\begin{matrix}
0 & \frac{1}{25} & \frac{1}{9} & 1 &\infty\\ \hline
~0 & 0 & 0 & 0 & 1 \\
~0 & 1 & 1 & 1 & 1 \\
~0 & 1 & 1 & 1 & 2\\
~0 & 2 & 2 & 2 & 2
\end{matrix}~z\right\}
\end{equation}
and it describes the variation of Hodge structure of a family of Calabi-Yau manifolds with Hodge number $h^{2,1}=1$. Mirror to this family is another family of Calabi-Yau manifolds with triple intersection number $D^3=12$, second Chern class $c_2\cdot D = 12$ and Euler characteristic $\chi= -8$. With this topological data in hand, we may compute the periods in an integral symplectic basis around the MUM point at $z=0$ (see appendix~\ref{app:specialkaehler}, in particular equation \eqref{eq:ChangeOfBasisMatrixAroundMUMPT}). By analytically continuing these solutions to the attractor point at $z=-\frac{1}{7}$, it was found numerically in \cite{Candelas:2019llw} that the periods $\Pi$ in an integral symplectic basis are given by
\begin{equation}
\Pi(-\tfrac{1}{7})=\omega_1\begin{pmatrix}8 \\ -30 \\ 0 \\ 5 \end{pmatrix}+i\, \omega_2 \begin{pmatrix} 0 \\ 0 \\ 2 \\ 1 \end{pmatrix}\,,
\end{equation}
where\footnote{Note that, in comparison with the normalization in \cite{Candelas:2019llw}, our periods contain an additional factor of $(2\pi i)^3$ so that $\Omega$ is an algebraic form defined over $\mathbb{Q}$.}
\begin{equation}
\omega_1=13.323239482723603\cdots~,~~~~\omega_2=-80.866444656616459\cdots\,.
\end{equation}
This implies that in terms of the basis dual to the basis of $H_3(X,\mathbb{Z})$ with regard to which the period vector $\Pi$ is expressed, a set of generators of $\Lambda$ is given by
\begin{equation} \label{eq:gens_Lambda}
(4,-15,-5,0)^T~~~~~\text{and}~~~~~(0,0,2,1)^T \,.
\end{equation}
Similarly,
\begin{equation}
\label{eq:covderiv at attractor point}
\nabla_z\Pi\left(-\tfrac{1}{7}\right) = \widetilde{\omega}_1 \begin{pmatrix} 3 \\ -6 \\ 0 \\ 1 \end{pmatrix}
+i\,\widetilde{\omega}_2
\begin{pmatrix} -7 \\ 14 \\ -10 \\ -5 \end{pmatrix} \,,
\end{equation}
where
\begin{equation}
\widetilde{\omega}_1=51.010880877055569\cdots~,~~~~\widetilde{\omega}_2 = -38.125487167252326\cdots~.
\end{equation}
Hence, with regard to the same basis as above, a set of generators of $\Lambda^\perp$ is given by
\begin{equation} \label{eq:gens_Lambda_perp}
(3,-6,0,1)^T~~~~~\text{and}~~~~~(1,-2,-5,-1)^T~.
\end{equation}
One readily checks that the integral of the wedge product of the generators of $\Lambda$ is equal to $\pm7$, and the same is true for the generators of $\Lambda^\perp$. Furthermore, the integral of the wedge product of an element of $\Lambda$ with an element of $\Lambda^\perp$ is indeed zero, as follows from considerations of Hodge type. Thus, we find that $\Lambda\oplus\Lambda^\perp$ is an index $7^2$ sublattice of $H^3(\mathcal{X}_z,\mathbb{Z})$. It is, therefore, impossible to assemble an integral symplectic basis of the third cohomology from elements of this lattice. However, normalizing the first period in \eqref{eq:gens_Lambda} and the second period in \eqref{eq:gens_Lambda_perp} by a factor of $-\frac{1}{7}$ yields a rational symplectic basis of this space which respects the Hodge splitting. The period vector $\Pi$ of the holomorphic three-form at $z=-\frac{1}{7}$, expressed in the standard basis at the MUM point, is transformed into this basis by multiplication by the matrix $S$,
\begin{equation}
S = \frac{1}{7}\begin{pmatrix}
14 & 7 & 0 & 0 \\
5 & 1 & 1 & -2\\
-5 & 0 & -4 & 15\\
0 & -7 & 21 & -42
\end{pmatrix} \in \frac{1}{7}\mathrm{Sp}(4,\mathbb{Z})~.
\end{equation}
As discussed above, the new basis diagonalizes the coupling matrix, yielding
\begin{equation}
\mathcal{N}_\mathbb{Q} = \begin{pmatrix}
\frac{-7}{\tau_\Lambda+3} & 0 \\
0 & \frac{3\tau_{\Lambda^\perp}-2}{14\tau_{\Lambda^\perp}-7}
\end{pmatrix}\,,
\end{equation}
where
\begin{equation}
\tau_\Lambda =-\frac{1}{2}+i\, 3.034789127729667\cdots~,~~~~\tau_{\Lambda^\perp} =\frac{1}{2} + i\, 0.373699556954729\cdots~.
\end{equation}
The numbers $\tau_\Lambda$ and $\tau_{\Lambda^\perp}$ are known to be the ratios of periods of modular forms of weight $4$ and $2$ respectively for the congruence subgroup $\Gamma_0(14)\subset \mathrm{SL}(2,\mathbb{Z})$ \cite{Candelas:2019llw}. For example, the number $\tau_{\Lambda^\perp}$ may be understood as the complex structure of the elliptic curve
\begin{equation}
y^2+xy+y=x^3+4x-6
\end{equation}
with $j$-invariant equal to $\left(\frac{215}{28}\right)^3$. However, as emphasized in \cite{Kachru:2020abh}, this is only one among many possible rational models. Which (if any) rational models are singled out by string theory remains an open question.
\section{\boldmath{$CP$} symmetry} \label{sec:CP}
Parity symmetry $P$ is more subtle to define than time-reversal symmetry $T$ in the context of compactifications, as we expect $P$ to also act on the compactification manifold. In addition to the considerations of section \ref{sec:time_reversal}, we hence need to generalize the spatial involution $x^i \mapsto -x^i$ to the case of manifolds which generically do not permit global coordinates. We will consider this generalization in the next subsection, before turning to the ensuing action in four dimensions.
\subsection[The action of $CP$ on the internal manifold]{The action of \boldmath{$CP$} on the internal manifold} \label{subsec:CP_internal}
In \cite{Strominger:1985it}, Strominger and Witten note that composing 4d parity with an orientation reversing involutive isometry in the internal dimensions yields an orientation preserving map and is a symmetry of type I supergravity, whereas 10d parity acting on type I supergravity on $\mathbb{R}^{1,9}$ is not. For a generic compactification manifold $X$, it is not clear whether such a map exists. When a family of compactification manifolds $\mathcal{X}$ is constructed by considering hypersurfaces or complete intersections in complex projective space (or more generally, a product of weighted projective spaces), an orientation reversing involution can be constructed for those members of the family for which the coefficients of the defining equations lie in $\mathbb{R}$. In the following, we will call this the real slice of complex structure moduli space. If we refer to the $\mathbb{C}$-valued solution set of these equations as $\mathcal{X}_z(\mathbb{C})$, with $z$ specifying one such choice of real coefficients, complex conjugation defines an involution $c$:
\begin{eqnarray}
c: \mathcal{X}_z(\mathbb{C}) &\rightarrow& \mathcal{X}_z(\mathbb{C}) \nonumber \\
p &\mapsto& \bar{p} \,.
\end{eqnarray}
When $\mathcal{X}$ describes a family of Calabi-Yau varieties with $h^{1,1} = 1$, $c$ induces an isometry of the Ricci flat metric $g$ on $\mathcal{X}_z$ for any choice of K\"ahler class $[\omega]$. To see this, note that $c^*g$ is also Ricci flat. We thus need to show that the associated K\"ahler class is equal to $[\omega]$; Yau's theorem will then allow us to conclude that $g = c^*g$. We can compute this class as follows:
\begin{equation}
c^* [\omega] = \alpha [\omega]
\end{equation}
as $h^{1,1} = 1$, and $\alpha = \pm 1$ by $c^2 =1$. As $c$ is orientation reversing,
\begin{equation}
\int c^*(\omega \wedge \omega \wedge \omega) = - \int \omega \wedge \omega \wedge \omega \,,
\end{equation}
hence $\alpha = -1$. Note that $-c^* \omega$ is the K\"ahler form associated to the metric $c^* g$, as $c$ maps the complex structure $J$ of $\mathcal{X}_z$ to its negative,
\begin{equation} \label{eq:pullback_Kaehler_form}
-c^* \omega = -c^* ( g \circ J \otimes 1) = c^*(g) \circ J \otimes 1 \,,
\end{equation}
thus concluding the demonstration. For $h^{1,1}>1$, by \eqref{eq:pullback_Kaehler_form}, we still require $[\omega] = - c^* [\omega]$ to conclude $c^*g = g$. However, we now must restrict the K\"ahler classes we consider to the subset satisfying this constraint. This subset is not empty: given any metric $g$ on $\mathcal{X}_z$ with associated K\"ahler form $\omega$, $g + c^*g$ again defines a metric, with associated K\"ahler form $\omega - c^* \omega$ satisfying the constraint.
In the following, we will restrict to the region in the complex structure moduli space and the K\"ahler cone for which $c$ exists and describes an isometry.
To work out the effect of this involutive isometry on the compactified theory, we need to compute the pullback $c^*$ of the involution on representatives of the cohomology of $\mathcal{X}_z(\mathbb{C})$ that enter into the compactification. Assuming that $\mathcal{X}_z$ is a Calabi-Yau manifold, we can argue as follows for the middle dimensional cohomology: the space $H^0(\mathcal{X}_z, \Omega^3)$ of global sections of the sheaf of algebraic 3-forms $\Omega^3$ is one-dimensional. A choice of section (obtained via a residue formula on the ambient space) exists~\cite{MR229641}
which does not involve complex coefficients. Calling this choice $\Omega$, we thus have
\begin{equation} \label{eq:c_star_Omega}
c^* \Omega = \bar{\Omega} \,.
\end{equation}
The authors of \cite{Strominger:1985it} observe that this transformation implies that in heterotic compactifications, charged matter in a representation $R$ of the gauge group is mapped to the conjugate representation $\bar{R}$. They thus identify the combined action of parity $\mathcal{P}_4$ in 4d and $c$ on the compactification manifold with the discrete symmetry $\mathcal{C} \mathcal{P}$. Type II compactifications on smooth Calabi-Yau manifolds do not give rise to charged matter, but it is natural to identify this action with $\mathcal{C} \mathcal{P}$ also in this case, and we will do so:
\begin{equation} \label{eq:def_CP}
\mathcal{C} \mathcal{P} = \mathcal{P}_4 \circ c \,.
\end{equation}
To work out the action of $c^*$ on a basis $\{ \gamma_i \, | \, 1 \le i \le b_3 \}$ of $H^3(\mathcal{X}_z,\mathbb{Z})$, we write
\begin{equation}
\Omega = \sum \gamma_i \Pi_i \quad \Rightarrow \quad c^* \Omega = \sum \gamma_i \overline{\Pi}_i = \sum (c^* \gamma_i) \, \Pi_i \,.
\end{equation}
The first equality on the RHS follows by \eqref{eq:c_star_Omega} and reality of $\gamma_i$, and the second by linearity of the pullback map. To solve for the $b_3$ forms $c^* \gamma_i$, we need $b_3$ linearly independent equations of this form. These can be obtained by applying the same reasoning to a full algebraic basis of $H^3(\mathcal{X}_z)$. Concretely, such a basis can be constructed by considering derivatives of $\Omega$ with regard to the $\frac{b_3-2}{2}$ coordinates $z_i$ on complex structure moduli space, the latter obtained as rational functions of the coefficients of the defining equations. In the one-parameter case, e.g., away from apparent singularities, it suffices to consider $\Omega$ together with its derivatives up to and including order three. Writing
\begin{equation} \label{eq:def_W}
(\Omega, \Omega' , \Omega'', \Omega''')_i = \sum_j \gamma_j \mathcal{W}_{ji} \,,
\end{equation}
we obtain
\begin{equation} \label{eq:c_star_from_W}
\sum_j \gamma_j \overline{\mathcal{W}}_{ji} = \sum_j c^*(\gamma_j) \mathcal{W}_{ji} \quad \Rightarrow \quad
c^* \gamma_i = \sum_j \gamma_j (\overline{\mathcal{W}} \cdot \mathcal{W}^{-1})_{ji} \,.
\end{equation}
Note that as $c^*$ maps $H^3(\mathcal{X}_z,\mathbb{Z})$ to $H^3(\mathcal{X}_z,\mathbb{Z})$, the entries of the associated matrix are necessarily integers. Furthermore, on any connected subset of the real slice of moduli space that we are considering, $c$ is continuous. Hence, the matrix associated to $c^*$ is constant on such sets. We remark further that the determinant of $c^*$ (and more generally that of the pullback of any orientation reversing diffeomorphism) acting on the middle cohomology equals $(-1)^{\frac{b_3}{2}}$. This can be seen as follows: the vector space $H^3(\mathcal{X}_z,\mathbb{C})$ together with the pairing ${(\omega_1,\omega_2) \mapsto \int_{\mathcal{X}_z} \omega_1 \wedge \omega_2}$ is a symplectic vector space. We have $\int_{\mathcal{X}_z} c^* \omega_1 \wedge c^* \omega_2 = \int_{\mathcal{X}_z} c^* (\omega_1 \wedge \omega_2)$; since $c$ is orientation reversing (because $\mathcal{X}_z$ is a threefold), this shows that $c^*$ is antisymplectic. Hence, $ic^*$ is symplectic and thus has determinant 1. The claim follows by $\det(c^*) = (-i)^{b_3} \det(ic^*)$.
Turning next to even dimensional cohomology, $H^2(\mathcal{X}_z)$ and $H^4(\mathcal{X}_z)$ can be decomposed into an even and an odd eigenspace of $c^*$. Furthermore, integration gives a non-degenerate pairing $H^2(\mathcal{X}_z,\mathbb{R}) \otimes H^4(\mathcal{X}_z,\mathbb{R}) \rightarrow \mathbb{R}$ which is anti-invariant under $c^*$ since $c$ is orientation reversing. Hence, the pairing of forms of equal parity must vanish, allowing us in particular to conclude that there are (non-canonical) isomorphisms between the even eigenspace of $H^2(\mathcal{X}_z)$ and the odd eigenspace of $H^4(\mathcal{X}_z)$, and vice versa. Note that when we fix a K\"ahler form $\omega$ which is odd under $c^*$, such an isomorphism is induced canonically, as follows from the Lefschetz theorem, by wedging with $\omega$.
\subsection[$CP$ action in 10d and 4d]{\boldmath{$CP$} action in 10d and 4d} \label{subsec:CP_10d_action}
We will be interested below in flux compactifications. These take a simpler form in type IIB supergravity. The action of type IIB, ignoring the issue of the self-duality of the 5-form field strength, is given by
\begin{eqnarray}
S^{\mathrm{IIB}} &=& \int \Big[ e^{-2\phi} \left( \frac{1}{2} R *1 + 2 \mathrm{d} \phi \wedge * \mathrm{d} \phi - \frac{1}{4} H_3 \wedge * H_3 \right) \label{eq:IIB_action}\\
&& - \frac{1}{2} \left( F_1 \wedge * F_1 + \tilde{F}_3 \wedge * \tilde{F}_3 + \frac{1}{2} \tilde{F}_5 \wedge * \tilde{F}_5 \right) \nonumber\\
&& - \frac{1}{2} C_4 \wedge H_3 \wedge F_3 \Big] \,, \nonumber
\end{eqnarray}
where $F_i = \mathrm{d} C_i$ and
\begin{equation} \label{eq:def_tF}
\tilde{F}_3 = F_3 - C_0 \wedge H_3 \,, \quad \tilde{F}_5 = F_5 - \frac{1}{2} C_2 \wedge H_3 + \frac{1}{2} B_2 \wedge F_3 \,.
\end{equation}
This action gives rise to the correct equations of motion, which must then be supplemented with the self-duality constraint $\tilde{F}_5 = * \tilde{F}_5$.
$\mathcal{C} \mathcal{P}$ as defined in equation \eqref{eq:def_CP} is an orientation preserving isometry; it is hence a symmetry of the action \eqref{eq:IIB_action} by the discussion in section \ref{subsec:trans_p_forms}, without the need of introducing intrinsic phases. The $\mathcal{C} \mathcal{P}$ invariance of the 4d theory arising upon compactification is thus automatic. Instantons correct the tree-level prepotential governing the vector multiplet sector. As the corrected prepotential is expressed via \eqref{eq:prepotential_from_periods} entirely in terms of the periods of $\Omega$, $c$ merely maps the prepotential to its complex conjugate, preserving the action.
While it is thus not necessary to assign intrinsic phases to 10d fields in order to preserve $CP$ invariance, we will see below that the freedom to do so extends the set of vacua which preserve $CP$ invariance. The phases we will require in our study of flux vacua in section \ref{subsec:CP_flux_vacua} coincide with those we must introduce in order for a space-filling D3 brane to preserve $CP$ symmetry: the WZW coupling \eqref{eq:WZW_coupling} of a space-filling D3 brane located at a fixed point of the $c$-action will be invariant under $CP$ only if we impose an intrinsic phase $-1$ for $C_0$. By \eqref{eq:def_tF}, this in turn requires that $C_4$ and either $B_2$ or $C_2$ also acquire an intrinsic phase $-1$. Either choice is consistent with the invariance of the topological term in \eqref{eq:IIB_action}.
Note that phenomenological type IIB models often include space-filling D7 branes as ingredients, and invoke Euclidean D3 brane instantons to generate potentials for axions. Whether these branes preserve or violate $CP$ symmetry depends on the action of $c$ on the internal cycle that they wrap.
\subsection[Spontaneously breaking $CP$ invariance]{Spontaneously breaking \boldmath{$CP$} invariance} \label{subsec:spontaneously_breaking_CP}
The transformation properties of the 4d fields in \eqref{eq:4d_action} under $CP$ depend on the intrinsic phase of their parent field in 10d as well as on the action of $c^*$ on the basis of forms on which the Kaluza-Klein reduction of the parent field is based.
We first consider the hypermultiplet sector: the reduction here is based on a basis of even cohomology. As argued above, this basis can be chosen with definite parity with regard to $c^*$. Depending on the choice of intrinsic phase for $C_0$, $C_2$, $C_4$ and $B_2$, it is the even or odd modes which transform trivially under $CP$ and whose VEV is thus compatible with $CP$ invariance. The map between these modes and the hyperscalars $\xi^A$ and $\tilde{\xi}_A$ occurring in the 4d action \eqref{eq:4d_action} is somewhat intricate; it is worked out in \cite{Bohm:1999uk}.
We turn next to the vector multiplet sector. The scalars here are functions of the complex structure moduli. As the definition of $c$ required restricting to a real slice of complex structure moduli space invariant under the action of $c$, we conclude that any function of these moduli is also invariant. Hence, any VEV within the real slice that the vector scalars take is compatible with $CP$ invariance. Note in particular that on this slice, all $\theta$ angles vanish.
In the following, we will focus on points on complex structure moduli space that correspond to supersymmetric flux vacua of type IIB string theory, as reviewed in appendix \ref{appendix:SUSY_vacua}. As we discuss there, these coincide with rank 2 attractor points in one-parameter models. We review a strategy to find such rank 2 attractor points in appendix \ref{appendix:Finding rank 2 attractors}. All of the examples listed in table \ref{tab:attractor} of this appendix indeed lie on the real slice of moduli space; the corresponding VEVs of the vector scalars are therefore invariant under the $CP$ transformation. This property however is not generic. In the final paragraph of appendix \ref{appendix:Finding rank 2 attractors}, we also give several rank 2 attractor points which do not lie on the real slice.
\subsection[$CP$ invariance of supersymmetric flux vacua]{\boldmath{$CP$} invariance of supersymmetric flux vacua} \label{subsec:CP_flux_vacua}
In the previous section, we discussed the $CP$ invariance of distinguished points on moduli space. To localize the theory at these points requires additional ingredients. In this section, we want to study the $CP$ invariance of the theory in the presence of non-trivial fluxes. Introducing such non-trivial backgrounds will break any symmetry which does not act trivially on these.
Compactifying IIB string theory on a family of Calabi-Yau threefolds $\mathcal{X}$ with non-trivial $F_3$ and $H_3$ flux will generate a superpotential for some of the moduli (see appendix~\ref{appendix:SUSY_vacua}). More precisely, for given locally constant cycles $F_3,H_3\in H^3(\mathcal{X}_{\boldsymbol{\cdot}},\mathbb{Z})$\footnote{In fact, the correct quantization condition is given in equation \eqref{eq:flux_quantization}. We will implicitly choose a normalization of the field strengths to absorb the factor of $l_s^2$ in this section so as to not overload the notation.} defined over a contractible subset of the moduli space, the superpotential is given by
\begin{equation}
W(z) = \int_{\mathcal{X}_z} G_3\wedge \Omega_z
\end{equation}
where
\begin{equation} \label{eq:def_g3}
G_3=F_3-\tau H_3.
\end{equation}
Here, $\tau$ is a complex number that is identified with the vacuum expectation value of the axio-dilaton
\begin{equation}
\tau = C_0+ie^{-\phi}~.
\end{equation}
The superpotential $W$ depends on the complex structure moduli $z$ through the section $\Omega$ of the bundle of holomorphic (3,0)-forms.
Traditionally, one specifies the theory by fixing $F_3$ and $H_3$ and then solving the ensuing 4d equations of motion to determine the vacuum expectation value for the complex structure moduli and the axio-dilaton which follow. As explained in appendix~\ref{appendix:SUSY_vacua}, the supersymmetric solutions to these equations determine a point $z=z_0$ in complex structure moduli space at which a rank 2 lattice $\Gamma \subset H^3(\mathcal{X}_{z_0},\mathbb{Z})$ exists whose complexification has a Hodge decomposition of type $(2,1) \oplus (1,2)$.
In this section, we consider such vacua from a slightly different vantage point: we ask whether given such a point $z_0$, it is possible to choose compatible fluxes $F_3$ and $H_3$ such that $CP$ is conserved. In particular, this requires the fluxes to be invariant under this transformation.
Let us assume that at $z_0$, the intersection
\begin{equation}
\left(H^{2,1}(\mathcal{X}_{z_0}) \oplus H^{1,2}(\mathcal{X}_{z_0})\right) \cap H^3(\mathcal{X}_{z_0},\mathbb{Z})
\label{eq:Intersection2dim}
\end{equation}
has exactly rank 2, and hence equals $\Gamma$ (we will comment on higher rank intersections at the end of this section). The intersection is clearly invariant under $c^*$: $c^*$ acts as an involution on $H^3(\mathcal{X}_{z_0},\mathbb{Q})$ as $c: \mathcal{X}_{z_0} \rightarrow \mathcal{X}_{z_0}$ is an involution. Furthermore, as $c$ is antiholomorphic, $c^* : H^{2,1}(\mathcal{X}_{z_0}) \xrightarrow{\,\smash{\raisebox{-0.55ex}{\ensuremath{\scriptstyle\sim}}}\,} H^{1,2}(\mathcal{X}_{z_0})$. As both $H^3(\mathcal{X}_{z_0},\mathbb{Q})$ and $H^{2,1}(\mathcal{X}_{z_0}) \oplus H^{1,2}(\mathcal{X}_{z_0})$ are invariant under $c^*$, so is their intersection.
As an involution, $c^*$ restricted to $\Gamma$ is diagonalizable over $\mathbb{Q}$ with eigenvalues $\pm 1$.
The argument at the end of section \ref{subsec:CP_internal} then shows that $\Gamma_{\mathbb{Q}}$ decomposes into a sum of one-dimensional eigenspaces $E_{\pm}$ to eigenvalue $\pm 1$ respectively. We shall call the indivisible integral eigenvectors in these eigenspaces $\gamma_+$ and $\gamma_-$.
$CP$ conservation now faces an apparent quandary: due to the constraint \eqref{eq:crossed_fluxes}, we cannot choose both $F_3$ and $H_3$ as multiples of the invariant form $\gamma_+$. The conclusion that no supersymmetric vacuum preserves $CP$ would however be too hasty, as happily, we have the freedom to introduce additional intrinsic phases, as discussed in section \ref{subsec:CP_10d_action} above.
Choosing an intrinsic $CP$ phase $-1$ for either $F_3$ or $H_3$, for the flux background to not break $CP$, the field strength carrying this phase should be an integer multiple of $\gamma_-$, the field strength carrying the phase $+1$ an integer multiple of $\gamma_+$. Following the reasoning around \eqref{eq:4_choices_to_identify_G3} in the appendix, we conclude that both choices $F_3 \in E_+$, $H_3 \in E_-$ and $H_3 \in E_+$, $F_3 \in E_-$ are possible to fix the vacuum to the point $z_0$. Whether this vacuum is $CP$ symmetric hence depends only on the value of $C_0$: recall that when imposing a non-trivial intrinsic phase on either $H_3$ or $F_3$, $C_0$ also acquires an intrinsic $CP$ phase -1. Its VEV must therefore vanish in order to preserve $CP$ invariance. One may be tempted to enlarge the $CP$ preserving domain by virtue of the discrete shift invariance subgroup of S-duality. This is not possible, as this invariance is fixed by requiring that $G_3$ have the form
\begin{equation} \label{eq:decomp_G3}
G_3 = \gamma_+ - \tau \gamma_- \quad \text{or} \quad G_3 = \gamma_- - \tau \gamma_+ \,.
\end{equation}
For multi-parameter models, a computation must determine whether given $\Gamma$, the decomposition \eqref{eq:decomp_G3} occurs with $\reP \tau =0$ or not. If so, the vacuum is $CP$ preserving, else $CP$ violating. Specializing however to one-parameter models, we can show that all supersymmetric vacua are $CP$ preserving: we define a distinguished generator $\gamma$ of $\Gamma_{\mathbb{C}}$ in this case as
\begin{equation}
\gamma = \nabla_z \Omega = (\partial_z + K_z)\Omega \,,
\end{equation}
with $\Omega$ chosen to satisfy \eqref{eq:c_star_Omega}. As $K \in \mathbb{R}$, we also have $K_z \in \mathbb{R}$ in the slice of moduli space under consideration. Hence,
\begin{equation}
c^* \gamma = \bar{\gamma} \,.
\end{equation}
Therefore, $\reP \gamma \in (E_+)_{\mathbb{C}}$, $\imP \gamma \in (E_-)_{\mathbb{C}}$, i.e.
\begin{equation}
\gamma = \alpha \gamma_+ + i\, \beta \gamma_- \quad \text{with} \quad \alpha, \beta \in \mathbb{R} \,,
\end{equation}
allowing us to conclude that all choices of $G_3$ compatible with this flux vacuum have $C_0 = 0$, hence preserve $CP$.
More generally, in the case of multi-parameter models and even if the intersection \eqref{eq:Intersection2dim} has rank greater than 2, this conclusion can be drawn if e.g.\ $\Gamma$ is cut out by a correspondence defined over $\mathbb{R}$. Then $\Gamma_\mathbb{C}$ can be generated by forms from $H^3(\mathcal{X}_{z_0},\mathbb{Z})$, which are invariant under $\gamma \mapsto \overline{\gamma}$, but also by forms from the algebraic de Rham cohomology $H^3_{\text{dR}}(\mathcal{X}_{z_0})$, on which $c^*$ acts by $\gamma \mapsto \overline{\gamma}$. For more details on correspondences we refer to \cite{Motives}.
As an example, consider again the Picard-Fuchs equation AESZ 34 (see equation~\eqref{eq:RiemannSymbolOfAESZ34} for its Riemann symbol) and the associated family of Calabi-Yau manifolds described above. This family has an attractor point of rank two at $z=-\frac{1}{7}$ where the Hodge structure splits as in \eqref{eq:HodgeSplitting}. The computation of the pullback $c^*$ of the complex conjugation map at this point is simplified by the fact that this family exhibits no singularities on the negative real axis of moduli space. By our discussion below equation \eqref{eq:c_star_from_W}, we can hence evaluate the matrix $\mathcal{W}$ introduced in \eqref{eq:def_W} immediately to the left of the MUM point. To this end, let
\begin{equation}
\Pi = \begin{pmatrix}
F_I\\X^I
\end{pmatrix}
\end{equation}
denote the period vector in the integral symplectic basis of the third cohomology adapted to the MUM point (see the discussion around \eqref{eq:period_at_MUM}), such that
\begin{equation}
\mathcal{W} = (\Pi,\Pi',\Pi'',\Pi''') \,.
\end{equation}
Identifying $c^*$ with its matrix expression in this basis, we must thus evaluate
\begin{equation}
\label{eq:pullbackofCCmap}
c^* = \overline{\mathcal{W}} \cdot \mathcal{W}^{-1} \,.
\end{equation}
The period vector $\Pi$ can be obtained by applying a universal matrix $T$, depending only on the topological data of the mirror Calabi-Yau threefold, to the period vector $\varpi$ in the Frobenius basis at the MUM point (see equation~\eqref{eq:ChangeOfBasisMatrixAroundMUMPT}). As the coefficients of the holomorphic functions $g_i(z)$ on which $\varpi$ depends (see \eqref{eq:Frobenius}) are rational, the imaginary contributions to $\varpi$ evaluated on the negative real axis arise only from the evaluation of the logarithms occurring in this expression. Analytically continuing along the upper half plane, we can thus write
\begin{equation}
\mathcal{W} = T
\begin{pmatrix}
1 & 0 & 0 & 0 \\
\pi i & 1 & 0 & 0 \\
\frac{(\pi i)^2}{2} & \pi i & 1 & 0 \\
\frac{(\pi i)^3}{6} & \frac{(\pi i)^2}{2} & \pi i & 1 \\
\end{pmatrix} \begin{pmatrix*}[r]
g_0(z) \\
g_0(z)\log(|z|) + g_1(z) \\
\frac{1}{2} g_0(z) \log^2(|z|) +g_1(z) \log(|z|) + g_2(z) \\
\frac{1}{6} g_0(z) \log^3(|z|) + \frac{1}{2}g_1(z) \log^2(|z|) + g_2(z) \log(|z|) + g_3(z)
\end{pmatrix*}
\end{equation}
for $z \in (-\infty,0)$. The final matrix in this expression, being real, does not contribute to \eqref{eq:pullbackofCCmap}, such that $c^*$ is determined solely by topological data \cite{Yang:2020sfu,Yang:2020lhd}:
\begin{equation}
c^* =
\begin{pmatrix}
1 & 1 & -\frac{1}{12}(c_2\cdot D+2D^3) & \frac{D^3}{2}-\sigma\\
0 & -1 & \frac{D^3}{2}-\sigma & -D^3+2\sigma\\
0 & 0 & -1 & 0\\
0 & 0 & -1 & 1
\end{pmatrix} =
\begin{pmatrix}
1 & 1 & -3 & 6\\
0 & -1 & 6 & -12\\
0 & 0 & -1 & 0\\
0 & 0 & -1 & 1
\end{pmatrix} \,;
\end{equation}
we have used that for the example under consideration, $D^3=12$, $c_2 \cdot D = 12$ and ${\sigma=0}$.
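The stated properties of $c^*$ are straightforward to verify numerically. The sketch below checks that the explicit matrix agrees with the general topological formula at $D^3=12$, $c_2\cdot D=12$, $\sigma=0$, squares to the identity, is antisymplectic, and has determinant $(-1)^{b_3/2}=+1$; the explicit form of the symplectic matrix $\Sigma$ is our assumed convention.

```python
# Explicit matrix of c* in the integral symplectic basis at the MUM point
M = [[1, 1, -3, 6],
     [0, -1, 6, -12],
     [0, 0, -1, 0],
     [0, 0, -1, 1]]

# Evaluation of the general topological formula at D^3 = 12, c2.D = 12, sigma = 0
D3, c2D, sigma = 12, 12, 0
assert M == [[1, 1, -(c2D + 2*D3)//12, D3//2 - sigma],
             [0, -1, D3//2 - sigma, -D3 + 2*sigma],
             [0, 0, -1, 0],
             [0, 0, -1, 1]]

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def det(A):  # Laplace expansion along the first row; fine for a 4x4 integer matrix
    if len(A) == 1:
        return A[0][0]
    return sum((-1)**j * A[0][j] * det([row[:j] + row[j+1:] for row in A[1:]])
               for j in range(len(A)))

I4 = [[int(i == j) for j in range(4)] for i in range(4)]
Sigma = [[0, 0, 1, 0], [0, 0, 0, 1], [-1, 0, 0, 0], [0, -1, 0, 0]]
Mt = [list(r) for r in zip(*M)]

assert matmul(M, M) == I4                                                # involution
assert matmul(matmul(Mt, Sigma), M) == [[-x for x in r] for r in Sigma]  # antisymplectic
assert det(M) == 1                                                       # = (-1)^{b_3/2}
print("c* checks passed")
```

All assertions pass, in accordance with the general arguments given at the end of section \ref{subsec:CP_internal}.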
Normalizing the covariant derivative \eqref{eq:covderiv at attractor point} appropriately, we can now check explicitly that it gives rise to a flux $G_3$, which when expressed in the same basis underlying the expressions \eqref{eq:gens_Lambda} and \eqref{eq:gens_Lambda_perp} is given by
\begin{equation}
G_3 = \begin{pmatrix} 3 \\ -6 \\ 0 \\ 1\end{pmatrix}
-\tau \begin{pmatrix} -7 \\ 14 \\ -10 \\ -5 \end{pmatrix}
\end{equation}
with
\begin{equation}
\tau = i \,0.747399113909459\cdots~.
\end{equation}
Hence, $c^* F_3 = F_3$, $c^* H_3 = - H_3$, and $C_0 =0$. It is thus indeed possible to choose $CP$ invariant fluxes which stabilize the theory to a $CP$ invariant supersymmetric vacuum at $C_0=0$ and $z =- \frac{1}{7}$.
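The stated action of $c^*$ on the flux components can also be checked directly; the following sketch (an illustration of the claim, using the integer vectors quoted above) verifies $c^* F_3 = F_3$ and $c^* H_3 = -H_3$:

```python
import numpy as np

# c* in the basis underlying the flux expressions, as computed above.
cstar = np.array([[1, 1, -3, 6],
                  [0, -1, 6, -12],
                  [0, 0, -1, 0],
                  [0, 0, -1, 1]])
F3 = np.array([3, -6, 0, 1])       # RR flux components from G_3
H3 = np.array([-7, 14, -10, -5])   # NSNS flux components from G_3

assert np.array_equal(cstar @ F3, F3)    # c* F_3 = +F_3
assert np.array_equal(cstar @ H3, -H3)   # c* H_3 = -H_3
```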
\acknowledgments
We would like to thank Janis D\"ucker, Daniel Huybrechts, Dominic Joyce, Christian Kaiser, Spiro Karigiannis, Boris Pioline, Duco van Straten and Stefan Theisen for useful conversations.
K.B.\ is supported by the International Max Planck Research School on Moduli Spaces of the Max Planck Institute for Mathematics in Bonn. M.E.\ is supported by the US Department of Energy under grant DE-SC0010008. A.K.K.P. acknowledges support under ANR grant ANR-21-CE31-0021. A.K.\ likes to thank Dr.\ Max R\"ossler, the Walter Haefner Foundation and the ETH Z\"urich Foundation for support.
\newpage
\section{Introduction}
As a classic topic, change-point detection has regained much attention recently in the context of uncovering structural changes in big data. In particular, a normal mean change-point model and its variants have been applied to analyze high-throughput data for DNA copy number variation (CNV) detection. A CNV is defined as a duplication or deletion of a segment of DNA sequence compared to the reference genome; it can cause significant effects at the molecular level and be associated with susceptibility (or resistance) to disease \citep{feuk2006structural,freeman2006copy,mccarroll2007copy}.
There are multiple sources of data that provide copy number information. Microarray comparative genomic hybridization (aCGH) techniques have been widely used for CNV detection \citep{urban2006high}. The aCGH is helpful for detecting long CNV segments of tens of kilobases (kb) or more, but is not able to locate small-scale CNVs shorter than its minimal resolution ($>$10 kb), which are common in the human genome \citep{sebat2004large,carter2007methods,wong2007comprehensive}. In addition to the aCGH approach, the single nucleotide polymorphism (SNP) genotyping array has become an alternative for CNV detection due to its improved resolution \citep{citeulike:5084099}. For example, popular SNP array platforms such as Illumina \citep{Peifferetal} and Affymetrix \citep{McCarrolletal} make it possible to detect CNVs at kilobase resolution.
In SNP arrays, CNV information is measured by the ratio of the total fluorescent intensity signals from both alleles at each SNP locus, referred to as the log-R-ratio (LRR). The array also provides the relative ratio of the fluorescent signals between the two alleles, known as the B allele frequency (BAF). Finally, in very recent applications, aligned DNA sequencing data with even higher resolution can be used directly for CNV detection. Next generation sequencing (NGS) techniques typically produce millions of short reads that are aligned to the reference genome. Both the read depth (RD) and the distances between paired ends (DPE) of the aligned sequences are important sources for inferring CNVs \citep{medvedev2009computational,abyzov2011cnvnator,duan2013comparative,chen2017allele}. Note that there is a trade-off between resolution and data size. With higher-resolution data, it is possible to discover shorter CNVs; at the same time, the larger data size poses a great computational challenge. This might be one of the reasons why the SNP array has been most popular in recent CNV studies, since aCGH data have low resolution and RD or DPE data from NGS are too large to be handled directly. However, as the related computing technologies advance, NGS data are receiving more attention in recent applications. Finally, we summarize popular data sources for CNV detection in Table \ref{tb::data}.
\input{tb_data.tex}
A large number of methods have been developed for CNV detection, with different approaches applied to different types of data sources. As one of the most popular approaches, the CNV detection problem can be regarded as an application of the change-point model, which has been actively studied in statistics. For example, both the LRR from SNP arrays and the $\log_2$ ratio from aCGH have a mean value of zero for normal copy number, while the mean is negative (resp. positive) for a deletion (resp. duplication). A similar idea applies to RD data in the sense that one observes fewer (resp. more) reads in a region with deleted (resp. duplicated) copy number. In all these examples, the data structure changes at CNVs. Naturally, one may infer CNVs by checking for subregions where the LRR or read depth is significantly different from the mean of the rest.
The change-point model has a long history that traces back to the 1950s. See \cite{Page:1955,Page:1957,ChernoffZacks:1964,Gardner:1969} and \cite{SenSri:1975} for early developments of the change-point model with at most one change point; the data sizes considered in those papers were also small. However, new applications call for more flexible models capable of detecting multiple change points scattered along a huge sequence. Recent developments include circular binary segmentation \citep[CBS,][]{OVLW:2004,VO:2007}, $\ell_1$ penalization \citep{HuangEtal:2005,fusedlasso:2008,ZhangLange:2010} and total-variation-penalized estimation \citep[TVP,][]{harchaoui2010multiple}, the fragment assembling algorithm \citep[FASeg,][]{yu2007forward}, the screening and ranking algorithm \citep[SaRa,][]{NiuZhang:2010,HaoNiuZhang:2013}, likelihood ratio selection \citep[LRS,][]{CaiJengLi:2010}, the simultaneous multiscale change point estimator \citep[SMUCE,][]{frick2014multiscale}, and wild binary segmentation \citep[WBS,][]{fryzlewicz2013wild}, among many others. Hidden Markov models are another popular approach for CNV detection \citep{Fridlyand:HMM:2004,Wangetal:07,szatkiewicz2013improving}, yet they rely on application-specific assumptions valid only for certain copy number data. \cite{Zhang:2010} provided a comprehensive overview of CNV detection as an application of the change-point model, and \cite{roy2013evaluation} compared the performance of several recent CNV detection methods under various scenarios.
As an application of the change-point model, inferring CNVs is regarded as a very challenging problem, since the CNV subregions are usually very short and hidden in a very long sequence. The sizes of detectable CNVs typically range from 1,000 bps to megabases. \cite{international2010integrating} showed that the average size of total CNVs in an individual genome is $3.5 \pm 0.5$ Mbp (0.1\%). As an illustration, we analyze the SNP array data collected from the Autism Genetics Resource Exchange \citep[AGRE,][]{bucan2009genome}, which contain three parallel LRR sequences of a father-mother-offspring trio. Panel (a) of Figure \ref{fg::illustration} depicts the LRR sequence of the mother's whole genome and clearly illustrates that it is impossible to pick out CNVs by eye. Panels (b) and (c) show a zoomed-in plot of one of the CNVs detected in the mother's sequence, and a histogram of the sizes (in terms of the number of biomarkers) of CNVs that are commonly detected by the different methods considered in this paper, respectively. The size of the entire LRR sequence in (a) is 561,466, while the CNV depicted in (b) has size 6. We remark that most CNVs are very short (as shown in (c)) and hidden in a long and noisy sequence, which makes the problem non-trivial. We will revisit these data in Section \ref{s:real}, where a complete analysis is illustrated.
\input{fg_illu.tex}
In this context, a desirable CNV detection method should be not only accurate enough to detect such short CNVs but also fast enough to produce estimates within a practically manageable time even for very long data sequences. Toward this goal, we propose a new change-point detection method called backward detection, whose name comes from backward variable selection in linear regression. Backward detection is computationally efficient, with complexity $O(n\log n)$ for a sequence of length $n$. Moreover, it performs very well at picking out change points that are located close together, and is therefore an ideal tool for CNV detection. The idea of backward detection is conceptually similar to Ward's agglomerative clustering \citep{wald1963}, but differs in that the location information of the sequence data is naturally employed. We also note that our method can be viewed as a ``bottom-up'' strategy for change-point detection, which has not been studied as extensively in the literature as ``top-down'' strategies (such as binary segmentation), mainly due to its computational intensity. Recently, \cite{fryzlewicz2018} proposed an efficient ``bottom-up'' method for the general multiple change-point detection problem using what they call the tail-greedy unbalanced Haar (TGUH) transformation. Yet our numerical simulations show that it struggles to detect short and sparse signals, which are common in CNV detection.
The rest of the article is organized as follows. In Section \ref{s:model}, the normal mean change-point model and several popular detection strategies are introduced. In Section \ref{s:backward}, backward detection is proposed in detail. A stopping rule for backward detection is developed in Section \ref{s:cutoff}. The numerical performance of the proposed method is evaluated in Section \ref{s:sim}, and illustrations on both log-R-ratio data from SNP arrays and read depths from aligned sequence data are given in Section \ref{s:real}. A final discussion follows in Section \ref{s:discussion}.
\section{Change-point Model} \label{s:model}
A normal mean change-point model assumes
\begin{align}\label{model1}
Y_i=\mu_i+\epsilon_i,\qquad i=1, 2, \cdots, n
\end{align}
with $\epsilon_i$ being iid $N(0, \sigma^2)$. The means $\mu_i$ are assumed to be piecewise constant with $K$ change points at $\mathbf{t} = (t_1, t_2, \cdots, t_K)^T$. Denote $t_0=0$ and $t_{K+1}=n$ for convenience. This means that $\mu_i=\mu_j$ for any $i,j\in\{t_k+1, \cdots, t_{k+1}\}$, $0\leq k\leq K$, and $\mu_{t_k}\ne\mu_{t_k+1}$ for $1\leq k\leq K$. The number of change points $K$ is typically unknown. The goal of change-point detection is to estimate both the number $K$ and the location vector $\mathbf{t}$. CNV detection can thus be regarded as a direct application of the change-point model (\ref{model1}). However, CNVs are often very short and buried in a very long data sequence, which makes the problem even more challenging due to the high dimensionality of $\boldsymbol{\mu} = (\mu_1, \cdots, \mu_n)^T$. The normal mean change-point model (\ref{model1}) is often reasonable in the CNV application due to the random noise introduced during the experiments \citep{barnes2008robust}.
Otherwise, a suitable transformation can be considered first. For example, raw RD data are discrete and spatially correlated due to the complicated sequencing process, so the normal error assumption is not appropriate; nevertheless, a local median transformation can be used to approximately normalize the data \citep{tony2012robust}.
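For concreteness, model (\ref{model1}) is easy to simulate; the sketch below (our illustration, with arbitrary parameter choices) plants a short CNV-like segment in a long Gaussian sequence:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, changepoints, means, sigma=1.0):
    """Draw Y_i = mu_i + eps_i from model (1): piecewise-constant means
    that switch at the given change points, plus iid N(0, sigma^2) noise."""
    mu = np.empty(n)
    bounds = [0] + list(changepoints) + [n]
    for k in range(len(bounds) - 1):
        mu[bounds[k]:bounds[k + 1]] = means[k]
    return mu + rng.normal(0.0, sigma, size=n)

# A CNV-like bump of length 6 buried in a sequence of length 1000.
y = simulate(1000, changepoints=[500, 506], means=[0.0, 2.5, 0.0])
assert y.shape == (1000,)
```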
Suppose that the number of change points, $K$, is known. Then the change-point detection problem can be formulated as minimizing the sum of squared errors (SSE). For a set of numbers $Y_i$ with indices $i$ in a set $\mathcal{G}$, we define their SSE by $SSE(\{Y_i, i\in \mathcal{G}\})=\sum_{i\in\mathcal{G}}(Y_i-\bar Y_{\mathcal{G}})^2$, where $\bar Y_{\mathcal{G}}=\frac{1}{|\mathcal{G}|}\sum_{i\in \mathcal{G}} Y_i$ and $|\mathcal{G}|$ denotes the cardinality of the index set $\mathcal{G}$. Change-point detection then amounts to estimating $\mathbf{t}$ by solving the following optimization problem
\begin{eqnarray}
\min_{\mathbf{t}}&&\sum_{j=0}^{K} SSE\left(\left\{Y_i: t_{j}+1\leq i\leq t_{j+1}\right\}\right),\label{obj:sse}\\
\mbox{subject to}&& 0=t_0<t_1<t_2< \cdots < t_K<t_{K+1}=n.\nonumber
\end{eqnarray}
Note that (\ref{obj:sse}) is inherently a combinatorial problem and very challenging for large $n$. The total number of different combinations is $\frac{n!}{K!(n-K)!}\geq \left(\frac{n}{K}\right)^K$, which can be huge, especially in the CNV application. This makes it difficult to detect change points by solving (\ref{obj:sse}) directly, not to mention the fact that $K$ is typically unknown. When $n$ is small and $K$ is bounded, the exhaustive search method was studied by \cite{Yao1988} and \cite{Yao:1989}, who showed that an exhaustive search with BIC consistently estimates $K$ and $\mathbf{t}$.
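To make the combinatorial cost tangible, the following brute-force sketch (ours; feasible only for tiny $n$ and $K$) enumerates all placements of the change points to minimize (\ref{obj:sse}):

```python
import itertools
import numpy as np

def sse(x):
    """SSE of a segment around its own mean."""
    return float(np.sum((x - x.mean()) ** 2)) if len(x) else 0.0

def exhaustive_changepoints(y, K):
    """Minimize (2) by brute force over all increasing K-tuples of
    change points; the enumeration grows combinatorially in n and K."""
    n = len(y)
    best, best_t = np.inf, None
    for t in itertools.combinations(range(1, n), K):
        bounds = (0,) + t + (n,)
        total = sum(sse(y[a:b]) for a, b in zip(bounds, bounds[1:]))
        if total < best:
            best, best_t = total, t
    return best_t

# Noiseless toy data: change points at 10 and 15 are recovered exactly.
y = np.concatenate([np.zeros(10), np.full(5, 5.0), np.zeros(10)])
assert exhaustive_changepoints(y, K=2) == (10, 15)
```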
In order to improve computational efficiency, dynamic programming can be applied to solve (\ref{obj:sse}) with complexity of $O(n^2)$ \citep{BBM:2000,jackson2005algorithm}. \cite{killick2012optimal} developed an efficient algorithm named PELT that solves the problem with linear cost $O(n)$, but requires additional assumptions which might not be practical in certain applications. In general, these methods have not been widely applied in the CNV applications.
We remark that the mean change-point model (\ref{model1}) can be equivalently reformulated as a linear regression model \citep{HuangEtal:2005,fusedlasso:2008}, and the change-point detection problem is then viewed as a variable selection problem. Motivated by backward elimination in variable selection, we propose a stepwise procedure called backward detection (BWD) to solve the change-point detection problem. Some stepwise methods for change-point detection have been explored. For example, classical binary segmentation \citep[BS,][]{Vostrikova:1981} applies a single change-point detection tool recursively, identifying one change point at a time, until some stopping criterion is met. We refer to BS as a forward detection method because it starts with a null model with no change point and sequentially detects change points, which is analogous to forward variable selection in the regression context. In spite of its simplicity, as pointed out by \cite{OVLW:2004}, forward detection is not able to detect short segments buried in a long sequence of observations, which limits its use in applications such as CNV detection. The CBS \citep{OVLW:2004,VO:2007} modifies the BS by identifying two change points simultaneously and has gained great popularity in CNV detection. However, we observe from limited numerical studies that the CBS is still unsatisfactory when the true segment (i.e., CNV) is very short, owing to its forward-detection nature. Moreover, the CBS has higher computational complexity than the BS, which adds to the burden of dealing with big data.
\section{Method} \label{s:backward}
\subsection{Why Not Forward Detection?}
In what follows, we elaborate on why forward detection may fail to detect short signals, which provides a clear motivation for the proposed BWD in CNV detection. Forward detection starts with no change point and tries to detect the very first one by solving the following optimization problem
\begin{eqnarray} \label{eq::forward}
\min_{s_1}&&\sum_{j=0}^{1} SSE(\{Y_i, i\in\{s_{j}+1, \cdots, s_{j+1}\}\}),\label{obj:sse:fwd1}\\
\mbox{subject to}&& 0=s_0<s_1<s_2=n.\nonumber
\end{eqnarray}
The optimizer $\hat s_1$ of (\ref{obj:sse:fwd1}) estimates one of the change points and divides the data into two parts $\{Y_i, i\in\{1, 2, \cdots, \hat s_1\}\}$ and $\{Y_i, i\in\{\hat s_1+1, \cdots, n\}\}$. We may apply (\ref{obj:sse:fwd1}) to each of these two parts to detect further change points and this can be continued until we have identified all change points.
Note that the total number of combinations for (\ref{obj:sse:fwd1}) is $n$, and thus the corresponding optimization is feasible. However, as mentioned above, the performance of forward detection may be unsatisfactory in some situations. For example, suppose there are only two change points at $t_1$ and $t_2$, with $\mu_i=0$ for $i < t_1$ or $i \ge t_2$ and $\mu_i=\mu$ for $t_1 \le i < t_2$. Then forward detection does not work well if the length of the signal $L = t_2 - t_1$ is small while both $t_1$ and $n-t_2$ are large. This type of challenging situation is very common in CNV applications, as shown in Figure \ref{fg::illustration}.
In order to illustrate the drawbacks of forward detection, we consider a simplified scenario in which the locations of the two potential change points $t_j$, $j=1,2$, are known, but it is not clear whether the mean $\mu$ is actually changed (i.e., whether $\mu = 0$ or not). The following Proposition 1 formally states that forward detection asymptotically fails even in this simple scenario unless $L$ is sufficiently large compared to $n$.
\begin{proposition}{Proposition 1.}{}
Suppose $\lim_{n \to \infty} t_1/n = c \in (0, 1)$ and $L = t_2 - t_1 = O(n^{\beta})$ for some $\beta \in [0, 1]$. If $\beta < 1/2$ then the forward detection fails as $n \to \infty$ for an arbitrary pair of $(\mu, \sigma^2)$ given.
\end{proposition}
\begin{proof}{Proof}
At the first step of the forward detection, it declares that the mean-change occurs at $t_j, j = 1, 2$ if $|\bar D_{n,t_j}| = | \bar Y_{t_j} - \bar Y_{n-t_j} |$ is significantly large enough. Here $\bar Y_t = \sum_{i=1}^{t}Y_i / t$ and $\bar Y_{n-t} = \sum_{i=t+1}^{n} Y_i / (n-t)$, for a given $t \in \{1, \cdots, n-1\}$.
To test for $t_1$, the sampling distribution of $\bar D_{n,t_1}$ for a given $\mu$ is
\begin{align*}
\frac{\bar D_{n,t_1} + L\mu/(n-t_1)}{\hat \sigma_n \sqrt{\frac{1}{t_1} + \frac{1}{n-t_1}}} \quad \stackrel{\mathcal{D}}{\to} \quad N \left(0 ,1 \right),
\end{align*}
where $\hat \sigma_n^2$ denotes a consistent estimator of the unknown $\sigma^2$. It can then be shown that the associated asymptotic power converges to the nominal level $\alpha$ for any given pair of $(\mu, \sigma^2)$ if $\lim_{n \to \infty} n^{-1/2}L = 0$. A similar result holds for $t_2$ as well, which completes the proof.
\end{proof}
Proposition 1 provides a necessary condition for forward detection in terms of the length $L$ of the true signal relative to the sample size $n$: the exponent $\beta$ must be at least $1/2$. The same requirement carries over to the original change-point model, in which the change points $t_1$ and $t_2$ are unknown. Recently, \cite{fryzlewicz2013wild} showed that forward selection consistently recovers the true change points when $\beta$ is larger than $3/4$.
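The rate in Proposition 1 can be illustrated numerically (our sketch): with $t_1 = cn$ and $L = n^{\beta}$, the noncentrality of the test statistic at $t_1$ scales as $n^{\beta - 1/2}$, vanishing for $\beta < 1/2$ and diverging for $\beta > 1/2$:

```python
import math

def drift(n, beta, c=0.5, mu=1.0, sigma=1.0):
    """Noncentrality |L*mu/(n-t1)| / (sigma*sqrt(1/t1 + 1/(n-t1)))
    of the test statistic at t1 = c*n, with signal length L = n**beta."""
    t1 = c * n
    L = n ** beta
    return (L * mu / (n - t1)) / (sigma * math.sqrt(1 / t1 + 1 / (n - t1)))

# beta = 0.25 < 1/2: the drift vanishes, so power falls back to alpha.
assert drift(10**8, 0.25) < drift(10**4, 0.25) < 1
# beta = 0.75 > 1/2: the drift diverges, so the change stays detectable.
assert drift(10**8, 0.75) > drift(10**4, 0.75) > 1
```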
\subsection{Backward Detection}
In contrast to forward detection, the BWD starts from the opposite extreme in which every single position is assumed to be a change point. Namely, we begin with $n$ groups, corresponding to $n-1$ change points, each containing exactly one observation. We introduce the notation $\mathbb{G}=\{\mathcal{G}_1, \mathcal{G}_2, \cdots, \mathcal{G}_n\}$ with $\mathcal{G}_i=\{i\}$.
The BWD works by repeatedly merging two neighboring groups into one. Note that merging two neighboring groups increases the total sum of squared errors. For any two neighboring groups, we use the rise in SSE to quantify the potential of merging them. At each merging step, we merge the two neighboring groups with the smallest rise in SSE. Namely, we define
\begin{align} \label{eq::R}
R_{i}=SSE(\{Y_j, j\in \mathcal{G}_i\cup \mathcal{G}_{i+1}\})-SSE(\{Y_j, j\in \mathcal{G}_i\})-SSE(\{Y_j, j\in \mathcal{G}_{i+1}\}),
\end{align}
where $SSE(\{Y_j, j\in \mathcal{G}\})$ denotes the sum of squared errors for all observations with indices in $\mathcal{G}$.
At the beginning of iteration $m=0, 1, \cdots, n-2$, there are $n-m$ groups. Denote the current groups by $\mathbb{G}^{(m)}=\{\mathcal{G}_1^{(m)}, \mathcal{G}_2^{(m)}, \cdots, \mathcal{G}_{n-m}^{(m)}\}$ and the corresponding potential of merging two neighboring groups by $\{R_{1}^{(m)},R_{2}^{(m)},\cdots, R_{n-m-1}^{(m)}\}$. The superscript is used to represent the $m$th iteration.
Identify $j=\displaystyle\operatornamewithlimits{argmin}_{i=1, 2, \cdots, n-m-1 } R_{i}^{(m)}$. Then we merge groups $\mathcal{G}_j^{(m)}$ and $\mathcal{G}_{j+1}^{(m)}$ into a new group. The updated grouping is denoted by $\mathbb{G}^{(m+1)}=\{\mathcal{G}_1^{(m)}, \mathcal{G}_2^{(m)}, \cdots, \mathcal{G}_{j-1}^{(m)}, \mathcal{G}_{j}^{(m)}\cup \mathcal{G}_{j+1}^{(m)}, \mathcal{G}_{j+2}^{(m)}, \cdots, \mathcal{G}_{n-m}^{(m)} \}$ and the potentials of merging are updated as $\{R_{1}^{(m)},R_{2}^{(m)},\cdots, R_{j-2}^{(m)}, R_{j-}^{(m)}, R_{j+}^{(m)}, R_{j+2}^{(m)}, \cdots, R_{n-m-1}^{(m)}\}$, where
\begin{align*}
R_{j-}^{(m)} = SSE(\{Y_i, i \in \mathcal{G}_{j-1}^{(m)} & \cup \mathcal{G}_{j}^{(m)} \cup \mathcal{G}_{j+1}^{(m)}\}) - \\
& SSE(\{Y_i, i \in \mathcal{G}_{j-1}^{(m)}\})-SSE(\{Y_i, i\in \mathcal{G}_{j}^{(m)}\cup \mathcal{G}_{j+1}^{(m)}\}), \mbox{ and} \\
R_{j+}^{(m)} = SSE(\{Y_i, i \in \mathcal{G}_{j}^{(m)} & \cup \mathcal{G}_{j+1}^{(m)} \cup \mathcal{G}_{j+2}^{(m)}\}) - \\
& SSE(\{Y_i, i\in \mathcal{G}_{j}^{(m)}\cup \mathcal{G}_{j+1}^{(m)}\})-SSE(\{Y_i, i\in \mathcal{G}_{j+2}^{(m)}\}).
\end{align*}
Now the steps described above are applied repeatedly until a desired stopping rule is satisfied; the stopping rule is discussed in the following section. If not terminated earlier, only one group survives at the end of iteration $n-2$.
Despite their structural similarity, the BWD is substantially different from forward detection, since the null and alternative hypotheses at each step are reversed: at each step, the BWD tests the equivalence of the two group means, while the forward method tests their difference. Therefore, the BWD tends to keep more groups of smaller sizes unless there is strong evidence to merge some of them, and hence it is more powerful for detecting short signals buried in a long sequence. We also remark that the BWD starts by solving a series of local problems, each of which focuses on finding structural changes in a small part of the data, and eventually arrives at a single global problem that employs the entire sequence. Forward detection, on the other hand, starts as a global method and divides the problem into several local ones. This is another reason why the BWD is preferred for identifying short signals in lengthy noisy sequences, where local methods are generally known to outperform global methods.
Finally, the BWD algorithm can be summarized as follows.
\begin{itemize}
\item [] \verb"Input": $Y_1, \cdots, Y_n$. \par
\begin{itemize}
\item [1.] Initialize $\mathbb{G}^{(1)} = \{\{1\}, \cdots, \{n\}\}$ and $\mathbf{R}^{(1)} = \{R_1^{(1)}, \cdots, R_{n-1}^{(1)}\}$ from (\ref{eq::R}).
\item [2.]At the $m$th iteration, $m = 1, \cdots, n-1$:
\begin{itemize}
\item [2--1] Obtain $j = \operatornamewithlimits{argmin}_i R_i^{(m)}$.
\item [2--2] Break the loop if $R_j^{(m)}$ is larger than a prespecified cutoff, and go to the next step otherwise.
\item [2--3] Update
\begin{align*}
& \mathbb{G}^{(m+1)} =\{\mathcal{G}_{1}^{(m)}, \cdots, \mathcal{G}_{j-1}^{(m)},\mathcal{G}_{j}^{(m)}\cup\mathcal{G}_{j+1}^{(m)},\mathcal{G}_{j+2}^{(m)},\cdots,\mathcal{G}_{n-m+1}^{(m)}\},\\
& \mathbf{R}^{(m+1)} = \{R_{1}^{(m)},R_{2}^{(m)},\cdots, R_{j-2}^{(m)}, R_{j-}^{(m)}, R_{j+}^{(m)}, R_{j+2}^{(m)}, \cdots, R_{n-m}^{(m)}\}, \\
& K = n - m -1.
\end{align*}
\end{itemize}
\end{itemize}
\item [] \verb"Output": the final grouping $\mathbb{G}^{(m)}$ with $K$ change points.
\end{itemize}
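To fix ideas, the following is a minimal Python sketch of the algorithm above (our simplified illustration, not the implementation of the \texttt{bwd} package: it uses the naive $O(n^2)$ bookkeeping, takes the noise level $\sigma$ as known, and omits the minimum-size safeguard and the epidemic modification discussed below). It exploits the fact that the SSE rise of merging $\mathcal{G}_i$ and $\mathcal{G}_{i+1}$ equals $(\bar Y_{\mathcal{G}_i} - \bar Y_{\mathcal{G}_{i+1}})^2 / \big(1/|\mathcal{G}_i| + 1/|\mathcal{G}_{i+1}|\big)$, so only group sums and sizes need to be tracked.

```python
import numpy as np

def bwd(y, cutoff, sigma):
    """Backward detection sketch: start from singleton groups and repeatedly
    merge the neighbouring pair with the smallest SSE rise; stop once the
    t-type statistic of the best candidate merge exceeds `cutoff`.
    Returns the change-point locations (right endpoint of each group)."""
    sums = list(map(float, y))   # group sums
    sizes = [1] * len(y)         # group sizes
    while len(sums) > 1:
        means = [s / c for s, c in zip(sums, sizes)]
        # Minimizing the SSE rise R_i is equivalent to minimizing this
        # t-type statistic, since R_i = sigma^2 * S^2.
        stats = [abs(means[i] - means[i + 1]) /
                 (sigma * (1.0 / sizes[i] + 1.0 / sizes[i + 1]) ** 0.5)
                 for i in range(len(sums) - 1)]
        j = int(np.argmin(stats))
        if stats[j] > cutoff:    # even the best merge is significant: stop
            break
        sums[j] += sums.pop(j + 1)
        sizes[j] += sizes.pop(j + 1)
    return [int(b) for b in np.cumsum(sizes)[:-1]]

# Noiseless toy example: a segment of level 3 over positions 21-25 is
# recovered as change points at 20 and 25.
y = np.concatenate([np.zeros(20), np.full(5, 3.0), np.zeros(20)])
assert bwd(y, cutoff=3.0, sigma=1.0) == [20, 25]
```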
\subsection{Modification for epidemic change-points}
In CNV analysis, most parts of the sequence (normal) have a known baseline mean, say $\mu_0$, and a mean change away from $\mu_0$ (variant) is followed by a change back to $\mu_0$. This is often referred to as epidemic change points \citep{yao1993tests} and is an important feature of CNV analysis. In order to take this pairing structure into account, we modify the algorithm by adding the following step between Steps 2--2 and 2--3.
\begin{itemize}
\item If the sample average of the observations in the merged set, $\bar Y_{\mathcal{G}_{j}^{(m)}\cup\mathcal{G}_{j+1}^{(m)}}$, is not significantly different from the baseline mean $\mu_0$, i.e., $ \sqrt{v}{\left| \bar Y_{\mathcal{G}_{j}^{(m)}\cup\mathcal{G}_{j+1}^{(m)}} - \mu_0 \right|}/\hat \sigma_n \le z_\alpha$, where $z_\alpha$ is the upper $\alpha$th quantile of the standard normal distribution and $v = \left|\mathcal{G}_{j}^{(m)}\cup\mathcal{G}_{j+1}^{(m)}\right|$, then update $R_{j-}^{(m)}$ and $R_{j+}^{(m)}$ based on $\mu_0$ instead of $\bar Y_{\mathcal{G}_{j}^{(m)}\cup\mathcal{G}_{j+1}^{(m)}}$.
\end{itemize}
Finally, we have developed the \texttt{bwd} R package implementing the proposed algorithm, which is available on CRAN.
\subsection{Computational Complexity}
Computational efficiency is of practical interest in CNV applications due to their inherent high dimensionality. At each iteration of the BWD, the most computationally intensive part is finding $j = \operatornamewithlimits{argmin}_i R_i^{(m)}$, which takes $O(n)$ in the worst case. This gives a total complexity of $O(n^2)$, which is too slow when $n$ is very large.
However, finding the minimum and the corresponding index is straightforward once $\mathbf{R}^{(1)}$ is sorted, which takes $O(n \log n)$ computations. Notice that this sorting step is required only once, at the initial stage. Once sorted, it takes $O(1)$ to find the minimizer index at the $m$th iteration, while some additional effort is needed to keep $\mathbf{R}^{(m+1)}$ sorted. Such an update, however, takes only $O(\log n)$ computations. In particular, we borrow the idea of the bisection method, a well-known root-finding algorithm: we first compare $R_{j+}^{(m)}$ (or $R_{j-}^{(m)}$) with the median of the values in $\mathbf{R}^{(m)}$, then with the 75th percentile if it is greater than the median and with the 25th percentile otherwise, and continue in this manner until its exact position is found. The total computational complexity of the BWD is thus reduced to $O(n \log n)$.
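In code, the same $O(\log n)$ bookkeeping can also be realized with a binary min-heap and lazy invalidation (our sketch; an alternative to the bisection-style insertion described above, with the same overall complexity): each boundary carries a version number, and heap entries made stale by a merge are simply skipped when popped.

```python
import heapq

heap = []          # entries (rise, version, boundary), smallest rise on top
version = {}       # current version number of each boundary

def push(boundary, rise, ver):
    """Record a (re-)scored boundary; O(log n)."""
    version[boundary] = ver
    heapq.heappush(heap, (rise, ver, boundary))

def pop_valid():
    """Pop the smallest SSE rise, discarding entries invalidated by merges."""
    while heap:
        rise, ver, boundary = heapq.heappop(heap)
        if version.get(boundary) == ver:
            return rise, boundary
    return None

push(3, 0.7, ver=0)
push(7, 0.2, ver=0)
push(7, 0.9, ver=1)                 # boundary 7 re-scored after a merge
assert pop_valid() == (0.7, 3)      # the stale (0.2, 7) entry is skipped
assert pop_valid() == (0.9, 7)
```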
\section{Stopping Rule}\label{s:cutoff}
At every step of the BWD, two small groups are merged into a bigger one, and we want to test whether this merging removes a real change point. In this standard setting it is natural to use a $t$-statistic. Since the unknown variance is homogeneous across all observations, a global estimate of the noise variance is used at every step. At the $m$th iteration, the following statistic $S_{(m)}$ is used to determine when to stop the procedure, where
\begin{equation} \label{eq::t.stat}
S_{(m)} = \frac{\left| \bar Y_j^{(m)}-\bar Y_{j+1}^{(m)} \right|}{\hat \sigma_n \sqrt{\big|\mathcal{G}_j^{(m)}\big|^{-1} + \big|\mathcal{G}_{j+1}^{(m)}\big|^{-1}}},
\end{equation}
and the backward procedure is terminated if $S_{(m)}$ is too large. Here $\hat \sigma_n^2$ denotes an estimate of the unknown noise variance based on all the observations. If the true signals are very short and sparse, the sample variance of the $Y_i$ can also be used in practice as a simple alternative. The use of a global estimate brings an additional saving in computation, since $R_j^{(m)} = \hat \sigma_n^2 S_{(m)}^2$. In the upcoming analysis we use $\hat \sigma_n^2 = n^{-1} \sum_{i=1}^{n} \big(Y_i - \bar Y_{i}^{(h)}\big)^2$ with $\bar Y_{i}^{(h)} = \sum_{j = i-h}^{i+h} Y_j/(2h+1)$ for a given window $h > 0$, as in \cite{NiuZhang:2010}. An alternative is the median absolute deviation (MAD) estimator, as pointed out by \cite{CaiJengLi:2010}.
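Both estimators are easy to implement; the sketch below (ours) codes the moving-window estimator $\hat\sigma_n^2$ and a MAD-type alternative based on first differences (the constants $1.4826$ and $\sqrt{2}$ make the latter consistent for Gaussian noise; truncating the window at the sequence boundaries is our choice):

```python
import numpy as np

def sigma2_window(y, h):
    """sigma_hat_n^2 = n^{-1} sum_i (Y_i - Ybar_i^{(h)})^2, with Ybar_i^{(h)}
    the average over the window {i-h, ..., i+h} (truncated at the ends)."""
    n = len(y)
    resid = np.array([y[i] - y[max(0, i - h):min(n, i + h + 1)].mean()
                      for i in range(n)])
    return float(np.mean(resid ** 2))

def sigma_mad(y):
    """MAD estimate of sigma from first differences of y; robust to a few
    short mean shifts since differences are unaffected away from them."""
    d = np.diff(y)
    return 1.4826 * float(np.median(np.abs(d - np.median(d)))) / np.sqrt(2)

rng = np.random.default_rng(2)
y = rng.normal(0.0, 2.0, size=5000)
assert abs(sigma_mad(y) - 2.0) < 0.25
assert 3.2 < sigma2_window(y, 10) < 4.4   # slightly below sigma^2 = 4
```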
As with the usual $t$-statistic, (\ref{eq::t.stat}) may cause a false alarm when both groups are small. To avoid this, we can set $S_{(m)} = 0$ if both of the two consecutive segments are shorter than a minimum number $M$, which can be chosen to be, say, 3 or 5, depending on the application. This modification is acceptable in the CNV application, since it is very unlikely that two CNVs are located close to each other.
Now, the question is how large the critical value should be to attain a desired target level $\alpha$, where $\alpha$ denotes the familywise error rate (FER) of the proposed BWD.
We remark that the $(1-\alpha/2)$th quantile of the $t$-distribution with the associated degrees of freedom will fail, since $S_{(m)}$ is correlated with the other group means through the minimizer index $j$.
We propose the following numerical procedure to select a cutoff value that controls FER being at most $\alpha$.
\begin{enumerate}
\item Repeat the steps (a) -- (c) below $B$ times: for each of the $b$th iteration, $b = 1, \cdots, B$,
\begin{enumerate}
\item [1-(a)] Randomly generate a sequence of size $n$ from the null distribution under which there is no change point.
\item [1-(b)] Apply the backward procedure until merging the whole sequence into one group.
\item [1-(c)] $u_b = \max_{m = 1, \cdots, n-1} S_{(m)}$.
\end{enumerate}
\item The $(1- \alpha)$th sample quantile of $u_1, \cdots, u_B$ is taken as the cutoff value attaining the given level $\alpha$.
\end{enumerate}
We remark that the cutoff value is chosen from the null distribution of $\max_{m = 1, \cdots, n-1} S_{(m)}$, not that of $S_{(m)}$ (step 1-(c)); thus $\alpha$ controls the FER. The very first step 1-(a), which simulates samples from the null distribution, is crucial in the proposed numerical procedure. Toward this we consider two scenarios: i) normality is assumed to hold while the noise variance $\sigma^2$ is unknown; ii) neither normality nor $\sigma^2$ is known. In the first scenario, we can generate samples from the standard normal distribution; this easily extends to any other known distribution. In the second scenario, when the underlying distribution is unknown, the null distribution can be obtained by randomly permuting or bootstrapping the residuals $r_i = Y_i - \bar Y_{i}^{(h)}, i = 1, \cdots, n$.
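Under the first scenario, the procedure above amounts to a short Monte Carlo loop; a sketch (ours, with deliberately small $n$ and $B$ to keep it fast, and reusing the naive quadratic merging for simplicity):

```python
import numpy as np

def max_merge_stat(y):
    """Run the backward merging down to a single group and record
    u = max_m S_(m) (steps 1-(b) and 1-(c)); sigma = 1 under the
    simulated standard-normal null."""
    sums = list(map(float, y))
    sizes = [1] * len(y)
    u = 0.0
    while len(sums) > 1:
        means = [s / c for s, c in zip(sums, sizes)]
        stats = [abs(means[i] - means[i + 1]) /
                 (1.0 / sizes[i] + 1.0 / sizes[i + 1]) ** 0.5
                 for i in range(len(sums) - 1)]
        j = int(np.argmin(stats))
        u = max(u, stats[j])
        sums[j] += sums.pop(j + 1)
        sizes[j] += sizes.pop(j + 1)
    return u

def null_cutoff(n, alpha, B, rng):
    """(1 - alpha) sample quantile of u_1, ..., u_B over null draws (step 2)."""
    u = [max_merge_stat(rng.normal(size=n)) for _ in range(B)]
    return float(np.quantile(u, 1 - alpha))

rng = np.random.default_rng(3)
cut = null_cutoff(n=100, alpha=0.05, B=40, rng=rng)
assert 0.5 < cut < 10.0   # loose sanity range only
```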
\input{fg_cutoff.tex}
The proposed numerical procedure becomes computationally too intensive when the sample size is very large, for instance over a million, which is not uncommon in CNV applications. Under the normality assumption, we numerically investigate cutoff values for $\alpha = 0.01, 0.05$, and $0.10$ as functions of the sample size $n$. Figure \ref{fg::cutoff} depicts the estimated cutoffs for sample sizes from 1,000 to 100,000 in steps of 1,000 and shows a clear log-linear relationship between the estimated cutoffs and the sample size $n$. The desired cutoffs for large $n$ can thus be approximated from the fitted regression line.
\section{Simulated Examples} \label{s:sim}
We evaluate the performance of the proposed backward procedure via numerical comparisons with existing methods. The target levels considered are $\alpha = .01$ and $.05$. We consider both the original BWD (BWD1) and the modified BWD (BWD2) for epidemic change points, under the assumption that the baseline mean $\mu_0$ is known. As described in Section \ref{s:cutoff}, there are two ways to obtain the cutoff values, depending on how the null samples are simulated: either from standard normal samples under the normality assumption (cutoff1) or from permuted residuals when the normality assumption is not valid (cutoff2). We take the former in Section 5.1 with Gaussian error and the latter in Section 5.2 with non-Gaussian error.
We consider the CBS, WBS, LRS, and TVP as competing methods. The CBS is one of the most widely used methods in the literature and shares the principles of a typical stepwise approach with the proposed method. The WBS is a recent development based on binary segmentation (i.e., forward detection) that overcomes its shortcomings in detecting short signals. The LRS is a method carefully designed for detecting sparse and short signals and is known to be optimal under certain model assumptions, including normality and the shortness and sparsity of the signals. In addition, we compare our method with the recently proposed bottom-up method TGUH \citep{fryzlewicz2018}. The TGUH is designed for the general change-point detection problem, and our simulations show that it struggles to detect short signals.
We consider the following mean change model
\begin{align*}
y_i = \sum_{k=1}^\kappa \delta_k \boldsymbol{1}_{\{i \in I_k\}} + \epsilon_i
\end{align*}
where $\kappa$ denotes the number of signal segments (i.e., CNVs), $\delta_k, k = 1, \cdots, \kappa$, are the unknown means of the true signals, and $I_k \cap I_{k^{\prime}} = \emptyset$ for any $k \neq k^\prime$. We set $|I_k| = L$ and $\delta_k = \delta, k = 1, \cdots, \kappa$, so that the strength of the true signals is controlled by $L$ and $\delta/\sigma$. We consider two noise distributions for $\epsilon$: the normal distribution and the $t$-distribution with $df$ degrees of freedom. We set $(n, L, \delta) = \{1000, 3000, 5000\} \times \{5, 10\} \times \{1.5, 2.0, 2.5\} $ with $\sigma = 1$ for the normal model, and $(L, df) = \{5, 10\} \times \{10, 5\}$ with $n = 1000$ and $\delta = 3$ for the $t$-distribution model. The number of true segments is given by $\kappa = n/1000$ and the minimum distance between two true segments is set to 200. Numerical performance is evaluated based on 1,000 independent repetitions.
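A minimal sketch of this simulation design is given below; the placement of one segment per block of 1,000 points (at an arbitrary offset of 400) is our choice for illustration, and it automatically satisfies the minimum-distance constraint.

```python
import numpy as np

def simulate(n, L, delta, dist="normal", df=5, seed=None):
    """Generate one sequence from the mean-change model above.
    One signal segment of length L is placed in each block of 1,000
    points, so kappa = n // 1000 and segments are far more than 200 apart."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n) if dist == "normal" else rng.standard_t(df, size=n)
    y = eps.copy()
    segments = []
    for k in range(n // 1000):
        start = k * 1000 + 400          # arbitrary offset within the block
        segments.append((start, start + L))
        y[start:start + L] += delta
    return y, segments
```

For example, `simulate(3000, L=10, delta=2.0)` returns a sequence with $\kappa = 3$ short elevated segments on a noisy baseline.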
We claim that the signal segment $I_k$ is correctly detected by $\hat I_k$ if $I_k \cap \hat I_k \neq \emptyset$ and $|\hat I_k| < 2L$. To measure the performance of the methods, the following two measures are considered.
\begin{itemize}
\item [-] Sensitivity: (\# of correctly detected signals) / (\# of true signals, $\kappa$)
\item [-] Precision : (\# of correctly detected signals) / (\# of detected signals)
\end{itemize}
Sensitivity relates to the ability to identify true signals, and precision measures the reliability of the detected signals. Notice that both measures lie between zero and one (by setting $0/0 = 0$), and a method is perfect if both measures equal one.
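The matching rule and the two measures can be sketched as follows; whether the length condition $|\hat I_k| < 2L$ also enters the precision count is our reading of the text.

```python
def overlaps(a, b):
    """True if intervals a = (start, end) and b = (start, end) intersect."""
    return a[0] < b[1] and b[0] < a[1]

def sens_prec(true_segs, det_segs, L):
    """Sensitivity and precision under the matching rule above:
    a true segment counts as detected if some detected segment of
    length < 2L overlaps it (0/0 is set to 0)."""
    valid = [d for d in det_segs if d[1] - d[0] < 2 * L]
    hits = sum(any(overlaps(t, d) for d in valid) for t in true_segs)
    correct = sum(any(overlaps(d, t) for t in true_segs) for d in valid)
    sens = hits / len(true_segs) if true_segs else 0.0
    prec = correct / len(det_segs) if det_segs else 0.0
    return sens, prec
```

For instance, with one true segment $(100, 105)$, detections $(101, 104)$ and $(500, 510)$, and $L = 5$, the second detection is too long to qualify, giving sensitivity $1.0$ and precision $0.5$.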
\subsection{Gaussian Error}
\input{tb_normal.tex}
In many applications including CNV detection, the normality assumption is often used. Although our backward procedure does not strictly require normality, it performs best under normal noise due to the use of the squared error loss. Table \ref{tb::normal} reports the performance of the different methods in various scenarios under normality. Both the original and modified versions of BWD outperform the others except the LRS in most scenarios considered. The LRS performs quite well because the true signals are very short ($L = 5$ and $10$); this is not surprising since the simulation model perfectly satisfies the assumptions required by the LRS. The modification for epidemic change-points is useful when the true signals are not strong. TVP is very fast but gives too many false positives. TGUH shows very low sensitivity, indicating that it cannot detect short signals well. It is interesting to observe that the BWD still performs comparably well, in the sense that it outperforms the LRS in terms of sensitivity with $\alpha = 0.05$ and in terms of precision with $\alpha = 0.01$. It is another benefit of the BWD that it controls the relative importance of sensitivity and precision (or specificity) through the target level $\alpha$. We also remark that the BWD is simple and does not require stringent model assumptions such as sparsity.
\subsection{Non-Gaussian Error}
The performance of the change-point detection methods is evaluated under $t$-distributed noise. The BWD does not require the normality assumption and hence is not overly sensitive to its violation, whereas the LRS is. Before applying the LRS, we standardize the observations using the sample mean and sample standard deviation. Note that such naive estimates work fairly well for standardization since the signals are very short compared to the entire sequence. Table \ref{tb::t} contains the numerical performance of the methods under consideration. The advantages of the BWD are much clearer than in the previous setting with normality. The CBS fails to detect true signals when the signal strength is not very strong, while the backward procedure performs well in all the scenarios considered. Both the WBS and LRS are good in terms of sensitivity, but they detect too many false signals in this case. Again, BWD2 outperforms BWD1 when the true signals are not strong.
\input{tb_t.tex}
\subsection{Empirical Test Level}
\input{tb_level.tex}
We numerically check whether the backward procedure actually attains a target nominal level $\alpha$ under the null hypothesis that there exists no signal. Since the two versions of BWD show similar results, we report the results of the original version only to avoid redundancy. We generate samples under the null hypothesis by letting $\delta = 0$ and report the proportion of cases in which any signal is detected by each method (Table \ref{tb::level}). Recall that we have two scenarios. The `cutoff1' assumes normality; hence the levels are correct if the data are indeed from the normal model, but the nominal level cannot be satisfied if the data are from the $t$-distribution. In this case, the `cutoff2' can be used as an alternative, and the results seem good enough for practical use. Notice that there are a couple of cases in which `cutoff2' fails to attain the nominal level, which is partially due to the uncertainty of the null distribution. The CBS seems very conservative in detecting signals, and both the LRS and TGUH break down when the normality assumption is not valid. TVP again fails to control the type I error.
As mentioned above, the ability to control the type I error is another distinguishing advantage of the proposed BWD. This is practically attractive since the relative importance of sensitivity and specificity varies across applications.
\section{Real Data Illustration} \label{s:real}
\subsection{Trio Data from SNP array}
The (original) BWD is demonstrated on SNP array data collected from the Autism Genetics Resource Exchange \citep[AGRE,][]{bucan2009genome}. The data set contains three parallel sequences of the log $R$ ratio (LRR) for 547,458 SNPs over 23 chromosomes of a father-mother-offspring trio.
\input{fg_result.tex}
All methods considered in Section \ref{s:sim} are applied except TVP and TGUH. For the LRS, the data are standardized by the sample mean and variance. We set $\alpha = 0.05$, and the corresponding cutoff value is approximated from the log-linear relation between the cutoff and the sample size under the normal assumption, as described in Section \ref{s:cutoff}. We apply each method chromosome-wise, and Figure \ref{fg::result} shows the results for the first two chromosomes (chromosomes 1 and 2) of the offspring. The three different types (and colors) of horizontal segments are the estimates of $\mu_t$ by the CBS, LRS and BWD, respectively. Notice that the LRS only detects very short and sparse signals, and the detected signals are marked as vertical lines. We would like to point out that although all of the CBS, WBS and BWD are developed under a similar framework, the results are quite different. For example, in chromosome 2 (Subfigure (b)) the CBS detects no change-point after around the 54,000th SNP position, while both the BWD and WBS detect several.
\input{fg_ven.tex}
The complete CNV detection results for the trio data are summarized by a Venn diagram in Figure \ref{fg::ven}, which reports the number of CNVs detected by the different methods for each member of the trio (father/mother/offspring) as well as the collapsed counts.
We regard short detected segments whose lengths are between 2 and 200 in terms of SNP index as CNVs.
First, the LRS detects a much larger number of CNVs than the other competing methods, while the majority ($237/356 = 66.5\%$) are unique calls, which are suspicious as false signals. The BWD calls 121 CNVs, more than the CBS (84) and WBS (100), while the proportion of unique calls by the BWD is only 12.4\% (15/121), smaller than for any other method (CBS: 34/82 = 41\%; WBS: 17/100 = 17\%). This can be interpreted as the BWD showing the best precision if we assume that most CNVs uniquely called by a single method are false positives. Next, the BWD misses only 2 CNVs that are identified by all the other methods, while the CBS, WBS, and LRS miss 24, 8, and 6 such CNVs, respectively, meaning that the BWD outperforms the others in terms of sensitivity as well.
Finally, the 25 CNVs identified by all the methods can be regarded as true CNVs and are used in Figure \ref{fg::illustration} to show that short CNVs are indeed common in real data.
\input{tb_cnvs.tex}
Genetic information is inherited from parents to offspring and can be utilized for validation of the detected CNVs. Table \ref{tb::cnvs} lists all the offspring's CNVs that are also detected from one or both of the parents. All the CNVs in Table \ref{tb::cnvs} are nearly, if not exactly, identical to the corresponding ones detected from the parents, and thus these CNVs are considered as truth. We would like to emphasize that most of the true CNVs are quite short, and both the CBS and WBS miss many of them, while the LRS and BWD miss only 1 and 3, respectively. We regard some jointly detected CNVs from (at least one of) the parents and the offspring as still suspicious as false if only a very minor portion of the detected CNVs overlaps compared to their entire length. The LRS detects 9 such suspicious CNVs while the CBS, WBS, and BWD detect 1, 1, and 2, respectively.
In summary, from the real data analysis of the trio SNP array, the LRS tends to call too many CNVs, including a large number of false positives, while the CBS and WBS miss some true short CNVs; we can conclude that the proposed BWD outperforms all the others. This is concordant with the findings in the simulation studies in Section \ref{s:sim}.
\subsection{Read Depth from NGS Sequencing Data}
We further illustrate the (original) BWD on RD data from high-throughput sequencing on chromosome 19 of a HapMap Yoruban female sample (NA19240) from the 1000 Genomes Project. The RD $y_i$ of the $i$th locus, where $i = 1, \cdots, 54{,}361{,}060$, is adjusted by the guanine-cytosine (GC) content. Although the raw measurements allow genomic variants to be analyzed at higher resolution, as mentioned above the observations are highly variable due to the complicated sequencing process and require a proper normalization/transformation. To handle these difficulties, we consider a local-median transformation motivated by \cite{tony2012robust}. In particular, we first partition the RD data into small bins of size $M$, and then apply the BWD to the sequence of medians of the observations in each bin. The transformed sequence is well approximated by a normal distribution regardless of the underlying distribution of the original data. If $M$ is large, the data are more accurately approximated by the normal model, but CNVs shorter than $M$ bps cannot be accurately identified (i.e., $M$ is a minimal resolution). As shown in Section \ref{s:sim}, the BWD is not overly sensitive to violations of the normal assumption, so we set a relatively small value of $M = 100$ in the analysis.
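A minimal sketch of the local-median transformation is given below; dropping a trailing partial bin is one simple convention we adopt here.

```python
import numpy as np

def local_median(y, M=100):
    """Local-median transformation: partition the sequence into
    consecutive bins of size M and return the per-bin medians
    (a trailing partial bin, if any, is dropped)."""
    y = np.asarray(y, dtype=float)
    n_bins = len(y) // M
    return np.median(y[:n_bins * M].reshape(n_bins, M), axis=1)
```

The BWD is then run on the resulting, much shorter and approximately Gaussian, sequence of bin medians.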
For the BWD we set $\alpha = 0.05$, and the cutoff value is computed under the normal assumption. The BWD calls fifteen CNVs. Figure \ref{fg::rd} provides zoomed-in plots of some of the CNVs identified by the BWD. The proposed method works reasonably well for the (NGS) read-depth data after this simple transformation.
Many existing CNV analysis tools for high-throughput NGS data employ the CBS as a primary tool for calling CNVs; see \cite{duan2013comparative} and references therein.
We remark that the BWD can be a desirable alternative in the presence of short CNVs, which are hardly detected by the CBS.
\input{fg_rd.tex}
\section{Discussion} \label{s:discussion}
We propose a BWD procedure for change-point detection and apply it to CNV detection. The proposed BWD is a simple procedure that can be readily employed for high-dimensional data, yet it performs very well on both simulated and real data, especially when the true signals of interest are short, as is often the case in CNV detection. Similar to the CBS, the BWD is a general approach to change-point detection that can be used in various applications besides CNV detection, from which it was originally motivated, since it does not depend on any application-specific assumption.
The simple idea of the proposed BWD opens up further extensions in various directions. First, the gain of the backward procedure over forward detection, including the CBS, is obvious for short signal detection. However, forward detection also has a clear benefit when the true signal is long and the mean change is minor; thus we can select either of the two depending on the application. Moreover, we can develop a hybrid of the forward and backward detections, analogous to stepwise variable selection in the regression context. The idea is straightforward but requires additional effort to improve computational efficiency, especially for CNV applications. Next, we can extend the BWD with loss functions other than the squared $L_2$ loss. For example, the absolute deviation error can be a reasonable alternative in the presence of outliers. It is also possible to generalize the idea to more complex structures such as graphs \citep{chen2015graph} by introducing a proper loss function defined on the space of the complex data objects. Finally, as motivated by the trio data, the backward idea can be extended to detect common signals shared by multiple sequences of observations.
\bibliographystyle{dcu}
\section{Introduction}
That self-gravitating systems initially in highly spherically
symmetric configurations can relax to virial equilibria which break
this symmetry strongly has been known for several decades
\citep{Polyachenko_Shukhman_1981, merritt+aguilar_1985} and documented
since then by many numerical studies (see
e.g. \cite{aguilar+merritt_1990,theis+spurzem_1999,
boily+athanassoula_2006,barnes_etal_2009,worrakitpoonpon_2014}).
This phenomenon, argued to play a crucial role in
cosmological structure formation { (see e.g. \cite{huss_etal_1999,
macmillan2006universal})}, has come to be referred to as the ``radial
orbit instability'' (ROI). This name has been adopted since such an
instability has been shown \citep{antonov_1961,
Fridman+Polyachenko_etal_1984} to characterize spherically symmetric
stationary solutions of the collisionless Boltzmann equation with
purely radial orbits. Further it is plausible, as argued originally by
\cite{merritt+aguilar_1985}, that a similar mechanism is responsible
for the formation of triaxial structures observed starting from very
cold initial conditions, as in this case collapse tends to produce
strongly radial orbits. Different authors {(see references above)
have discussed how the symmetry breaking develops during the
evolution from both simple power law density profiles
(e.g. \cite{boily+athanassoula_2006}) and from cosmological initial
conditions (e.g. \cite{macmillan2006universal}).}
In this paper we consider how the degree of the final symmetry
breaking is related to the initial condition --- specifically to the
exponent of the initial density profile --- for the case of completely
cold initial conditions. Our focus on this aspect of the problem
allows us to elucidate the mechanism by which the symmetry breaking
actually occurs in the process of collapse from cold initial
conditions. More specifically, we show in detail how fluctuations
breaking spherical symmetry may be amplified by the very large energy
changes characteristic of the very violent relaxation from cold
initial conditions. This amplification is most effective when the
energy change a particle undergoes is both large and strongly
correlated with its initial radial position, leading to a maximal
effect from density profiles with intermediate exponents. We
underline that the mechanism we identify {as bringing about the
amplification of the symmetry breaking operates far from equilibrium
and has no apparent link to the ROI of equilibrium systems.}
\section{Numerical simulations}
For our study we have simulated numerically, using the N-body code
{\tt Gadget} \citep{springel_2005}, the evolution from initial
conditions in which $N$ particles are distributed randomly inside a
sphere following a radial density profile $\rho (r) \propto
r^{-\alpha}$, with $\alpha$ in the range $0\leq \alpha \leq 2.5$ {
(the reasons for the choice of this range will be discussed below)}.
The family of initial conditions is thus characterized by the two
parameters $\alpha$ and $N$. We will focus here on the dependence on
$\alpha$ of the degree of symmetry breaking of the relaxed state. We
have also varied $N$ systematically (for each $\alpha$) in a range
from a few thousand to one hundred thousand particles, and {\it in
this range} of $N$ our essential results and analysis are {weakly
sensitive} to this parameter. We will report in future work a more
detailed investigation of the subtle (and numerically challenging)
issue of the asymptotic large $N$ dependence of spherical symmetry
breaking \citep{boily+athanassoula_2006, worrakitpoonpon_2014}.
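Such initial conditions can be generated by inverse-CDF sampling, since the mass fraction enclosed within radius $r$ is $(r/R_0)^{3-\alpha}$; a minimal sketch (velocities are all zero in the cold case, so only positions are drawn):

```python
import numpy as np

def cold_sphere(n, alpha, R0=1.0, seed=0):
    """Sample n particle positions from rho(r) ~ r^-alpha inside a
    sphere of radius R0.  Inverse CDF: P(<r) = (r/R0)^(3-alpha),
    so r = R0 * U^(1/(3-alpha)) for uniform U."""
    rng = np.random.default_rng(seed)
    r = R0 * rng.random(n) ** (1.0 / (3.0 - alpha))
    # isotropic directions from normalized Gaussian vectors
    v = rng.standard_normal((n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return r[:, None] * v
```

For $\alpha = 1$, for example, the enclosed-mass fraction is $(r/R_0)^2$, so the median radius of the sample should be close to $R_0/\sqrt{2}$.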
All results presented here are for simulations in which energy was
conserved to within a tenth of a percent. { For simulations with
$\alpha$ in the range $[0.25,2]$ this level of energy conservation
has been attained using typical values of the essential numerical
parameters in the GADGET code [$0.025$ for the $\eta$ parameter
controlling the time-step, and a force accuracy of $\alpha_F=
0.001$]. The cases in which $\alpha$ is outside this range are
numerically more challenging because of singularities --- discussed
further below --- both for $\alpha=0$ and $\alpha=3$ in the limit $N
\rightarrow \infty$. For these cases we have subjected our results
to additional {tests} of their robustness, checking their stability
in particular to smaller time-steps (see also the discussion in
\cite{joyce_etal_2009,Benhaiem_SylosLabini_2015}). We have also
studied carefully the effects of varying the force smoothing
parameter (which regularizes the force at small separations), and
{we} have found our results to be stable provided it is
significantly smaller than the minimal characteristic size (see
below) attained by the structure during its collapse. For the
simulations reported below the smoothing parameter is always in this
range.}
\section{Results}
\subsection{Collapse and Virialization}
\begin{figure}
\vspace{1cm}
{
\par\centering \resizebox*{8cm}{7cm}{\includegraphics*{Fig1.eps}}
\par\centering
}
\caption{
{ Time evolution of the gravitational radius (as defined in
Eq.\ref{Rg_def}) normalized to its initial value $R_g(0)$, for
different $\alpha$ and with $N=10^4$. The inset shows the
characteristic time of the spread in fall time $\Delta T$
(normalized to $\tau_c$)
in simulations with the indicated $N$; the dotted line corresponds
to the analytical prediction given by Eq. (\ref{deltaT-alpha}). } }
\label{Rgrav_powerlaw_1e4}
\end{figure}
{ Before turning to the central issue in this article --- the
dependence of the asymmetry of the final virialized state on the
exponent $\alpha$ --- we consider first how various spherically
averaged indicators of the global evolution of the system vary with
$\alpha$, and how these dependencies can be understood. The results
in this section are in line with numerous previous studies of cold
systems with initial conditions in the class we consider, or
similar ones
(e.g. \cite{vanalbada_1982,aarseth_etal_1988,joyce_etal_2009}). We
simply focus on the quantities and behaviors which will be most
relevant for our analysis of the symmetry breaking in the next
section.}
In line with previous studies of such initial conditions, we observe
that the system evolves through a strong collapse followed by a
re-expansion which very rapidly leads to a virial equilibrium in which
most of the initial mass is bound. Fig.\ref{Rgrav_powerlaw_1e4}
shows, for different indicated values of $\alpha$, the temporal
evolution of the {\it gravitational radius} defined as \be
\label{Rg_def}
R_g(t) = \frac{G M_b(t)}{|W_b(t)|}
\ee
where $M_b(t)$ and $W_b(t)$ are respectively the mass which is bound
(i.e. particles with negative energy) and the potential energy of this
mass. The unit of time here is $\tau_c=\sqrt{ \frac { 3 \pi} {32 G
\overline{\rho}_0(R_0)}}$, where $\overline{\rho}_0 (R_0)$ is the
{average mass density inside the radius $R_0$} of the initial
spherical configuration. It corresponds to the time
for a particle initially at the outer periphery (i.e. at $r=R_0$) to
fall to the center, in the continuum approximation (i.e. taking $N
\rightarrow \infty$ keeping the initial mass density profile fixed)
and without shell crossing. The scale $R_g(t) $ is a measure of the
characteristic size of the system, and we observe in all cases, to a
first approximation, the same qualitative behavior --- a collapse to a
minimal size attained around $t=\tau_c$ followed by a re-expansion and
stabilization. {In all cases the bound particles form a virialized
structure, with the virial ratio (which we do not display here)
showing a very similar qualitative behaviour to that of $R_g(t)$
but with a final value close to $-1$ in all cases.}
{There is, however, also a clear trend with $\alpha$: the smaller
is $\alpha$, i.e. the closer to flat the density, the more violent
is the collapse, with the system reaching a deeper minimum in a
shorter time \footnote{{ We do not show data for $\alpha=2.5$ in
Fig.\ref{Rgrav_powerlaw_1e4} because the potential energy $W_b$
diverges for $\alpha \geq 2.5$ in the limit $N \rightarrow
\infty$. As a measure of the characteristic size of the system
we have used in this case the radius containing $90\%$ of the
mass.} }. Further, we note that the larger is $\alpha$ the
denser are the inner shells of the cloud and the sooner the collapse
starts.}
The variation of the characteristic time for the collapse and
re-expansion with $\alpha$ is quantified in the inset in the
figure. It shows, as a function of $\alpha$, the measured time $\Delta
T$, estimated as the difference between the two times at which
$R_g(t)=R_g^*$, defined as $R_g^*=(R_g^{asym} + R_g^{min})/2$ where
$R_g^{asym}$ is the estimated asymptotic value of $R_g$ at $t \gg
\tau_c$.
{ The continuous curve, which has been extended to
$\alpha=3$, is obtained from the following simple
considerations. We work in the approximation that departure from
spherical symmetry, and also the effects of shell crossing, can be
neglected. }
For an initial mass density with radial profile $\rho(r) \propto
r^{-\alpha}$ for $r < R_0$, and zero for $r> R_0$, mass at an initial
radial distance $r_0$ from the center will then fall to the center in
a time $\tau_c (r_0)= \sqrt{ \frac {3\pi} {32 G
\overline{\rho}_0(r_0)} } = \tau_c (r_0/R_0)^{\alpha/2}$, where
$\overline{\rho}_0(r_0)$ is the initial mass density of the sphere of
radius $r_0$. The distribution $h(\tau)$ of these fall times to the
origin can then be calculated using $4\pi \rho(r_0) r_0^2 dr_0 = M
h(\tau) d\tau$ (where $M$ is the total mass). One finds
\be
\label{eq:htau}
h(\tau) = \frac{2(3-\alpha)}{\alpha \tau_c} \left(\frac{\tau}{\tau_c}\right)^{3(\frac{2}{\alpha}-1)}
\ee
for $\tau \leq \tau_c$ (and $h(\tau)=0$ otherwise).
{The spread in the fall times can be characterized by the variance
of $h(\tau)$, \be
\label{deltaT-alpha}
\Delta T_{th} = 2\sqrt{ \langle \tau^2 \rangle - \langle \tau
\rangle^2} = \frac{2 \alpha \tau_c}{6-\alpha}
\sqrt{\frac{(3-\alpha)}{3}} \;. \ee This expression, plotted in the
inset of Fig. \ref{Rgrav_powerlaw_1e4}, reaches a maximum at $\alpha
\approx 2.3$, and goes to zero for both $\alpha=0$ and $\alpha=3$.
For $\alpha=0$, all particles fall to the origin at $\tau_c$ (the
well-known singularity of the canonical ``spherical collapse model''),
while as $\alpha$ steepens towards $\alpha=3$ almost all of the mass
is {initially at} small radii with fall times which are very small
compared to $\tau_c$. In the inset of Fig. \ref{Rgrav_powerlaw_1e4},
we see that Eq. (\ref{deltaT-alpha}) traces well the behavior of the
measured $\Delta T$ up to $\alpha \approx 2$. Thus, up to this value,
the characteristic time of the variation of the total potential indeed
just reflects the spread in the particles' fall times. Further for
larger $\alpha$ the behavior of $h(\tau)$ and $\Delta T_{th}$ indeed
reflect the qualitative change in behavior of the collapse we observe
in our simulations: while the collapse is completed only at $t \sim
\tau_c$ when the outermost mass falls, most of the mass falls at very
much shorter times. It is for this same reason that accurate numerical
integration becomes more costly as $\alpha$ increases and we report
results only up to $\alpha=2.5$.
}
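Under the same continuum, no-shell-crossing approximation, the expression for $\Delta T_{th}$ in Eq. (\ref{deltaT-alpha}) can be checked with a short Monte Carlo; the sketch below sets $\tau_c = R_0 = 1$.

```python
import numpy as np

def delta_t_th(alpha):
    """Analytic spread of fall times, Eq. (deltaT-alpha), with tau_c = 1."""
    return 2.0 * alpha / (6.0 - alpha) * np.sqrt((3.0 - alpha) / 3.0)

def delta_t_mc(alpha, n=200_000, seed=1):
    """Monte Carlo estimate: sample r0 from rho ~ r^-alpha via the
    inverse CDF P(<r) = r^(3-alpha), map each radius to its fall time
    tau(r0) = r0^(alpha/2), and return 2 * std of the fall times."""
    u = np.random.default_rng(seed).random(n)
    r0 = u ** (1.0 / (3.0 - alpha))
    tau = r0 ** (alpha / 2.0)
    return 2.0 * tau.std()
```

The Monte Carlo estimate agrees with the analytic expression, which vanishes at $\alpha = 0$ and $\alpha = 3$ and peaks near $\alpha \approx 2.3$.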
One other macroscopic feature of relaxation from cold initial
conditions of this kind, which will be relevant in our discussion
below, is that they often lead to mass ejection i.e. some particles
gain enough energy so that their total energy is positive and they can
escape to infinity. We show (see also \cite{syloslabini_2013}) in
Fig. \ref{pf_alpha} the fraction $p_f$ of the particles with positive
energy after relaxation (at $t \approx 5 \tau_c$), for different
$\alpha$ (and $N$). The observed behavior as a function of $\alpha$
--- maximal ejection at $\alpha=0$, followed by a monotonic decrease
(approximately exponential) with $\alpha$ in a range until
$\alpha \approx 2$, beyond which there is a much sharper drop --- is
clearly related to the qualitative behavior of the fall times discussed
above. { This will be seen more explicitly in our analysis below. }
\begin{figure}
\vspace{1cm}
{
\par\centering \resizebox*{7cm}{6cm}{\includegraphics*{M7.eps}}
\par\centering
}
\caption{Fraction of ejected particles for different $\alpha$
{and different number of particles (see labels)}: average and
standard deviation over $20$ realizations for $N=10^3$ and $N=10^4$,
and $5$ realizations in the other two cases.}
\label{pf_alpha}
\end{figure}
\subsection{Symmetry breaking of relaxed state}
\begin{figure}
\vspace{1cm}
{
\par\centering \resizebox*{8cm}{5cm}{\includegraphics*{Fig3a.eps}}
\par\centering
}
{
\par\centering \resizebox*{8cm}{5cm}{\includegraphics*{Fig3bb.eps}}
\par\centering
}
\caption{Upper plots: Projections of the virialized structure for a
realization with $N=10^4$ for different $\alpha$ {at the final
time $t \approx 5 \tau_c$}. Lower plot: flattening ratio
{(average and standard deviation)} $\iota_{80}$ as a function of
$\alpha$, {estimated over 20 realizations for $N=10^3$ and
$N=10^4$, and $5$ realizations in the other two cases. }}
\label{projection_alpha}
\end{figure}
{ We focus now on the question of how the symmetry breaking in the
final state depends on the exponent $\alpha$ characterizing the
density profile of the initial condition.}
Shown in the upper
plots of Fig. \ref{projection_alpha} are a projection in a chosen
plane of the resulting virialized configurations for the various
indicated values of $\alpha$, for simulations with $N=10^4$. Visual
inspection suggests that the breaking of spherical symmetry is
apparently strongest in the intermediate values of $\alpha$, and
weakest for the case $\alpha=0$. This is confirmed by
Fig. \ref{projection_alpha} (lower panel) which shows, as a function
of $\alpha$, the parameter $\iota_{80}$ (the ``flattening ratio'') of
the relaxed state defined as
\be
\label{iota}
\iota_P = \frac{\lambda_1}{\lambda_3} -1
\ee
where $\lambda_1$ and $\lambda_3$ are, respectively, the largest and
smallest of the three eigenvalues of the moment of inertia, and the
subscript indicates that the estimate is made on the $P$ \% of
particles which are most bound (we take here $P=80$ following common
practice in the literature). We do not show here any information
about the intermediate eigenvalue $\lambda_2$, but analysis of it
shows that it is typically not close to the value of either of
$\lambda_1$ or $\lambda_3$, i.e. the relaxed structures are quite
triaxial.
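A sketch of this estimator is given below; using all supplied particles, and the second-moment tensor of the positions as the ``moment of inertia'', are simplifying assumptions relative to the text (which restricts the sum to the $P\%$ most bound particles).

```python
import numpy as np

def flattening(pos):
    """Flattening ratio iota = lambda_1 / lambda_3 - 1, computed from
    the eigenvalues of the second-moment tensor of the positions
    (one common convention for the inertia tensor in this context)."""
    x = np.asarray(pos, dtype=float)
    x = x - x.mean(axis=0)                  # center on the barycenter
    lam = np.sort(np.linalg.eigvalsh(x.T @ x))  # ascending eigenvalues
    return lam[-1] / lam[0] - 1.0

# A triaxial test configuration: points at +/-2, +/-1.5, +/-1 on the
# three axes, for which the extreme eigenvalue ratio is (2/1)^2 = 4.
pts = np.array([[2, 0, 0], [-2, 0, 0], [0, 1.5, 0],
                [0, -1.5, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
print(flattening(pts))
```

A spherically symmetric distribution gives $\iota \approx 0$, while the stretched configuration above gives $\iota = 3$.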
{ For each $\alpha$ the different points in the lower panel in
Fig. \ref{projection_alpha} correspond to the indicated values of $N$,
and the error bars to the standard deviation measured over the
indicated number of realizations in each case. The plot shows clearly
--- in agreement with previous studies of these initial conditions
(e.g. \cite{aarseth_etal_1988,joyce_etal_2009,barnes_etal_2009,worrakitpoonpon_2014})
--- that the final state in the case $\alpha=0$ is in fact very close
to spherically symmetric. Further we note that above $\alpha \approx
1.5$ there is a clear trend towards progressively weaker symmetry
breaking, albeit with a lesser suppression than in the case
$\alpha=0$. We have checked {and confirmed} that these trends of
$\iota_{80}$ with $\alpha$ are also observed with different values of
P in the calculation of $\iota_{P}$.}
{In the light of the discussion in the previous section, these
plots suggest a possible correlation between the observed asymmetry
and the qualitative behavior of the spread in the fall times.
Further the (relative) suppression at larger $\alpha$ suggests that
there may be some connection between the ejection of matter and the
degree of symmetry breaking.}
How are particles' fall times related to the final state, and in
particular its asymmetry? In the relaxation process from such cold
initial conditions, particles' energies change greatly as they move in
the time dependent mean field. Indeed this is the essence of
``violent relaxation" as originally described by \cite{lyndenbell}. It
can be verified directly, by tracking the energy of individual
particles, that the energy change of any given particle occurs
essentially as it passes through the center of the
collapsing/re-expanding structure. This is the case simply because
the mean field is most intense and most rapidly varying at this
time. As a consequence this energy change depends essentially on the
time window in which the particle passes through this central region.
The correlation between initial radial position and fall time ---
which is strong except in the limit $\alpha=0$ where all particles
have the same fall time (modulo finite $N$ fluctuations) --- might
thus be expected to lead to a correlation between the energy change of
a particle and its initial radial position.
\begin{figure}
\vspace{1cm}
{
\par\centering \resizebox*{8cm}{7cm}{\includegraphics*{Fig4a.eps}}
\par\centering
}
{
\par\centering \resizebox*{8cm}{3.5cm}{\includegraphics*{Fig4b.eps}}
\par\centering
}
\caption{{ Change in particle energies {(average in bins)} in
units of $\epsilon_0= G N^2 m/R_0$, plotted against their initial
radial position $r_0$ (normalized to the initial maximum radius
$R_0$), for the different indicated $\alpha$, and $N=10^4$. Note
that the scale on the y-axis is different in the different
plots.}}
\label{DeltaE-r0}
\end{figure}
To see whether this is indeed the case we plot in Fig.\ref{DeltaE-r0}
the total energy change, i.e. the difference between initial energy
and the energy measured at a time well after the collapse, when the
stationary state has been reached, { averaged in bins}, as a
function of their initial radial position, for simulations with
$N=10^4$.
{ As anticipated we see that, except for the case $\alpha=0$,
there is indeed a very clearly identifiable correlation between energy
change and initial position: up to $\alpha=2$ a large positive energy
boost is {obtained} by a large fraction of the particles in the outer
shells, while at the larger values of $\alpha$ large energy decreases
are experienced by the particles in the inner shells. The explanation
for these features, and for the behavior in the case $\alpha=0$, is
closely related to our discussion of the previous section. As noted,
particles' energies change essentially as they pass through the center
of the structure, and what determines the energy change is the
temporal variation of the mean field they move in at this time.
}
The behavior of the mean-field potential at any point in the center of
the structure reflects approximately that of the total potential shown
in Fig.~\ref{Rgrav_powerlaw_1e4}. Particles which pass through the
center in the phase before the minimum is reached, at $t \sim \tau_c$,
will tend to lose energy because they climb out of a deeper potential
than they fall into, while the converse will be true for particles
which pass through the center as the system is re-expanding. These
latter receive an energy boost, which has been noted to be at the
origin of the mass ejection in both the case of cold collapse
\citep{joyce_etal_2009,syloslabini_2012,syloslabini_2013} and in
merging structures \citep{carucci_etal_2014}.
{Thus the trend we
observe with $\alpha$ is clearly linked to the distribution of fall
times $h(\tau)$ discussed above: for small $\alpha$, a
significant amount of the mass ``falls later'' and acquires an energy
boost, while at larger $\alpha$ most of the mass ``falls early'' and
loses energy. For $\alpha=0$, on the other hand, the correlation
between the energy change of a particle and its initial position is
very markedly weaker, because in this case the time of fall of a
particle may become correlated with its initial radial position only
through the finite $N$ fluctuations (which regulate the singularity
of the collapse characteristic of this case).}
Let us consider now how the large energy injection into the outer
shells can lead to the very strong symmetry breaking of the relaxed
states observed in these cases. In the limit of exact spherical
symmetry, the dispersion of the energy change at any given initial
radius in Fig. \ref{DeltaE-r0} should vanish, and the observed finite
dispersion is a consequence of the spherical symmetry breaking.
Indeed what this dispersion implies is that the energy injection at
these large radii still depends sensitively on the direction of
arrival, and not just on the initial radial position.
\begin{figure}
\vspace{1cm}
{
\par\centering \resizebox*{8cm}{7cm}{\includegraphics*{M2a.eps}}
\par\centering
}
\caption{{ Behavior of $\iota_{80}(t)$ (circles) and $\iota_{100}$
(squares, corresponding to all the bound particles)
as a function of time for different values of $\alpha\in [0,2.25]$,
in simulations with $N=10^4$}.}
\label{M2a}
\end{figure}
{The initial system is not perfectly spherical due to Poisson
fluctuations, and to a first approximation it can be described as an
ellipsoid with $\iota(0) \sim N^{-1/2}$. Therefore particles
experience, during the collapse, a force whose tangential
component depends on the angle with the ellipsoid's initial major
semi-axis. Consequently, as can be seen in Fig.~\ref{M2a},
$\iota_{80}(t)$ grows rapidly as the particles coming from the
direction orthogonal to the initial major semi-axis collapse to the
center in a shorter time than those along this axis: this is the
\cite{Lin_Mestel_Shu_1965} instability whose effect is to amplify
the initial small eccentricity. The particles initially in the
outermost shells which arrive latest then travel through the rapidly
decreasing potential created by the other mass which is already
re-expanding, and gain energy. Note that, while for $\alpha=0$ the
amplification and subsequent decrease of the asymmetry to its final
value occurs in a very short time around the collapse, for $\alpha
>0$ the system shows a few oscillations before the complete
relaxation. This behavior reflects that of the gravitational radius
in Fig.\ref{Rgrav_powerlaw_1e4}. In addition, we note that for
$\alpha=0$ the amplification of the initial $\iota(0)$ is much less
marked compared to the cases with $\alpha>0$, in which the spread in
fall times of the particles is much greater. For $\alpha \ge 2$,
most of the mass now collapses at very short times and accordingly
$\iota_{80}(t)$ shows a very fast growth for $t < \tau_c$ followed
by a decrease when the external particles pass through the center.}
\begin{figure}
\vspace{1cm}
{
\par\centering \resizebox*{8cm}{7cm}{\includegraphics*{M2c.eps}}
\par\centering
}
\caption{{
    Behavior of the absolute value of the cosine of the angle between
    the eigenvector corresponding to the smallest eigenvalue of the
    inertia tensor (i.e., the major semi-axis of the ellipsoid) at
    time $t=0$ and the same eigenvector at time $t$, for different
    $\alpha$. It remains close to unity at all times, showing that
    the orientation of the major semi-axis remains essentially
    unchanged throughout the evolution.}}
\label{M2c}
\end{figure}
{ The hypothesis that the amplification of the initial triaxiality
is due to finite $N$ effects via the instability described by
\cite{Lin_Mestel_Shu_1965}, and the subsequent role of energy gain
in amplifying the triaxiality, can be tested
straightforwardly. Firstly, we would expect that the major semi-axis
at any time $t$ should be correlated to its value at $t=0$.
Secondly, }when there is a significant mass ejection, we would
expect there to be a correlation between the angular distribution of
the ejected particles and the orientation of the final triaxial
structure, and more specifically between the preferred direction for
ejection and the elongated axis of the final structure.
{ As a measure of these spatial distributions we use simply the
inertia tensor, determining its eigenvalues and eigenvectors.
Fig.~\ref{M2c} shows that the orientation of the major
semi-axis indeed remains almost the same throughout the collapse and
virialization}.
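The alignment measure used in Figs.~\ref{M2c} and \ref{costhetahisto_alpha_1e4} follows directly from the eigen-decomposition of the inertia tensor. A minimal Python sketch (our own convention and naming, assuming equal-mass particles stored as an $(N,3)$ position array):

```python
import numpy as np

def major_axis(pos):
    """Unit eigenvector of the inertia tensor belonging to its smallest
    eigenvalue, i.e. the major semi-axis of the mass distribution.
    pos: (N, 3) array of particle positions (equal masses assumed)."""
    r2 = np.sum(pos**2, axis=1)
    I = r2.sum() * np.eye(3) - pos.T @ pos   # inertia tensor, unit masses
    w, v = np.linalg.eigh(I)                 # eigenvalues in ascending order
    return v[:, 0]                           # axis of the smallest eigenvalue

def axis_alignment(pos_a, pos_b):
    """|cos(theta)| between the major axes of two snapshots; a value
    close to unity means the orientation is preserved."""
    return abs(np.dot(major_axis(pos_a), major_axis(pos_b)))
```

The modulus of the cosine is used because an eigenvector is only defined up to a sign.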
The left panel of Fig. \ref{costhetahisto_alpha_1e4} shows, for the
case $\alpha=0.5$, which has both a strongly triaxial final state and
significant mass ejection, a histogram of realizations of the modulus
of the cosine of the angle between the eigenvectors corresponding to
the smallest eigenvalues for the ejected mass and the $80\%$ most
bound mass. There is again a very clear positive signal for the
correlation of the orientation of the axes.
The right panel of Fig. \ref{costhetahisto_alpha_1e4} shows the
correlation, measured in the same way, { for a number of
realizations}, between the {\it initial} distribution and the
ejected mass. We observe again a very clear correlation, and a
similar result is found considering the initial and final bound mass.
The reason for this strong correlation of the orientation of the
longest axes is simple: the moment of inertia of the initial mass
gives a measure of the small effective anisotropy due to the finite
$N$ fluctuations, and more specifically the axis of the smallest
eigenvalue is the axis along which the mass is ``stretched'' furthest
away from the plane orthogonal to it passing through the center. It is
precisely along such a ``stretched'' direction that one expects the
particles to arrive slightly later than the others --- just as the
mass along the longest axes of an ellipsoidal distribution --- and
thus to receive a slightly larger energy kick. This provides further
convincing evidence that the elongation of the final structure indeed
has its origin in the (relative) delay in particles' fall along these
directions.
\begin{figure}
\vspace{1cm} { \par\centering
\resizebox*{8cm}{7cm}{\includegraphics*{Fig5.eps}}
\par\centering }
\caption{Histogram of the values measured, in 20 realizations
of the case $\alpha=0.5$ and $N=10^4$, of the cosine of the
angle $\theta$ between the longest axes (determined from
inertia tensor)
of (i) the relaxed and ejected mass distributions (left panel),
and (ii) the initial and ejected mass distributions (right panel).}
\label{costhetahisto_alpha_1e4}
\end{figure}
{ This mechanism of symmetry breaking is more efficient for
$\alpha$ in the range $[0.5,1.5]$ (see Fig. \ref{DeltaE-r0}). There
is indeed greatest symmetry breaking in the final state for
intermediate values of $\alpha$ where the energetically boosted
particles are well localized in the outer radii, and where these
same particles represent a significant fraction of the mass. The
strong suppression of asymmetry in the case $\alpha=0$ is,
conversely, due to the fact that, although there are many particles
which pick up large energy boosts, they come from many different
parts of the initial structure. This leads to an effective
averaging over the angular fluctuations and a much more spherical
structure.}
{ We note that for $\alpha \ge 2$ there is a {markedly
different behavior of $\iota_{100}$ and $\iota_{80}$. This is a
result of the fact, as seen in Fig.\ref{pf_alpha}, that there is
no mass ejection in these cases: late-arriving particles do not
gain enough kinetic energy to escape from the system. In this
case there is, however, a significant fraction of high energy (but
bound) particles which reach large distances, giving rise to a
configuration that is more asymmetric than that of the $80\%$ most
bound particles.} Despite these differences the density and radial
velocity profiles are very similar for all values of $\alpha$ (see
Fig.\ref{densityprofile}), showing respectively a decay $n(r) \sim
r^{-4}$ and $\langle v_r^2 \rangle \sim r^{-1}$ as noted by
\cite{syloslabini_2013}. However the $\alpha=0$ case leads to a
more compact configuration than $\alpha>0$ as a consequence of the
larger spread of the energy distribution.
}
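The profiles of Fig.~\ref{densityprofile} are obtained by binning particles in logarithmically spaced radial shells; a possible implementation in Python (a sketch under the assumption of equal-mass particles; function and variable names are ours):

```python
import numpy as np

def radial_profiles(pos, vel, n_bins=30):
    """Number density n(r) and radial velocity dispersion <v_r^2>(r)
    in logarithmic radial bins, for (N, 3) position/velocity arrays."""
    r = np.linalg.norm(pos, axis=1)
    vr = np.sum(pos * vel, axis=1) / r            # radial velocity component
    edges = np.logspace(np.log10(r.min()), np.log10(r.max()), n_bins + 1)
    idx = np.clip(np.digitize(r, edges) - 1, 0, n_bins - 1)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    counts = np.bincount(idx, minlength=n_bins)
    n = counts / shell_vol                        # number density per shell
    vr2 = np.array([np.mean(vr[idx == i]**2) if counts[i] else np.nan
                    for i in range(n_bins)])
    centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin centers
    return centers, n, vr2
```

Fitting $\log n$ against $\log r$ in the outer bins then yields the $r^{-4}$ slope quoted above.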
\begin{figure}
\vspace{1cm} { \par\centering
\resizebox*{8cm}{7cm}{\includegraphics*{densityprofile.eps}}
\par\centering }
\caption{ Density (upper panel) and radial velocity (bottom panel)
profiles for the final configuration at $t= 5 \tau_c$ and for
different values of $\alpha$ { and with $N=10^4$. The dashed
line has a $r^{-4}$ behavior in the case of the density profile
(upper panel) and a $r^{-1}$ behavior in the case of radial
velocity profile.}}
\label{densityprofile}
\end{figure}
{We remark finally that a very similar dynamical process was observed
by \cite{Benhaiem_SylosLabini_2015} for the case of initial
conditions given by uniform ellipsoids. In addition, we note that
\cite{theis+spurzem_1999} considered a Plummer density profile with
a very small initial virial ratio, finding that a fast dynamical
collapse generates at its end the maximal triaxiality of the system:
the dynamical mechanism that generates such a triaxial structure is
the same as that at work for the power-law density profiles.}
\section{Discussion and conclusions}
In summary, the mechanism for the generation of { the very strong
spherical symmetry breaking observed for certain density profiles}
during violent relaxation from cold spherical initial conditions is
essentially the existence of a preferential axis for the large
``energy injection'' to particles in the outer parts of the initial
structure, which leads to an elongation of the final structure along
the same axis. This axis is itself defined by the finite $N$
fluctuations breaking spherical symmetry in the initial conditions,
corresponding to an axis along which the matter is on average further
from the origin. Particles initially at larger radii along this axis
fall through the collapsing region later than particles along the
other axes, and as a consequence pick up a larger energy injection.
After this time particle energies change negligibly, and the strong
variation of energy as a function of angle along the (predominantly
radial) orbits leads, after further phase mixing, to a virialized
structure with a spatial structure reflecting the energy injection. In
particular the { major semi-axis} of the final structure is
correlated both with the (very slightly) long axis of the initial
condition, and with that along which the energy kick obtained by
particles during violent relaxation is greatest. This mechanism has no
apparent relation to the instability of equilibrium systems with
radial orbits. More specifically, the energy injection which is its
essential ingredient occurs in a very short time during violent
relaxation when the system is very far from equilibrium. The system
never approaches close to an equilibrium configuration which is
spherically symmetric with purely radial orbits.
The analysis presented here is of completely cold initial conditions
only. { We can anticipate that both the asymmetry of the final
state and the qualitative features of the collapse leading to it
will show the same behavior as a function of $\alpha$ for simple
distributions of non-zero initial velocities,} provided the
associated initial virial ratio $b=2K/|W|$ is sufficiently small (see
e.g. \cite{syloslabini_2012}, which studies in particular the energy
injection and mass ejection as a function of $b$). Given
\citep{Polyachenko_Shukhman_1981, merritt+aguilar_1985} that warm
(and, in particular, equilibrium) initial conditions are observed in
many cases to give rise to triaxial structures, by a mechanism which
clearly is intimately related to the ROI of equilibrium systems, we
are led to the conclusion that there are (at least) two distinct
mechanisms subsumed under what is usually called ROI. We note that
this conclusion is not only consistent with previous studies, but
gives an explanation of one of their striking (and puzzling) results,
namely that when the initial virial ratio $b$ is varied, there is a
critical value at which there is a qualitative change in behavior:
above this value symmetry breaking occurs only if the velocity
distribution is sufficiently radial, while below this value symmetry
breaking occurs irrespective of whether the velocity distribution is
isotropic or not (see e.g. Figure 4 of \cite{barnes_etal_2009}, which
locates the value at $b \approx 0.05 - 0.15$ depending on the
profile).
While the dependence on the velocity anisotropy is a ``smoking gun''
for ``real'' ROI operating for the warmer initial conditions, the
presence of a threshold below which the details of the velocity
distribution has no relevance is explained very naturally as the
dominance
{ in this region of a distinct mechanism operating far
from equilibrium, during the collapse, as we have described here.}
{A recent study} \citep{pakter_etal_2013} performs an analysis of the
stability to elliptical perturbations of a uniform sphere
(i.e. $\alpha=0$ here) with an isotropic velocity distribution and
oscillating under its mean field, and finds that there is a critical
value of the initial virial ratio below which there is such an
instability.
{ Whether or not this is in fact the
essential instability leading to deviation from spherical symmetry
at early times during the collapses we have studied, the analysis of
\cite{pakter_etal_2013} illustrates that, far from equilibrium,
there are indeed such instabilities even when the velocity
distribution is isotropic, and which are thus physically distinct
from the ROI mechanism for equilibrium systems.}
{We note finally that our analysis here has been for
isolated systems with initial power law initial conditions
in a non-expanding universe. A very similar phenomenology
of symmetry breaking starting from cosmological type
initial conditions in an expanding background has been
described in \cite{macmillan2006universal}, and the
ROI in this case has been linked (see also \cite{huss_etal_1999})
to the generation of a ``universal'' NFW-type density profile
in this context. We do not observe the ROI in our study to be
associated with such a final density profile: our profiles
are well characterized by a quite flat inner core
and power law decay $\sim r^{-4}$.
To determine whether the specific mechanism of amplification
of symmetry breaking we have described here --- associated
with the energy injection to material initially in the outer
shells --- is at play, or not, in the formation of triaxial
dark matter halos in cosmological models would require
further extensive study, incorporating a careful comparison
between the dynamics of isolated structures and halos
in the cosmological setting.}
\bigskip
Numerical simulations have been run on the Cineca PLX cluster (project
ISCRA QSS-SSG). In addition, this work was granted access to the HPC
resources of The Institute for scientific Computing and Simulation
financed by Region \^Ile de France and the project Equip@Meso
(reference ANR-10-EQPX- 29-01) overseen by the French National
Research Agency (ANR) as part of the ``Investissements d'Avenir''
program.
\section{Introduction}
Proteins employ sophisticated binding mechanisms during their interplay with other physiological partners to fulfill crucial biological processes. Starting from the recognition of small rigid molecules by proteins all the way to complex rearrangements of protein-protein interactions, many different models of protein binding have been suggested.\cite{vogt2012, gianni2014, hammes2009, paul2016} The important transient conformational changes associated with binding can be hidden in equilibrium structures, and capturing them is the only way to provide comprehensive mechanistic insights. One example of a challenging question concerns intrinsically disordered proteins. These proteins interact by coupling binding and folding, and a lot of effort has been directed towards understanding the temporal ordering of the underlying events.\cite{wright2009} Therefore, clarifying the detailed binding mechanisms of protein interactions necessitates a kinetic perspective.\cite{chakrabarti2016l}
NMR-based techniques revealed invaluable insights into the complicated binding mechanisms of proteins, such as the ``fly-casting'' interaction mechanism of the intrinsically disordered pKID transcription factor with the KIX domain, i.e., a loose hydrophobic encounter complex is followed by a second (folding) phase.\cite{sugase2007} However, approaches that allow for a direct observation of events related to binding in a time-resolved manner are very scarce. They are commonly based on stopped-flow methods,\cite{shammas2012, rogers2013, rogers2014} where rapid mixing of two interacting species is used as a trigger to initiate binding (or unbinding in competition experiments).\cite{shammas2016, gianni2016} These methods are limited by the mixing time, leaving the accessible time window in the millisecond regime.\cite{gianni2016} On the other hand, there are ultrafast laser-based approaches that rely on a temperature jump as a trigger for conformational changes, and fluorescence as a way of detection.\cite{dosnon2015} The different phases in the binding process of a yeast protease and its intrinsically disordered inhibitor have been resolved in this way.\cite{narayanan2008} However, temperature jumps are limited by the typically small size of the perturbation that can be induced. Single molecule fluorescence revealed the electrostatically driven encounter complex formation, followed by folding into the final 3D conformation.\cite{sturzenegger2018} A recent MD simulation provided atomistic support for the induced fit binding mechanism of an intrinsically disordered system \cite{robustelli2020}. Key contacts between the disordered peptide and the protein formed before or in parallel with the secondary structure formation. However, MD simulations are limited to very fast binders, and it is still very difficult to approach the relevant range of milliseconds.
The emerging strategy of designing photoswitchable proteins and peptides proved fruitful for diverse studies,\cite{Beharry2011} where a precise control of certain aspects of protein structure and/or function is necessary to control for instance protein folding,\cite{spo02,woolley05,rehm05,aemissegger05,schrader07,Ihalainen2008,Lorenz2016} allosteric communication,\cite{buchli13,stock2018, bozovic2020b, bozovic2020a} or biological activity.\cite{hoorens2018, schierling2010, brechun2017, zhang2010} Here we employ the previously designed photoswitchable RNase S to explore the kinetics and dynamics of (un)binding of this non-covalent complex.\cite{jankovic2019} This model system has been previously used to study the mechanism of coupled binding and folding,\cite{goldberg1999, bachmann2011} as the S-peptide fragment is unfolded in isolation, while it adopts the helical structure once bound to the S-protein part.\cite{Luitz2017, schreier1977, goldberg1999, richards1959, bachmann2011}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{fig1.pdf}
\caption{Molecular construct.\cite{jankovic2019} The S-protein (yellow) with highlighted tyrosine residues (magenta) binds the S-peptide (blue) in the \textit{cis}-state of the azobenzene moiety (orange), while it unbinds in the \textit{trans}-state. The picture was adapted from pdb-entry 2e3w.\cite{Boerema2008}} \label{figStruct}
\end{figure}
Our molecular construct is illustrated in Fig.~\ref{figStruct}.\cite{jankovic2019} The azobenzene moiety (orange) is covalently linked to the S-peptide (blue) via two cysteines (see Methods for details). By choosing the distance between these anchoring points, the $\alpha$-helicity of the S-peptide in the two states (\textit{cis} or \textit{trans}) of the photoswitch is either stabilized or destabilized,\cite{flint02} which in turn determines its binding affinity to the S-protein (yellow).\cite{jankovic2019} One can selectively switch between both states with light of the proper wavelength. In Ref.~\onlinecite{jankovic2019}, we designed five different mutants with varying anchoring points, and in one case with an additional mutation. The binding affinities of the S-peptide to the S-protein in the \textit{cis} and the \textit{trans}-states of the photoswitch have been measured by a combination of ITC, CD spectroscopy and intrinsic tyrosine fluorescence quenching. As anticipated by our design, the binding affinity is larger in the \textit{cis}-state for all mutants we investigated. However, the values for the binding affinities, and in particular the factors by which the binding affinity changes upon switching, vary significantly. S-pep(6,13), with the photoswitch linked at positions 6 and 13, sticks out in this regard, as it binds with reasonable affinity in the \textit{cis}-state, but no specific binding could be detected by CD spectroscopy in the \textit{trans}-state (fluorescence quenching indicated some degree of unspecific binding). This mutant will be the focus of the present kinetic study, as it approaches the ``speed limit'' of ligand unbinding and thus reveals its intrinsic dynamics, analogous to the concept of downhill protein folding.\cite{sabelko99,Yang03} As a control, we will also consider S-pep(6,10), which has a large (20 fold) change in binding affinity, but stays specifically bound to the protein in both states.\cite{jankovic2019}
In either case, the photoswitchable S-peptide does not have any fluorophore, while the S-protein has six tyrosine residues, one of which is located in the binding groove (Fig.~\ref{figStruct}). The amount of fluorescence will be sensitive to peptide binding, as the peptide (presumably mostly its azobenzene moiety) quenches the tyrosine fluorescence; this is the effect that enables one to determine binding affinities from the concentration dependent fluorescence yield.\cite{jankovic2019} Here, we measure fluorescence in a transient manner in order to follow the kinetics and/or dynamics of ligand binding and unbinding.
\section{Methods}
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=.45\textwidth]{fig2.pdf}
\caption{Time-resolved fluorometer. (a) Experimental setup used in this study; the various components are discussed in the text. Test experiments demonstrating the performance of the system are shown in panel (b) for a high frequency (33 Hz) and in panel (c) for a low frequency (1.1 Hz) of the syringe pushes.}\label{FigSetup}
\end{center}
\end{figure}
\subsection{Experimental Setup}
Fig.~\ref{FigSetup}a shows the experimental setup, which has been specifically designed for this study. A UV-LED at 265~nm with active area 1~mm$^2$ (M265D2, Thorlabs) was used to excite the fluorescence of the tyrosine residues of the protein sample. It was operated by a pulsed laser diode driver (LDP-V~10-10, PicoLAS), producing 15~ns pulses at a repetition rate of 200~kHz. The light was collected with a 50~mm lens, spatially filtered with a 5~mm diameter aperture, and focused with a 15~mm lens into the sample with a spot size of $\approx$300~$\mu$m. We estimated that the time-averaged power in the sample was $\approx1~\mu$W. A low power was chosen to minimize the number of molecules that photo-isomerize due to the measurement light (we estimated that it takes about a minute until every molecule in the measurement volume would have seen a 265~nm photon). The transmitted light was measured by an avalanche photo diode (APD, APD120A2/M, Thorlabs). The fluorescence light was collected in a 90$^\circ$ geometry by a large-aperture aspherical lens ($f$=40~mm, dia 50~mm), spectrally filtered with an interference filter transmitting $\approx$ 300-360~nm (XRR0340, Asahi Spectra), and imaged onto a photomultiplier (PM, PMA~175-N-M, PicoQuant). The signals from the APD and PM have been digitized in a home-built 16 bit ADC (similar to the one described in Ref.~\onlinecite{Farrell2020}), and transferred to a computer for data processing.
Pump pulses at 447~nm were generated with a GaN laser diode (PLPT9~450D\_E, Osram Opto Semiconductors), operated by another pulsed laser diode driver (LDP-V~10-10, PicoLAS) to produce pulses of 2~$\mu$s length at typical repetition frequencies of 1-33~Hz. While the maximum power of the laser diode is specified at 3.5~W in cw-operation, we found that one can go up to 10~W in pulsed operation, yielding 20~$\mu$J of pulse energy in the 2~$\mu$s long pulses. The laser diode beam was pre-collimated (LTN330-C, Thorlabs), its elliptical shape corrected with two cylindrical lenses (50~mm and 150~mm), and then focused into the sample with a 150~mm lens, roughly matching the diameter of the probe light.
The intrinsic time resolution of the setup is 5~$\mu$s, determined by the repetition rate (200~kHz) of the UV-LED used to excite the fluorescence of the tyrosine residues. The signal-to-noise ratio is proportional to the square root of the number of detected fluorescence photons, which is shot-noise limited. As shot-noise is uncorrelated (i.e., white noise), the signal-to-noise ratio can be improved by time-filtering the data with a Gaussian function, at the expense of time resolution. This effectively increases the number of detected photons, and the signal-to-noise ratio scales with the square root of the effective time resolution after time-filtering.
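The effect of such time-filtering on a shot-noise-limited trace can be illustrated with a short simulation (a sketch with illustrative numbers, not the actual analysis code): convolving a photon-count trace with a normalized Gaussian kernel reduces the relative noise roughly as the square root of the effective averaging window.

```python
import numpy as np

def gaussian_smooth(signal, sigma_pts):
    """Smooth a shot-noise-limited trace with a normalized Gaussian
    kernel; sigma_pts is the kernel width in sample points."""
    half = int(4 * sigma_pts)                 # truncate kernel at 4 sigma
    t = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (t / sigma_pts)**2)
    kernel /= kernel.sum()                    # preserve the mean signal level
    return np.convolve(signal, kernel, mode="same")
```

For a Gaussian kernel the noise variance is reduced by the factor $\sum_k w_k^2 \approx 1/(2\sqrt{\pi}\,\sigma)$, consistent with the square-root scaling stated above.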
The experiment required the exchange of sample between subsequent excitations from the 447~nm laser, which was achieved with a pulsed syringe pump pushed by a stepper motor (DRV014, Thorlabs). The stepper motor controller (KST101, Thorlabs) has an external trigger input, moving the stepper motor at desired time points with steps whose size can be pre-programmed. The syringe was connected to the sample cuvette\cite{Bredenbeck2003a} via rigid Teflon tubings. The channel in the sample cuvette was about 1~mm wide and 200~$\mu$m thick. The ratio of the dimension of the syringe vs that of the channel in the sample cuvette translated the 4~$\mu$m steps of the stepper motor into the desired $\approx 300~\mu$m steps of the excited spot in the sample cuvette. The stepper motor controller had to be programmed in Labview, allowing one to overwrite the default settings for the maximal velocity and acceleration, which was needed for stepping frequencies up to 33~Hz. A total sample volume of about 1~ml was needed. The timings of all components have been controlled with a programmable delay generator (T560, Highland Technology).
Figs.~\ref{FigSetup}b,c demonstrate the performance of the pulsed syringe pump, using only the azobenzene-photoswitch (without protein, the part colored in orange in Fig.~\ref{figStruct}) as test sample. To this end, the sample was first prepared in its \textit{cis}-state with an excess of 370~nm light from a cw-LED, which illuminated the fused silica syringe. The \textit{trans}-absorption is significantly larger at this wavelength (see Fig.~\ref{FigSpec}), thereby shifting the photo-equilibrium to the \textit{cis}-state with typically 85\%. The 447~nm laser pulse then induced a \textit{cis}-to-\textit{trans} isomerization at time-zero with a quantum yield of $\approx$60\%.\cite{borisenko05}
Fig.~\ref{FigSetup}b shows that photo-isomerization is instantaneous on the timescale of this experiment, revealing a step-like increase in transmission of the 265~nm light from the UV-LED at time-zero, since the absorption of the photoswitch changes upon photo-isomerization (see Fig.~\ref{FigSpec}). The signal stays roughly constant for $\approx$15~ms after laser excitation, which will be the usable time window for measuring transient fluorescence, followed by a period of $\approx$10~ms for sample exchange upon pushing the syringe pump. The overall data are periodic, and we shifted the subsequent 5~ms, which are used to determine an offset, to negative times in Fig.~\ref{FigSetup}b. The dead time of the experiment is thus $\approx$15~ms. When a lower repetition rate is chosen, the dead time remains the same, while the usable time window increases accordingly, see Fig.~\ref{FigSetup}c (some of the protein samples were ``sticky'', hampering a smooth motion of the piston in the syringe, in which case it was necessary to increase the time between the syringe pushes and the 447~nm laser pulses to 100-250~ms). This plot also shows that the sample does not move and/or diffuse on a 1~s timescale between the syringe pushes. For a quick and complete exchange of the sample, it turned out to be very critical that absolutely no bubbles were present in the syringe, teflon tubings or the sample cuvette, and that the spatial overlap between the 265~nm probe light and the 447~nm laser pulses was carefully aligned.
The opposite \textit{trans}-to-\textit{cis} switching direction (ligand binding) could also be measured with this setup. To that end the sample was kept in the dark, in which case it would eventually relax into the lower-energy \textit{trans}-state. Illumination with the 447~nm laser then induced \textit{trans}-to-\textit{cis} isomerization, since in essence no \textit{cis}-peptides were present and since the \textit{trans}-state also absorbs at this wavelength (see Fig.~\ref{FigSpec}). The quantum yield is however significantly lower in this case.\cite{borisenko05} Photo-isomerization competes with thermal \textit{cis}-to-\textit{trans} back relaxation, but since the time-averaged power of the 447~nm laser is very small (40~$\mu$W at 2~Hz), the photo-equilibrium of the sample as a whole will almost exclusively be on the \textit{trans}-side.
\subsection{Sample Preparation}
The S-protein was prepared by cleaving the commercial ribonuclease A from bovine pancreas (Sigma-Aldrich) with subtilisin (Sigma-Aldrich), as described in Refs.~\onlinecite{richards1959,jankovic2019} (with small modifications). To limit the proteolysis to a single peptide bond (between residues 20 and 21), we performed the cleavage reaction on ice overnight. The reaction was stopped by adjusting the pH value to 2. The S-protein was purified by C5 reverse-phase chromatography.
Photoswitchable peptides were prepared by crosslinking the cysteine-containing peptides with the water-soluble azobenzene-based photoswitch.\cite{zhang03} The peptides were first synthesized by standard Fmoc-based solid-phase peptide-synthesis using a Liberty 1 peptide synthesizer (CEM Corporation, Matthews, NC, USA). All amino acids were purchased from Novabiochem (La Jolla, CA, USA). The photoswitch (3,3'-bis(sulfonato)-4,4' bis(chloroacetamido)azobenzene) was added to a peptide reduced by tris(2-carboxyethyl)phosphine (TCEP) in 5x molar excess and incubated overnight. The linked peptides were purified by C18 reverse-phase chromatography.
The purity of all protein and peptide samples was analyzed by mass spectrometry. Concentrations were determined by amino acid analysis. All solutions were prepared in 50~mM sodium phosphate buffer pH 7.0.
\subsection{Model}
For a quantitative determination of the on- and off-rate constants, we considered the following coupled equilibria, in which two molecular species, the S-peptide in its \textit{cis} and \textit{trans}-states, compete for the same binding site on the S-protein $P$:
\begin{align}
PL_{cis}&\xrightleftharpoons[k_{on,cis}]{k_{off,cis}} P+L_{cis} \nonumber\\
PL_{trans}&\xrightleftharpoons[k_{on,trans}]{k_{off,trans}} P+L_{trans} \label{eqEqcouled}
\end{align}
The corresponding differential equations were solved numerically with the help of Mathematica. For the initial conditions, we first determined the equilibrium concentrations assuming the S-peptide to be 100\% in either the \textit{cis} or the \textit{trans}-state, and then switched 10\% or 5\% of the molecules for the \textit{cis}-to-\textit{trans} or \textit{trans}-to-\textit{cis} isomerization, respectively (accounting for the smaller isomerization quantum yield of the latter).
Although the solutions of these differential equations are not strictly exponential, the deviation from exponential behavior is very small, and the model output was therefore fit to single-exponential functions.
The model is too simple to expect a quantitative fit of the experimental data; for example, it ignores the possibility of unspecific binding, although a comparison of CD and fluorescence binding curves gives evidence that unspecific binding does exist to a certain extent.\cite{jankovic2019} We therefore concentrated on the time constants; in the case of \textit{trans}-to-\textit{cis} switching, only on those at the lower S-protein concentration, where the effect of unspecific binding is expected to be smaller. Binding affinities known from Ref.~\onlinecite{jankovic2019} were adopted, and the remaining parameters of the model ($k_{on,cis}$ and $k_{on,trans}$ for S-pep(6,10), and $k_{on}$ and $K_{d,trans}$ for S-pep(6,13)) were varied until time constants similar to the experimental ones were obtained.
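The competition model of Eq.~\ref{eqEqcouled} can be sketched as a pair of mass-action rate equations. The following Python code is a minimal illustration of that scheme (the authors used Mathematica); it uses the S-pep(6,10) constants from Table~\ref{tab:Kd}, while the solver choice, variable names, and the 2~s integration window are our own assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rate constants for S-pep(6,10) (Table 1); k_off = K_d * k_on.  Units: M, s.
KON_CIS, KD_CIS = 3e5, 2.3e-6
KON_TRANS, KD_TRANS = 1e5, 47e-6
KOFF_CIS = KON_CIS * KD_CIS          # ~0.69 s^-1
KOFF_TRANS = KON_TRANS * KD_TRANS    # 4.7 s^-1

def rhs(t, y, p_tot, lc_tot, lt_tot):
    """Mass-action kinetics of the two complexes competing for the S-protein."""
    pl_c, pl_t = y
    p = p_tot - pl_c - pl_t                              # free S-protein
    dpl_c = KON_CIS * p * (lc_tot - pl_c) - KOFF_CIS * pl_c
    dpl_t = KON_TRANS * p * (lt_tot - pl_t) - KOFF_TRANS * pl_t
    return [dpl_c, dpl_t]

def relax(p_tot, lc_tot, lt_tot, y0, t_end=2.0):
    """Integrate the coupled equilibria starting from the initial condition y0."""
    return solve_ivp(rhs, (0.0, t_end), y0, args=(p_tot, lc_tot, lt_tot),
                     rtol=1e-9, atol=1e-12)

# trans-to-cis switching, 100 uM S-protein + 200 uM S-peptide:
# equilibrate the all-trans sample, then transfer 5% of the
# S-peptide (free and bound alike) into the cis state.
p_tot, l_tot = 100e-6, 200e-6
pl_t_eq = relax(p_tot, 0.0, l_tot, [0.0, 0.0]).y[1, -1]
sol = relax(p_tot, 0.05 * l_tot, 0.95 * l_tot,
            [0.05 * pl_t_eq, 0.95 * pl_t_eq])
```

Fitting `sol.y` to a single exponential then yields the model time constants compared with experiment in the Results section.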
\section{Results}
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=.45\textwidth]{Fig3}
\caption{Absorption spectra. Absorption spectra exemplified for S-pep(6,13) (dashed lines) and S-pep(6,13)+S-protein (solid lines) in their \textit{cis} (red) and \textit{trans} (blue) states. The arrows indicate the wavelengths of the various light sources used in the experimental setup, as well as that of the fluorescence emission.}\label{FigSpec}
\end{center}
\end{figure}
\begin{figure*}[t]
\centering
\begin{center}
\includegraphics[width=.8\textwidth]{Fig4}
\caption{Time-resolved fluorescence. Relative fluorescence change for (a) \textit{cis}-to-\textit{trans} switching of S-pep(6,10) with 200~$\mu$M S-pep(6,10) and 200~$\mu$M S-protein, and (b) \textit{trans}-to-\textit{cis} switching with S-pep(6,10) concentration 200~$\mu$M, and S-protein concentration 100~$\mu$M (red) as well as 400~$\mu$M (blue). The same for (c) \textit{cis}-to-\textit{trans} switching of S-pep(6,13) with 400~$\mu$M S-pep(6,13) and 200~$\mu$M S-protein, and (d) \textit{trans}-to-\textit{cis} switching with S-pep(6,13) concentration 400~$\mu$M, and S-protein concentration 100~$\mu$M (red) as well as 400~$\mu$M (blue). The data in panels (a,b,d) are filtered with a Gaussian function with a width of 2.5~ms, those in panel (c) with a width of 20~$\mu$s. In panels (a,b,d), the lines are exponential fits with the time constants indicated. In panel (c), the red line shows a stretched-exponential fit ($\tau$=0.25~ms, stretching factor $\beta$=0.5), and the blue line a single-exponential fit ($\tau$=0.29~ms). The residuals of these fits are shown as thin colored lines around the zero-line.} \label{FigExp}
\end{center}
\end{figure*}
\subsection{S-pep(6,10)}
To set the stage, we start with S-pep(6,10), whose binding affinity changes by a large factor (20-fold) upon \textit{cis}-to-\textit{trans} switching, but with specific binding in both states (see Table~\ref{tab:Kd}). Fig.~\ref{FigExp}a shows the transient fluorescence measurement for \textit{cis}-to-\textit{trans} switching. Fluorescence increases, as expected, since the tyrosines are quenched less upon unbinding of the S-peptide. An exponential fit to the data reveals a time constant of 74~ms. Upon \textit{trans}-to-\textit{cis} switching (Fig.~\ref{FigExp}b), the sign of the signal inverts, reflecting stronger quenching of the tyrosine fluorescence upon ligand binding. The kinetics is concentration dependent, with time constants of 42~ms for 100~$\mu$M S-protein (red) and 16~ms for 400~$\mu$M S-protein (blue; the S-peptide concentration was 200~$\mu$M in both cases), as anticipated for a bimolecular reaction. The ratio of time constants closely mirrors that of the S-protein concentrations. Furthermore, the amplitude of the 400~$\mu$M data (blue) is smaller, since we plot the relative fluorescence change. In absolute numbers, the amount of isomerized S-peptide is the same in both experiments (about 5\% of the S-peptide, or 10~$\mu$M); in a relative sense, this is less at the larger S-protein concentration.
In its simplest form, ligand binding/unbinding is discussed in terms of the following chemical equilibrium:
\begin{equation}
PL \xrightleftharpoons[k_{on}]{k_{off}} P+L\label{eqEq}
\end{equation}
where $PL$ is the ligand-bound state, and $P$ and $L$ denote protein and ligand, respectively.
The dissociation constant $K_d$ is related to the rate constants $k_{on}$ and $k_{off}$ by:
\begin{equation}
K_d=\frac{k_{off}}{k_{on}}. \label{eqKd}
\end{equation}
The equilibrium experiments of Ref.~\onlinecite{jankovic2019} could only determine the dissociation constant $K_d$, while the present kinetic experiments can also determine the on- and off-rate constants. Trends can be read off directly from Fig.~\ref{FigExp}a,b, but more quantitative modelling is needed to extract these rate constants, taking into account that both states of the photoswitch bind to the protein to a certain extent and that the \textit{cis} and \textit{trans}-states of the S-peptide, both of which exist after photoswitching, compete for binding (see Fig.~\ref{FigSim}; for details of the model, see Methods). The resulting kinetics are not strictly exponential owing to the coupled and nonlinear character of the corresponding differential equations; however, they deviate from exponential by less than what the experimental noise would allow one to see (Fig.~\ref{FigSim}). We therefore also fit the simulated data to exponential functions, and compare the extracted time constants with the experimental ones. We obtain good qualitative agreement for both \textit{cis}-to-\textit{trans} and \textit{trans}-to-\textit{cis} switching when assuming on-rate constants $k_{on,cis}=3\cdot10^5$~M$^{-1}$s$^{-1}$ and $k_{on,trans}=1\cdot10^5$~M$^{-1}$s$^{-1}$, see Table~\ref{tab:Kd}.
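With Eq.~\ref{eqKd}, the off-rate constants of S-pep(6,10) follow directly from these on-rate constants and the measured dissociation constants:
\begin{align*}
k_{off,cis} &= K_{d,cis}\,k_{on,cis} = 2.3\times10^{-6}~\mathrm{M} \times 3\times10^{5}~\mathrm{M^{-1}s^{-1}} \approx 0.7~\mathrm{s^{-1}},\\
k_{off,trans} &= K_{d,trans}\,k_{on,trans} = 4.7\times10^{-5}~\mathrm{M} \times 1\times10^{5}~\mathrm{M^{-1}s^{-1}} = 4.7~\mathrm{s^{-1}}.
\end{align*}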
\begin{table}[b]
\centering
\caption{Thermodynamic and kinetic constants for the two samples considered in this study.}
\label{tab:Kd}
\begin{tabular}{l | c| c |c |c }
& $K_{d,cis}$ & $k_{on,cis}$ & $K_{d,trans}$ & $k_{on,trans}$ \\\hline
S-pep(6,10) &2.3~$\mu$M\footnotemark[1] & $3\cdot10^5$~M$^{-1}$s$^{-1}$ & 47~$\mu$M\footnotemark[1] & $1\cdot10^5$~M$^{-1}$s$^{-1}$ \\
S-pep(6,13) &70~$\mu$M\footnotemark[1] & $9\cdot10^4$~M$^{-1}$s$^{-1}$ & 40~mM\footnotemark[2] & $9\cdot10^4$~M$^{-1}$s$^{-1}$ \footnotemark[3] \\
\end{tabular}
\footnotetext[1]{taken from Ref.~\onlinecite{jankovic2019}}
\footnotetext[2]{nominal dissociation constant; see text for discussion.}
\footnotetext[3]{not measured, but assumed to be the same as $k_{on,cis}$; see text for discussion.}
\end{table}
\begin{figure*}[t]
\centering
\begin{center}
\includegraphics[width=.8\textwidth]{Fig5}
\caption{Model calculations. Change of free S-protein upon (a) \textit{cis}-to-\textit{trans} and (b) \textit{trans}-to-\textit{cis} switching of S-pep(6,10), and upon (c) \textit{cis}-to-\textit{trans} and (d) \textit{trans}-to-\textit{cis} switching of S-pep(6,13), as deduced from the model described in Methods. The concentrations are the same as in the experiment (Fig.~\ref{FigExp}). The points are the result from the model, the solid lines exponential fits to it, with the fitted time constants indicated. }\label{FigSim}
\end{center}
\end{figure*}
The extracted on-rate constants are in the same range as those observed in Refs.~\onlinecite{goldberg1999,bachmann2011} for a series of mutants of the RNase~S system without photoswitch. Two factors determine binding rate constants. The first is the diffusion-controlled formation of an encounter complex, taking into account that the two partners need to approach each other with a specific orientation, which results in typical on-rate constants in the range of $10^5$~M$^{-1}$s$^{-1}$ to $10^6$~M$^{-1}$s$^{-1}$.\cite{schreiber2009,rogers2013} The second factor is the fraction of molecules that leave the encounter complex before a stable protein-ligand complex is formed (in an induced-fit scenario). Since the diffusion-controlled step is often rate-limiting, $k_{on}$ typically varies only within a small range, in the case of the mutants of the RNase~S system between $1.6\cdot10^5$~M$^{-1}$s$^{-1}$ and $5.8\cdot10^5$~M$^{-1}$s$^{-1}$, see Ref.~\onlinecite{bachmann2011}. In our case, the \textit{cis}-state is more tightly bound, and correspondingly $k_{on,cis}$ is $\approx$3 times faster than $k_{on,trans}$; however, this factor of 3 is small compared with the overall factor of 20 by which the binding affinities differ.
\subsection{S-pep(6,13)}
With that, we turn to S-pep(6,13), which is characterized as an on-off system without any specific binding detected in the \textit{trans}-state.\cite{jankovic2019} At first sight, the results in Fig.~\ref{FigExp}c,d look similar to those of S-pep(6,10) (Fig.~\ref{FigExp}a,b); however, binding and unbinding happen on completely different timescales (note the different time ranges in Figs.~\ref{FigExp}c,d). Binding upon \textit{trans}-to-\textit{cis} switching, which is again concentration dependent, reveals time constants of 60~ms for 100~$\mu$M S-protein concentration and 40~ms for 400~$\mu$M S-protein concentration, with the S-peptide concentration being 400~$\mu$M in both cases (Fig.~\ref{FigExp}d). These timescales are similar to those of S-pep(6,10). However, the ratio of time constants deviates significantly from that of the S-protein concentrations, for two reasons. First, the observed rate constant is the sum of the effective (i.e., concentration-dependent) on-rate and the off-rate constant, and the off-rate contributes relatively more at lower protein concentrations. This effect is accounted for in the model of Fig.~\ref{FigSim}d, where a ratio of time constants smaller than 4 is indeed observed. In addition, by comparing CD with fluorescence-quenching data, we concluded in Ref.~\onlinecite{jankovic2019} that some amount of non-specific binding also occurs in the \textit{trans}-state of S-pep(6,13). Binding of this fraction of molecules will be a unimolecular process, and thus not concentration dependent.
Unbinding upon \textit{cis}-to-\textit{trans} switching is faster by two orders of magnitude, see Fig.~\ref{FigExp}c. Furthermore, the data reveal stretched-exponential kinetics $\exp[-(t/\tau)^\beta]$ with time constant $\tau$=0.25~ms and a significant stretching factor $\beta$=0.5. From the residuals shown in Fig.~\ref{FigExp}c, it is evident that the stretched-exponential fit is better than a single-exponential fit with time constant 0.29~ms (Fig.~\ref{FigExp}c, blue line), which we consider an average time constant.
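A stretched-exponential fit of this kind can be sketched as follows; the synthetic data, noise level, and starting values are illustrative assumptions of ours, and only the target parameters ($\tau$=0.25~ms, $\beta$=0.5) are taken from the measurement.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, a, tau, beta):
    """Kohlrausch (stretched-exponential) decay a * exp[-(t/tau)^beta]."""
    return a * np.exp(-(t / tau) ** beta)

# Synthetic decay with the parameters reported for S-pep(6,13) unbinding
# (tau = 0.25 ms, beta = 0.5) plus a little Gaussian noise.
t = np.linspace(1e-6, 2e-3, 400)            # time axis in seconds
rng = np.random.default_rng(1)
data = stretched_exp(t, 1.0, 0.25e-3, 0.5) + 1e-3 * rng.standard_normal(t.size)

# Fit amplitude, time constant and stretching factor simultaneously.
popt, _ = curve_fit(stretched_exp, t, data, p0=(1.0, 1e-4, 0.7))
a_fit, tau_fit, beta_fit = popt
```

Comparing the residuals of such a fit with those of a single exponential (as in Fig.~\ref{FigExp}c) is what discriminates the two models.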
The binding affinity of the \textit{trans}-state could not be measured in Ref.~\onlinecite{jankovic2019}, since it is too small. For a modelling similar to the one used for S-pep(6,10), we therefore had to assume $k_{on,cis}=k_{on,trans}$, revealing $k_{on}\approx9\cdot10^4~\rm{M}^{-1}\rm{s}^{-1}$ and a nominal dissociation constant of $K_d=40$~mM, see Figs.~\ref{FigSim}c,d. The modelled unbinding kinetics of Fig.~\ref{FigSim}c is almost perfectly exponential, in contrast to the experimental data of Fig.~\ref{FigExp}c. Since binding essentially does not occur, the $k_{on}$ term is negligible, and the reaction is unimolecular to a very good approximation, very different from typical binding studies.\cite{goldberg1999,bachmann2011} This is due to the very special molecular system, which has been designed not to bind upon switching into the \textit{trans}-state. The stretched-exponential character of the experimental data thus must be due to an effect beyond Eq.~\ref{eqEqcouled}.
\section{Discussion and Conclusion}
A dissociation constant of $K_d=40$~mM is no longer a meaningful number; for example, with the molecular weight of protein plus peptide (14~kDa), one would conclude that a 50\% binding equilibrium is reached only at a protein density of 560~g/l, i.e., in a sample that contains about as much protein as water. This density is beyond the regime in which Eq.~\ref{eqEq} is valid. In other words, the peptide does not bind at all to the protein in the \textit{trans}-state, and unbinding is a barrier-less process. It is important to stress that this conclusion originates from the averaged time constant of unbinding (0.29~ms), irrespective of whether the kinetics is exponential or stretched-exponential. This time constant is two to five orders of magnitude faster than those of the mutants of the RNase~S system studied in Ref.~\onlinecite{bachmann2011}. One assumption had to be made to estimate the binding affinity in the \textit{trans}-state, namely that $k_{on,trans}=k_{on,cis}$, since only $k_{on,cis}$ is determined by the experimental data of Fig.~\ref{FigExp}d. This assumption is justified by the fact that on-rate constants vary only within a relatively small range. Taking Ref.~\onlinecite{bachmann2011} as a basis, with $k_{on}$ varying by a factor of 3-4 across a wide variety of S-peptide mutants, gives a sense of the uncertainty in the estimate of $K_d=40$~mM. Even considering this uncertainty, one would conclude that the ligand essentially does not bind in the \textit{trans}-state of the photoswitch.
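The quoted density follows directly from the nominal dissociation constant: half of the ligand is bound when the free-protein concentration equals $K_d$, which for a 14~kDa protein-peptide pair corresponds to a mass density of
\begin{equation*}
\rho \approx K_d \cdot M = 0.040~\mathrm{mol\,l^{-1}} \times 14\,000~\mathrm{g\,mol^{-1}} = 560~\mathrm{g\,l^{-1}}.
\end{equation*}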
To illustrate the concept of barrier-less unbinding, Fig.~\ref{FigFreeEnergy}a shows a very simple model for the free energy of ligand binding as a function of the distance $R$ of the ligand from the protein.\cite{bicout00} The protein-ligand complex is stabilized by a binding energy $V_b$. Beyond the interaction range $R_0$ of the protein, the free energy decreases due to an entropic contribution, which accounts for the larger space available to the ligand with increasing distance. The binding energy $V_b$ determines an energetic barrier for unbinding, while the barrier for binding is essentially of entropic nature. When the binding energy vanishes, $V_b=0$, unbinding is barrier-less, see Fig.~\ref{FigFreeEnergy}b. Upon \textit{cis}-to-\textit{trans} isomerization, we switch from the free energy of Fig.~\ref{FigFreeEnergy}a to that of Fig.~\ref{FigFreeEnergy}b. The ensemble, which had been equilibrated on the free-energy surface of Fig.~\ref{FigFreeEnergy}a, all of a sudden finds itself in a non-equilibrium situation and starts to evolve on the free-energy surface of Fig.~\ref{FigFreeEnergy}b.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=.5\textwidth]{Fig6}
\caption{Free energy model: (a) Free energy of the ligand as a function of the distance $R$ of the ligand from the protein, where $R_0$ is the interaction range of the protein (in essence its size), $V_b$ the binding energy, and $S^\#$ an activation entropy. Panel (b) shows the same for barrier-less unbinding with $V_b=0$. The two scenarios resemble the situations in the \textit{cis}- and \textit{trans}-states of the photoswitch.}\label{FigFreeEnergy}
\end{center}
\end{figure}
In the realm of femtochemistry,\cite{zewail00} it is important to distinguish ``kinetics'' from ``dynamics''. The on- and off-rate constants in Eq.~\ref{eqEq} determine the probability of ligand binding and unbinding per unit time, and completely mask the complexity of the process. This approach is valid when the barriers are high enough so that their crossing becomes rate-limiting. In that limit, one can describe the kinetics by single numbers, $k_{on}$ and $k_{off}$. On the other hand, it is clear from MD simulations that ligand binding or unbinding, when looked at on an atomistic level, is a very complex and very heterogeneous process, with different pathways consisting of many small steps.\cite{Blochliger2015,Luitz2017,robustelli2020} That is what we call the ``intrinsic dynamics'' of the process. When the unbinding barrier in S-pep(6,13) is removed, we catch a glimpse of this intrinsic dynamics. The stretched-exponential function is a commonly chosen model to account for a distribution of timescales, with the stretching factor $\beta$ determining the width of that distribution.\cite{Johnston2006} In the present case, a distribution of timescales could result from the final unbinding of the S-peptide from various unspecific binding sites on the protein surface. Molecular dynamics (MD) simulations would help to provide more microscopic insights into the various pathways that give rise to the distribution of timescales we observe.
The situation strongly resembles that of ``downhill'' protein folding. When the folding barrier of a protein is removed, e.g., by mutations, one observes the intrinsic dynamics of the protein, sometimes also called the ``speed limit'' of protein folding.\cite{Yang03,kubelka04} At the same time, the dynamics becomes markedly non-exponential,\cite{sabelko99} just like the barrier-less unbinding in Fig.~\ref{FigExp}c. As in the protein folding problem,\cite{Ma2005} it is expected that other probes, e.g., transient IR spectroscopy (currently ongoing in our lab), will reveal different dynamical components of the process. As a word of caution, it should, however, be added that the implication of downhill folding for non-exponential kinetics has been questioned.\cite{hagen03}
One of the scenarios discussed in the context of ligand binding is that of an induced fit, which is described by the following reaction scheme:
\begin{equation}
P+L \xrightleftharpoons[k_{off}]{k_{on}} PL^\# \xrightleftharpoons[k'_{r}]{k_{r}} PL \label{eqEqind}
\end{equation}
Here, $PL^\#$ is a high-energy bound state that relaxes into $PL$ upon ``fitting'' the ligand into the binding site of the protein. Since the binding rate constant is concentration dependent, the diffusive step will not be rate-limiting at high enough concentrations, while the second step remains concentration independent. Observing an effective binding rate constant that saturates with increasing concentration is therefore considered an indicator of an induced fit.\cite{vogt2012, gianni2014, paul2016} However, if the rate constant $k_r$ of the induced fit is too fast, this approach will miss it, since the concentrations cannot be increased sufficiently. For typical on-rate constants of $10^5-10^6$~M$^{-1}$s$^{-1}$,\cite{schreiber2009} and typical maximal protein concentrations of 1~mM, that regime is already reached for $k_r>10^2-10^3$~s$^{-1}$. In addition to this inherent limitation, the time resolution of typical stopped-flow instruments is in the range of 1~ms.\cite{gianni2014}
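The crossover follows from comparing the pseudo-first-order binding rate at the maximal concentration with $k_r$:
\begin{equation*}
k_{on}\,c_{max} \approx \left(10^{5}\text{--}10^{6}~\mathrm{M^{-1}s^{-1}}\right) \times 10^{-3}~\mathrm{M} = 10^{2}\text{--}10^{3}~\mathrm{s^{-1}},
\end{equation*}
so an induced-fit step with $k_r$ above this range can no longer be resolved by varying the concentration.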
The experiment we perform here is closely related, as it can also be described by Eq.~\ref{eqEqind} (where $PL^\#$ would be the transition state in Fig.~\ref{FigFreeEnergy}), except that we consider the unbinding direction. The forward and backward rate constants $k_r$ and $k'_r$ are connected by the equilibrium constant of the second step. In the barrier-less case of Fig.~\ref{FigFreeEnergy}b, that equilibrium constant is 1, and $k_r=k'_r$. We observe $k'_r=4\cdot10^3$~s$^{-1}$, which is fast in light of the discussion above, but slow in terms of the structural rearrangements needed to fit the S-peptide into the binding site of the S-protein. For comparison, the folding of small $\alpha$-helices in solution occurs on a timescale that is typically 3-4 orders of magnitude faster ($10^6-10^7$~s$^{-1}$),\cite{kubelka04} even under constraints.\cite{Ihalainen2008}
In conclusion, due to the slow diffusive step inherent to any binding experiment, many induced-fit scenarios might be missed in such experiments. In combination with a fast trigger, much faster structural processes can be observed in the unbinding direction, revealing the intrinsic dynamics of the ligand during the unbinding event. That ``speed limit'' is in the range of a few 100~$\mu$s for the RNase~S system.\\
\noindent\textbf{Acknowledgement:} We thank Claudio Zanobini and Karl Hamm for technical contributions at an early stage of this project. We also thank Rolf Pfister for the synthesis of the photoswitch (BSBCA), the Functional Genomics Center
Zurich, especially Serge Chesnov, for his work on the mass spectrometry. The work has been supported by the Swiss National Science Foundation (SNF) through the NCCR MUST and Grant 200020B\_188694/1.\\
\noindent\textbf{Accession Codes:} Ribonuclease A (P61823 (RNAS1\_BOVIN)) and Subtilisin (P00782 (SUBT\_BACAM))\\
\makeatletter
\def\@biblabel#1{(#1)}
\makeatother
\def\bibsection{\section*{}}
\noindent\textbf{References:}
\vspace{-1.5cm}
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{54}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Vogt and Di~Cera(2012)Vogt, and Di~Cera]{vogt2012}
Vogt,~A.~D., and Di~Cera,~E. (2012) {Conformational selection or induced fit? A
critical appraisal of the kinetic mechanism}. \emph{Biochemistry} \emph{51},
5894--5902\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gianni \latin{et~al.}(2014)Gianni, Dogan, and Jemth]{gianni2014}
Gianni,~S., Dogan,~J., and Jemth,~P. (2014) {Distinguishing induced fit from
conformational selection}. \emph{Biophys. Chem.} \emph{189}, 33--39\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Hammes \latin{et~al.}(2009)Hammes, Chang, and Oas]{hammes2009}
Hammes,~G.~G., Chang,~Y.-C., and Oas,~T.~G. (2009) {Conformational selection or
induced fit: a flux description of reaction mechanism}. \emph{Proc. Natl.
Acad. Sci. USA} \emph{106}, 13737--13741\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Paul and Weikl(2016)Paul, and Weikl]{paul2016}
Paul,~F., and Weikl,~T.~R. (2016) {How to distinguish conformational selection
and induced fit based on chemical relaxation rates}. \emph{PLoS Comput.
Biol.} \emph{12}, e1005067\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Wright and Dyson(2009)Wright, and Dyson]{wright2009}
Wright,~P.~E., and Dyson,~H.~J. (2009) {Linking folding and binding}.
\emph{Curr. Opin. Struct. Biol.} \emph{19}, 31--38\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Chakrabarti \latin{et~al.}(2016)Chakrabarti, Agafonov, Pontiggia,
Otten, Higgins, Schertler, Oprian, and Kern]{chakrabarti2016l}
Chakrabarti,~K.~S., Agafonov,~R.~V., Pontiggia,~F., Otten,~R., Higgins,~M.~K.,
Schertler,~G.~F., Oprian,~D.~D., and Kern,~D. (2016) {Conformational
selection in a protein-protein interaction revealed by dynamic pathway
analysis}. \emph{Cell Rep.} \emph{14}, 32--42\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sugase \latin{et~al.}(2007)Sugase, Dyson, and Wright]{sugase2007}
Sugase,~K., Dyson,~H.~J., and Wright,~P.~E. (2007) {Mechanism of coupled
folding and binding of an intrinsically disordered protein}. \emph{Nature}
\emph{447}, 1021--1025\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shammas \latin{et~al.}(2012)Shammas, Rogers, Hill, and
Clarke]{shammas2012}
Shammas,~S., Rogers,~J., Hill,~S., and Clarke,~J. (2012) {Slow, reversible,
coupled folding and binding of the spectrin tetramerization domain}.
\emph{Biophys. J.} \emph{103}, 2203--2214\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rogers \latin{et~al.}(2013)Rogers, Steward, and Clarke]{rogers2013}
Rogers,~J.~M., Steward,~A., and Clarke,~J. (2013) {Folding and binding of an
intrinsically disordered protein: fast, but not ‘diffusion-limited’}.
\emph{J. Am. Chem. Soc.} \emph{135}, 1415--1422\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Rogers \latin{et~al.}(2014)Rogers, Oleinikovas, Shammas, Wong,
De~Sancho, Baker, and Clarke]{rogers2014}
Rogers,~J.~M., Oleinikovas,~V., Shammas,~S.~L., Wong,~C.~T., De~Sancho,~D.,
Baker,~C.~M., and Clarke,~J. (2014) {Interplay between partner and ligand
facilitates the folding and binding of an intrinsically disordered protein}.
\emph{Proc. Natl. Acad. Sci. USA} \emph{111}, 15420--15425\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shammas \latin{et~al.}(2016)Shammas, Crabtree, Dahal, Wicky, and
Clarke]{shammas2016}
Shammas,~S.~L., Crabtree,~M.~D., Dahal,~L., Wicky,~B.~I., and Clarke,~J. (2016)
{Insights into coupled folding and binding mechanisms from kinetic studies}.
\emph{J. Biol. Chem.} \emph{291}, 6689--6695\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Gianni \latin{et~al.}(2016)Gianni, Dogan, and Jemth]{gianni2016}
Gianni,~S., Dogan,~J., and Jemth,~P. (2016) {Coupled binding and folding of
intrinsically disordered proteins: what can we learn from kinetics?}
\emph{Curr. Opin. Struct. Biol.} \emph{36}, 18--24\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Dosnon \latin{et~al.}(2015)Dosnon, Bonetti, Morrone, Erales,
di~Silvio, Longhi, and Gianni]{dosnon2015}
Dosnon,~M., Bonetti,~D., Morrone,~A., Erales,~J., di~Silvio,~E., Longhi,~S.,
and Gianni,~S. (2015) {Demonstration of a folding after binding mechanism in
the recognition between the measles virus NTAIL and X domains}. \emph{ACS
Chem. Biol.} \emph{10}, 795--802\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Narayanan \latin{et~al.}(2008)Narayanan, Ganesh, Edison, and
Hagen]{narayanan2008}
Narayanan,~R., Ganesh,~O.~K., Edison,~A.~S., and Hagen,~S.~J. (2008) {Kinetics
of folding and binding of an intrinsically disordered protein: the inhibitor
of yeast aspartic proteinase YPrA}. \emph{J. Am. Chem. Soc.} \emph{130},
11477--11485\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sturzenegger \latin{et~al.}(2018)Sturzenegger, Zosel, Holmstrom,
Buholzer, Makarov, Nettels, and Schuler]{sturzenegger2018}
Sturzenegger,~F., Zosel,~F., Holmstrom,~E.~D., Buholzer,~K.~J., Makarov,~D.~E.,
Nettels,~D., and Schuler,~B. (2018) {Transition path times of coupled folding
and binding reveal the formation of an encounter complex}. \emph{Nat.
Commun.} \emph{9}, 1--11\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Robustelli \latin{et~al.}(2020)Robustelli, Piana, Shaw, and
Shaw]{robustelli2020}
Robustelli,~P., Piana,~S., Shaw,~D.~E., and Shaw,~D.~E. (2020) {Mechanism of
Coupled Folding-upon-Binding of an Intrinsically Disordered Protein}.
\emph{J. Am. Chem. Soc.} \emph{142}, 11092--11101\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Beharry and Woolley(2011)Beharry, and Woolley]{Beharry2011}
Beharry,~A.~A., and Woolley,~G.~A. (2011) {Azobenzene photoswitches for
biomolecules}. \emph{Chem. Soc. Rev.} \emph{40}, 4422--4437\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Sp{\"{o}}rlein \latin{et~al.}(2002)Sp{\"{o}}rlein, Carstens, Satzger,
Renner, Behrendt, Moroder, Tavan, Zinth, and Wachtveitl]{spo02}
Sp{\"{o}}rlein,~S., Carstens,~H., Satzger,~H., Renner,~C., Behrendt,~R.,
Moroder,~L., Tavan,~P., Zinth,~W., and Wachtveitl,~J. (2002) {Ultrafast
spectroscopy reveals subnanosecond peptide conformational dynamics and
validates molecular dynamics simulation}. \emph{Proc. Natl. Acad. Sci. USA}
\emph{99}, 7998--8002\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Introduction}
Spintronics explores the spin properties of electrons independently of the charge transport \cite{Wolf,Awschalom,Zutic}. The main physical object that describes the transport of spin is the so-called spin current. The understanding of its behavior is of major relevance for the development of new technologies. Nevertheless, the non-conservation of the spin current remains an open problem \cite{Qing,Vernes:2007,Sobreiro:2011ie}. The purpose of this work is to understand the fundamental nature of this non-conservation and to provide the conditions for the spin current to be conserved.
In an enlightening work by A.~Vernes, L.~Gy\"orffy and P.~Weinberger \cite{Vernes:2007}, the broken continuity equation for spin current was obtained from the non-relativistic limit of the time evolution of the Bargmann-Wigner operator \cite{Bargmann:1948ck,Itzykson:1980rh} within the Dirac Hamiltonian, resulting in
\begin{equation}
\frac{d\overrightarrow{s}}{dt}+\partial_i\overrightarrow{j}_i=\frac{e}{m}\overrightarrow{s}\times\overrightarrow{B}\;.\label{eq1}
\end{equation}
In expression \eqref{eq1}, $\overrightarrow{s}=\phi^\dagger\overrightarrow{\sigma}\phi$ is the spin density, $\overrightarrow{j}_i$ is the spin current, $\phi$ is the non-relativistic electron wave function and the \emph{rhs} is the usual microscopic Landau-Lifshitz torque. Equation \eqref{eq1} can be derived from the study of the time evolution of $\overrightarrow{s}$ within the Pauli equation for the electron \cite{Stiles}. Outside the realm of standard electromagnetism, equation \eqref{eq1} can also be obtained from the flavor current of weak interactions, \emph{i.e.}, the $SU(2)$ gauge current \cite{Dartora:2008ccc, Dartora:2010zz}. Thus, the spin current flow can actually be cast into a continuity equation, provided standard electromagnetism, based on the $U(1)$ gauge symmetry, is substituted by its simplest non-Abelian extension. It is fair to mention that Equation \eqref{eq1} first appeared in \cite{Sokolov:1986nk}, although no reference to spin currents was made there. Finally, in \cite{Sobreiro:2011ie}, a formal study of the origin of Equation \eqref{eq1} was performed by employing field theory techniques. In fact, the $U(1)$ gauge theory for electromagnetism in vacuum \cite{Itzykson:1980rh,Barut} was considered. In particular, the relativistic generalization of Equation \eqref{eq1} was obtained from the combination of the two most important symmetries of electromagnetism, namely, Lorentz and gauge symmetries. The relevant sector of the Lorentz symmetry is the restricted subgroup $L(1,3)\subset SO(1,3)$, called the little group, whose generators can be combined into a Casimir operator associated with spin eigenvalues. The same technique is applied here to electromagnetism in a generic material medium. Moreover, a spin-current analogue for the photon \cite{Sobreiro:2011ie} is also defined and discussed.
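To make the origin of the two terms in \eqref{eq1} transparent, we sketch how it emerges from the Pauli equation mentioned above (a sketch only, written in units in which the Zeeman coupling reproduces the torque of \eqref{eq1}),
\begin{equation}
i\hbar\frac{\partial\phi}{\partial t}=\left[\frac{1}{2m}\left(\overrightarrow{p}-e\overrightarrow{A}\right)^2+e\varphi-\frac{e\hbar}{2m}\overrightarrow{\sigma}\cdot\overrightarrow{B}\right]\phi\;.
\end{equation}
The kinetic term generates the divergence of the spin current $\overrightarrow{j}_i$, while the Zeeman term, through the commutator $\left[\sigma^i,\overrightarrow{\sigma}\cdot\overrightarrow{B}\right]=2i\epsilon^{ijk}B^j\sigma^k$, produces precisely the torque $\frac{e}{m}\overrightarrow{s}\times\overrightarrow{B}$ in the \emph{rhs} of \eqref{eq1}.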
The starting point of this work is the Minkowski-Maxwell-Dirac action \cite{Barut,Post}, which describes electrodynamics in a general material medium with electrons coupled to the electromagnetic field. The currents associated with gauge symmetry, chiral symmetry, and little group symmetry are easily obtained. The latter is commonly known as the Bargmann-Wigner current, whose generators are precisely those associated with the values of spin, in the same way that the generators of the subgroup of translations are associated with the masses of particles. Although conserved, the Bargmann-Wigner current is not gauge invariant and therefore can hardly be associated with physical observables. Thus, by invoking the gauge principle for electrodynamics, we restore the gauge invariance of the Bargmann-Wigner current. The price that is paid is that the spin current is no longer conserved. In this approach, the electromagnetic field is dynamical and a similar equation for the electromagnetic fields is obtained. On the other hand, the broken continuity equation for the electronic sector is the same as in the vacuum case \cite{Sobreiro:2011ie}, while the photonic equation changes due to the properties of the medium. Finally, we explore the conditions for those currents to be conserved.
The letter is organized as follows: In Sect.~2 we study the starting action, its relevant symmetries and the respective conserved currents. In Sect.~3 we construct the gauge invariant spin currents and obtain their respective broken continuity equations. The space and time decomposition of these currents and the conservation conditions are obtained in Sect.~4. Finally, our conclusions and a discussion are displayed in Sect.~5.
\section{Electrodynamics in general material media}
We start with the usual action for electrodynamics in material media \cite{Post}
\begin{equation}
S=\int{d^4x}\;\overline{\psi}\left(i\hbar c\gamma^\mu D_\mu-mc^2\right)\psi-\frac{1}{4}\int{d^4x}\;G^{\mu\nu}F_{\mu\nu}\;,\label{action}
\end{equation}
where the field $\psi$ is a spinor field describing electron excitations and $\overline{\psi}$ its adjoint, $\overline{\psi}=\psi^\dagger\gamma^0$. The Clifford algebra $\left\{\gamma^\mu,\gamma^\nu\right\}=2\eta^{\mu\nu}$ allows the usage of Dirac representation for the $\gamma$-matrices and the metric tensor is defined with negative signature, $\eta=\mathrm{diag}(+1,-1,-1,-1)$. Useful extra quantities are $\gamma^5=\gamma_5=i\gamma^0\gamma^1\gamma^2\gamma^3$ and $\sigma^{\mu\nu}=\frac{i}{2}\left[\gamma^\mu,\gamma^\nu\right]\label{sigma}$. The derivative $D_\mu=\partial_\mu+i\frac{e}{\hbar c}A_\mu$ is the covariant derivative and the field strength is defined as $F_{\mu\nu}=\partial_\mu A_\nu-\partial_\nu A_\mu$, where $A_\mu$ is the electromagnetic potential. The tensor $G^{\mu\nu}$ is an auxiliary antisymmetric tensor,
\begin{eqnarray}
G^{\mu\nu}=\frac{1}{2}\chi^{\mu\nu\alpha\beta}F_{\alpha\beta}\;,\label{g}
\end{eqnarray}
where $\chi^{\mu\nu\alpha\beta}$ is the constitutive pseudo-tensor whose symmetry properties are $\chi^{\mu\nu\alpha\beta}=-\chi^{\nu\mu\alpha\beta}=-\chi^{\mu\nu\beta\alpha}=\chi^{\alpha\beta\mu\nu}$. The inverse relation of \eqref{g} is
\begin{eqnarray}
F_{\mu\nu}=\frac{1}{2}\overline{\chi}_{\mu\nu\alpha\beta}G^{\alpha\beta}\;,\label{f}
\end{eqnarray}
where $\overline{\chi}_{\mu\nu\alpha\beta}$ is determined by $\overline{\chi}_{\mu\nu\alpha\beta}{\chi}^{\alpha\beta\gamma\delta}=2(\delta^{\gamma}_\mu\delta^\delta_\nu-\delta^{\gamma}_\nu\delta^\delta_\mu)$.
The fermionic field equations obtained from \eqref{action} are
\begin{eqnarray}
\left(i\gamma^\mu D_\mu-\frac{mc}{\hbar}\right)\psi&=&0\;,\nonumber\\
\overline{\psi}\left(i\gamma^\mu\overleftarrow{D}_\mu^\dagger+\frac{mc}{\hbar}\right)&=&0\;.\label{eqf}
\end{eqnarray}
For the electromagnetic field, the equations are
\begin{equation}
\partial_\nu G^{\nu\mu}=j^\mu_f\;,\label{eqb}
\end{equation}
where $j_f^\mu=e\overline{\psi}\gamma^\mu\psi$ is the fermionic charge current (see next Section). Equations \eqref{eqb} are recognized as the inhomogeneous Maxwell equations. The homogeneous Maxwell equations, $\partial_\nu
\widetilde{F}^{\nu\mu}=0$, are obtained from the topological properties of the theory, where the dual field strength is defined as $\widetilde{F}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}F_{\alpha\beta}$.
Obviously, the field $G^{\mu\nu}$ is composed of the fields $\vec{D}$ and $\vec{H}$, while the field strength $F^{\mu\nu}$ has $\vec{E}$ and $\vec{B}$ as components. Thus,
\begin{equation}
G^{\mu\nu}\equiv\begin{pmatrix}
0&cD^1&cD^2&cD^3\\
-cD^1&0&-H^3&H^2\\
-cD^2&H^3&0&-H^1\\
-cD^3&-H^2&H^1&0
\end{pmatrix}\;,\;\;\;
F^{\mu\nu}\equiv\begin{pmatrix}
0&-E^1/c&-E^2/c&-E^3/c\\
E^1/c&0&-B^3&B^2\\
E^2/c&B^3&0&-B^1\\
E^3/c&-B^2&B^1&0
\end{pmatrix}\;.\label{fs}
\end{equation}
The relation \eqref{g} can be \emph{unwrapped} as \cite{Post}
\begin{equation}
\begin{pmatrix}
\vec{D}\\
\vec{H}
\end{pmatrix}=\begin{pmatrix}
-\epsilon& \gamma\\
\gamma^\dagger& \zeta
\end{pmatrix}\begin{pmatrix}
-\vec{E}\\
\vec{B}
\end{pmatrix}\;,\label{dh}
\end{equation}
where the 3-dimensional tensors $\epsilon$, $\gamma$ and $\zeta=\mu^{-1}$ are related to the electric permittivity, the natural optical activity and the magnetic permeability, respectively. In fact, from \eqref{g}, \eqref{fs} and \eqref{dh} we find
\begin{eqnarray}
D^i&=&\epsilon^{ik}E^k+\gamma^{ik}B^k\;,\nonumber\\
H^i&=&-{\gamma^*}^{ki}E^k+\zeta^{ik}B^k\;,
\end{eqnarray}
where
\begin{eqnarray}
\epsilon^{ik}&=&-\frac{1}{c^2}\chi^{0i0k}\;,\nonumber\\
\gamma^{ik}&=&\frac{1}{2c}\chi^{0ijl}\epsilon^{jlk}\;,\nonumber\\
{\gamma^*}^{ki}&=&\frac{1}{2c}\epsilon^{ijm}\chi^{jm0k}\;,\nonumber\\
\zeta^{ik}&=&\frac{1}{4}\epsilon^{ijl}\chi^{jlmn}\epsilon^{mnk}\;.
\end{eqnarray}
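As a simple illustration of these component formulas, for a homogeneous isotropic medium without optical activity one may take
\begin{equation}
\chi^{0i0k}=-c^2\epsilon\,\delta^{ik}\;,\qquad \chi^{ijkl}=\frac{1}{\mu}\left(\delta^{ik}\delta^{jl}-\delta^{il}\delta^{jk}\right)\;,\qquad \chi^{0ijk}=0\;,
\end{equation}
which yields $\epsilon^{ik}=\epsilon\,\delta^{ik}$, $\gamma^{ik}={\gamma^*}^{ik}=0$ and $\zeta^{ik}=\mu^{-1}\delta^{ik}$, \emph{i.e.}, the familiar constitutive relations $\vec{D}=\epsilon\vec{E}$ and $\vec{H}=\vec{B}/\mu$.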
The action \eqref{action} displays a remarkable set of symmetries, three of which are of great relevance in this work. We now discuss these symmetries. The first one is the $U(1)$ local symmetry, which is characterized by the gauge transformations
\begin{eqnarray}
\delta_g\psi&=&-i\frac{e}{\hbar c}\alpha\psi\;,\nonumber\\
\delta_g\overline{\psi}&=&i\frac{e}{\hbar c}\alpha\overline{\psi}\;,\nonumber\\
\delta_gA_\mu&=&\partial_\mu\alpha\;,\label{gt}
\end{eqnarray}
where $\alpha$ is a spacetime dependent parameter. The action \eqref{action} is invariant under gauge transformations and the associated conserved current is $j_f^\mu=e\overline{\psi}\gamma^\mu\psi$, which expresses the conservation of the electric charge. Another well-known fact in electrodynamics is the presence of the chiral symmetry for massless fermions. The chiral transformations
are defined as $\delta_c\psi=-i\frac{\alpha}{\hbar c}\gamma^5\psi$, $\delta_c\overline{\psi}=-i\frac{\alpha}{\hbar c}\overline{\psi}\gamma^5$ and $\delta_cA_\mu=0$, where $\alpha$ is now a constant parameter. The associated, non-conserved, chiral current is easily computed as
\begin{equation}
S^\mu=\overline{\psi}\gamma^\mu\gamma^5\psi\;,\label{chiralb}
\end{equation}
which leads to the broken continuity equation $\partial_\mu S^\mu=2i\frac{mc}{\hbar}\overline{\psi}\gamma^5\psi$.
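In fact, taking the divergence of \eqref{chiralb} and using the field equations \eqref{eqf}, the $A_\mu$ dependent terms cancel due to $\left\{\gamma^5,\gamma^\mu\right\}=0$ and only the mass term survives,
\begin{eqnarray}
\partial_\mu S^\mu&=&\left(\partial_\mu\overline{\psi}\gamma^\mu\right)\gamma^5\psi-\overline{\psi}\gamma^5\gamma^\mu\partial_\mu\psi\nonumber\\
&=&i\frac{mc}{\hbar}\overline{\psi}\gamma^5\psi+i\frac{mc}{\hbar}\overline{\psi}\gamma^5\psi=2i\frac{mc}{\hbar}\overline{\psi}\gamma^5\psi\;,
\end{eqnarray}
showing that, at the classical level, the chiral symmetry is broken exclusively by the mass term.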
The main set of transformations relevant to this work originates from the Poincar\'e group, $ISO(1,3)=SO(1,3)\ltimes\mathbb{R}^4$. The generators of the Poincar\'e group are denoted by $P_\mu=i\hbar\partial_\mu$ for translations and $J_{\mu\nu}=L_{\mu\nu}+I_{\mu\nu}$ for the Lorentz sector. Here, $L_{\mu\nu}$ is taken as the angular momentum part, $L_{\mu\nu}=i\hbar\left(x_\mu\partial_\nu-x_\nu\partial_\mu\right)/2$, while $I_{\mu\nu}$ is associated with the internal angular momentum. The Poincar\'e algebra can be described by:
\begin{eqnarray}
\left[P_\mu,P_\nu\right]&=&0\;,\nonumber\\
\left[J_{\mu\nu},J_{\alpha\beta}\right]&=&-\frac{i\hbar}{2}\left(\eta_{\mu\alpha}J_{\nu\beta}-\eta_{\mu\beta}J_{\nu\alpha}-\eta_{\nu\alpha}J_{\mu\beta}+ \eta_{\nu\beta}J_{\mu\alpha}\right)\;,\nonumber\\
\left[J_{\mu\nu},P_\alpha\right]&=&\frac{i\hbar}{2}\left(\eta_{\alpha\nu}P_\mu-\eta_{\alpha\mu}P_\nu\right)\;.\label{sl2c}
\end{eqnarray}
The so-called little group, $L(1,3)\subset SO(1,3)$, can be understood as the set of Lorentz transformations that leave the linear momentum invariant. This subsector of the Poincar\'e group is described by the generator
\begin{equation}
W^\mu=-\frac{1}{2\hbar}\epsilon^{\mu\nu\alpha\beta}J_{\nu\alpha}P_\beta=-\frac{1}{2\hbar}\epsilon^{\mu\nu\alpha\beta}I_{\nu\alpha}P_\beta\;,
\end{equation}
which is the Pauli-Lubanski vector. The generator $W^\mu$ has the following properties:
\begin{eqnarray}
\left[W^\mu,W^\nu\right]&=&\epsilon^{\mu\nu\alpha\beta}P_\alpha W_\beta\;,\nonumber\\
\left[J_{\mu\nu},W^\alpha\right]&=&-\frac{i\hbar}{2}\left(\delta^\alpha_\mu W_\nu-\delta^\alpha_\nu W_\mu\right)\;,\nonumber\\
\left[W^\mu,P_\alpha\right]&=&0\;,\label{little}
\end{eqnarray}
emphasizing the subgroup character of the little group as well as the fact that $L(1,3)$ is a stability subgroup with respect to the Poincar\'e group.
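It is precisely through this subgroup that spin enters the game: besides $P_\mu P^\mu$, whose eigenvalue fixes the mass, the square $W_\mu W^\mu$ is a Casimir operator of the algebra \eqref{sl2c}. On a massive one-particle state of spin $s$, $P_\mu P^\mu=m^2c^2$ while, up to a positive normalization factor fixed by the conventions adopted above,
\begin{equation}
W_\mu W^\mu\propto -m^2c^2\,s(s+1)\;,
\end{equation}
which is the precise sense in which the little group generators are associated with the spin eigenvalues mentioned in the Introduction.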
For fermions, it is easy to find \cite{Itzykson:1980rh}
$I_{\mu\nu}=\hbar\sigma_{\mu\nu}/2$, providing
\begin{equation}
W^\mu_f=-\frac{1}{4}\epsilon^{\mu\nu\alpha\beta}\sigma_{\nu\alpha}P_\beta=\frac{i}{2}\gamma^5\sigma^{\mu\nu}P_\nu\;,\label{Pauli-Lubanski}
\end{equation}
where the index $f$ denotes its fermionic character. The little group transformations are
then
\begin{eqnarray}
\delta_l\psi&=&-i\frac{\omega_\mu}{\hbar}W^\mu_f\psi\;,\nonumber\\
\delta_l\overline{\psi}&=&-i\overline{\psi}\overleftarrow{W}^\mu_f\frac{\omega_\mu}{\hbar}\;,\label{little0}
\end{eqnarray}
with $\omega_\mu$ a set of constant real parameters. The related Noether current is a second-rank tensor field, the so-called Bargmann-Wigner tensor
\begin{equation}
T_f^{\mu\nu}=c\overline{\psi}\gamma^\mu W^\nu_f\psi\;.\label{bw0}
\end{equation}
Explicitly,
\begin{equation}
T_f^{\mu\nu}=-\frac{\hbar c}{2}\overline{\psi}\gamma^\mu\gamma^5\sigma^{\nu\alpha}\partial_\alpha\psi\;.\label{bw1}
\end{equation}
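Let us register, in passing, that the second equality in \eqref{Pauli-Lubanski} relies on the identity (valid with the conventions adopted here for $\gamma^5$ and for the Levi-Civita symbol)
\begin{equation}
\epsilon^{\mu\nu\alpha\beta}\sigma_{\nu\alpha}=-2i\gamma^5\sigma^{\mu\beta}\;,
\end{equation}
which can be checked directly from $\gamma^5=i\gamma^0\gamma^1\gamma^2\gamma^3$ and the Clifford algebra.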
For the electromagnetic field, the Pauli-Lubanski vector reads\footnote{It follows from the spin part of the Lorentz generator for a vector field,
$\sigma^{\mu\nu\alpha\beta}=\hbar(\eta^{\mu\alpha}\eta^{\nu\beta}-\eta^{\mu\beta}\eta^{\nu\alpha})/2$.}
\begin{equation}
W_b^{\mu\nu\alpha}=-\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}P_\beta\;,
\end{equation}
where the index $b$ characterizes its bosonic behavior. The little group transformation for $A_\mu$ is
\begin{equation}
\delta_lA_\mu=i\frac{\omega^\nu}{\hbar}W_{b\;\mu\nu\alpha}A^\alpha\;.\label{little1}
\end{equation}
Thus, for the vector field the corresponding Bargmann-Wigner current is
\begin{equation}
T_b^{\mu\nu}=\frac{1}{2}G^{\mu\alpha}\widetilde{F}_\alpha^{\phantom{\alpha}\nu}\;,\label{bw2}
\end{equation}
and the full Bargmann-Wigner conserved current is then
\begin{equation}
T^{\mu\nu}=T_f^{\mu\nu}+T_b^{\mu\nu}\;\;\bigg|\;\;\partial_\mu T^{\mu\nu}=0\;.\label{barg}
\end{equation}
\section{Gauge invariant currents}
It turns out that, in contrast to the gauge and chiral currents, $T^{\mu\nu}$ is not a gauge invariant quantity: $\delta_gT_f^{\mu\nu}=-i\frac{e}{\hbar}\overline{\psi}\gamma^\mu\left(W_f^\nu\alpha\right)\psi$. Thus, from the gauge principle, this current cannot be associated with a physical observable. Moreover, it is only the fermionic sector $T^{\mu\nu}_f$ that breaks the gauge symmetry. To circumvent this problem we generalize $T_f^{\mu\nu}$ to its simplest gauge invariant extension by replacing the ordinary derivative with the covariant one, \emph{i.e.}, the Pauli-Lubanski vector is replaced by
\begin{equation}
\mathcal{W}^\mu_f=-\frac{\hbar}{2}\gamma^5\sigma^{\mu\nu}D_\nu\;.
\end{equation}
The corresponding gauge invariant fermionic Bargmann-Wigner current is now
\begin{equation}
\mathcal{T}^{\mu\nu}_f=\frac{\hbar c}{2}\overline{\psi}\gamma^5\gamma^\mu\sigma^{\nu\alpha}D_\alpha\psi\;.\label{giT}
\end{equation}
Thus, since the electromagnetic sector is already gauge invariant, the full Bargmann-Wigner current $\mathcal{T}^{\mu\nu}=\mathcal{T}_f^{\mu\nu}+T_b^{\mu\nu}$ is now gauge invariant, $\delta_g\mathcal{T}^{\mu\nu}=0$.
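The invariance is immediate: under the gauge transformations \eqref{gt} the covariant derivative of $\psi$ transforms homogeneously,
\begin{equation}
\delta_g\left(D_\alpha\psi\right)=-i\frac{e}{\hbar c}\,\alpha\,D_\alpha\psi\;,
\end{equation}
so that the inhomogeneous term responsible for the non-invariance of $T_f^{\mu\nu}$ is now absent.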
If on the one hand we have gained gauge invariance, on the other hand
$\mathcal{T}^{\mu\nu}$ is no longer conserved. In fact, it can be shown that
\begin{equation}
\partial_\nu\mathcal{T}_f^{\nu\mu}=-\frac{e}{2}S_\nu
F^{\nu\mu}\;,\label{div1}
\end{equation}
where the field equations were used. For the bosonic sector it is easy to show that,
\begin{equation}
\partial_\nu T^{\nu\mu}_b=\frac{1}{2}j_{f\nu}\widetilde{F}^{\nu\mu}+\frac{1}{3}\widetilde{G}^{\mu\nu}\partial^\alpha F_{\alpha\nu}\;,\label{div2}
\end{equation}
where the field equations were used again. In equation \eqref{div2} we have used $\widetilde{G}^{\mu\nu}=\frac{1}{2}\epsilon^{\mu\nu\alpha\beta}G_{\alpha\beta}$.
Two remarks are in order: (i) equations \eqref{div1} and \eqref{div2} express the non-conservation of the gauge invariant
Bargmann-Wigner currents and hold separately, since each is obtained independently of the continuity equation \eqref{barg}; (ii) the non-relativistic limit of \eqref{div1} reduces to the usual spin current equation \cite{Vernes:2007,Sobreiro:2011ie}.
One of the main problems with spin currents is the fact that they are not conserved quantities, \emph{i.e.}, in a generic system, equations \eqref{div1} and \eqref{div2} hold. The conditions for these currents to be conserved are then
\begin{equation}
S_\nu F^{\nu\mu}=0\;,\label{cond1}
\end{equation}
for the electron spin-current and
\begin{equation}
\frac{3}{2}j_{f\nu}\widetilde{F}^{\nu\mu}+\widetilde{G}^{\mu\nu}\partial^\alpha F_{\alpha\nu}=0\;,\label{cond2}
\end{equation}
for the bosonic spin-current. We will explore this issue in more detail in the next Section.
\section{Conserved currents}
To study the spin currents and their conservation it is convenient to decompose them into their space and time sectors. For that we define \cite{Sobreiro:2011ie}
\begin{eqnarray}
\mathcal{T}_{f}^{00}&=&-\frac{i\hbar c}{2}\psi^{\dag}\Sigma^{i}D_{i}\psi=-\frac{mc}{2}\mathcal{T}\;,\nonumber\\
\mathcal{T}_{f}^{i0}&=&-\frac{i\hbar c}{2}\psi^{\dag}\alpha^{i}\Sigma^{j}D_{j}\psi=-\frac{m}{2}\mathcal{T}^{i}\;,\nonumber\\
\mathcal{T}_{f}^{0i}&=&\frac{mc^2}{2}\psi^{\dag}\left(\beta\Sigma^{i}+\frac{i\hbar}{mc}\gamma^{5}D^{i}\right)\psi=\frac{mc}{2}\mathcal{J}^{i}\;,\nonumber\\
\mathcal{T}_{f}^{ij}&=&\frac{mc^2}{2}\psi^{\dag}\alpha^{i}\left(\beta\Sigma^{j}+\frac{i\hbar }{mc}\gamma^{5}D^{j}\right)\psi=\frac{m}{2}\mathcal{J}^{ij}\;,
\label{1p}
\end{eqnarray}
and
\begin{eqnarray}
T_b^{00}&=&\frac{c}{2}\vec{D}\cdot\vec{B}=c\mathcal{M}\;,\nonumber\\
T_b^{i0}&=&-\frac{1}{2}\left(\vec{H}\times\vec{B}\right)^i=\mathcal{M}^i\;,\nonumber\\
T_b^{0i}&=&-\frac{1}{2}\left(\vec{D}\times\vec{E}\right)^i=c\mathcal{N}^i\;,\nonumber\\
T_b^{ij}&=&-\frac{c}{2}D^iB^j+\frac{1}{2c}E^iH^j-\frac{1}{2c}E^kH^k\delta^{ij}=\mathcal{N}^{ij}\;.
\label{2p}
\end{eqnarray}
Thus, the non-conservation laws \eqref{div1} and \eqref{div2} decompose as
\begin{eqnarray}
\frac{\partial\mathcal{T}}{\partial t}+\vec{\nabla}\cdot\vec{\mathcal{T}}&=&-\frac{e}{mc}\vec{S}\cdot\vec{E}\;,\nonumber\\
\frac{\partial\vec{\mathcal{J}}}{\partial t}+\vec{\nabla}\cdot\stackrel{\leftrightarrow}{\mathcal{J}}&=&\frac{e}{m}\left(\frac{1}{c}S_0\vec{E}+\vec{S}\times\vec{B}\right)\;,\label{spin1}
\end{eqnarray}
and
\begin{eqnarray}
\frac{\partial\mathcal{M}}{\partial t}+\vec{\nabla}\cdot\vec{\mathcal{M}}&=&-\frac{1}{2}\vec{j}\cdot\vec{B}-\frac{1}{3}\vec{H}\cdot\left(\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}-\vec{\nabla}\times\vec{B}\right)\;,\nonumber\\
\frac{\partial\vec{\mathcal{N}}}{\partial t}+\vec{\nabla}\cdot\stackrel{\leftrightarrow}{\mathcal{N}}&=&-\frac{1}{2}\left(c\rho\vec{B}-\frac{1}{c}\vec{j}\times\vec{E}\right)+\frac{1}{3c}(\vec{\nabla}\cdot\vec{E})\vec{H}-\frac{c}{3}\vec{D}\times\left(\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}-\vec{\nabla}\times\vec{B}\right)\;,\label{spin2}
\end{eqnarray}
respectively. The second of \eqref{spin1} has as its non-relativistic limit the usual equation for spin currents where $\vec{\mathcal{J}}$ is the (relativistic) spin density and $\stackrel{\leftrightarrow}{\mathcal{J}}$ is the (relativistic) spin current \cite{Vernes:2007}. The same interpretation can be used for the second of \eqref{spin2} where $\vec{\mathcal{N}}$ is the (bosonic) density and $\stackrel{\leftrightarrow}{\mathcal{N}}$ is the bosonic current.
The conservation conditions \eqref{cond1} and \eqref{cond2} are reduced\footnote{The condition \eqref{cond3} is derived from the second of \eqref{spin1}. The first of \eqref{spin1} induces the condition $\vec{S}\cdot\vec{E}=0$, which follows naturally from \eqref{cond3}.} to
\begin{equation}
\vec{E}=-\frac{c}{S_0}\vec{S}\times\vec{B}\;.\label{cond3}
\end{equation}
and
\begin{eqnarray}
\frac{3}{2}\vec{j}\cdot\vec{B}+\vec{H}\cdot\left(\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}-\vec{\nabla}\times\vec{B}\right)&=&0\;,\nonumber\\
-\frac{3}{2}\left(c^2\rho\vec{B}-\vec{j}\times\vec{E}\right)+(\vec{\nabla}\cdot\vec{E})\vec{H}-c^2\vec{D}\times\left(\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}-\vec{\nabla}\times\vec{B}\right)&=&0\;,\label{cond4}
\end{eqnarray}
The condition \eqref{cond3} is a relatively simple requirement: it imposes the conservation of the electronic spin current, independently of the medium. Conditions \eqref{cond4}, on the other hand, are much more complicated and should account for the conservation of the photonic current. The second of \eqref{cond4} is actually the one that matters for the bosonic current conservation, and it can be solved for $\vec{H}$,
\begin{equation}
\vec{H}=\frac{1}{(\vec{\nabla}\cdot\vec{E})}\left[\frac{3}{2}\left(c^2\rho\vec{B}-\vec{j}\times\vec{E}\right)+\vec{D}\times\left(\frac{\partial\vec{E}}{\partial t}-c^2\vec{\nabla}\times\vec{B}\right)\right]\;,\label{h}
\end{equation}
which is a quite complicated relation. Let us analyze simpler cases.
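Before doing so, a quick numerical sanity check (a sketch with arbitrary field values, not tied to any specific medium) confirms the footnote above: whenever $\vec{E}$ obeys \eqref{cond3}, the relation $\vec{S}\cdot\vec{E}=0$ follows automatically, since $\vec{S}\cdot(\vec{S}\times\vec{B})$ is a scalar triple product with a repeated vector:

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.normal(size=3)            # arbitrary spin density vector
B = rng.normal(size=3)            # arbitrary magnetic field
S0, c = 2.0, 1.0                  # arbitrary S_0, units with c = 1

E = -(c / S0) * np.cross(S, B)    # condition (cond3)
SdotE = S @ E                     # scalar triple product S.(S x B), vanishes
```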
\subsection{Pure electronic case}
We consider the case in which the electromagnetic field is external. In that case, $T_b^{\mu\nu}=0$ and equation \eqref{div1} is the one that describes the non-conservation of the electronic spin current. It is clear that the condition for this current to be conserved is equation \eqref{cond1}, whose decomposition is given in expression \eqref{spin1}. The fact that \eqref{div1} holds for any kind of material medium (including the vacuum) is a direct consequence of the action \eqref{action}: in this action, the spinor field couples directly with the electromagnetic field, \emph{i.e.}, the action \eqref{action} does not describe the interaction between the medium and the electrons.
\subsection{Insulators}
Now, we consider a perfect insulator, \emph{i.e.}, there are no free electrons inside the material. Thus, $\mathcal{T}_f^{\mu\nu}=j^\mu_f=S^\mu=0$ and equation \eqref{div2} reduces to
\begin{equation}
\partial_\nu T^{\nu\mu}_b=\frac{1}{3}\widetilde{G}^{\mu\nu}\partial^\alpha F_{\alpha\nu}\;.\label{insul1}
\end{equation}
However, in the pure bosonic case, the conservation law \eqref{barg} is also valid and, because $T_f^{\mu\nu}=0$, it is also a gauge invariant equation. Thus, the vanishing of the \emph{rhs} of \eqref{insul1} is not a requirement but a physical necessity. There are then three possible situations: i.) The medium is linear. In this case the \emph{rhs} of \eqref{insul1} is proportional to $j_f^\mu$ (see \cite{Sobreiro:2011ie}), which vanishes by hypothesis, so \eqref{cond2} is trivially satisfied. ii.) The relation \eqref{cond2} is not satisfied. In this case, the deviation of the medium from isotropy/homogeneity is so strong that its constitutive relations cannot be described by $\chi$ as a Lorentz tensor: the material is so exotic that it induces a Lorentz breaking. This is clear from the fact that the current $T_b^{\mu\nu}$ describes a symmetry of the little group, \emph{i.e.}, a subgroup of the Lorentz group. iii.) The medium is nontrivial and \eqref{cond2} is not automatically satisfied; this condition then has to be imposed as a subsidiary condition.
We focus on the third situation (iii). In that case we must set $\rho=\vec{j}_f=0$ in expression \eqref{h}. Then, we find\footnote{The first of \eqref{cond4} is then automatically satisfied.}
\begin{equation}
\vec{H}=\frac{1}{(\vec{\nabla}\cdot\vec{E})}\vec{D}\times\left(\frac{\partial\vec{E}}{\partial t}-c^2\vec{\nabla}\times\vec{B}\right)\;,\label{h2}
\end{equation}
It is clear that this condition is satisfied if the medium is linear (case (i)).
Another interesting situation occurs if $\vec{\nabla}\cdot\vec{E}=0$. In that case equations \eqref{cond4} reduce to
\begin{eqnarray}
\vec{H}\cdot\left(\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}-\vec{\nabla}\times\vec{B}\right)&=&0\;,\nonumber\\
\vec{D}\times\left(\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}-\vec{\nabla}\times\vec{B}\right)&=&0\;,\label{cond5}
\end{eqnarray}
Thus, $\vec{H}\bot\vec{K}$ and $\vec{D}\parallel\vec{K}$, where $\vec{K}=\frac{1}{c^2}\frac{\partial\vec{E}}{\partial t}-\vec{\nabla}\times\vec{B}$. Then, $\vec{H}\bot\vec{D}$.
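A minimal numerical illustration of this geometry (hypothetical field values, in units with $c=1$): choosing $\vec{D}\parallel\vec{K}$ and $\vec{H}\perp\vec{K}$, as required by \eqref{cond5}, indeed gives $\vec{H}\perp\vec{D}$:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=3)            # K = (1/c^2) dE/dt - curl B, arbitrary values
D = 0.7 * K                       # second of (cond5): D x K = 0, i.e. D parallel K
v = rng.normal(size=3)
H = v - (v @ K) / (K @ K) * K     # first of (cond5): H . K = 0, i.e. H perp K

HdotK = H @ K
DcrossK = np.linalg.norm(np.cross(D, K))
HdotD = H @ D                     # vanishes: H is perpendicular to D as well
```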
\section{Discussion}
In this letter we have studied the properties of spin currents in general material media. From first principles (Lorentz symmetry and the gauge principle) we were able to show that the non-conservation of spin currents is intrinsic to electrodynamics.
It was shown that the relativistic generalization of spin currents always obeys the non-conservation law \eqref{div1}. This relation is obtained from first principles of gauge theories, in particular Lorentz symmetry and the gauge principle. Although we have not considered a direct interaction term between electrons and the medium, this is not difficult to discuss at the phenomenological level. In fact, one can add an extra term to \eqref{div1} that simulates the spin loss of electrons exclusively due to their interaction with the medium. For instance,
\begin{equation}
\partial_\nu\mathcal{T}_f^{\nu\mu}=-\frac{e}{2}S_\nu
F^{\nu\mu}+\zeta C^\mu\;,\label{div1x}
\end{equation}
where $\zeta$ characterizes the strength of the electron-medium coupling and $C^\mu$ is a four-vector that depends on the medium properties; both quantities should be characterized experimentally. The major challenge is to obtain this extra term from first principles. However, once the extra term is determined, it should be possible to adjust the electromagnetic fields to compensate the loss, namely $C^\mu=\frac{e}{2\zeta}S_\nu
F^{\nu\mu}$, producing a conserved spin current.
Another interesting result is obtained by applying the very same first principles to the electromagnetic fields. The result is the bosonic analogue of the electronic spin current, \eqref{bw2}, whose non-conservation is described by \eqref{div2}. In contrast with the fermionic case, the bosonic non-conservation law changes with respect to the vacuum case \cite{Sobreiro:2011ie}, especially because of the interaction between the electromagnetic field and the medium through the constitutive tensor. The consequence is that, for perfect insulators, a condition that depends on the medium properties and the fields is obtained, \eqref{h2}. Thus, no phenomenological term is needed.
A problem that emerges in the analysis of the bosonic current concerns its interpretation. In the vacuum \cite{Sobreiro:2011ie}, the vector density $\mathcal{N}^i$ is identically zero while the current $\mathcal{N}^{ij}\propto \vec{E}\cdot\vec{B}\delta^{ij}$. Thus, the current is a kind of measure of the non-orthogonality between electric and magnetic fields, which may define a flowing current even though its corresponding density vanishes identically. Consequently, there will be flow only if the current is not conserved; see the \emph{rhs} of the second of equations \eqref{spin2}. In that case, the flow will depend on the presence of free charges $\rho$ and currents $\vec{j}$. The general case is obviously richer because we can demand conservation through \eqref{cond4} and still consider nontrivial flows and densities. In fact, the definitions $\mathcal{N}^{i}\propto\left(\vec{D}\times\vec{E}\right)^i$ and $\mathcal{N}^{ij}\propto c^2D^iB^j+E^iH^j-E^kH^k\delta^{ij}$ are immediately interpreted as a measure of how anisotropic the medium is; otherwise the previous case is recovered. The inevitable conclusion is that, if one wishes to transport information through $\mathcal{N}^{ij}$, it is necessary to consider anisotropic media. In that case, conditions \eqref{cond4} can be used to manipulate the fields and the medium properties in order to produce a conserved current.
Finally, it is worth mentioning that the introduction of the constitutive tensor $\chi^{\mu\nu\alpha\beta}$ is a useful technique to transfer the exotic properties of the medium to a Lorentz tensor. However, if condition \eqref{h2} is not automatically satisfied, then the medium is so exotic that its properties cannot be accommodated by $\chi^{\mu\nu\alpha\beta}$. A Lorentz breaking is inevitable.
The natural continuation of the present analysis, which is beyond the scope of this work, is to apply the conservation conditions obtained here to specific systems, explore reliable experimental situations, and pursue a deeper understanding of the bosonic current.
\section*{Acknowledgements}
RFS is thankful to the Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico\footnote{RFS is a level PQ-2 researcher under the program Produtividade em Pesquisa, 304924/2009-1.} (CNPq-Brazil). The authors acknowledge Rodolfo Casana and Diego Gonz\'alez for fruitful discussions.
\section{Introduction}
It is generally assumed that our Universe contains an approximately
equal amount of leptons and antileptons. The lepton asymmetry would be
of the same order as the baryon asymmetry, which is very small as
required by Big Bang Nucleosynthesis (BBN) considerations. The
existence of a large lepton asymmetry is restricted to be in the form
of neutrinos from the requirement of universal electric neutrality,
and the possibility of a large neutrino asymmetry is still open. {}From
a particle physics point of view, a lepton asymmetry can be generated
by an Affleck-Dine mechanism \cite{AF} without producing a large
baryon asymmetry (see ref.~\cite{Casas} for a recent model), or even
by active-sterile neutrino oscillations after the electroweak phase
transition \cite{Foot}.
We have studied some cosmological implications of relic degenerate
neutrinos \cite{Paper} (here degenerate refers to
neutrino-antineutrino asymmetry, not to mass degeneracy). We do not
consider any specific model for generating such an asymmetry, and just
assume that it was created well before neutrinos decouple from the
rest of the plasma. An asymmetry of order one or larger can have
crucial effects on the global evolution of the Universe. Among other
effects, it changes the decoupling temperature of neutrinos, the
primordial production of light elements at BBN, the time of equality
between radiation and matter, or the contribution of relic neutrinos
to the present energy density of the Universe. The latter changes
affect the evolution of perturbations in the Universe. We focus on the
anisotropies of the Cosmic Microwave Background (CMB), and on the
distribution of Large Scale Structure (LSS). We calculate the power
spectrum of both quantities, in the case of massless degenerate
neutrinos, and also for neutrinos with a mass of $0.07$ eV, as
suggested to explain the experimental evidence of atmospheric neutrino
oscillations at Super-Kamiokande \cite{SK}.
The effect of neutrino degeneracy on the LSS power spectrum was
studied in ref.~\cite{Larsen}, as a way of improving the agreement
with observations of mixed dark matter models with eV neutrinos, in
the case of high values of the Hubble parameter. Adams \& Sarkar
\cite{Sarkar} calculated the CMB anisotropies and the matter power
spectrum, and compared them with observations in the
$\Omega_\Lambda=0$ case for massless degenerate neutrinos. More
recently, Kinney \& Riotto \cite{Kinney} also calculated the CMB
anisotropies for massless degenerate neutrinos in the
$\Omega_\Lambda=0.7$ case.
\vspace{-0.25cm}
\section{Energy density of massive degenerate neutrinos}
\label{energy}
The energy density of one species of massive degenerate neutrinos and
antineutrinos, described by the distribution functions $f_\nu$ and
$f_{\bar{\nu}}$, is (we use $\hbar=c=k_B=1$ units)
\begin{equation}
\rho_\nu \! + \! \rho_{\bar{\nu}}= \!\!
\int_0^\infty \!\!\! \frac{dp}{2\pi^2} ~p^2 \sqrt{p^2 \! + \! m_\nu^2}
(f_\nu(p) \! + \! f_{\bar{\nu}}(p))
\label{defrhonu}
\end{equation}
valid at any moment. Here $p$ is the magnitude of the 3-momentum and
$m_\nu$ is the neutrino mass.
When the early Universe was hot enough, the neutrinos were in
equilibrium with the rest of the plasma via the weak interactions. In
that case the distribution functions $f_\nu$ and $f_{\bar{\nu}}$
changed with the Universe expansion, keeping the form of a Fermi-Dirac
distribution,
\begin{equation}
f_{\nu,\bar{\nu}}(p)=\frac{1}{\exp \left(\frac{p}{T_\nu} \mp
\frac{\mu}{T_\nu}\right)+1}
\label{FD}
\end{equation}
Here $\mu$ is the neutrino chemical potential, which is nonzero if a
neutrino-antineutrino asymmetry has been previously produced. Later
the neutrinos decoupled when they were still relativistic, and from
that moment the neutrino momenta just changed according to the
cosmological redshift. If $a$ is the expansion factor, the neutrino
momentum decreases keeping $ap$ constant. At the same time the
neutrino degeneracy parameter $\xi \equiv \mu/T_\nu$ is conserved,
with a value equal to that at the moment of decoupling. Therefore one
can still calculate the energy density of neutrinos now from
\eq{defrhonu} and \eq{FD}, replacing $\mu/T_\nu$ by $\xi$ and
$p/T_\nu$ by $p/(y_\nu T_0)$, where $T_0 \simeq 2.726$ K and $y_\nu$
is the present ratio of neutrino and photon temperatures, which is not
unity because once decoupled the neutrinos did not share the entropy
transfer to photons from the successive particle annihilations that
occurred in the early Universe.
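As a consistency check of \eq{defrhonu} and \eq{FD}, the sketch below (arbitrary grid parameters, units $\hbar=c=k_B=1$) integrates the massless limit numerically and compares it with the standard closed form $\rho_\nu+\rho_{\bar{\nu}}=\frac{7\pi^2}{120}T_\nu^4\left[1+\frac{30}{7}\left(\frac{\xi}{\pi}\right)^2+\frac{15}{7}\left(\frac{\xi}{\pi}\right)^4\right]$:

```python
import numpy as np

def rho_massless(xi, T=1.0, pmax=150.0, n=400001):
    # numerical integral of Eq. (defrhonu) with m_nu = 0 and the
    # Fermi-Dirac distributions of Eq. (FD), units hbar = c = k_B = 1
    p = np.linspace(0.0, pmax, n)
    x = np.clip(p / T - xi, -700, 700)
    xb = np.clip(p / T + xi, -700, 700)
    integrand = p**3 * (1.0/(np.exp(x) + 1.0) + 1.0/(np.exp(xb) + 1.0))
    dp = p[1] - p[0]
    return dp * (integrand.sum() - 0.5*(integrand[0] + integrand[-1])) / (2*np.pi**2)

def rho_closed(xi, T=1.0):
    # standard closed form for one species of massless degenerate nu + nubar
    return (7*np.pi**2/120) * T**4 * (1 + (30/7)*(xi/np.pi)**2 + (15/7)*(xi/np.pi)**4)

ratios = [rho_massless(xi) / rho_closed(xi) for xi in (0.0, 1.0, 3.0, 5.0)]
```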
In the presence of a significant neutrino degeneracy $\xi$ the
decoupling temperature $T(\xi)$ is higher than in the standard case
\cite{Freese,Kang}. The reaction rate $\Gamma$ of the weak processes,
that keep the neutrinos in equilibrium with the other species, is
reduced because some of the initial or final neutrino states will be
occupied. The authors of ref.~\cite{Kang} found that the neutrino
decoupling temperature is $T_{dec}(\xi) \approx
0.2\xi^{2/3}\exp(\xi/3)$ MeV (for $\nu_\mu$ or $\nu_\tau$). Therefore
if $\xi$ is large enough, the degenerate neutrinos decouple before the
temperature of the Universe drops below the different mass thresholds,
and are not heated by the particle-antiparticle annihilations,
reducing the ratio of neutrino and photon temperatures with respect
to the standard value $y_\nu=(4/11)^{1/3}$.
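For orientation, this fitted formula can be evaluated directly (a sketch; the fit is taken from ref.~\cite{Kang} for $\nu_\mu$ or $\nu_\tau$ and is only meaningful for sizable $\xi$):

```python
import math

def T_dec(xi):
    # fitted decoupling temperature T_dec(xi) ~ 0.2 xi^(2/3) exp(xi/3), in MeV
    return 0.2 * xi**(2.0/3.0) * math.exp(xi/3.0)

vals = {xi: T_dec(xi) for xi in (3.0, 5.0, 6.9)}
# T_dec grows quickly with xi: roughly 1.1 MeV at xi = 3 and 3.1 MeV at xi = 5
```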
The present contribution of these degenerate neutrinos to the energy
density of the Universe can be parametrized as $\rho_\nu = 10^4 h^2
\Omega_\nu$ eV cm$^{-3}$, where $\Omega_\nu$ is the neutrino energy
density in units of the critical density $\rho_c=3H^2M_P^2/8\pi$,
$M_P=1.22 \times 10^{19}$ GeV is the Planck mass and $H=100h$ Km
s$^{-1}$ Mpc$^{-1}$ is the Hubble parameter.
The value of $\rho_\nu$ can be calculated as a function of the
neutrino mass and the neutrino degeneracy $\xi$, or equivalently the
present neutrino asymmetry $L_\nu$ defined as the following ratio of
number densities
\begin{equation}
L_\nu \equiv \frac{n_\nu-n_{\bar{\nu}}}{n_\gamma} =
\frac{1}{12\zeta (3)} y^3_\nu [\xi^3 + \pi^2 \xi]
\label{Lnu}
\end{equation}
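Behind \eq{Lnu} is the exact Fermi-Dirac integral $\int_0^\infty dp\, p^2 \left(f_\nu-f_{\bar{\nu}}\right)=\frac{T_\nu^3}{3}\left(\xi^3+\pi^2\xi\right)$, which can be verified numerically (a sketch with $T_\nu=1$; grid parameters are arbitrary):

```python
import numpy as np

def asym_integral(xi, pmax=100.0, n=200001):
    # integral of p^2 (f_nu - f_nubar) for T_nu = 1
    p = np.linspace(0.0, pmax, n)
    y = p**2 * (1.0/(np.exp(p - xi) + 1.0) - 1.0/(np.exp(p + xi) + 1.0))
    dp = p[1] - p[0]
    return dp * (y.sum() - 0.5 * (y[0] + y[-1]))   # trapezoid rule

xi = 2.0
exact = (xi**3 + np.pi**2 * xi) / 3.0
ratio = asym_integral(xi) / exact
```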
We show\footnote{Here we assume $\xi>0$, but the results are also
valid for $\xi<0$ provided that $\xi$ and $L_\nu$ are understood as
moduli.} in figure \ref{lnumass} the contours in the $(m_\nu,L_\nu)$
plane that correspond to some particular values of $h^2
\Omega_\nu$. In the limit of small degeneracy (vertical lines) one
recovers the well-known bound on the neutrino mass $m_\nu \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 46$ eV
for $h^2 \Omega_\nu=0.5$. On the other hand, for very light
neutrinos the horizontal lines set a maximum value on the neutrino
degeneracy, that would correspond to a present neutrino chemical
potential $\mu_0 \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 7.4 \times 10^{-3}$ eV, also for $h^2
\Omega_\nu=0.5$. In the intermediate region of the figures the
neutrino energy density is $\rho_\nu \simeq m_\nu n_\nu (\xi)$ and the
contours follow roughly the relation
$L_\nu (m_\nu/\mbox{eV})\simeq 24.2 h^2\Omega_\nu$.
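The coefficient in this relation is essentially $10^4$ divided by the present photon number density in cm$^{-3}$: in this regime $\rho_\nu \simeq m_\nu L_\nu n_\gamma$, so $h^2\Omega_\nu = m_\nu L_\nu n_\gamma/(10^4~{\rm eV~cm}^{-3})$. A quick numerical check (a sketch, with constants to the quoted precision; small differences from rounding are expected):

```python
import math

hbar_c = 1.9733e-5                 # eV cm
T0 = 2.726 * 8.617e-5              # present photon temperature, eV
zeta3 = 1.2020569                  # Riemann zeta(3)

# photon number density today, n_gamma = (2 zeta(3)/pi^2) T0^3, in cm^-3
n_gamma = (2 * zeta3 / math.pi**2) * (T0 / hbar_c)**3

# L_nu * (m_nu / eV) ~ coef * h^2 Omega_nu, with rho_nu ~ m_nu L_nu n_gamma
coef = 1e4 / n_gamma
```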
A similar calculation has recently been performed in reference
\cite{PalKar}. Note, however, that there the ratio of neutrino and
photon temperatures was not properly taken into account for large
$\xi$.
The presence of a neutrino degeneracy can modify the outcome of BBN
(for a review see \cite{Sarkar96}). First a larger neutrino energy
density increases the expansion rate of the Universe, thus enhancing
the primordial abundance of $^4$He. This is valid for a nonzero $\xi$
of any neutrino flavor. In addition if the degenerate neutrinos are
of electron type, they have a direct influence over the weak processes
that interconvert neutrons and protons. This last effect depends on
the sign of $\xi_{\nu_e}$, and one gets $-0.06 \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}}
\xi_{\nu_e} \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 1.1$ \cite{Kang},
while a sufficiently long matter dominated epoch requires
$|\xi_{\nu_\mu,\nu_\tau}| \raise0.3ex\hbox{$\;<$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 6.9$ \cite{Kang}. This estimate
agrees with our analysis in section \ref{comparison} and places a
limit shown by the horizontal line in figure \ref{lnumass} in
the case of degenerate $\nu_\mu$ or $\nu_\tau$.
\begin{figure}[htb]
\vspace{-0.5cm}
\centerline{\psfig{file=lnumass.ps,angle=-90,width=0.49\textwidth}}
\vspace{-0.75cm}
\caption{Present energy density of massive degenerate neutrinos as a
function of the neutrino asymmetry.}
\label{lnumass}
\end{figure}
\section{Effects on the power spectra}
\label{results}
\begin{figure*}[t]
\vspace{-0.5cm}
\begin{eqnarray}
\psfig{file=figCMB.ps,width=0.48\textwidth}~~~~~
\psfig{file=figPK.ps,width=0.48\textwidth}
\nonumber
\end{eqnarray}
\vspace{-1.5cm}
\caption{CMB anisotropy and matter power spectra
for different models with one family of massless (solid lines) and
$m_{\nu} = 0.07$ eV (dashed lines) degenerate neutrinos. From bottom
to top (from top to bottom for $P(k)$), $\xi=0,3,5$.
Cosmological parameters are fixed as described in the text.
}
\label{fig.CMB}
\end{figure*}
We compute the power spectra of CMB anisotropies and LSS
using the Boltzmann code {\tt cmbfast} by Seljak \&
Zaldarriaga \cite{SelZal}, adapted to the case of one family of
degenerate neutrinos ($\nu$, $\bar{\nu}$), with mass $m_\nu$ and
degeneracy parameter $\xi$. Our modifications to the code
are reviewed and explained in \cite{Paper}.
The effect of $\xi$ and $m_{\nu}$ on the CMB anisotropy spectrum can
be seen in figure \ref{fig.CMB}. We choose a set of cosmological
parameters ($h=0.65$, $\Omega_b=0.05$, $\Omega_{\Lambda}=0.70$,
$\Omega_{CDM}=1-\Omega_b-\Omega_{\nu}-\Omega_{\Lambda}$,
$Q_{rms-ps}=18~\mu$K, flat primordial spectrum, no reionization, no
tensor contribution), and we vary $\xi$ from 0 to 5, both in the case
of massless degenerate neutrinos and degenerate
neutrinos with $m_{\nu}=0.07$ eV.
Let us first comment on the massless case. The main effect of $\xi$ is
to boost the amplitude of the first peak\footnote{In fact, this is not
true for very large values of $\xi$, where recombination can take
place still at the end of radiation domination, and anisotropies are
suppressed. However in such a case the location of the first peak is
$l \raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 450$, and the matter power spectrum is strongly
suppressed.}. Indeed, increasing the energy density of radiation
delays matter-radiation equality, which is known to boost the acoustic
peaks, and to shift them to higher multipoles, by a factor $( (1 +
a_{eq}/a_*)^{1/2} - (a_{eq}/a_*)^{1/2})^{-1}$ ($a_{eq}$ increases with
$\xi$, while the recombination scale factor $a_*$ is almost
independent of the radiation energy density). Secondary peaks are then
more affected by diffusion damping at large $l$, and their amplitude
can decrease with $\xi$.
In the case of degenerate neutrinos with $m_{\nu}=0.07$ eV, the
results are quite similar in first approximation. Indeed, the effects
described previously depend on the energy density of neutrinos at
equality. At that time, they are still relativistic, and identical to
massless neutrinos with equal degeneracy parameter. However, with a
large degeneracy, $\Omega_{\nu}$ today becomes significant: for
$\xi=5$, one has $\Omega_{\nu}=0.028$, i.e. the same order of
magnitude as $\Omega_b$. Since we are studying flat models,
$\Omega_{\nu}$ must be compensated by fewer baryons, less cold dark matter
(CDM) or a smaller $\Omega_{\Lambda}$. In our example, $\Omega_b$ and
$\Omega_{\Lambda}$ are fixed, while $\Omega_{CDM}$ slightly
decreases. This explains the small enhancement of the first peak
compared to the massless case (3.4\% for $\xi=5$). Even if this
effect is indirect, it is nevertheless detectable in principle,
possibly by future satellite missions {\it MAP} and {\it Planck} (even
if one does not impose the flatness condition, the effect of
$\Omega_{\nu}$ will be visible through a modification of the
curvature).
We also plot in figure \ref{fig.CMB} the power spectrum $P(k)$,
normalized on large scales to COBE. The effect of both parameters
$\xi$ and $m_{\nu}$ is now to suppress the power on small scales.
Indeed, increasing $\xi$ postpones matter-radiation equality, allowing
less growth for fluctuations crossing the Hubble radius during
radiation domination. Adding a small mass affects the recent evolution
of fluctuations, and has now a direct effect: when the degenerate
neutrinos become non-relativistic, their free-streaming suppresses
the growth of fluctuations for scales within the Hubble radius. This
effect, already known for non-degenerate neutrinos \cite{Huetal}, is
enhanced in the presence of a neutrino degeneracy, since the average
neutrino momentum is shifted to larger values.
Our results for massless degenerate neutrinos can be compared with
those of previous works. We found the same effect of $\xi$ on the CMB
for $\Omega_{\Lambda}=0$ as in \cite{Sarkar}, while the revised
results in \cite{Kinney} also agree
with our calculations for $\Omega_{\Lambda}=0.7$.
\section{Comparison with observations}
\label{comparison}
Since the degeneracy increases dramatically the amplitude of the first
CMB peak, we expect large $\xi$ values to be favored in the case of
cosmological models known to predict systematically a low peak (unless
a large blue tilt is invoked, which puts severe constraints on
inflation).
Our goal here is not to explore systematically all possibilities, but
to briefly illustrate how $\xi$ can be constrained by current
observations for flat models with different values of
$\Omega_{\Lambda}$. Recent results from supernovae, combined with CMB
constraints, favor flat models with $\Omega_{\Lambda} \sim 0.6-0.7$.
We choose a model with $h=0.65$, $\Omega_b=0.05$,
$Q_{rms-ps}=18~\mu$K, no reionization and no tensor contribution, and
look for the allowed window in the space of free parameters
($\Omega_{\Lambda},\xi,n$). The allowed window is defined as the
intersection of regions preferred at the 95\% confidence level by four
independent experimental tests, based on $\sigma_8$ estimation,
Stromlo-APM redshift survey, bulk velocity reconstruction, and CMB
anisotropy measurements. Details concerning these tests can be found
in \cite{Paper}.
\begin{figure*}[t]
\vspace{-0.5cm}
\begin{eqnarray}
\psfig{file=figWIN00.ps,width=0.47\textwidth}~~~~
\psfig{file=figWIN60.ps,width=0.47\textwidth}
\nonumber
\end{eqnarray}
\vspace{-1.5cm}
\caption{LSS and CMB constraints in ($\xi$, $n$) space
for $\Omega_{\Lambda}=0$ (left) and $\Omega_{\Lambda}=0.6$ (right).
The underlying cosmological model is flat, with $h=0.65$,
$\Omega_b=0.05$, $Q_{rms-ps}=18~\mu$K, no reionization, no tensor
contribution. The allowed regions are those where the labels are. For
LSS constraints, we can distinguish between degenerate neutrinos with
$m_{\nu} =0$ (solid lines) and $m_{\nu} =0.07$ eV (dotted lines).}
\label{fig.WIN}
\end{figure*}
We plot in figure \ref{fig.WIN} the LSS and CMB allowed regions in
($\xi$, $n$) parameter space, for $\Omega_{\Lambda}=0$
and $0.6$.
In the case of degenerate neutrinos with $m_{\nu} = 0.07$ eV, the LSS
regions are slightly shifted at large $\xi$, since, as we saw, the
effect of $\xi$ is enhanced (dotted lines on the figure). The CMB
regions do not show this distinction, given the smallness of the
effect and the imprecision of the data. One can immediately see that
LSS and CMB constraints on $n$ are shifted in opposite directions with
$\xi$: indeed, the effects of $\xi$ and $n$ both produce a higher CMB
peak, while to a certain extent they compensate each other in $P(k)$.
So, for $\Omega_{\Lambda}\geq0.7$, a case in which a power spectrum
normalized to both COBE and $\sigma_8$ yields too high a
peak\footnote{At least, for the values of the other cosmological
parameters considered here. This situation can easily be improved,
for instance, with $h=0.7$.}, a neutrino degeneracy can only make
things worse, and we find no allowed window at all. In the other
extreme case $\Omega_{\Lambda}=0$, it is well known that the amplitude
required by $\sigma_8$ and the shape probed by redshift surveys favor
different values of $n$. We find that the neutrino degeneracy can
solve this problem with $\xi \raise0.3ex\hbox{$\;>$\kern-0.75em\raise-1.1ex\hbox{$\sim\;$}} 3.5$, but the allowed window
is cut at $\xi \simeq 6$ by CMB data, and we are left with an
interesting region in which $\Omega_0=1$ models are viable. This
result is consistent with \cite{Sarkar}. However, current evidence
for a low $\Omega_0$ Universe is independent of the constraints used
here, so there is little motivation at the moment to consider this
window seriously. Finally, for $\Omega_{\Lambda}=0.5-0.6$, a good
agreement is found up to $\xi \simeq 3$. This upper bound could
marginally explain the generation of ultra-high energy cosmic rays by
the annihilation of highly energetic neutrinos on relic neutrinos with
mass $m_{\nu}= 0.07$ eV \cite{Gelmini}.
\section{Introduction}
The understanding of anomalous transport is one of the most important theoretical issues in the quest for controlled thermonuclear fusion \cite{horton2015}. In particular, we are concerned with the chaotic transport of charged particles advected by a turbulent electric field in a magnetized plasma \cite{horton2018}. In the context of the guiding center motion approximation, with the ${\bf E}\times{\bf B}$ drift velocity, this becomes an advection problem described by a low-dimensional Hamiltonian \cite{horton1985}.
Electrostatic fluctuations are thought to be responsible for turbulent transport in magnetically confined plasmas \cite{flutuacoes}. Such a mechanism has been found to agree with experimental evidence for low plasma pressure \cite{scott}. We limit ourselves to the anomalous transport of trace impurities that are so diluted that their presence does not alter the electric field. The problem becomes analogous to the Lagrangian description of passive scalar advection due to a given stream function, for a two-dimensional incompressible fluid flow \cite{advection,ottino}. Under these assumptions, the ${\bf E}\times{\bf B}$ drift motion of test particles is an exact model for the anomalous transport \cite{ciraolo}. Moreover, the ${\bf E}\times{\bf B}$ drift motion is observed in many magnetized plasma devices \cite{yves2021}, like magnetrons for material processing, many fusion devices and in Hall thrusters in which this term plays a major role in the anomalous transport of particles \cite{yves2020}.
One of the remarkable features of models of ${\bf E}\times{\bf B}$ drift motion is that chaotic particle motion is possible even for regular spatial configurations of the electric field, provided the corresponding Hamiltonian system is time-dependent, and thus non-integrable \cite{horton1985}. Such chaotic motion becomes a non-collisional source of enhanced cross-field particle diffusion, and has been found to yield results many orders of magnitude larger than neoclassical transport \cite{yves2020,pettini,amato}.
Particle escape in the plasma edge region is an issue directly related to chaotic cross-field transport. For example, if a chaotic orbit connects the plasma outer region and the tokamak inner wall, all particles with initial conditions therein will eventually escape the plasma and hit the tokamak wall. This phenomenon can be harnessed in order to divert particles from the plasma edge into carefully placed collecting plates called divertors \cite{5}.
The problem of particle escape in the plasma edge and the control of plasma-wall interactions are outstanding challenges of advanced tokamak scenarios, since heat and particle loads are expected to be typically very large. For example, ITER is expected to generate heat loads of $5 - 10~{\textrm{MW}}/{\textrm{m}}^2$ that can damage the tokamak inner wall, if not properly dealt with \cite{4item}.
A further complication is that the distribution of heat and particle loadings is highly nonuniform when the particle trajectories related to escape are chaotic. This nonuniformity can be attributed to a geometrical structure underlying a chaotic orbit, the homoclinic and heteroclinic tangle, formed by the infinite number of intersections between stable and unstable invariant manifolds emanating from unstable periodic orbits embedded in the chaotic orbit \cite{elton}. Recently it was shown that mode-coupling is enhanced in the phase-space regions occupied by homoclinic tangles \cite{meirielen}. The unstable manifolds, in particular, represent escape channels for particles in a chaotic orbit, and their geometry influences the spatial distribution of escape patterns \cite{evans}. This mechanism is capable of explaining qualitatively experimental observations of heat fluxes deposited on divertor plates of tokamaks \cite{wingen}.
In this paper, we explore these ideas to investigate the presence of fractal structures related to particle escape in tokamaks undergoing chaotic trajectories. Our numerical simulations will be performed using a particle ${\bf E}\times{\bf B}$ drift model in presence of electrostatic fluctuations proposed by Horton {\it et al} \cite{horton}. The spectrum of electrostatic fluctuations is chosen so as to reduce the dynamics to a two-dimensional, area-preserving map characterizing a non-integrable Hamiltonian system. For values of the physical parameters taken from the Brazilian tokamak TCABR, and using the intensity of the fluctuating electrostatic potential as the tunable parameter, we typically obtain large chaotic orbits extending from the outer portion of the plasma to the tokamak inner wall. Our results, however, can be applied to other machines since the physical quantities are suitably normalized.
The most fundamental structures to be studied, in the context of particle escape in chaotic area-filling orbits, are the so-called escape basins, which are the sets of particle positions leading to escape through a certain exit. The escape basin boundary coincides with the stable manifold in the homoclinic tangle and thus has the same geometrical properties. As a consequence of this fractality, we have a sensitive dependence on initial conditions with respect to which exit the chaotic trajectory will escape through. Recently, Mathias \textit{et al.}\cite{amanda-phys-a} have studied the structures related to the escape of particles through different exits in the boundary of the plasma, caused by two ${\bf E}\times{\bf B}$ drift waves.
We use a number of quantitative diagnostics for the characterization of the fractality of these structures, namely the uncertainty exponent (related to the box-counting dimension) and the corresponding information entropy. Both quantify the final-state uncertainty of the system, i.e. how an improvement in the precision with which an initial condition is determined reduces the uncertainty about the exit through which the corresponding trajectory escapes. We also identify the so-called Wada property, which can occur for three or more escape regions and characterizes an extreme form of fractal behavior.
The rest of this article is organized as follows: in Section II, we outline the symplectic (area-preserving) map describing chaotic advection of test particles in the ${\bf E}\times{\bf B}$ drift motion caused by a radial equilibrium electric field plus electrostatic fluctuations. Section III is devoted to a detailed discussion of the radial profiles of the equilibrium safety factor of the magnetic field, the electric field, and the toroidal velocity of the plasma. Section IV discusses escape basins and their underlying mathematical structure. Section V deals with the numerical characterization of fractal structures using the uncertainty exponents and basin entropies. The Wada property and its quantitative characterization are discussed in Section VI. The last Section is devoted to our Conclusions.
\section{Symplectic map for drift motion}
Let us denote by $a$ and $R_0$ the minor and major plasma radii, respectively, in a tokamak. In the following we will describe the particle position using local coordinates $(r,\theta,\varphi)$, where $r$ is measured from the minor axis, $\theta$ is the poloidal angle, and $\varphi$ the toroidal angle. We assume a large aspect ratio approximation ($\epsilon = a/R_0 \ll 1$), such that the equilibrium magnetic field is $\mathbf{B} =\left(0,B_\theta(r),B_\varphi\right)$, where $B_\varphi$ and $B_\theta$ are the toroidal and poloidal components, respectively.
Moreover, since $B_\theta \sim \epsilon B_\varphi$ we have $B \approx B_\varphi \gg B_\theta$ and thus consider $B$ as a uniform field. In this approximation the magnetic (flux) surfaces are nested tori with circular cross sections, with a radial profile for the corresponding safety factor
\begin{equation}
\label{qr}
q(r) = \frac{r B}{R_0 B_\theta(r)}.
\end{equation}
In this work we will consider two kinds of electrostatic fields: (i) an external and time-independent electric field in the radial direction, and (ii) the time-dependent field related to the drift instabilities, in the form
\begin{equation}
\label{efield}
{\bf E} = \bar{E}_r(r) {\hat{\bf r}} - \nabla{\tilde\phi}(r,\theta,\varphi;t).
\end{equation}
Under these conditions, the guiding-center motion can be thought of as a superposition of a passive advection along the magnetic field lines, with velocity $v_\parallel$, and an ${\bf E}\times{\bf B}$ drift velocity. The resulting equation of motion for the guiding-center is thus
\begin{equation}
\label{eqm}
\frac{d{\bf r}}{dt} = v_\parallel \frac{\textbf{B}}{B} + \frac{\textbf{E}\times\textbf{B}}{B^2},
\end{equation}
which gives the components
\begin{align}
\label{eqr}
\frac{dr}{dt} & = - \frac{1}{rB}
\frac{\partial{\tilde\phi}}{\partial \theta}, \\
\label{eqt}
\frac{d\theta}{dt} & = \frac{v_\parallel(r)}{R_0q(r)} - \frac{\bar{E}_r(r)}{rB} + \frac{1}{rB} \frac{\partial{\tilde\phi}}{\partial r}, \\
\label{eqf}
\frac{d\varphi}{dt} & = \frac{v_\parallel(r)}{R_0}.
\end{align}
The electric potential related to the drift instabilities is assumed to exhibit a broad spectrum of frequencies $\omega_n = n \omega_0$ and wave vectors, characterized by a Fourier expansion in the general form \cite{horton}
\begin{equation}
\label{eq:spectrum}
\tilde{\phi}(r,\theta,\varphi;t) = \sum_{m,\ell,n} \phi_{m, \ell, n} (r) \cos(m\theta-\ell\varphi -n\omega_0 t),
\end{equation}
where the coefficients $\phi_{m,\ell,n}$ depend, in general, on the radius $r$ and time but, for simplicity, we will assume them to be constant over the plasma region of interest in this work. Moreover, we retain only the dominant Fourier mode in Eq.~(\ref{eq:spectrum}), with harmonics of the lowest frequency $\omega_0$, and fixed poloidal and toroidal mode numbers $m = M$ and $\ell = L$, respectively, such that
\begin{equation}
\label{eq:total.spectrum}
\tilde{\phi}(\theta,\varphi;t) = 2\pi\phi_{ML}\cos{(M\theta-L\varphi)} \sum_n \delta(\omega_0 t - 2\pi n),
\end{equation}
where we used the formulas
\begin{align}
\label{pois}
\sum_{n=-\infty}^{+\infty} \cos(n\omega_0 t) & = 2\pi \sum_n \delta(\omega_0 t - 2\pi n), \\
\label{poiss}
\sum_{n=-\infty}^{+\infty} \sin(n\omega_0 t) & = 0.
\end{align}
The drift motion of guiding centers is a Hamiltonian system, with canonical equations
\begin{equation}
\label{hh}
\frac{dI}{dt} = - \frac{\partial H}{\partial\Psi}, \qquad
\frac{d\Psi}{dt} = \frac{\partial H}{\partial I},
\end{equation}
where we define action and angle variables by $I=(r/a)^2$ and $\Psi= M\theta - L\varphi$, respectively. Making this transformation and using Eq.~(\ref{qr}) reduces the system (\ref{eqr})-(\ref{eqf}) to the form
\begin{align}
\label{eqr1}
\frac{dI}{dt} & = \frac{4\pi M \phi_{ML}}{a^2B} \, \sin\Psi \sum_{n} \delta(\omega_0 t - 2\pi n), \\
\label{eqt1}
\frac{d\Psi}{dt} & = \frac{v_\parallel(I)}{R_0 q(I)} \, (M - q(I) L) - \frac{M \bar{E}_r(I)}{aB\sqrt{I}}.
\end{align}
We define discrete variables by considering a stroboscopic sampling of the action-angle variables at integer multiples of the characteristic period
\begin{align}
\label{stroboI}
I_n & = \lim_{\eta \searrow 0} I\left( t = \frac{2\pi n}{\omega_0} - \eta \right), \\
\label{strobo2}
\Psi_n & = \Psi\left(t = \frac{2\pi n}{\omega_0}\right),
\end{align}
leading to the two-dimensional Poincaré map
\begin{align}
\label{map1}
I_{n+1} & = I_n + \frac{4\pi M \phi}{a^2B\omega_0} \sin\Psi_n, \\
\nonumber
\Psi_{n+1} & = \Psi_n + \frac{2\pi v_\parallel(I_{n+1})}{\omega_0 R_0} \frac{M - L q(I_{n+1})}{q(I_{n+1})} \\
\label{map2}
& - \frac{2\pi M}{aB\omega_0} \frac{\bar{E}_r(I_{n+1})}{\sqrt{I_{n+1}}},
\end{align}
where we abbreviate $\phi_{ML}$ by $\phi$. We apply a normalization with respect to the radial electric field, defining $E_r=\bar{E}_r/E_0$, where $E_0$ is chosen such that the normalized field equals unity at the plasma edge. With $E_0$, the magnetic field $B_0=1.1~{\textrm{T}}$, and the minor plasma radius $a=0.18~{\textrm{m}}$ we normalize all the physical quantities. The normalization factors are the velocity $v_0=E_0/B_0$, the time $t_0=a/v_0$, and the electric potential $\phi_0=aE_0$.
\section{Radial profiles}
The map defined by (\ref{map1})-(\ref{map2}) is area-preserving in the phase plane $(I,\Psi)$ corresponding to the Poincaré surface of section obtained by using (\ref{stroboI})-(\ref{strobo2}). It is important to emphasize that the Poincaré surface of section we deal with is in fact a stroboscopic sampling of the action and angle variables, rather than a fixed plane in space, such as $\varphi = \mathrm{const}$. The latter description would be possible by numerically solving the differential equations of motion (\ref{eqr})-(\ref{eqf}) and considering the intersections of the particle trajectory with a fixed plane. Hence, in the present work, we will be interested in analyzing the escape in the phase plane of action-angle variables.
In order to investigate the dynamics generated by iterating the map (\ref{map1})-(\ref{map2}), we first have to give analytical expressions for three radial profiles: the safety factor $q(I)$, the radial electric field $E_r(I)$, and the toroidal velocity $v_\parallel(I)$. In this work we use parameters of the TCABR tokamak, operating at the Physics Institute of the University of São Paulo (Brazil), listed in Table 1 \cite{nascimento}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
parameter & symbol & value \\
\hline
\hline
minor radius & $a$ & $0.180~\mathrm{m}$ \\
\hline
major radius & $R_0$ & $0.615~\mathrm{m}$ \\
\hline
toroidal field & $B_\varphi$ & $1.1~\mathrm{T}$ \\
\hline
plasma current & $I_p$ & $100~\mathrm{kA}$ \\
\hline
central electron temperature & $T_{e0}$ & $400~\mathrm{eV}$ \\
\hline
central electron density & $n_{e0}$ & $3.0 \times 10^{19}~\mathrm{m^{-3}}$ \\
\hline
pulse duration & $\tau_p$ & $120~\mathrm{ms}$ \\
\hline
\end{tabular}
\end{center}
\caption{Main parameters of the TCABR tokamak \cite{nascimento}.}
\end{table}
We used $M=15$ and $L=6$ as the main poloidal and toroidal modes \cite{marcus2}. The normalized fundamental frequency is $\omega_0=16.36$.
Non-monotonic safety factor profiles generate negative shear regions in the plasma, which improve the plasma confinement quality: there is a significant reduction of turbulent transport for this type of safety factor \cite{levinton-1995,strait-1995}. The safety factor profile we consider, in terms of the action variable $I = r^2/a^2$, is
\begin{equation}
\label{eq:safety.profile}
q(I) = 5.0 - 6.3 \, I^2 + 6.3 \, I^3,
\end{equation}
so that, at the plasma edge, $q(I=1) = 5.0$, which is consistent with measurements of plasma current, electron density and temperature in the TCABR tokamak [FIG. \ref{fig:profiles}(a)].
Turbulent particle fluxes in H-mode tokamak discharges are reduced by the presence of a radial electric field with negative shear \cite{viezzer,hidalgo}, generating a shearless transport barrier \cite{marcus,marcus2} that is compatible with the reduction of the turbulent fluxes. We adopt the profile
\begin{equation}
\label{eq:electric.field.profile}
E_r(I) = 10.7 \, I - 15.8 \, \sqrt{I} + 4.13,
\end{equation}
so as to yield a local minimum in the desired plasma region \cite{nascimento} [FIG. \ref{fig:profiles}(b)].
We take into account the plasma rotation by considering a non-monotonic profile for the toroidal plasma velocity, which is related to shearless barriers \cite{ferro}. Spectroscopic techniques have been used for the measurement of toroidal plasma rotation velocities in TCABR discharges, giving values of about $4.0~\mathrm{km/s}$ at the plasma edge \cite{nascimento}. A normalized parallel velocity profile to be used in this work, and consistent with TCABR observations, is given by [FIG. \ref{fig:profiles}(c)] \cite{ferro}
\begin{equation}
\label{eq:toroidal.velocity}
v_\parallel(I) = - 9.867 + 17.47 \, \tanh(10.1 \, I - 9.00).
\end{equation}
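As a concrete check of these expressions, the three profiles can be encoded directly. The following minimal Python sketch (illustrative only, written in terms of the normalized action variable $I$) verifies the edge value $q(I=1)=5.0$ and locates the shearless minimum of $E_r$:

```python
import numpy as np

# Radial profiles of Eqs. (safety/electric/velocity) as functions of the
# action variable I = (r/a)^2, in the normalized units of Sec. II.

def q(I):
    """Non-monotonic safety factor profile."""
    return 5.0 - 6.3 * I**2 + 6.3 * I**3

def E_r(I):
    """Normalized equilibrium radial electric field."""
    return 10.7 * I - 15.8 * np.sqrt(I) + 4.13

def v_par(I):
    """Normalized toroidal (parallel) plasma velocity."""
    return -9.867 + 17.47 * np.tanh(10.1 * I - 9.00)

# Consistency check: q = 5.0 at the plasma edge (I = 1)
assert abs(q(1.0) - 5.0) < 1e-12

# E_r has a local minimum inside the plasma:
# dE_r/dI = 10.7 - 7.9/sqrt(I) = 0  ->  I = (7.9/10.7)^2 ≈ 0.545
I_grid = np.linspace(0.01, 1.0, 10001)
I_min = I_grid[np.argmin(E_r(I_grid))]
```

The analytically computed minimum of $E_r$ agrees with the numerical search on the grid, confirming the non-monotonic (shearless) character of the field profile.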
\begin{figure*}
\centering
\subfloat(a){\includegraphics[height=2.in]{q_profile.jpg}}
\subfloat(b){\includegraphics[height=2.in]{E_profile.jpg}}
\subfloat(c){\includegraphics[height=2.in]{v_profile.jpg}}
\caption{Radial profiles in terms of the action variable $I = r^2/a^2$ for the quantities: (a) safety factor, (b) equilibrium electric field, and (c) toroidal plasma velocity.}
\label{fig:profiles}
\end{figure*}
In our numerical simulations, we integrate the map (\ref{map1})-(\ref{map2}) using the profiles for the equilibrium safety factor, radial electric field, and toroidal velocities given by Eqs. (\ref{eq:safety.profile}), (\ref{eq:electric.field.profile}), and (\ref{eq:toroidal.velocity}), respectively. We keep all parameters fixed and choose the amplitude of the main electrostatic mode $\phi$ as the variable parameter. Proceeding in this way, we can evaluate the qualitative effects of increasing perturbation strength on the orbit structure generated by the map (\ref{map1})-(\ref{map2}).
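For reference, one iteration of the map (\ref{map1})-(\ref{map2}) with these profiles can be sketched in Python. This is an illustrative implementation, not the code used for the figures; we assume the normalized units of Sec. II, so that $a=B=1$ and $R_0$ becomes the aspect ratio, and we wrap $\Psi$ to $[-\pi,\pi)$ for convenience of plotting:

```python
import numpy as np

# Normalized parameters: lengths in units of a, fields in units of B_0 and
# E_0 (Sec. II), so a = B = 1 and R0 below is the aspect ratio R_0/a.
M, L = 15, 6
omega0 = 16.36
R0 = 0.615 / 0.18

def q(I): return 5.0 - 6.3 * I**2 + 6.3 * I**3
def E_r(I): return 10.7 * I - 15.8 * np.sqrt(I) + 4.13
def v_par(I): return -9.867 + 17.47 * np.tanh(10.1 * I - 9.00)

def drift_map(I, Psi, phi):
    """One iteration of the map (map1)-(map2); Psi wrapped to [-pi, pi)."""
    I1 = I + 4.0 * np.pi * M * phi * np.sin(Psi) / omega0
    Psi1 = (Psi
            + 2.0 * np.pi * v_par(I1) * (M - L * q(I1)) / (omega0 * R0 * q(I1))
            - 2.0 * np.pi * M * E_r(I1) / (omega0 * np.sqrt(I1)))
    return I1, (Psi1 + np.pi) % (2.0 * np.pi) - np.pi

# A short sample orbit started near the plasma edge
I, Psi, phi = 0.9, 0.1, 4.92e-3
orbit = [(I, Psi)]
for _ in range(200):
    I, Psi = drift_map(I, Psi, phi)
    orbit.append((I, Psi))
    if I > 1.0:          # crossed the plasma boundary: the orbit escapes
        break
```

Iterating a grid of such initial conditions and plotting $(I_n,\Psi_n/2\pi)$ reproduces phase portraits of the kind shown in Figure \ref{fig:phase}.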
\begin{figure*}
\centering
\subfloat(a){\includegraphics[height=2.5in]{fase-4.jpg}}
\subfloat(b){\includegraphics[height=2.5in]{fase-7.jpg}}
\subfloat(c){\includegraphics[height=2.5in]{fase-8.jpg}}
\subfloat(d){\includegraphics[height=2.5in]{fase-10.jpg}}
\caption{Phase portraits of the map (\ref{map1})-(\ref{map2}) for the following values of the perturbation amplitude $\phi$: (a) $4.92\times 10^{-3}$, (b) $7.65\times 10^{-3}$, (c) $8.74\times 10^{-3}$, and (d) $10.38\times 10^{-3}$.}
\label{fig:phase}
\end{figure*}
Figure \ref{fig:phase} depicts some phase portraits of the map, using rectangular coordinates for $I_n$ and $\Psi_n/(2\pi)$ for ease of visualization. For a relatively small value of $\phi$, we have a divided phase space consisting of an outer large chaotic sea, with remnants of periodic islands embedded in it, and an inner structure of invariant tori and island chains comprising the plasma core. The large chaotic sea intercepts the plasma boundary at the radial distance corresponding to $I = 1.0$, in such a way that an initial condition placed within the chaotic orbit will eventually escape the plasma through that boundary. This is thus an open Hamiltonian system. On the other hand, initial conditions placed in the inner region are not expected to escape, due to the invariant tori which act as dikes, preventing large-scale chaotic transport [FIG. \ref{fig:phase}(a)]. As the perturbation strength increases, the outer chaotic region is enlarged by engulfing some of the nearby invariant tori and island chains [FIG. \ref{fig:phase}(b)-(c)]. For large $\phi$, the chaotic region encompasses virtually all the region formerly occupied by the plasma column [FIG. \ref{fig:phase}(d)].
\section{Escape basins}
In this work, we are focusing on the dynamics of test particles, i.e. charged particles which are passively advected by the drift flow generated by the combined effects of crossed electric and magnetic fields. Such a particle can escape the tokamak by hitting some boundary surface, like that of a divertor plate, similar to those used to mitigate the plasma-wall interactions through exhaustion of particles escaping along a chaotic orbit near a plasma separatrix \cite{punjabi}.
Instead of investigating this type of escape directly, we will open the dynamical system given by the map equations (\ref{map1})-(\ref{map2}) by considering that the particles are able to escape through one or more exits in the $I \times \Psi$ phase plane \cite{viana-sanjuan-2007}. Accordingly, we will consider two exits placed at the position $I=1.0$: one for $-\pi\leq\Psi<0$, denoted by $L$, and a second exit for $0\leq\Psi\leq \pi$, denoted by $R$.
Let us consider an initial condition $(I_0,\Psi_0)$. For each iteration of the map (\ref{map1})-(\ref{map2}) we make the following test: if $I_n \leq 1.0$ we continue iterating, otherwise we stop iterating and consider the value of $\Psi_n$. If $I_n > 1.0$ for some $n \ge 1$ and $-\pi\leq\Psi_n<0.0$ we consider an escape through exit $L$, otherwise through $R$. The sets of initial conditions, for which there is a value of $I_n > 1.0$ ($n \ge 1$) indicating an escape through exits $L$ and $R$, form their corresponding basins of escape, denoted by ${\cal B}(L)$ and ${\cal B}(R)$, respectively.
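The escape test just described can be sketched as follows (a minimal, self-contained Python version reusing the normalized map of Sec. II; the cutoff $n_{\max}$ and the sample of initial conditions are illustrative choices, not those used for the figures):

```python
import numpy as np

# Normalized map of Sec. II (a = B = 1, R0 = aspect ratio)
M, L = 15, 6
omega0 = 16.36
R0 = 0.615 / 0.18

def q(I): return 5.0 - 6.3 * I**2 + 6.3 * I**3
def E_r(I): return 10.7 * I - 15.8 * np.sqrt(I) + 4.13
def v_par(I): return -9.867 + 17.47 * np.tanh(10.1 * I - 9.00)

def drift_map(I, Psi, phi):
    I1 = I + 4.0 * np.pi * M * phi * np.sin(Psi) / omega0
    Psi1 = (Psi
            + 2.0 * np.pi * v_par(I1) * (M - L * q(I1)) / (omega0 * R0 * q(I1))
            - 2.0 * np.pi * M * E_r(I1) / (omega0 * np.sqrt(I1)))
    return I1, (Psi1 + np.pi) % (2.0 * np.pi) - np.pi

def exit_basin(I0, Psi0, phi, n_max=10000):
    """Iterate until I_n > 1: return 'L' if -pi <= Psi_n < 0, 'R' if
    0 <= Psi_n <= pi, or None when the orbit does not escape within n_max."""
    I, Psi = I0, Psi0
    for _ in range(n_max):
        I, Psi = drift_map(I, Psi, phi)
        if I > 1.0:
            return 'L' if Psi < 0.0 else 'R'
    return None

# Classify a small sample of initial conditions in the chaotic region
phi = 10.38e-3
labels = [exit_basin(0.95, psi, phi)
          for psi in np.linspace(-3.0, 3.0, 10)]
```

Applying `exit_basin` over a fine grid of initial conditions, and coloring each point according to the returned label, yields escape basin diagrams of the kind shown below.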
\begin{figure*}
\centering
\subfloat(a){\includegraphics[height=2.5in]{bacia-4.jpg}}
\subfloat(b){\includegraphics[height=2.5in]{bacia-7.jpg}}
\subfloat(c){\includegraphics[height=2.5in]{bacia-8.jpg}}
\subfloat(d){\includegraphics[height=2.5in]{bacia-10.jpg}}
\caption{Escape basins for the exits at $L: (I=1.0, -0.5\leq\Psi/(2\pi) <0.0)$ (green pixels) and $R: (I=1.0, 0.0\leq\Psi/(2\pi)\leq 0.5)$ (red pixels). Those points that do not escape within a maximum allotted time are represented by white pixels. The amplitude $\phi$ of the electrostatic fluctuations is (a) $4.92\times 10^{-3}$, (b) $7.65\times 10^{-3}$, (c) $8.74\times 10^{-3}$, and (d) $10.38\times 10^{-3}$.}
\label{fig:basins}
\end{figure*}
In Figure \ref{fig:basins}(a)-(d), we show the basins of escape, of a region of the Poincaré surface of section, for different values of the perturbation strength $\phi$. Points belonging to the basin $L$ are painted green, whereas points of basin $R$ are depicted in red. The white region mostly indicates initial conditions that do not escape within a pre-specified large time $n^*$.
There are regions of white points in the plasma core, which correspond to initial conditions that do not escape (after a maximum time $n^*$), because their trajectories in the phase plane remain on invariant curves, outside the chaotic region. Other white points are inside islands and thus do not escape either.
The mixing of the escape basins ${\cal B}(L)$ and ${\cal B}(R)$ is clearly seen at most points in the chaotic region. Moreover, the green escape basin, ${\cal B}(L)$, is significantly larger than the red escape basin, ${\cal B}(R)$, for all values we considered for the perturbation amplitude $\phi$, indicating a preferential escape through the $L$-exit. This asymmetric feature can be understood by considering the fractal structures that underlie chaotic dynamics in this region, as we describe later on.
The mixing between the two escape basins is non-uniform, as can be seen in FIG. \ref{fig:zoom}(a)-(b), where we show two consecutive magnifications of the escape basins depicted in FIG. \ref{fig:basins}(c). We observe a finger-like structure of red basin filaments embedded in the green basin.
\begin{figure*}
\centering
\subfloat(a){\includegraphics[height=2.5in]{bacia-z1.jpg}}
\subfloat(b){\includegraphics[height=2.5in]{bacia-z2.jpg}}
\caption{Two consecutive magnifications of a region of the escape basins obtained for $\phi=8.74\times 10^{-3}$.}
\label{fig:zoom}
\end{figure*}
Not only are the escape basins intertwined at arbitrarily fine scales, but the escape time $n_e$, i.e. the number of map iterations that an orbit takes to hit one of the exits, also has a complicated distribution in the phase space. Let us take Fig. \ref{fig:time}(a), for example, which depicts the escape time (in a color bar) as a function of the initial condition $(I,\Psi/(2\pi))$ for the same parameters as the escape basins shown in Fig. \ref{fig:basins}(a). In the chaotic region, the escape time is found to be as finely intermixed as the escape basins themselves. The white points, as before, correspond to points for which the escape time exceeds a specified maximum time $n^*$.
\begin{figure*}
\centering
\subfloat(a){\includegraphics[height=2.5in]{CC-4.jpg}}
\subfloat(b){\includegraphics[height=2.5in]{CC-7.jpg}}
\subfloat(c){\includegraphics[height=2.5in]{CC-8.jpg}}
\subfloat(d){\includegraphics[height=2.5in]{CC-10.jpg}}
\caption{Escape times (indicated by a colorbar) for different values of the perturbation amplitude $\phi$: (a) $4.92\times 10^{-3}$, (b) $7.65\times 10^{-3}$, (c) $8.74\times 10^{-3}$, and (d) $10.38\times 10^{-3}$.}
\label{fig:time}
\end{figure*}
The invariant chaotic set underlying the large chaotic orbit can be used to understand the complicated structure of escape basins. Let us consider an unstable periodic orbit embedded in the chaotic region of any phase space depicted in Figure \ref{fig:phase}. The stable (unstable) manifold of this orbit is the set of points whose forward (backward) iterates of the map (\ref{map1})-(\ref{map2}) asymptotically approach the orbit. The intersections of the stable and unstable manifolds form a non-attracting invariant chaotic set called chaotic saddle \cite{saddle}.
If an initial condition $(I_0,\Psi_0)$ could be placed exactly on an invariant manifold, it would remain on this manifold for arbitrarily large time. However, if this point is off but very close to a given invariant manifold, it will remain close to it for some time until escaping through one of the exits. This property can be used to generate numerical approximations of the invariant manifolds using the so-called sprinkler algorithm \cite{kantz}. Other algorithms for obtaining invariant manifolds are available, but this particular one is easier to apply since one does not need to consider inverse images of the points \cite{invariants}.
Let us consider a bounded region $\mathcal{R}$ of the phase space $I \times \Psi$ containing a chaotic orbit, and cover it with a fine grid of points. Each mesh point corresponds to an initial condition $(I_0,\Psi_0)$, which is iterated $m$ times using the map (\ref{map1})-(\ref{map2}). If, after $m$ iterates, the value of $(I_n,\Psi_n)$ remains inside $\mathcal{R}$, the corresponding initial condition is a numerical approximation of a point on the stable manifold $W^s(P)$, which emanates from an unstable periodic orbit $P$ embedded in the chaotic orbit in the region $\mathcal{R}$. Moreover, the $m$-th iterates themselves, $(I_m,\Psi_m)$, are numerical approximations of the unstable manifold $W^u(P)$. Analogously, the corresponding $m/2$-th iterates constitute a numerical approximation of the chaotic saddle itself \cite{poon}. The underlying chaotic behavior of the system can be explained by the chaotic saddle, whose topological properties are similar to those of the Smale horseshoe \cite{smale}.
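The sprinkler algorithm admits a compact implementation. The following Python sketch (with an illustrative region and grid, smaller than those used in our simulations, and the normalized map of Sec. II) returns the three numerical approximations at once:

```python
import numpy as np

# Normalized map of Sec. II (a = B = 1, R0 = aspect ratio)
M, L = 15, 6
omega0 = 16.36
R0 = 0.615 / 0.18

def q(I): return 5.0 - 6.3 * I**2 + 6.3 * I**3
def E_r(I): return 10.7 * I - 15.8 * np.sqrt(I) + 4.13
def v_par(I): return -9.867 + 17.47 * np.tanh(10.1 * I - 9.00)

def drift_map(I, Psi, phi):
    I1 = I + 4.0 * np.pi * M * phi * np.sin(Psi) / omega0
    Psi1 = (Psi
            + 2.0 * np.pi * v_par(I1) * (M - L * q(I1)) / (omega0 * R0 * q(I1))
            - 2.0 * np.pi * M * E_r(I1) / (omega0 * np.sqrt(I1)))
    return I1, (Psi1 + np.pi) % (2.0 * np.pi) - np.pi

def sprinkler(phi, I_lo=0.3, I_hi=1.0, m=10, n_grid=200):
    """Sprinkler approximations of the stable manifold (initial conditions
    that stay in the region for m iterations), the unstable manifold (their
    m-th iterates) and the chaotic saddle (their m//2-th iterates)."""
    I0, Psi0 = np.meshgrid(np.linspace(I_lo, I_hi, n_grid),
                           np.linspace(-np.pi, np.pi, n_grid))
    I, Psi = I0.ravel(), Psi0.ravel()
    snapshots = [(I, Psi)]
    alive = np.ones(I.size, dtype=bool)
    for _ in range(m):
        I, Psi = drift_map(I, Psi, phi)
        alive &= (I >= I_lo) & (I <= I_hi)   # discard points leaving R
        snapshots.append((I, Psi))
    pick = lambda k: (snapshots[k][0][alive], snapshots[k][1][alive])
    return pick(0), pick(m), pick(m // 2)    # stable, unstable, saddle

stable, unstable, saddle = sprinkler(phi=8.74e-3)
```

Scatter plots of the three returned point sets produce pictures of the kind shown in FIG. \ref{fig:manifolds}.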
\begin{figure*}
\centering
\includegraphics[height=1.5in]{schema.jpeg}
\caption{Schematic figure showing the accumulation of the escape basin filaments at the stable manifold (red) of the chaotic saddle.}
\label{fig:schema}
\end{figure*}
In FIG. \ref{fig:manifolds} we show numerical approximations of stable and unstable manifolds, for $\phi = 8.74\times 10^{-3}$, obtained by the sprinkler method. We used a grid of $1000\times 1000$ initial conditions with $m=10$ iterations for our simulations. The boundary of the escape basins coincides with the stable manifold of the chaotic saddle as can be seen in Figures \ref{fig:zoom}(a) and \ref{fig:manifolds}(a). In order to understand this fact, let us consider a segment $S$ of the escape basin boundary intercepting the unstable manifold of some unstable periodic orbit $P$, embedded in a chaotic orbit.
Let us represent by $\mathbf{F}$ the area-preserving map (\ref{map1})-(\ref{map2}). Since $\mathbf{F}$ is a Poincaré map in a surface of section, it follows that it is invertible, i.e., there exists a unique inverse map $\mathbf{F}^{-1}$ for any value of the system parameters. The invariant manifolds emanating from an unstable periodic orbit $P$ are denoted by $W^s(P)$ (stable) and $W^u(P)$ (unstable). They intersect transversely at $P$, i.e. the angle between them is bounded away from zero. This amounts to saying that $P$ is a hyperbolic fixed point of the map $\mathbf{F}$ or a point belonging to a hyperbolic periodic orbit \cite{ott}.
According to Figure \ref{fig:schema}, the backward images of the segment $S$, like $\mathbf{F}^{-1}(S)$ and $\mathbf{F}^{-2}(S)$, are smoothly deformed, becoming increasingly narrow along the direction of $W^u(P)$ and elongated along $W^s(P)$. In other words, as the number of map iterations goes to infinity, the escape basin boundary converges to the stable manifold of $P$.
This essentially happens because the intersections between the unstable manifold and the basin boundary converge exponentially fast according to the corresponding eigenvalue of the tangent map ${\bf DF}(P)$. The length of the lobes formed by the backward images increases to preserve areas. Hence, if the segment $S$ crosses the unstable (or stable) manifold of the chaotic saddle, the escape basin boundary is fractal. This accumulation process is shown in FIG. \ref{fig:schema}.
The unstable manifold shown in FIG. \ref{fig:manifolds}(b) indicates the path followed by map iterations before they escape, i.e. the escape channels by which particles pass toward the tokamak wall. If an initial condition $(I_0,\Psi_0)$ is off but very near the unstable manifold, it will remain near it for an arbitrarily long time. If the escape time $n_e$ exceeds a pre-specified maximum value $n^*$, our algorithm will not assign an escape basin for this initial condition. This can explain the presence of white points near the tokamak wall in the Figures showing both the escape basins and escape time at the Poincaré surface of section.
\begin{figure*}
\centering
\subfloat(a){\includegraphics[height=2.3in]{stable.jpg}}
\subfloat(b){\includegraphics[height=2.3in]{unstable.jpg}}
\subfloat(c){\includegraphics[height=2.3in]{saddle.jpg}}
\caption{Numerical approximations of the (a) stable and (b) unstable invariant manifolds for an unstable fixed point embedded in the chaotic region when $\phi=8.74\times 10^{-3}$. The points in (c) are numerical approximations of the corresponding chaotic saddle.}
\label{fig:manifolds}
\end{figure*}
\section{Characterization of fractal structures}
\subsection{Uncertainty exponent}
In order to characterize the fractality of the escape basins, we first calculated the uncertainty dimension according to the algorithm introduced by MacDonald {\it et al.} \cite{macdonald,macdonald1}. Any initial condition of the phase space is known with some uncertainty, which we can represent by a disk of radius $\varepsilon$ centered at $(I_0, \Psi_0)$. If the disk intercepts the boundary of the escape basins, we call that initial condition $\varepsilon$-uncertain, i.e. it is impossible to predict with total confidence through which exit that initial condition will escape, if it is specified with uncertainty $\varepsilon$. This impossibility is called final-state uncertainty and it is directly related to the fractal nature of the escape basin boundary.
We consider a grid of initial conditions $(I_0, \Psi_0)$ in a given phase space region ${\cal R}$ containing a significant portion of the escape basin boundary. The points belonging to this grid are taken to be the centers of small disks of radius $\varepsilon$, and are iterated until the ensuing orbit escapes through the $L$ or $R$ exit (if the orbit does not escape at all, it is discarded from the computation). For each grid point, two other initial conditions are randomly chosen inside the corresponding $\varepsilon$-disk and are likewise iterated until reaching one of the two exits. If the three points of a given $\varepsilon$-disk do not all escape through the same exit, the center of this disk is considered $\varepsilon$-uncertain.
The uncertain fraction $f(\varepsilon)$ is the number of $\varepsilon$-uncertain conditions divided by the total number of initial conditions. It is known that $f$ scales with $\varepsilon$ as a power law, $f(\varepsilon)\sim\varepsilon^\xi$, where $\xi$ is called the uncertainty exponent. Let $d$ be the box-counting dimension of the escape basin boundary in the two-dimensional phase plane. In order to cover the boundary with boxes of length $\delta$, it takes $N(\delta) \sim \delta^{-d}$ of them, so that the box-counting dimension is given by
\begin{equation}
\label{boxcount}
d = \lim_{\delta\rightarrow 0} \frac{\ln N(\delta)}{\ln (1/\delta)}.
\end{equation}
Now we set $\delta$ equal to the initial condition uncertainty $\varepsilon$, and thus the area of the uncertain region of the phase space will be of the order of the total area of all $N(\delta)$ boxes used to cover the escape basin boundary. Given that the area of each box is $\varepsilon^2$, the uncertain area is of the order
\[
f(\varepsilon) \sim \varepsilon^2 N(\varepsilon) \sim \varepsilon^{2-d} = \varepsilon^\xi,
\]
so that the escape basin boundary dimension is $d = 2 - \xi$. If the escape basin boundary is a smooth curve ($d = 1$), then $\xi=1$. However, if the basin boundary is fractal, then $0<\xi<1$, so that its dimension is $1<d<2$.
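A toy-resolution version of this uncertain-fraction computation is sketched below. For simplicity the perturbation is applied only along $I$ and only two values of $\varepsilon$ are used, with far fewer points and iterations than in our actual simulations; orbits that do not escape are discarded, as in the text:

```python
import numpy as np

# Normalized map of Sec. II (a = B = 1, R0 = aspect ratio)
M, L = 15, 6
omega0 = 16.36
R0 = 0.615 / 0.18

def q(I): return 5.0 - 6.3 * I**2 + 6.3 * I**3
def E_r(I): return 10.7 * I - 15.8 * np.sqrt(I) + 4.13
def v_par(I): return -9.867 + 17.47 * np.tanh(10.1 * I - 9.00)

def drift_map(I, Psi, phi):
    I1 = I + 4.0 * np.pi * M * phi * np.sin(Psi) / omega0
    Psi1 = (Psi
            + 2.0 * np.pi * v_par(I1) * (M - L * q(I1)) / (omega0 * R0 * q(I1))
            - 2.0 * np.pi * M * E_r(I1) / (omega0 * np.sqrt(I1)))
    return I1, (Psi1 + np.pi) % (2.0 * np.pi) - np.pi

def exit_side(I, Psi, phi, n_max=2000):
    """Vectorized exit label: -1 (L), +1 (R), 0 (no escape within n_max)."""
    I, Psi, out = I.copy(), Psi.copy(), np.zeros(I.shape, dtype=int)
    for _ in range(n_max):
        live = out == 0
        if not live.any():
            break
        I[live], Psi[live] = drift_map(I[live], Psi[live], phi)
        esc = live & (I > 1.0)
        out[esc] = np.where(Psi[esc] < 0.0, -1, 1)
    return out

# Uncertain fraction f(eps): pairs of initial conditions a distance eps
# apart (here shifted along I) that escape through different exits.
rng = np.random.default_rng(1)
phi, n_pts = 8.74e-3, 400
I0 = rng.uniform(0.7, 0.95, n_pts)
Psi0 = rng.uniform(-np.pi, np.pi, n_pts)
base = exit_side(I0, Psi0, phi)
f = {}
for eps in (1e-3, 1e-5):
    pert = exit_side(I0 + eps, Psi0, phi)
    both = (base != 0) & (pert != 0)       # discard non-escaping orbits
    f[eps] = np.mean(base[both] != pert[both])
```

A least-squares fit of $\log f(\varepsilon)$ versus $\log\varepsilon$ over several values of $\varepsilon$ then estimates the slope $\xi$, and hence $d = 2 - \xi$.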
In our simulations, we used a grid of $10^4\times10^4$ initial conditions placed in the chaotic region of the phase plane $I\times\Psi$ and iterated $10^5$ times. If an initial condition does not escape after this number of iterations, it is removed from the computation. For each value of $\varepsilon$, we repeat the computation of the uncertain fraction ten times, the local error being the standard deviation of the results. Ten values of $\varepsilon$ are used to make a diagram of $\log{f(\varepsilon)}$ versus $\log{\varepsilon}$, and the uncertainty exponent is determined by a least-squares fit. The global error is the average of the local errors over $\varepsilon$. Our results, for different values of $\phi$, are summarized in Table \ref{tab:dimension}. The box-counting dimension varies very little with $\phi$ and is very close to $2.0$, which is the limiting case of an area-filling curve. In all those cases, the basin boundary is extremely mixed. These results point to an extreme fractal escape basin structure.
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|} \hline
$\phi\,\,\,(10^{-3})$ & $\xi$ & $d$ & error \\\hline\hline
4.92 & 0.0001 & 1.999 & 0.001 \\\hline
7.65 & 0.0011 & 1.999 & 0.001 \\\hline
8.74 & 0.0034 & 1.997 & 0.002 \\\hline
10.38 & 0.0157 & 1.980 & 0.030 \\\hline
\end{tabular}
\caption{Uncertainty exponents and box-counting dimensions for the escape basin boundary for different values of the perturbation amplitude.}
\label{tab:dimension}
\end{table}
\subsection{Basin entropy}
We used the concept of basin entropy \cite{daza-entropy} to quantify the final-state uncertainty produced by the fractality, using ideas of information theory. We considered a bounded region $\mathcal{R}$ in the chaotic region of the phase space in Fig. \ref{fig:phase}, characterized by the presence of $N_A$ exits. We divided $\mathcal{R}$ into a fine mesh of $N$ boxes, each containing a grid of $\zeta \times \zeta$ sample initial conditions. The map associates to each initial condition on the grid a single variable (called a color) labeled from 1 to $N_A$. The basin entropy can be obtained by computing the information entropy of the boxes.
The color in each grid point represents the value of an integer random variable $j$. Let $p_{ij}$ denote the probability that the $j$th color is assigned to the $i$th box, i.e. the frequency of color $j$ among the $\zeta^2$ initial conditions in box $i$. Treating the chaotic orbits of our map as statistically independent, the basin entropy of the $i$th box is defined as
\begin{equation}
\label{eq:si}
S_i = -\sum_{j=1}^{m_i} p_{ij} \, \log{p_{ij}},
\end{equation}
where $m_i\in[1,N_A]$ is the number of colors inside the box (with $ 0 \log 0 = 0$ by convention). The total basin entropy for the region $\mathcal{R}$ is then
\begin{equation}
\label{eq:Sb}
S_b = \frac{1}{N}\sum_{i=1}^N S_i.
\end{equation}
In the case of only one exit, $S_b=0$ and there is no uncertainty in the final state caused by fractality. Moreover, if there are $N_A$ equiprobable exits, the basin entropy assumes its maximum value $S_b = \log{N_A}$, completely characterizing the escape basin structure. We also adapt this entropy calculation to evaluate the uncertainty related to the escape basin boundary. In order to do this, we repeat the same calculation described above, but considering only the $N_b$ boxes that contain more than one color, i.e. if a box $i$ contains only one color, we disregard $i$ in the calculation of the entropy. In this way, noting also that $S_i = 0$ for single-color boxes, we compute the basin boundary entropy as $S_{bb} = (1/N_b) \sum_i S_i = N \, S_b / N_b$.
In our case, there are two exits, $L$ and $R$, and the region $\mathcal{R}$ is the rectangle $0.3\leq I\leq 1.0$, $-0.5 \leq \Psi/(2\pi) \leq 0.5$, covered with a grid of $1000\times 1000$ initial conditions distributed over $4\times 10^4$ boxes ($\zeta=5$). For each box, we computed a maximum of $10^5$ iterations of the map for each initial condition therein, the orbits that do not escape up to this time being excluded from the statistics. Let $n_L$ and $n_R$ denote the number of points in each grid cell that escape to exits $L$ and $R$, respectively. The probability for the $i$th box is
\begin{equation}
p_{iL} = \frac{n_L}{n_L + n_R}, \qquad
p_{iR} = \frac{n_R}{n_L + n_R},
\end{equation}
so that the entropy for that grid cell is $S_i = -p_{iL}\log{p_{iL}} - p_{iR}\log{p_{iR}}$. Summing the entropies of all boxes and dividing by the number of boxes, we obtain the basin entropy $S_b$. The basin boundary entropy is obtained by excluding from the summation those boxes for which either $p_{iL}=0$ or $p_{iR}=0$. Since there are two exits, $S_b$ and $S_{bb}$ vary between $0$ and $\log{2}\approx 0.69$.
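The entropy computation above can be sketched as follows. The function operates on a precomputed array of exit labels (such as produced by the classification procedure of Sec. IV) and is demonstrated here on synthetic label fields, so that the sketch is self-contained and the limiting values of $S_b$ are known in advance:

```python
import numpy as np

def basin_entropies(labels, zeta=5):
    """Basin entropy S_b and boundary entropy S_bb from a square array of
    exit labels: -1 (exit L), +1 (exit R), 0 (no escape; excluded from the
    per-box probabilities). Boxes are zeta x zeta blocks of the array."""
    n = labels.shape[0] // zeta
    S = np.zeros(n * n)
    k = 0
    for i in range(n):
        for j in range(n):
            box = labels[i*zeta:(i+1)*zeta, j*zeta:(j+1)*zeta].ravel()
            box = box[box != 0]
            if box.size:
                for p in (np.mean(box == -1), np.mean(box == 1)):
                    if p > 0.0:
                        S[k] -= p * np.log(p)
            k += 1
    S_b = S.mean()                      # average over all N boxes
    mixed = S > 0.0                     # boxes containing both colors
    S_bb = S[mixed].mean() if mixed.any() else 0.0
    return S_b, S_bb

# Synthetic illustration: a well-mixed label field approaches the maximum
# S_b = log 2, while a field split cleanly into two halves gives S_b = 0.
rng = np.random.default_rng(0)
mixed_labels = rng.choice([-1, 1], size=(100, 100))
split_labels = np.ones((100, 100), dtype=int)
split_labels[:, :50] = -1
S_mixed = basin_entropies(mixed_labels)
S_split = basin_entropies(split_labels)
```

By construction $S_{bb} \geq S_b$, since the boundary entropy averages only over the mixed boxes.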
\begin{figure}
\centering
\includegraphics[height=2.5in]{entropia-5.jpg}
\caption{Escape basin entropy (blue), basin boundary entropy (black), and relative area ${\cal A}$ of the red escape basin ${\cal B}(R)$ (red) as a function of the perturbation strength $\phi$.}
\label{fig:entropy}
\end{figure}
In Fig. \ref{fig:entropy} we show our results for the basin and boundary entropies as a function of the amplitude of the drift waves $\phi$. As a trend, the entropies increase with $\phi$, which means that the red and green basins become progressively more mixed and intertwined. Moreover, we see that both $S_b$ and $S_{bb}$ follow the increase of the occupied area of the red basin, also depicted in Figure \ref{fig:entropy}. Inspecting Figure \ref{fig:basins}, we see that the green basin predominates over the red one, but with the increase of $\phi$ the red area becomes larger. The entropies increase as the red basin expands and becomes comparable to the green basin.
\section{Wada property}
The fractal structures we have discussed so far are related to boundaries between two different exit basins. However, it is interesting to investigate the case of three or more basins. Extending the previous reasoning, one would expect to find fractal structures in the corresponding boundaries, but for three or more basins an important question to be answered is: do almost all boundary points separate just two basins? If the answer is negative, conceptual problems may appear, since the basins are restricted to a limited phase space domain. As we will see, not all boundary points separate just two basins, and there is a considerable number of boundary points that simultaneously separate three or more basins, a non-trivial topological feature called the Wada property.
In order to discuss this result, some preliminary definitions are needed. Let the system have more than one escape basin. If a given point has a neighborhood consisting only of points belonging to a single escape basin, it is called an interior point. A point $P$ is a boundary point of the basin $\mathcal{B}$ if every open neighborhood of $P$ contains points of $\mathcal{B}$ and of at least one other basin $\mathcal{B'} \ne \mathcal{B}$. If the point $P$ is a boundary point of at least three different basins, then we say that $P$ is a Wada point. If the escape basin boundary is a fractal curve, then a fraction of its points can be Wada points, so that we say the boundary has the Wada property (partially or totally).
The boundaries possessing the Wada property have important physical consequences, given that a boundary point is arbitrarily close to points of at least three basins of escape \cite{yorke-1991}. Since an initial condition is always known up to a given uncertainty, in a system with the Wada property it is not possible to say with certainty through which exit a particle will escape. Hence, the Wada property is an extreme form of final-state sensitivity.
We considered three exits by dividing the tokamak wall $I = 1.0$ into three congruent segments denoted by $L: -\pi < \Psi \leq -\pi/3$, $C: -\pi/3 < \Psi \leq \pi/3$, and $R: \pi/3 < \Psi \leq \pi$. We evaluated the corresponding escape basins for these exits using the same procedures already described for two exits. Our results are shown in FIG. \ref{fig:wada}(a), for a perturbation amplitude $\phi= 9.84\times 10^{-3}$. Points belonging to the basins of $L$, $C$, and $R$ are painted red, blue, and green, respectively. The corresponding escape basins have a similar shape to those described before, with a fingerlike structure, and they also seem to be densely intertwined, but it is difficult (if not impossible) to discern the Wada property by a cursory inspection alone.
In order to test the Wada property in the escape basins produced by the map (\ref{map1})-(\ref{map2}), we have to prove that the unstable manifold of a periodic orbit intersects all the escape basins. This is a necessary but not sufficient condition for the Wada property to be fulfilled \cite{nusse-yorke-1996}. Since the rigorous demonstration of this property is not feasible for the map we are dealing with, we rely on numerical signatures of such behavior. In FIG. \ref{fig:wada}(b) we indicate that the unstable manifold emanating from an unstable fixed point, embedded in the chaotic region, intersects the three escape basins, which strongly suggests that the escape basins fulfill the Wada property. However, such evidence does not inform what fraction of the boundary points are Wada points.
\begin{figure*}
\centering
\subfloat[(a)]{\includegraphics[height=2.5in]{bacia-wada.jpg}}
\subfloat[(b)]{\includegraphics[height=2.5in]{wada-2-esse.jpg}}
\caption{(a) Basins of escape for the case of three exits. Points belonging to the basins of exits $L$, $C$, and $R$ are painted red, blue, and green, respectively. (b) A magnification of the black rectangle depicted in (a). The yellow points are a numerical approximation of the invariant unstable manifold crossing all escape basins.}
\label{fig:wada}
\end{figure*}
In order to characterize which boundary points have the Wada property, we used the so-called grid approach \cite{grid-approach}. Let ${\cal R}$ be a bounded region of the phase space (mostly in the chaotic region near the tokamak wall) containing $N_A\geq 3$ exits, and let us denote by $\mathcal{B}_j$, $j=1,2,\dots N_A$, the corresponding basins of escape. Using a fine rectangular mesh, the region ${\cal R}$ is divided into a set of non-overlapping boxes $\{b_1, b_2,\dots, b_k\}$. We iterate each point $(x,y)$ of ${\cal R}$ in order to find through which exit the particle escapes, so as to determine the corresponding escape basin $\mathcal{B}_j$.
We define $C(x,y) = j$ if $(x,y)\in\mathcal{B}_j$ and $C(x,y) = 0$ if $(x, y)$ belongs to none of the basins. We denote by $C(b_j)$ the collection of grid boxes consisting of $b_j$ and all boxes having at least one point in common with $b_j$. The number of different colors in $C(b_j)$ is $M(b_j)$. Provided $M(b_j)\neq 1, N_A$, we take the two closest boxes in $C(b_j)$ with different colors and draw a line segment between them, calculating the color of its midpoint. If the color of the segment midpoint is such that we have all the possible colors inside $C(b_j)$, then $M(b_j)=N_A$ and we stop the procedure. Otherwise, we choose intermediate points in this line segment and repeat this procedure until $M(b_j) = N_A$, unless the number of points exceeds a specified limit.
After having obtained the values of $M(b_j)$ for all grid boxes, we determine the set $G_m$ of those original boxes such that $M(b_j) = m$, for a given integer $m$. If $m=1$, we have the set $G_1$ containing points belonging to the interior of some escape basin (interior points). Analogously, the set $G_2$ contains points belonging to the boundary between two escape basins, that is, there are two different colors inside the set $C(b_j)$ (boundary points). In the same way, $G_3$ consists of points that belong to the boundary between three basins, i.e. the set $G_3$ contains Wada points (points satisfying the Wada property).
Since the procedure outlined involves a number of refinements, let us denote by $G_m^q$ the set $G_m$ obtained at the $q$-th procedure step. We expect that, as $q$ goes to infinity, the sequence of refinements converges to a final set $G_m$, so that we compute the following quantity
\begin{equation}\label{eq:W_m}
W_m = \lim_{q\rightarrow\infty} \frac{\mathcal{N}(G_m^q)}{\sum_{j=2}^{N_A}\mathcal{N}(G_j^q)}, \qquad (m = 2, 3, \ldots N_A),
\end{equation}
where $\mathcal{N}(G_j^q)$ is the number of points of the set $G_j$ at the $q$th refinement step.
In the case of $W_m=0$, the system has (almost) no grid boxes that belong to the boundary separating $m$ escape basins. If $W_m=1$, then (almost) all the boxes belong to the common boundary of $m$ escape basins. The system is said to have the Wada property if $W_{N_A}=1$, given that it is always possible to find at least a third color between two other colors. The system is said to be partially Wada when $0<W_{m}<1$ for some $3 \le m \le N_A$.
In our problem, with $N_A=3$ escape basins, we calculated $W_2$ and $W_3$ for an increasing number $q$ of procedure steps of computing colors at the intermediate points between adjacent boxes, namely
\begin{align}
W_2 &= \frac{\mathcal{N}(G_2)}{\mathcal{N}(G_2) + \mathcal{N}(G_3)}, \label{eq:W_2} \\
W_3 &= \frac{\mathcal{N}(G_3)}{\mathcal{N}(G_2) + \mathcal{N}(G_3)}. \label{eq:W_3}
\end{align}
We checked, at each refinement step $q$, whether or not points of $G_2$ may belong to $G_3$ by testing $2(q-2)$ initial conditions intermediate between the central box and a neighbor box with a different color. If any of these initial conditions presents the missing color, the central box is reclassified as $G_3$.
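The refinement steps require iterating the map at each intermediate point and are omitted here; as a rough sketch of the classification idea only, the following (hypothetical, zeroth-refinement) version counts the distinct basin colors in the $3\times 3$ neighborhood of each cell of a precomputed label grid and evaluates $W_2$ and $W_3$ as in Eqs.~(\ref{eq:W_2})-(\ref{eq:W_3}):

```python
import numpy as np

def wada_grid(labels):
    """Zeroth-step grid classification (no midpoint refinements):
    M(b) = number of distinct colors in the 3x3 neighborhood of cell b.
    Returns W_2 and W_3 for a three-basin label grid (values 0, 1, 2)."""
    ny, nx = labels.shape
    counts = {1: 0, 2: 0, 3: 0}
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            m = len(np.unique(labels[i - 1:i + 2, j - 1:j + 2]))
            counts[m] += 1                 # G_1, G_2 or G_3 candidate
    boundary = counts[2] + counts[3]
    return counts[2] / boundary, counts[3] / boundary

# Toy grid: three angular sectors meeting at a single triple junction
ny = nx = 101
ii, jj = np.mgrid[0:ny, 0:nx]
theta = np.arctan2(ii - ny // 2, jj - nx // 2)
labels = np.digitize(theta, [-np.pi / 3, np.pi / 3])   # colors 0, 1, 2
W2, W3 = wada_grid(labels)
print(W2, W3)
```

For this toy geometry almost all boundary cells separate only two basins ($W_2 \approx 1$) and only the neighborhood of the triple junction contributes to $W_3$; a Wada system reverses this balance, with $W_3$ close to one.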
\begin{figure*}
\centering
\subfloat[(a)]{\includegraphics[height=2.5in]{grid.jpg}}
\subfloat[(b)]{\includegraphics[height=2.5in]{W2W3.jpg}}
\subfloat[(c)]{\includegraphics[height=2.5in]{hist.jpg}}
\caption{(a) Basin structure of Figure \ref{fig:wada}(a), showing points belonging to the $G_1$ set (internal points, black), points of the $G_2$ set (boundary points between two basins, red), and points of the $G_3$ set (boundary points between three basins, green), after $q = 20$ refinement steps. (b) Values of the quantities $W_2$ (blue) and $W_3$ (orange) as a function of the refinement step. (c) Histogram (semilog) showing the number of reclassified points for various numbers of refinement steps.}
\label{fig:grid}
\end{figure*}
Our results, after $20$ refinement steps, are shown in Figure \ref{fig:grid}(a), where we plot the points classified as $G_1$ (black points), $G_2$ (red points) and $G_3$ (green points). We observe a predominance of Wada points belonging to the set $G_3$, in agreement with the complex basin structure displayed by Fig. \ref{fig:wada}. Curiously, the number of interior points is relatively small, as is the number of points belonging to a boundary between only two escape basins. This suggests that the Wada property holds to a quite large degree for our system.
The values of $W_2$ and $W_3$ are shown in Figure \ref{fig:grid}(b) as a function of $q$. We observe a fast convergence after just $q=4$ iterations, yielding $W_2\approx 0.0424$ and $W_3 \approx 0.9576$. Hence the basins of escape are partially Wada but, since {\it circa} $96 \%$ of the boundary points are Wada points, the system is close to being totally Wada. The fast convergence can also be appreciated in Figure \ref{fig:grid}(c), which shows a histogram of the number of points initially classified as belonging to the set $G_2$ but reclassified to the $G_3$ set in each refinement step, after a large number of evaluations of the quantities $W_2$ and $W_3$. We see that most of the convergence is obtained after $3$ to $5$ steps, and the number of reclassified points decreases exponentially to zero as $q$ increases.
\section{Conclusions}
Chaotic particle transport in magnetized plasmas is a subject of utmost interest in view of its applications in the diffusion of impurities in tokamaks, for example. Charged impurities can be treated as passive tracers, advected by a time-dependent ${\bf E}\times{\bf B}$ flow. From this point of view, particle dynamics can be cast into a Hamiltonian system with one-and-a-half degrees of freedom. As long as we consider the evolution of a limited number of particle impurities, our model would be preferable to a kinetic description, for example. In addition, the Hamiltonian nature of the equations is useful to explain the formation and evolution of a chaotic region near the peripheral region of the tokamak.
In this work, we investigated the escape of chaotic particle orbits using an area-preserving Poincaré map obtained from a drift Hamiltonian, using realistic profiles and parameters. While many related works focus on statistical properties of particle diffusion, we rather concentrate on the particle dynamics itself, identifying those sets of initial conditions leading to particle escape through exits placed at the tokamak boundary. Those exits can be adapted to include scenarios where divertor plates are suitably placed so as to reduce particle fluxes on sensitive parts of the tokamak inner wall.
Due to the underlying dynamical structure of the chaotic orbit which leads to particle escape, the escape basins and their boundaries have fractal characteristics, which have been identified and, whenever possible, quantified so as to measure the amount of final-state uncertainty.
Firstly, we divided the wall into two exits, through which the particles in the chaotic region (in phase space) can escape. The corresponding escape basins and their common boundary are fractal. The structure of the escape basins features an infinite number of fingers which follow the intersection of a basin boundary segment with the stable manifold of an unstable periodic orbit, embedded in the chaotic region. We verified this fact by direct numerical computation of the invariant manifolds.
In addition, we quantified the fractality through the box-counting dimension of the escape basin boundary, using the uncertainty exponent method. Our numerical results show a dimension close to the dimension of the phase space itself (equal to two) for a wide interval of perturbation strength values, indicating a high degree of fractal behavior. This has important consequences for the predictability of the final state of the system: even if we achieve a great improvement in the uncertainty of the initial condition, this will have nearly no effect on the predictability of the final state of the system. In other words, it is practically impossible to predict through which exit a given particle will escape.
Since the values of the box-counting dimension are only weakly affected by the intensity of the perturbation, we used other quantitative diagnostics of fractal behavior. Accordingly, we calculated the corresponding basin entropy and basin boundary entropy. These quantities may vary between zero, when there is no uncertainty in the final state, and a maximum value of $\log 2$ (in the case of two exits). In the latter case, the basins are so intertwined that, for a randomly chosen particle, the probability of escaping through either exit is the same (equiprobable escape). We found that both entropies increase with the amplitude of the fluctuations, in the same way as the relative area occupied by one of the escape basins. Hence we conclude that this provides a better characterization of fractality than the dimension itself.
A non-trivial and challenging topological property of fractal basins is the Wada property, for the case of three or more exits. In our work, we divided the tokamak wall into three exits of the same size, in order to investigate the Wada property, i.e. boundary points having in their neighbourhood points belonging to all three basins. A qualitative way to suggest the existence of the Wada property is to show that the unstable manifold stemming from an unstable periodic orbit intersects all basins, which we have verified numerically.
Moreover, a quantitative way to assess the degree to which the Wada property is fulfilled is the grid approach. Using this method of successive refinements, we found that, for a given value of the perturbation strength, $4.24\%$ of the boundary points separate two escape basins, whereas $95.76\%$ of the boundary points separate three basins, thus displaying the Wada property; the system is therefore partially but almost completely Wada.
The physical consequences of the Wada property are essentially the same as those deriving from the fractal nature of the escape basin boundaries. The difference, in the former case, is that the concept of a fractal boundary, for three or more exits, acquires a deeper and more precise meaning from the mathematical point of view.
The theoretical model we used to describe the ${\bf E}\times{\bf B}$ drift flow, influenced by electrostatic fluctuations, has some evident drawbacks. Firstly, we only considered one resonant mode, which is clearly a simplification, given the broadband nature of the measured spectra of electrostatic fluctuations in tokamaks. However, the addition of more resonant modes, while more realistic, would not modify the chaotic region in a way that would affect our main conclusions. By the same token, the assumption that the fluctuation amplitude is constant over a limited radius is rather strong, but improvements would not modify the main results of this paper. Our results are similar to those of the two drift waves model \cite{amanda-phys-a}, given that the structures studied are consequences of the dynamics underlying chaotic orbits in non-integrable area-preserving systems. Finally, since the theoretical and computational analysis shown in this paper has been applied to ${\bf E}\times{\bf B}$ flows, we speculate that our results may be of interest in other plasma configurations displaying these features, like Hall thrusters \cite{yves2020,yves2021,hall,hall2} and magnetron discharges such as those used for High Power Impulse Magnetron Sputtering (HiPIMS) \cite{magnetrons1}, Penning sources \cite{magnetrons2}, and cusped-field thrusters.
\section*{Acknowledgments}
We would like to thank Gabriel Grime for his useful discussions and suggestions. R. L. V. gratefully acknowledges the hospitality extended to him during his stay at the Aix-Marseille University. The authors thank the financial support from the Brazilian Federal Agencies (CNPq) under Grant Nos. 407299/2018-1, 302665/2017-0, 403120/2021-7, and 301019/2019-3; the São Paulo Research Foundation (FAPESP, Brazil) under Grant Nos. 2018/03211-6 and 2022/04251-7; and support from Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) under Grants No. 88887.522886/2020-00, 88881.143103/2017-01 and Comité Français d’Evaluation de la Coopération Universitaire et Scientifique avec le Brésil (COFECUB) under Grant No. 40273QA-Ph908/18.
\section{Introduction}
Photonic architectures have emerged as a viable candidate for the development of quantum information processing protocols. Photons are immune from environmental disturbances, readily manipulated with classical tools, and subject to high efficiency detection \cite{obrien2009photonic}. For these reasons, many proof-of-principle experiments have been demonstrated that utilize either optical qubits in the discrete variable (DV) regime \cite{walther2005experimental} or fluctuations of the quantized electric field (termed ``q-modes") in the continuous variable (CV) regime \cite{Furusawa2011}. Yet, an interaction among various photonics channels must be established in order to implement a universal set of quantum logical operations (i.e., 2-qubit gates).
While strong nonlinear interactions at the single-photon level are difficult to achieve, it is possible to initiate an interaction among photonic channels through the act of measurement.
Such measurement-induced nonlinearities are the basis of linear optical quantum computing \cite{kok2007linear,ralph2010optical}. The KLM scheme of quantum computing \cite{knill2001scheme} utilizes single photon sources, a linear optical network, and introduces the requisite nonlinearity with photon-counting detectors. Although the KLM scheme is fundamentally nondeterministic, it may, in principle, be rendered deterministic with the addition of entangled multiphoton ancilla states. Nonetheless, the overhead necessary to incorporate these states grows rapidly and presents a challenge to the practical scalability of the scheme.
An alternative approach has recently emerged that exploits the act of projective measurement itself as a means for achieving quantum gates \cite{raussendorf2001one}. In particular, a quantum logical operation can be realized by measuring the state of single nodes contained within a highly entangled multipartite state - the cluster state \cite{Nielsen2006,Lloyd2012}. Due to the multipartite nature of the entanglement, the result of a measurement propagates throughout the cluster in a deterministic fashion. Importantly, different logical gates are implemented by altering only the basis in which individual nodes are measured; consequently, the choice of basis does not necessitate a change in the cluster structure itself. As a result, the primary difficulty for implementing measurement-based computing schemes lies in the generation of the cluster state, which requires large scale entanglement.
Optical cluster states have been successfully constructed both in the DV \cite{walther2005experimental} and CV \cite{vanLoock2008,Furusawa2011} regimes. Continuous-variable entanglement, which is the domain of the current work, is of particular interest since the electric field is efficiently controlled and measured with classical devices, and the unconditional nature of photon generation allows for both high signal-to-noise ratios and data transfer rates. The traditional methodology to construct CV clusters is to introduce a series of independent
squeezed states of light into a linear optical network that is arranged in such a way as to produce the desired entanglement \cite{vanLoock2007}. Each node contained within these states, however, necessitates its own source of nonclassical states. Consequently, the incorporation of a large number of such modes rapidly encounters a complexity ceiling in terms of scalability and flexibility.
Alternatively, a multimode source may be exploited in which all of the requisite modes are copropagating within a single beam. One avenue toward cluster state generation exploits temporal encoding \cite{yokoyama2013optical}. Additionally, spatially multimode beams have proven useful for the generation of cluster states when detected with a spatially-resolved, multi-pixel apparatus \cite{Armstrong2012}. However, achieving spatial degeneracy over multiple modes is technically challenging, and an alternative approach is to generate frequency multimode beams, which may be accomplished with optical cavities that are resonant for a large number of copropagating frequency modes. Toward this end, optical frequency combs possess an intrinsic highly multimode structure due to the large number of individual frequencies contained within the comb. The frequency comb has already proven a reliable source of cluster states as the downconversion of a single pump photon in an OPO with a broad phase-matching bandwidth creates sets of entangled q-modes \cite{pfister2011,pfister2008,Chen2013}.
The present work demonstrates the use of an optical frequency comb to synchronously drive the downconversion process. The result is a highly multimode quantum state of light
that may be described either as a product of uncorrelated non-classical states that span the entire breadth of the downconverted spectrum and have specific pulse shapes, or as a highly entangled multipartite state. We show that the combination of homodyne detection with ultrafast pulse shaping permits recovery of the state's full covariance matrix in a basis of up to eight modes. This description reveals that multiple cluster states are simultaneously present in the multimode beam.
The paper is organized as follows: Section~\ref{sec:theory} outlines the theoretical principles governing formation of the quantum comb as well as the various bases in which it may be analyzed. The quantum comb is characterized in terms of its covariance matrix. The methodology by which this matrix is obtained is detailed in Section~\ref{sec:methods}, and demonstration of the state's multimode character is presented in Section~\ref{sec:results}. Given the covariance matrix, Section~\ref{sec:clusters} illustrates how the quantum comb may be examined in the various bases that reveal the presence of cluster states. Finally, concluding remarks and an outlook toward future development are discussed in Section~\ref{sec:discussion}.
\section{Theoretical Description}
\label{sec:theory}
Nonclassical CV photonic states are efficiently generated with an optical parametric oscillator (OPO). In the frequency comb regime, the high peak powers associated with ultrafast pulses elicit a strong nonlinear material response, which, in turn, provides an efficient platform for the creation of highly nonclassical states \cite{wenger2004non}. Moreover, a femtosecond pulse train contains upwards of $\sim 10^{5}$ individual frequency components, and is therefore readily described as a multi-frequency-mode object. The simultaneous downconversion of all these frequency elements in a nonlinear optical element induces an intricate ensemble of both symmetric and asymmetric frequency correlations with respect to the carrier frequency $\omega_0$ that extends across the breadth of the resultant comb~\cite{Pinel2012} (Fig.~\ref{fig-PDC}). These correlations are preserved provided that the optical cavity is synchronously pumped by the laser. If the resonant frequencies of the cavity are written as $\omega_{p} = \omega_{0} + p \cdot \omega_{\textrm{FSR}}$, with $p\in\mathbb{Z}$, this condition implies that $\omega_{\textrm{FSR}}$ is both the cavity free spectral range and the repetition rate of the pump laser. Such a device is called a SPOPO (Synchronously Pumped OPO).
\ffig{fig-PDC}{figure1}{Parametric downconversion of a femtosecond comb. The splitting of a single pump photon of frequency $2 \omega_{0}$ by pathway $1$ creates entanglement between the frequencies $\omega_{a}$ and $\omega_{b}$. An additional pump photon may downconvert by pathway $2$ and correlate frequencies $\omega_{a}$ and $\omega_{c}$. A correlation is also established between frequencies $\omega_{b}$ and $\omega_{c}$ by virtue of their mutual link to $\omega_{a}$. In this manner, every frequency of the downconverted comb becomes correlated with every other member of the comb.}{65mm}
The Hamiltonian corresponding to a single pass in the crystal that describes the parametric coupling between different cavity modes is then
\begin{equation}
\hat{H} = \textrm{i} \hbar g \sum\nolimits_{m,n} L_{m,n}\, \hat{a}_{m}^{\dagger} \hat{a}_{n}^{\dagger} + \textrm{h.c.},
\label{hamiltonian}
\end{equation}
where $g$, proportional to the pump amplitude, regulates the overall interaction strength and $\hat{a}_{m}^{\dagger}$ is the photon creation operator associated with a mode of frequency $\omega_{m}$. The coupling strength between modes at frequencies $\omega_m$ and $\omega_n$ is governed by the matrix $L_{m,n}=f_{m,n} \cdot p_{m+n}$, where $f_{m,n}$ is the phase-matching function \cite{walmsley2001,walmsley2008} and $p_{m+n}$ is the pump spectral amplitude at frequency $\omega_{m}+\omega_{n}$ \cite{patera2010}. In the absence of loss, the evolution of a single mode $\hat{a}_{m}$ is then specified by
\begin{equation}
\frac{d \, \hat{a}_{m} }{dt} = g \sum\nolimits_{n} L_{m,n}\, \hat{a}_{n}^{\dagger},
\end{equation}
which reveals that following the downconversion event, each frequency mode is coupled to every other mode with a strength moderated by $L_{m,n}$. Consequently, the downconversion of an ultrafast frequency comb has the potential to serve as a rich source of multipartite entanglement \cite{Valcarcel12}.
\subsection{Squeezed Mode Basis}
An alternative description of the state is obtained upon diagonalizing the coupling matrix $L_{m,n} = \sum_{k} \Lambda_{k}X_{k,m}\, X_{k,n}$, where $\{\Lambda_k\}$ and $\{X_k\}$ are its eigenvalues and eigenvectors, respectively \cite{Leuchs2002}. A new set of ``supermodes'' $\hat{S}_{k}$ may be defined that are linear combinations of the original, single frequency modes: $\hat{S}_{k} = \sum_{i}X_{k,i} \, \hat{a}_{i}$. The total Hamiltonian is then written as a sum of single-mode squeezing Hamiltonians independently acting on each supermode \cite{patera2010}:
\begin{equation}
\hat{H} = \textrm{i} \hbar g \sum\nolimits_{k} \Lambda_{k} \, \hat{S}_{k}^{\dagger \, 2}+ \textrm{h.c.}
\label{sqz-hamiltonian}
\end{equation}
The eigenspectrum $\Lambda_k$ specifies the number of non-vacuum, uncorrelated squeezed states contained in the SPOPO output and their associated degree of squeezing. Thus, the quantum comb may be described as either an entangled state in the basis of individual frequencies or as a set of uncorrelated squeezed states in the supermode basis. As the individual supermodes are decoupled, it is straightforward to describe the effect of the cavity. Since the cavity does not spectrally filter the optical state, each eigenvector is resonant within the cavity, and a standard type-I OPO calculation is applied to each mode as a means to infer the output state. It follows from \cite{patera2010} that at the cavity threshold and zero Fourier frequency, the noise of the squeezed quadrature normalized to vacuum is given by
\begin{equation}\label{sqz-level}
V_k = \left(\frac{\Lambda_0-|\Lambda_k|}{\Lambda_0+|\Lambda_k|}\right)^2.
\end{equation}
Assuming a Gaussian shape for the coupling matrix $L_{m,n}$, these eigenvalues may be written as \cite{patera2010}:
\begin{equation}
\label{eq:phases}
\Lambda_k = \Lambda_0 \, \rho^k
\end{equation}
with
\begin{eqnarray}
\Lambda_0 &=& \pi^{\frac{1}{4}} \sqrt{\frac{2}{ \tau_\textrm{p} \, \omega_{\textrm{FSR}}}} \cdot \sqrt{\frac{\tau_\textrm{p}^2}{\tau_1^2 + \tau_\textrm{p}^2}} \, , \nonumber \\
\rho &=& -1 + 2 \sqrt{\frac{\tau_2^2}{\tau_1^2 + \tau_\textrm{p}^2}},
\end{eqnarray}
where $\tau_1= | k_\textrm{p}' - k_\textrm{s}'| l/\sqrt{10}, \, \tau_2 = \sqrt{|k_\textrm{s}'' | l } /(4 \sqrt{3})$, and $\tau_\textrm{p}$ is the temporal duration of the pump pulse. The nonlinear crystal length is specified by $l$ while $k'$ and $k''$ are the first and second derivatives, respectively, of the frequency-dependent wave vector for the pump (p) and signal (s) pulses. For realistic experimental parameters $\rho \simeq -1$ \cite{patera2010}. Hence, Eq.~\ref{eq:phases} corresponds to an alternating geometric progression of ratio $\rho$ whose first element $\Lambda_0$ is positive.
The quadrature in which the $k^{\textrm{th}}$-mode of the field exhibits squeezing is determined by the phase of the corresponding eigenvalue $\Lambda_{k}$ \cite{patera2010}. As a result, the squeezing quadrature is predicted to alternate between the $x$ and $p$ quadratures with increasing mode index $k$. This theoretical prediction is well-verified in our experiment.
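These predictions can be checked numerically by diagonalizing a model coupling matrix. The sketch below uses purely illustrative Gaussian widths (not the experimental parameters) for the pump spectrum and the phase-matching profile, with the pump chosen narrower in frequency so that the eigenvalue ratio $\rho$ comes out negative:

```python
import numpy as np

# Model coupling matrix L_{m,n} = p_{m+n} f_{m,n}: Gaussian pump spectrum
# (width 15 pixels in m+n) times a Gaussian phase-matching profile
# (width 40 pixels in m-n); both widths are illustrative assumptions.
N = 101
m = np.arange(N) - N // 2
M1, M2 = np.meshgrid(m, m, indexing='ij')
L = np.exp(-(M1 + M2) ** 2 / (2 * 15.0 ** 2)) * np.exp(-(M1 - M2) ** 2 / (2 * 40.0 ** 2))

# L is real and symmetric: its eigenvectors are the supermodes S_k and its
# eigenvalues Lambda_k fix the squeezing of each supermode.
lam, modes = np.linalg.eigh(L)
order = np.argsort(-np.abs(lam))      # sort by decreasing |Lambda_k|
lam, modes = lam[order], modes[:, order]

# Squeezed-quadrature noise at threshold, Eq. (sqz-level); V_0 = 0 (ideal)
Vk = ((np.abs(lam[0]) - np.abs(lam)) / (np.abs(lam[0]) + np.abs(lam))) ** 2
print(np.sign(lam[:6]))   # alternating signs -> alternating squeezing quadrature
print(Vk[:6])             # noise rises toward the vacuum level with k
```

The eigenvalue signs alternate and their magnitudes decay roughly geometrically, reproducing the structure of Eq.~\ref{eq:phases}: the leading supermodes are the most strongly squeezed, with the squeezing quadrature flipping between $x$ and $p$ as the mode index grows.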
\subsection{Cluster State Basis}
\label{sse:clu_state_basis}
The output state of the SPOPO may be analyzed in a variety of different mode bases, and each basis reveals a specific entanglement structure \cite{Braunstein2005}. One class of entangled states of particular relevance for quantum information processing is that of cluster states. A cluster state is a highly entangled multimode state associated with a graph \cite{menicucci2011graphical}. This graph contains nodes that represent the various modes of the cluster state. An adjacency matrix $V$, which is real and symmetric, describes this graph and summarizes the entanglement connections among the various nodes (see Fig.~\ref{fig:clusters} for concrete examples).
It has been shown that the cluster state defined by the adjacency matrix $V$ may be constructed from a set of independently $p$-squeezed input modes by combining them with a linear optical network in the appropriate manner \cite{vanLoock2007}. The action of this optical network can be mathematically described by a unitary matrix $U_V$ that transforms the collection of $N$ uncoupled $p$-squeezed modes into a $N$-mode cluster state. The mathematical relation between $V$ and $U_V$ is detailed below.
\ffig{fig:clusters}{figure2}{Four-mode linear, square and T-cluster states (graphs and respective adjacency matrices $V_{\text{lin}}$, $V_{\text{square}}$, $V_{\text{T}}$).}{65mm}
The nullifier operators of a $N$-mode cluster state are derived from the adjacency matrix $V$ and may be written as:
\begin{equation} \label{eq:nullifier}
\hat{\delta}_i = \left( \hat{p}_{i}^{C} - \sum_{j} V_{ij} \cdot \hat{x}_{j}^{C} \right),
\end{equation}
where $\hat x_i^C$ and $\hat p_i^C$ are the quadrature operators for the node $\hat a_i^C $, defined such that $\hat a_i^C = \hat x_i^C + \textrm{i} \hat p_i^C $, and $i,j = 1,...,N$. Theoretically, a state is considered a cluster state of the adjacency matrix $V$ if and only if the variance of each nullifier approaches zero as the squeezing of the input modes approach infinity. From this definition, a unitary matrix $U_V$ may be constructed that defines the optical network for constructing a given cluster graph \cite{vanLoock2007}.
In order to determine the class of unitary matrices corresponding to a given adjacency matrix $V$, the unitary is decomposed as $U_{V} = X_{V} + \textrm{i} Y_{V}$, where $X_{V} = \textrm{Re} \left[ U_{V} \right]$ and $Y_{V} = \textrm{Im} \left[ U_{V} \right]$. The requirement that the variances of the nullifiers approach zero as squeezing goes to infinity is satisfied given the relation \cite{vanLoock2007}:
\begin{equation} \label{eq:LinNetwork}
Y_{V} = V \, X_{V}.
\end{equation}
After exploiting the fact that $U_V$ is a unitary matrix (i.e., $X_VX^{T}_V+Y_VY^{T}_V = 1$ \cite{dutta1995real}), an initial unitary matrix $U_{V}^0$ is found for the desired graph state.
Importantly, the unitary matrix $U_{V}$ that creates a given cluster state is not unique, and the corresponding nullifier criterion of Eq.~\ref{eq:nullifier} is satisfied for a collection of different unitary matrices. In the case of finite squeezing, certain unitary matrices are more efficacious than others at creating the target cluster state (in the sense that they lead to a lower value of the nullifier variances). Other possible solutions may be obtained from the initial $U_{V}^0$ by multiplying it by a general, real orthogonal matrix $\mathcal{O}$ with $\mathcal{O} \mathcal{O}^T = \mathcal I$, i.e. $U'_{V} = U^0_{V} \mathcal{O}$ \cite{ferrini2013compact}. Given that $U^0_{V}$ forms a cluster state, it is straightforward to demonstrate that $U'_{V}$ also satisfies Eq.~\ref{eq:LinNetwork}. Thus, upon multiplying a specific $U_{V}$ by any orthogonal matrix, it is possible to span the complete space of physical unitary matrices satisfying Eq.~\ref{eq:LinNetwork}.
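One explicit solution of Eq.~\ref{eq:LinNetwork} that is compatible with unitarity is $X_{V} = (\mathcal{I} + V^2)^{-1/2}$ and $Y_{V} = V X_{V}$, which exploits the fact that $V$ is real and symmetric. The following numerical sketch (using the four-mode linear graph $V_{\text{lin}}$ of Fig.~\ref{fig:clusters} as an example) verifies this construction; other admissible solutions differ from it by a real orthogonal matrix:

```python
import numpy as np

def cluster_unitary(V):
    """One unitary U_V = X + iY with Y = V X and X X^T + Y Y^T = I,
    obtained from X = (I + V^2)^(-1/2) for a real symmetric V."""
    w, O = np.linalg.eigh(V)
    X = O @ np.diag((1 + w**2) ** -0.5) @ O.T
    return X + 1j * (V @ X)

# four-mode linear cluster graph 1-2-3-4 (V_lin of Fig. 2)
V_lin = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], float)
U0 = cluster_unitary(V_lin)
X, Y = U0.real, U0.imag
assert np.allclose(Y, V_lin @ X)                 # Eq. (Y_V = V X_V)
assert np.allclose(U0 @ U0.conj().T, np.eye(4))  # unitarity
```

Right-multiplying `U0` by any real orthogonal matrix then spans the remaining admissible solutions, as discussed above.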
In the case of the SPOPO, a large set of supermodes is available with each mode exhibiting a noise level given by Eq.~\ref{sqz-level}. In order to construct a cluster state from these modes, the $N$ modes displaying the highest degree of squeezing are selected, and the appropriate basis change defined by the matrix $U_{V}$ is applied. However, as the SPOPO output modes are not all squeezed along the same quadrature component, it is necessary to include an extra diagonal matrix $\Delta_{\text{sqz}} = \text{diag}\{e^{\textrm{i} \phi_1}, ..., e^{\textrm{i} \phi_N}\}$ that rotates each mode's squeezed quadrature into the common $\hat p$ direction. The transformation from the SPOPO squeezed modes to the desired cluster modes is then written as
\begin{equation} \label{eq:LinNetworkFourier}
\vec{a} \, ^{C} = U_{V} \, \Delta_{\text{sqz}} \, \vec{S} \, ,
\end{equation}
where $\vec{a} \, ^{C} = (\hat{a}_{1} ^{C}, ...,\hat{a}_{N} ^{C})$ is the collection of mode operators corresponding to each cluster node and $\vec S = (\hat S_1,...,\hat S_N)$ is the set of the leading $N$ supermodes as defined in Eq.~\ref{sqz-hamiltonian}. The remaining supermodes are left unchanged by the transformation and are not relevant for the $N$-mode cluster state considered here. In the present circumstance, a basis change $U_V$ is equivalent to a specific choice of measurement basis, which will be utilized to reveal cluster correlations embedded in the optical comb structure.
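For finite squeezing, the nullifier variances of Eq.~\ref{eq:nullifier} can be checked numerically by propagating the input covariance matrix through the symplectic action of the network. The sketch below assumes equal $p$-squeezing $r$ on every input mode, a vacuum variance of 1, and the explicit solution $X_{V} = (\mathcal{I}+V^2)^{-1/2}$ (the phase corrections of $\Delta_{\text{sqz}}$ are taken to be already absorbed into the network):

```python
import numpy as np

# Four-mode linear cluster graph 1-2-3-4 (V_lin of Fig. 2)
V = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
n = len(V)

# One admissible network: X = (I + V^2)^(-1/2), Y = V X
w, O = np.linalg.eigh(V)
X = O @ np.diag((1 + w**2) ** -0.5) @ O.T
Y = V @ X

def nullifier_variances(r):
    """Variances of delta_i = p_i^C - sum_j V_ij x_j^C when the N input
    modes are p-squeezed by r (vacuum variance = 1); the quadratures
    transform with the symplectic matrix S of the unitary U = X + iY."""
    S = np.block([[X, -Y], [Y, X]])
    cov_in = np.diag(np.r_[np.full(n, np.exp(2 * r)),    # anti-squeezed x
                           np.full(n, np.exp(-2 * r))])  # squeezed p
    cov = S @ cov_in @ S.T
    N = np.hstack([-V, np.eye(n)])
    return np.diag(N @ cov @ N.T)

# below shot noise for finite squeezing, vanishing as r grows
assert np.all(nullifier_variances(1.0) < 1.0)
assert np.all(nullifier_variances(8.0) < 1e-4)
```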
\section{Experimental Methods} \label{sec:methods}
The laser source is a titanium-sapphire mode-locked oscillator delivering $\sim 140~\textrm{fs}$ pulses ($\sim 6~\textrm{nm}$ FWHM) centered at 795~nm with a repetition rate
of 76~MHz. This source is frequency doubled in a 0.2~mm BIBO crystal (single pass), and the resultant second harmonic pumps an OPO, which consists of a $2~\textrm{mm}$ BIBO crystal contained within a $\sim 4~\textrm{m}$ ring cavity exhibiting a finesse of $\sim 27$. The length of the cavity is locked to the inter-pulse spacing by injecting a phase-modulated near-infrared beam in a direction counter-propagating to the pump and seed. This locking beam is phase-modulated at 1.7~MHz with an electro-optic modulator (EOM), and locking of the cavity length is accomplished with a Pound-Drever-Hall strategy. The cavity is operated below threshold and in an unseeded configuration. Frequency correlations of the vacuum output are investigated with homodyne detection in which the local oscillator (LO) pulse form is manipulated with ultrafast pulse-shaping methodologies.
\ffigDouble{fig-experiment}{figure3}{Experimental
layout for the creation and characterization of multimode
frequency combs. A titanium-sapphire oscillator produces a $76
\textrm{MHz}$ train of $\sim 140 \textrm{fs}$ pulses centered at
795~nm. Its second harmonic synchronously pumps an OPO. The cavity output is analyzed with
homodyne detection, where the spectral amplitude and phase of the local
oscillator (LO) are shaped. The LO shaper is depicted here in a transmissive geometry for clarity. By varying the relative phase between the shaped LO and the SPOPO output, the $x$- and $p$-quadrature noises of the quantum state projected onto the LO mode are measured.}{115mm}
A 4f-configuration shaper is constructed in a reflective geometry with a programmable $512 \times 512$-element liquid-crystal modulator in the Fourier plane. Application of a periodic spatial grating to the spatial light modulator induces diffraction of the spectrally-dispersed light. The amplitude and phase of the diffracted spectrum are independently controlled by the groove depth and position of the spatial grating, respectively \cite{nelson2005}. By varying the relative phase between the
shaped LO and the SPOPO output, a measurement is obtained of the $x$- and $p$-quadrature noises for the quantum state projected onto the spectral form of the LO mode (see Fig. \ref{fig-experiment}).
Light detection is performed with silicon photodiodes ($\sim 90\%$ detection efficiency, 100~MHz detection bandwidth), and the homodyne visibility is $92\%$. The noise level of sidebands situated 1~MHz from the optical carrier is then examined. The cumulative loss of the system is taken to be $\sim 25\%$, and the measured signals are corrected accordingly. The SPOPO generates vacuum squeezed at a level of $\sim 6~\textrm{dB}$ (corrected) when projected onto a local oscillator pulse taken directly from the titanium-sapphire laser.
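The correction procedure is not detailed above; a minimal sketch, assuming the standard beamsplitter loss model $V_{\text{meas}} = (1-\ell)\,V + \ell$ (variances in shot-noise units) with the quoted total loss $\ell \approx 0.25$, reads:

```python
import numpy as np

def correct_for_loss(var_meas_db, loss=0.25):
    """Infer the pre-loss quadrature variance (in dB relative to shot
    noise) from a measured one, using the beamsplitter loss model
    V_meas = (1 - loss) * V + loss; 'loss' is the assumed total loss."""
    v_meas = 10 ** (var_meas_db / 10)       # dB -> linear, vacuum = 1
    v = (v_meas - loss) / (1 - loss)
    return 10 * np.log10(v)

assert abs(correct_for_loss(0.0)) < 1e-12            # vacuum stays vacuum
assert abs(correct_for_loss(-3.58) + 6.0) < 0.05     # raw -3.6 dB -> -6 dB
```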
The noise properties of a Gaussian state are fully characterized in terms of its phase-space covariance matrix \cite{Braunstein2005}. This matrix of second-moments is directly reconstructed in the spectral domain by using the pulse shaper to measure noise correlations amongst different spectral regions. The LO spectrum is divided into discrete bands of equal energy (e.g., in eight bands), and the amplitude and phase of each band may be individually addressed. Gaps between the individual spectral regions are intentionally imposed in order to ensure orthogonality of the different regions. Importantly, the supplemental loss incurred from the inclusion of these gaps is not accounted for when correcting the noise levels. The $x$ quadrature is defined as the field quadrature of lowest noise for the unshaped LO pulse. The noise content of both the $x$- and $p$-quadratures is measured for each spectral region and all possible pairs of regions, which amounts to 36 measurements in the case of eight frequency zones. Individual covariance elements are then constructed according to the following relation:
\begin{eqnarray}
\langle x_{i} x_{j} \rangle &=&
\left[ \langle (x_{i} + x_{j})^2 \rangle - \frac{P_{i}}{P_{i}+P_{j}} \langle x_{i}^2 \rangle - \frac{P_{j}}{P_{i}+P_{j}} \langle x_{j}^2 \rangle \right] \nonumber \\ && \times \, \frac{P_{i}+P_{j}}{2 \sqrt{P_{i} P_{j}}},
\end{eqnarray}
where $P_{i}$ and $P_{j}$ are the optical powers of frequency bands $i$ and $j$, respectively, which are measured with the homodyne photodiodes.
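A sketch of this reconstruction is given below; the joint term $\langle (x_{i} + x_{j})^2 \rangle$ is interpreted (as an assumption consistent with the power weights in the formula) as the variance measured when the LO covers both bands, i.e., the power-weighted superposition mode:

```python
import numpy as np

def covariance_element(var_sum, var_i, var_j, P_i, P_j):
    """Off-diagonal element <x_i x_j> from the joint-band variance
    var_sum = <(x_i + x_j)^2> and the single-band variances, weighted
    by the optical powers P_i and P_j (the formula above)."""
    num = var_sum - P_i / (P_i + P_j) * var_i - P_j / (P_i + P_j) * var_j
    return num * (P_i + P_j) / (2 * np.sqrt(P_i * P_j))

# consistency check with a synthetic joint measurement of the
# power-weighted superposition mode
P_i, P_j, var_i, var_j, cov_true = 1.0, 2.0, 0.5, 1.5, 0.3
var_sum = (P_i * var_i + P_j * var_j
           + 2 * np.sqrt(P_i * P_j) * cov_true) / (P_i + P_j)
assert np.isclose(covariance_element(var_sum, var_i, var_j, P_i, P_j),
                  cov_true)
```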
Importantly, it has been verified that the LO phase dependence of each of the 36 noise measurements follows that of the unshaped LO reference. Consequently, the lowest noise level for every spectral combination is present in the $x$ quadrature, i.e., there is no rotation of the squeezing ellipse between successive measurements. Additionally, it has been observed that cross-correlations of the form $\langle x \, p \rangle$ are absent, which permits the covariance matrix to be cast in a block diagonal form: one block for the $x$-quadrature and one block for the $p$-quadrature.
We have seen that a good reconstruction of the quantum comb is accomplished with eight discrete LO spectral bands (or with ten bands as presented in \cite{Roslund2013}). However, it is feasible to perform the measurements with a reduced number of spectral regions depending upon the application. In what follows, results will be presented for a variety of different dimensionalities of the LO frequency space.
\section{Experimental Results} \label{sec:results}
\subsection{State Reconstruction}
\ffig{fig-matrices}{figure4}{Experimentally measured quantum noise matrices for the $x$- (a) and $p$- (b) quadratures. The noise correlation matrix is defined as in Eq.~\ref{eq:corr_matrix}. Each matrix reveals significant correlations among the frequency bands of the comb.}{85mm}
The full covariance matrix of the quantum comb is reconstructed following the 36 requisite homodyne measurements. Fluctuations and correlations departing from the vacuum level are depicted with the noise correlation matrix, which is defined as:
\begin{equation}
C^{x}_{i,j} = \langle x_{i} x_{j} \rangle / \sqrt{\langle x_{i}^2 \rangle \langle x_{j}^2 \rangle} - \delta_{i,j} \langle x_\textrm{vacuum}^2 \rangle / \langle x_{i}^2 \rangle
\label{eq:corr_matrix}
\end{equation}
for the $x$-quadrature with a similar definition for the $p$-quadrature. The retrieved correlation matrices for the two field quadratures are shown in Fig.~\ref{fig-matrices}. The spectral wings of the state's $x$-quadrature possess excess noise as compared to the frequency bands near the central wavelength; however, the strongest correlations are also evident in the wings. Qualitatively, this situation is consistent with a two-mode squeezed state, in which tracing out a single mode results in a thermal state (i.e., quadrature-independent excess noise).
Entanglement among various frequency bands is quantitatively assessed with the positive partial transpose (PPT) criterion for continuous variables \cite{simon2000}, which probes the inseparability of a given state bipartition. A bipartition is created by dividing the eight frequency bands of the comb into two sets. The transposition of one of these sets is achieved through a sign change of all momentum operators $\hat{p}_{i}$ contained within the set: $\left( \hat{x}_{i}, \hat{p}_{i} \right) \rightarrow \Gamma_{ii} \cdot \left( \hat{x}_{i}, \hat{p}_{i} \right) = \left( \hat{x}_{i}, -\hat{p}_{i} \right) $. This time-reversal operation creates a new covariance matrix $V_{\textrm{PPT}} = \Gamma V \Gamma$, which must continue to satisfy the Heisenberg uncertainty relation: $P = \Gamma V \Gamma - i \Lambda \geq 0$, where $\Lambda$ is the symplectic matrix \cite{Braunstein2005}. The two partitions are entangled if the Heisenberg matrix $P$ is not positive semidefinite. Given eight distinct spectral bands, 127 unique frequency band bipartitions exist. Each of these possible bipartitions is subjected to the PPT criterion, and the minimum eigenvalue of $P$ is shown in Fig.~\ref{fig-ppt}. As seen in the figure, every possible state bipartition is entangled. The absence of any partially separable form implies that the SPOPO output constitutes a genuine 8-partite state in which each resolvable frequency element is entangled with every other component \cite{Braunstein2005}. Accordingly, the downconversion of a femtosecond frequency comb indeed creates a quantum object exhibiting wavelength entanglement that extends throughout the entirety of its structure.
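Under the conventions above (covariance ordered as $(x_1,\ldots,x_N,p_1,\ldots,p_N)$, vacuum variance 1), the PPT test reduces to one Hermitian eigenvalue problem per bipartition. The sketch below uses a two-mode squeezed state, for which the minimum eigenvalue is known analytically, as an illustrative check:

```python
import numpy as np

def ppt_min_eigenvalue(cov, partition):
    """Minimum eigenvalue of P = Gamma V Gamma - i Lambda for one
    bipartition; cov is ordered (x_1..x_N, p_1..p_N) with vacuum
    variance 1, and 'partition' lists the (0-indexed) modes whose
    p-quadrature is sign-flipped. A negative result signals entanglement."""
    n = cov.shape[0] // 2
    g = np.ones(2 * n)
    g[[n + m for m in partition]] = -1.0           # p_m -> -p_m
    G = np.diag(g)
    Lam = np.block([[np.zeros((n, n)), np.eye(n)],
                    [-np.eye(n), np.zeros((n, n))]])   # symplectic form
    P = G @ cov @ G - 1j * Lam
    return np.linalg.eigvalsh(P).min()

# two-mode squeezed state: minimum eigenvalue is exp(-2r) - 1 < 0
r = 0.5
c, s = np.cosh(2 * r), np.sinh(2 * r)
cov_tms = np.block([[np.array([[c, s], [s, c]]), np.zeros((2, 2))],
                    [np.zeros((2, 2)), np.array([[c, -s], [-s, c]])]])
assert np.isclose(ppt_min_eigenvalue(cov_tms, [1]), np.exp(-2 * r) - 1)
```

Looping this function over all $2^{7}-1 = 127$ subsets of the eight bands reproduces the scan described in the text.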
Two distinct bands of PPT values are evident in Fig.~\ref{fig-ppt}. The band exhibiting a higher degree of entanglement (lower PPT value) is composed of all bipartitions that separate the highest and lowest frequency zones (pixels 1 and 8). Within this band, the most strongly entangled form results from division of the spectrum at the central wavelength. Conversely, the alternative band (higher PPT value) consists of those partitions in which these extreme spectral zones are not disconnected. The partition that dissociates the two spectral wings from the remaining spectrum corresponds to the most weakly entangled structure. Consequently, the spectral wings may be considered as reproducing the situation of two-mode entanglement, which is consistent with the structure of the covariance matrix.
However, the multimode character of the comb cannot be directly inferred from the high degree of multipartite entanglement. It is well known that the bipartition of a single mode squeezed field creates two entangled modes that satisfy the PPT criterion. As a means for comparison, Fig.~\ref{fig-ppt} also includes the same 127 spectral partitions for a simulated single mode field with quadrature values that correspond to those of the first comb supermode. As seen in the figure, all of these bipartitions also satisfy the inseparability criterion. The minimum eigenvalue of $P$ no longer depends upon the symmetry of the bipartition but only upon the relative power between the two partitions. Nonetheless, PPT values for the single mode case are weaker than those observed for the comb, which provides a first indication of the comb's multimode character.
\ffig{fig-ppt}{figure5}{The PPT (blue) inseparability criterion for all 127 bipartite combinations of the 8 spectral bands. All 127 bipartitions possess a PPT value below the entanglement boundary of 0.0, which indicates complete non-separability for the state. The PPT criterion is also applied to a simulated single mode squeezed state (red) with noise parameters corresponding to the first supermode. The single mode PPT values are ordered according to the full PPT values. The black dotted line represents the mean single mode PPT value. All full PPT bipartitions below this line are indicative of multimode character. }{85mm}
\subsection{Eigenmode Decomposition}
Although multipartite frequency entanglement is relevant for the creation of specialized entangled states, it is an extrinsic property of the comb. For example, as explained above, the PPT criterion depends upon a predefined allocation of individual frequency bands. Multipartite character may always be imparted to a single mode quantum object by simply dividing it with a beamsplitter.
However, the basis change introduced in Eq.~\ref{sqz-hamiltonian} may be implemented as a means to recover a set of independently squeezed spectral modes embedded in the beam. This generalized Schmidt decomposition is achieved by diagonalizing the recovered covariance matrix to reveal a set of decorrelated supermodes $\hat{S}_{k}$. When the matrices of Fig.~\ref{fig-matrices} are eigendecomposed, it is observed that although the individual $x$ and $p$ block eigenvectors are quite similar, they are not exactly equal. This implies that a common mode basis is not able to simultaneously diagonalize the two quadrature blocks.
In order to understand the physical origin of this effect, the complete decomposition of the symplectic matrix responsible for creating the multimode state is considered. The Bloch-Messiah reduction \cite{Braunstein2005-irreducible, dutta1995real} allows any symplectic transformation to be decomposed into an initial basis change, a perfect multimode squeezer, and a final basis change. When the input state to this transformation is vacuum, the first basis rotation is arbitrary, and the resultant multimode state may be understood as an assembly of squeezers in a given eigenbasis (as seen in Eq.~\ref{sqz-hamiltonian}). However, when the input state either contains classical noise or is not pure, both of these basis rotations become meaningful.
Application of the Bloch-Messiah reduction to a covariance matrix reveals the Williamson (or ``symplectic'') eigenvalues as well as the mode structures for both the classical noise and quantum squeezers. It is these Williamson eigenvalues that indicate the existence of residual classical noise on the input state. Importantly, in the presence of excess classical noise, the quantum squeezer basis and the supermode basis do not necessarily correspond. In the present experiment, the input state to the cavity is vacuum, which implies that residual classical noise is introduced by loss mechanisms. Correspondingly, the fact that the $x$ and $p$ blocks of the covariance matrix are not diagonalized by a common basis indicates that the loss mechanism is spectrally dependent (e.g., non-uniform transmission profile of the SPOPO output coupler).
A Bloch-Messiah reduction of the 8-mode covariance matrix was implemented in order to reveal the full structure of the comb state. The Williamson eigenvalues possess values close to unity, which indicate that the purity of the comb state is quite high. Additionally, the bases of the classical noise eigenmodes and the squeezed modes are independently uncovered. The squeezed mode basis remains largely unchanged from run to run, while the basis associated with the classical noise exhibits a large degree of variation that depends upon specific experimental conditions. This effect arises because the classical noise is relatively small compared to the quantum properties of the comb, and the eigenvalues are nearly degenerate. Consequently, the extraction of well-defined supermodes from the experimental covariance matrix is feasible even though the matrix cannot be placed in a perfectly diagonal form due to the influence of classical noise.
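The Williamson eigenvalues referred to here can be obtained directly as the positive half of the spectrum of $\textrm{i}\Lambda\Sigma$; a sketch in the same normalization (vacuum variance 1), checked on a squeezed thermal mode whose symplectic eigenvalue is $1 + 2\bar{n}$:

```python
import numpy as np

def williamson_eigenvalues(cov):
    """Symplectic (Williamson) eigenvalues of a covariance matrix ordered
    (x_1..x_N, p_1..p_N) with vacuum variance 1: the positive half of the
    spectrum of i*Lambda*cov. A pure Gaussian state has all of them = 1."""
    n = cov.shape[0] // 2
    Lam = np.block([[np.zeros((n, n)), np.eye(n)],
                    [-np.eye(n), np.zeros((n, n))]])
    ev = np.linalg.eigvals(1j * Lam @ cov)
    return np.sort(ev.real)[n:]        # eigenvalues come in +/- pairs

# squeezed thermal mode: squeezing leaves nu = 1 + 2*nbar unchanged
r, nbar = 0.7, 0.5
cov1 = (1 + 2 * nbar) * np.diag([np.exp(2 * r), np.exp(-2 * r)])
assert np.allclose(williamson_eigenvalues(cov1), [1 + 2 * nbar])
```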
In practice, the experimental supermodes are recovered with a more pragmatic strategy. Upon eigendecomposition of the covariance matrix, the modes exhibiting squeezing are observed to alternate between the $x$ and $p$ quadratures. The eigenstructures corresponding to the anti-squeezed modes, which likewise alternate between the two quadratures,
exhibit increased robustness to noise (which arises from their increased angular contribution to the squeezing ellipse). In order to determine the covariance matrix of an entirely decoupled mode set, the eight anti-squeezed eigenmodes are orthogonalized with a Gram-Schmidt procedure, and the covariance matrix is re-expressed in terms of this newly orthogonal basis. The resulting matrix is nearly diagonal and contains the squeezing value for each orthogonalized mode on its diagonal.
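A sketch of this orthogonalization step: sequential Gram-Schmidt on an ordered set of mode vectors is equivalent to a QR factorization, which is used below for brevity; the re-expressed covariance block then carries the mode variances on its diagonal.

```python
import numpy as np

def gram_schmidt_basis(modes):
    """Sequentially orthonormalize a set of (row-vector) modes, as done
    for the anti-squeezed eigenvectors; QR factorization performs the
    same ordered Gram-Schmidt procedure (up to overall signs)."""
    Q, _ = np.linalg.qr(np.asarray(modes, float).T)
    return Q.T

def rediagonalize(cov_block, modes):
    """Re-express one quadrature block of the covariance matrix in the
    orthogonalized basis; its diagonal then holds the mode variances."""
    B = gram_schmidt_basis(modes)
    return B @ cov_block @ B.T

modes = np.array([[1., 1., 0.], [0., 1., 0.], [0., 0., 1.]])
B = gram_schmidt_basis(modes)
assert np.allclose(B @ B.T, np.eye(3))       # orthonormal rows
cov = np.diag([2.0, 0.5, 1.0])
assert np.allclose(rediagonalize(cov, np.eye(3)), cov)
```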
When performing the individual frequency band measurements utilized to construct the covariance matrix, multiple oscillations of each noise trace are collected in order to estimate the uncertainty of the corresponding squeezing and anti-squeezing levels. These uncertainties are exploited to assess the error level of each supermode squeezing value with a stochastic sampling methodology. Noise values for a particular spectral band combination are drawn from a normal distribution with a mean specified by the average of all identified peaks or valleys and a variance given by the variance of the extrema. A collection of $10^{4}$ individual covariance matrices is amassed, where each matrix is assembled by drawing samples from the necessary normal distributions. The Gram-Schmidt orthogonalization procedure is implemented for each matrix, which yields a squeezing spectrum and mode set. The mean squeezing spectrum is shown in Fig.~\ref{fig-eigenvalues} for the situations of 4, 6, and 8 discrete spectral regions.
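The sampling procedure can be sketched as follows, with illustrative (not measured) numbers: each covariance entry is drawn from a normal distribution built from the measured extrema, every sampled matrix is symmetrized and eigendecomposed, and the accumulated spectra yield a mean squeezing spectrum and its uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

def sampled_squeezing_spectra(var_mean, var_std, n_samples=10_000):
    """Stochastic error estimation (illustrative numbers): each covariance
    entry is drawn from a normal distribution with the measured mean and
    spread, every sampled matrix is symmetrized and eigendecomposed, and
    the resulting spectra give a mean spectrum and its uncertainty."""
    spectra = []
    for _ in range(n_samples):
        draw = rng.normal(var_mean, var_std)
        cov = (draw + draw.T) / 2            # keep each sample symmetric
        spectra.append(np.sort(np.linalg.eigvalsh(cov)))
    spectra = np.array(spectra)
    return spectra.mean(axis=0), spectra.std(axis=0)

var_mean = np.array([[1.2, 0.4], [0.4, 0.9]])   # illustrative x-block
mean_spec, err_spec = sampled_squeezing_spectra(var_mean, 0.03)
assert np.allclose(mean_spec, np.linalg.eigvalsh(var_mean), atol=0.01)
```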
In each case, the mean spectrum is generally noise-robust. For the leading modes, a larger overall squeezing level is observed for a smaller number of pixels, which is a consequence of the imposed spectral gaps. However, when the covariance matrix is reconstructed with a larger number of pixels, the eigenspectrum exhibits more modes. An increase in the number of available pixels is needed to replicate the spectral complexity of higher-order supermodes. In the case of 8 unique frequency bands, up to 7 squeezed modes are contained within the conglomerate comb structure (while 8 squeezed modes were found in the ten-pixel spectral reconstruction performed in \cite{Roslund2013}). The quadrature in which each of these modes exhibits noise reduction ($x$ or $p$) alternates between successive modes in agreement with theoretical predictions \cite{patera2010}. As such, the SPOPO behaves as an \emph{in situ} optical device consisting of an assembly of independent squeezers and phase shifters.
\ffig{fig-eigenvalues}{figure6}{Mean noise levels and uncertainties (dB) for each of the orthogonalized Gram-Schmidt modes. The mean eigenspectra are shown for 8 (red), 6 (blue), and 4 (green) unique frequency bands. The simulated eigenvalues corresponding to 8 frequency bands are shown for comparison (black).}{80mm}
The orthogonalized modes that originate from covariance matrices comprised of four and eight spectral zones are shown in Figs.~\ref{fig-supermodes}a and \ref{fig-8modes}, respectively. The spectral makeup of each retrieved experimental mode displayed in Fig.~\ref{fig-supermodes}a follows the form of a Hermite-Gauss polynomial, which approximates the predicted supermode profile \cite{patera2010}. However, as mentioned above, it becomes evident that the spectral complexity of higher-order supermodes is only reproducible with an increase in the number of pixels. Additionally, the spectral width $\Delta \lambda_{k}$ of supermodes following a Hermite-Gauss progression increases with the mode index $k$ as $\Delta \lambda_{k} =\sqrt{2 k+1} \cdot \Delta \lambda_{0}$.
\ffig{fig-supermodes}{figure7}{(a) Retrieved experimental supermodes with the spectral gaps removed. The field of each supermode is measured with spectral interferometry. (b) Noise traces corresponding to each of the experimental supermodes.}{85mm}
In order to assess the impact of both the LO bandwidth and the number of independent shaper elements on the observed squeezing levels, a series of simulations was performed utilizing the current experimental parameters. These simulations were performed by directly calculating the supermodes from the phase-matching properties of a BIBO crystal \cite{patera2010} while assuming a perfect cavity with a bandwidth of 50nm. The purity of the state is taken to match the quadrature noises of the first supermode. With these parameters, the cavity output contains $\sim 25$ modes with an equivalent level of squeezing.
Subsequently, an 8-frequency-pixel homodyne detection apparatus is simulated without accounting for the supplemental losses incurred by the gaps between pulse-shaper pixels. The resulting eigenspectrum is shown in Fig.~\ref{fig-eigenvalues}. The simulation results are consistent with the fact that the spectral overlap diminishes between the fixed bandwidth of the LO spectrum and each progressively broadened supermode. This decline in the spectral overlap becomes especially prominent in the wings. While $\sim 25$ modes are initially present in the comb state, only $\sim 5$ are detected as seen in Fig.~\ref{fig-eigenvalues}. Two technical effects account for this loss of modes in the detection process. First, the fixed bandwidth of the LO only achieves perfect overlap with the first supermode. In addition, the high spatial frequency of the spectral structures created by the pulse shaper begin to exhibit appreciable spectral overlap with very high-order supermodes that are not squeezed. Both of these limitations introduce vacuum into the measurement, which degrades the overall squeezing levels. Consequently, the current observation of 7 squeezed modes does not represent an inherent upper limit to the quantum dimensionality of comb states. With the use of broader bandwidth LO pulses, increased spectral resolution, and large cavity bandwidths (all achievable experimentally), states possessing as many as $\sim 100$ squeezed modes are expected \cite{patera2010}.
\ffig{fig-8modes}{figure8}{Amplitude spectra corresponding to each of the orthogonal supermodes retrieved from the covariance matrix shown in Fig.~\ref{fig-matrices}.}{80mm}
\subsection{Eigenmode Corroboration}
The supermodes displayed in Fig.~\ref{fig-supermodes}a constitute an uncoupled set of independent squeezed states that can serve as a resource for the construction of specialized entangled structures, such as cluster states. As such, it is important to validate the structure and squeezing level of the modes retrieved from the covariance matrix.
Each of the modes derived from the covariance matrix is written directly onto the pulse shaper after bridging the spectral gaps that were imposed in the frequency band basis. The corresponding noise traces are seen in Fig.~\ref{fig-supermodes}b. Most importantly, each of these four modes exhibits squeezing at a level in accordance with that retrieved from the covariance matrix. Furthermore, the quadrature of squeezing alternates between successive modes. Thus, when the sum of the first two modes is written to the shaper, excess noise is present in both quadratures (not shown). Consequently, the SPOPO simultaneously generates states that are squeezed in either the amplitude or the phase quadrature.
\subsection{Discussion on State Purity}
The purity $\mathcal{P}$, which is an intrinsic property of the state, is accessible from the covariance
matrix with the relation $\mathcal{P} = 1 / \sqrt{\textrm{det}(\Sigma^E)}$, where $\Sigma^E$ is
the measured covariance matrix. The covariance matrix eigenvalues shown in Fig.~\ref{fig-eigenvalues} enable comparison of the state purity for 4-, 6- and 8-spectral-zone divisions of the LO spectrum. The purity values were also measured for different pump powers (not shown), and the observed variation follows the expected behavior for the output state of an OPO.
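A sketch of the purity evaluation in the same normalization (vacuum covariance equal to the identity), checked on a pure squeezed state and on a thermal state:

```python
import numpy as np

def purity(cov):
    """Gaussian-state purity P = 1 / sqrt(det(Sigma)) for a covariance
    matrix normalized so that the vacuum covariance is the identity."""
    return 1.0 / np.sqrt(np.linalg.det(cov))

# a pure squeezed mode has det = 1, hence P = 1
r = 0.6
assert np.isclose(purity(np.diag([np.exp(2 * r), np.exp(-2 * r)])), 1.0)
# uniform thermal excess noise lowers the purity to 1 / (1 + 2*nbar)
nbar = 0.25
assert np.isclose(purity((1 + 2 * nbar) * np.eye(2)), 1 / (1 + 2 * nbar))
```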
The imposition of gaps between the discrete spectral regions represents a loss, and therefore decreases the state purity. As fewer gaps are necessary to create four spectral zones, a higher state purity is expected for the four-band matrix as compared to the eight-band matrix. As seen in Fig.~\ref{fig-eigenvalues}, although the overall squeezing levels are made similar with an appropriate tuning of the pump power, the four-band state possesses a slightly higher purity. By fine adjustment of the experimental parameters, a global purity ranging from $\mathcal{P} \sim 0.7 - 0.8$ is achievable while maintaining significant squeezing levels. The ability to achieve high purity states while maintaining their multimode nature constitutes an important resource for the construction of network structures.
\section{Cluster State Analysis} \label{sec:clusters}
\subsection{Creation of the Cluster Basis}
In the previous section, a basis change of the covariance matrix allowed retrieval of the theoretically predicted supermodes. Similarly, an analogous procedure may be applied to construct cluster state bases as defined by Eq.~\ref{eq:LinNetworkFourier}. In doing so, the feasibility for creating cluster states from experimentally retrieved covariance matrices may be directly probed.
Application of the Gram-Schmidt orthogonalization method described above defines a rotation matrix $U_{T}$, which transforms the correlated pixel bands $\vec{a} \, ^{\text{pix}}$ into a set of nearly decorrelated experimental supermodes $\vec{S}$ through the relation $\vec{S} = U_{T}^{-1} \, \vec{a} \, ^{\text{pix}}$.
The experimentally retrieved supermodes shown in Fig.~\ref{fig-supermodes} exhibit squeezing in alternating quadratures. Hence, it becomes necessary to apply a mode-selective phase rotation in order to transfer each mode's squeezing axis into the same direction. In order to accomplish this task, the phase-shift matrix $\Delta_{\text{sqz}} = \text{diag}\{\textrm{i}, 1, ..., \textrm{i}, 1\}$ is applied. Subsequently, the cluster state corresponding to a particular adjacency matrix $V$ is constructed by applying the appropriate unitary matrix $U_{V}$. Thus, the total transformation relating the original pixel basis to the one parameterizing the cluster state is described by
\begin{equation}
\label{eq:utot}
\vec{a} \, ^{C} = U_{V} \, \Delta_{\text{sqz}} \, U_{T}^{-1} \, \vec{a} \, ^{\text{pix}} \equiv U_{\textrm{tot}} \, \vec{a} \, ^{\text{pix}} ,
\end{equation}
where $U_{V}$ satisfies Eq.~\ref{eq:LinNetwork}.
Among the group of matrices $U_V$ that satisfy Eq.~\ref{eq:LinNetwork}, one is selected that minimizes the nullifier variances of Eq.~\ref{eq:nullifier} for the transformed cluster modes $\vec{a} \, ^{C}$. This is accomplished by parameterizing the most general orthogonal matrix in terms of a collection of angles: $\mathcal O (\vec{\theta})$. Upon doing so, all physically relevant cluster unitaries are spanned as $U_V = U_V^0 \, \mathcal O(\vec{\theta})$ in line with the discussion of Sec.~\ref{sse:clu_state_basis}. An evolutionary strategy \cite{roslund2009accelerated} is employed to search for the set of angles $\vec{\theta}$ that minimizes these nullifier variances.
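This search can be sketched as follows, with two substitutions stated plainly: the orthogonal matrix is parameterized through the exponential map, $\mathcal{O}(\vec{\theta}) = \exp(A - A^{T})$ with the angles $\vec{\theta}$ filling the upper triangle of $A$, and a generic simplex optimizer (Nelder-Mead) stands in for the evolutionary strategy of \cite{roslund2009accelerated}; the unequal input squeezing values are illustrative:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

# Four-mode linear cluster; unequal input p-squeezing (illustrative)
V = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
n = len(V)
r = np.array([1.0, 0.8, 0.6, 0.4])
w, Ov = np.linalg.eigh(V)
X0 = Ov @ np.diag((1 + w**2) ** -0.5) @ Ov.T
U0 = X0 + 1j * (V @ X0)                     # one solution of Y_V = V X_V
cov_in = np.diag(np.r_[np.exp(2 * r), np.exp(-2 * r)])   # vacuum = 1

def mean_nullifier_variance(theta):
    """Mean nullifier variance for U_V = U0 O(theta), where O = exp(A - A^T)
    is orthogonal and theta fills the upper triangle of A."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, 1)] = theta
    U = U0 @ expm(A - A.T)
    X, Y = U.real, U.imag
    S = np.block([[X, -Y], [Y, X]])          # symplectic action on (x, p)
    cov = S @ cov_in @ S.T
    Nmat = np.hstack([-V, np.eye(n)])        # delta = p^C - V x^C
    return np.diag(Nmat @ cov @ Nmat.T).mean()

theta0 = np.zeros(n * (n - 1) // 2)
res = minimize(mean_nullifier_variance, theta0, method="Nelder-Mead")
assert res.fun <= mean_nullifier_variance(theta0)   # never worse than O = I
```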
A symplectic transformation $S_{\textrm{tot}}$ corresponding to the optimal unitary matrix $U_{\textrm{tot}}$ may then be written as
$S_{\textrm{tot}} = \left( \begin{array}{cc}
X_{\textrm{tot}} & - Y_{\textrm{tot}} \\
Y_{\textrm{tot}} & X_{\textrm{tot}} \\ \end{array} \right)$ with $U_{\textrm{tot}} = X_{\textrm{tot}} + \textrm{i} Y_{\textrm{tot}}$ \cite{dutta1995real}. This transformation is applied to the covariance matrix measured in the pixel basis $\Sigma^E$ in order to yield the covariance matrix of the cluster state:
\begin{equation}
\label{eq:trasfa}
\Sigma^{C} = S_{\textrm{tot}} \Sigma^{E} S_{\textrm{tot}}^{T}.
\end{equation}
Individual cluster correlations are then verified by determining whether the nullifier variances of each cluster state (as defined by Eq.~\ref{eq:nullifier}) lie below the shot noise level, i.e., $\delta_{i} < \delta_{\textrm{shot}} = 1$ for $i = 1, \ldots, N$. In this context, the shot noise level is defined as the nullifier variance obtained with a vacuum input to the linear network.
\subsection{Six-mode Cluster States}
In order to provide specific examples as to the potential for creating cluster states within the quantum comb, several different six-mode cluster structures are considered with corresponding graphs displayed in Table~\ref{tab:tablea}.
The requisite squeezed input modes are those originating from a covariance matrix measured in a basis of six spectral zones. Following optimization of the orthogonal matrix $\mathcal O (\vec{\theta})$, the nullifier variances for each cluster structure are computed from the cluster covariance matrix $\Sigma^{C}$ as defined in Eq.~\ref{eq:trasfa}. Each set of variances is normalized to the respective shot noise levels.
The cluster analysis is performed with two different sets of squeezed input modes. Set A exhibits relatively low input squeezing levels but a high state purity, while Set B displays high input squeezing levels and a lower state purity. This latter set of supermodes is obtained by operating closer to the cavity threshold. In both cases, each of the requisite nullifiers possesses a value below the shot noise level for all of the considered cluster structures. The higher input squeezing values present in Set B result in improved cluster correlations as highlighted by nullifier variances significantly below the shot noise limit.
\begin{widetext}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\cline{2-4}
\multicolumn{1}{c|}{}& Graph & Nullifiers $\{ \delta_i \}$ (Set A) & Nullifiers $\{ \delta_i \}$ (Set B) \\
\hline
Linear &
\includegraphics*[width=1cm]{table_graph1}
& \{0.85, 0.74, 0.69, 0.67, 0.75, 0.86\} &\{0.76, 0.58, 0.42, 0.46, 0.55, 0.78\} \\
\hline
Hexagon &
\includegraphics*[width=1cm]{table_graph2}
& \{0.79, 0.76, 0.76, 0.73, 0.76, 0.70\} & \{0.55, 0.55, 0.59, 0.52, 0.59, 0.63\} \\
\hline
Connected Hexagon &
\includegraphics*[width=1cm]{table_graph3}
& \{0.66, 0.65, 0.67, 0.65, 0.66, 0.63\} & \{0.43, 0.37, 0.44, 0.36, 0.44, 0.33\} \\
\hline
Maximally Connected Hexagon &
\includegraphics*[width=1cm]{table_graph4}
& \{0.67, 0.68, 0.65, 0.67, 0.67, 0.64\} & \{0.42, 0.43, 0.41, 0.45, 0.37, 0.27\} \\
\hline
Prism &
\includegraphics*[width=1cm]{table_graph5}
& \{0.66, 0.71, 0.73, 0.66, 0.71, 0.73\} & \{0.42, 0.47, 0.55, 0.42, 0.47, 0.55\} \\
\hline
Connected Square Pyramid &
\includegraphics*[width=1cm]{table_graph6}
& \{0.67, 0.64, 0.65, 0.64, 0.65, 0.67\} & \{0.41, 0.37, 0.37, 0.37, 0.37, 0.41\} \\
\hline
Double Square &
\includegraphics*[width=1cm]{table_graph7}
& \{0.71, 0.75, 0.66, 0.65, 0.71, 0.75\} & \{0.48, 0.60, 0.44, 0.36, 0.48, 0.60\} \\
\hline
Connected Double Square &
\includegraphics*[width=1cm]{table_graph8}
& \{0.71, 0.71, 0.66, 0.66, 0.72, 0.71\} & \{0.50, 0.53, 0.36, 0.36, 0.49, 0.53\} \\
\hline
Pentagonal Pyramid &
\includegraphics*[width=1cm]{table_graph9}
& \{0.69, 0.73, 0.73, 0.70, 0.66, 0.71\} & \{0.59, 0.48, 0.41, 0.48, 0.60, 0.40\} \\
\hline
\end{tabular}
\caption{Cluster state nullifiers $\{ \delta_i \}$ normalized with respect to the corresponding shot noise value of $\delta_{\textrm{shot}} = 1$. Set A consists of input modes with a maximum squeezing value of $-2.34~\textrm{dB}$ and a high state purity of $\mathcal{P} = 0.84$. Conversely, Set B utilizes input modes exhibiting a maximum squeezing level of $-6.48~\textrm{dB}$ and a state purity of $\mathcal{P} = 0.69$. In both cases, all of the considered cluster states are realized. The higher input squeezing levels associated with Set B result in enhanced violations of the nullifier variance criteria.\label{tab:tablea}}
\end{table}
\end{widetext}
\section{Discussion} \label{sec:discussion}
The intrinsic entanglement of the quantum frequency comb provides an irreducible, universal quantum resource \cite{Braunstein2005-irreducible} of direct relevance for quantum information processing. From this multimode resource, the creation of cluster states or any user-defined structure is effected through an appropriate basis change. In particular, the frequency entanglement present in the comb is arranged in such a way that multiple cluster states are simultaneously embedded in its structure. Importantly, the realization of these states does not necessitate any change in the optical architecture itself, but rather simply in the manner by which the state is measured. The projective measurements necessary to realize such entangled states may be implemented with any variety of spectrally-resolved homodyne detection, including pulse shaping of the local oscillator. Theoretical analysis has proven that it is possible to fabricate the cluster structures necessary for computation from the modes contained within the quantum comb \cite{ferrini2013compact}. Likewise, the experimental feasibility of achieving basis transformations through measurement has already been demonstrated in the analogous domain of spatially multimode beams \cite{Armstrong2012}.
It is important to stress that the technical difficulties currently limiting the number of observed squeezed modes to $\sim 10$ are not fundamental to the methodology. Suitable improvements to the experimental setup (e.g., better adapting the pump spectrum, etc.) are expected to lift these obstacles. Accordingly, simulations predict that $\sim 100$ squeezed modes are expected to be embedded in the frequency structure of the quantum comb \cite{patera2010}. Such a resource is scalable and ideal for implementing quantum information protocols.
In summary, the parametric downconversion of ultrafast frequency combs provides a practical and scalable multimode resource. The ability to generate top-down entanglement amongst thousands of frequencies with a single nonlinear interaction provides a unique capability and bodes well for the continued development of specialized quantum networks within highly multimode structures.
\acknowledgments
This work is supported by the European Research Council starting grant Frecquam and the French National Research Agency project Comb. C.F. is a member of the Institut Universitaire de France. J.R. acknowledges support from the European Commission through Marie Curie Actions, and Y.C. recognizes the China Scholarship Council.
\bibliographystyle{apsrev}
\section{Introduction}\label{sec:intro}
Change point analysis has a long and rich tradition, dating back to the work of \cite{Page1954} and \cite{Hinkley1970}. During the last decade change point methods have attracted considerable interest,
leading to substantial development of both methodology and diverse areas of applications. Recent surveys are given by \cite{horvath2014extensions} as well as \cite{Truong2018}, whilst \cite{Killick2012} provide a valuable resource collating recent publications and software contributions.
These methods are of fundamental importance in many areas,
including econometrics \citep{aue2012sequential,hlavka2017fourier}, medicine \citep{fried2004online}, neuroscience \citep{Aston2012}, ocean-engineering \citep{NamAstonEckleyKillick2015}
and bioinformatics \citep{Rigaill2012}.
The challenge of detecting changes in multivariate time series has recently received growing attention. Notable contributions include \cite{Aue2009, Matteson2014, Zhang2010}. Initial research on this important problem has focussed on approaches for detecting those times at which changes occur in all series, e.g. \cite{Aue2009, Siegmund2011, Zhang2010}. More recently, several contributions have sought to relax this rather restrictive assumption, see \cite{Preuss2015} or \cite{Bardwell2018} for example.
This article considers a different, albeit multivariate, change point setting. Specifically, the work that we describe is inspired by an application arising from remote sensing, where the changes in each component of the multivariate time series are functionally related to the changes in other series. Remote sensing of gas emissions has been of considerable interest to researchers for a number of years. Applications range from monitoring green house gases \citep{Chen2006}, toxic gas emissions \citep{Bhattacharjee2008} and monitoring emissions from carbon storage resources \citep{Hirst2017}. In many of these examples, the primary objective is to be able to successfully locate sources of emission and quantify the emission rate(s).
\begin{figure}[b]
\centering
\includegraphics[width=3.5in]{plots/trajectories}
\caption{Flight trajectory in the vicinity of the two landfill sites.}
\label{trajectory}
\end{figure}
The application we consider centres on the remote detection and location of the source of gas emissions based on aerially sensed data, as introduced in \cite{Hirst2013}. Their approach uses an ultra-sensitive, high-precision methane gas sensor mounted on an aircraft to measure a continuous stream of air from the leading edge of a wing. The sensor samples at a high rate, alongside GPS, radar altitude, barometric pressure, air temperature, wind velocity and several other variables. Flight data are then combined with meteorological data and additional physical modelling attributes, including wind direction and atmospheric boundary layer depth, to estimate the shape of the plume and thereby locate the origin of the emission.
The data that we consider, made available to us by \cite{Hirst2013}, provides a valuable test resource with known source locations. It is based on the atmospheric methane concentrations in the vicinity of two landfill sites. Specifically the data are collected by an aircraft flying at approximately 200m above ground level at a constant speed. This is well below the atmospheric boundary layer, that can constitute a `ceiling' on gases being transported from the ground. The aircraft surveys an area of approximately 40km $\times$ 40km, tracing back and forth in a snake-like fashion downwind of each landfill. Initial average wind speed and direction are also provided at multiple altitudes, see \cite{Hirst2013} for details. Figure \ref{trajectory} shows the flight trajectory in the vicinity of the landfill sites.
\begin{figure}[b]
\centering
\includegraphics[width=5in]{plots/landfill_plume}
\caption{The modified landfill data of the left-hand trajectory}
\label{Left_TransectConcentrations}
\end{figure}
Note that to avoid confounding of gas seepage when crossing the actual landfill, we only consider the data collected within the blue and red highlighted trajectory regions.
An alternative view of the left trajectory of the data is provided in Figure \ref{Left_TransectConcentrations} plotting the methane concentrations when aligned to reference distances from the source. {\color{black} The concentration data is collected discretely in time, resulting in a time series with varying length of around 200 data points in each leg as plotted in Figure \ref{Left_TransectConcentrations}. We re-register this data to form a multivariate time series with regularly spaced observations (for details we refer to~\cite{silke_diss}, Chapter 18) before applying our methodology.}
Earlier work exploring this data set, described by \cite{Hirst2013}, sought to combine the observed gas concentration rates with idealised gas dispersion models to identify the locations of the unknown sources. Whilst effective, the method proposed requires strong assumptions on both the form of the (Gaussian) plume and about the dependence structure along the observed time series, in the form of independent and identically distributed Gaussian errors.
In this article we develop an alternative approach that both allows for dependence in error structure along the flight path, and makes less restrictive assumptions on the plume form. Specifically, we seek to develop theory and methodology that enable the analysis of such multivariate series, allowing for both dependence in time and a functional relationship between the location of change points across different components of the time series. We propose two different methods: The first only requires such a functional relationship generalizing the approach by \cite{horvath1999testing}, while the second one also uses approximate information about the reduction in concentration as the distance from the source increases. The latter approach has the potential to greatly increase power and hence estimation accuracy (see, e.g., \cite{aston2018}), while still being sufficiently robust with respect to a certain degree of misspecification of this concentration reduction.
The intuition that underpins our work is to view each of the aircraft transects as a time series in its own right, see Figure \ref{Left_TransectConcentrations} for an example. As such, our data is converted into a multivariate time series with each component corresponding to a transect of the flight path.
Assuming that a given time series component (transect) includes a crossing of the plume, then one would expect to see an elevated concentration of gas in the time region that corresponds to the aircraft crossing the plume, with lower concentrations either side of the plume. Henceforth we shall refer to this region of elevated gas concentration as the \emph{change region}. In the statistical literature, situations where the mean in an unknown interval differs from the rest of the data are called epidemic change problems (see e.g. \citet{eeg_data}, \citet{aston2018}).
The feature that sets the gas emission data apart from other epidemic change situations is the fact that the locations are not at the same place in each component.
Instead, due to the dispersion of the gas, it is natural to assume that the boundaries of the change regions are related to one another. The methodology which we propose seeks to encapsulate this relationship, allowing for a functional relationship that is parametrized by both known parameters (such as wind direction) and unknown parameters, e.g. the location of the source.
The article is organised as follows. In Section \ref{sec:cpa:model}
we give a general model description that is well suitable for the gas emission data after an appropriate preprocessing, but also allows for different examples. Section \ref{sec:cpa:test} derives and analyses two types of change point tests for the described model. While they may be of independent interest in other applications, for the purpose of the analysis of the gas emission data they are merely required as an intermediate step. In Section~\ref{section_est_cp} we derive two different estimators for the unknown source location (or more generally for the unknown parameters of the functional relationships describing the change region) and prove their consistency. Section~\ref{section_summary} summarizes the construction principles behind these tests and estimators and gives some insight into possible generalizations. Some simulations are given in Section~\ref{sec:sim_study}, while the left trajectory of the gas emission source is analyzed in detail in Section~\ref{sec:data}. Some concluding remarks can be found in Section~\ref{sec:conclusions}.
The proofs can be found in Appendix~\ref{sec:proofs} and the analysis of the right trajectory in Appendix~\ref{sec_right}.
\section{Change point analysis}\label{sec:cpa}
In this section, we begin by first describing a {\color{black} multivariate} modelling framework that takes the various attributes of the remote sensing change point problem into account. From this we {\color{black} propose two different ways of aggregating information across transects that will be the basis for the proposed estimators for the location of the gas emission source. Because estimation and testing are strongly related we also provide the corresponding test procedures in Appendix~\ref{app_test}.}
In both cases, the developed theory goes beyond the motivating data example of gas emission sources.
Nevertheless, we will make the connection to the data at hand at every step, while discussing the underlying construction principles and their consequences in more detail
in Section~\ref{section_summary}.
In so doing, we seek to
better understand how to customize or even generalize the presented procedures to other situations and examples.
\subsection{Model of the data}\label{sec:cpa:model}
As described in the introduction, following an appropriate transformation, the data is considered as a (dependent) multivariate time series with a different (in this example elevated) mean, within the change region of each component.
We define the change region $\{NF_{\vartheta_0}(i)<t\leqslant NG_{\vartheta_0}(i)\}$ in component $i$ by a pair of change points (in rescaled time) $(F_{\vartheta_0}(i),G_{\vartheta_0}(i))$, with $F_{\vartheta}(i)<G_{\vartheta}(i)$ for all $i=1,\ldots,d$ and all $\vartheta\in\Theta$. Here, $\vartheta_0\in\Theta$ denotes the true underlying parameters, while the functional relationship between change points is parametrized by the functions $F_{\vartheta}(\cdot)$ and $G_{\vartheta}(\cdot)$.
Clearly, these functions depend on both known parameters, such as the direction of the wind, and unknown parameters such as the location of the source and the opening angle of the cloud. For notational simplicity we will include the known parameters in the functional shape of $F,G$, so that $\vartheta\in\Theta$ are the unknown parameters only.
This leads to the following model for the data
\begin{equation}\label{eq_model}
X_i(t) = \mu_i + {\Delta}_i \mathds{1}_{\{F_{\vartheta_0}(i) <t/N\leqslant G_{\vartheta_0}(i)\}} + e_i(t),
\end{equation}
with $i=1, ..., d$ denoting the components of the multivariate time series and $t=1, ..., N$ the time point (after transforming the flight path into a multivariate time series).
Furthermore, we assume that $\vartheta\mapsto F_{\vartheta}(i)$ as well as $\vartheta\mapsto G_{\vartheta}(i)$ are continuous for all $i=1,\ldots,d$.
The errors $\{\boldsymbol{e}(\cdot)\}$ with $\boldsymbol{e}(t)=(e_1(t),\ldots,e_d(t))^T$ are stationary and centered with existing second moments and have to fulfill a (multivariate) functional central limit theorem. In particular, they can be dependent.
This model extends the classical epidemic setting, where $\vartheta=(\lambda_1,\lambda_2)$, $0<\lambda_1<\lambda_2<1$ are the two unknown change points (in rescaled time) and $F_{\vartheta}(i)=\lambda_1$, $G_{\vartheta}(i)=\lambda_2$.
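To make model~\eqref{eq_model} concrete, the following sketch simulates such a multivariate series. Two simplifying assumptions are made purely for illustration: the stationary error sequence is replaced by i.i.d.\ standard normal noise, and the change-region boundaries are passed in as arbitrary functions $F$, $G$ of the component index.

```python
import numpy as np

def simulate_epidemic_series(mu, delta, F, G, N, rng=None):
    """Simulate X_i(t) = mu_i + Delta_i * 1{F(i) < t/N <= G(i)} + e_i(t),
    with (simplifying assumption) i.i.d. standard normal errors e_i(t)."""
    rng = np.random.default_rng(rng)
    d = len(mu)
    t = np.arange(1, N + 1) / N                  # rescaled time t/N
    X = np.empty((d, N))
    for i in range(d):
        region = (t > F(i)) & (t <= G(i))        # change region of component i
        X[i] = mu[i] + delta[i] * region + rng.standard_normal(N)
    return X
```

The classical epidemic setting corresponds to constant functions $F(i)=\lambda_1$ and $G(i)=\lambda_2$; a plume model would instead let the boundaries vary with the transect index $i$.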
\begin{figure}[bt]
\begin{center}
{\includegraphics[width=5in]{plots/plot_linear_cloud}}
\end{center}
\caption{{\color{black} Linear plume model: Three exemplary source locations with a linear cloud having different opening angles.}}
\label{figure_lin_plume}
\end{figure}
{\color{black} The main example in this paper is a linear plume with known or unknown opening angle, as shown in Figure~\ref{figure_lin_plume}. The shaded field indicates the possible source locations to be searched, and three possible clouds with different source locations and opening angles are indicated. The wind direction is not included in this model because this information is already taken into account when the data are collected: the flight paths are chosen to be perpendicular to the wind direction.}
{\color{black} The data consists of an 8-dimensional time series with only around 200 time points. Consequently, slightly different plume shapes will lead to almost the same change points in each of the transects. Indeed, some preliminary analyses have shown that both linear and Gaussian plumes lead to very similar results for the data example at hand. As such, in order to aid clarity and model parsimony, we adopt the simplest reasonable model in the simulation study and data analysis, namely a linear plume.
The theoretic results obtained under model~\eqref{eq_model} are much more general and go far beyond the linear plume example by allowing for many different shapes of the cloud.
Nevertheless, this model remains somewhat simplistic in other respects, such as the assumption of a constant mean within the plume/change region, while the real data rather exhibits a gradual change. The methodology in this paper could easily be adapted to those types of changes using the same tools by a similar adjustment to that for the projection method. However, this only leads to an improvement if the shape of the gradual change is known sufficiently well, which is typically not the case.
Additionally,
the alignment of the change points in the data example seems to deviate somewhat from any of the usual cloud shapes by being somewhat misaligned from one transect to the next, possibly caused by temporal changes in the wind direction, in particular for the right transect. We make use of this last observation by checking the robustness of our methodology with respect to misspecification; see Section~\ref{sec_right} in the appendix.
}
\subsection{Aggregation methodology}\label{sec:cpa:test}
{\color{black}
In a multivariate context, aggregating information about possible change points in different components of the time series usually leads to an improved signal-to-noise ratio. This is because the aggregation increases the signal by a larger amount than the noise level, as long as the errors are not perfectly dependent. Consequently, a multivariate approach is usually preferable to running several univariate approaches that are then combined later.
Therefore, we consider two different methods of aggregating information across transects that will be used for estimation purposes but can also be used for testing, as detailed in Appendix~\ref{app_test}.
}
The first {\color{black} approach} is related to the multivariate test statistic discussed in \citet{horvath1999testing} in the at-most-one-change situation {\color{black} which is obtained as a version of the likelihood ratio test statistic under normality assumptions}. Their statistic is strongly related to the panel statistic as discussed in \citet{horvath2012change}, where the difference lies in the fact that the number of components can be similarly large or even larger than the number of time points (requiring different asymptotic considerations).
Our statistic is different because it (a) takes an epidemic change into account and, more importantly, (b) allows for general parametrizations of how the change evolves through components (by allowing for an arbitrary parametrization of the change points).
{\color{black} The multivariate approach we propose is based on the following statistic}
\begin{align*}
&
{\color{black} A^M(\vartheta) = } \boldsymbol{S}_{\vartheta}^T \Sigma ^{-1} \boldsymbol{S}_{\vartheta},\\
& \text{where}\quad \boldsymbol{S}_{\vartheta}=(S_{\vartheta}(1),\ldots,S_{\vartheta}(d))^T,\qquad
S_{\vartheta}(i) = \sum\limits_{t=\lfloor N F_{\vartheta}(i)\rfloor+1}^{\lfloor NG_{\vartheta}(i)\rfloor} \left( X_i(t) - \frac{1}{N} \sum\limits_{l=1}^{N} X_i(l) \right),\\
&\phantom{\text{where}}\quad \Sigma=\sum_{h\in{\mathbb Z}}\Gamma(h), \quad \Gamma(h)=\operatorname{E} \boldsymbol{e}(0)\boldsymbol{e}(h)^T, h\geqslant 0,\quad \Gamma(h)=\Gamma(-h)^T, h<0.
\end{align*}
$\Sigma$ is the long-run covariance of the multivariate error sequence and can be replaced by a consistent estimator. In the case of independent (across time) errors, this reduces to the covariance matrix of $\boldsymbol{e}(0)$. If the dimension is even moderately large, nonparametric estimation of the full long-run covariance matrix is usually quite imprecise. {\color{black} See also the discussion in Remark~\ref{rem_mis_cov} in the appendix}. This is particularly problematic if the inverse of the covariance matrix is needed, as is the case with the above statistic.
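For a single candidate $\vartheta$, evaluating $A^M(\vartheta)$ amounts to summing each centred component over its candidate change region and forming the quadratic form with $\Sigma^{-1}$. A minimal sketch (the estimate of $\Sigma$ is supplied by the caller, and the interval convention $(\lfloor NF_{\vartheta}(i)\rfloor, \lfloor NG_{\vartheta}(i)\rfloor]$ follows the definition above):

```python
import numpy as np

def multivariate_stat(X, F, G, Sigma):
    """A^M(theta) = S^T Sigma^{-1} S for one candidate theta, where S(i)
    sums the centred i-th component over (floor(N F(i)), floor(N G(i))]."""
    d, N = X.shape
    Xc = X - X.mean(axis=1, keepdims=True)   # subtract the overall mean per component
    S = np.array([Xc[i, int(np.floor(N * F(i))):int(np.floor(N * G(i)))].sum()
                  for i in range(d)])
    return float(S @ np.linalg.solve(Sigma, S))
```

With a diagonal $\Sigma$, the quadratic form reduces to a sum of squared, long-run-variance-standardized partial sums, one per transect.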
{\color{black} If the number of transects is large in comparison to the number of time points, then estimation errors can accumulate and identification may not be possible (\cite{BickelL2008}), where additional numerical errors may arise when inverting the matrix (see Chapter 14 in \cite{Higham2002}). The problem becomes even more difficult in the presence of time series errors (which requires the estimation of the spectrum at frequency 0) as well as under the presence of change points.}
This is {\color{black} less problematic} if $\Sigma$ has a diagonal structure, i.e.\ if the components are independent, so that only the long-run variances have to be estimated; otherwise bootstrap methods such as those in \citet{Aston2012} can help.
{\color{black} Because the dependence between different transects seems to be very small in our example (see Figure~\ref{figure_ACF_1} below), the diagonal approach is feasible even without using bootstrap methods. }
%
{\color{black} Nevertheless, because of these difficulties we also discuss the theoretic behavior of the testing and estimation procedures under misspecification i.e.\ allowing for inconsistent estimation of $\Sigma$ towards some positive definite matrix $\Sigma_A$ that is not necessarily the true (long-run) covariance matrix of the errors.}
{\color{black}
Additionally, the} estimation of the covariance matrix in a change point situation is complicated by the contamination by the change. For this reason, it is necessary to use the estimated errors within the estimation procedure.
{\color{black} Whilst the theory developed is completely general with respect to the choice of estimators of $\Sigma$, in the simulations and data example we use the following estimator:} Similarly to \cite{Aston2012} the errors are estimated componentwise by
{\allowdisplaybreaks\begin{align*}
&\widehat{e}_i(t) = X_i(t) - \widehat{\mu}_i - \widehat{\Delta}_i \mathds{1}_{\left\{\widehat{f}_i < t \leqslant \widehat{g}_i \right\}},\\
&\text{where }\widehat{\mu}_i = \frac{1}{\widehat{f}_i + N - \widehat{g}_i} \left( \sum_{t=1}^{\widehat{f}_i} X_i(t) + \sum_{t=\widehat{g}_i+1}^{N} X_i(t) \right),\\
&\phantom{\text{where}}
\widehat{\Delta}_i = \frac{1}{\widehat{g}_i - \widehat{f}_i} \sum_{t=\widehat{f}_i+1}^{\widehat{g}_i} X_i(t) - \widehat{\mu}_i, \\
&\phantom{\text{where}}
\left( \widehat{f}_i,\widehat{g}_i \right)
= \arg\max \left\{ \left| \sum_{t=f_i + 1}^{g_i} \left( X_i(t) - \overline{X}_{i,N} \right) \right|: 1 \leqslant f_i < g_i \leqslant N \right\}.
\end{align*}}
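The componentwise estimator above can be computed from cumulative sums of the centred series. The sketch below uses a brute-force $O(N^2)$ search over all pairs $1\leqslant f<g\leqslant N$, which is adequate for transect lengths of around 200; it is an illustration, not the authors' implementation.

```python
import numpy as np

def epidemic_changepoints(x):
    """(f_hat, g_hat): the pair 1 <= f < g <= N maximising
    |sum_{t=f+1}^{g} (x_t - xbar_N)|, via cumulative sums."""
    N = len(x)
    C = np.concatenate([[0.0], np.cumsum(x - x.mean())])  # C[k] = sum_{t<=k}
    best, fg = -1.0, (1, 2)
    for f in range(1, N):
        for g in range(f + 1, N + 1):
            val = abs(C[g] - C[f])
            if val > best:
                best, fg = val, (f, g)
    return fg

def epidemic_residuals(x, f, g):
    """Residuals e_hat(t) = x_t - mu_hat - Delta_hat * 1{f < t <= g}."""
    mu = np.r_[x[:f], x[g:]].mean()      # baseline estimated outside the region
    delta = x[f:g].mean() - mu           # level shift estimated inside the region
    e = x - mu
    e[f:g] -= delta
    return e
```

The residuals $\widehat{e}_i(t)$ obtained this way are then fed into the long-run variance estimation described next.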
{\color{black} In the dependent case, } we estimate the long-run variances $\widehat{\sigma}_i^2$, $i=1,\ldots,d$, by the flat-top estimator with automatic bandwidth selection as proposed by \cite{politis2003adaptive}
{\color{black} based on the estimated residuals}.
The long-run covariance matrix {\color{black} is then estimated} by the corresponding diagonal matrix $\widehat{\Sigma}=\mbox{diag}(\widehat{\sigma}_1^2,\ldots,\widehat{\sigma}_d^2)$.
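For illustration, a long-run variance estimator with the flat-top (trapezoidal) kernel can be sketched as follows. Unlike \cite{politis2003adaptive}, where the bandwidth is selected automatically from the sample correlogram, the bandwidth $M$ is here supplied by hand (a simplifying assumption).

```python
import numpy as np

def flat_top_lrv(e, M):
    """Long-run variance sum_h gamma(h), estimated with the flat-top kernel
    lam(t) = 1 for |t| <= 1/2 and 2(1 - |t|) for 1/2 < |t| <= 1."""
    e = np.asarray(e, dtype=float)
    e = e - e.mean()
    N = len(e)
    gamma = np.array([e[:N - h] @ e[h:] / N for h in range(M + 1)])
    lam = np.array([1.0 if h / M <= 0.5 else 2.0 * (1.0 - h / M)
                    for h in range(M + 1)])
    return float(gamma[0] + 2.0 * np.sum(lam[1:] * gamma[1:]))
```

Applying this per component to the estimated residuals yields $\widehat{\sigma}_1^2,\ldots,\widehat{\sigma}_d^2$, and hence the diagonal estimate $\widehat{\Sigma}$.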
{\color{black} The above way of aggregating is optimal if the change vector $\boldsymbol{\Delta}:=(\Delta_1,\ldots,\Delta_d)^T$ is allowed to be completely arbitrary. Often additional structural assumptions about $\boldsymbol{\Delta}$ are made, such as sparsity in the sense of many zero components. In such situations, many different approaches exist based on the idea of using a suitable projection into a lower dimensional space. For example, \cite{wang2018high} use a sparse singular value decomposition, \cite{cho2015multiple} use thresholding, \cite{jirak2015} uses information for each component separately,
\cite{mei2010efficient} and \cite{wang2018thresholded}
use a set of possibilities for which components are non-zero. A theoretical discussion of the potential of using (appropriate) projection methods in a multivariate setting can be found in~\cite{aston2018}.
In our data example we may reasonably assume some knowledge about the (relative) decay of the concentration from one transect to the other.
This information has not been taken into account by the above multivariate statistic: more precisely, we may assume to have} information about the change direction $\boldsymbol{\Delta}/ \|\boldsymbol{\Delta}\|$, {\color{black} where $\|\cdot\|$} is the Euclidean norm. While the exact physical decrease depends on several parameters and is difficult to know precisely, at least a rough direction is known. Specifically, the concentration might first increase (keeping in mind that the plume is actually a 3D object, so that the plane might only run into it at some distance behind the source), but then it will drop. This information can be used to increase the signal-to-noise ratio by projecting onto $\widetilde{\boldsymbol{\Delta}}=(\widetilde{\Delta}_1,\ldots,\widetilde{\Delta}_d)^T$, which ideally is a multiple of $\boldsymbol{\Delta}$. In order to obtain the best signal-to-noise ratio, the data first need to be standardized by $\Sigma_A^{-1/2}$, which also alters the change direction (hence the projection direction) by a factor of $\Sigma_A^{-1/2}$. An ideal choice is $\Sigma_A=\Sigma$, where $\Sigma$ is the true covariance matrix; since this is usually difficult to obtain, we allow for misspecification {\color{black} with $\Sigma_A\neq \Sigma$ in the theory below.}
If $\widetilde{\boldsymbol{\Delta}}$ is close to a multiple of the true concentration direction, the signal-to-noise ratio will greatly improve resulting in {\color{black} more precise estimators as well as higher testing power} (see \citet{aston2018} for more details). Projecting onto $\widecheck{\boldsymbol{\Delta}}=\Sigma_A^{-1/2}\widetilde{\boldsymbol{\Delta}}/{\|\Sigma_A^{-1/2}\widetilde{\boldsymbol{\Delta}}\|}$ yields the projected time series $\{Y(\cdot)\}$ with
\begin{align*}
&\|\Sigma_A^{-1/2}\widetilde{\boldsymbol{\Delta}}\|\, Y(t)=\boldsymbol{X}(t)^T\Sigma_A^{-1}\widetilde{\boldsymbol{\Delta}}= \left(\boldsymbol{\Delta}_{\{F_{\vartheta_0}(\cdot)<t/N\leqslant G_{\vartheta_0}(\cdot)\}}\right)^T\Sigma_A^{-1}\widetilde{\boldsymbol{\Delta}}+\boldsymbol{e}(t)^T\Sigma_A^{-1}\widetilde{\boldsymbol{\Delta}},
\\
&\text{where}\quad \boldsymbol{X}(t)=(X_1(t),\ldots,X_d(t))^T, \quad \boldsymbol{\mu}=(\mu_1,\ldots,\mu_d)^T \text{ and}\\
&\boldsymbol{\Delta}_{\{F_{\vartheta_0}(\cdot)<t/N\leqslant G_{\vartheta_0}(\cdot)\}}=\left({\Delta}_{\{F_{\vartheta_0}(\cdot)<t/N\leqslant G_{\vartheta_0}(\cdot)\}}(1),\ldots,{\Delta}_{\{F_{\vartheta_0}(\cdot)<t/N\leqslant G_{\vartheta_0}(\cdot)\}}(d) \right)^T,
\\
&\text{with }{\Delta}_{\{F_{\vartheta}(\cdot)<t/N\leqslant G_{\vartheta}(\cdot)\}}(i)=\begin{cases}
\Delta_i, &F_{\vartheta}(i)<t/N\leqslant G_{\vartheta}(i),\\
0, &\text{otherwise}.
\end{cases}
\end{align*}
We define
\begin{align*}
& Y(t)=\mathbf{D}^{{\Delta},\widetilde{\Delta}}_{\vartheta}(t)+e_P(t),\\
&\text{where }\mathbf{D}^{\Delta,\widetilde{\Delta}}_{\vartheta}(t):= \frac{\left(\boldsymbol{\Delta}_{\{F_{\vartheta_0}(\cdot)<t/N\leqslant G_{\vartheta_0}(\cdot)\}}\right)^T\Sigma_A^{-1}\widetilde{\boldsymbol{\Delta}}}{\|\Sigma_A^{-1/2}\widetilde{\boldsymbol{\Delta}}\|},\qquad {e}_P(t):=\frac{\boldsymbol{e}(t)^T\Sigma_A^{-1}\widetilde{\boldsymbol{\Delta}}}{\|\Sigma_A^{-1/2}\widetilde{\boldsymbol{\Delta}}\|}.
\end{align*}
By the above assumptions the function $\vartheta \mapsto \mathbf{D}^{{\Delta},\widetilde{\Delta}}_{\vartheta}(\cdot)$ is continuous and $t \mapsto \mathbf{D}^{{\Delta},\widetilde{\Delta}}_{\vartheta}(t)$ is left-continuous.
In the classical epidemic setting with $\vartheta=(\lambda_1,\lambda_2)$ and $F_{\vartheta}(i)=\lambda_1$, $G_{\vartheta}(i)=\lambda_2$, the projected time series also has an epidemic change. However, in the general model~\eqref{eq_model} it exhibits a gradual (epidemic) change (see Figure \ref{abrupt_vs_gradual_change}). More precisely, if $\widetilde{\boldsymbol{\Delta}}$ is correct and a diagonal covariance matrix $\Sigma_A$ is used, then the signal part has the following shape (multiplied by a constant indicating the strength of the change) in rescaled time $s=t/N$:
\begin{align*}
\mathbf{D}^{\widetilde{\Delta},\widetilde{\Delta}}_{\vartheta}(s)=\|\Sigma_A^{-1/2}\widetilde{\boldsymbol{\Delta}}\|\,\sum_{i=1}^d{\widecheck{\Delta}}_i^2{\mathds{1}}_{\{F_{\vartheta}(i)<s\leqslant G_{\vartheta}(i)\}}
\end{align*}
\begin{figure}[b]
\begin{subfloat}[Simulated multivariate data under $H_1$, i.e.\ with abrupt epidemic mean changes in every component.]
{\includegraphics[width=3in]{plots/simulated_data_under_H1_X}}
\end{subfloat}
\begin{subfloat}[Resulting univariate sequence with a gradual epidemic mean change.]
{\includegraphics[width=3in]{plots/simulated_data_under_H1_Y}}
\end{subfloat}
\caption{Simulated multivariate data $\boldsymbol{X}(t)$ under $H_1$ and resulting univariate sequence $Y(t)$.}
\label{abrupt_vs_gradual_change}
\end{figure}
{\color{black} The projection-based aggregation thus results in}
\begin{align*}
{\color{black} A^P(\vartheta)} &= \left| \sum\limits_{t=1}^{N} \left( \mathbf{D}_{\vartheta}(t/N) -\frac{1}{N} \sum\limits_{l=1}^{N} \mathbf{D}_{\vartheta}(l/N) \right) Y(t) \right|
= \left| \sum\limits_{t=1}^{N} \mathbf{D}_{\vartheta}(t/N) ( Y(t)-\bar{Y}_N) \right|,
\end{align*}
where $\mathbf{D}_{\vartheta}=\mathbf{D}_{\vartheta}^{\widetilde{\Delta},\widetilde{\Delta}}$. {\color{black} We note that this statistic is related to the one by \citet{extreme_gradual} and \citet{limitdistr_gradual} that was obtained as the likelihood ratio statistic for a (non-epidemic) gradual change with a given polynomial slope.}
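Putting the projection steps together, $A^P(\vartheta)$ can be sketched numerically as follows; the function signature and the handling of $\Sigma_A$ are illustrative, following the definitions above.

```python
import numpy as np

def projection_stat(X, F, G, delta_tilde, Sigma_A):
    """A^P(theta): project the data onto Sigma_A^{-1} delta_tilde and
    correlate the centred projected series Y with the template D_theta."""
    d, N = X.shape
    w = np.linalg.solve(Sigma_A, delta_tilde)    # Sigma_A^{-1} delta_tilde
    norm = np.sqrt(delta_tilde @ w)              # ||Sigma_A^{-1/2} delta_tilde||
    Y = (X.T @ w) / norm                         # projected series Y(t)
    t = np.arange(1, N + 1) / N                  # rescaled time t/N
    D = np.zeros(N)
    for i in range(d):
        region = (t > F(i)) & (t <= G(i))
        D += delta_tilde[i] * w[i] * region      # (Delta_region)^T Sigma_A^{-1} Delta
    D /= norm
    return float(abs(np.sum(D * (Y - Y.mean()))))
```

When the template $\widetilde{\boldsymbol{\Delta}}$ and the candidate $(F_\vartheta, G_\vartheta)$ match the true change, the statistic correlates the gradual epidemic signal in $Y$ with itself, which is what drives the gain in signal-to-noise ratio.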
{\color{black}
Based on these two versions of aggregating information across different transects, we derive estimators for the source location in the next section. In the context of change point detection there is a strong connection between estimators and tests, in the sense that test statistics are often obtained as the maximum over all possible parameters $\vartheta$, while estimators are obtained as the point $\vartheta$ that maximizes the corresponding test statistic. Also, if a test statistic has large power for a given alternative, then the corresponding estimator will typically be more precise. Therefore, in Appendix~\ref{app_test} we detail properties of the corresponding test statistics, parts of which are also needed to prove consistency of the corresponding estimators.
}
\subsection{Estimation of change points/gas emission source}\label{section_est_cp}
In classical change point procedures, such as the ones discussed in the previous section, natural estimators for the location of the change point can be obtained by looking at the point where the maximum is obtained.
Similarly, in the setting which we consider, the parameter maximizing the statistic is an estimator for the true parameter value:
\begin{align*}
& \widehat{\vartheta}_M={\color{black}\arg\max_{\vartheta\in\Theta}A^M(\vartheta)}=\arg\max_{\vartheta\in\Theta}\boldsymbol{S}_{\vartheta}^T \Sigma ^{-1} \boldsymbol{S}_{\vartheta},\\
&\widehat{\vartheta}_P=
{\color{black}\arg\max_{\vartheta\in\Theta} \frac{A^P(\vartheta)}{\left(\sum_{t=1}^N\left(\mathbf{D}_{\vartheta}(t/N) -\frac 1 N\sum_{l=1}^N\mathbf{D}_{\vartheta}(l/N)\right)^2 \right)^{1/2}}}\\
&=\qquad
\arg\max_{\vartheta\in\Theta} \frac{\left| \sum\limits_{t=1}^{N} \mathbf{D}_{\vartheta}(t/N) ( Y(t)-\bar{Y}_N) \right|}{\left(\sum_{t=1}^N\left(\mathbf{D}_{\vartheta}(t/N) -\frac 1 N\sum_{l=1}^N\mathbf{D}_{\vartheta}(l/N)\right)^2 \right)^{1/2}},
\end{align*}
where $\arg\max$ is the set of all maximizing values. {\color{black} In practice some representative is used. The normalization of the projection estimator is necessary in order to obtain consistent results for the source estimation in this gradual (after projection) situation.}
In particular we obtain an estimate for a plume, where the true source can be expected to be close to the origin of that plume.
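In practice both estimators are computed by maximizing over a finite grid of candidate parameters. A minimal grid-search sketch for the projection estimator follows; the candidate grid, the signal generator `D_of`, and all names are illustrative assumptions rather than the paper's actual implementation:

```python
import numpy as np

def estimate_source(candidates, D_of, Y):
    """Grid-search sketch of the projection estimator.

    candidates : iterable of candidate parameters vartheta (a finite grid)
    D_of       : maps a candidate to the length-N projected signal D_vartheta
    Y          : length-N array of projected observations
    """
    best, best_val = None, -np.inf
    for theta in candidates:
        D = np.asarray(D_of(theta), dtype=float)
        Dc = D - D.mean()
        denom = np.sqrt(np.sum(Dc ** 2))
        if denom == 0.0:  # constant projected signal: candidate not usable
            continue
        val = abs(np.sum(Dc * Y)) / denom  # normalized A^P(vartheta)
        if val > best_val:
            best, best_val = theta, val
    return best
```

The normalization by the centered signal's norm mirrors the definition of $\widehat{\vartheta}_P$ above; the multivariate estimator $\widehat{\vartheta}_M$ would replace the criterion by $\boldsymbol{S}_{\vartheta}^T \widehat{\Sigma}^{-1} \boldsymbol{S}_{\vartheta}$.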
{\color{black} However, identifiability in a small sample situation can be weak: e.g.\ in the gas emission example only relatively few data points per transect are observed, so that many different clouds will cut each transect at almost the same locations. While each of the corresponding clouds segments the data reasonably, the actual source locations may vary by a much larger margin. In the data example, this effect can be seen by looking at the heat maps in Figures~\ref{left_traj}(c) and (d) as well as \ref{right_traj_2}, where the value of the statistic is very similar along vertical `lines' in the source area. In this case, a source location higher up in combination with a slightly smaller opening angle results in a very similar segmentation of the data.
}
{\color{black} As pointed out in Section~\ref{sec:cpa:test}, correct estimation of $\Sigma$, in particular in a time series/change point context, may be difficult. Therefore, we explicitly allow for misestimation in the theorem below by letting $\widehat{\Sigma}$ converge to some matrix $\Sigma_A$ that can be different from $\Sigma$.}
\begin{theorem}\label{theorem_est}
Let the assumptions on the errors of Theorem~\ref{theorem_null} hold. Furthermore, choose $\Theta$ such that $\vartheta_0$ is identifiable, i.e.\ there does not exist $\vartheta_1\neq\vartheta_0\in\Theta$ such that $F_{\vartheta_1}(i)=F_{\vartheta_0}(i)$ as well as $G_{\vartheta_1}(i)=G_{\vartheta_0}(i)$ for all $i=1,\ldots,d$ with $\Delta_i\neq 0$. Then, under a fixed alternative as in \eqref{eq_model} with $\Delta_i\neq 0$ for at least one $i=1,\ldots,d$, it holds:
\begin{enumerate}[(a)]
\item If $\widehat{\Sigma}\overset{P}{\longrightarrow} \Sigma_A$ for some diagonal positive definite matrix $\Sigma_A$ (not necessarily equal to $\Sigma$), the estimators based on the multivariate statistic are consistent, i.e.\ $\widehat{\vartheta}_M\overset{P}{\longrightarrow} \vartheta_0$.
\item Let the true parameter be identifiable from the projected signal in the sense that there does not exist $\vartheta_1\neq \vartheta_0$ such that $\mathbf{D}_{\vartheta_1}=a \mathbf{D}_{\vartheta_0}+b$ for some constants $a,b$. Then, if the projection direction $\widetilde{\Delta}$ (but not necessarily $\Sigma_A$) and the cloud shape are correct, the estimators based on the projection statistic are consistent $\widehat{\vartheta}_P\overset{P}{\longrightarrow} \vartheta_0$.
\end{enumerate}
\end{theorem}
For a linear plume and diagonal $\Sigma_A$, as assumed in the data example and simulation study, the identifiability condition in the above theorem holds as soon as there is a change in at least two components. However, as we shall see, in the real data example with small $N$ and varying $y$-coordinate of the source location, the differences are very small. Consequently, the surface of $\boldsymbol{S}_{\vartheta}^T \Sigma ^{-1} \boldsymbol{S}_{\vartheta}$ is very flat along the $y$-axis. This is clearly seen from the heatmaps (over the possible sources) of the statistics for the gas emission data example in Figures \ref{left_traj}(c) and (d) as well as \ref{right_traj_2}.
{\color{black} The following remark gives some additional insight into the effect of misspecification or misestimation of the covariance structure on the estimation of the source location.}
\begin{rem}\label{rem_est}
\begin{enumerate}[(a)]
\item
The assertion for the multivariate procedure also holds for non-diagonal covariance matrices $\Sigma_A$ as long as the true source location is the unique maximizer of the signal $\|\Sigma_A^{-1/2}\mathbf{H}_{\vartheta}\|$ with $\mathbf{H}_{\vartheta}=(H_{\vartheta}(1),\ldots,H_{\vartheta}(d))^T$ and $H_{\vartheta}(i)=\Delta_i \,h_{\vartheta,\vartheta_0}(i)$ as in Lemma~\ref{lemma_alt}.
\item If the cloud or the projection direction is misspecified, then the assertion for the projection statistic only holds in the sense that the best approximating source will be estimated (if it is uniquely identifiable), where the best approximating parameter $\vartheta_1$ is obtained as the maximizer of
\begin{align*}
\int_0^1\widetilde{\mathbf{D}}_{\vartheta_1}(z)\widetilde{\mathbf{D}}_{\vartheta_0}^{\Delta,\widetilde{\Delta}}(z)\,dz,\qquad \text{where }\,\widetilde{\mathbf{D}}=\frac{\mathbf{D}-\int_0^1\mathbf{D}(z)\,dz}{\left( \int_0^1(\mathbf{D}(z)-\int_0^1\mathbf{D}(s)\,ds)^2\,dz \right)^{1/2}},
\end{align*}
where for the misspecified cloud $\mathbf{D}_{\vartheta_0}^{\Delta,\widetilde{\Delta}}$ is the projected signal obtained from the correct cloud shape and change $\Delta$ when using the projection direction $\widetilde{\Delta}$, while $\mathbf{D}_{\vartheta_1}$ is the projected signal based on the supposed change direction and cloud shape that is also used in the statistic.
\end{enumerate}
\end{rem}
\subsection{Another look at the construction of the estimators}
\label{section_summary}
The model described in the previous section, along with the testing procedures and estimators, goes far beyond this particular data example. For this reason, we discuss in this section the construction of the proposed estimators as well as their strengths and weaknesses.
The methodology developed in this work is of potential use in those change point situations where a reasonable parametrization of a functional relationship between change points of different components is available, and which may depend on unknown parameters (like e.g.\ the precise shape of the cloud or its opening angle in the gas emission example). Below, we therefore shed some more light on how to customize or generalize the presented procedures to other situations and examples.
In this work we have considered the case of epidemic changes, where in each component there are precisely two changes and the means of the first and last section are equal. One can easily extend this to situations of at most one change, an example being a recession evolving through different branches of the economy without hitting all of them at the same time. In such cases, the function $F_{\vartheta}$ needs to be set equal to zero in all procedures.
Both proposed procedures rely on the functional relationship $F_{\vartheta}$ as well as $G_{\vartheta}$, which must be specified in advance {\color{black} motivated by the given data set at hand. However, by allowing for additional unknown parameters in the procedure such as e.g.\ a range of opening angles in the gas emission example, various possible shapes can be incorporated into the analysis.}
Nevertheless, there are several problems attached to this: On the one hand, a precise estimation is only possible if the true functional relationship is included in this scenario, so that one may want to use a large number of parameters $\vartheta$. On the other hand, this can make both estimation and testing much more difficult as the true signal can more easily be hidden beneath some random false signal. In testing, this means the quantiles of the null distribution may increase significantly, requiring a stronger signal for detectability. In estimation, precision may be lost due to random fluctuations. {\color{black} This effect can clearly be seen by comparing the upper panel in Figure~\ref{figure_ck_1} with the lower panel. All four pictures are based on the same signal strength, but in the upper panel the true opening angle has been used, while in the lower panel a range of opening angles is considered.} This effect is related to the usual trade-off between parametric and non-parametric methods.
Looking more closely at the assumptions required for consistency of the estimators, an identifiability assumption is needed in the sense that every parameter $\vartheta$ leads to a unique signal in terms of change points. For the projection statistic this requirement is stronger as the projected signal contains less information than the original multivariate signal. On the other hand, there is usually also less noise, which is an advantage. In practice, this identifiability issue shows by having several different sets of parameters that yield almost the same value of the statistic. This is true, in particular, if the number of parameters is large, i.e.\ fewer assumptions on the functional relationship of the change points are made. For the heat maps for the statistic at several different source locations as e.g.\ given in the last two panels of Figure \ref{left_traj} this can be seen by the large areas where the value of the statistic is particularly high. This is due to the fact that the data can similarly well be approximated by several sets of parameters (all leading to different but similar cloud shapes).
Both proposed procedures require an estimation of the inverse of the covariance matrix, which is typically challenging in practice. The corresponding change point estimation problem is usually quite robust in this respect, as also suggested by our theoretical results under misspecification. However, the testing procedure may suffer greatly. This is particularly bad for the multivariate procedure, where the Brownian bridges in the limit distribution are no longer independent, and critical values obtained under the independence assumption are no longer valid. The consequence would be potentially dishonest (either conservative or liberal) testing procedures. The projection test, on the other hand, is much more robust in this respect, as its size is unaffected by dependence between components, but it may suffer some power loss.
The main difference between the two procedures discussed in this work is that the projection method makes use of more information, requiring some knowledge about the functional relationship of the change points but also their relative strengths in each component (the absolute strength $\|\Delta\|$ does not matter). In many situations, such as in our data example, where some information about the diffusion of the gas can be used, such knowledge is available. We note that this information is not used by the multivariate statistic and, as a consequence, the signal-to-noise ratio of the projected time series is better so that both the power and the estimation precision increases. On the other hand, problems can also arise if the relative strength of the change in each component is misspecified. However, in our simulations we found the estimator of the clouds to be quite robust with respect to some mild to moderate misspecification of the relative strength of the change in each component. In the present paper, we have only worked with a precisely known decay, as an alternative one could consider to let it depend on unknowns as well.
\section{Simulations and data analysis}
\subsection{Some simulations}\label{sec:sim_study}
In this section, we illustrate the small sample properties of the above procedures by means of a simulation study.
\begin{figure}[b]
\begin{subfloat}[Multivariate $\widehat{\theta}_M$]
{\includegraphics[width=0.48\textwidth]{plots/multi_fixed_angle_delta_1iid}}
\end{subfloat}
\begin{subfloat}[Projection: $\widehat{\theta}_P$]
{\includegraphics[width=0.48\textwidth]{plots/proj_fixed_angle_delta_1iid}}
\end{subfloat}
\begin{subfloat}[Multivariate $\widehat{\theta}_M$]
{\includegraphics[width=0.48\textwidth]{plots/multi_max_angle_delta_1iid}}
\end{subfloat}
\begin{subfloat}[Projection: $\widehat{\theta}_P$]
{\includegraphics[width=0.48\textwidth]{plots/proj_max_angle_delta_1iid}}
\end{subfloat}
\caption{Estimated clouds {\color{black} for i.i.d.\ data. Upper panel: Fixed opening angle of 20 degrees, lower panel: Allowing for opening angles between $10$ and $120$ degrees.}}
\label{figure_ck_1}
\end{figure}
\begin{figure}[bp]
\begin{subfloat}[$\tau=0.1$]
{\includegraphics[width=0.48\textwidth]{plots/proj_fixed_angle_delta_1iid_cont_sigma01}}
\end{subfloat}
\begin{subfloat}[$\tau=0.3$]
{\includegraphics[width=0.48\textwidth]{plots/proj_fixed_angle_delta_1iid_cont_sigma03}}
\end{subfloat}
\caption{Estimated clouds {\color{black} for the projection statistic under misspecification of the change direction by normal errors with standard deviation $\tau$}. The same signal strength as in Figure~\ref{figure_ck_1} has been used.}
\label{figure_ck_2}
\end{figure}
{\color{black} We first focus on simulations supporting the theoretical observations in Section~\ref{section_summary}, which can best be seen for independent and identically distributed errors and by using the true variances. In a second step, time series errors are simulated with a similar autocorrelation structure as the estimated residuals from the data example. Additionally, the long-run covariances are estimated as described in Section~\ref{sec:cpa:test}, so that the simulated data is treated in exactly the same way as the gas emission data. All simulations are based on Gaussian data.} Under the alternative, an epidemic mean change whose boundaries develop according to a linear cloud (with an opening angle of $20^\circ$), $d=6, N=240$, {\color{black} is simulated}.
{\color{black} The magnitude of the change is generated as follows:}
A plane flying in a certain height over the cloud will only enter it fully at a certain distance to the source keeping in mind that the cloud is a 3-D-object. Consequently, we simulate the magnitude of the change points $\Delta_j$, $j=1,\ldots,d,$ such that it first increases quickly before decreasing again at a slower rate. This effect can also be clearly seen in the data (see Figure~\ref{Left_TransectConcentrations}).
More results dealing with different signal strengths and weight functions as well as size and power of the corresponding test procedures can be found in Section 17 of \citet{silke_diss}.
{\color{black} Our main aim is to judge the quality of the estimated cloud including the variability of the estimator. The source itself may only be very weakly identifiable because there are only relatively few data points at each transect. Thus, clouds belonging to several different potential source points may lead to similar change point locations in each of the transects. This effect can also be seen by the vertical lines in the heatmaps of the data example (Figures~\ref{left_traj}(c) and (d) as well as \ref{right_traj_2}). At each of those potential source locations clouds with varying opening angles exist which have a similarly good fit to the data. For this reason, we visualize the quality of the estimated clouds instead by plotting the estimated clouds from all $1000$ simulations in one plot together with the true cloud. }
Figure~\ref{figure_ck_1} (a) and (b) give the results under the assumption of a known fixed opening angle.
The estimators from the projection statistic are {\color{black} somewhat} more precise than from the multivariate procedure, {\color{black} i.e.\ there are fewer estimated clouds at the wrong locations}. As discussed in Section~\ref{section_summary} the projection statistic -- unlike the multivariate statistic -- uses the additional information about the direction of the change $(\Delta_1,\ldots,\Delta_d)^T$ (up to multiplicative constants indicating the strength of the signal). In the gas emission example, this corresponds to having knowledge about the relative decline of the gas as the airplane gets further and further away from the source.
In Section~\ref{section_summary} it was also discussed how the precision of the corresponding estimators ({\color{black} as well as the power of the corresponding test statistics}) can be diminished by allowing for more flexibility in the parameters defining the cloud. We check this effect empirically by not working with a fixed known opening angle but rather treating the angle as another unknown quantity. Effectively, this means that we are no longer only maximizing over the source location but also over the opening angle, resulting in a substantial increase in computational effort. {\color{black} The corresponding simulation results are given in Figure~\ref{figure_ck_1} (c) and (d). In this case, the estimators for the cloud become much less precise if applied to the same time series, such that a stronger signal is needed to obtain the same precision. Indeed, while the strength of the signal remains the same, allowing for more flexibility in modelling by means of an unknown opening angle greatly increases the noise level of the statistic, where clearly the multivariate statistic is affected more strongly.
Thus,} both methods have, as one might expect, a much diminished quality of estimation.
{\color{black} This situation also shows that the gain in precision from the use of the projection statistic can be substantial} due to the use of the additional information of the {\color{black} change direction, i.e.\ the} decay of the gas concentration with distance from the source.
{\color{black} In order to check for robustness of the projection procedure with respect to misspecification of that change direction, we contaminate the true signal strength in each transect} by i.i.d.\ normal disturbances, i.e.\ the true change in component $i$ is given by $\Delta_i+\varepsilon_i$, $\varepsilon_i\sim N(0,\tau^2)$ i.i.d., $\|\mathbf{\Delta}\|=1$, while the projection statistic is still constructed with $\Delta_i$. {\color{black} We use $\tau=0.1$ as well as $\tau =0.3$, which is already a substantial contamination because the magnitude of the change in each transect lies between $0.22$ and $0.62$, so that the contamination is of similar magnitude to the signal.} The results are given in Figure~\ref{figure_ck_2} showing that the procedure is indeed quite robust with respect to at least slight to medium deviations from the truth.
\begin{figure}[b]
\begin{subfloat}[Multivariate $\widehat{\theta}_M$]
{\includegraphics[width=0.48\textwidth]{plots/multi_fixed_angle_delta_3}}
\end{subfloat}
\begin{subfloat}[Projection $\widehat{\theta}_P$]
{\includegraphics[width=0.48\textwidth]{plots/proj_fixed_angle_delta_3}}
\end{subfloat}
\begin{subfloat}[Multivariate $\widehat{\theta}_M$]
{\includegraphics[width=0.48\textwidth]{plots/multi_max_angle_delta_3}}
\end{subfloat}
\begin{subfloat}[Projection $\widehat{\theta}_P$]
{\includegraphics[width=0.48\textwidth]{plots/proj_max_angle_delta_3}}
\end{subfloat}
\caption{Estimated clouds for dependent data. {\color{black} Upper panel: Fixed opening angle of $20$ degrees, lower panel: Allowing for opening angles between $10$ and $120$ degrees.}}
\label{figure_dep}
\end{figure}
{\color{black}
In order to assess the effect of dependence on the procedure, we use the following dependent model: Each transect is generated independently as the following MA(9) model with standard Gaussian white noise: $X_t=e_t+0.3 e_{t-1}+0.2e_{t-2}+0.1 e_{t-3}-0.1 e_{t-5}-\ldots -0.5 e_{t-9}$, which was chosen because its autocorrelation and partial autocorrelation structures look similar to what we have seen in the actual data (see Figure~\ref{figure_ACF_1} for the corresponding plots for the first transect).
Because the noise level in this model is higher than for the above independent case and because we estimate the long-run variances, we use a stronger signal of $\|\mathbf{\Delta}\|=3$. The results can be found in Figure~\ref{figure_dep}. Clearly, the precision is again better for the projection than the multivariate procedure. As before, precision is better if the true opening angle is known as opposed to having to estimate the opening angle as well.
}
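Simulating such a moving-average error process is routine; the sketch below is only an illustration. Since the displayed MA(9) formula abbreviates some of the intermediate coefficients with an ellipsis, the sketch takes the full coefficient vector as an input rather than hard-coding values:

```python
import numpy as np

def simulate_ma_transect(N, coeffs, rng):
    """Simulate one transect as an MA(q) process with standard Gaussian noise,
    X_t = e_t + theta_1 e_{t-1} + ... + theta_q e_{t-q}.

    coeffs : (theta_1, ..., theta_q); pass the paper's MA(9) coefficients here
             (the displayed formula abbreviates some of them).
    """
    theta = np.concatenate(([1.0], np.asarray(coeffs, dtype=float)))
    q = len(coeffs)
    e = rng.standard_normal(N + q)  # q extra draws serve as burn-in
    # Full convolution entry i (for i >= q) equals sum_j theta_j e_{i-j}.
    return np.convolve(e, theta, mode="full")[q:q + N]
```

By stationarity the resulting variance is $1+\sum_j \theta_j^2$, which gives a quick sanity check on the simulation.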
\subsection{Data analysis}\label{sec:data}
\begin{figure}[t]
\begin{subfloat}[ACF: First leg]
{\includegraphics[width=0.24\textwidth]{plots/eps_zeile1}}
\end{subfloat}
\begin{subfloat}[PACF: First leg]
{\includegraphics[width=0.24\textwidth]{plots/eps_zeile1_PACF}}
\end{subfloat}
\begin{subfloat}[ACF: \mbox{Across components}]
{\includegraphics[width=0.24\textwidth]{plots/eps_spalte1}}
\end{subfloat}
\begin{subfloat}[PACF: \mbox{Across components}]
{\includegraphics[width=0.24\textwidth]{plots/eps_spalte1_PACF}}
\end{subfloat}
\caption{ACF and PACF for the estimated errors from the left trajectory}
\label{figure_ACF_1}
\end{figure}
We now return to the gas emission data example outlined in Section \ref{sec:intro}.
{\color{black} This data example has already been analyzed by \cite{Hirst2013} who adopt a Bayesian approach. In brief, they model atmospheric point concentration measurements as the sum of a spatially and temporally smooth atmospheric background, augmented by concentrations from local sources.
Source emission rates are modelled by a Gaussian model taking possible multiple sources into account by means of a mixture model, whilst the atmospheric background concentration component is represented by a Markov random field. A reversible jump MCMC inference procedure is then used to provide point and uncertainty estimates for the plume origin. This approach also incorporates an optimisation approach to provide an initial point solution for inversion. These, and other necessary steps, combine to result in a computationally intensive procedure that relies on a multitude of parametric assumptions.
In contrast, our approach makes fewer computational demands and requires only quite mild assumptions while still giving good results. While we only analyse the case of a single source, our procedure can in principle be adapted to allow for multiple sources by appropriately defining change regions (which then no longer need to be epidemic). }
{\color{black} As pointed out in Section~\ref{sec:cpa:test}, } a critical point for many change point tests and the corresponding estimators is the estimation of the long-run covariance matrix $\boldsymbol{\Sigma}$, which is a difficult problem statistically. There are two key aspects of the problem: first, the time dependency and, second, the large dimension of the covariance matrix with no structural assumption available. In the case of the gas emission data, the time dependency is not negligible while the dependence between different components of the error process is very weak. By way of illustration, Figures~\ref{figure_ACF_1} (a) and (b) show the empirical autocorrelation function (ACF) and partial autocorrelation function (PACF) for the first {\color{black} transect} of the estimated error sequence. These plots clearly indicate the presence of dependence. Equivalent analyses for the other components indicate similar behaviour; see for example Chapter 18 in \cite{silke_diss}. In contrast, Figures~\ref{figure_ACF_1} (c) and (d) show the ACF and PACF for the estimated errors from one leg to the next at the exemplary time point~1, where no dependence is visible. Again, a similar picture is obtained for other time points. Using the ACF and PACF in the latter context makes sense keeping in mind that the original data was indeed a one-dimensional time series that has been transformed into a multivariate time series for the purpose of the data analysis. As such, the vector of observations at each time point is indeed a thinned version of that original time series. This leads us to estimate only the long-run variances (i.e.\ the diagonal elements of $\boldsymbol{\Sigma}$) while setting the off-diagonal elements to zero.
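One standard way to estimate such a long-run variance for a single component is a kernel estimator; the Bartlett-kernel version below is a common textbook choice shown purely as an illustration (the paper's own estimator is the one described in Section~\ref{sec:cpa:test}, and the bandwidth choice here is an assumption of ours):

```python
import numpy as np

def long_run_variance(x, bandwidth):
    """Bartlett-kernel estimate of the long-run variance of one component,
    i.e. one diagonal entry of Sigma. Illustrative only."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    lrv = np.dot(xc, xc) / n  # lag-0 autocovariance
    for k in range(1, min(bandwidth, n - 1) + 1):
        gamma_k = np.dot(xc[k:], xc[:-k]) / n  # lag-k autocovariance
        lrv += 2.0 * (1.0 - k / (bandwidth + 1.0)) * gamma_k
    return lrv
```

For i.i.d.\ errors this reduces (up to sampling error) to the ordinary variance, while under positive serial dependence, as seen in the transects here, it is inflated accordingly.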
\begin{figure}[!b]
\begin{subfloat}[Estimated cloud based on the multivariate procedure]
{\includegraphics[width=0.48\textwidth]{plots/data_multiteststatistic_independent_griddedreht33_alpha0_linear_maxopeningangle}}
\end{subfloat}
\begin{subfloat}[Estimated cloud based on the projection procedure]
{\includegraphics[width=0.48\textwidth]{plots/data_projectionstatistic_independent_griddedreht33_alpha0_linear_maxopeningangle_beta05}}
\end{subfloat}
\begin{subfloat}[Heatmap from the multivariate procedure]
{\includegraphics[width=0.98\textwidth]{plots/heatmap_multi.jpg}}
\end{subfloat}
\begin{subfloat}[Heatmap from the projection procedure]
{\includegraphics[width=0.98\textwidth]{plots/heatmap_proj.jpg}}
\end{subfloat}
\caption{Data analysis for the left trajectory}
\label{left_traj}
\end{figure}
In the following, we consider the left trajectory, while the analysis of the right trajectory is moved to the appendix (see Section~\ref{sec_right}). {\color{black} While the left trajectory can be considered well specified, the right trajectory is somewhat misspecified as the wind seems to have changed at some point. As such it gives some additional insight into the effect of misspecification.}
In both cases, we use a linear cloud as an approximation and do not have any knowledge about the actual opening angle. Thus we include a range of opening angles within the estimation procedure. While the simulations have shown that this can lead to some loss of precision, for the data analysis it does not seem to cause any problems. Figure~\ref{left_traj} (a) gives the corresponding cloud estimate for this data example and visual inspection suggests that a good fit has been obtained. {\color{black}
On the other hand, the figure also shows that the assumption of a constant mean within the change region is not met by the actual data at hand, where the concentration slowly increases before decreasing again.
While the methodology of this paper could be adapted to this situation, this requires additional model assumptions on the shape of this gradual change. A substantial improvement of this approach can only be expected if such information is indeed available which is typically not the case, so that we decided once again to work with the simpler model.
}
Keeping in mind that the main objective is to get a good approximation of the source of the gas emission, it is also worthwhile considering a heat map of the values of the statistic for each possible source location considered (where the maximum over all opening angles is given). This heat map for the left trajectory can be found in Figure~\ref{left_traj} (c). It becomes apparent that the statistic takes particularly high values in a vertical area in the middle, where the differences are indeed very small. Effectively, all of those places can be considered possible source locations, so that this heat map can be used as a search map for the gas emission source. Furthermore, the reason why the values of the statistic are very close within that area is the fact that a source closer to the lower end with a larger opening angle can approximate the signal given by the discrete data set similarly well as a source closer to the upper end of the search area with a smaller opening angle. In a sense, this is related to an identifiability issue and to what would be a flat likelihood surface in the context of maximum likelihood estimation.
For the projection estimator, we need to make additional assumptions on the decay of the gas intensity from one leg of the flight to the next. To this end, we use a function that first increases over the first legs before slowly decreasing to a similar level (for details please see \cite[Figure 17.1]{silke_diss}). We use an initially increasing level because the cloud is a 3D object and the airplane flies at a certain height, so that the airplane first has to enter the cloud, leading to initially increasing levels before the dispersion effect of the gas leads to a slower decay again. This kind of behaviour can indeed be seen in both trajectories.
Figure~\ref{left_traj} (b) shows the estimated cloud while Figure~\ref{left_traj} (d) shows the heat map. While the source of the cloud picked by the projection estimator is different from the one picked by the multivariate estimator, the corresponding clouds do divide the time series in a very similar manner. This is a similar effect as described above in the context of the heat map related to weak identifiability. Considering the heat map of the projection statistic, the area with high values of the statistic (that could be searched) is similar but smaller, which could indicate that the use of the additional information does indeed lead to a more precise estimation.
\section{Conclusions}\label{sec:conclusions}
The methodology developed in this work takes a different view on multivariate change point analysis than the classical literature while including those situations as a special case. In the setting we consider, the change points across components no longer have to be aligned but can follow some kind of functional relationship. It is not necessary to know the functional relationship exactly, but some reasonable parametrization needs to be available, even if it depends on unknown parameters such as, for example, the precise shape of the cloud or its opening angle.
The main contribution of the paper is the derivation of two different estimators for the unknown parameters of the functional relationship, at least some of which are the parameters of interest (such as the source of the cloud in the gas emission example). The first estimator only uses the parametric information of the functional relationship but allows for arbitrary change directions (as denoted by $\boldsymbol{\Delta}/\|\boldsymbol{\Delta}\|$ in this paper). As such it is related to classical estimators for multivariate change point situations with the difference that it is no longer the change points that are of interest but rather the underlying parameters of the functional parametrization of the changes. The second estimator relies on the additional knowledge of the change direction (not the magnitude of the change) and is related to classical change point estimators after an appropriate projection of the data into one dimension. This can greatly increase the precision of the estimators but at the risk of inconsistency or at less precision if that direction is not correct. Some simulations suggest that the procedure is not too sensitive with respect to mild deviations from the truth.
As a by-product we obtain two testing procedures, each related to one of the two estimators, for which we derive the limit distribution under the null hypothesis as well as show consistency under alternatives. While these tests are not of immediate interest in the context of the gas emission example, they may be of independent interest in other situations.
For both estimators and both testing procedures, only very mild nonparametric assumptions on the error sequence are required and the case of dependent errors is also taken into account. We do not make any specific assumptions on this dependence but only need the validity of a functional central limit theorem which has been shown for many different dependent time series and weak dependency concepts.
The development of the methodology is motivated by an application of remote detection and location of the source of gas emissions based on aerially sensed data, and throughout the paper the development of the methodology has been explained by \mbox{means} of that data set, which is of course finally analysed with the new methodology. While the methodology gives reasonable results, it can also be deduced that the exact source location is not strongly identifiable on the basis of this kind of data set.
Finally, all methods can be adapted to different but similar applications, situations or models; for example, while an epidemic change setting is discussed in this paper, extensions to other scenarios are straightforward. Section~\ref{section_summary} explains the underlying ideas and construction principles to help with this task.
\section*{Acknowledgements}
The authors are grateful to Bill Hirst and Philip Jonathan (Shell) for several valuable conversations that helped
motivate this work, and for providing access to the landfill data. {\color{black} The authors would also like to thank Philipp Klein (Otto-von-Guericke University, Magdeburg) for identifying a coding error in an earlier version of this work.}
This work was supported by the grant `Resampling procedures for high-dimensional change point tests of dependent data' financed by the state of Baden-W\"urttemberg. In
addition, support from the Karlsruhe House of Young Scientists (KHYS) for a research visit to Lancaster (UK), the Isaac Newton Institute for Mathematical Sciences during the programme Statistical Scalability (supported by EPSRC grant numbers EP/K032208/1 and EP/R014604/1) and EPSRC (EP/N031938/1) are kindly acknowledged.
\bibliographystyle{plainnat}
Recently, we have developed \cite{lv05a,lv05b} a new formalism
for nonlinear relativistic perturbations, based on a
covariant approach \cite{Hawking:1966qi,ellis} and inspired by
the work of Ellis and Bruni \cite{Ellis:1989jt}. A key quantity
in our formalism is the covector $\zeta_a$, a linear combination
of the spatial gradients of the energy density $\rho$ and of
$\alpha$, the local number of e-folds, or integrated expansion
along each worldline of the fluid.
As we showed in \cite{lv05a,lv05b}, $\zeta_a$, for a barotropic
fluid, obeys a remarkably simple {\it conservation equation},
which is {\it exact and fully nonlinear}. In the linear
approximation, this conservation equation reduces to the usual
conservation law for the linear curvature perturbation on uniform
energy density hypersurfaces, usually denoted $\zeta$. Thus, our
covector $\zeta_a$ can be seen as a covariant and nonlinear
generalization of the usual $\zeta$.
It must be stressed that, although our initial motivations were
related to cosmology, our formalism applies, in fact, to any
relativistic fluid, whatever the underlying geometry. Thus, in the
following we will not assume any particular type of spacetime,
unless specified.
The purpose of the present work is to clarify the geometrical
meaning of $\zeta_a$, and to extend our formalism in two new
directions. Whereas our formalism has been developed so far for a
{\it perfect} fluid, we consider here its extension to the case of
dissipative relativistic fluids.
Moreover, we consider the possibility of having
several different {\em interacting} fluids.
The covariant formalism of Ellis and Bruni \cite{Ellis:1989jt}
was first extended
to a dissipative multifluid system
in \cite{Hwang:1990am,Dunsby:1991vv}, where
both the exact and the linear form of perturbation equations were given
for noninteracting fluids. It was later generalized to the case of interacting
fluids in \cite{Dunsby:1991xk} for {\em linear} perturbations.
The extension to interacting fluids is particularly
useful in cosmology, where several types of matter coexist. As
a typical example, one can mention the analysis of the
cosmic microwave background
anisotropies, which has been for instance considered
within the covariant formalism and {\em at linear order} in
\cite{Challinor}.
In this paper we work in the framework of our nonlinear formalism
\cite{lv05a,lv05b}, based on covectors such as $\zeta_a$. As in
the single perfect fluid case, we show that it is possible to
derive simple, exact, and fully nonlinear evolution equations for
various covectors, constructed as linear combinations of the
spatial gradients of scalar quantities that characterize the
fluids, such as the energy density or the particle number density, and of
the local number of e-folds $\alpha$.
The equations we obtain
represent nonlinear generalizations
of linear perturbation equations, some of which have been already
studied in the context of linear cosmological perturbations (see,
in particular,
\cite{Malik:2002jb,Malik:2004tf}). However, our equations are
covariant, i.e., independent of the coordinate system, and nonlinear, and it
is straightforward
to linearize or expand them up to second, or higher,
order in the perturbations.
Furthermore, our covariant approach allows one to identify easily which
properties belong specifically to the linear or second order expansion,
and which properties remain valid at higher orders.
This work is organized as follows. In Sec.~\ref{sec:covariant} we
introduce the covariant formalism for dissipative fluids while in
Sec.~\ref{sec:covariantpert} we discuss the geometrical meaning of
the nonlinear covector $\zeta_a$. In Sec.~\ref{sec:identity} we
derive an identity that can be used to construct conservation
equations for various nonlinear covectors, representing the
perturbations of scalar quantities of a dissipative fluid. The
conservation equations for these quantities are derived in
Sec.~\ref{sec:nonconservation}, while in
Sec.~\ref{sec:interacting} we extend our formalism to interacting
fluids. Finally, in Sec.~\ref{sec:conclusion}, we conclude.
\section{Covariant formalism}
\label{sec:covariant} In this section, we briefly review the
covariant description for a dissipative relativistic fluid. There
is a substantial literature on dissipative relativistic fluids,
which is reviewed, for instance, in \cite{maartens}. The extension
of irreversible thermodynamics to relativistic fluids is hampered
by subtle issues. In particular, the first extensions, due to
Eckart in 1940 and to Landau and Lifshitz in the 1950s, suffer
from non-causal behavior. An extended theory, which does not
suffer from this problem, was developed by Israel and Stewart
\cite{Israel:1979wp}. In the present work, we will not need to
go into the details of these various formulations; instead, we
refer the reader to \cite{maartens}, whose presentation and notation
will be followed here.
We first define the unit four-velocity of the fluid $u^a$ as the
average velocity of the fluid particles. This means that $u^a$ is
proportional to the particle current $n^a$, which can thus be
written as
\begin{equation}
n^a=n u^a, \qquad u^a u_a=-1,
\end{equation}
where $n$ is the particle number density.
In addition to the particle number density $n$, the fluid is
characterized by local equilibrium scalars: the energy density
$\rho$, the pressure $p$, the entropy $S$ and the temperature
$T$. In
general the effective pressure deviates from the local equilibrium
pressure so that $p_{\rm eff}= p+\Pi$. The energy-momentum
tensor can be written in the form
\begin{equation}
T_{ab}=\rho u_a u_b+ \left(p+\Pi\right) h_{ab}+ q_{a} u_{b}+q_b
u_a +\pi_{ab}, \label{emt}
\end{equation}
where $h_{ab}$ is the projection tensor orthogonal to the fluid velocity
$u^a$,
\begin{equation}
h_{ab}=g_{ab}+u_a u_b, \quad \quad (h^{a}_{\ b} h^b_{\ c}=h^a_{\
c}\, , \quad h_a^{\ b}u_b=0),
\end{equation}
and where the energy flow $q_a$ and the anisotropic stress
$\pi_{ab}$ satisfy the following properties:
\begin{equation}
q_au^a=0, \quad \pi_{ab}=\pi_{ba}, \quad \pi_a^{\ a}=0, \quad \pi_{ab}u^b=0.
\end{equation}
It is useful to introduce the familiar decomposition
\begin{equation}
\nabla_b u_a=\sigma_{ab}+\omega_{ab}+{1\over 3}\Theta
h_{ab}-\dot{u}_a u_b, \label{decomposition}
\end{equation}
with the (symmetric) shear tensor $\sigma_{ab}$, and the
(antisymmetric) vorticity tensor $\omega_{ab}$; the volume
expansion, $\Theta$, is defined by
\begin{equation}
\Theta \equiv \nabla_a u^a,
\end{equation}
while $\dot u^a$ is the acceleration, with the dot denoting the
covariant derivative projected along $u^a$, i.e., $\dot{} \equiv
u^a \nabla_a $.
We also introduce the covariant spatial derivative, which is
defined as
\begin{equation}
\label{Da} D_a \chi \equiv h_a^{\ b}\nabla_b \chi, \quad \quad D_a
\chi_b=h_a^{\ c} h_b^{\ d} \nabla_c \chi_d, \quad \quad {\rm etc,}
\end{equation}
where $\chi$ and $\chi_a$ are a generic scalar and covector, respectively.
As illustrated in \cite{Ellis:1989jt}, the covariant spatial
derivative is particularly useful to deal with cosmological
perturbations in a {\sl covariant} way, as an alternative to the
standard coordinate based approach.
The
conservation of the energy-momentum tensor,
\begin{equation}
\label{conserv} \nabla_a T^a_{\ b}=0,
\end{equation}
yields, by projection along $u^a$, the energy conservation
equation,
\begin{equation}
\dot\rho + \Theta (\rho + p+\Pi)+ \pi^b_{\ a} \nabla_b u^a +
\nabla_a q^a + q^a \dot u_a=0.
\end{equation}
Using the decomposition (\ref{decomposition}) and the definition
of the spatial covariant derivative (\ref{Da}), one can rewrite
the above equation in the form
\begin{equation}
\dot\rho + \Theta (\rho + p)= {\cal D}, \quad \quad {\cal
D}=-\left( \Theta\Pi+\pi^{ab} \sigma_{ab} + D_a q^a + 2 q^a \dot
u_a\right). \label{energy_cons}
\end{equation}
The scalar quantity ${\cal D}$ thus contains all the dissipative
terms and vanishes for a perfect fluid.
In irreversible thermodynamics, the entropy is not conserved but
increases according to the second law of thermodynamics. This can
be expressed by the inequality
\begin{equation}
\nabla_a S^a\geq 0,
\end{equation}
where $S^a$ is the entropy current.
One usually writes $S^a$ in the form
\begin{equation}
S^a=Snu^a+\frac{R^a}{T},
\end{equation}
where $R^a$ is a dissipative term. As discussed in \cite{maartens},
the explicit form for $R^a$ varies
according to the formalisms which have been introduced in the
literature. The entropy $S$
and temperature $T$ are the local equilibrium quantities, which
are related via the Gibbs equation,
\begin{equation}
TdS=d\left(\frac{\rho}{n}\right)+p\, d\left(\frac{1}{n}\right).
\end{equation}
This implies, in particular,
\begin{equation}
nT\dot S=\dot\rho+\Theta \left(\rho+p\right),
\end{equation}
where we have assumed conservation of the particle number, i.e.
\begin{equation}
\nabla_an^a=0 \quad \Leftrightarrow \quad \dot n+\Theta n=0.
\end{equation}
Using the energy conservation equation (\ref{energy_cons}),
this can be rewritten
as
\begin{equation}
nT\dot S={\cal D}.
\end{equation}
In terms of the entropy density $s=nS$, this gives, using once more
the particle conservation equation,
\begin{equation}
\label{s_dot}
\dot s+\Theta s=\frac{{\cal D}}{T}.
\end{equation}
Using this equation, one
gets
\begin{equation}
\nabla_a S^a=\frac{{\cal D}}{T}+\nabla_a\left(\frac{R^a}{T}\right).
\end{equation}
In the case of a non-dissipative fluid,
the right hand side is zero and the
above relation then expresses the conservation of entropy.
\section{Nonlinear covector}
\label{sec:covariantpert}
Here we illustrate the geometrical meaning of the nonlinear
covector that we introduced in \cite{lv05a,lv05b}. This
interpretation can easily be extended to the other covectors of
the same form that will be defined in this paper.
As we showed in our recent works \cite{lv05a,lv05b} (see also
\cite{Lyth:2003im,Lyth:2004gb,Rigopoulos:2003ak}
for other recent formulations of
conserved nonlinear perturbations), a crucial
quantity to define conserved nonlinear perturbations is the
spatial gradient of the local number of e-folds, $\alpha$, which
is defined as the integration of $\Theta$ along the fluid world
lines with respect to the proper time $\tau$,
\begin{equation}
\label{alpha_def} \alpha \equiv {1\over 3}\int d\tau \, \Theta.
\end{equation}
It follows that
\begin{equation}
\Theta = 3 \dot\alpha = 3 u^a\nabla_a\alpha .
\end{equation}
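As a simple illustration, in a spatially flat FLRW spacetime with metric $ds^2=-dt^2+a^2(t)\,\delta_{ij}\,dx^i dx^j$ and for comoving observers, one has $\Theta=3\dot a/a$, so that the definition (\ref{alpha_def}) yields
\begin{equation}
\alpha=\int dt\,\frac{\dot a}{a}=\ln a+{\rm const},
\end{equation}
i.e., $\alpha$ indeed counts the local number of e-folds of expansion along each worldline.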
In \cite{lv05a,lv05b}, we have introduced, for a {\em perfect} fluid,
a linear combination
of the spatial covariant derivatives of $\alpha$ and $\rho$,
\begin{equation}
\zeta_a= D_a \alpha-\frac{\dot\alpha}{\dot \rho} D_a \rho.
\label{zeta2} \label{zeta}
\end{equation}
This covector is fully conserved on all scales for
adiabatic perturbations, and can be seen as the nonlinear
generalization of the usual $\zeta$.
In Sec.~\ref{sec:nonconservation} we will rederive the conservation
equation for this quantity.
At this stage, it is instructive to give a graphical
representation of $\zeta_a$. To work with a scalar quantity rather
than a covector, let us consider an infinitesimal vector $e^a$
which is orthogonal to the fluid four-velocity $u^a$ at some
spacetime point $q$. One can then write
\begin{equation}
e^a \zeta_a=\Delta\alpha-\frac{\dot\alpha}{\dot \rho}\Delta\rho,
\end{equation}
with
\begin{equation}
\Delta\alpha\equiv e^a\nabla_a \alpha, \quad \quad
\Delta\rho\equiv e^a\nabla_a \rho.
\end{equation}
As shown in Fig.~\ref{fig},
starting from our fiducial reference point $q$, the infinitesimal
quantity $\Delta \alpha$ corresponds to the shift in $\alpha$ when
one goes from $q$ to the neighboring point $q'$ indicated by
$e^a$. Thus $q'$ belongs to the hypersurface
$\Sigma_{\alpha+\Delta\alpha}$ characterized by the constant value
$\alpha+\Delta\alpha$. Similarly, $q'$ belongs to the constant
energy density hypersurface $\Sigma_{\rho+\Delta\rho}$. Now, these
two hypersurfaces also intersect the fluid worldline that goes
through $q$, but in general the intersections differ.
\begin{figure}
\begin{center}
\includegraphics[height=15pc]{fig.eps}
\caption{Geometric interpretation of $\zeta_a$.} \label{fig}
\end{center}
\end{figure}
The quantity $e^a \zeta_a$ quantifies, in terms of the number of
e-folds, the separation between these two intersection points.
Indeed, the
proper time interval between $q$ and the intersection of the
hypersurface $\Sigma_{\alpha+\Delta\alpha}$ with the worldline of $q$
is $\Delta \tau_\alpha\equiv {\Delta \alpha}/{\dot\alpha}$ while the
proper time interval between $q$ and the intersection of the
hypersurface $\Sigma_{\rho+\Delta\rho}$ with the worldline of $q$
is $\Delta \tau_\rho\equiv {\Delta \rho}/{\dot\rho}$. The difference
between these two proper time intervals is shown in the figure,
and the corresponding variation of $\alpha$ during this time
difference is $e^a \zeta_a= \dot \alpha (\Delta \tau_\alpha -
\Delta \tau_\rho)$.
\section{An identity for nonlinear covectors}
\label{sec:identity}
In this section we will derive a general identity which we will
use later for various cases. Namely, we will show that if one
starts with an equation of the form
\begin{equation}
\label{conserv_f}
\dot f+\Theta g=0,
\end{equation}
where $f$ and $g$ are two scalar quantities and, as before, the
dot denotes the derivative along $u^a$, one finds the identity
\begin{equation}
{\cal L}_u \left(D_a \alpha- \frac{\dot\alpha}{\dot f}D_a f\right)= 3
\frac{\dot\alpha^2}{\dot f}\left( D_a g- \frac{\dot g}{\dot
f}D_af\right). \label{D_conserv_f3}
\end{equation}
To show this, let us start by rewriting Equation (\ref{conserv_f})
as
\begin{equation}
\dot\alpha+\frac{\dot f}{3g}=0.
\end{equation}
Taking the spatially projected derivative, one gets
\begin{equation}
\label{D_conserv_f}
D_a\dot\alpha- \frac{\dot\alpha}{\dot f}D_a\dot f+\frac{\dot\alpha}{g}
D_ag=0.
\end{equation}
We now wish to interchange the time derivative and the spatial
gradient. In order to do so, it is convenient to introduce the
Lie derivative along $u^a$, ${\cal L}_u$. Its action on a covector is given by
the expression
\begin{equation}
{\cal L}_u \chi_a \equiv u^c \nabla_c \chi_a + \chi_{c} \nabla_a u^c =
u^c
\partial_c \chi_a + \chi_{c} \partial_a u^c. \label{Lie_def}
\end{equation}
For a scalar, ${\cal L}_u
f=\dot f$. The Lie derivative along $u^a$ and the spatial gradient
$D_a$ do not commute. Instead, one finds \cite{lv05a,lv05b}
\begin{equation}
\label{Ddot}
D_a\left(\dot f\right) =
{\cal L}_u \left(D_a f\right)- \dot u_a \dot f.
\end{equation}
Applying this identity both to $\alpha$ and $f$, one obtains
\begin{equation}
D_a\dot\alpha- \frac{\dot\alpha}{\dot f}D_a\dot f
= {\cal L}_u \left(D_a \alpha\right)- \frac{\dot\alpha}{\dot f}{\cal L}_u \left(D_a f\right).
\end{equation}
Substituting in (\ref{D_conserv_f}), one finds
\begin{equation}
\label{D_conserv_f2}
{\cal L}_u \left(D_a \alpha- \frac{\dot\alpha}{\dot f}D_a f\right)
+\Tdot{\left(\frac{\dot\alpha}{\dot f}\right)} D_a f-3
\frac{\dot\alpha^2}{\dot f}
D_ag=0,
\end{equation}
where we have used Eq.~(\ref{conserv_f}) to rewrite the last term.
Moreover, Eq.~(\ref{conserv_f}) implies
\begin{equation}
\Tdot{\left(\frac{\dot\alpha}{\dot f}\right)}=3 \dot{g}\,
\frac{\dot\alpha^2}{\dot f^2},
\end{equation}
\end{equation}
which can be used to rewrite Eq.~(\ref{D_conserv_f2}) in the form
given in Eq.~(\ref{D_conserv_f3}).
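As a simple check of this identity, note that when $g$ is a function of $f$ alone, $g=g(f)$, the right hand side of Eq.~(\ref{D_conserv_f3}) vanishes identically, since
\begin{equation}
D_a g-\frac{\dot g}{\dot f}\,D_a f=g'(f)\,D_a f-\frac{g'(f)\,\dot f}{\dot f}\,D_a f=0,
\end{equation}
so that the covector $D_a\alpha-(\dot\alpha/\dot f)\,D_a f$ is then conserved along the fluid worldlines.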
For practical purposes, it is useful to note that, for the
covectors
\begin{equation}
\label{zeta_f}
\zeta_a^{(f)}=D_a \alpha- \frac{\dot\alpha}{\dot f}D_a f,
\qquad
\Gamma_a^{(g,f)}=D_a g- \frac{\dot g}{\dot f}D_af,
\end{equation}
which are defined as very particular linear combinations of {\it
spatially projected} gradients, one can replace the latter by {\it
ordinary} gradients and write
\begin{equation}
\label{partial}
\zeta_a^{(f)}=\partial_a \alpha- \frac{\dot\alpha}{\dot f}\partial_a f,
\qquad
\Gamma_a^{(g,f)}=\partial_a g- \frac{\dot g}{\dot f}\partial_af.
\end{equation}
This is a consequence of the identity
\begin{equation}
D_a \chi = \partial_a \chi + u_a \dot \chi,
\end{equation}
valid for any scalar quantity $\chi$.
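For completeness, this identity follows directly from the definition (\ref{Da}) of the spatially projected gradient:
\begin{equation}
D_a \chi=h_a^{\ b}\nabla_b\chi=\left(\delta_a^{\ b}+u_a u^b\right)\partial_b\chi=\partial_a\chi+u_a\dot\chi.
\end{equation}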
\section{Nonlinear (non-)conservation equations}
\label{sec:nonconservation}
It was shown in \cite{Lyth:2003im} that one can construct a
conserved cosmological perturbation associated with each quantity
whose local evolution is determined entirely by the local
expansion of the universe. There, it was also pointed out that
such a construction can be extended to the nonlinear regime,
although an explicit expression for the nonlinear perturbation
variables was not given.
Here we explicitly construct these variables and their evolution
equations. In particular, we use the identity (\ref{D_conserv_f3})
to derive conservation -- or non-con\-ser\-va\-tion -- equations
for various covectors which represent nonlinear perturbations. In
this section, we discuss these equations for the nonlinear
generalization of the curvature perturbation on hypersurfaces of
uniform number density $n$, uniform energy density $\rho$, or
uniform entropy density $s$. The corresponding covectors are
obtained immediately by replacing the quantity $f$ in
Eq.~(\ref{D_conserv_f3}) with $n$, $\rho$, or $s$, respectively.
As in \cite{lv05a,lv05b}, the curvature perturbation is replaced
here by $\alpha$, the local number of e-folds of an observer
comoving with the fluid particles.
\subsection{Particle conservation}
The particle conservation equation,
\begin{equation}
\nabla_a (n\, u^a)=0,
\end{equation}
can be rewritten as
\begin{equation}
\dot n+\Theta n=0, \label{particle_cons}
\end{equation}
which is exactly of the form (\ref{conserv_f}) with
$f=g=n$.
Equation (\ref{D_conserv_f3})
tells us immediately that the particle conservation law
yields, for the covector
\begin{equation}
\zeta_a^{(n)}= D_a \alpha -{\dot\alpha\over \dot n}D_an,
\end{equation}
the equation
\begin{equation}
{\cal L}_u\zeta_a^{(n)}=0.
\end{equation}
This result has already been given in \cite{lv05b}.
This can be generalized to the case where the number of particles
is not conserved, in which case one can write
\begin{equation}
\nabla_a (n\, u^a)=-\Gamma n,
\end{equation}
where $\Gamma$ is the decay rate, which is not supposed to be a constant here.
Equivalently, one can write
\begin{equation}
\dot n+\Theta n+\Gamma n =0,
\end{equation}
which is still of the form (\ref{conserv_f}) with $f=n$ and
$g=n+(\Gamma n)/\Theta$. Taking the spatially projected gradients,
one thus finds
\begin{equation}
{\cal L}_u\zeta_a^{(n)}=
\frac{\dot \alpha\, n}{\dot n} \left(
D_a \Gamma - \frac{\dot \Gamma}{\dot n} D_a n \right)
- \frac{\Gamma n}{3 \dot n} \left( D_a \Theta - \frac{\dot \Theta}{\dot n} D_a n\right).
\end{equation}
The particle annihilation (or production) rate $\Gamma n$ acts
here as a source for the evolution of the particle number density
perturbation. We will discuss more thoroughly the case of several
interacting fluids below in Sec.~\ref{sec:interacting}.
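As a simple consistency check, consider the particular case of a uniform and constant decay rate, $D_a\Gamma=0$ and $\dot\Gamma=0$. Applying the identity (\ref{D_conserv_f3}) directly with $f=n$ and $g=n\left(1+\Gamma/\Theta\right)$, and using $\Theta=3\dot\alpha$, one finds
\begin{equation}
{\cal L}_u\zeta_a^{(n)}=-\frac{\Gamma n}{3 \dot n}\left(D_a \Theta-\frac{\dot \Theta}{\dot n}\,D_a n\right),
\end{equation}
i.e., even a homogeneous decay rate sources $\zeta_a^{(n)}$ through the inhomogeneities of the expansion.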
\subsection{Energy conservation}
\label{sec:energy_cons}
In the general case, if one introduces the quantity
\begin{equation}
\beta \equiv -\frac{{\cal D}}{\Theta}=
(\Theta\Pi+\pi^{ab} \sigma_{ab} + D_a q^a + 2 q^a \dot u_a )/{\Theta},
\end{equation}
then the energy conservation equation (\ref{energy_cons}), becomes
\begin{equation}
\label{continuity} \dot\rho + \Theta (\rho + p + \beta)=0,
\end{equation}
where $\beta$ acts as an extra pressure term, which we will call
{\em dissipative} pressure.
One can apply the identity derived in the previous section with
$f=\rho$ and $g=\rho+p+\beta$. This yields directly
\begin{equation}
{\cal L}_u\zeta_a= \frac{3 \dot \alpha^2}{ \dot \rho}\left(D_a p
-\frac{\dot p}{\dot \rho} D_a\rho + D_a \beta - \frac{\dot
\beta}{\dot \rho} D_a\rho \right),
\label{conserv1}
\end{equation}
in terms of the covector $\zeta_a$ defined in Eq.~(\ref{zeta2})
and introduced in \cite{lv05a,lv05b}.
Equation (\ref{conserv1}) is fully {\it nonperturbative} and
valid at {\it all scales}. It holds for any fluid, including
dissipative fluids with nonvanishing energy flow and anisotropic
stress. It can be rewritten as
\begin{equation}
{\cal L}_u \zeta_a= \frac{3 \dot \alpha^2}{ \dot \rho}
\left(\Gamma_{a}+\Sigma_{a}\right) , \label{conserv2}
\end{equation}
where, on the right hand side, one recognizes two covectors: the
nonlinear nonadiabatic pressure perturbation,
\begin{equation}
\label{Gamma}
\Gamma_a\equiv
D_a p- {\dot p\over \dot\rho}D_a\rho,
\end{equation}
which vanishes for purely adiabatic perturbations, i.e., when the
pressure $p$ is solely a function of the energy density $\rho$,
and a term combining the gradients of the dissipative pressure and
of the energy density,
\begin{equation}
\label{Sigma}
\Sigma_a\equiv
D_a \beta- {\dot \beta\over \dot\rho}D_a\rho\, ,
\end{equation}
which we will call {\em dissipative} nonadiabatic pressure perturbation.
This vanishes for a purely perfect fluid.
Note that since the dissipative pressure $\beta$ depends on the
local expansion $\Theta$, the dissipative nonadiabatic pressure
perturbation $\Sigma_a$ depends implicitly on the local expansion.
In the appendix, Sec.~\ref{sec:alternative}, we discuss an
alternative formulation of the non-conservation equations which
leads to evolution equations where the source terms do not depend
on $\Theta$.
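As an illustration, consider the particular case of a fluid with bulk viscosity only, i.e., $q_a=0$ and $\pi_{ab}=0$. Then ${\cal D}=-\Theta\Pi$, so that the dissipative pressure reduces to $\beta=\Pi$ and
\begin{equation}
\Sigma_a= D_a \Pi- \frac{\dot \Pi}{\dot\rho}\,D_a\rho,
\end{equation}
which vanishes only when the bulk viscous pressure is a function of the energy density alone, $\Pi=\Pi(\rho)$.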
\subsection{Entropy (non-)conservation}
Defining
\begin{equation}
\tilde\beta \equiv -\frac{{\cal D}}{\Theta T}=\frac{\beta}{T},
\end{equation}
one recognizes in (\ref{s_dot}) an equation of the form (\ref{conserv_f})
with $f=s$ and $g=s+\tilde\beta$. One can thus write the evolution
equation
\begin{equation}
{\cal L}_u\zeta^{(s)}_a= \frac{3 \dot\alpha^2}{\dot s}
\left(D_a \tilde\beta - \frac{\dot{\tilde\beta}}{\dot s} D_a s \right),
\end{equation}
in terms of the covector
\begin{equation}
\zeta^{(s)}_a\equiv
D_a\alpha-\frac{\dot\alpha}{\dot s}D_a s,
\end{equation}
which can be seen as the nonlinear generalization of the linear
curvature perturbation on constant entropy hypersurfaces.
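It is instructive to relate this covector to $\zeta^{(n)}_a$. Since $s=nS$, one has $D_a s=S\,D_a n+n\,D_a S$ and, for a non-dissipative fluid (${\cal D}=0$, hence $\dot S=0$), $\dot s=S\,\dot n$. Using the particle conservation equation, $\dot n=-3\dot\alpha\, n$, one then finds
\begin{equation}
\zeta^{(s)}_a=\zeta^{(n)}_a+\frac{D_a S}{3S},
\end{equation}
so that $\zeta^{(s)}_a$ and $\zeta^{(n)}_a$ coincide when the entropy per particle is uniform.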
\section{Interacting fluids}
\label{sec:interacting}
We now extend our nonlinear formalism to a system
of interacting fluids.
Our treatment follows the approach of \cite{Malik:2002jb,Malik:2004tf},
in the context of the linear theory, and can thus be seen as
a nonlinear generalization of these works. In particular, we
extend to the non-linear case their study of the coupled
evolution of curvature and {\em nonadiabatic} perturbations in a multifluid
system when energy transfer between the fluids is included.
We work in a common ``global'' frame defined by a unit
four-velocity $u^a$. The four-velocity can be conveniently chosen
depending on the physical problem (see \cite{Dunsby:1991xk} for a
discussion on this point). In this frame, the energy-momentum
tensor of each individual fluid can be expressed as
\begin{equation}
T^{{(\alpha)}}_{ab}= \rho^{\alpha}u_a u_b+ p^{\alpha} h_{ab}+ q^{{(\alpha)}}_{a} u_{b}+
q^{{(\alpha)}}_b u_a +\pi^{{(\alpha)}}_{ab}.
\end{equation}
In the appendix, in Sec.~\ref{app:emt}, we give explicitly the
transformations from the global frame to each individual
$\alpha$-fluid frame. Here, for simplicity, $ p^{\alpha}$
denotes the total effective pressure for each fluid and we will
not include $\Pi^\alpha$ in the dissipative terms.
For each fluid, the (non-)conservation of the energy-momentum
tensor reads
\begin{equation}
\nabla_a T^{(\alpha)}{}^{ab}= Q^{(\alpha)}{}^b,
\end{equation}
where $Q^{(\alpha)}{}^a$ is the energy-momentum transfer to the
$\alpha$-fluid. The conservation of the {\em total} energy-momentum
tensor implies the constraint
\begin{equation}
\label{sum_Q}
\sum_\alpha Q^{(\alpha)}{}^a = 0.
\end{equation}
Projecting along the four-velocity $u^a$ yields an energy
conservation equation,
\begin{equation}
\dot\rho^\alpha + \Theta (\rho^\alpha + p^\alpha)= {\cal D}^\alpha + {\cal Q}^\alpha,
\label{energy_cons_I}
\end{equation}
with
\begin{eqnarray}
{\cal Q}^\alpha &\equiv& - Q_b^{(\alpha)} u^b, \\
{\cal D}^\alpha &\equiv& -\left( \pi^{(\alpha)}_{ab} \sigma^{ab} + D_a
q^{(\alpha)}{}^a + 2 q^{(\alpha)}{}^a \dot u_a \right), \label{Q_alpha}
\end{eqnarray}
where the dot and the spatial derivative $D_a$ are now defined
with respect to the four-velocity $u^a$ which a priori does not
coincide with any fluid frame.
If one then introduces the dissipative pressure of the $\alpha$-fluid,
\begin{equation}
\beta^\alpha \equiv -\frac{{\cal D}^\alpha}{\Theta}, \label{58}
\end{equation}
the energy conservation equation (\ref{energy_cons_I}),
becomes of the form (\ref{continuity}), with $\beta^\alpha -
{\cal Q}^\alpha/\Theta$ acting as an extra pressure term. This yields
\begin{eqnarray}
{\cal L}_u\zeta^{(\alpha)}_a &=& \frac{3 \dot \alpha^2}{ \dot \rho^\alpha} \left( D_a
p^\alpha - \frac{\dot p^\alpha}{\dot \rho^\alpha} D_a \rho^\alpha + D_a \beta^\alpha
- \frac{\dot \beta^\alpha}{\dot \rho^\alpha} D_a \rho^\alpha
\right) \nonumber \\
&& - \frac{\dot \alpha}{ \dot \rho^\alpha}\left( D_a {\cal Q}^\alpha -
\frac{\dot {\cal Q}^\alpha }{\dot \rho^\alpha} D_a \rho^\alpha \right) +
\frac{{\cal Q}^\alpha}{3 \dot \rho^\alpha}\left( D_a \Theta - \frac{\dot
\Theta}{\dot \rho^\alpha} D_a \rho^\alpha\right) \label{zeta_multi1}.
\end{eqnarray}
This is the (non-)conservation equation for
\begin{equation}
\zeta^{(\alpha)}_a \equiv D_a \alpha - \frac{\dot \alpha}{\dot \rho^\alpha}
D_a \rho^\alpha, \label{zeta_multi2}
\end{equation}
the nonlinear generalization of the curvature
perturbation on uniform density hypersurfaces for the fluid $\alpha$.
This perturbation variable associated to the individual fluid $\alpha$
is conserved if the fluid is barotropic, $p^\alpha=p^\alpha(\rho^\alpha)$,
perfect, ${\cal D}^\alpha=0$, and decoupled from the other fluids,
${\cal Q}^\alpha=0$.
Note that $\zeta^{(\alpha)}_a$ is defined here with respect to the
four-velocity $u^a$ and not with respect to its own four-velocity
$u^{{{(\alpha)}}a}$, as was the case in (\ref{zeta}). This does not
really affect the spatial gradients because they can be replaced,
as before, by ordinary gradients.
Nonetheless, the coefficient ${\dot \alpha}/{\dot \rho^\alpha}$
is different in the two cases, and one may have different
definitions of $\zeta^{(\alpha)}_a$ depending on $u^a$.
As discussed in the appendix, Sec.~\ref{app:zeta}, in
situations where the fluid relative velocities are small and the
geometry is quasi-homogeneous, as in the cosmological context,
the various definitions of $\zeta^\alpha$ are equivalent in the {\it
linear} theory. Their spatial projections (i.e., perpendicular to
$u^a$) are even equivalent up to {\em second} order.
To make the connection with the linear theory, in particular with
\cite{Malik:2002jb,Malik:2004tf}, one can rewrite
Eq.~(\ref{zeta_multi1}) as
\begin{equation}
{\cal L}_u\zeta^{(\alpha)}_a = \frac{3 \dot \alpha^2}{ \dot \rho^\alpha} \left(
\Gamma_a^{(\alpha)} + \Sigma_a^{(\alpha)} \right) - \frac{\dot \alpha}{ \dot
\rho^\alpha} \left( {\cal Q}_{a}^{(\alpha,{\rm intr})} +{\cal Q}_{a}^{(\alpha,{\rm
rel})} \right), \label{zeta_a_evol}
\end{equation}
where we have identified several individual source terms: the
intrinsic nonadiabatic pressure perturbation,
\begin{equation}
\Gamma_a^{(\alpha)} \equiv D_a p^\alpha - \frac{\dot p^\alpha}{\dot \rho^\alpha}
D_a \rho^\alpha,
\end{equation}
the dissipative nonadiabatic pressure perturbation,
\begin{equation}
\Sigma_a^{(\alpha)} \equiv D_a \beta^\alpha - \frac{\dot \beta^\alpha}{\dot
\rho^\alpha} D_a \rho^\alpha,
\end{equation}
the intrinsic nonadiabatic energy transfer,
\begin{equation}
{\cal Q}_{a}^{(\alpha,{\rm intr})} \equiv D_a {\cal Q}^\alpha - \frac{\dot {\cal Q}^\alpha
}{\dot \rho^\alpha} D_a \rho^\alpha,
\end{equation}
and the relative nonadiabatic energy transfer,
\begin{equation}
{\cal Q}_{a}^{(\alpha,{\rm rel})} \equiv - \frac{{\cal Q}^\alpha}{\Theta}\left(D_a
\Theta - \frac{\dot \Theta}{\dot \rho^\alpha} D_a \rho^\alpha \right).
\label{Q_rel}
\end{equation}
It is convenient, at this stage, to introduce the total
energy density and pressure,
\begin{equation}
\rho=\sum_\alpha \rho^\alpha, \qquad p=\sum_\alpha p^\alpha,
\end{equation}
which are defined with respect to our unspecified global frame
$u^a$.
By summing the individual energy conservation equations (\ref{energy_cons_I}), one
gets an equation as in (\ref{energy_cons}) with
\begin{equation}
{\cal D}= \sum_\alpha {\cal D}^\alpha.
\end{equation}
The sum of the ${\cal Q}^\alpha$ vanishes as a consequence of the constraint (\ref{sum_Q}).
The covector $\zeta_a$ corresponding to $\rho$ can be expressed as
the weighted sum of the individual $\zeta^{(\alpha)}_a$,
\begin{equation}
\zeta_a = \sum_\alpha\frac{\dot \rho^\alpha}{\dot \rho} \zeta^{(\alpha)}_a,
\end{equation}
and its evolution equation is given by Eq.~(\ref{conserv1}).
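Indeed, substituting the definition (\ref{zeta_multi2}) and using $\dot\rho=\sum_\alpha\dot\rho^\alpha$, one checks immediately that
\begin{equation}
\sum_\alpha\frac{\dot\rho^\alpha}{\dot\rho}\left(D_a\alpha-\frac{\dot\alpha}{\dot\rho^\alpha}\,D_a\rho^\alpha\right)=D_a\alpha-\frac{\dot\alpha}{\dot\rho}\,D_a\rho=\zeta_a.
\end{equation}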
Since the global fluid is now composed of the individual fluids,
it is useful to split the global nonadiabatic pressure perturbation $\Gamma_a$
into two terms,
\begin{equation}
\Gamma_a = \Gamma^{({\rm intr})}_a + \Gamma^{({\rm rel})}_a
\label{Gamma_sum},
\end{equation}
where the {\it intrinsic} nonadiabatic pressure
perturbation is the sum of the individual nonadiabatic pressure perturbations,
\begin{equation}
\Gamma^{({\rm intr})}_a \equiv \sum_\alpha \Gamma^{{(\alpha)}}_a.
\end{equation}
The {\it relative} nonadiabatic pressure perturbation can be
written in the form
\begin{equation}
\Gamma^{({\rm rel})}_a \equiv -\frac{1}{2 \Theta \dot \rho}
\sum_{\alpha,\beta} \dot \rho^\alpha \dot \rho^\beta \left(\frac{\dot
p^\alpha}{\dot \rho^\alpha} - \frac{\dot p^\beta}{\dot \rho^\beta} \right)
{\cal S}_a^{(\alpha \beta)}, \label{Gamma_rel}
\end{equation}
where the relative entropy perturbation between the $\alpha$-
and $\beta$-fluids, ${\cal S}_a^{(\alpha \beta)}$, is defined as
\cite{Malik:2002jb}
\begin{equation}
{\cal S}_a^{(\alpha \beta)} \equiv 3 \left( \zeta_a^{(\alpha)} - \zeta_a^{(\beta)} \right) =
- 3 \dot \alpha \left( \frac{D_a \rho^\alpha}{\dot \rho^\alpha} -
\frac{D_a \rho^\beta}{\dot \rho^\beta} \right). \label{S_def}
\end{equation}
In order to derive Eq.~(\ref{Gamma_sum}) from the definition
(\ref{Gamma_rel}), it is convenient to use
\begin{equation}
\zeta_a^{(\alpha)}=\zeta_a + \frac{1}{3} \sum_\beta \frac{\dot \rho^\beta}{\dot
\rho} {\cal S}^{(\alpha \beta)}. \label{zeta_a_zeta}
\end{equation}
Note that one can replace the spatial gradients in
Eq.~(\ref{S_def}) with partial derivatives.
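As a simple illustration (this two-fluid example is given only for illustration), consider a mixture of pressureless matter, with $\dot p^{\,m}/\dot\rho^{\,m}=0$, and radiation, with $\dot p^{\,r}/\dot\rho^{\,r}=1/3$. Using the antisymmetry ${\cal S}_a^{(rm)}=-{\cal S}_a^{(mr)}$, the double sum in Eq.~(\ref{Gamma_rel}) then reduces to
\begin{equation}
\Gamma^{({\rm rel})}_a=\frac{\dot\rho^{\,m}\dot\rho^{\,r}}{3\,\Theta\,\dot\rho}\,{\cal S}_a^{(mr)}.
\end{equation}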
A similar decomposition applies to the dissipative nonadiabatic
pressure perturbation $\Sigma_a$, which can be written as
\begin{equation}
\Sigma_a = \Sigma^{({\rm intr})}_a + \Sigma^{({\rm rel})}_a,
\end{equation}
with the intrinsic part,
\begin{equation}
\Sigma^{({\rm intr})}_a \equiv \sum_\alpha \Sigma^{{(\alpha)}}_a,
\end{equation}
and the relative part,
\begin{equation}
\Sigma^{({\rm rel})}_a \equiv - \frac{1}{2 \Theta \dot \rho}
\sum_{\alpha,\beta} \dot \rho^\alpha \dot \rho^\beta \left(\frac{\dot
\beta^\alpha}{\dot \rho^\alpha} - \frac{\dot \beta^\beta}{\dot \rho^\beta} \right)
{\cal S}_a^{(\alpha \beta)}. \label{SigmaS}
\end{equation}
Note that one could also work directly with ${\cal D}^\alpha$
instead of $\beta^\alpha$, these two quantities being related by
Eq.~(\ref{58}).
As a consequence, one could separate each
corresponding covector $\Sigma_a^{(\alpha)}$ into an intrinsic and a relative part, in
analogy with our treatment of ${\cal Q}^{(\alpha)}_a$.
Let us now rewrite the relative nonadiabatic energy transfer of
the $\alpha$-fluid of Eq.~(\ref{Q_rel}) as
\begin{equation}
{\cal Q}^{(\alpha,{\rm rel})}_a = -\frac{{\cal Q}^\alpha}{\Theta} \left( D_a \Theta -
\frac{\dot \Theta}{\dot \rho} D_a \rho \right) - \frac{ {\cal Q}^\alpha \dot
\Theta}{\Theta^2} \sum_\beta \frac{\dot \rho^\beta}{\dot \rho} {\cal S}_a^{(\alpha
\beta)},
\end{equation}
where we have employed Eq.~(\ref{zeta_a_zeta}) for the last term.
By taking the difference
between the evolution equations (\ref{zeta_a_evol}) for two
fluids, we finally obtain an evolution equation for the relative
entropy perturbation,
\begin{eqnarray}
{\cal L}_u {\cal S}_a^{(\alpha \beta)} = \Theta^2 \left( \frac{\Gamma_a^{{(\alpha)}}
+ \Sigma_a^{{(\alpha)}}}{\dot \rho^\alpha}
- \frac{\Gamma_a^{{(\beta)}} + \Sigma_a^{{(\beta)}}}{\dot \rho^\beta} \right) -
\Theta \left( \frac{{\cal Q}^{(\alpha, {\rm intr})}_a}{\dot
\rho^\alpha}- \frac{{\cal Q}^{(\beta, {\rm intr})}_a}{\dot
\rho^\beta} \right)
\quad \nonumber \\
+ \frac{\dot \Theta}{\Theta \dot \rho} \sum_\gamma \dot
\rho^\gamma \left( \frac{{\cal Q}^\alpha}{\dot \rho^\alpha}{\cal S}_a^{(\alpha \gamma)} -
\frac{{\cal Q}^\beta}{\dot \rho^\beta}{\cal S}_a^{(\beta \gamma)} \right) +
\left(\frac{{\cal Q}^\alpha}{\dot \rho^\alpha} -\frac{{\cal Q}^\beta}{\dot \rho^\beta}
\right) \left( D_a \Theta - \frac{\dot \Theta}{\dot \rho} D_a \rho
\right). \nonumber
\\ \label{S_evolu}
\end{eqnarray}
The above equation represents the nonlinear generalization of
the evolution equation for the relative entropy perturbation, established in
\cite{Malik:2004tf} in the context of the linear theory.
Note that, while the total curvature perturbation $\zeta_a$ is
sourced by the relative entropy perturbations between the fluids
${\cal S}_a^{(\alpha \beta)}$, through Eqs.~(\ref{Gamma_rel}) and
(\ref{SigmaS}), the $\zeta_a^{(\alpha)}$'s do not appear in the
evolution equation for the relative entropy perturbation
${\cal S}_a^{(\alpha \beta)}$. However, ${\cal S}_a^{(\alpha \beta)}$ is sourced by the
last term of Eq.~(\ref{S_evolu}), which depends on $\Theta$, i.e.,
on the local expansion. In the linear theory, this term vanishes
on large scales, i.e., on scales larger than the Hubble radius.
\section{Conclusion}
\label{sec:conclusion}
In the present work, we have extended our covariant formulation for
nonlinear perturbations
to the case of dissipative relativistic fluids, allowing for
interactions.
This extension could be used to tackle more sophisticated physical situations.
Cosmology is one such example, since the matter content of the universe
consists of several fluids that can interact.
In our approach, the
important quantities are the covectors
$\zeta_a^{(f)}$, defined as linear
combinations of the spatial
gradients of the number of e-folds $\alpha$, and of some
scalar fluid quantities $f$,
for one or several fluids.
These covectors are fully nonlinear and
generalize the curvature perturbations on
hypersurfaces of constant $f$
defined in the context of linear cosmological perturbation theory.
The non-conservation equation for $\zeta_a$ associated with the
energy density, which we obtain when the fluid is imperfect, can be
derived and written in a form analogous to that found in our
initial formalism with a single perfect fluid, provided that the
pressure is modified so as to include dissipative effects.
For several interacting fluids, we can also define fully nonlinear
quantities that generalize analogous quantities introduced in the
linear theory. Remarkably, as in our initial formalism with a
single perfect fluid, the evolution equations that govern these
quantities ``mimic'' those found in the linear theory, with the
advantage that they are covariant, i.e., independent of any
coordinate system. Thus, it is straightforward to linearize them
or expand them at second, or higher, order in the perturbations.
Moreover, our fully nonlinear approach allows us to identify which
properties are specific to linear (or second) order and
which remain valid at all orders. For example,
in the multifluid case, there are a priori several nonlinear
generalizations of the
curvature perturbation, which depend on the choice of the reference
frame. In the cosmological context, they all reduce to the same
quantity at linear order.
\section{Introduction}
In his seminal paper \cite{Hawking1}, Steven Hawking showed that black holes
radiate thermally due to quantum effects, and this radiation is known as
Hawking radiation. Thus, for the first time, a relation was established
between thermodynamics and space-time geometry. Furthermore, the
entropy of a black hole was shown to be proportional to its surface area.
Besides Hawking's original method, today there exist a number of
different approaches for deriving the Hawking temperature \cite%
{gibbons1,gibbons2,umetso}. The tunneling method \cite%
{kraus1,kraus2,perkih1,perkih2,perkih3,ang,sri1,sri2,vanzo} has been
studied in detail and shown to be very successful for calculating the
Hawking temperature for different types of particles emitted from static as
well as stationary space-time metrics \cite{sh2,mann0,mann1,mann2,mann3}.
The Hawking temperature depends on the black hole mass $M$, charge $Q$, and
angular momentum $J$. Using the tunneling approach, it has also been shown
that the Hawking temperature for a particular black hole configuration
remains unaltered and is unaffected by the nature of the particles emitted
from the black hole. Moreover, the radiation spectrum is shown to deviate
from pure thermality due to the conservation of energy, and hence the
theory is consistent with an underlying unitary theory.
Due to the non-linearity of Einstein's field equations it is very
difficult to find exact solutions. However, apart from the standard
solutions characterized by spherical symmetry, solutions with cylindrical
symmetry have also been found; such solutions are known as cylindrical black
holes or black strings \cite{lemos,cai}. The tunneling of scalar and Dirac
particles from charged static/rotating black strings has also been
investigated \cite{gohar1,gohar2, ahmed1,ahmed2}. Recently, the tunneling of
massive spin-$1$ particles has attracted interest \cite%
{xiang,ali,sh11,sh3,sh4,kruglov1,kruglov2}. Therefore, in this paper, we aim
to study the tunneling of massive vector bosons $W^{\pm}$ (spin-$1$
particles) from the space-time of a charged static and a rotating black
string. First, we derive the field equations by using the Lagrangian given
by the Glashow-Weinberg-Salam model. We then use the WKB approximation and
the separation of variables, which results in a set of four linear
equations; solving for the radial part by setting the determinant of the
coefficient matrix equal to zero, we find the tunneling rate and the
corresponding Hawking temperature in both cases.
The paper is organized as follows. In Sec. II, we investigate the tunneling
of massive vector particles from the static charged black strings and
calculate the corresponding tunneling rate and the Hawking temperature. In
Sec. III, we extend our calculations for the case of tunneling of massive
vector particles from a rotating charged black string. In Sec. IV, we
comment on our results.
\section{Tunneling From Static Charged Black Strings}
We can begin by writing the Einstein-Hilbert action with a negative
cosmological constant in the presence of an electromagnetic field given by
\begin{equation}
S=\frac{1}{16\pi G}\int d^{4}x\sqrt{-g}\left(R-2\Lambda\right)-\frac{1}{16
\pi}\int d^{4}x \sqrt{-g}F^{\mu \nu}F_{\mu \nu},
\end{equation}
where the Maxwell electromagnetic tensor is given by
\begin{equation}
F_{\mu \nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}.
\end{equation}
If one takes into account the cylindrical symmetries of the space-time, then
the line element for a static charged black string with negative
cosmological constant in the presence of electromagnetic fields is shown to
be \cite{lemos,cai}
\begin{equation}
ds^{2}=-f(r)dt^{2}+f(r)^{-1}dr^{2}+r^{2}d\theta ^{2}+\alpha^{2}r^{2}dz^{2},
\label{metric1}
\end{equation}%
where
\begin{equation}
f(r)=\alpha^{2} r^{2}-\frac{b}{\alpha r}+\frac{c^{2}}{\alpha^{2}r^{2}},
\label{f}
\end{equation}
and
\begin{equation}
\alpha^{2}=-\frac{1}{3}\Lambda,\,\,\,\, b=4GM,\,\,\,\, c^{2}=4GQ^{2}.
\end{equation}
Solving for $\alpha^{2} r^{2}-\frac{b}{\alpha r}+\frac{c^{2}}{\alpha^{2}r^{2}%
}=0$, one can easily find the outer horizon given by \cite{gohar1}
\begin{equation}
r_{+}=\frac{b^{\frac{1}{3}}\sqrt{s}+\sqrt{2\sqrt{s^{2}-4p^{2}-s}}}{2\alpha},
\end{equation}
where
\begin{eqnarray}
s&=&\left(\frac{1}{2}+\frac{1}{2}\sqrt{1-4\left(\frac{4p^{2}}{3}\right)^{3}}%
\right)^{\frac{1}{3}}+\left(\frac{1}{2}-\frac{1}{2}\sqrt{1-4\left(\frac{%
4p^{2}}{3}\right)^{3}}\right)^{\frac{1}{3}}, \\
p^{2}&=&\frac{c^{2}}{b^{\frac{4}{3}}}.
\end{eqnarray}
Let us now write the Lagrangian density which describes the $W^{\pm}$-bosons
in a background electromagnetic field given by \cite{xiang}
\begin{equation}
\mathcal{L}=-\frac{1}{2}\left( D_{\mu }^{+}W_{\nu }^{+}-D_{\nu }^{+}W_{\mu
}^{+}\right) \left( D^{-\mu }W^{-\nu }-D^{-\nu }W^{-\mu }\right) +\frac{%
m_{W}^{2}}{\hbar ^{2}}W_{\mu }^{+}W^{-\mu }-\frac{i}{\hbar }eF^{\mu \nu
}W_{\mu }^{+}W_{\nu }^{-},
\end{equation}%
where $D_{\pm \mu }=\nabla _{\mu }\pm \frac{i}{\hbar }eA_{\mu }$ and $\nabla
_{\mu }$ is the covariant geometric derivative. Also, $e$ denotes the charge
of the $W^{+}$ boson, and $A_{\mu}$ is the electromagnetic vector potential of
the black string, given by $A_{\mu}=(-h(r),0,0,0)$ with $h(r)=2Q/\alpha r$,
where $Q$ is the charge of the black string. Using the above Lagrangian, the
equation of motion for the $W$-boson field reads
\begin{equation}
\frac{1}{\sqrt{-g}}\partial _{\mu }\left[ \sqrt{-g}\left( D^{\pm \nu }W^{\pm
\mu }-D^{\pm \mu }W^{\pm \nu }\right) \right] \pm \frac{ieA_{\mu }}{\hbar }%
\left( D^{\pm \nu }W^{\pm \mu }-D^{\pm \mu }W^{\pm \nu }\right) +\frac{%
m_{W}^{2}}{\hbar ^{2}}W^{\pm \nu }\pm \frac{i}{\hbar }eF^{\mu \nu }W_{\mu
}^{\pm }=0 \label{proca1}
\end{equation}%
where $F^{\mu \nu }=\nabla ^{\mu }A^{\nu }-\nabla ^{\nu }A^{\mu }$. In this
work, we will investigate the tunneling of $W^{+}$ boson, therefore one
needs to solve the following equation
\begin{equation}
\frac{1}{\sqrt{-g}}\partial _{\mu }\left[ \sqrt{-g}g^{\mu \alpha }g^{\nu
\beta }\left( \partial _{\beta }W_{\alpha }^{+}-\partial _{\alpha }W_{\beta
}^{+}+\frac{i}{\hbar }eA_{\beta }W_{\alpha }^{+}-\frac{i}{\hbar }eA_{\alpha
}W_{\beta }^{+}\right) \right] \label{proca2}
\end{equation}
\begin{equation*}
+\frac{ieA_{\mu }g^{\mu \alpha }g^{\nu \beta }}{\hbar }\left( \partial
_{\beta }W_{\alpha }^{+}-\partial _{\alpha }W_{\beta }^{+}+\frac{i}{\hbar }%
eA_{\beta }W_{\alpha }^{+}-\frac{i}{\hbar }eA_{\alpha }W_{\beta }^{+}\right)
+\frac{m_{W}^{2}g^{\nu \beta }}{\hbar ^{2}}W_{\beta }^{+}+\frac{i}{\hbar }%
eF^{\nu \alpha }W_{\alpha}^{+}=0,
\end{equation*}%
for $\nu =0,1,2,3$. Using the WKB approximation
\begin{equation}
W_{\mu }^{+}(t,r,\theta ,z )=C_{\mu }(t,r,\theta,z)\,\exp \left( \frac{i}{\hbar }%
S(t,r,\theta ,z)\right) , \label{ans1}
\end{equation}%
where the action is given by
\begin{equation}
S(t,r,\theta ,z)=S_{0}(t,r,\theta ,z)+\hbar\, S_{1}(t,r,\theta ,z)+\hbar\,
^{2}S_{2}(t,r,\theta ,z)+... \label{act}
\end{equation}
We can now use the last three equations and, neglecting terms of higher
order in $\hbar $, find the following set of four equations:
\begin{eqnarray}
0 &=&C_{0}\left(-(\partial _{1}S_{0})^{2}-\frac{(\partial _{2}S_{0})^{2}}{%
r^{2}f}-\frac{(\partial _{3}S_{0})^{2}}{\alpha^{2}r^{2}f}-\frac{m^{2}}{f}%
\right) +C_{1}\left((\partial _{1}S_{0})\left( eA_{0}+\partial
_{0}S_{0}\right) \right) +C_{2}\left(\frac{(\partial _{2}S_{0})}{r^{2}f}%
\left( \partial _{0}S_{0}+eA_{0}\right) \right) \notag \\
&+&C_{3}\left(\frac{(\partial _{3}S_{0})}{\alpha^{2}r^{2}f}\left( \partial
_{0}S_{0}+eA_{0}\right) \right) ,
\end{eqnarray}%
\begin{eqnarray}
0 &=&C_{0}\left( -(\partial _{1}S_{0})(eA_{0}+\partial _{0}S_{0})\right)
+C_{1}\left( -f\frac{(\partial _{2}S_{0})^{2}}{r^{2}}-f\frac{(\partial
_{3}S_{0})^{2}}{\alpha^{2}r^{2} }+(\partial
_{0}S_{0}+eA_{0})^{2}-m^{2}f\right) +C_{2}\left( f\frac{(\partial
_{1}S_{0})(\partial _{2}S_{0})}{r^{2}}\right) \notag \\
&+&C_{3}\left( f\frac{(\partial _{1}S_{0})(\partial _{3}S_{0})}{%
\alpha^{2}r^{2}}\right) ,
\end{eqnarray}%
\begin{eqnarray}
0 &=&C_{0}\left( -\partial _{2}S_{0}\left( \frac{\partial _{0}S_{0}+eA_{0}}{f%
}\right) \right) +C_{1}\left( f(\partial _{2}S_{0})(\partial
_{1}S_{0})\right) +C_{2}\left( -f(\partial _{1}S_{0})^{2}-\frac{(\partial
_{3}S_{0})^{2}}{\alpha^{2}r^{2}}+\frac{(\partial _{0}S_{0}+eA_{0})^{2}}{f}%
-m^{2}\right) \notag \\
&+&C_{3}\left( \frac{(\partial _{2}S_{0})(\partial _{3}S_{0})}{%
\alpha^{2}r^{2}}\right) ,
\end{eqnarray}%
\begin{eqnarray}
0 &=&C_{0}\left( -\partial _{3}S_{0}\left( \frac{\partial _{0}S_{0}+eA_{0}}{f%
}\right) \right) +C_{1}\left( f(\partial _{3}S_{0})(\partial
_{1}S_{0})\right) +C_{3}\left( -f(\partial _{1}S_{0})^{2}-\frac{(\partial
_{2}S_{0})^{2}}{r^{2}}+\frac{(\partial _{0}S_{0}+eA_{0})^{2}}{f}-m^{2}\right)
\notag \\
&+&C_{2}\left(\frac{(\partial _{2}S_{0})(\partial _{3}S_{0})}{r^{2}}\right) .
\end{eqnarray}
From the metric \eqref{metric1}, it is clear that due to the space-time
symmetries we can use the following ansatz for the action
\begin{equation}
S_{0}(t,r,\theta,z)=-Et+W(r)+J_{1}\theta +J_{2}z+C,
\end{equation}%
where $E, J_{1}, J_{2}$ and $C$ are constants. Therefore, the non-zero
elements of the coefficient matrix $\Xi $ are given by
\begin{eqnarray}
\Xi _{11} &=&-(W^{\prime})^{2}-\frac{J_{1}^{2}}{r^{2}f}-\frac{J_{2}^{2}}{%
\alpha^{2} r^{2}f}-\frac{m^{2}}{f} \notag \\
\Xi _{12} &=&-\Xi _{21}=W^{\prime}\left(eA_{0}-E\right) \notag \\
\Xi _{13} &=&\frac{J_{1}}{r^{2}f}\left(eA_{0}-E\right) \notag \\
\Xi _{14} &=&\frac{J_{2}}{\alpha^{2} r^{2}f}\left(eA_{0}-E\right) \notag \\
\Xi _{22} &=&\left(-f\frac{J_{1}^{2}}{r^{2}}-f\frac{J_{2}^{2}}{%
\alpha^{2}r^{2}}+(eA_{0}-E)^{2}-m^{2}f\right) \notag \\
\Xi _{23} &=&f\frac{W^{\prime}J_{1}}{r^{2}} \notag \\
\Xi _{24} &=&f\frac{W^{\prime}J_{2}}{\alpha^{2} r^{2}} \notag \\
\Xi _{31} &=&-J_{1}\frac{\left(eA_{0}-E\right)}{f} \notag
\end{eqnarray}
\begin{eqnarray}
\Xi _{32} &=&f J_{1}W^{\prime} \notag \\
\Xi _{33} &=&\left( -f(W^{\prime})^{2}-\frac{J_{2}^{2}}{\alpha^{2} r^{2}}+%
\frac{(eA_{0}-E)^{2}}{f}-m^{2}\right) \notag \\
\Xi _{34} &=&\frac{J_{1}J_{2}}{\alpha^{2} r^{2}} \notag \\
\Xi _{41} &=&-\frac{J_{2}\left(eA_{0}-E\right) }{f} \notag \\
\Xi _{42} &=&fJ_{2} W^{\prime} \notag \\
\Xi _{43}&=&\frac{J_{1}J_{2}}{r^{2}} \notag \\
\Xi _{44} &=&\left( -f(W^{\prime})^{2}-\frac{J_{1}^{2}}{r^{2}}+\frac{%
(eA_{0}-E)^{2}}{f}-m^{2}\right) .
\end{eqnarray}
The nontrivial solution of this equation \cite{kruglov1}
\begin{equation}
\Xi (C_{0},C_{1},C_{2},C_{3})^{T}=0, \label{matrixeq}
\end{equation}
is obtained by requiring that the determinant of the matrix vanishes, $\det \Xi=0 $,
which yields
\begin{equation}
m^{2}\Big(-r^{2}(E-eA_{0})^{2}\alpha^{2}+f^{2}r^{2}\alpha^{2}(W^{%
\prime})^{2}+\left((m^{2}r^{2}+J_{1}^{2})\,\alpha^{2}+J_{2}^{2}\right)f\Big)^{3}=0.
\end{equation}
Solving this equation for the radial part leads to the following integral
\begin{equation}
W_{\pm }(r)=\pm \int \frac{\sqrt{(E-eA_{0})^{2}-f(r)\left( m^{2}+\frac{%
J_{1}^{2}}{r^{2}}+\frac{J_{2}^{2}}{\alpha^{2}r^{2}}\right)}}{f(r)}dr.
\end{equation}
Expanding the function $f(r)$ in Taylor's series near the horizon
\begin{equation}
f(r_{+})\approx f^{\prime }(r_{+})(r-r_{+}),
\end{equation}%
and by integrating around the pole at the outer horizon $r_{+}$, gives
\begin{equation}
W_{\pm }(r)=\pm \frac{i\pi (E-eA_{0})}{f^{\prime }(r_{+})}. \label{integral}
\end{equation}
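The $i\pi$ in Eq.~(\ref{integral}) originates from the half-residue of the simple pole at $r_{+}$ once the pole is displaced below the real axis, $r_{+}\rightarrow r_{+}-i\epsilon$. A minimal numeric sketch of this step (the values of $\epsilon$ and the integration window are illustrative regulators, not physical inputs):

```python
import math

# Near the horizon f(r) ~ f'(r_+)(r - r_+), so up to the prefactor (E - e A_0)/f'(r_+)
# the radial integral is  int dr / (r - r_+ - i*eps),  with the pole displaced below
# the real axis.  Its imaginary part is eps/((r - r_+)^2 + eps^2), which integrates
# to pi as eps -> 0 -- the origin of the i*pi in W_+/-.
eps, delta, n = 1e-4, 1.0, 200001
h = 2 * delta / (n - 1)
im = 0.0
for k in range(n):
    x = -delta + k * h                      # x = r - r_+
    w = 0.5 if k in (0, n - 1) else 1.0     # composite trapezoid-rule weights
    im += w * eps / (x * x + eps * eps) * h

assert abs(im - math.pi) < 1e-3             # half-residue of the simple pole
```

As $\epsilon \to 0$ the imaginary part tends to $\pi$, reproducing the factor $i\pi/f^{\prime}(r_{+})$ once the prefactor $(E-eA_{0})/f^{\prime}(r_{+})$ is restored.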
Now we can set the probability of the ingoing particle to $100\%$ (since
every outside particle falls into the black hole); it follows that
\begin{equation*}
P_{-}\simeq e^{-2ImW_{-}}=1,
\end{equation*}%
which implies $ImC=-ImW_{-}$. For the outgoing particle we have $%
ImS_{+}=ImW_{+}+ImC$, and also we make use of $W_{+}=-W_{-}$, which leads to
the probability for the outgoing particle given by
\begin{equation}
P_{+}=e^{-2ImS}\simeq e^{-4ImW_{+}}.
\end{equation}
In this way the tunneling rate of particles tunneling from inside to outside
the horizon is given by
\begin{equation}
\Gamma =\frac{P_{+}}{P_{-}}\simeq e^{(-4ImW_{+})}.
\end{equation}
We can find the Hawking temperature simply by comparing the last result with
the Boltzmann factor $\Gamma =e^{-\beta E_{net}}$, where $E_{net}=(E-eA_{0})$
and $\beta =1/T_{H}$, yielding
\begin{equation}
T_{H}=\frac{f^{\prime }(r_{+})}{4\pi }.
\end{equation}
Using Eqn.\eqref{f}, one can recover the Hawking temperature for a static
charged black string \cite{gohar1}
\begin{equation}
T_{H}=\frac{1}{4\pi}\left(2\alpha^{2}r_{+}+\frac{b}{\alpha r_{+}^{2}}-\frac{%
2c^{2}}{\alpha^{2}r_{+}^{3}}\right).
\end{equation}
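As a numeric illustration of this static result (the values of $\alpha$, $b$, $c$ below are purely illustrative, in geometric units), one can locate the outer horizon by bisection on $f$ and then evaluate $T_{H}=f^{\prime}(r_{+})/4\pi$:

```python
import math

# Hawking temperature of the static charged black string, T_H = f'(r_+)/(4*pi),
# with f(r) = alpha^2 r^2 - b/(alpha r) + c^2/(alpha^2 r^2).
# The parameter values below are purely illustrative (geometric units, G = 1).
alpha, b, c = 1.0, 4.0, 0.5

def f(r):
    return alpha**2 * r**2 - b / (alpha * r) + c**2 / (alpha**2 * r**2)

def fprime(r):
    return 2 * alpha**2 * r + b / (alpha * r**2) - 2 * c**2 / (alpha**2 * r**3)

# Outer horizon: largest root of f.  Bracket it (f < 0 just inside, f > 0 far out)
# and bisect to machine precision.
lo, hi = 1.0, 10.0
assert f(lo) < 0 < f(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
r_plus = 0.5 * (lo + hi)

T_H = fprime(r_plus) / (4 * math.pi)
assert abs(f(r_plus)) < 1e-9 and T_H > 0
print(r_plus, T_H)
```

The derivative coded in `fprime` is exactly the bracket appearing in the temperature formula above.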
\section{Tunneling From Rotating Charged Black Strings (RCBSs)}
Lemos derived a rotating charged cylindrically symmetric exact solution of
the Einstein equations for a black string \cite{lemos}. The line element for a
RCBS is given by \cite{gohar1}
\begin{equation}
ds^{2}=-F(r)\,dt^{2}+R^{2}(r)\left(N dt+d\theta \right) ^{2}+\frac{dr^{2}}{%
G(r)}+\alpha ^{2}r^{2}dz^{2},
\end{equation}
where the lapse function $F$, the shift function $N$, and the remaining metric functions are given as
\begin{equation}
G=\left( \alpha ^{2}r^{2}-\frac{b}{\alpha r}+\frac{c^{2}}{\alpha ^{2}r^{2}}%
\right) , \label{G}
\end{equation}%
\begin{equation}
F=fG,
\end{equation}%
\begin{equation}
f=\left( \gamma ^{2}-\frac{\omega ^{2}}{\alpha ^{2}}\right) ^{2}\frac{r^{2}}{%
R^{2}},
\end{equation}
\begin{equation}
N=-\frac{\gamma \omega }{\alpha ^{2}R^{2}}\left( \frac{b}{\alpha r}-\frac{%
c^{2}}{\alpha ^{2}r^{2}}\right) ,
\end{equation}%
and
\begin{equation}
R^{2}=\gamma ^{2}r^{2}-\frac{\omega ^{2}}{\alpha ^{4}}\left( \alpha
^{2}r^{2}-\frac{b}{\alpha r}+\frac{c^{2}}{\alpha ^{2}r^{2}}\right).
\end{equation}
Note that the rotation parameter is $a=J/M$ and the constant $\alpha ^{2}=-\Lambda /3$,
where $\Lambda $ is the cosmological constant, $M$ is the ADM mass, $Q$
is the charge of the black string, and $J$ is the angular momentum. In
addition, $b$ and $c$ are defined as%
\begin{equation}
b=4M\left( 1-\frac{3a^{2}\alpha ^{2}}{2}\right) ,
\end{equation}%
\begin{equation}
c^{2}=4Q^{2}\left( \frac{1-3a^{2}\alpha ^{2}/2}{1-a^{2}\alpha ^{2}/2}\right)
.
\end{equation}%
\newline
Furthermore, $\gamma ^{2}$ and $\omega ^{2}/\alpha ^{2}$ are defined as
\begin{equation}
\gamma ^{2}=\frac{2GM}{b}\pm \frac{2G}{b}\sqrt{M^{2}-\frac{8J\alpha ^{2}}{9}}%
,
\end{equation}%
\begin{equation}
\frac{\omega ^{2}}{\alpha ^{2}}=\frac{4GM}{b}\mp \frac{4G}{b}\sqrt{M^{2}-%
\frac{8J\alpha ^{2}}{9}},
\end{equation}%
or
\begin{equation*}
\gamma =\sqrt{\frac{1-\frac{a^{2}\alpha ^{2}}{2}}{1-\frac{3a^{2}\alpha ^{2}}{%
2}}},
\end{equation*}%
\begin{equation}
\omega =\frac{a\alpha ^{2}}{\sqrt{1-\frac{3a^{2}\alpha ^{2}}{2}}}.
\label{2aaa}
\end{equation}
Let us now introduce the electromagnetic field associated with the vector
potential of the RCBSs
\begin{equation}
A_{\mu }=(A_{0},0,A_{2},0)
\end{equation}%
where $A_{0}=-\gamma h(r)$, $A_{2}=\frac{\omega }{\alpha ^{2}}h(r)$, and $%
h(r)$ is an arbitrary function of $r$ for the line charge density along the $%
z$-line given by $Q=\frac{Q_{z}}{\Delta z}=\gamma \lambda $. To reveal
the tunneling radiation of massive vector particles, we solve
the Proca equation, Eqn.(\ref{proca2}). Following the standard procedure,
we use the WKB approximation Eqn.(\ref{ans1}) with the action Eqn.(\ref{act})
in the background of the RCBS spacetime and neglect terms of higher
order in $\hbar $. Then, using the following ansatz for the action
\begin{equation}
S_{0}=-Et+W(r)+J_{1}\theta +J_{2}z+k,
\end{equation}%
where $E$, $J_{1}$, $J_{2}$ and $k$ are constants, we obtain the following
set of four equations:
\begin{equation*}
\frac{C_{0}}{fG^{2}R^{2}r^{2}\alpha ^{2}}\Big[fG^{3}R^{2}r^{2}\alpha
^{2}W^{\prime 2}+G\Big[\left( \left( m^{2}r^{2}\alpha ^{2}+J_{2}^{2}\right)
fG-r^{2}\left( eA_{2}+J_{1}\right) N\alpha ^{2}\left( \left(
eA_{2}+J_{1}\right) N-eA_{0}+E\right) \right) R^{2}
\end{equation*}%
\begin{equation*}
+r^{2}\alpha ^{2}fG\left( eA_{2}+J_{1}\right) ^{2}\Big]\Big]-\left( \left(
eA_{2}+J_{1}\right) N-eA_{0}+E\right) \frac{W^{\prime }}{fG}C_{1}+\frac{%
\left( -erA_{2}-J_{1}r\right) }{fGr}\left( \left( eA_{2}+J_{1}\right)
N-eA_{0}+E\right) C_{2}
\end{equation*}%
\begin{equation}
-\frac{C_{3}J_{2}}{fG}\left( \left( eA_{2}+J_{1}\right) N-eA_{0}+E\right) =0,
\end{equation}
\begin{equation*}
\frac{C_{0}}{fGR^{2}\alpha ^{2}r^{2}}\left( -\alpha
^{2}fG^{2}R^{2}eA_{0}r^{2}W^{\prime }+\alpha ^{2}R^{2}fG^{2}W^{\prime
}r^{2}E\right) +2\Big[\alpha ^{2}\left( \left( \left( -N^{2}A_{2}+NA_{0}\right)
R^{2}+fGA_{2}\right) e-R^{2}NE\right) r^{2}J_{1}
\end{equation*}%
\begin{equation*}
+\frac{1}{2}\alpha ^{2}r^{2}\left( -R^{2}N^{2}+fG\right) J_{1}^{2}-\alpha
^{2}R^{2}e\left( NA_{2}-A_{0}\right) r^{2}E
\end{equation*}%
\begin{equation*}
-\frac{1}{2}R^{2}\alpha ^{2}r^{2}E^{2}+\alpha ^{2}r^{2}\left( \frac{1}{2}%
\left( m^{2}fG-e^{2}\left( NA_{2}-A_{0}\right) ^{2}\right) R^{2}+\frac{1}{2}%
fGe^{2}A_{2}^{2}\right) +\frac{1}{2}R^{2}fGJ_{2}^{2}\Big]\frac{C_{1}}{%
fGR^{2}\alpha ^{2}r^{2}}
\end{equation*}%
\begin{equation}
+\Big[ -\alpha ^{2}fG^{2}R^{2}A_{2}er^{2}W^{\prime }-\alpha
^{2}G^{2}fR^{2}W^{\prime }r^{2}J_{1}\Big] \frac{C_{2}}{fGR^{2}\alpha
^{2}r^{2}}-GJ_{2}W^{\prime }C_{3}=0,
\end{equation}
\begin{equation*}
\Big[-\alpha ^{2}\left( -fG^{2}R\left( -R^{2}N^{2}+fG\right)
rE-fG^{2}\left( N^{2}A_{0}erR^{2}-fGA_{0}er\right) R\right)
rJ_{1}-\alpha^{2}RfG^{2}\left( \left( reA_{2}N^{2}-2A_{0}erN\right)
R^{2}-fGA_{2}er\right)rE
\end{equation*}
\begin{equation*}
-\alpha ^{2}r^{2}fG^{2}R^{3}NE^{2}+\alpha ^{2}fG^{2}\left( e^{2}\left(
NA_{2}-A_{0}\right) A_{0}rNR^{2}-fGA_{2}A_{0}e^{2}r\right) Rr\Big]\frac{C_{0}}{%
f^{2}G^{3}R^{3}r^{2}\alpha ^{2}}+\Big[-\alpha
^{2}r^{2}fG^{2}R\left(-R^{2}N^{2}+fG\right) W^{\prime }J_{1}
\end{equation*}%
\begin{equation*}
-2\alpha ^{2}fG^{2}R\left( -\frac{1}{2}erN\left( NA_{2}-A_{0}\right) R^{2}+%
\frac{1}{2}fGA_{2}er\right) rW^{\prime }+\alpha
^{2}R^{3}fG^{2}Nr^{2}W^{\prime }E\Big]\frac{C_{1}}{f^{2}G^{3}R^{3}r^{2}\alpha
^{2}}
\end{equation*}%
\begin{equation*}
+\Big[ -\alpha ^{2}\left( fG^{2}R^{3}NrE-fG^{2}NA_{0}erR^{3}\right)
rJ_{1}+f^{2}G^{3}R^{3}J_{2}^{2}+\alpha ^{2}fG^{2}\left( fGrm^{2}+e^{2}\left(
NA_{2}-A_{0}\right) A_{0}r\right) R^{3}r
\end{equation*}%
\begin{equation*}
-\alpha ^{2}R^{3}fG^{2}\left( NA_{2}er-2A_{0}er\right)
rE-\alpha^{2}r^{2}fG^{2}R^{3}E^{2}+r^{2}\alpha ^{2}R^{3}f^{2}G^{4}W^{\prime
2}\Big]\frac{C_{2}}{f^{2}G^{3}R^{3}r^{2}\alpha ^{2}}
\end{equation*}
\begin{equation}
+\Big[-\alpha ^{2}r^{2}fG^{2}R\left( -R^{2}N^{2}+fG\right) J_{2}J_{1}+\alpha
^{2}R^{3}r^{2}fG^{2}NJ_{2}E-\alpha ^{2}fG^{2}R\left( \left(
-N^{2}A_{2}+NA_{0}\right) R^{2}+fGA_{2}\right) er^{2}J_{2}\Big]C_{3}=0,
\end{equation}
\begin{equation}
\left( -fG^{2}A_{0}erR+RrG^{2}fE\right) \frac{J_{2}}{R\alpha ^{2}r^{3}fG^{2}}%
C_{0}-\frac{W^{\prime }J_{2}C_{1}}{r^{2}\alpha ^{2}}+\left(
-RfG^{2}A_{3}er-J_{1}RfG^{2}r\right) \frac{J_{2}}{R\alpha ^{2}r^{3}fG^{2}}%
C_{2}
\end{equation}%
\begin{equation*}
-\Big[r\left( -R^{2}N^{2}+fG\right) J_{1}^{2}-2\left( \left( \left(
-N^{2}A_{2}+NA_{0}\right) R^{2}+fGA_{2}\right) e-R^{2}NE\right)
rJ_{1}-rR^{2}fG^{2}W^{\prime 2}+rR^{2}E^{2}
\end{equation*}%
\begin{equation*}
+2erR^{2}\left( NA_{2}-A_{0}\right) E-\left( \left( m^{2}fG-e^{2}\left(
NA_{2}-A_{0}\right) ^{2}\right) R^{2}+fGe^{2}A_{2}^{2}\right) r\Big]\frac{C_{3}}{%
GrfR^{2}}=0.
\end{equation*}
Then the non-zero elements of the coefficient matrix $\Theta $ are
calculated as following%
\begin{eqnarray}
\Theta _{11} &=&\Big[fG^{3}R^{2}r^{2}\alpha ^{2}W^{\prime 2}+G\Big[\left( \left(
m^{2}r^{2}\alpha ^{2}+J_{2}^{2}\right) fG-r^{2}\left( eA_{2}+J_{1}\right)
N\alpha ^{2}\left( \left( eA_{2}+J_{1}\right) N-eA_{0}+E\right) \right) R^{2}
\\
&&+r^{2}\alpha ^{2}fG\left( eA_{2}+J_{1}\right) ^{2}\Big]\Big], \notag \\
\Theta _{12} &=&-\left( \left( eA_{2}+J_{1}\right) N-eA_{0}+E\right)
W^{\prime }, \notag \\
\Theta _{13} &=&\left( -erA_{2}-J_{1}r\right) \left( \left(
eA_{2}+J_{1}\right) N-eA_{0}+E\right) , \notag \\
\Theta _{14} &=&-J_{2}\left( \left( eA_{2}+J_{1}\right) N-eA_{0}+E\right) ,
\notag \\
\Theta _{21} &=&\left( -\alpha ^{2}fG^{2}R^{2}eA_{0}r^{2}W^{\prime }+\alpha
^{2}R^{2}fG^{2}W^{\prime }r^{2}E\right) , \notag \\
\Theta _{22} &=&2\Big[\alpha ^{2}\left( \left( \left( -N^{2}A_{2}+NA_{0}\right)
R^{2}+fGA_{2}\right) e-R^{2}NE\right) r^{2}J_{1}+\frac{1}{2}\alpha
^{2}r^{2}\left( -R^{2}N^{2}+fG\right) J_{1}^{2}-\alpha ^{2}R^{2}e\left(
NA_{2}-A_{0}\right) r^{2}E \notag \\
&&-\alpha ^{2}r^{2}fG^{2}R^{3}NE^{2}+\alpha ^{2}r^{2}\left( \frac{1}{2}%
\left( m^{2}fG-e^{2}\left( NA_{2}-A_{0}\right) ^{2}\right) R^{2}+\frac{1}{2}%
fGe^{2}A_{2}^{2}\right) +\frac{1}{2}R^{2}fGJ_{2}^{2}\Big], \notag \\
\Theta _{23} &=&\left[ -\alpha ^{2}fG^{2}R^{2}A_{2}er^{2}W^{\prime }-\alpha
^{2}G^{2}fR^{2}W^{\prime }r^{2}J_{1}\right] , \notag \\
\Theta _{24} &=&-GJ_{2}W^{\prime }, \notag \\
\Theta _{31} &=&\Big[-\alpha ^{2}\left( -fG^{2}R\left( -R^{2}N^{2}+fG\right)
rE-fG^{2}\left( N^{2}A_{0}erR^{2}-fGA_{0}er\right) R\right) rJ_{1} \notag \\
&&-\alpha ^{2}RfG^{2}\left( \left( reA_{2}N^{2}-2A_{0}erN\right)
R^{2}-fGA_{2}er\right) rE-\alpha ^{2}r^{2}fG^{2}R^{3}NE^{2}+\alpha ^{2}fG^{2}
\notag \\
&&+\left( e^{2}\left( NA_{2}-A_{0}\right)
A_{0}rNR^{2}-fGA_{2}A_{0}e^{2}r\right) Rr\Big], \notag \\
\Theta _{32} &=&\Big[-\alpha ^{2}r^{2}fG^{2}R\left( -R^{2}N^{2}+fG\right)
W^{\prime }J_{1}-2\alpha ^{2}fG^{2}R\left( -\frac{1}{2}erN\left(
NA_{2}-A_{0}\right) R^{2}+\frac{1}{2}fGA_{2}er\right) rW^{\prime } \notag \\
&&+\alpha ^{2}R^{3}fG^{2}Nr^{2}W^{\prime }E\Big], \notag \\
\Theta _{33} &=&\Big[-\alpha ^{2}\left(
fG^{2}R^{3}NrE-fG^{2}NA_{0}erR^{3}\right)
rJ_{1}+f^{2}G^{3}R^{3}J_{2}^{2}+\alpha ^{2}fG^{2}\left( fGrm^{2}+e^{2}\left(
NA_{2}-A_{0}\right) A_{0}r\right) R^{3}r \notag \\
&&-\alpha ^{2}R^{3}fG^{2}\left( NA_{2}er-2A_{0}er\right) rE-\alpha
^{2}r^{2}fG^{2}R^{3}E^{2}+r^{2}\alpha ^{2}R^{3}f^{2}G^{4}W^{\prime 2}\Big],
\notag \\
\Theta _{34} &=&\Big[-\alpha ^{2}r^{2}fG^{2}R\left( -R^{2}N^{2}+fG\right)
J_{2}J_{1}+\alpha ^{2}R^{3}r^{2}fG^{2}NJ_{2}E-\alpha ^{2}fG^{2}R\left(
\left( -N^{2}A_{2}+NA_{0}\right) R^{2}+fGA_{2}\right) er^{2}J_{2}\Big], \notag
\\
\Theta _{41} &=&\left( -fG^{2}A_{0}erR+RrG^{2}fE\right) J_{2}, \notag \\
\Theta _{42} &=&-W^{\prime }J_{2}, \notag \\
\Theta _{43} &=&\left( -RfG^{2}A_{2}er-J_{1}RfG^{2}r\right) J_{2}, \notag \\
\Theta _{44} &=&-\Big[r\left( -R^{2}N^{2}+fG\right) J_{1}^{2}-2\left( \left(
\left( -N^{2}A_{2}+NA_{0}\right) R^{2}+fGA_{2}\right) e-R^{2}NE\right)
rJ_{1}-rR^{2}fG^{2}W^{\prime 2} \notag \\
&&+rR^{2}E^{2}+2erR^{2}\left( NA_{2}-A_{0}\right) E-\left( \left(
m^{2}fG-e^{2}\left( NA_{2}-A_{0}\right) ^{2}\right)
R^{2}+fGe^{2}A_{2}^{2}\right) r\Big]. \notag
\end{eqnarray}
The nontrivial solution of this equation \cite{kruglov1}
\begin{equation}
\Theta (C_{0},C_{1},C_{2},C_{3})^{T}=0, \label{matrixeq2}
\end{equation}%
is obtained by requiring that the determinant of the matrix vanishes, $\det \Theta
=0$, which yields
\begin{equation}
-m^{2}\left[ -fG^{2}R^{2}r^{2}\alpha ^{2}W^{\prime 2}+\left( -f\left(
m^{2}r^{2}\alpha ^{2}+J_{2}^{2}\right) G+r^{2}\alpha ^{2}\left( \left(
eA_{2}+J_{1}\right) N-eA_{0}+E\right) ^{2}\right) R^{2}-Gf\alpha
^{2}r^{2}\left( eA_{2}+J_{1}\right) ^{2}\right] ^{3}=0.
\end{equation}
Solving this equation for the radial part leads to the following integral,
noting that $F(r)=f(r)G(r)$,
\begin{equation}
W_{\pm }(r)=\pm \int \frac{R(r)\sqrt{\left( E-eA_{0}+\left(
eA_{2}+J_{1}\right) N\right) ^{2}-F\left[\left( m^{2}+\frac{J_{2}^{2}}{%
r^{2}\alpha ^{2}}\right)+\frac{\left( eA_{2}+J_{1}\right) ^{2}}{R^{2}}\right]%
}}{\left(\gamma^{2}-\frac{\omega^{2}}{\alpha^{2}}\right)r \,G(r)}dr.
\end{equation}
Integrating around the pole at the outer horizon $r_{+}$, and by using $%
R(r_{+})=\gamma r_{+}$, gives \cite{ang,sri1}
\begin{equation}
W_{\pm }(r)=\pm \frac{i\pi \gamma \left( E-eA_{0}+\left( eA_{2}+J_{1}\right)
N\right) }{\left(\gamma^{2}-\frac{\omega^{2}}{\alpha^{2}}\right)G^{\prime
}(r_{+})},
\end{equation}
where $E_{net}=\left( E-eA_{0}+\left( eA_{2}+J_{1}\right) N\right).$ In the
same way as in the first part, the tunneling rate of particles tunneling
from inside to outside the horizon is given by
\begin{equation}
\Gamma =\frac{P_{+}}{P_{-}}\simeq e^{(-4ImW_{+})}.
\end{equation}
On the other hand, using Eqns.\eqref{G} and \eqref{2aaa}, it follows
\begin{equation}
\gamma^{2}-\frac{\omega^{2}}{\alpha^{2}}=1,
\end{equation}
and
\begin{equation}
G^{\prime}(r_{+})=\left(2\alpha^{2}r_{+}+\frac{b}{\alpha r_{+}^{2}}-\frac{%
2c^{2}}{\alpha^{2}r_{+}^{3}}\right).
\end{equation}
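Both relations are easy to check; in particular, the identity $\gamma^{2}-\omega^{2}/\alpha^{2}=1$ follows algebraically from the $a$-parameterization of Eq.~(\ref{2aaa}). A quick numeric confirmation (the sample values of $a$ and $\alpha$ are arbitrary, subject only to $1-3a^{2}\alpha^{2}/2>0$):

```python
# Check gamma^2 - omega^2/alpha^2 = 1 from the a-parameterization:
#   gamma^2         = (1 - a^2 alpha^2/2) / (1 - 3 a^2 alpha^2/2),
#   omega^2/alpha^2 =       a^2 alpha^2   / (1 - 3 a^2 alpha^2/2).
# The sample values of (a, alpha) are arbitrary, subject to 1 - 3 a^2 alpha^2/2 > 0.
for a, alpha in [(0.1, 1.0), (0.3, 0.7), (0.5, 0.9)]:
    x = (a * alpha) ** 2
    assert 1 - 1.5 * x > 0                    # admissibility of the parameterization
    gamma2 = (1 - 0.5 * x) / (1 - 1.5 * x)
    omega2_over_alpha2 = x / (1 - 1.5 * x)
    assert abs(gamma2 - omega2_over_alpha2 - 1.0) < 1e-12
```

The identity holds exactly, since the numerators $(1-a^{2}\alpha^{2}/2)-a^{2}\alpha^{2}$ and the common denominator $1-3a^{2}\alpha^{2}/2$ coincide.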
Again, comparing the Boltzmann factor $\Gamma =e^{-\beta E_{net}}$, with the
tunneling rate, gives the Hawking temperature \cite{gohar2,ahmed1}
\begin{equation}
T_{H}=\frac{G^{\prime }(r_{+})}{4\pi }\frac{\left( \gamma ^{2}-\frac{\omega
^{2}}{\alpha ^{2}}\right) }{\gamma }=\frac{1}{4\pi \gamma }\left( 2\alpha
^{2}r_{+}+\frac{b}{\alpha r_{+}^{2}}-\frac{2c^{2}}{\alpha ^{2}r_{+}^{3}}%
\right) .
\end{equation}
\section{Conclusion}
To summarize, in this paper we derive the temperature of charged black
strings using the Hamilton-Jacobi method of the tunneling formalism for
massive vector particles. In the case of a static black string, we start
from the field equations, then use the WKB approximation and the
separation of variables, which results in a set of four equations. In order
to work out the Hawking temperature, we solve the radial part by setting the
determinant of the coefficient matrix equal to zero. Next, we extend our
results to the rotating case and calculate the Hawking temperature. Finally,
the results presented in this work extend the tunneling method to massive
vector bosons in the case of static/rotating black strings and are consistent
with those in the literature \cite{gohar1,gohar2,ahmed1,ahmed2}.
\section{ACKNOWLEDGMENT}
The authors would like to thank the editor and the anonymous reviewers.
\section{Introduction}
Job scheduling is a fundamental task in optimization, with applications ranging from resource management in computing~\cite{salot2013survey,sharma2010survey} to operating transportation systems~\cite{kolen2007interval}.
Given a collection of \emph{machines} and a set of \emph{jobs} (or tasks) to be processed, the goal of job scheduling is to assign those jobs to the machines while respecting certain constraints.
Constraints set on jobs may vary significantly. In some cases a job has to be scheduled, but the starting time of its processing is not pre-specified. In other scenarios a job can only be scheduled at a given time, but there is flexibility in whether to process the job or not.
Frequent objectives for this task include maximizing the number of scheduled jobs or minimizing the time needed to process all the given jobs.
An important variant of job scheduling is the task of \emph{interval scheduling}: here each job has a specified starting time and length, but a job is not required to be scheduled. Given $M$ machines, the goal is to schedule as many jobs as possible. More generally, each job is also assigned a \emph{reward} or weight, which can be thought of as a payment received for processing the given job. If a job is not processed, the payment is zero, i.e., there is no penalty. We refer to this variant as \emph{weighted interval scheduling}.
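For intuition, on a single machine the weighted variant is solvable exactly in $O(n\log n)$ time by the textbook dynamic program: sort jobs by ending time and, for each job, either skip it or collect its reward plus the optimum over jobs ending before it starts. The sketch below is our own illustration of that folklore algorithm (the interval conventions and names are ours, not taken from this paper):

```python
import bisect

def max_reward(jobs):
    """jobs: list of (start, end, reward); returns the maximum total reward on one
    machine, treating intervals as half-open [start, end) so touching jobs are
    compatible."""
    jobs = sorted(jobs, key=lambda j: j[1])          # sort by ending time
    ends = [j[1] for j in jobs]
    dp = [0] * (len(jobs) + 1)                       # dp[i]: optimum over first i jobs
    for i, (s, e, w) in enumerate(jobs):
        p = bisect.bisect_right(ends, s, 0, i)       # number of jobs ending by time s
        dp[i + 1] = max(dp[i],                       # skip job i
                        dp[p] + w)                   # take job i
    return dp[len(jobs)]

# Example: the two short jobs beat the single overlapping long one.
print(max_reward([(0, 3, 2), (3, 6, 2), (1, 5, 3)]))   # -> 4
```

The binary search finds, for each job, the latest-ending compatible predecessor, which is what makes the recurrence run in $O(n\log n)$ overall.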
This problem naturally captures real-life scenarios. For instance, consider an assignment of crew members to flights, where our goal is to assign (the minimum possible number of) crews to the specified flights. In the context of interval scheduling, flights can be seen as jobs and the crew members as machines~\cite{kolen2007interval,mingozzi1999set}.
Interval scheduling also has applications in geometric tasks -- it can be seen as the task of finding a collection of non-overlapping geometric objects. In this context, its prominent applications are in VLSI design~\cite{hochbaum1985approximation} and map labeling~\cite{agarwal1998label,verweij1999optimisation}.
The aforementioned scenarios are executed in different computational settings. For instance, some use-cases are dynamic in nature, e.g., a flight gets cancelled. In certain cases we have to make online decisions, e.g., a customer must know immediately whether we are able to accept its request or not. And in some applications there might be so many requests that we need extremely fast ways of deciding whether a given request/job can be scheduled or not, e.g., providing an immediate response to a user submitting a job for execution in a cloud.
In this work, our aim is to develop methods for interval scheduling that can be turned into efficient algorithms across many computational settings:
\begin{center}
\emph{Can we design unified techniques for approximating interval scheduling very fast?}
\end{center}
In this paper we develop fast algorithms for the dynamic and local settings of computation. We also give a randomized black-box approach that reduces the task of interval scheduling on multiple machines to that of interval scheduling on a single machine, paying only a factor of $2 - 1/M$ in the approximation for unweighted jobs, where $M$ is the number of machines, and a factor of $e$ for weighted jobs.
A common theme in our algorithms is partitioning jobs over dimensions (time and machines). It is well studied in the dynamic setting how to partition the time dimension to enable fast updates. It is also studied how to partition over the machines to enable strong approximation ratios for multiple-machine scheduling problems. We design new partitioning methods for the time dimension (starting and ending times of jobs), introduce a partitioning method over machines, and closely examine the interplay between partitioning over the time dimension and over machines simultaneously in order to solve scheduling problems. We hope that, in addition to improving the best-known results, our work provides a new level of simplicity and cohesiveness for this style of approach.
\subsection{Computation Models}
In our work, we focus on the following two models of computation.
\paragraph{Dynamic setting.}
In the fully dynamic setting, we design data structures that maintain an approximately optimal solution to an instance of the interval scheduling problem while supporting insertions and deletions of jobs/intervals. The data structures also support queries of the maintained solution's total weight and of whether or not a particular interval is used in the maintained solution.
\paragraph{Local computation algorithms (LCA).}
The LCA model was introduced by Rubinfeld et al.~\cite{rubinfeld2011fast} and Alon et al.~\cite{alon2012space}.
In this setting, for a given job $J$ we would like to output whether $J$ is scheduled or not, but we do not have a direct access to the entire list of input jobs. Rather, the LCA is given access to an oracle that returns answers to questions
of the form: ``\emph{What is the input job with the earliest ending time among those jobs that start after time $x$?}''
The goal of the LCA in this setting is to provide (yes/no) answers to user queries that ask
``Is job $i$ scheduled?" (and, if applicable, ``On which machine?''), in such a manner
that all answers should be consistent with the same valid solution, while using as few oracle-probes as possible.
\subsection{Our Results}
Our first result, given in \cref{section:dynamic-unit}, focuses on designing an efficient dynamic algorithm for unweighted interval scheduling on a single machine. Prior to our work, the state-of-the-art result for this problem was due to \cite{bhore2020dynamic}, who design an algorithm with $O(\nicefrac{\log{n}}{\varepsilon^2})$ update and query time. We provide an improvement in the dependence on $\varepsilon$.
\begin{restatable}[Unweighted dynamic, single machine]{theorem}{theoremunweightedM}
\label{theorem:unweighted-M=1}
Let $\mathcal{J}$ be a set of $n$ jobs.
For any $\varepsilon > 0$, there exists a fully dynamic algorithm for $(1+\varepsilon)$-approximate unweighted interval scheduling for $\mathcal{J}$ on a single machine performing updates in $O\rb{\frac{\log(n)}{\varepsilon}}$ and queries in $O(\log(n))$ worst-case time.
\end{restatable}
We show that the ideas we developed to obtain \cref{theorem:unweighted-M=1} can also be efficiently implemented in the local setting, as we explain in detail in \cref{section:local} and prove the following claim. This is the first non-trivial local computation algorithm for the interval scheduling problem.
\begin{restatable}[Unweighted LCA, single machine]{theorem}{theoremunweightedMlocal}
\label{theorem:unweighted-M=1-local}
Let $\mathcal{J}$ be a set of $n$ jobs with their ending times upper-bounded by $N$.
For any $\varepsilon > 0$, there exists a local computation algorithm for $(1+\varepsilon)$-approximate unweighted interval scheduling for $\mathcal{J}$ on a single machine using $O\rb{\frac{\log{N}}{\varepsilon}}$ probes.
\end{restatable}
The most challenging and technically involved is our result for dynamic \emph{weighted} interval scheduling on a single machine, which we present in \cref{section:dynamic-weighted}. As a function of $1/\varepsilon$, our result constitutes an exponential improvement compared to the running times obtained in \cite{henzinger2020dynamic}.
\begin{restatable}[Weighted dynamic, single machine]{theorem}{theoremweighteddynamic}
\label{theorem:weighted-dynamic-M=1}
Let $\mathcal{J}$ be a set of $n$ jobs with their ending times upper-bounded by $N$ and their weights/rewards being in $[1,w]$.
For any $\varepsilon > 0$, there exists a fully dynamic algorithm for $(1+\varepsilon)$-approximate weighted interval scheduling for $\mathcal{J}$ on a single machine performing updates and queries in worst-case time $T \in \poly(\log n,\log N,\log w,\frac{1}{\varepsilon})$. The exact complexity of $T$ is given by
\[
O\rb{\frac{\log(n) \log^2(1/\varepsilon) \log^5(N)}{\varepsilon^{11}} + \frac{\log(\log(w)) \log(n) \log^5(N)}{\varepsilon^8} + \frac{\log^7(N)}{\varepsilon^{12}} + \log(w) \log(n) \log^4(N)}.
\]
\end{restatable}
By building on techniques we introduced to prove \cref{theorem:unweighted-M=1,theorem:unweighted-M=1-local}, we show similar results (\cref{theorem:unweighted-dynamic-scheduling-multiple,theorem:unweighted-LCA-scheduling,theorem:weighted-dynamic-scheduling}) in the case of interval scheduling on multiple machines at the expense of slower updates. We are the first to study dynamic and local interval scheduling in the general setting, i.e., in the setting of maximizing the total reward of jobs scheduled on multiple machines.
\begin{restatable}[Unweighted dynamic, multiple machines]{theorem}{theoremunweighteddynamicschedule}
\label{theorem:unweighted-dynamic-scheduling-multiple}
Let $\mathcal{J}$ be a set of $n$ jobs. For any $\varepsilon > 0$, there exists a fully dynamic algorithm for $(1 + \varepsilon)$-approximate unweighted interval scheduling for $\mathcal{J}$ on $M$ machines performing updates in $O\rb{\frac{M \log(n)}{\varepsilon}}$ and queries in $O(\log(n))$ worst-case time.
\end{restatable}
\begin{restatable}[Unweighted LCA, multiple machines]{theorem}{theoremunweighteddynamicscheduling}
\label{theorem:unweighted-LCA-scheduling}
Let $\mathcal{J}$ be a set of $n$ jobs with their ending times upper-bounded by $N$. For any $\varepsilon > 0$, there exists a local computation algorithm for $(1+\varepsilon)$-approximate unweighted interval scheduling for $\mathcal{J}$ on $M$ machines using $O\rb{\frac{M\log(N)}{\varepsilon}}$ probes.
\end{restatable}
\begin{restatable}[Weighted dynamic, multiple machines]{theorem}
{theoremweighteddynamicscheduling}
\label{theorem:weighted-dynamic-scheduling}
Let $\mathcal{J}$ be a set of $n$ jobs. For any $\varepsilon > 0$, there exists a fully dynamic algorithm for $\rb{\frac{M^M}{M^M-(M-1)^M}(1 + \varepsilon)}$-approximate\footnote{Note that this goes to $\frac{e}{e-1}(1+\varepsilon) \approx 1.58 (1 + \varepsilon)$ from below as $M$ tends to $\infty$.} weighted interval scheduling for $\mathcal{J}$ on $M$ machines performing updates in $O\rb{\frac{Mw \log(w) \log(n)}{\varepsilon^3}}$ and queries in $O(\log(n))$ worst-case time.
\end{restatable}
We provide general reductions that show how to reduce interval scheduling on multiple machines to the same task on a single machine. Our reductions incur only a small constant factor loss in the approximation and are easy to simulate in the dynamic setting.
These claims are proven in \cref{section:scheduling}.
\begin{restatable}{theorem}{theoremrandomunweighted}
\label{theorem:random-unweighted}
Given an oracle for computing an $\alpha$-approximate unweighted interval scheduling on a single machine, there exists a randomized algorithm for the same task on $M$ machines that yields a $(2-1/M)\alpha$-approximation in expectation.
\end{restatable}
\begin{restatable}{theorem}{theoremweightedrandom}
\label{theorem:weighted-random}
Given an oracle for computing an $\alpha$-approximate weighted interval scheduling on a single machine, there exists a randomized algorithm for the same task on $M$ machines that yields an $e \cdot \alpha$-approximation in expectation.
\end{restatable}
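For concreteness, a natural way to instantiate such a black-box reduction is to assign each job to one of the $M$ machines independently and uniformly at random, and then run the single-machine oracle separately on each machine's jobs. The Python sketch below is our own minimal rendering of this scheme (with an exact greedy standing in as the single-machine oracle); it is not the formal reduction proved in \cref{section:scheduling}, and all function names are illustrative.

```python
import random

def greedy_single_machine(jobs):
    """Exact unweighted single-machine scheduling: repeatedly take the
    compatible job with the earliest ending time."""
    chosen, last_end = [], float("-inf")
    for start, end in sorted(jobs, key=lambda j: j[1]):
        if start >= last_end:  # intervals may touch at endpoints
            chosen.append((start, end))
            last_end = end
    return chosen

def randomized_multi_machine(jobs, M, single_machine_alg, seed=0):
    """Black-box reduction sketch: assign each job to a uniformly random
    machine, then solve each machine independently with the oracle."""
    rng = random.Random(seed)
    buckets = [[] for _ in range(M)]
    for job in jobs:
        buckets[rng.randrange(M)].append(job)
    return [single_machine_alg(bucket) for bucket in buckets]
```

Since each job update touches only the data structure of the machine the job was assigned to, such a reduction preserves single-machine update times, with no dependence on $M$.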
Note that the approximation guarantees we obtain in \cref{theorem:unweighted-dynamic-scheduling-multiple,theorem:weighted-dynamic-scheduling} are stronger than a direct application of \cref{theorem:random-unweighted,theorem:weighted-random} on \cref{theorem:unweighted-M=1,theorem:weighted-dynamic-M=1}. However, our reductions importantly give rise to significantly faster dynamic algorithms for scheduling on multiple machines, having no dependence on $M$. Concretely, \cref{theorem:random-unweighted,theorem:weighted-random} result in algorithms with the same time complexity as \cref{theorem:unweighted-M=1,theorem:weighted-dynamic-M=1} and only an increase in expected approximation factor of $(2-1/M)$ and $e$, respectively. The same running time is obtained because \cref{theorem:random-unweighted,theorem:weighted-random} assign jobs to machines in negligible time, and then each update or query just results in an update or query on the corresponding data structure for one machine.
\subsection{Related Work}
The closest prior work to ours is that of Henzinger et al.~\cite{henzinger2020dynamic} and of Bhore et al.~\cite{bhore2020dynamic}. \cite{henzinger2020dynamic} studies $(1+\varepsilon)$-approximate dynamic interval scheduling for one machine in both the weighted and unweighted setting. They obtain $O(\exp(1/\varepsilon) \log^2{n} \cdot \log^2{N})$ update time for the unweighted and $O(\exp(1/\varepsilon) \log^2{n} \cdot \log^5{N} \cdot \log{W})$ update time for the weighted case. They cast interval scheduling as the problem of finding a maximum independent set among a set of intervals lying on the $x$-axis. The authors extend this setting to multiple dimensions and design algorithms for approximating maximum independent set among a set of $d$-dimensional hypercubes, achieving a $(1+\varepsilon) 2^d$-approximation in the unweighted and a $(4 + \varepsilon)2^d$-approximation in the weighted regime.
The authors of \cite{bhore2020dynamic} primarily focus on the unweighted case of approximating a maximum independent set of a set of cubes. For the $1$-dimensional case, which is equivalent to interval scheduling on one machine, they obtain $O(\nicefrac{\log{n}}{\varepsilon^2})$ update time, which is slower by a factor of $1/\varepsilon$ than our approach. They also show that their approach generalizes to the $d$-dimensional case, requiring $\poly \log{n}$ amortized update time and providing an $O(4^d)$ approximation.
\cite{gavruskin2014dynamic} consider dynamic interval scheduling on multiple machines in the setting in which all the jobs must be scheduled. The worst-case update time of their algorithm is $O(\log(n)+d)$, where $d$ refers to the depth of what they call \emph{idle intervals} (depth meaning the maximal number of intervals that contain a common point); they define an idle interval to be the period of time in a schedule between two consecutive jobs in a given machine. The same set of authors, in \cite{gavruskin2015dynamic_monotonic}, study dynamic algorithms for the monotone case as well, in which no interval completely contains another one. For this setup they obtain an algorithm with $O(\log(n))$ update and query time.
In the standard model of computing (i.e. one processor, static), there exists an $O(n+m)$ running time algorithm for (exactly) solving the unweighted interval scheduling problem on a single machine with $n$ jobs and integer coordinates bounded by $m$ \cite{frank1976some}.
An algorithm with running time independent of $m$ is described in \cite{tardos2005algorithm}, where it is shown how to solve this problem on $M$ machines in $O(n \log (n))$ time.
An algorithm is designed in \cite{arkin1987scheduling} for weighted interval scheduling on $M$ machines that runs in $O(n^2 \log(n))$ time.
We refer the reader to \cite{kolen2007interval} and references therein for additional applications of the interval scheduling problem.
\paragraph{Other related work.}
There has also been significant interest in job scheduling problems in which the goal is to schedule \emph{all} the given jobs across multiple machines, with the objective of minimizing the total scheduling time. Several variants have been studied, including setups which allow preemptions, or settings where jobs have precedence constraints. We refer the reader to \cite{lenstra1978complexity,correa2005single,robert2008non,skutella2005stochastic,buttazzo2012limited,pinedo2012scheduling,levey20191} and references therein for more details on these and additional variants of job scheduling. Beyond dynamic algorithms for approximating maximum independent sets of intervals or hypercubes, \cite{cardinal2021worst} show results for geometric objects such as disks, fat polygons, and higher-dimensional analogs. In work appearing after the original publication of our work, \cite{cardinal2021worst} give a result that captures \cref{theorem:unweighted-M=1} with a more general class of fat objects.
\section{Overview of Our Techniques}\label{section:techniques}
Our primary goal is to
present unified techniques for approximating scheduling problems that can be turned into efficient algorithms for many settings. In this section, we discuss key insights of our techniques.
In the problems our work tackles, partitioning the problem instance into mostly-independent, manageable chunks is crucial. Doing so enables an LCA to determine information about a job of interest without computing an entire schedule, or enables a dynamic data structure to maintain a solution without restarting from scratch. To achieve this, we focus on methods for partitioning jobs in both the time dimension, and (when $M>1$) the machine dimension.
\subsection{Partitioning Over Time}
For simplicity of presentation, we begin by examining our method for partitioning over time for just the unweighted interval scheduling problem on one machine (i.e., $M=1$). In particular, we first focus on doing so for the dynamic setting.
Recall that in this setting the primary motivation for partitioning over time is to divide the problem into independent, manageable chunks that can be utilized by a data structure to quickly modify a solution while processing an update. In our work, we partition the time dimension by maintaining a set of \emph{borders} that divide time into some number of contiguous regions.
By doing so, we divide the problem into many independent regions, and we ignore jobs that intersect multiple regions (or, equivalently, jobs that contain a border).
Our goal is then to maintain borders in a way such that we can quickly recompute the optimal solution completely within some region, and that the suboptimality introduced by these borders (e.g., we ignore jobs that contain borders) does not affect our solution much. In \cref{section:dynamic-unit}, we show that by maintaining borders where the optimal solution inside each region is of size $\Theta(\frac{1}{\varepsilon})$, we can maintain a $(1+\varepsilon)$-approximation of an optimal solution as long as we optimally compute the solution within each region.
The underlying intuition is that because each region has a solution of size $\Omega(\frac{1}{\varepsilon})$, we can charge any suboptimality caused by a border against the selected jobs in an adjacent region. Likewise, because each region's solution is size $O(\frac{1}{\varepsilon})$, we can recompute the optimal solution within some region quickly using a balanced binary search tree. We can dynamically maintain borders satisfying our desired properties by adding a new border when a region becomes too large, or merging with an adjacent region when a region becomes too small. As only $O(1)$ regions will require any modification when processing an update, this method of partitioning time, while simple, enables us to improve the fastest-known update/query time to $O(\log(n)/\varepsilon)$. The value of this technique is that it enables such worst-case update times, as one can obtain the same guarantee in amortized time simply by periodically rebuilding.
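As a static illustration of this idea (our own sketch, not the dynamic data structure itself, which maintains borders under insertions and deletions using balanced search trees), one can place a border at the ending time of every $\lceil 1/\varepsilon \rceil$-th job chosen by the exact greedy, and then re-solve each region independently, ignoring jobs that cross a border:

```python
import math

def greedy(jobs):
    # exact unweighted single-machine greedy: earliest ending time first
    chosen, last_end = [], float("-inf")
    for s, e in sorted(jobs, key=lambda j: j[1]):
        if s >= last_end:
            chosen.append((s, e))
            last_end = e
    return chosen

def borders_from_greedy(jobs, eps):
    # place a border at the end of every ceil(1/eps)-th greedily chosen
    # job, so each region's optimal solution has size Theta(1/eps)
    k = math.ceil(1 / eps)
    chosen = greedy(jobs)
    return [chosen[i][1] for i in range(k - 1, len(chosen) - 1, k)]

def partitioned_solution(jobs, borders):
    # solve each region independently; jobs crossing a border are ignored,
    # losing at most one scheduled job per border
    cuts = [float("-inf")] + borders + [float("inf")]
    solution = []
    for lo, hi in zip(cuts, cuts[1:]):
        region = [(s, e) for (s, e) in jobs if s >= lo and e <= hi]
        solution.extend(greedy(region))
    return solution
```

In general at most one job is lost per border, and each border can be charged against the $\Theta(1/\varepsilon)$ chosen jobs of an adjacent region, which is the source of the $(1+\varepsilon)$ factor.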
\subsection{Localizing the Time-Partitioning Method}
\label{sec:time-partitioning-overview}
We also show that this method of partitioning over time can be used to develop local algorithms for interval scheduling. Here, we desire to answer queries about whether a particular job is in our schedule. We hope to answer each of these queries consistently (i.e., they all agree with some approximately optimal schedule) and in less time than it would take to compute an entire schedule from scratch. Partitioning over time seems helpful for this setting, because this would enable us to focus on just the region of the job being queried. However, our previously mentioned method for maintaining borders does so in a sequential manner that we can no longer afford to do in this model of computation. Instead, we use a hierarchical approach to more easily compute the locations of borders that create regions with solutions not too big or too small.
For simplicity, we again focus on the unweighted setting with only one machine. In the standard greedy algorithm for computing unweighted interval scheduling on one machine, we repeatedly select the job $successor(x)$: ``\emph{What is the interval with the earliest endpoint, of those that start after point $x$?}'' (where $x$ is the endpoint of the previously chosen job). As reading the entire problem instance would take longer than desired, an LCA requires some method of probing for information about the instance. Our LCA utilizes such successor probes to do so. For further motivation, see \cref{section:local}. We outline a three-step approach towards designing an LCA that utilizes few probes:
\emph{Hierarchizing the greedy (\cref{alg:alg-global-exact}).} Instead of just repeatedly using $successor(x)$ to compute the solution as the standard greedy does, we add hierarchical structure that adds no immediate value but serves as a helpful stepping stone.
Consider a \emph{binary search tree} (BST) like structure, where the root node corresponds to the entire time range $[0, N]$. Each node in the structure has a left-child and a right-child corresponding to the first and second halves, respectively, of that node's range. At the bottom, leaf nodes have no children and correspond to a time range of length one unit. At a high-level, we add hierarchical structure by considering jobs contained in some node's left-child, then considering jobs that go between the node's left-child and right-child, and then considering jobs contained in the node's right-child. This produces the same result as the standard greedy, but we do so with a hierarchical structure that will be easier to utilize.
\emph{Approximating the hierarchical greedy (\cref{alg:alg-global-approx}).}
Now, we modify the hierarchical greedy so that it is no longer exactly optimal but is instead an approximation. At first this will seem strictly worse, but it will yield an algorithm that is easier to localize. When processing each node, we will first check whether it is the case that both the left-child and the right-child have optimal solutions of size $>\frac{1}{\varepsilon}$. A key observation here is that checking whether a time range has an optimal solution of size $>\frac{1}{\varepsilon}$ can be done by making at most $1+\frac{1}{\varepsilon}$ successor probes (i.e., one does not necessarily need to compute the entire optimal solution to check if it is larger than some relatively small threshold).
If both the left-child and the right-child would have optimal solutions of size $>\frac{1}{\varepsilon}$, then we can afford to draw a border at the midpoint of our current node and solve the left-child and right-child independently. Jobs intersecting a border are \emph{ignored}, and we charge the number of such ignored jobs, i.e., the number of drawn borders, to the size of solution in the corresponding left- and right-child. Ultimately, we show that the addition of these borders makes our algorithm $(1+\varepsilon)$-approximate. Moreover, and importantly, these borders introduce \emph{independence} between children with large solutions.
\emph{Localizing the approximate, hierarchical greedy (\cref{alg:alg-local-approx}).} Finally, we localize the approximate, hierarchical greedy. To do so, we note that when some child of a node has a small optimal solution, then we can get all the information we need from that child in $O(\frac{1}{\varepsilon})$ probes. As such, if a node has a child with a small optimal solution, we can make the required probes from the small child and recurse to the large child. Otherwise, if both children have large solutions, we can draw a border at the midpoint of the current node and only need to recurse down the child which contains the job the LCA is being queried about.
With these insights, we have used our partitioning method over time for local algorithms to produce an LCA only requiring $O(\frac{\log(N)}{\varepsilon})$ successor probes.
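For concreteness, the standard greedy can be phrased purely in terms of successor probes, as in the following Python sketch (our own illustration, with the oracle simulated in memory and ``start after $x$'' read as ``start at or after $x$''); the hierarchical algorithm described above refines this so that only $O(\frac{\log(N)}{\varepsilon})$ probes are needed per query.

```python
def make_successor_oracle(jobs):
    """Simulates the LCA's probe access: successor(x) returns the job with
    the earliest ending time among jobs starting at or after x."""
    by_end = sorted(jobs, key=lambda j: j[1])
    def successor(x):
        for s, e in by_end:
            if s >= x:
                return (s, e)
        return None
    return successor

def greedy_via_probes(successor, start=0):
    """The standard greedy, expressed only through successor probes;
    also counts the number of probes used."""
    schedule, probes, x = [], 0, start
    while True:
        probes += 1
        job = successor(x)
        if job is None:
            return schedule, probes
        schedule.append(job)
        x = job[1]  # continue from the end of the chosen job
```

Note that computing the entire schedule this way costs one probe per chosen job plus one, which is exactly why checking whether a range's optimum exceeds $\frac{1}{\varepsilon}$ needs at most $1+\frac{1}{\varepsilon}$ probes.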
\subsection{Speeding-Up Weighted Interval Scheduling}
In our most technically involved result, we build upon the work of \cite{henzinger2020dynamic} and design the first $(1+\varepsilon)$-approximation algorithm for weighted interval scheduling that runs in $\poly(\log n,\log N,\log w,\frac{1}{\varepsilon})$ time. While our analysis is involved and requires background provided in later sections to fully understand, we discuss a high-level overview of key insights here. For more details, see \cref{section:dynamic-weighted}.
Like \cite{henzinger2020dynamic}, we build a hierarchical data structure over time ranges.
In particular, we have a binary search tree where each cell $Q$ covers a time range and its children $Q_L$ and $Q_R$ cover the left and right half of its time range, respectively. $Q_{root}$ covers the entire time range. Each job is then assigned to a cell such that the job is completely contained within the cell and the job's length is approximately an $\varepsilon$ fraction of the cell's time range.
We now outline how this structure is used in computation. As a reminder, our main goal is to compute a $(1+\varepsilon)$-approximate weighted interval scheduling. This task is assigned to $Q_{root}$ by sending the range $[0, N]$ to it. However, instead of computing the answer for the entire range $[0, N]$ directly, $Q_{root}$ partitions $[0, N]$ into a number of ranges over which it is easy to compute approximate solutions (such ranges are called \emph{sparse}) and the (remaining) ranges over which it is ``hard'' to compute approximate solutions at the level of $Q_{root}$. These hard-to-approximate ranges are deferred to the children of $Q_{root}$, and are hard to approximate because any near-optimal solution for such a range contains many jobs.
So, a child $Q_C$ of $Q_{root}$ might receive \emph{multiple} ranges for which it is asked to find an approximately optimal solution. $Q_C$ performs computation in the same manner as $Q_{root}$ did -- the cell $Q_C$ partitions each range it receives into ``easy'' and ``hard'' to compute subranges. The first type of subranges is computed by $Q_C$, while the second type is deferred to the children of $Q_C$. Here, ``hard'' ranges are akin to nodes having large solutions in our description of \cref{alg:alg-global-approx} in \cref{sec:time-partitioning-overview}. As in \cref{sec:time-partitioning-overview}, these ``hard'' ranges have large weight and allow for drawing a boundary and hence dividing a range into two or more \emph{independent} ranges.
To decide which jobs are ``easy'' and which are ``hard'' to compute on the level of a cell, within each cell $Q$ we design an auxiliary data structure (that relates to a constant-factor approximation of the problem) that maintains a set of points $Z(Q)$ partitioning $Q$ into slices of time (where slices are the time ranges between consecutive points of $Z(Q)$).
It is instructive to think of $Z(Q)$ as a method of partitioning time such that we can use this partitioning to more efficiently search for a near-optimal solution of a later-defined structure.
In particular, $Z(Q)$ is designed such that the optimal solution within each slice has small total reward compared to the optimal solution over the entirety of $Q$.
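As a simplified, static sketch of this idea (our own illustration; the actual auxiliary structure maintains $Z(Q)$ dynamically, and the constants differ), one can sweep left to right over candidate endpoints and emit a partition point whenever the optimal reward of the jobs lying fully inside the current slice would exceed a threshold $\tau$:

```python
import bisect

def weighted_opt(jobs):
    """Classic weighted interval scheduling DP over jobs sorted by ending
    time; jobs are (start, end, reward) and may touch at endpoints."""
    jobs = sorted(jobs, key=lambda j: j[1])
    ends = [e for _, e, _ in jobs]
    dp = [0] * (len(jobs) + 1)
    for i, (s, e, w) in enumerate(jobs):
        j = bisect.bisect_right(ends, s, 0, i)  # jobs ending at or before s
        dp[i + 1] = max(dp[i], dp[j] + w)
    return dp[-1]

def slice_points(jobs, tau):
    """Greedy sweep: cut as soon as the running slice's optimal reward
    exceeds tau, so each slice's optimum only barely exceeds tau."""
    points, lo = [], 0
    for t in sorted({e for _, e, _ in jobs}):
        inside = [(s, e, w) for s, e, w in jobs if s >= lo and e <= t]
        if weighted_opt(inside) > tau:
            points.append(t)
            lo = t
    return points
```

This toy version recomputes each slice optimum from scratch; the point of the actual structure is that $|Z(Q)| = \poly(\nicefrac{1}{\varepsilon},\log(N))$ points suffice and the set can be maintained under updates.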
The general goal of our analysis is to show that there exists a near-optimal solution of a desirable structure with regards to our hierarchical decomposition.
Intuitively, the main goal of this structure is to enable reducing the computation of an approximately optimal global solution to the computation of approximate solutions across multiple ranges of jobs, each range having a solution of size $O(1/\varepsilon)$; we call such ranges \emph{sparse} in the text below. (As we discuss later, approximately optimal solutions can be computed very efficiently in these sparse ranges.)
The main challenge here is to detect/localize these sparse ranges efficiently and in a way that yields a fast dynamic algorithm.
As an oversimplification, we define a solution as having \emph{nearly-optimal sparse structure} if it can be generated with roughly the following process:
\begin{itemize}
\item Each cell $Q$ receives a set of disjoint time ranges for which it is supposed to compute an approximately optimal solution using jobs assigned to $Q$ or its descendants. Each received time range must have starting and ending time in $Z(Q)$. $Q_{root}$ receives the time range $[0, N]$ containing all of time.
\item For each time range $\mathcal{R}$ received by $Q$, the process partitions $\mathcal{R}$ into disjoint time ranges of three types: ``sparse'' time ranges, time ranges to be sent to $Q_L$ for processing, and time ranges to be sent to $Q_R$ for processing. In particular, this means that subranges of $\mathcal{R}$ are deferred to the children of $Q$ for processing.
\item For every ``sparse'' time range, $Q$ computes an optimal solution using at most $\nicefrac{1}{\varepsilon}$ jobs that are assigned to $Q$ or its descendants.
\item The union of the reward/solution of all sparse time ranges on all levels must be a $(1+\varepsilon)$-approximation of the globally optimal solution (i.e., without any structural requirements).
\end{itemize}
We improve the best-known running time with two key advances. First, we propose novel charging arguments that enable us to partition each cell with only $|Z(Q)| = \poly(\nicefrac{1}{\varepsilon},\log(N))$ points and still show there is an approximately optimal solution with nearly-optimal sparse structure. Next, we design an approximate dynamic programming approach to efficiently compute near-optimal solutions for sparse ranges. Combined, this enables a more efficient algorithm for weighted interval scheduling.
\textbf{Novel charging arguments.} We now outline insights of our charging arguments that enable us to convert an optimal solution $OPT$ into a near-optimal solution $OPT'$ with nearly-optimal sparse structure while relaxing our partitioning to only need $|Z(Q)| = \poly(\nicefrac{1}{\varepsilon},\log(N))$ points. For a visual aid, see \cref{fig:charging}.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.9\textwidth]{figure-charging.eps}}
\caption{Visual example for charging argument.}
\label{fig:charging}
\end{figure}
As outlined in our overview of the nearly-optimal sparse structure, each cell $Q$ receives a set of disjoint time ranges, with each time range having endpoints in $Z(Q)$, and must split them into three sets: sparse time ranges, time ranges for $Q_L$, and time ranges for $Q_R$. We will be modifying $OPT$ by deleting some jobs. This new solution will be denoted by $OPT'$ and will have the following properties: (1) $OPT'$ exhibits nearly-optimal sparse structure; and (2) $OPT'$ is obtained from $OPT$ by deleting jobs of total reward at most $O(\varepsilon \cdot w(OPT))$. We outline an example of one such time range a cell $Q$ may receive in \cref{fig:charging} (annotated by ``received range $\mathcal{R}$''). Since our structure only allows a cell $Q$ to use a job if it is assigned to $Q$ or a descendant, any relatively valuable job assigned to $Q$ must be used now by putting it in a sparse time range. (Recall here that each job is assigned to exactly one cell $Q$, so a job assigned to $Q$ cannot be used by $Q$'s children.) We show one such job in \cref{fig:charging} in blue marked by ``B''. To have this job be in a sparse range, we must divide the time range $\mathcal{R}$ somewhere, as otherwise our solution in the received range will be dense. If we naively divide $\mathcal{R}$ at the partition of $Z(Q)$ to the left and right of the job ``B'', we might be forced to delete some valuable jobs (pictured in green marked by ``G''). Instead, we expand the division outwards in a more nuanced manner. Namely, we keep expanding outwards and looking at the job that contains the next partition point (if any). If the job's value exceeds a certain threshold, as those pictured as green and marked by ``G'' in \cref{fig:charging}, we continue expanding. Otherwise, it is below a certain threshold, pictured as brown and not marked in \cref{fig:charging}, and its deletion can be charged against the blue job. 
We delete such brown jobs, and the corresponding partition points, i.e., the vertical red lines crossing those brown jobs, constitute the start and the end of the sparse range. By the end, we will have decided the starting and ending time of the sparse range, and what remains inside are blue job(s), green job(s), and yellow job(s) (also marked by ``Y''). Note that yellow jobs must be completely within a partition slice of $Z(Q)$.
Since we defined $Z(Q)$ such that the optimal reward within any grid slice is small, the yellow jobs have relatively small rewards compared to the total reward of green and blue jobs. Accordingly, we can delete the yellow jobs (to help make this time range's solution sparse) and charge their cost against a nearby green or blue job.
In \cref{fig:charging}, an arrow from a deleted job points towards the job against which we charge its loss. In the end, each sparse range contains only green job(s) and blue job(s). If there are more than $\nicefrac{1}{\varepsilon}$ jobs in such a sparse range, we employ a simple sparsifying step detailed in the full proof.
What remains are the time ranges of the received range that were not put in sparse ranges. These are the time ranges that are sent to $Q_L$ and $Q_R$. In \cref{fig:charging}, these ranges are outlined in yellow (and annotated with ``child subproblem''). However, these time ranges do not necessarily align with $Z(Q_L)$ or $Z(Q_R)$, as required by the nearly-optimal sparse structure. See \cref{fig:snap} for intuition on why we cannot just ``snap'' these child subproblems to the partition points in $Z(Q_L)$ and $Z(Q_R)$. (We say that a range $\mathcal{R}$ is \emph{snapped} inward (outward) within cell $Q$ if $\mathcal{R}$ is shrunk (extended) on both sides to the closest points in $Z(Q)$. Inward snapping is illustrated in \cref{fig:snap}.) Instead, we employ a similar charging argument to deal with snapping. As an analog of how we expanded outwards from the blue job when defining sparse ranges, we employ a charging argument where we contract inwards from the endpoints of the child subproblem. In summary, these charging arguments enable us to show that a solution of nearly-optimal sparse structure exists even when partitioning each cell $Q$ with only $|Z(Q)| = \poly(\nicefrac{1}{\varepsilon},\log(N))$ points.
\textbf{Approximate dynamic programming.} Now, we outline our key advance for more efficiently calculating the solution of nearly-optimal sparse structure. This structure allows us to partition time into ranges with sparse solutions. More formally, we are given a time range and must approximate an optimal solution that uses at most $\nicefrac{1}{\varepsilon}$ jobs. Previous approaches utilized a method similar to exhaustive search that requires an exponential dependence on $\nicefrac{1}{\varepsilon}$. We outline an approximate dynamic programming approach that only requires polynomial time dependence on $\nicefrac{1}{\varepsilon}$.
The relatively well-known dynamic programming approach for computing weighted interval scheduling is to maintain a dynamic program where the state is a time prefix and the output is the maximum total reward that can be obtained in that time prefix. However, for our purposes, there are too many possibilities for time prefixes. Instead, we invert the dynamic programming approach, and have a state referencing some amount of reward, where the dynamic program returns the minimum time prefix in which one can obtain a given reward. Unfortunately, there are also too many possible amounts of rewards. We observe that we do not actually need this exact state, but only an approximation. In particular, we show that one can round this state down to powers of $(1+\varepsilon^2)$ and hence significantly reduce the state-space. In \cref{sec:dp-sparse}, we show how one can use this type of observation to quickly compute approximate dynamic programming for a near-optimal sparse solution inside any time range.
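To make the rounding idea concrete, here is a minimal Python sketch of the inverted dynamic program (the function names are ours; the actual algorithm additionally restricts attention to a given time range and recovers the chosen jobs):

```python
import math

def round_down(x, base):
    """Round x down to the nearest power of `base` (0 maps to 0)."""
    if x <= 0:
        return 0.0
    return base ** math.floor(math.log(x, base))

def sparse_range_dp(jobs, eps):
    """Inverted DP: state = (rounded) reward level, value = the earliest
    time prefix in which that much reward can be collected.

    jobs: list of (start, end, reward) on one machine, half-open intervals.
    Returns the best achievable (rounded) reward.
    """
    base = 1 + eps ** 2
    levels = {0.0: 0.0}  # rounded reward -> min end of a time prefix achieving it
    for s, e, w in sorted(jobs, key=lambda j: j[1]):  # process by end time
        updates = {}
        for r, t in levels.items():
            if t <= s:  # the job fits after the current time prefix
                r2 = round_down(r + w, base)
                if e < updates.get(r2, math.inf):
                    updates[r2] = e
        for r2, t2 in updates.items():
            if t2 < levels.get(r2, math.inf):
                levels[r2] = t2
    return max(levels)
```

Since rewards are rounded down to powers of $(1+\varepsilon^2)$, the number of states is logarithmic in the reward range rather than linear; with at most $\nicefrac{1}{\varepsilon}$ jobs per sparse range, the compounded loss is $(1+\varepsilon^2)^{1/\varepsilon} \approx 1+\varepsilon$.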
\subsection{Partitioning over Machines}
Now we detail our techniques for extending interval scheduling methods from one machine to many machines. A key difficulty in extending such methods is that there is an \emph{inherent dependency} in the process of scheduling. Choosing to use (or not use) a job on one machine directly affects the optimal schedule for the remaining machines. To overcome this, our work examines two approaches for scheduling. With the first approach, we maintain approximation guarantees almost identical to those for the single-machine setting at the expense of an $O(M)$ factor slowdown. With the second approach, we achieve the same time complexity as for a single machine at the expense of a slight multiplicative decrease in the approximation guarantee.
\textbf{Partitioning over machines and time \emph{simultaneously}.} First, we explore partitioning over time and machines \emph{simultaneously}. At a high level, we do so by dynamically maintaining a partition over time and computing an approximately optimal solution for all machines together within each time range. However, as computing a solution for machines together is a process with dependencies, our algorithm incurs at least an $O(M)$ factor slowdown compared to analogous approaches for a single machine.
\emph{Unweighted jobs.} For scheduling unweighted jobs on multiple machines, there is a well-known centralized greedy approach similar in style to the greedy for scheduling unweighted jobs on one machine. As this greedy is efficient to simulate, we can employ an algorithm and analysis similar to how we dynamically computed unweighted interval scheduling on one machine. The notable difference is that we might need to charge $M$ jobs containing a border against our solutions in adjacent regions (as opposed to just charging $1$). Accordingly, we maintain borders where the optimal solution inside each region is size $\Theta(\frac{M}{\varepsilon})$.
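As a point of reference, one standard optimal greedy for unweighted scheduling on $M$ machines considers jobs by ending time and places each job on the feasible machine that freed up most recently. A minimal Python sketch (names ours; jobs are half-open intervals $(s, e)$):

```python
def k_machine_greedy(jobs, M):
    """Best-fit greedy for unweighted interval scheduling on M machines:
    sort by endpoint; schedule each job on the feasible machine whose last
    job ended latest, skipping the job if no machine is free."""
    last_end = [float("-inf")] * M
    schedule = [[] for _ in range(M)]
    for s, e in sorted(jobs, key=lambda j: j[1]):
        feasible = [i for i in range(M) if last_end[i] <= s]
        if feasible:
            i = max(feasible, key=lambda i: last_end[i])  # best fit
            schedule[i].append((s, e))
            last_end[i] = e
    return schedule
```

On the instance $\{(1,2), (0,3), (2,4), (3,5), (0,10)\}$ with $M=2$, the greedy schedules four jobs and skips the long job $(0,10)$, which is optimal.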
\emph{Weighted jobs.} Using similar approaches in the setting with weighted jobs faces two challenges we must overcome. First, the well-known approach for computing this problem in the centralized setting uses minimum-cost flow (as opposed to a greedy), which it is not clear how to simulate efficiently in a dynamic setting. To handle this, within borders we instead compute the weighted maximum independent set $M$ times, which loses only a factor of $\frac{M^M}{M^M - (M-1)^M}$ (upper-bounded by approximately $1.58$) in the approximation guarantee. To compute the weighted maximum independent set, we use a dynamic programming approach. Finally, we note that we might need to charge $M$ jobs of reward $w$ containing a border ($Mw$ total reward) against our solutions in adjacent regions. So, we maintain borders where the optimal solution inside each region has total reward $\Theta(\frac{Mw}{\varepsilon})$.
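The dynamic program we invoke within borders is, at its core, the textbook weighted interval scheduling recurrence (sort by end time, binary-search the last compatible job). A minimal Python sketch, with names of our choosing:

```python
import bisect

def weighted_interval_dp(jobs):
    """Maximum total reward of non-overlapping jobs on one machine.
    jobs: list of (start, end, reward), half-open intervals."""
    jobs = sorted(jobs, key=lambda j: j[1])
    ends = [e for _, e, _ in jobs]
    dp = [0] * (len(jobs) + 1)  # dp[i] = best reward using the first i jobs
    for i, (s, e, w) in enumerate(jobs, start=1):
        # p = number of earlier jobs ending no later than this job starts
        p = bisect.bisect_right(ends, s, 0, i - 1)
        dp[i] = max(dp[i - 1], dp[p] + w)  # skip the job, or take it
    return dp[-1]
```

This runs in $O(n \log n)$ time; inside a border region of bounded total reward it is fast enough for our purposes.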
\textbf{Partitioning over machines \emph{then} time.} Our second approach avoids the slowdown of the first, at the expense of a small multiplicative decrease in the approximation guarantee. To do so, we first partition jobs over machines and \emph{then} dynamically partition time to maintain solutions for each machine independently. In both of our results, we partition jobs among machines by assigning each job to a machine uniformly at random. Then, for each machine we simply maintain an approximately optimal schedule among jobs that were randomly assigned to it. This reduction immediately yields algorithms that are asymptotically just as fast as scheduling with only one machine. We now outline our techniques for showing this approach still maintains a strong approximation guarantee:
\emph{Unweighted jobs.} For scheduling on multiple machines, we note a symmetry among machines. If we can calculate the expected value of the optimal solution among jobs assigned to a particular machine, then the expected value of the overall solution after all job assignments is simply this quantity multiplied by $M$.
To show a lower-bound for the expected optimal solution among jobs assigned to a particular machine, we recall that unweighted interval scheduling on one machine can be solved with a simple greedy method where we consider jobs in increasing order of their ending time and include the job if it does not intersect any previously included jobs. Interestingly, our method simulates this greedy on one machine by considering all jobs in an optimal solution for $M$ machines, where we lazily do not yet realize whether or not each job was assigned to this particular machine.
Then, as we run our greedy, we realize whether or not a job is assigned to this particular machine only when the greedy would choose to include this job. If we realize that this job was not assigned to this machine, then we continue the greedy method accordingly.
Otherwise, we continue the greedy as if we included this job, and we delete the at most $M-1$ jobs with later ending times that intersect this job (we obtain this $M-1$ upper-bound because we know the set of all jobs forms a valid solution on $M$ machines).
Whether or not a particular job is assigned to this particular machine is a Bernoulli random variable with parameter $\frac{1}{M}$, and we thus expect to see $M$ jobs that our greedy would select until we can actually use one on this machine. In total, our expected proportion of used jobs (among those in a particular optimal solution on $M$ machines) for this machine is at least $\frac{1}{M + (M-1)}$, so our global solution only loses a factor of $(2-1/M)$ in expectation by randomly assigning jobs to machines.
\emph{Weighted jobs.} The weighted setting presents unique challenges that the unweighted setting does not. For example, in our greedy-simulation approach for analyzing the reduction in the unweighted setting, one can show that long jobs containing many other jobs are less likely to be included in the obtained solution for any machine. This is because, when we delete the $M-1$ jobs with later ending times than some particular job we chose to include in our solution, this will often delete the longer job that contains many jobs.
This is problematic in the weighted setting, as the longer job may provide extremely large reward. To handle this, we provide a different analysis where every job in some optimal solution among $M$ machines, has at least constant probability of being in our solution after randomly assigning jobs to machines.
To accomplish this, we introduce the following procedure. First, generate a uniformly random permutation and process all jobs (from the particular optimal solution on $M$ machines) in this order. When we process a job $J$, we include it in its assigned machine's schedule if (i) there are no jobs intersecting $J$ that are currently in $J$'s assigned machine's schedule, or (ii) all jobs intersecting $J$ that are currently in $J$'s assigned machine's schedule are completely contained within $J$. Note that if $J$ is selected because of the latter criterion, we delete all jobs scheduled on its assigned machine that are completely contained within $J$. With a detailed analysis, we show that no matter what the original optimal schedule over $M$ machines is, each job has probability at least $\frac{1}{e}$ of being included in the final schedule produced by our procedure.
So, our global solution only loses a factor of $e$ in expectation by randomly assigning jobs to machines.
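The random-order inclusion procedure above can be sketched as follows (a simplified Python simulation with hypothetical names; it assumes the input jobs form a feasible $M$-machine schedule and always returns a feasible per-machine schedule):

```python
import random

def random_assignment_schedule(opt_jobs, M, rng):
    """Assign each job of a feasible M-machine schedule to a uniformly random
    machine, process the jobs in uniformly random order, and include a job if
    it has no conflicts on its machine or if it contains all of them."""
    machine_of = [rng.randrange(M) for _ in opt_jobs]
    order = list(range(len(opt_jobs)))
    rng.shuffle(order)
    schedule = [[] for _ in range(M)]  # job indices kept per machine

    def overlaps(a, b):
        return opt_jobs[a][0] < opt_jobs[b][1] and opt_jobs[b][0] < opt_jobs[a][1]

    def contained(a, b):  # interval a lies within interval b
        return opt_jobs[b][0] <= opt_jobs[a][0] and opt_jobs[a][1] <= opt_jobs[b][1]

    for j in order:
        m = machine_of[j]
        conflicts = [i for i in schedule[m] if overlaps(i, j)]
        if not conflicts:
            schedule[m].append(j)
        elif all(contained(i, j) for i in conflicts):
            # j contains every conflicting job: evict them and keep j
            schedule[m] = [i for i in schedule[m] if i not in conflicts] + [j]
    return schedule
```

The analysis shows that each job survives this process with probability at least $\nicefrac{1}{e}$, irrespective of the original $M$-machine schedule.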
\section{Problem Setup}\label{section:setup}
In the interval scheduling problem, we are given $n$ jobs and $M$ machines. With each job $j$ are associated two numbers $s_j \ge 0$ and $l_j \ge 1$, referring to ``start'' and ``length'' respectively, meaning that the job $j$ takes $l_j$ time to be processed and its processing can only start at time $s_j$. We will use $N$ to denote an upper-bound on $s_j + l_j$, for all $j$. In addition, with each job $j$ is associated weight/reward $w_j > 0$, that refers to the reward for processing the job $j$. The task of \emph{interval scheduling} is to schedule jobs across machines while maximizing the total reward and respecting that each of the $M$ machines can process at most one job at any point in time.
\section{Dynamic Unweighted Interval Scheduling on a Single Machine}\label{section:dynamic-unit}
In this section we prove \cref{theorem:unweighted-M=1}.
As a reminder, \cref{theorem:unweighted-M=1} considers the case of interval scheduling in which $w_j = 1$ for each $j$ and $M = 1$, i.e., the jobs have unit reward and there is only a single machine at our disposal. This case can also be seen as a task of finding a maximum independent set among intervals lying on the $x$-axis. The crux of our approach is in designing an algorithm that maintains the following invariant:
\smallskip
\begin{minipage}{0.95\linewidth}
\begin{mdframed}[hidealllines=true, backgroundcolor=gray!15]
\vspace{-3pt}
\begin{invariant}\label{invariant:unweighted-MIS}
The algorithm maintains a set of borders such that an optimal solution schedules between $\nicefrac{1}{\varepsilon}$ and $\nicefrac{2}{\varepsilon}$ intervals within each two consecutive borders.
\end{invariant}
\end{mdframed}
\end{minipage}
\medskip
We aim for our algorithm to maintain \cref{invariant:unweighted-MIS} while keeping track of the optimal solution between each pair of consecutive borders. The high-level intuition is that if we do not maintain too many borders, then our solution must be very good (our solution decreases in size by at most one every time we add a new border). Furthermore, if the optimal solution within borders is small, it is easier for us to maintain said solutions. We prove that this invariant enables a high-quality approximation:
\begin{lemma}\label{lemma:invariant-value}
An algorithm that maintains an optimal solution between each pair of consecutive borders, where the optimal solution between each pair of consecutive borders contains at least $K$ intervals, maintains a $\frac{K+1}{K}$-approximation.
\end{lemma}
\begin{proof}
Consider an optimal solution $OPT$. We will now construct a $\frac{K+1}{K}$-approximate solution $OPT'$ as follows: given $OPT$, delete all intervals in $OPT$ that overlap a drawn border.
Fix an interval $J$ appearing in $OPT$ but not in $OPT'$. Assume that $J$ intersects the $i$-th border. Recall that between the $(i-1)$-st and the $i$-th border there are at least $K$ intervals in $OPT'$. Moreover, at most one interval from $OPT$ intersects the $i$-th border. Hence, to show that $OPT'$ is a $\frac{K+1}{K}$-approximation of $OPT$, we can charge the removal of $J$ to the intervals appearing between the $(i-1)$-st and the $i$-th border in $OPT'$.
\end{proof}
Not only does \cref{invariant:unweighted-MIS} enable high-quality solutions, but it also assists us in quickly maintaining such a solution. We can maintain a data structure with $O(\frac{\log(n)}{\varepsilon})$ updates and $O(\log(n))$ queries that moves the borders to maintain the invariant, and thus maintains a $(1+\varepsilon)$-approximation as implied by \cref{lemma:invariant-value}.
\theoremunweightedM*
\begin{proof}
Our goal now is to design an algorithm that maintains \cref{invariant:unweighted-MIS}, which by \cref{lemma:invariant-value} and for $K = \nicefrac{1}{\varepsilon}$ will result in a $(1+\varepsilon)$-approximation of \textsc{Maximum-IS}\xspace.
On a high-level, our algorithm will maintain a set of borders. When compiling a solution of intervals, the algorithm will not use any interval that contains any of the borders, but proceed by computing an optimal solution between each two consecutive borders. The union of those between-border solutions is the final solution.
Moreover, we will maintain the invariant that every contiguous region has an optimal solution of size in $[\frac{1}{\varepsilon}, \frac{2}{\varepsilon})$.
In the rest, we show how to implement these steps in the claimed running time.
\paragraph{Maintained data-structures.}
Our algorithm maintains a balanced binary search tree $T_{\rm{all}}$ of intervals sorted by their starting points. Each node of $T_{\rm{all}}$ will also maintain the end-point of the corresponding interval. It is well-known how to implement a balanced binary search tree with $O(\log n)$ worst-case running time per insertion, deletion and search query. Using such an implementation, the algorithm can in $O(\log n)$ time find the smallest ending-point in a prefix/suffix on the intervals sorted by their starting-points. That is, in $O(\log{n})$ time we can find the interval that ends earliest, among those that start after a certain time.
In addition, the algorithm also maintains a balanced binary search tree $T_{\rm{borders}}$ of the borders currently drawn.
Also, we will maintain one more balanced binary search tree $T_{\rm{sol}}$ that will store the intervals that are in our current solution.
\paragraph{Update after an insertion.}
Upon insertion of an interval $J$, we add $J$ to $T_{\rm{all}}$. We make a query to $T_{\rm{borders}}$ to check whether $J$ overlaps a border. If it does, we need to do nothing; in this case, we ignore $J$ even if it belongs to an optimal solution. If it does not, we recompute the optimal solution within the two borders adjacent to $J$. If, after recomputing, the new solution between the two borders is too large, i.e., it has at least $\frac{2}{\varepsilon}$ intervals, then we add a border between the $\frac{1}{\varepsilon}$-th and the $(1+\frac{1}{\varepsilon})$-th of those intervals.
\paragraph{Update after a deletion.}
Upon deletion of an interval $J$, we delete $J$ from $T_{\rm{all}}$. If $J$ was not in our solution, we do nothing else. Otherwise, we recompute the optimal solution within the borders adjacent to $J$ and modify $T_{\rm{sol}}$ accordingly. Let those borders be the $i$-th and the $(i+1)$-st. If the new solution between borders $i$ and $i+1$ now has size less than $\nicefrac{1}{\varepsilon}$ (in which case it has size exactly $\nicefrac{1}{\varepsilon}-1$), we delete an arbitrary one of the two borders (thus combining this region with an adjacent region). Then, we recompute the optimal solution within the (now larger) region $J$ is in. If this results in a solution of size at least $\nicefrac{2}{\varepsilon}$, we need to split the newly created region by adding a border. Before splitting, the size of the solution is upper-bounded by one more than the sum of the sizes of the solutions within the two regions before combining them, as an interval may have overlapped the now-deleted border (one region has size exactly $\frac{1}{\varepsilon}-1$ and the other is upper-bounded by $\frac{2}{\varepsilon}-1$). Thus, the solution has size in the range $[\nicefrac{2}{\varepsilon},\nicefrac{3}{\varepsilon})$. We can add a border between the $\nicefrac{1}{\varepsilon}$-th and $(\nicefrac{1}{\varepsilon}+1)$-st intervals of the optimal solution, and will then have one region with exactly $\nicefrac{1}{\varepsilon}$ intervals and another with a number of intervals in $[\nicefrac{1}{\varepsilon},\nicefrac{2}{\varepsilon})$, maintaining our invariant.
In all of these, region sizes are $O(\nicefrac{1}{\varepsilon})$, so recomputing takes $O(\nicefrac{\log (n)}{\varepsilon})$ time.
For queries, we will have maintained $T_{\rm{sol}}$ in our updates such that it contains exactly the intervals in our solution. So, for each query we just perform a lookup to see whether the interval is in $T_{\rm{sol}}$, in $O(\log n)$ time.
\end{proof}
This result improves the best-known time complexities \cite{bhore2020dynamic,henzinger2020dynamic}. Unfortunately, it does not immediately generalize well to the weighted variant.
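The border-maintenance scheme from the proof above can be illustrated with the following simplified, insertion-only Python sketch. It recomputes regions from scratch using plain lists; the balanced search trees from the proof are what give the stated $O(\nicefrac{\log n}{\varepsilon})$ update time.

```python
NEG, POS = float("-inf"), float("inf")

class BorderedScheduler:
    """Insertion-only sketch: keep borders so that each region's optimal
    solution has between 1/eps and 2/eps intervals; intervals touching or
    crossing a border are ignored."""

    def __init__(self, eps):
        self.k = round(1 / eps)    # split position
        self.cap = round(2 / eps)  # split threshold
        self.intervals = []
        self.borders = []          # kept sorted

    def _greedy(self, lo, hi):
        # classic earliest-end greedy, restricted to the region (lo, hi)
        sol, t = [], lo
        for s, e in sorted(self.intervals, key=lambda iv: iv[1]):
            if lo < s and e < hi and s >= t:
                sol.append((s, e))
                t = e
        return sol

    def _region_of(self, s, e):
        bs = [NEG] + self.borders + [POS]
        for lo, hi in zip(bs, bs[1:]):
            if lo < s and e < hi:
                return lo, hi
        return None                # interval meets a border: ignore it

    def insert(self, s, e):
        self.intervals.append((s, e))
        reg = self._region_of(s, e)
        if reg is None:
            return
        sol = self._greedy(*reg)
        if len(sol) >= self.cap:   # region too full: split it
            b = (sol[self.k - 1][1] + sol[self.k][0]) / 2
            self.borders.append(b)
            self.borders.sort()

    def solution(self):
        bs = [NEG] + self.borders + [POS]
        return [iv for lo, hi in zip(bs, bs[1:]) for iv in self._greedy(lo, hi)]
```

For instance, with $\varepsilon = \nicefrac12$, inserting the disjoint intervals $(2i, 2i+1)$ for $i = 0, \dots, 7$ produces three borders (placed in the gaps between intervals) and retains all eight intervals in the maintained solution.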
\section{LCA for Interval Scheduling on a Single Machine}\label{section:local}
In this section we design local algorithms for interval scheduling, using techniques developed in the previous section.
While our previous algorithm is desirable in that it gives an efficient and simple algorithm to efficiently partition the time dimension and maintain an approximate solution, it requires bookkeeping (our partitioning relies on the past history of requests made before the update). We design local algorithms for interval scheduling that do not require knowledge of such bookkeeping. We need some way to probe information about ``similar'' intervals: as such, we will assume probe-access to an oracle that gives information about other intervals.
In contrast to the dynamic setting, our oracle has no dependence on $\varepsilon$ and thus can be used for any $\varepsilon$.
An LCA in this setting will answer queries of the form $LCA(S,\varepsilon)$, where we are given a set of intervals $S$ and approximation parameter $\varepsilon$, and on {\em query interval $I \in S$, we must answer
whether $I$ is in our $(1+\varepsilon)$-approximation} in such a way that is consistent
with all other answers to queries to the $LCA$ (with the same $\varepsilon$).
More generally, we develop a partitioning method that requires little bookkeeping, so that our notion of leveraging locality extends beyond any particular computation model.
While achieving this, we assume our LCA is given probe-access to a \emph{successor oracle} that answers what we will call \emph{successor probes} or $successor(x)$: ``\emph{What is the interval with the earliest endpoint, of those that start after point $x$?}'' This is a natural question for obtaining information about local intervals in this setting.
In particular, given that an interval is in our solution and ends at some point $x$, $successor(x)$ would be the next interval chosen by the classic greedy algorithm (for the unit-weight setting).
Such an oracle could be implemented with $O(\log n)$ time updates and queries (in a manner similar to how $T_{\rm{all}}$ is used in \cref{theorem:unweighted-M=1}).
Since our LCA outputs different solutions for different choices of $\varepsilon$, there is a strong sense in which an oracle that is independent of $\varepsilon$ (such as the oracle we utilize) is unable to maintain nontrivial bookkeeping (meaning the oracle could not give the LCA nontrivial information about the solution).
We focus on the unit-reward interval \textsc{Maximum-IS}\xspace problem ($M=1$, $w=1$). Our emphasis in doing so is not the specific problem instance or probe-model (e.g., in \cref{section:scheduling} we modify our probe-model and show an algorithm that works for multiple machines), but instead emphasizing a method of partitioning over time that utilizes locality and limited bookkeeping. %
At a high level, our novel partitioning method can be viewed as a rule-based approach that uses few probes to identify whether any given interval is in our solution. This approach is oblivious to query order.
To illustrate how to employ successor probes, we will first design a probe-based algorithm, denoted by \textsc{Probe-based-Opt}\xspace. Then, we will describe an exact global algorithm.
We will modify this (exact) global algorithm to an approximate global one by partitioning time into independent regions that enable a sense of locality. Finally, we will introduce an LCA motivated by the approximate global algorithm.
As mentioned before, suppose we have access to the \emph{successor probe} or $successor(x)$: ``What is the interval with the earliest endpoint, of those that start after point $x$?''
Note that access to such a probe can be provided in $O(\log(n))$ update and probe time.
\begin{lemma}\label{lemma:query-based-opt}
There exists an algorithm (that we call \textsc{Probe-based-Opt}\xspace) that gives an optimal unweighted solution within some range $[L,R]$ with $|OPT|+1$ successor probes.
\end{lemma}
\begin{proof}
We now describe the \textsc{Probe-based-Opt}\xspace algorithm.
It is a classic result that an optimal solution for unweighted interval \textsc{Maximum-IS}\xspace is achieved by greedily choosing the interval with earliest ending point among those that start after the last chosen ending point.
We use such probes to easily simulate a greedy algorithm for the optimal solution within range $[L,R]$. We start by making a successor probe $successor(L)$. If this interval has an ending point at most $R$, we let that interval be the first one in our optimal solution. Otherwise, the optimal solution is of size zero. Now, we calculate the optimal solution within the range $[ending\_point(successor(L)), R]$ in the same way. Thus, we repeatedly make successor probes at the ending point of the last interval we have chosen.
\end{proof}
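A static Python sketch of the successor oracle and of \textsc{Probe-based-Opt}\xspace: sorted starting points plus suffix minima of endpoints answer each probe with one binary search. (We interpret ``start after $x$'' as start at least $x$, so that a range can begin with an interval starting exactly at $L$; a dynamic implementation would replace the arrays with a balanced search tree, as discussed above.)

```python
import bisect

class SuccessorOracle:
    """successor(x): among intervals starting at or after x, return the one
    with the earliest endpoint (None if there is no such interval)."""

    def __init__(self, intervals):
        self.ivs = sorted(intervals)            # sorted by starting point
        self.starts = [s for s, _ in self.ivs]
        # suffix_min[i] = interval with the earliest endpoint in ivs[i:]
        self.suffix_min = [None] * (len(self.ivs) + 1)
        for i in range(len(self.ivs) - 1, -1, -1):
            nxt = self.suffix_min[i + 1]
            self.suffix_min[i] = (self.ivs[i]
                                  if nxt is None or self.ivs[i][1] < nxt[1]
                                  else nxt)
        self.probes = 0                         # probe counter

    def successor(self, x):
        self.probes += 1
        return self.suffix_min[bisect.bisect_left(self.starts, x)]

def probe_based_opt(oracle, L, R):
    """Greedy via successor probes; uses exactly |solution| + 1 probes."""
    sol, t = [], L
    while True:
        iv = oracle.successor(t)
        if iv is None or iv[1] > R:
            return sol
        sol.append(iv)
        t = iv[1]
```

The final, failing probe accounts for the ``$+1$'' in the probe bound of \cref{lemma:query-based-opt}.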
Moving forward, we prove an LCA in the unit-reward \textsc{Maximum-IS}\xspace setting:
\theoremunweightedMlocal*
\begin{proof}
\textbf{Hierarchically Simulating Greedy.}
We aim to hierarchically simulate the greedy algorithm so that it will be easier to adapt towards an LCA. To do this, we utilize a binary tree over the integer points in $[0,N]$. For a node $Q$ in this binary tree, its left child is denoted by $Q_L$ and right child denoted by $Q_R$. We use $Q_{mid}$ to denote the midpoint of the interval corresponding to $Q$. The intervals corresponding to $Q_L$ and $Q_R$ are such that they divide $Q$ exactly in half at its midpoint $Q_{mid}$.
We say that an interval $J$ is assigned to the node $Q$ in the binary tree where $J$ starts in the range contained by $Q_L$ and ends in the range contained by $Q_R$. An equivalent characterization is that $J$ is assigned to the \emph{largest node} $Q$, i.e., $Q$ corresponding to the largest interval, where $J$ contains $Q_{mid}$. As all intervals assigned to a node $Q$ share a common point $Q_{mid}$, at most one of them can be in our solution. In our hierarchical simulation, we decide at the node $Q$ which (if any) of the intervals assigned to it will be in our solution. To accomplish this, we define $f(Q,earliest)$ as a function that computes the interval scheduling problem within the range covered by $Q$, assuming we cannot use any interval that starts before the time $earliest$. Our function $f$ will decide which intervals are in our solution, and it will return the end of the last interval chosen in the range covered by $Q$. As such, calling $f(Q_{\rm{root}},0)$ corresponds to calculating the global solution.
\paragraph{Description of \cref{alg:alg-global-exact}.}
We now provide an algorithm for globally computing $f(Q,earliest)$ as \cref{alg:alg-global-exact}. This algorithm simulates the classic greedy approach for calculating the exact unweighted interval \textsc{Maximum-IS}\xspace.
Intuitively, this approach proposes a new way of visualizing and computing this greedy process that will be helpful for obtaining a fast LCA.
We simulate the greedy on intervals in $Q_L$ to find the last ending time it will select before $Q_{mid}$, then we determine if the greedy chooses an interval $I_{mid}$ that contains $Q_{mid}$, and finally we simulate the greedy on intervals within $Q_R$.
\begin{algorithm}[h]
\SetKw{Print}{Print}
\Input{$Q$ : a tree node, corresponding to a time-range \\
$earliest$ : earliest starting time for future intervals}
\Output{Finds/prints a set of non-overlapping intervals such that (1) each interval is contained in $Q$, and (2) no interval starts before $earliest$\\
Returns ending time of last interval selected so far}
\BlankLine
\BlankLine
$after\_left\_earliest \gets f(Q_L, earliest)$ \\
$I_{mid} \gets$ interval after $after\_left\_earliest$ containing $Q_{mid}$ with earliest end time
\uIf{$I_{mid} \ne \emptyset$ \textbf{and} no interval is contained within $I_{mid}$}{
$after\_mid\_earliest \gets end(I_{mid})$ \\
\Print $I_{mid}$
}
\Else{
$after\_mid\_earliest \gets after\_left\_earliest$
}
$after\_right\_earliest \gets f(Q_R,after\_mid\_earliest)$ \\
\Return $after\_right\_earliest$
\caption{Global, exact algorithm for
$f(Q,earliest)$
\label{alg:alg-global-exact}}
\end{algorithm}
\begin{lemma}\label{lemma:global_exact}
\cref{alg:alg-global-exact} is a global algorithm for calculating unweighted interval \textsc{Maximum-IS}\xspace.
\end{lemma}
\begin{proof}
As this algorithm simulates the classic greedy approach, its correctness follows immediately.
\end{proof}
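The hierarchical simulation can be sketched in Python as follows (a simplified version with hypothetical names; it assumes half-open intervals with integer endpoints inside $[0, N)$ for $N$ a power of two, and is compared against the classic greedy):

```python
def classic_greedy(intervals, earliest=0):
    """Earliest-endpoint greedy for unweighted interval scheduling."""
    sol, t = [], earliest
    for s, e in sorted(intervals, key=lambda iv: iv[1]):
        if s >= t:
            sol.append((s, e))
            t = e
    return sol

def hierarchical_greedy(intervals, lo, hi, earliest, sol):
    """Sketch of f(Q, earliest) on the node Q = [lo, hi): process Q_L, then
    decide on an interval containing the midpoint, then process Q_R.
    Appends chosen intervals to `sol`; returns the last chosen endpoint."""
    if hi - lo == 1:                 # leaf: only the unit interval fits
        if (lo, hi) in intervals and lo >= earliest:
            sol.append((lo, hi))
            return hi
        return earliest
    mid = (lo + hi) // 2
    t = hierarchical_greedy(intervals, lo, mid, earliest, sol)
    # intervals assigned to this node: they cross mid and fit inside [lo, hi)
    cands = [(s, e) for s, e in intervals if lo <= s < mid < e <= hi and s >= t]
    if cands:
        i_mid = min(cands, key=lambda iv: iv[1])
        # skip i_mid if another interval is contained within it: the greedy
        # would pick that interval (it lies in Q_R) instead
        swallowed = any(iv != i_mid and i_mid[0] <= iv[0] and iv[1] <= i_mid[1]
                        for iv in intervals)
        if not swallowed:
            sol.append(i_mid)
            t = i_mid[1]
    return hierarchical_greedy(intervals, mid, hi, t, sol)
```

As \cref{lemma:global_exact} asserts, the recursion reproduces the classic greedy.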
\paragraph{An approximate global process that is easier to simulate locally.}
We now modify \cref{alg:alg-global-exact} to more easily lend itself to local computation, while weakening our claim from an exact solution to $(1+\varepsilon)$ approximation. This modified global process will serve as an approximate solution that is easier for an LCA to simulate. We first describe the main intuition behind our modification, and then provide more details on how to design the algorithm (see \cref{alg:alg-global-approx}).
Consider a node $Q$ (defined as in \cref{alg:alg-global-exact}) and its left and right children $Q_L$ and $Q_R$, respectively. If the optimal solutions within $Q_L$ and $Q_R$ are both large, i.e., have size at least $\nicefrac{1}{\varepsilon}$, we can afford to create a boundary at $Q_{mid}$ and not use any interval containing $Q_{mid}$ (in which case we reduce the size of an optimum solution by at most one), ``charge'' the potential interval intersecting this boundary point $Q_{mid}$ to the size of the solutions in $Q_L$ and $Q_R$, and handle $Q_L$ and $Q_R$ independently. \cref{lemma:invariant-value} implies that this approach leads to $(1+\varepsilon)$-approximate scheduling. Being able to handle $Q_L$ and $Q_R$ independently is crucial for designing our desired LCA -- it enables us to explore only one of the two nodes to answer whether a given interval $I$ belongs to an approximate solution or not. Notice that if we had not discarded intervals containing $Q_{mid}$ and if $I$ belonged to the range defined by $Q_R$, then we would need to learn an approximate solution of $Q_L$ before we could decide whether $I$ is in an approximate solution of $Q_R$.
Otherwise, at least one of the optimal solutions in $Q_L$ and $Q_R$ contains at most $\frac{1}{\varepsilon}$ intervals. For a cell whose optimal solution has at most $\frac{1}{\varepsilon}$ intervals, we use \textsc{Probe-based-Opt}\xspace to compute its optimum with $O(1/\varepsilon)$ successor probes. On the node (if any) whose solution is larger than $1/\varepsilon$, we simply recurse. As we show in \cref{lemma:alg-local-approx}, this recursion is efficient enough even in the context of an LCA. We now provide more details on the algorithm itself.
\paragraph{Description of \cref{alg:alg-global-approx}.}
We now define an algorithm (\cref{alg:alg-global-approx}) for globally computing an approximation of $f(Q,earliest)$. As the first step of the algorithm, we invoke \textsc{Probe-based-Opt}\xspace to identify whether or not simulating the greedy within $Q_L$ and $Q_R$ will both have large solutions with at least $\nicefrac{1}{\varepsilon}$ intervals. (Notice that to obtain this information we do not need to compute the entire solution in $Q_L$ or $Q_R$, but only up to $1/\varepsilon$ many intervals.)
If \emph{both} have solutions of size at least $1/\varepsilon$, the algorithm draws a border at $Q_{mid}$ (hence ignoring any interval that intersects $Q_{mid}$) and simulates the approximate greedy on $Q_L$ and $Q_R$ independently by invoking \cref{alg:alg-global-approx} on $Q_L$ and $Q_R$.
Otherwise, at least one of $Q_L$ and $Q_R$ has an optimal solution of size less than $1/\varepsilon$. \cref{alg:alg-global-approx} simulates the exact greedy on nodes that have an optimal solution of size at most $1/\varepsilon$, and invokes \cref{alg:alg-global-approx} recursively on the node (if any) that has a larger solution. In addition, \cref{alg:alg-global-approx} determines whether the greedy chooses an interval $I_{mid}$ that contains $Q_{mid}$, which is used to determine the parameter $earliest$ for the processing of $Q_R$.
\begin{algorithm}[h]
\SetKw{Print}{Print}
\Input{$Q$ : cell \\
$earliest$ : earliest valid starting time for future intervals \\
$\varepsilon$ : approximation parameter}
\Output{Returns ending time of last interval selected so far \\
Prints each interval in the solution exactly once}
\BlankLine
\BlankLine
\If{$OPT(Q_L) > \nicefrac{1}{\varepsilon}$ and $OPT(Q_R) > \nicefrac{1}{\varepsilon}$}{
%
%
Draw a border at $Q_{mid}$ \\
\tcc{In our LCA, we will need to invoke only one of these.}
Invoke $f(Q_L,earliest)$ and $f(Q_R,Q_{mid})$ \\
\Return $f(Q_R,Q_{mid})$
}
\uIf{$OPT(Q_L) \le \nicefrac{1}{\varepsilon}$}{
\tcc{See \cref{lemma:query-based-opt} to recall \textsc{Probe-based-Opt}\xspace.}
$after\_left\_earliest \gets \textsc{Probe-based-Opt}\xspace(Q_L,earliest)$ \\
\Print intervals in $\textsc{Probe-based-Opt}\xspace(Q_L,earliest)$
}
\Else{
$after\_left\_earliest \gets f(Q_L,earliest)$
}
$I_{mid} \gets$ interval after $after\_left\_earliest$ containing $Q_{mid}$ with earliest end time
\uIf{$I_{mid} \ne \emptyset$ \textbf{and} no interval is contained within $I_{mid}$}{
$after\_mid\_earliest \gets end(I_{mid})$ \\
\Print $I_{mid}$
}
\Else{
$after\_mid\_earliest \gets after\_left\_earliest$
}
\uIf{$OPT(Q_R) \le \nicefrac{1}{\varepsilon}$}{
$after\_right\_earliest \gets \textsc{Probe-based-Opt}\xspace(Q_R,after\_mid\_earliest)$ \\
\Print intervals in $\textsc{Probe-based-Opt}\xspace(Q_R,after\_mid\_earliest)$
}
\Else{
$after\_right\_earliest \gets f(Q_R,after\_mid\_earliest)$
}
\Return $after\_right\_earliest$
\caption{Global, approximate algorithm for
$f(Q,earliest)$
\label{alg:alg-global-approx}}
\end{algorithm}
\begin{lemma}\label{lemma:global_approx}
\cref{alg:alg-global-approx} is a global algorithm for calculating a $(1+\varepsilon)$-approximation of unweighted interval \textsc{Maximum-IS}\xspace.
\end{lemma}
\begin{proof}
Note that \cref{alg:alg-global-approx} computes $f(Q,earliest)$ exactly (by simulating the classic greedy), except when it draws borders so that it can compute the answers for $Q_L$ and $Q_R$ independently. However, we only draw a border when the regions to the left and to the right of the border each have a solution with at least $\nicefrac{1}{\varepsilon}$ intervals. As such, we maintain the requirements for \cref{lemma:invariant-value} to hold and can simulate the greedy exactly within borders, which immediately yields correctness of the $(1+\varepsilon)$-approximation.
\end{proof}
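To make the recursion concrete, the following is a minimal Python sketch of the border-drawing idea. It is simplified relative to \cref{alg:alg-global-approx}: it recomputes greedy solutions directly instead of using \textsc{Probe-based-Opt}\xspace, and it omits the $I_{mid}$ handling; all names are illustrative rather than taken from our pseudocode.

```python
def greedy(intervals, lo, hi, earliest):
    """Classic exact greedy: repeatedly pick the interval fully inside
    [lo, hi] that starts at or after the current time and ends first."""
    chosen = []
    t = max(earliest, lo)
    while True:
        cands = [iv for iv in intervals if iv[0] >= t and iv[1] <= hi]
        if not cands:
            return chosen
        best = min(cands, key=lambda iv: iv[1])  # earliest end time
        chosen.append(best)
        t = best[1]

def approx_schedule(intervals, lo, hi, earliest, eps):
    """If both halves admit more than 1/eps disjoint intervals, draw a
    border at the midpoint (dropping intervals that cross it) and solve
    the halves independently; otherwise run the exact greedy."""
    mid = (lo + hi) / 2
    if (len(greedy(intervals, lo, mid, lo)) > 1 / eps
            and len(greedy(intervals, mid, hi, mid)) > 1 / eps):
        return (approx_schedule(intervals, lo, mid, earliest, eps)
                + approx_schedule(intervals, mid, hi, mid, eps))
    return greedy(intervals, lo, hi, earliest)
```

An interval crossing a border contributes at most $1$ while each side of the border retains more than $\nicefrac{1}{\varepsilon}$ intervals, which is the charging behind the $(1+\varepsilon)$ guarantee.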
\paragraph{Designing an LCA}
We design an LCA that simulates the approximate, global process of \cref{alg:alg-global-approx}. Note that \cref{alg:alg-global-approx} never recurses on both $Q_L$ and $Q_R$ unless we drew a border between them, in which case the recursive calls are independent. Since we are now designing an LCA that only determines whether a particular interval is in a solution, we can ignore one of the two independent subproblems. So, we design an LCA that recurses down only one child at each level, which yields the desired runtime. We design an algorithm for a slightly modified function $f(Q,earliest,I)$, which computes whether $I$ is in our solution.
\paragraph{Description of LCA \cref{alg:alg-local-approx}. }
We now define an algorithm for locally computing an approximation of $f(Q,earliest,I)$ in \cref{alg:alg-local-approx}. This algorithm directly builds on \cref{alg:alg-global-approx}, whose description is provided above. The key difference between \cref{alg:alg-local-approx} and \cref{alg:alg-global-approx} is that when \cref{alg:alg-local-approx} draws a border, the algorithm does not calculate both $f(Q_L, earliest, I)$ and $f(Q_R, Q_{mid}, I)$; as they are independent, it suffices to compute the output for only one of $Q_L$ and $Q_R$. If $I \in Q_L$, then \cref{alg:alg-local-approx} invokes $f(Q_L, earliest, I)$, as the output is independent of $f(Q_R, Q_{mid}, I)$. Otherwise, if $I \in Q_R$ or $I \notin (Q_L \cup Q_R)$, the algorithm invokes $f(Q_R, Q_{mid}, I)$: either it has already been decided whether $I$ is in the output, or the algorithm only needs the result of $f(Q_R, Q_{mid}, I)$. As we show in the next claim, this suffices to guarantee an LCA complexity of $O(\log{n} / \varepsilon)$. The rest of \cref{alg:alg-local-approx} is the same as \cref{alg:alg-global-approx}.
\begin{algorithm}[h]
\SetKw{Print}{Print}
\Input{$Q$ : cell \\
$earliest$ : earliest valid starting time for future intervals \\
$\varepsilon$ : approximation parameter\\
$I$ : interval}
\Output{Returns ending time of last interval selected so far \\
Prints ``Yes'' once if $I$ is in the desired solution within $Q$, else prints nothing}
\BlankLine
\BlankLine
\If{$OPT(Q_L) > \nicefrac{1}{\varepsilon}$ and $OPT(Q_R) > \nicefrac{1}{\varepsilon}$}{
Draw a border at $Q_{mid}$ \\
\lIf{$I \in Q_L$} {
\Return $f(Q_L, earliest, I)$
}
\lElse {
\Return $f(Q_R, Q_{mid}, I)$
}
}
\uIf{$OPT(Q_L) \le \nicefrac{1}{\varepsilon}$}{
\tcc{See \cref{lemma:query-based-opt} to recall \textsc{Probe-based-Opt}\xspace.}
$after\_left\_earliest \gets \textsc{Probe-based-Opt}\xspace(Q_L,earliest)$
\lIf{$I \in \textsc{Probe-based-Opt}\xspace(Q_L, earliest) $ solution} {
\Print ``Yes''
}
}
\Else{
$after\_left\_earliest \gets f(Q_L,earliest)$
}
$I_{mid} \gets$ interval after $after\_left\_earliest$ containing $Q_{mid}$ with earliest end time
\uIf{$I_{mid} \ne \emptyset$ \textbf{and} no interval is contained within $I_{mid}$}{
$after\_mid\_earliest \gets end(I_{mid})$
\lIf{$I_{mid} = I$} {
\Print ``Yes''
}
}
\Else{
$after\_mid\_earliest \gets after\_left\_earliest$
}
\uIf{$OPT(Q_R) \le \nicefrac{1}{\varepsilon}$}{
$after\_right\_earliest \gets \textsc{Probe-based-Opt}\xspace(Q_R,after\_mid\_earliest)$ \\
\lIf{$I \in \textsc{Probe-based-Opt}\xspace(Q_R, after\_mid\_earliest)$ solution} {
\Print ``Yes''
}
}
\Else{
$after\_right\_earliest \gets f(Q_R,after\_mid\_earliest)$
}
\Return $after\_right\_earliest$
\caption{Local, approximate algorithm for
$f(Q,earliest,I)$
\label{alg:alg-local-approx}}
\end{algorithm}
\begin{lemma}\label{lemma:alg-local-approx}
\cref{alg:alg-local-approx} is a $(1+\varepsilon)$-approximation LCA for unweighted interval \textsc{Maximum-IS}\xspace using $O(\frac{\log{N}}{\varepsilon})$ successor probes.
\end{lemma}
\begin{proof}
Correctness follows from the fact that our algorithm simulates \cref{alg:alg-global-approx}, which is a $(1+\varepsilon)$-approximation by \cref{lemma:global_approx}. To show that our LCA is efficient, note that at each of the $\log(N)$ levels we invoke only one instance of $f$ for a child. Additionally, we use only $O(\frac{1}{\varepsilon})$ successor probes at each of these levels: we identify whether $OPT(Q_L)$ and $OPT(Q_R)$ are greater than $\nicefrac{1}{\varepsilon}$ by running $\nicefrac{1}{\varepsilon}+1$ steps of \textsc{Probe-based-Opt}\xspace. So in total, our LCA uses only $O(\frac{\log(N)}{\varepsilon})$ successor probes.
\end{proof}
Thus, we have our desired LCA.
\end{proof}
Such an approach can use other probe models that enable us to efficiently simulate successor probes. For example, we could consider a probe model in which a probe returns all intervals intersecting a given point. Regardless, our goal is to emphasize this partitioning method, which enables more local algorithms due to its lack of bookkeeping.
\section{Dynamic Weighted Interval Scheduling on a Single Machine}\label{section:dynamic-weighted}
This section focuses on a more challenging setting in which jobs have non-uniform weights. Non-uniform weights introduce difficulties for the approach mentioned in \cref{section:dynamic-unit}, as adding a border (which entails ignoring all the jobs that cross that border) may now force us to ignore a very valuable job. Straightforward extensions of this border-based approach require at least a linear dependence on $w$. This is because an ignored job containing a border can have a reward of $w$ (as opposed to just $1$), requiring $\nicefrac{w}{\varepsilon}$ reward inside the region to charge it against (as opposed to just $\nicefrac{1}{\varepsilon}$). In this work we show how to perform this task while having only logarithmic dependence on $w$.
The starting point of our approach is the decomposition scheme designed in the work of Henzinger et al.~\cite{henzinger2020dynamic}, which we overview in \cref{sec:decomposition-henzinger}.
Both our algorithm and our analysis introduce new ideas that enable us to design a dynamic algorithm with running time having only polynomial dependence on $\nicefrac{1}{\varepsilon}$, yielding an exponential improvement in terms of $\nicefrac{1}{\varepsilon}$ over \cite{henzinger2020dynamic}.
As the first step, we show that there exists a solution $OPT'$, a $(1+\varepsilon)$-approximate optimal solution, that has \emph{nearly-optimal sparse structure}, similar to a structure used in \cite{henzinger2020dynamic}. We define the properties of this structure in \cref{sec:convenient-structure}, although it is instructive to think of it as a set of non-overlapping time ranges in $[0, N]$ such that:
\begin{enumerate}[(1)]
\item Within each time range there is an approximately optimal solution which contains a small number of jobs (called \emph{sparse});
\item The union of solutions across all the time ranges is $(1+\varepsilon)$-approximate; and
\item There is an efficient algorithm to obtain these time ranges.
\end{enumerate}
Effectively, this structure partitions time such that we get an approximately optimal solution by computing sparse solutions within partitioned time ranges and ignoring jobs that are not fully contained within one partitioned time range. This result is described in detail in \cref{sec:convenient-structure}.
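Schematically, once the time ranges and a per-range solver are available, the combination step is just a union over ranges. The sketch below uses illustrative names; `solve_range` stands in for the sparse-solution algorithm of \cref{lemma:sparse-sol-approx}, and jobs are (start, end) pairs.

```python
def schedule_by_ranges(ranges, jobs, solve_range):
    """Union of per-range solutions; jobs not fully contained within a
    single partitioned time range are ignored, as described above."""
    solution = []
    for lo, hi in ranges:
        inside = [j for j in jobs if lo <= j[0] and j[1] <= hi]
        solution.extend(solve_range(inside))
    return solution
```

Since the ranges are non-overlapping, the per-range solutions never conflict, so the union is always a feasible schedule.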
Once equipped with this structural result, we first design a dynamic programming approach to compute an approximately optimal solution within one time range. To obtain an algorithm whose running time is proportional to the number of jobs in the solution for a time range,
as opposed to the length of that range, we ``approximate'' states that our dynamic programming approach maintains, and ultimately obtain the following claim whose proof is deferred to \cref{sec:dp-sparse}.
\begin{restatable}{lemma}{lemmadpsparse}
\label{lemma:sparse-sol-approx}
Given any time range $\mathcal{R} \subseteq [0, N]$ and an integer $K$, consider an optimal solution $OPT(\mathcal{R}, K)$ in $\mathcal{R}$ containing at most $K$ jobs. Then, there is an algorithm that in $\mathcal{R}$ finds a $(1+\varepsilon)$-approximate solution to $OPT(\mathcal{R}, K)$ in $O\rb{\frac{K \log(n) \log^2(K/\varepsilon)}{\varepsilon^2} + \log(\log(w)) \log(n)}$ time and with at most $O\rb{\frac{K \log(Kw)}{\varepsilon}}$ jobs.
\end{restatable}
Observe that the running time of the algorithm given by \cref{lemma:sparse-sol-approx} has no dependence on the length of $\mathcal{R}$. Also observe that the algorithm possibly selects slightly more than $K$ jobs to obtain a $(1+\varepsilon)$-approximation of the best possible reward one could obtain by using at most $K$ jobs in $\mathcal{R}$ (i.e., $OPT(\mathcal{R}, K)$).
Finally, in \cref{sec:combining-ingredients} we combine all these ingredients and prove the main theorem of this section.
\theoremweighteddynamic*
\subsection{Decomposition of Henzinger, Neumann, and Wiese}
\label{sec:decomposition-henzinger}
Throughout our discussion, we extensively use the notation and techniques described in this section. As discussed earlier, our approach to this problem focuses on finding $OPT'$, a $(1+\varepsilon)$-approximation of an optimal solution that has nearly-optimal sparse structure. We now examine the decomposition of time by Henzinger et al.~\cite{henzinger2020dynamic} to provide background for defining the structure that we use. They also consider settings with multi-dimensional jobs, but we recall their work in the one-dimensional setting.
The algorithms of Henzinger et al.\ maintain a constant-factor approximation to the weighted interval \textsc{Maximum-IS}\xspace problem by using a hierarchical decomposition of the time range $[0,N]$ into a binary tree; for the sake of simplicity and without loss of generality, assume that $N$ is a power of $2$. A node in the binary tree, also called a \emph{cell}, at depth $j$ covers a time range of length $N / 2^j$. Each cell $Q$ has two children, $Q_L$ and $Q_R$, that contain exactly the left and right halves of $Q$'s time range, respectively.
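As a quick illustration of this decomposition (the function names are ours, not from \cite{henzinger2020dynamic}), a cell can be identified by its depth and its index within that depth:

```python
def cell_range(N, depth, index):
    """Half-open time range covered by the index-th cell at this depth;
    the cell has length N / 2**depth."""
    length = N / 2 ** depth
    return index * length, (index + 1) * length

def children(depth, index):
    """Q_L and Q_R cover exactly the left and right halves of Q."""
    return (depth + 1, 2 * index), (depth + 1, 2 * index + 1)
```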
Then, each job is assigned to a cell $Q$ completely containing it and at a depth such that the job makes up approximately an $\varepsilon$ fraction of $Q$'s length. If no such cell exists, then the job is ignored. The set of jobs assigned to a cell $Q$ is denoted by $C'(Q)$ (and thus each job in $C'(Q)$ spans at least an $\varepsilon$ fraction of the length of $Q$'s time range), while the set of all jobs assigned to $Q$ or one of its descendants is denoted $C(Q)$. By using a random offset\footnote{We add a random offset drawn uniformly from $[0,N]$ to every job. With this shift, the probability that some job is not assigned to a cell (one in which it would span approximately an $\varepsilon$ fraction of the length) is the probability that it is not fully contained within a cell of the desired length, which is $O(\varepsilon)$.}, we can afford to ignore all intervals that are not assigned to any $Q$ and still have an optimal solution of total weight at least $(1-O(\varepsilon)) \cdot w(OPT)$ in expectation. Ideas similar to this random offset usage have been presented in works such as \cite{arora1998polynomial,indyk2004algorithms,andoni2014parallel}. Moreover, this property holds with constant probability, which can further be extended to hold with high probability by running multiple data structures with different random offsets simultaneously and using the data structure with the best solution.
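The assignment rule can be sketched as follows. This is a simplification under our own naming (the actual data structure of \cite{henzinger2020dynamic} differs in details); the random offset is exposed as a parameter to show how it rescues jobs that would otherwise cross a cell boundary.

```python
def assign_cell(N, eps, start, end, offset=0.0):
    """Assign a job to the deepest cell of length >= (job length)/eps
    that fully contains it, so the job spans roughly an eps fraction
    of the cell. Returns (depth, index), or None if the job crosses a
    cell boundary at that depth and is ignored."""
    start, end = start + offset, end + offset
    length = end - start
    target = length / eps            # cell length making the job an eps fraction
    depth = 0
    while N / 2 ** (depth + 1) >= target:
        depth += 1                   # descend while cells stay long enough
    cell_len = N / 2 ** depth
    i = int(start // cell_len)
    if end <= (i + 1) * cell_len:    # fully contained in one cell
        return depth, i
    return None                      # crosses a cell boundary: ignored
```

For example, with $N = 16$ and $\varepsilon = \nicefrac{1}{2}$, a unit-length job is assigned at depth $3$ (cells of length $2$), where it spans exactly half the cell.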
For readers who would like more familiarity with the approach designed in Henzinger et al.~\cite{henzinger2020dynamic}, we recommend reading \cref{section:pq} before the rest of this section. \cref{section:pq} contains a detailed explanation of the properties and design of $P(Q)$, a set of points frequently used in our analysis and discussion. The points in $P(Q)$ correspond to a particular subset of the endpoints of jobs in $C(Q)$, where the weight of a point in $P(Q)$ equals the weight of its corresponding job. Here, we only outline the guarantees of $P(Q)$ relevant to our approach. There is a data structure $P(Q)$ for each cell $Q$, which can be used to output the weight of a constant-factor (not necessarily $(1+\varepsilon)$) approximation of an optimal solution using only intervals \emph{completely contained} within some time range of the cell $Q$.
The set of points in $P(Q)$ is chosen such that the total weight of the points of $P(Q)$ in any time range $[L,R]$ inside $Q$, denoted by $w(P(Q)[L,R])$, is guaranteed to be at least a constant fraction of the optimal solution within $Q$ for that range, $w(OPT(Q,[L,R]))$.
Further, it is also guaranteed that for the time range corresponding to the entirety of any cell $Q$, $w(P(Q))$ is at most a constant factor larger than the optimal solution for $Q$, $w(OPT(Q))$. While the constant-factor guarantees of $P(Q)$ are not arbitrarily close to $1$, \cite{henzinger2020dynamic} can, perhaps surprisingly, still utilize $P(Q)$ to partition time in a way that later enables an efficiently modifiable $(1+\varepsilon)$-approximation.
In \cref{section:dynamic-unit}, we introduced the idea of maintaining borders such that the optimal solution within consecutive borders is small.
In our goal to efficiently maximize the total reward of chosen jobs (both in the local and dynamic setting), it was helpful to partition time into ranges where the optimal solutions within ranges had a small fraction of total reward.
Analogously, $P(Q)$ is used to create a set of grid endpoints $Z(Q)$ such that optimal solution within the grid slice between each consecutive pair of grid endpoints (not including the endpoints) has small reward.
Note that, in our one-dimensional work, ``grid slices'' and ``grid endpoints'' correspond to ranges of time and points of time, respectively. We continue with this inherited terminology in our work because it is helpful for distinguishing between various classes of ranges and points of time that we will later discuss.
Loosely speaking, $P(Q)$ is a weighted set of interval endpoints of a $\Theta(1)$-approximate interval scheduling solution, and $Z(Q)$ consists of weighted quantiles of $P(Q)$.
We emphasize that $P(Q)$ and $Z(Q)$ do not represent a $(1+\varepsilon)$-approximation (though they do represent a $\Theta(1)$-approximation), but we will use them to obtain such an approximation. The constant-factor approximation of $P(Q)$ serves as a scaffold that helps us partition time with $Z(Q)$, which in turn enables a $(1+\varepsilon)$-approximation.
Formally, for a set of grid endpoints $Z(Q)$, we define a grid slice as follows.
\begin{definition}[Grid slice]\label{definition:grid-slice}
Given a set of grid endpoints $Z(Q) = \{r_1, r_2, \ldots, r_k\}$ with $r_i < r_{i + 1}$, we use \emph{grid slice} to refer to an interval $(r_i, r_{i + 1})$, for any $1 \le i < k$. Note that a grid slice between $r_i$ and $r_{i + 1}$ does not contain $r_i$ nor $r_{i + 1}$.
\end{definition}
To partition a cell $Q$ into $X$ grid slices, \cite{henzinger2020dynamic} add grid endpoints at $X-1$ weighted quantiles of $P(Q)$ (i.e., at the end of the prefix of points in $P(Q)$ that contains weight $\frac{1}{X}w(P(Q)),\frac{2}{X}w(P(Q)),\dots,$ $\frac{X-1}{X}w(P(Q))$) to guarantee that the optimal weight of jobs fully within each grid slice is $O(\nicefrac{w(P(Q))}{X})$. Recall that $w(OPT(Q,[L,R])) = O(w(P(Q)[L,R]))$. By selecting grid endpoints at $X-1$ weighted quantiles of $P(Q)$, we guarantee the weight of $P(Q)$ within any grid slice is $\le \nicefrac{w(P(Q))}{X}$ and thus the optimal solution within the grid slice is $O(\nicefrac{w(P(Q))}{X})$.
Note that the start and end of $Q$ are also included in $Z(Q)$, so $|Z(Q)|=X+1$. These grid endpoints $Z(Q)$ are used to divide the problem into easier subproblems, which can be seen as an analog to the usage of borders we introduce in \cref{section:dynamic-unit}.
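A sketch of this quantile construction, with $P(Q)$ modeled as a list of (position, weight) pairs (illustrative only; the actual data structure supports these queries dynamically):

```python
def grid_endpoints(points, q_start, q_end, X):
    """Place X-1 grid endpoints at the weighted quantiles of the point
    set, plus the start and end of Q's range, so that every grid slice
    holds at most a 1/X fraction of the total weight."""
    total = sum(w for _, w in points)
    endpoints = [q_start]
    prefix, next_quantile = 0.0, 1
    for pos, w in sorted(points):          # sweep left to right
        prefix += w
        # cross every quantile threshold this point's weight covers
        while next_quantile < X and prefix >= next_quantile * total / X:
            endpoints.append(pos)
            next_quantile += 1
    endpoints.append(q_end)
    return endpoints                       # X + 1 endpoints in total
```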
At a high level, $Z(Q)$ is used to define a set of segments that motivate dynamic programming states of the form $DP(Q,S)$, where each $S$ corresponds to a segment between two grid endpoints of $Z(Q)$, and $DP(Q,S)$ computes an approximately optimal solution among schedules that can only use jobs which are both in $C(Q)$ and contained within the segment of time $S$. The key idea is that this dynamic programming enables them to partition time into \emph{dense} and \emph{sparse} ranges. Solutions for sparse ranges are computed by $Q$, while dense ranges are solved by the children via dynamic programming (by further dividing each dense range into more sparse and dense ranges). Henzinger et al.\ have an exponential dependence on $\nicefrac{1}{\varepsilon}$ both in their running time for approximating solutions in sparse ranges (which are defined as having $\le \nicefrac{1}{\varepsilon}$ jobs in their solution) and in the number of required grid endpoints $|Z(Q)|$. As such, their update time also has exponential dependence on $\nicefrac{1}{\varepsilon}$.
\subsection{Solution of Nearly-Optimal Sparse Structure}
\label{sec:convenient-structure}
To remove all exponential dependence on $\nicefrac{1}{\varepsilon}$, we introduce a new algorithm for approximating sparse solutions, and a modified charging argument to reduce the number of grid endpoints $|Z(Q)|$ required in each cell. With this, we will compute an approximately optimal solution of the following very specific structure.
\begin{definition}[Nearly-optimal sparse structure]
To have nearly-optimal sparse structure, a solution must be producible by the following specific procedure:
\begin{itemize}
\item Each cell $Q$ receives a set of time ranges, denoted $RANGES(Q)$, with endpoints in $Z(Q)$. To start, $Q_{root}$ receives one time range containing all of time (i.e., $RANGES(Q_{root}) = \{[0,N]\}$).
\item $RANGES(Q)$ is split into three sets of disjoint time ranges: $SPARSE(Q)$, $RANGES(Q_L)$, and $RANGES(Q_R)$.
\item $SPARSE(Q)$, a set of time ranges, must have endpoints in $Z(Q) \cup Z(Q_L) \cup Z(Q_R)$.
\item For each child $Q_{child}$ (where $child \in \{L,R\}$) of $Q$, $RANGES(Q_{child})$ must have all endpoints in $Z(Q_{child})$.
\item The total weight of sparse solutions (solutions with at most $\nicefrac{1}{\varepsilon}$ jobs) within the sparse time ranges must be large, where $SPARSE\_OPT(\mathcal{R})$ denotes an optimal solution having at most $\nicefrac{1}{\varepsilon}$ jobs within range $\mathcal{R}$: \[ \sum_Q \sum_{\mathcal{R} \in SPARSE(Q)} w(SPARSE\_OPT(\mathcal{R})) \ge (1-O(\varepsilon))\,w(OPT).
\]
\end{itemize}
\end{definition}
Now, we prove our result: a $(1+\varepsilon)$-approximation algorithm for dynamic, weighted interval \textsc{Maximum-IS}\xspace whose running time depends only polynomially on $\nicefrac{1}{\varepsilon}$.
We build on the decomposition of Henzinger et al., but take a new approach for solving the small sparse subproblems, where we use an approximate dynamic programming idea to remove exponential dependence on $\nicefrac{1}{\varepsilon}$ in the best known running time for these subproblems.
We also develop novel charging arguments, with a particular focus on changing where deleted intervals' weights are charged against and introducing a \emph{snapping budget}, which we use to relax the required number of grid endpoints $|Z(Q)|$ to depend only polynomially on $\nicefrac{1}{\varepsilon}$. As a reminder, $Z(Q)$ is a set of grid points within $Q$ such that between any two consecutive points we are guaranteed that the optimal solution has small weight. Like Henzinger et al., our final algorithm will consider a number of subproblems for each cell proportional to $|Z(Q)|^2$, so improvements in $|Z(Q)|$ directly lead to improvements in the best-known running time.
Effectively, we make each of our smaller subproblems easier to solve while also reducing the number of subproblems we need to solve. Both improvements are exponential in terms of $\nicefrac{1}{\varepsilon}$.
Specifically, we set $Z(Q)$ to consist of $|Z(Q)| = O(\frac{\log^2(N)}{\varepsilon^4})$ weighted quantiles of $P(Q)$, so the optimal solution within any grid slice has total weight at most $\frac{\varepsilon^4}{\log^2(N)} w(P(Q))$. We also add $\nicefrac{1}{\varepsilon}+1$ grid endpoints to $Z(Q)$, spaced evenly over the time range of $Q$, so that any job in $C'(Q)$ must contain a grid endpoint of $Z(Q)$. This follows because jobs in $C'(Q)$ must span at least an $\varepsilon$ fraction of $Q$'s time range, while the distance between consecutive evenly spaced endpoints is at most an $\varepsilon$ fraction of $Q$'s time range.
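The evenly spaced endpoints can be sketched as follows (illustrative names; the point is that any job spanning at least an $\varepsilon$ fraction of $[lo, hi]$ must contain one of them):

```python
def even_endpoints(lo, hi, eps):
    """1/eps + 1 evenly spaced endpoints over [lo, hi]; consecutive
    endpoints are eps * (hi - lo) apart."""
    k = round(1 / eps)
    step = (hi - lo) / k
    return [lo + i * step for i in range(k + 1)]

def contains_endpoint(start, end, endpoints):
    """True iff the closed job [start, end] contains some endpoint;
    guaranteed whenever end - start >= the endpoint spacing."""
    return any(start <= z <= end for z in endpoints)
```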
Moving forward, our goal is to show that we can use this decomposition to easily discover a partition into sparse time ranges (as sparse solutions are easier to compute approximately optimally) where we still have a near optimal solution by constraining ourselves to a solution of nearly-optimal sparse structure. Later, in \cref{lemma:sparse-sol-approx}, we show how to exploit this structure and design an efficient algorithm that provides a $(1+O(\varepsilon))$-approximation of $OPT$.
\begin{lemma} \label{lemma:sol-structure}
There exists a solution $OPT'$ that has nearly-optimal sparse structure and such that $w(OPT') \ge (1 - O(\varepsilon)) w(OPT)$. Thus, $OPT'$ is a $(1 + O(\varepsilon))$-approximation of $OPT$.
\end{lemma}
\begin{proof}
We emphasize that the goal of this lemma is not to show how to construct a solution algorithmically, but rather to show that there exists one, that we refer to by $OPT'$, that has a specific structure and whose weight is close to $OPT$.
In this paragraph, we provide a proof overview. At a high level, we prove this claim by starting with $OPT$ and maintaining a solution $OPT'$ that has our desired structure, deleting only jobs of total weight $O(\varepsilon \cdot w(OPT))$. Our process of converting $OPT$ to $OPT'$ is recursive: we start at the root and work down. At a cell, we begin by deleting jobs assigned to it that have small weight (and hence can be ignored). Then, we use the remaining jobs assigned to the cell to define sparse time ranges, using a process detailed in the following proof.
This gives us a set $SPARSE(Q)$ of sparse time ranges. For time ranges of $RANGES(Q)$ that are not designated as sparse time ranges, we will essentially consider them dense time ranges, that will be delegated to children cells of $Q$. In order to delegate a time range to a child $Q_{child}$, we require that the delegated time range must have endpoints that align with $Z(Q_{child})$. Accordingly, we perform modifications to ``snap'' the time ranges' endpoints to $Z(Q_{child})$ for the corresponding child $Q_{child}$ of $Q$ and include the ``snapped'' time ranges in $RANGES(Q_{child})$. We show that throughout this process, we do not delete much weight from $OPT$ and obtain an $OPT'$ that has our desired structure. Now, we present the proof in detail:
\paragraph{Deleting light jobs.}
We now describe how to modify $OPT$, obtaining $OPT'$, such that $OPT'$ does not contain any jobs in $C'(Q)$ with small weight (later referred to as \emph{light} jobs) and $OPT'$ is a $(1+\varepsilon)$-approximation of $OPT$. Note that we will never actually compute $OPT'$. It is only a hypothetical solution that has nice structural properties and that we use to compare our output to.
For a cell $Q$, consider a time range it receives in $RANGES(Q)$. We shall split this time range into sparse time ranges (to be added to $SPARSE(Q)$) and dense time ranges (to be added to $RANGES(Q_L)$ or $RANGES(Q_R)$). We define all jobs in $C'(Q) \cap OPT$ with weight at most $\frac{\varepsilon^2}{\log(N)} w(P(Q))$ as \emph{light} jobs, and immediately delete all of them. Note that light jobs are only the low-weight jobs in $C'(Q)$, not all of $C(Q)$ (i.e., they do not include jobs assigned to descendants of $Q$). Since $w(P(Q))$ is a constant-factor approximation of the optimal solution within $Q$, and there are at most $\frac{1}{\varepsilon}$ jobs in $C'(Q) \cap OPT$ (as each job in $C'(Q)$ has length at least an $\varepsilon$ fraction of $Q$'s time range by the definition of the assignment), we delete at most $O(\frac{\varepsilon}{\log(N)} w(P(Q)))$ weight at every cell, $O(\frac{\varepsilon}{\log(N)} w(OPT))$ weight at every level, and $O(\varepsilon\, w(OPT))$ weight in total. We delete these light jobs so that all remaining jobs in $C'(Q) \cap OPT'$ have weight lower-bounded by a threshold, against which some amount of suboptimality can later be charged.
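In summary, the deletion budget of this step can be tallied as (restating the bounds from the paragraph above):
\[
\underbrace{\frac{1}{\varepsilon}}_{\#\text{ jobs in } C'(Q)\cap OPT} \cdot \underbrace{\frac{\varepsilon^{2}}{\log(N)}\, w(P(Q))}_{\text{light-job threshold}} \;=\; O\!\left(\frac{\varepsilon}{\log(N)}\, w(P(Q))\right) \text{ per cell,}
\]
and since the values $w(P(Q))$ over all cells of a single level sum to $O(w(OPT))$, this amounts to $O(\frac{\varepsilon}{\log(N)}\, w(OPT))$ weight per level and $O(\varepsilon\, w(OPT))$ over all $\log(N)$ levels.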
\paragraph{Utilizing heavy jobs.}
We now focus on showing how to construct our solution from \emph{heavy} jobs.
We define heavy jobs as all jobs in $C'(Q) \cap OPT$ that are not light. At a high-level, and as the first step, we select time ranges which contain all the heavy jobs; each time range will span the grid between two (not necessarily consecutive) endpoints in $Z(Q)$.
Some of these time ranges may contain many jobs in $OPT$, so we perform an additional refinement to divide them up into sparse time ranges. In this refinement, we will split up those time ranges such that we do not delete too much weight and, moreover, all of the resulting time ranges have at most $\nicefrac{1}{\varepsilon}$ jobs. These time ranges now constitute $SPARSE(Q)$. A detailed description of this process of determining $SPARSE(Q)$ is given in stages from ``utilizing heavy jobs'' to ``sparsifying regions.'' For an example of this process that uses the terminology later described in these stages, see \cref{fig:sparse}.
Any remaining time ranges not selected at this stage will effectively be dense time ranges, and are delegated to $RANGES(Q_L)$ and $RANGES(Q_R)$. This process of designating time ranges to delegate is detailed in the stages from ``creating dense ranges'' to ``resolving leafs.''
As a reminder, we have chosen $Z(Q)$ such that the total weight inside any grid slice (a time range between two consecutive endpoints of $Z(Q)$) of $Q$ is at most $\frac{\varepsilon^4}{\log^2(N)} w(P(Q))$.
Now, consider the set of jobs in $C'(Q) \cap OPT'$ (i.e., the set of jobs in $OPT$ assigned to $Q$ that haven't been deleted yet) within $RANGES(Q)$. We call this set our heavy jobs, $H$.
Recall that $Z(Q)$ contains grid endpoints. For each heavy job (in arbitrary order), consider the grid endpoint immediately to its left and to its right. Without loss of generality, consider the right one and call it $r$. How we proceed can be split into two cases:
\begin{enumerate}[(1)]
\item In the first case, $r$ overlaps a job $J$ in $OPT'$ with weight at most $ \frac{\varepsilon^3}{\log(N)} w(P(Q))$. We delete $J$ and draw a boundary at $r$. Such a $J$ may exist if it belongs to a descendant of $Q$ (if it belonged to $Q$, we would have already deleted it as a light job).
After repeating this process for each heavy job, these boundaries will define the starting and ending times of the ranges containing \emph{all} the heavy jobs. %
In doing this, we charge the weight of $J$ against the original heavy job, whose weight is at least a factor of $\nicefrac{1}{\varepsilon}$ larger. There are at most two jobs charged in this manner for each original heavy job: one for the grid endpoint to its right and one for the grid endpoint to its left. Each job in $OPT$ will have at most an $O(\varepsilon)$ fraction of its weight charged against it, so this step deletes at most $O(\varepsilon\, w(OPT))$ weight in total.
\item In the other case, $r$ overlaps a job $J$ that has weight greater than $\frac{\varepsilon^3}{\log(N)} w(P(Q))$. We call $J$ a \emph{highlighted} job. Our algorithm proceeds by considering the grid endpoint immediately to the right of $J$. We determine what to do with this grid endpoint in a recursive manner. Meaning, we proceed in the same two cases that we did when considering what to do with $r$, and continue this recursive process until we finally draw a boundary.
\end{enumerate}
After this process, we will have drawn \emph{regions} (time ranges for which we drew a left and a right boundary) in which $OPT'$ has at least one and possibly multiple heavy jobs, a number of highlighted jobs (possibly zero), and potentially some remaining jobs that are neither heavy nor highlighted (we call these \emph{useless}). It is our goal to convert these regions into time ranges that we can use as sparse time ranges.
Our process also guarantees these regions have borders with endpoints in $Z(Q)$.
Note that we have created regions within some time range of $RANGES(Q)$, but not every point in the time range is necessarily contained within a region.
\paragraph{Deleting useless jobs.} In each of these generated regions, we define \emph{useless} jobs as all jobs that are neither heavy nor highlighted. Useless jobs were not previously deleted as they are completely contained within grid slices. We want to convert these regions into sparse regions, but there may be many useless jobs that make these regions very dense.
Thus, we will delete all jobs in each region that are useless.
By the process of generating regions, any such job is fully contained within a grid slice that some heavy or highlighted job partially overlaps.
We charge the deletion of all useless jobs in a given slice against a highlighted or heavy job that partially overlaps that slice.
By the definition of $Z(Q)$, the useless jobs in a slice add up to a total weight of at most $\frac{\varepsilon^4}{\log^2(N)} w(P(Q))$: we set $Z(Q)$ with $|Z(Q)| = O(\frac{\log^2 (N)}{\varepsilon^4})$ quantiles of $P(Q)$, and thus the optimal solution within any grid slice has total weight at most $\frac{\varepsilon^4}{\log^2 (N)} w(P(Q))$. Moreover, $\frac{\varepsilon^4}{\log^2(N)} w(P(Q))$ is at least a factor of $\varepsilon$ smaller than the weight of the highlighted or heavy job we charge against (and the useless jobs of only two slices charge against any given highlighted or heavy job).
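For concreteness, the chain of inequalities underlying this charging is
\[
w(\text{useless jobs in a slice}) \;\le\; \frac{\varepsilon^{4}}{\log^{2}(N)}\, w(P(Q)) \;=\; \frac{\varepsilon}{\log(N)} \cdot \frac{\varepsilon^{3}}{\log(N)}\, w(P(Q)) \;\le\; \varepsilon \cdot w(J),
\]
since any heavy or highlighted job $J$ has weight greater than $\frac{\varepsilon^3}{\log(N)}\, w(P(Q))$, and at most two slices charge their useless jobs against a given $J$.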
\paragraph{Sparsifying regions.} Now, each region only contains heavy or highlighted jobs.
We aim to split regions into ranges for $SPARSE(Q)$ without deleting much weight.
A region may have more than $\frac{1}{\varepsilon}$ jobs (meaning it is not sparse). If this is the case, we desire to split the region into time ranges that each have $\le \frac{1}{\varepsilon}$ jobs and start/end at grid endpoints. To do so, we number the jobs in a region from left to right and consider them in groups based on their index modulo $\frac{1}{\varepsilon}$. Note that a group does not consist of consecutive jobs.
Then, we delete the group with the lowest weight. We can split at the deleted jobs because every job in the region contains a grid endpoint: heavy jobs must contain a grid endpoint by how we defined $Z(Q)$, and highlighted jobs must contain one by their definition. Thus, we delete the jobs belonging to the lightest group and split the time range at the grid endpoints contained inside each of the deleted jobs.
In doing so, we lose at most a factor of $\varepsilon$ of the total weight of all the considered jobs.
However, each resulting time range now has at most $\frac{1}{\varepsilon}$ jobs and is thus a valid sparse range in $SPARSE(Q)$ (any range containing more than $\frac{1}{\varepsilon}$ consecutive jobs will have been split). Note that all these sparse ranges have endpoints in $Z(Q)$. With all of the terminology now defined, readers may find the example illustrated in \cref{fig:sparse} helpful for their understanding.
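The modulo-grouping step above can be sketched as follows (illustrative; jobs are (start, end, weight) triples and $k$ plays the role of $\nicefrac{1}{\varepsilon}$):

```python
def sparsify(jobs, k):
    """Number the region's jobs left to right, delete the lightest of
    the k groups formed by index modulo k, and split the region at the
    deleted jobs. Returns the runs of kept jobs; each run has fewer
    than k jobs, hence is a valid sparse range."""
    jobs = sorted(jobs)                      # left-to-right order
    weight = [0.0] * k
    for idx, (_, _, w) in enumerate(jobs):
        weight[idx % k] += w
    lightest = min(range(k), key=lambda g: weight[g])
    runs, run = [], []
    for idx, job in enumerate(jobs):
        if idx % k == lightest:              # delete this job, split here
            if run:
                runs.append(run)
            run = []
        else:
            run.append(job)
    if run:
        runs.append(run)
    return runs
```

Since the deleted group is the lightest of $k$ groups, its weight is at most a $\nicefrac{1}{k}$ fraction of the region's total, matching the $\varepsilon$-fraction loss claimed above.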
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.9\textwidth]{figure-deleting-light-jobs.eps}}
\caption{This example illustrates how the sparse regions are created. All vertical segments within $Q$, which are red in the figure, correspond to the points in $Z(Q)$. The cell $Q$ is divided by $Z(Q)$ such that the optimal solution within every grid slice is small. As a reminder, a grid slice is an open time-interval between two consecutive points in $Z(Q)$; see \cref{definition:grid-slice} for a formal definition. First, we delete light jobs (not pictured in this figure). Heavy jobs are represented by horizontal lines within $Q$. Second, we start with a heavy job (the blue horizontal segment marked by ``B''). From this heavy job, we expand the region outwards as necessary. In this example, we expanded to the right, seeing two highlighted jobs (the green horizontal segments marked by ``G'') until we saw a job with low enough weight intersecting a grid endpoint (these job segments are colored in brown and crossed). We delete such brown jobs, and use the grid endpoints they intersected to define the region (outlined in purple and annotated by ``new region''). Useless jobs (pictured in yellow) are then deleted. Later, we sparsify the region.}
\label{fig:sparse}
\end{figure}
\paragraph{Creating dense ranges.} Recall that not all of the time ranges that we are modifying from $RANGES(Q)$ were partitioned into regions. Points in time that were not included in a region with a heavy job will not be inside any region.
We call these remaining time ranges our \emph{dense ranges} because they may contain many jobs.
Ideally, we assign dense ranges to $RANGES(Q_L)$ and $RANGES(Q_R)$. However, that may be invalid in the following scenario.
One of those dense ranges may overlap with $Q_{mid}$ and hence would not be contained completely within a child of $Q$, $Q_{child}$.
So, consider the job $J$ in $OPT'$ that overlaps with $Q_{mid}$. If $J$ has weight at most $\frac{\varepsilon^2}{\log(N)} w(P(Q))$, we delete $J$, charge this loss to $Q$, and split the range into two dense ranges at the midpoint of $Q$. Otherwise, we retroactively consider $J$ as a heavy job and treat it as such (e.g., creating a border around it as was done for the heavy jobs previously).
All heavy jobs still have weight at least $\frac{\varepsilon^2}{\log(N)} w(P(Q))$ and there will still be at most $O(\frac{1}{\varepsilon})$ heavy jobs in each cell, so our argument regarding charging budgets still holds.
\paragraph{Snapping dense ranges.} The remaining dense time ranges have one remaining potential issue, that their endpoints may not align with $Z(Q_{child})$ even though they align with $Z(Q)$. For an example of this issue, see \cref{fig:snap}. The core of this problem is that these dense time ranges correspond to time ranges we would like to delegate to children of $Q$ (i.e., add to $RANGES(Q_L)$ and $RANGES(Q_R)$). However, there is the requirement that time ranges delegated to $RANGES(Q_L)$ and $RANGES(Q_R)$ must have endpoints in $Z(Q_L)$ and $Z(Q_R)$, respectively. Therefore, we have to modify the dense ranges so they align with the grid endpoints of one of $Q$'s children. It is tempting to naively ``snap'' the endpoints of these time ranges inward to the nearest grid endpoints of $Z(Q_{child})$, meaning to slightly contract the endpoints of the time ranges inward so they align with $Z(Q_{child})$.
Unfortunately, this might result in some jobs being ignored in the process (as illustrated in \cref{fig:snap}); a cell does not consider jobs which are not within a given range. If these ignored jobs have non-negligible total reward, ignoring them can result in a poor solution.
In the stage ``snapping dense ranges'' we detail a more involved contraction-like snapping process that contracts inwards, similar to our argument for expanding outwards from heavy jobs when we determined sparse ranges. In our contraction-like snapping process, we convert parts of the beginning and end of the dense range into sparse ranges, so we avoid deleting some of the high-reward jobs that naive snapping would force us to delete. In the stages from ``using essential jobs'' to ``resolving leaves'', we detail how to apply modifications that fulfill the required properties and how to analyze the contraction process with charging arguments.
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.9\textwidth]{figure-snapping-dense-ranges.eps}}
\caption{This example illustrates why the snapping we perform has to be done with care. The horizontal segments in this figure represent jobs. We show an initial dense range (outlined in purple) with endpoints in $Z(Q)$. With dashed vertical lines, we show where these endpoints are in $Q_L$. Importantly, they are not aligned with $Z(Q_L)$, i.e., the vertical dashed lines do not belong to $Z(Q_L)$. However, our structure \emph{requires} that dense ranges align with $Z(Q_{child})$, so we must address this. If we were to naively snap the endpoints of the dense range inwards to the endpoints of $Z(Q_L)$, then we would need to delete some jobs (these deleted jobs are colored in yellow and marked by ``Y''), while some other jobs would not be affected (like the remaining jobs in this example, those colored in blue). While this naive snapping may be fine in some cases, it will incur significant loss in cases in which the ``Y'' jobs have large weight. Notice that naively snapping outward to define a new region corresponding to the purple one is not a solution either, as this could cause the dense time range to overlap with a previously selected sparse time range. Having overlapping ranges can cause us to choose intersecting jobs, and thus an invalid solution. Thus, we detail a more comprehensive manner of dealing with snapping.}
\label{fig:snap}
\end{figure}
Consider an arbitrary unaligned dense time range $U$. Ideally, we would ``snap'' the endpoints of $U$ inward to the nearest grid point of $Z(Q_{child})$ (i.e. move the left endpoint of $U$ to the closest grid point of $Z(Q_{child})$ to its right, and the right endpoint of $U$ to the closest grid endpoint of $Z(Q_{child})$ to its left). However, doing so may force us to delete a job in $OPT'$ that is too valuable (as we would have to delete jobs that overlap the section of $U$ that was snapped inwards). So, we will handle $U$ differently.
Without loss of generality, suppose we want to ``snap'' inward the left endpoint of $U$ to align with $Z(Q_{child})$. Doing so may leave some jobs outside the snapped range. We define the cost of snapping as the total weight of jobs that were previously contained within the range but are no longer completely contained within after snapping.
If immediately snapping inward the left endpoint to the nearest grid point of $Z(Q_{child})$ would cost at most $\frac{2\varepsilon^2}{\log(N)} w(P(Q))$, we do that immediately.
Otherwise, this snapping step would cost more than $\frac{2\varepsilon^2}{\log(N)} w(P(Q))$, implying that there is a job that overlaps with the grid endpoint of $Z(Q_{child})$ to the right of $U$'s left endpoint (all other jobs we are forced to delete are strictly inside a slice of $Z(Q_{child})$ and thus have total weight $\le \frac{\varepsilon^4}{\log^2(N)} w(P(Q_{child})) \le \frac{\varepsilon^4}{\log^2(N)} w(P(Q))$) and has weight at least $\frac{2\varepsilon^2}{\log(N)} w(P(Q)) - \frac{\varepsilon^4}{\log^2(N)} w(P(Q)) \ge \frac{\varepsilon^2}{\log(N)} w(P(Q))$.
We mark that job as ``essential''.
Then, we look to the right of that essential job and examine the job that overlaps the next grid endpoint to the right in $Z(Q_{child})$. If this job has weight at most $\frac{\varepsilon^2}{\log^2(N)} w(P(Q))$, we delete it and draw a boundary.
Otherwise, we mark it as ``essential'' and continue (following the same process). When we are done, we have a prefix of the dense time range that contains some number of ``essential'' jobs and other jobs, followed by a border at a grid endpoint of $Z(Q_{child})$. The final ``snapping'', where we deleted jobs to add the split point, had cost $\le \frac{2\varepsilon^2}{\log^2(N)} w(P(Q))$. In essence, these essential jobs are the collection of jobs that were too valuable to delete during the snapping process.
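The scan just described can be sketched as follows; this is an illustrative simplification (our own representation) in which we only track, for each successive grid endpoint of $Z(Q_{child})$, the weight of the single job crossing it.

```python
def snap_left(boundary_job_weights, threshold):
    """Sketch of the inward-snapping scan for the left end of a dense range.

    boundary_job_weights: weights of the jobs overlapping successive grid
    endpoints of Z(Q_child), from left to right. A job heavier than
    `threshold` is too valuable to delete and is marked essential; the
    first light job is deleted and a border is drawn at its endpoint.
    Returns (essential_weights, deleted_weight, border_index).
    """
    essential = []
    for i, w in enumerate(boundary_job_weights):
        if w <= threshold:
            return essential, w, i   # cheap to delete: split here
        essential.append(w)          # too valuable: keep as essential
    # every boundary job was essential: the border is at the far end
    return essential, 0, len(boundary_job_weights)
```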
\paragraph{Using essential jobs.}
We will assume this dense time range had a snapping budget and charge the aforementioned final snapping cost to that.
Now, we just need to find a way to use the time range prefix with the essential jobs. We delete all jobs that are not essential in this time range with a similar argument as earlier: such a job is completely contained in a grid slice with total job weight $\le \frac{\varepsilon^4}{\log^2(N)} w(P(Q))$, which is at most an $\varepsilon$ fraction of the weight of an essential job partially contained within the slice (and an essential job is partially contained within at most two slices). Then, we convert this time range of essential jobs (with potentially many such essential jobs) into sparse time ranges in the same way as done previously during the ``sparsifying regions'' step. We do so by grouping the jobs according to their index modulo $\frac{1}{\varepsilon}$, deleting the group with the least total weight, and drawing a border at the grid endpoint of $Z(Q_{child})$ contained within each of the deleted jobs. Again, by our process we know all such essential jobs must contain a grid endpoint. This creates sparse time ranges with endpoints in $Z(Q)\cup Z(Q_{child})$, and our dense time range has endpoints in $Z(Q_{child})$, so they are both valid.
\paragraph{Financing a snapping budget.} Finally, we need to show that we actually have a sufficient snapping budget. Consider our dense time ranges. We may adjust their endpoints in other scenarios, but we only split dense time ranges into more dense time ranges when they intersect $Q_{mid}$ or they contain regions created by heavy jobs. As only one dense range can contain $Q_{mid}$ and there are only $O(\nicefrac{1}{\varepsilon})$ heavy jobs, at cell $Q$ there are always at most $O(\frac{1}{\varepsilon})$ new dense time ranges being created.
If we give each of them a snapping budget of $O(\frac{\varepsilon^2}{\log(N)} w(P(Q)))$, then in total cell $Q$ is giving out a budget of at most $O(\frac{\varepsilon}{\log(N)} w(P(Q)))$, and thus we do not lose more than $O(\varepsilon w(OPT))$ in total. We showed above that each dense range will use at most $O(\frac{\varepsilon^2}{\log^2(N)} w(P(Q)))$ of its snapping budget at each level, so it will use $O(\frac{\varepsilon^2}{\log(N)} w(P(Q)))$ in total and stay within its allotted budget of $O(\frac{\varepsilon^2}{\log(N)} w(P(Q)))$ throughout.
\paragraph{Resolving leaves.} Finally, when we are at depth $\log(N)$ (a leaf node), there is room for at most one job in $Q$. So we simply take all of $RANGES(Q)$ as $SPARSE(Q)$.
This now concludes the proof by providing a way to convert $OPT$ to a solution $OPT'$ that obeys our structure and is a $(1+\varepsilon)$-approximation of $OPT$.
\end{proof}
\subsection{Efficiently Approximating Sparse Solutions}
\label{sec:dp-sparse}
Now, we focus on designing an efficient algorithm for approximating an optimal solution within a sparse time range.
\lemmadpsparse*
\begin{proof}
To prove this claim, we use a dynamic programming approach where the state is the total weight of jobs selected so far. The dynamic programming table $\textsc{earliest}\xspace$ stores, for each state $X$, the value $\textsc{earliest}\xspace[X]$: the earliest/leftmost point in time by which a total weight of $X$ can be achieved. If we implemented this dynamic program directly, it would require space proportional to the value of the solution (which equals the largest possible $X$). Our goal is to avoid this time/space dependence. To that end, we design an \emph{approximate} dynamic program that requires only poly-logarithmic dependence on the value of an optimal solution. We derive the following technical tool to enable this:
\begin{claim}
\label{claim:approximate-sum}
Let $S$ be the set of all powers of $(1+\varepsilon/K)$ not exceeding $W$, i.e., $S = \{(1 + \varepsilon/K)^i \mid 0 \le i \le \lfloor \log_{1 + \varepsilon/K}{W} \rfloor\}$.
Consider an algorithm that supports the addition of any $K$ values (each being at least $1$) where the sum of these $K$ values is guaranteed to not exceed $W$. The values are added one by one. After each addition step, the algorithm maintains a running-total by rounding down the sum of the new value being added and the previous rounded running-total to the nearest value in $S$. Then, the final running-total of the algorithm is a $(1+\varepsilon)$ approximation of the true sum of those $K$ values.
\end{claim}
\begin{proof}
Consider the sequence of $K$ values and thus $K$ additions. Let $OPT$ denote the exact sum of the $K$ values. Let $SOL$ denote the running-total we achieve at the end of our additions. Finally, let $\textsc{CUR}\xspace_i$ denote the running-total at the beginning of stage $i$, which must be in $S$ at the end of every stage. We prove that $SOL \ge (1-\varepsilon)OPT$ and thus $SOL$ is a $(1+\varepsilon)$ approximation of $OPT$. Initially, $\textsc{CUR}\xspace_0=0$. Each step, we add some value $v_i$ to $\textsc{CUR}\xspace_i$, obtaining $\textsc{CUR}\xspace'_i = \textsc{CUR}\xspace_i + v_i$. Then, we round $\textsc{CUR}\xspace'_i$ down to the nearest power of $(1+\varepsilon/K)$ and denote the result by $\textsc{CUR}\xspace''_i$. We call the amount we lose by rounding down the loss $\ell_i = \textsc{CUR}\xspace'_i - \textsc{CUR}\xspace''_i$. For the next stage, we set $\textsc{CUR}\xspace_{i+1}=\textsc{CUR}\xspace''_i$. Note that
\[
\frac{\ell_i}{OPT} \le \frac{\ell_i}{SOL} \le \frac{\ell_i}{\textsc{CUR}\xspace''_i} = \frac{\textsc{CUR}\xspace'_i - \textsc{CUR}\xspace''_i}{\textsc{CUR}\xspace''_i} \le \frac{\varepsilon}{K}
\]
or, otherwise, we would have rounded to a different power of $(1+\varepsilon/K)$. Thus, $\ell_i \le OPT (\frac{\varepsilon}{K})$. Note that $SOL=\textsc{CUR}\xspace_{K}$ and $\textsc{CUR}\xspace_{K} + \sum_{i}\ell_i = OPT$. As such,
\[
SOL = OPT - \sum_i \ell_i \ge OPT - K \rb{OPT \rb{\frac{\varepsilon}{K}}} = OPT - \varepsilon \cdot OPT = (1 - \varepsilon) OPT.
\]
\end{proof}
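The claim can be illustrated numerically. The sketch below (with floating-point logarithms standing in for exact arithmetic) maintains the rounded running total exactly as in the claim:

```python
import math

def rounded_running_total(values, eps):
    """Add the values one by one; after each addition, round the running
    total down to the nearest power of (1 + eps/K), where K is the
    number of values. By the claim, the result is at least a (1 - eps)
    fraction of the exact sum."""
    k = len(values)
    base = 1 + eps / k
    total = 0.0
    for v in values:
        total += v
        if total >= 1:
            # round down to the nearest power of base
            total = base ** math.floor(math.log(total, base))
    return total
```

For instance, with eight values summing to $31$ and $\varepsilon = 0.1$, the rounded total stays within a $10\%$ loss of the exact sum.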
Inspired by \cref{claim:approximate-sum}, we now define a set of states $S$ as follows. Our states will represent powers of $(1+\varepsilon/K)$ from $1$ to $Kw$, and hence
\[
|S| = O\rb{\frac{\log(Kw)}{\log(1+\varepsilon/K)}} = O\rb{\frac{K\log(Kw)}{\varepsilon}}.
\]
For each state $s \in S$ (together with the state $0$), we want to maintain approximately the smallest prefix with at most $K$ jobs over which a total weight of approximately $s$ can be achieved. To do this, we loop over the states in increasing order of value. Suppose the current state corresponds to having approximate weight $s \in S$ and $\textsc{earliest}\xspace[s]$ is the shortest prefix we have that achieves approximate weight $s$. Then we loop over all rounded weights $v \in \{(1+\varepsilon)^i\}$. There are $O(\nicefrac{\log(w)}{\varepsilon})$ such $v$. For each $v$, set $\mathcal{V}$ to be the value of $s+v$ rounded down to the nearest power of $(1+\varepsilon/K)$. Then, if the earliest ending time of a job with rounded weight $v$ that starts after $\textsc{earliest}\xspace[s]$ is less than $\textsc{earliest}\xspace[\mathcal{V}]$, we update $\textsc{earliest}\xspace[\mathcal{V}]$ to that ending time. We can calculate the earliest ending time of any job with a particular rounded weight starting after some specified time in $O(\log(n))$ time by maintaining a balanced binary search tree (as done in \cref{section:dynamic-unit}) for each of the $O(\nicefrac{\log(w)}{\varepsilon})$ rounded weights (powers of $(1+\varepsilon)$). This negligibly adds $O(\log(n))$ time to each update. In total, this solution runs in $O(\frac{K \log(n) \log(w) \log(Kw)}{\varepsilon^2})$ time.
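A simplified, self-contained sketch of this approximate dynamic program follows. It is our own simplification: we scan all jobs at every state instead of querying per-weight-class search trees, so it illustrates only the state space and transitions, not the running-time bound.

```python
import bisect
import math

def approx_best_k_jobs(jobs, K, eps, W):
    """Approximate DP: states are 0 and the powers of (1 + eps/K) up to
    about W; earliest[s] records, for rounded total weight s, the
    leftmost ending time of a set of at most K non-overlapping jobs
    achieving it, together with how many jobs it uses.

    jobs: list of (start, end, weight); W: an upper bound on the total
    weight. Returns an approximate maximum weight of <= K jobs.
    """
    base = 1 + eps / K
    num_powers = math.floor(math.log(W, base)) + 2
    states = [0.0] + [base ** i for i in range(num_powers)]
    INF = float('inf')
    earliest = {s: (INF, 0) for s in states}
    earliest[0.0] = (-INF, 0)          # weight 0 achieved before any job
    best = 0.0
    for s in states:                   # increasing order of rounded weight
        end_time, used = earliest[s]
        if end_time == INF:
            continue                   # state s is not reachable
        best = max(best, s)
        if used == K:
            continue                   # cannot add more jobs
        for start, end, w in jobs:
            if start <= end_time:
                continue               # job must start after the prefix
            # round the new total down to the nearest state
            ns = states[bisect.bisect_right(states, s + w) - 1]
            if end < earliest[ns][0]:
                earliest[ns] = (end, used + 1)
    return best
```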
To reduce the dependence on $w$, let $w_{max}$ denote the largest weight of a job completely contained within $\mathcal{R}$. Note that any job with weight $<\frac{\varepsilon w_{max}}{K}$ can be ignored, as even $K$ of them have total weight $<\varepsilon w_{max}$. If we can approximate $w_{max}$, then we can focus only on jobs with weights in $[\frac{\varepsilon w_{max}}{K},w_{max}]$ and effectively reduce $w$ to $\nicefrac{K}{\varepsilon}$ by dividing all weights by $\frac{\varepsilon w_{max}}{K}$.
To approximate $w_{max}$, we maintain a number of balanced binary search trees (at most $O(\log(w))$), where the $i$-th balanced binary search tree contains all jobs with weight $\ge2^i$ sorted by starting time. Maintaining these balanced binary search trees negligibly adds $O(\log(w)\log(n))$ time to each update. Then, to approximate $w_{max}$, we can binary search for the largest $i$ such that there is an interval in the $i$-th balanced binary search tree completely inside $\mathcal{R}$. This gives a 2-approximation of $w_{max}$, takes $O(\log(\log(w))\log(n))$ time, and enables us to use $w=O(\nicefrac{K}{\varepsilon})$ in the above runtime bound. As such, this algorithm runs in $O(\frac{K \log(n) \log^2(K/\varepsilon)}{\varepsilon^2} + \log(\log(w)) \log(n))$ time.
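The threshold search can be sketched as follows; here plain list scans (our own simplification) stand in for the balanced binary search trees, so the sketch shows the search structure but not the claimed query time.

```python
def approx_wmax(jobs, time_range, w_bound):
    """2-approximate the largest weight of a job inside time_range.

    jobs: list of (start, end, weight); w_bound: an upper bound on all
    job weights. Conceptually, bucket i holds the jobs of weight >= 2^i;
    we binary search the largest i whose bucket still has a job fully
    inside the range. Returns None if no job lies inside the range.
    """
    lo_t, hi_t = time_range
    levels = max(1, w_bound.bit_length())  # 2**levels exceeds w_bound

    def has_job(i):
        # would the i-th search tree contain a job inside the range?
        return any(lo_t <= s and e <= hi_t and w >= 2 ** i
                   for s, e, w in jobs)

    if not has_job(0):
        return None
    lo, hi = 0, levels      # invariant: has_job(lo) and not has_job(hi)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if has_job(mid):
            lo = mid
        else:
            hi = mid
    return 2 ** lo          # within a factor of 2 of the true maximum
```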
To show the algorithm's correctness, observe that since we always round down, we never overestimate the value. Moreover, \cref{claim:approximate-sum} shows that the result of any sequence of $K$ additions is within a factor of $(1+\varepsilon)$ of its true value.
\end{proof}
\begin{corollary}
For our usage, $K=\frac{1}{\varepsilon}$. As such, we have a $(1+\varepsilon)$-approximation algorithm for the optimal solution with at most $\frac{1}{\varepsilon}$ jobs that runs in time $O\rb{\frac{K \log(n) \log^2(K/\varepsilon)}{\varepsilon^2} + \log(\log(w)) \log(n)}=O\rb{\frac{\log(n) \log^2(1/\varepsilon)}{\varepsilon^3} + \log(\log(w)) \log(n)}$.
\end{corollary}
\subsection{Combining All Ingredients -- Proof of \cref{theorem:weighted-dynamic-M=1}}
\label{sec:combining-ingredients}
Now, we put this all together to get a cohesive solution that efficiently calculates an approximately optimal solution of the desired structure. %
For each cell $Q$, we will compute a sparse solution corresponding to each segment formed by considering all pairs of grid endpoints $Z(Q) \cup Z(Q_L) \cup Z(Q_R)$ and a dense solution for each segment $S$ formed by pairs of endpoints $Z(Q)$ denoted as $DP(Q,S)$.
To compute all sparse solutions, we use $O(|Z(Q) \cup Z(Q_L) \cup Z(Q_R)|^2)$ calls to our algorithm from \cref{lemma:sparse-sol-approx} resulting in $O(|Z(Q) \cup Z(Q_L) \cup Z(Q_R)|^2 (\frac{\log(n) \log^2(1/\varepsilon)}{\varepsilon^3} + \log(\log(w)) \log(n))) = O(\frac{\log(n) \log^2(1/\varepsilon) \log^4(N)}{\varepsilon^{11}} + \frac{\log(\log(w)) \log(n) \log^4(N)}{\varepsilon^8})$ running time.
To compute all $DP(Q,S)$, we build on the proof of \cref{lemma:sol-structure}. Namely, from the proof of \cref{lemma:sol-structure} a $(1+\varepsilon)$-approximate solution is maintained by dividing $S$ into sparse (i.e., $SPARSE(Q)$) and dense segments of $Q_L$ and $Q_R$ (i.e., $RANGES(Q_L),RANGES(Q_R)$).
We update our data structure from bottom to top. Hence, by the time we update $DP(Q)$ we have already updated $DP(Q_L)$ and $DP(Q_R)$, which lets us read off approximate optimal values for the sets $RANGES(Q_L)$ and $RANGES(Q_R)$. Thus, to calculate $DP(Q,S)$ we consider an interval scheduling instance where jobs start at a grid endpoint of $S$ and end at a grid endpoint of $S$.
In this instance, jobs correspond to all the sparse segments of $Z(Q),Z(Q_L),Z(Q_R)$ and all the dense segments of $Z(Q_L),Z(Q_R)$. We can compute this dense segment answer for all dense segments of $Z(Q)$ in $O(|Z(Q) \cup Z(Q_L) \cup Z(Q_R)|^3)=O\rb{\frac{\log^6(N)}{\varepsilon^{12}}}$ time, with a dynamic program where the state is the starting and ending point of a segment and the transition tries all potential grid endpoints at which to split the range (or just uses the interval from the start to the end). For each update, we update one cell $Q$ at each of the $\log(N)$ levels from bottom to top by recomputing the optimal sparse solutions for segments and the respective $DP(Q)$. Finally, at the beginning of each update, we use the $O(\log(w) \log(n) \log^4(N))$ algorithm of \cite{henzinger2020dynamic} as a subroutine to update $Z(Q)$ accordingly. As such, our total update time is $O\rb{\frac{\log(n) \log^2(1/\varepsilon) \log^5(N)}{\varepsilon^{11}} + \frac{\log(\log(w)) \log(n) \log^5(N)}{\varepsilon^8} + \frac{\log^7(N)}{\varepsilon^{12}} + \log(w) \log(n) \log^4(N)}$.
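The segment dynamic program described above can be sketched as follows, where `seg_value` is our own stand-in for the precomputed value of treating one segment as a single (sparse or delegated dense) range:

```python
def best_over_segment(points, seg_value):
    """DP over segments delimited by grid endpoints: for each segment,
    either take its value as a single range or split it at an
    intermediate grid endpoint.

    points: sorted grid endpoints; seg_value(i, j): value of using
    [points[i], points[j]] as one range. Returns dp with dp[(i, j)]
    equal to the best achievable value on that segment.
    """
    n = len(points)
    dp = {}
    for length in range(1, n):             # segments by increasing length
        for i in range(n - length):
            j = i + length
            best = seg_value(i, j)         # use the whole interval as-is
            for m in range(i + 1, j):      # or split at a grid endpoint
                best = max(best, dp[(i, m)] + dp[(m, j)])
            dp[(i, j)] = best
    return dp
```

With $|Z|$ grid endpoints this takes $O(|Z|^3)$ time, matching the cubic term in the bound above.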
\section{Scheduling Algorithms on Multiple Machines with Partitioning}\label{section:scheduling}
In the previous sections we focused on the case of a single machine, i.e., $M = 1$. In this section, we extend our results to the setting where there are multiple machines on which to schedule jobs ($M > 1$). %
We begin by discussing interval scheduling on multiple machines in the unweighted setting, and then turn to the weighted setup.
\subsection{Unweighted Interval Scheduling on Multiple Machines}
An efficient centralized/sequential algorithm that exactly solves unweighted interval scheduling has a structure very similar to that of the greedy algorithm for unweighted \textsc{Maximum-IS}\xspace. We use this to show that modifications of our results for the single-machine setting lead to results in the multiple-machine setup.
\theoremunweighteddynamicschedule*
For the setting of local unweighted interval scheduling, we show the following.
\theoremunweighteddynamicscheduling*
These theorems are proved in \cref{proof:theorem:unweighted-dynamic-scheduling-multiple,proof:theorem:unweighted-LCA-scheduling}.
\subsection{Weighted Interval Scheduling on Multiple Machines}
For the weighted interval scheduling problem, the well-known minimum-cost flow based algorithm requires $O(n^2 \log(n))$ time. It is not clear how to efficiently simulate this approach in the dynamic or local setting. Instead, we consider alternative approaches for partitioning jobs over machines.
When $M=1$ for scheduling, the optimal solution has a structure similar to that of \textsc{Maximum-IS}\xspace. \cite{bar2001approximating} study a natural greedy approach for $M > 1$, which consists of performing the following $M$ times: in the $i$-th step, take the (weighted) \textsc{Maximum-IS}\xspace of the currently unscheduled jobs and schedule these jobs on machine $i$.
(To be precise, we note that \cite{bar2001approximating} study this algorithm in a more general variant of weighted interval scheduling where start/end times are flexible.) Theorem 3.3 of \cite{bar2001approximating} implies that using an $\alpha$-approximation for \textsc{Maximum-IS}\xspace $M$ times, in the way described above, gives a $\frac{ (\alpha M)^M} { (\alpha M)^M - (\alpha M - 1)^M}$-approximation (and thus a $\frac{\alpha M^M}{M^M - (M-1)^M}$-approximation) for weighted interval scheduling. Hence, a natural question to ask is whether this approximation can be retained even when using approximate algorithms and in settings other than centralized. We answer this question affirmatively by showing the following results, whose proof is deferred to \cref{proof:theorem:weighted-dynamic-scheduling}.
\theoremweighteddynamicscheduling*
The scheduling algorithm guaranteed by the theorem above is at least a factor of $M$ slower than its \textsc{Maximum-IS}\xspace counterparts. Moreover, the update time of the same algorithm is $\Omega(w)$, while the update time for dynamic weighted interval \textsc{Maximum-IS}\xspace (see \cref{theorem:weighted-dynamic-M=1}) has only logarithmic dependence on $w$. The main reason for this behavior of \textsc{Maximum-IS}\xspace-like algorithms is that they partition time in such a way that each region contains a sparse subproblem, e.g., containing $O(M / \varepsilon)$ jobs, that is easy to solve. However, such regions must have size $\Omega(w)$ in the weighted interval scheduling variant. To see that, consider a long job of reward $w$, with $w$ small non-intersecting jobs of reward $1$ inside it. The optimal schedule for $M=2$ machines would include all these jobs. However, any partitioning of time that ensures there are $O(M/\varepsilon)$ jobs within each part (akin to the ideas we developed in earlier sections) would discard the long job (removing half the total reward). Thus, intuitively, any algorithm giving a better-than-$2$ approximation would not be able to partition the time axis as done in earlier sections, and hence all sparse subproblems would have size $\Omega(w)$.
To alleviate this shortcoming, we employ a new \emph{partitioning scheme over machines} to achieve scheduling algorithms that run in $o(M)$ and $o(w)$ time. Instead of a sequential process, we assign each job to a machine uniformly at random. Then, a job is only allowed to be scheduled on the machine it was assigned to. With these constraints, the interval scheduling problem is equivalent to the \textsc{Maximum-IS}\xspace problem for each machine given the intervals assigned to it. On the positive side, this results in a scheduling task that computationally can be solved as efficiently as \textsc{Maximum-IS}\xspace. However, it is unclear what the approximation loss of this scheme is. Surprisingly, we show that our scheme incurs only a multiplicative factor of $e$ in the approximation loss.
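A minimal sketch of this machine-partitioning scheme is given below. The greedy solver shown is exact only for the unweighted case and merely illustrates the interface; any (approximate) weighted \textsc{Maximum-IS}\xspace oracle can be plugged in as `mis_solver`.

```python
import random

def greedy_mis(jobs):
    """Earliest-ending-time greedy; exact for unweighted Maximum-IS.
    Jobs are (start, end, weight) with half-open time intervals."""
    chosen, free_at = [], float('-inf')
    for start, end, w in sorted(jobs, key=lambda j: j[1]):
        if start >= free_at:
            chosen.append((start, end, w))
            free_at = end
    return chosen

def random_partition_schedule(jobs, M, mis_solver, seed=0):
    """Assign each job to a uniformly random machine, then solve an
    independent Maximum-IS instance per machine. The final schedule is
    the union of the per-machine solutions."""
    rng = random.Random(seed)
    per_machine = [[] for _ in range(M)]
    for job in jobs:
        per_machine[rng.randrange(M)].append(job)
    return [mis_solver(machine_jobs) for machine_jobs in per_machine]
```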
Before we proceed to analyzing the approximation guarantee of this scheme, as a warm-up, we show that compared to \cref{theorem:unweighted-dynamic-scheduling-multiple,theorem:unweighted-LCA-scheduling} this approach yields an even more efficient method for computing unweighted interval scheduling on multiples machines. This efficiency comes at the expense of slightly worsening the approximation guarantee.
\theoremrandomunweighted*
Our proof of \cref{theorem:random-unweighted} is given in \cref{proof:theorem:random-unweighted}. Our main contribution is a black-box result for \emph{weighted} interval scheduling on \emph{multiple} machines, stated as follows.
\theoremweightedrandom*
\begin{proof}
The algorithm begins by immediately assigning each job to one of the machines uniformly at random. Then, we find an optimal solution on each machine with the jobs that were randomly assigned to it, where this subproblem is the \textsc{Maximum-IS}\xspace problem. Accordingly, this randomized algorithm achieves the same runtime as the oracle for \textsc{Maximum-IS}\xspace.
Our hope is to show that the union of the optimal solutions for each machine (once we have randomly assigned the jobs) is a high-quality approximation of the globally optimal solution in which jobs are not randomly constrained to particular machines. A result of this kind follows rather directly in the proof of \cref{theorem:random-unweighted} in \cref{proof:theorem:random-unweighted}, yet for the weighted case we use a more involved approach. Instead of directly arguing about the optimal solutions of each \textsc{Maximum-IS}\xspace problem, we develop a global strategy that respects the random machine constraints and guarantees that each job in $OPT$ has at least a constant probability of being in the final schedule.
Fix uniformly at random a permutation of the jobs of $OPT$, and consider the jobs of $OPT$ in this order. When we consider a job, we also reveal the machine it is assigned to by $OPT$. Throughout this process, in parallel, we are building an \emph{alternative schedule} as follows. Suppose we are currently considering job $J$ and suppose it has been assigned to machine $P$ by $OPT$. If all the jobs we have scheduled on $P$ so far are either completely contained by $J$ or do not intersect $J$, then we include $J$ in our schedule (deleting all scheduled jobs in $P$ that are contained in $J$). Otherwise, we do not schedule $J$.
Now, we characterize when $J$ is in our schedule at the end of this process. If all jobs completely containing $J$ are assigned to other machines, and all jobs intersecting $J$ that appear earlier in the permutation are assigned to different machines (or are completely contained in $J$), then $J$ will be in our final schedule. As such, a lower bound on the probability that $J$ is in our schedule is the product of
\begin{enumerate}[(1)]
\item the probability of all jobs containing $J$ being assigned to different machines, and
\item the probability of all other jobs that intersect $J$ (ignoring jobs $J$ completely contains) and have earlier permutation indices being assigned to different machines.
\end{enumerate}
Suppose there are $C$ jobs that completely contain $J$. Then, no other jobs on those $C$ machines can intersect $J$, as those machines' jobs form valid schedules. For the remaining $M-1-C$ machines, at most 2 jobs per machine can intersect $J$ while neither completely containing $J$ nor being completely contained within $J$ (each such job must contain an endpoint of $J$). Thus, the most pessimistic scenario is that there are $C$ machines in $OPT$ containing a job that completely contains $J$ and $M-1-C$ machines in $OPT$ containing two jobs that partially intersect $J$. The probability that all $C$ jobs completely containing $J$ are assigned to different machines is $(1-\frac{1}{M})^C$. For the $2(M-1-C)$ jobs that partially overlap with $J$, we average over the random choice of permutation. Note that, as the permutation is chosen uniformly at random, $J$ is equally likely to be at each position of the permutation when considering only $J$ and the $2(M-1-C)$ jobs. Moreover, if $J$ is at position $i$, then the probability that $J$ is in the final schedule is $(1-\frac{1}{M})^i$. Thus, the probability that all jobs intersecting $J$ are either assigned to different machines or appear later in the permutation is $\frac{\sum_{i=0}^{2(M-1-C)} (1-1/M)^i}{2(M-1-C)+1}$.
This gives us a lower-bound where we pessimistically classify machines in the original solution as \emph{containing} machines that have a job completely containing $J$, and \emph{intersecting} machines that have two jobs partially intersecting $J$. For simplicity, we will denote the lower-bound that $C_1$ containing machines do not violate $J$ as $f_{contain}(C_1) = (1-\frac{1}{M})^{C_1}$ and the lower-bound that $C_2$ intersecting machines do not violate $J$ as $f_{intersect}(C_2) = \frac{\sum_{i=0}^{2 C_2} (1-1/M)^i}{2 C_2 +1}$.
Combined, our lower bound on the probability that each job is in our schedule is
$f_{contain}(C) \times f_{intersect}(M-1-C)$,
where $C$ can take integer values in range $0$ to $M-1$.
\begin{claim}
The quantity $f_{contain}(C) \times f_{intersect}(M-1-C)$ is minimized when $C=M-1$ (i.e., all other machines have one job completely containing this job).
\end{claim}
\begin{proof}
To show this, we show that $f_{contain}(C) \times f_{intersect}(M-1-C) \ge f_{contain}(M-1) \times f_{intersect}(0)$. By factoring out $f_{contain}(C)$, this is equivalent to showing $f_{intersect}(M-1-C) \ge f_{contain}(M-1-C)$. For simplicity, we set $C'=M-1-C$ and show $f_{intersect}(C') - f_{contain}(C') \ge 0$ for all integer $C'$ from $0$ to $M-1$. Additionally, we define $x=(1-\frac{1}{M})$. As $M>1$, we note that $x \in [\frac{1}{2},1)$. Accordingly:
\begin{align*}
& f_{intersect}(C') - f_{contain}(C') \\
= & \frac{\sum_{i=0}^{2 C'} (1-1/M)^i}{2 C' +1} - (1-1/M)^{C'} \\
= & \frac{\sum_{i=0}^{2 C'} x^i}{2 C' +1} - x^{C'} \\
= & \frac{\sum_{i=1}^{C'} (x^{C'-i} + x^{C'+i} - 2 x^{C'})}{2 C' +1} \\
= & \frac{\sum_{i=1}^{C'} (x^{C' - i} \times (1 + x^{2i} - 2 x^{i}))}{2 C' +1} \\
= & \frac{\sum_{i=1}^{C'} (x^{C' - i} \times (x^{i} - 1)^2)}{2 C' +1} \\
\ge & 0
\end{align*}
The last step is obtained because each summand is non-negative. This shows that $f_{intersect}(C') \ge f_{contain}(C')$ for all valid integer $C'$ and thus $f_{contain}(C) \times f_{intersect}(M-1-C)$ is minimized when $C=M-1$.
\end{proof}
Thus, our lower bound on the probability of a job being in the resulting solution is always at least $f_{contain}(M-1) = (1-\frac{1}{M})^{M-1} \ge \frac{1}{e}$.
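The two bounds used here can be double-checked numerically with a short script (the function names are ours):

```python
def f_contain(c, M):
    # lower bound that c containing machines do not violate the job
    return (1 - 1 / M) ** c

def f_intersect(c, M):
    # average of (1 - 1/M)^i over the 2c + 1 permutation positions
    return sum((1 - 1 / M) ** i for i in range(2 * c + 1)) / (2 * c + 1)

def worst_case_bound(M):
    # minimum over C of f_contain(C) * f_intersect(M - 1 - C)
    return min(f_contain(C, M) * f_intersect(M - 1 - C, M)
               for C in range(M))
```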
With this, we have shown that our generative process results in a schedule whose expected weight is at least $\frac{|OPT|}{e}$. This implies that an $\alpha$-approximate \textsc{Maximum-IS}\xspace algorithm yields an $e \alpha$-approximation.
\end{proof}
As such, we explore the relationship between partitioning over time and machines to solve the interval scheduling problem. To achieve $(1+\varepsilon)$-approximations for unweighted and $(\frac{e}{e-1}+\varepsilon)$-approximations for weighted scheduling, we simultaneously partition over time and machines at the expense of slower algorithms. However, if we tolerate $(2-1/M+ \varepsilon)$-approximations for unweighted scheduling or $(e + \varepsilon)$-approximations for weighted scheduling,
we randomly partition over machines \emph{then} time to achieve comparable efficiency to the \textsc{Maximum-IS}\xspace problem.
\subsection{Proof of \cref{theorem:unweighted-dynamic-scheduling-multiple}}
\label{proof:theorem:unweighted-dynamic-scheduling-multiple}
We maintain a modified version of \cref{invariant:unweighted-MIS}, where the algorithm maintains a set of borders such that an optimal solution between any two consecutive borders is of size between $\nicefrac{M}{\varepsilon}$ and $\nicefrac{2M}{\varepsilon}+M$ jobs. Direct modification of \cref{lemma:invariant-value} shows that this maintains a $(1+\varepsilon)$-approximation (the size of solutions between consecutive borders is a factor of $M$ larger than in \cref{theorem:unweighted-M=1} because $M$ jobs may intersect any border).
As a starting point, we consider the classic greedy algorithm for unweighted interval scheduling on multiple machines \cite{tardos2005algorithm}, that we recall next:
\begin{itemize}
\item Among jobs that start after the earliest time any machine is free, find the one with the earliest ending time.
\item Then, among machines that can take the job, schedule the job to the machine that becomes free at the latest time.
\end{itemize}
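The two-step greedy above can be sketched directly (a minimal illustration with our own job/machine representation, not the probe-based simulation discussed next):

```python
def greedy_schedule(jobs, m):
    """Classic greedy for unweighted interval scheduling on m machines.

    jobs: list of (start, end) pairs. Returns a list of (job, machine)
    assignments; a machine that is free at time t can take any job with
    start >= t.
    """
    free_at = [0] * m                              # when each machine frees up
    remaining = sorted(jobs, key=lambda j: j[1])   # sort by ending time
    schedule = []
    while True:
        t = min(free_at)
        # earliest-ending job that starts after some machine is free
        job = next((j for j in remaining if j[0] >= t), None)
        if job is None:
            break
        # among machines that can take the job, pick the one
        # that becomes free at the latest time
        able = [i for i in range(m) if free_at[i] <= job[0]]
        machine = max(able, key=lambda i: free_at[i])
        free_at[machine] = job[1]
        schedule.append((job, machine))
        remaining.remove(job)
    return schedule
```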
This solution can easily be simulated in $O(|OPT|\log(n))$ time by a method similar to \textsc{Probe-based-Opt}\xspace. In our dynamic version, we handle insertions and deletions analogously to \cref{theorem:unweighted-M=1}.
More specifically, when a job is added inside a region, we recompute an answer for the region in $O(\frac{M \log(n)}{\varepsilon})$ time. If the solution becomes too large, we add a border after the $\frac{M}{\varepsilon}$-th ending point of a job in the solution (this will invalidate at most $M$ jobs, leaving the left half with a $[\nicefrac{M}{\varepsilon},\nicefrac{M}{\varepsilon}+M]$ size solution and the right half with a $[\nicefrac{M}{\varepsilon}+1, \nicefrac{M}{\varepsilon}+M+1]$ size solution). If deleting a job makes the recomputed solution too small, we combine with an adjacent region (and if the region is now too large, we add a new border to split the region like above).
With essentially the same approach as \cref{theorem:unweighted-M=1}, we obtain $O(\frac{M \log(n)}{\varepsilon})$ updates and $O(\log(n))$ queries in worst-case time.
\subsection{Proof of \cref{theorem:unweighted-LCA-scheduling}}
\label{proof:theorem:unweighted-LCA-scheduling}
First, we modify the \emph{successor oracle} for this result. Consider an instance with two machines and two jobs corresponding to time ranges $[1,4]$ and $[2,3]$. No successor oracle probe will ever return the first job because a successor oracle probe will never return a job completely contained in another job. Thus the original successor oracle is not strong enough to determine any particular constant-factor approximation to scheduling even with infinite probes. To remedy this, we modify the successor oracle such that it ignores a set of jobs given with the probe (it is not a concern that this set will be very large, as any probe-efficient algorithm will not know many jobs to specify for the set), which enables us to simulate a subroutine analogous to \textsc{Probe-based-Opt}\xspace.
With this new successor oracle, our algorithm and analysis are almost identical to \cref{alg:alg-local-approx} proven in \cref{theorem:unweighted-M=1-local}. Our key difference is that we now set the thresholds for drawing borders to when $OPT(Q_L)$ and $OPT(Q_R)$ are larger than $\nicefrac{M}{\varepsilon}$ instead of $\nicefrac{1}{\varepsilon}$. With this, we are maintaining the modified version of \cref{invariant:unweighted-MIS} from \cref{theorem:unweighted-dynamic-scheduling-multiple} that is shown to result in a $(1+\varepsilon)$-approximation. More concretely, to simulate this process we define a function $f(Q,first\_empties)$ analogous to that of $f(Q,first\_empty)$ from \cref{alg:alg-local-approx}. The primary differences are the aforementioned factor of $M$ increase of the threshold for drawing a border (our simulation of \textsc{Probe-based-Opt}\xspace is thus a factor of $M$ slower), that we have $M$ possible $I_{mid}$, and that we keep track of and return the times for all $M$ machines (hence $first\_empties$ instead of $first\_empty$).
With essentially the same approach as \cref{theorem:unweighted-M=1-local}, we obtain a local computation algorithm for $(1+\varepsilon)$-approximate unweighted interval scheduling on $M$ machines using $O(\frac{M\log(N)}{\varepsilon})$ probes.
\subsection{Proof of \cref{theorem:weighted-dynamic-scheduling}}
\label{proof:theorem:weighted-dynamic-scheduling}
We outline an alternative approach to a dynamic algorithm for weighted independent set of intervals based on \cref{section:dynamic-unit}. While a stronger result is presented in \cref{section:dynamic-weighted}, that approach does not easily lend itself well to repeatedly calculating \textsc{Maximum-IS}\xspace. We instead build off the simpler result from \cref{section:dynamic-unit}.
We maintain a modified version of \cref{invariant:unweighted-MIS}, where the reward of the solution we calculate within consecutive borders is in range $[\nicefrac{Mw}{\varepsilon},\nicefrac{8Mw}{\varepsilon}+2Mw]$. We want to repeatedly calculate a $(1+\varepsilon)$-approximation of \textsc{Maximum-IS}\xspace within regions and use an approach similar to, but different from, \cref{lemma:sparse-sol-approx}. In contrast to the setting of \cref{lemma:sparse-sol-approx}, our invariant bounds the total weight within consecutive borders as opposed to the number of jobs in the optimal solution within consecutive borders. Consider a dynamic programming problem where our state is the total weight of jobs and the corresponding answer is the shortest prefix that can obtain jobs of this total weight. It is simpler for us if all weights are integers and there are not many distinct weights. We round all weights down to powers of $(1+\varepsilon)$, which will not affect our approximation by more than a factor of $(1+\varepsilon)$. Then, we scale all weights by $\nicefrac{1}{\varepsilon}$. Each job now has weight at least $\nicefrac{1}{\varepsilon}$, so rounding down to the nearest integer costs at most an $\varepsilon$ fraction of the weight and the remaining optimal solution is still a $(1+O(\varepsilon))$-approximation. Now, we optimally calculate the \textsc{Maximum-IS}\xspace within each region given the rounding. Let $D$ be the number of distinct weights. The dynamic programming problem we mentioned can be solved in $O(\log(n) \cdot |OPT| \cdot D)$ time as there are at most $|OPT|$ states we can reach, there are $D$ possible transitions (trying the job with some given weight that starts after the current prefix and ends earliest), and each transition uses a $O(\log(n))$ query to a balanced binary search tree. Due to our invariant and scaling weights, the sum of $|OPT|$ as we calculate \textsc{Maximum-IS}\xspace $M$ times is at most $O(\frac{Mw}{\varepsilon^2})$.
By rounding the weights down to powers of $(1+\varepsilon)$, $D = O(\frac{\log(w)}{\varepsilon})$. Thus, we recompute the answer for a region in $O(\frac{Mw \log(n) \log(w)}{\varepsilon^3})$ time.
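The weight preprocessing described above (round down to a power of $(1+\varepsilon)$, scale by $\nicefrac{1}{\varepsilon}$, then floor to an integer) can be sketched as follows; the function name and interface are our own:

```python
import math

def preprocess_weights(weights, eps):
    """Round each weight down to a power of (1+eps), scale by 1/eps,
    and floor to an integer, as in the analysis above.
    Assumes all weights are at least 1."""
    out = []
    for w in weights:
        p = math.floor(math.log(w) / math.log(1 + eps))  # largest power <= w
        out.append(math.floor((1 + eps) ** p / eps))
    return out
```

After this step every weight is an integer, and the number of distinct weights is $D = O(\frac{\log(w)}{\varepsilon})$.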
Now, we handle insertions and deletions similarly to \cref{theorem:unweighted-M=1}. This maintains a $((\frac{M^M}{M^M-(M-1)^M})(1 + \varepsilon))$-approximation, which is also a $4$-approximation.
This means the solution the algorithm generates for any region is at least $\frac{1}{4}$ of the optimal solution for that region. When we insert/delete intervals in a region, we recompute the answer for the region. If the total weight of the region becomes too small, we repeatedly combine with adjacent regions until it is not too small. At most four combinations must occur, as then the union of the solutions we had found is at least a factor of $4$ larger than the minimum solution size for a region, so our $4$-approximation must find it. If we add a job and the region solution becomes too large, we note that the true solution size is at most $4(\frac{8Mw}{\varepsilon}+2Mw)$. Whenever a region's solution is too large, we split at the smallest prefix that contains intervals of total weight $\frac{4Mw}{\varepsilon}$. The left region will have a solution of size $\ge \frac{4Mw}{\varepsilon}$ and the right region will have a solution of size $\ge (\frac{8Mw}{\varepsilon}+2Mw)-(\frac{4Mw}{\varepsilon}+2Mw) = \frac{4Mw}{\varepsilon}$. Thus, our $4$-approximation will find a solution of size at least $\frac{Mw}{\varepsilon}$ for both and we will never classify either as too small. As we separate at least $\frac{4Mw}{\varepsilon}$ of weight with every split, only $O(1)$ splits will occur. With this, we achieve an algorithm with $O(\frac{Mw \log(n) \log(w)}{\varepsilon^3})$ update and $O(\log(n))$ query time worst-case.
\subsection{Proof of \cref{theorem:random-unweighted}}
\label{proof:theorem:random-unweighted}
The algorithm begins by assigning each job to one of the machines uniformly at random. Then, finding an optimal solution on each machine is the \textsc{Maximum-IS}\xspace problem.
Our proof technique is to simultaneously simulate the classic greedy \textsc{Maximum-IS}\xspace algorithm and the realization of each job's assignment for a single machine. We show that the expected \textsc{Maximum-IS}\xspace of jobs assigned to a machine is at least $\frac{|OPT|}{2M-1}$.
Consider the set of jobs in an optimal solution $OPT$, and ignore all others. In the classic greedy \textsc{Maximum-IS}\xspace algorithm, we consider jobs in an increasing order of their ending time and use the job if it does not intersect any previously selected jobs. At a high-level, we will simulate this algorithm on a particular machine, realizing whether or not a job was assigned to this machine only as we need to. In particular, assume we have a set of jobs $OPT$ that forms a valid schedule on $M$ machines and whose jobs all start after the ending points of any jobs we have previously selected. We consider this set in increasing order of ending time. When we consider a job $I$, we realize its assignment. If $I$ is not assigned to the current machine (probability $1-1/M$), we cannot use it. If $I$ is assigned to the current machine (probability $1/M$), we use the job and delete all jobs in $OPT$ that intersect it. Note that all other jobs in $OPT$ have an ending time that is at least the ending time of $I$ (because we have not yet considered them). Thus, to intersect $I$, they must start before $I$ ends. This implies that all the jobs we delete must contain the ending point of $I$. Since $OPT$ is a valid schedule for $M$ machines (and no schedule on $M$ machines can have $>M$ jobs containing a point), we only need to delete at most $M-1$ other jobs. In either situation, the invariant on $OPT$ is maintained afterwards.
Thus, we write the following recurrence $f(X)$ to denote a lower bound on the expected size of the \textsc{Maximum-IS}\xspace given $|OPT|=X$:
\[
f(X) \ge (1 - 1/M) f(X-1) + \nicefrac{1}{M} (f(X-M) + 1).
\]
For simplicity of notation, we assume that $f(X) = 0$ for $X \le 0$.
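The recurrence is easy to tabulate numerically. The sketch below takes it with equality (so it computes exactly the stated lower bound) and can be checked against the closed form of the lemma that follows:

```python
def f_lower(x_max, m):
    # f(X) = (1 - 1/M) f(X-1) + (1/M)(f(X-M) + 1), with f(X) = 0 for X <= 0
    f = [0.0] * (x_max + 1)
    for x in range(1, x_max + 1):
        back = f[x - m] if x >= m else 0.0
        f[x] = (1 - 1 / m) * f[x - 1] + (back + 1) / m
    return f
```

For every $M$ tried, $f(X) \ge \frac{X}{2M-1}$ holds for all $X$, consistent with the lemma.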
\begin{lemma}
It holds that $f(X) \ge \frac{X}{2M-1}$.
\end{lemma}
\begin{proof}
First, we show the claim when $X \le M$.
We have the following chain of inequalities:
\begin{align*}
& \rb{1-\frac{1}{M}}f(X-1) + \frac{1}{M} (f(X-M) + 1) \\
\ge & \rb{1 - \frac{1}{M}} \frac{X-1}{2M-1} + \frac{1}{M} (0 + 1) \\
= & \frac{X-1}{2M-1} - \frac{X-1}{(2M-1)M} + \frac{1}{M}
\\
= & \frac{M(X-1) - (X-1) + (2M-1)}{M (2M-1)} \\
= & \frac{MX + M - X}{M (2M-1)} \\
\ge & \frac{MX}{M (2M-1)} \\
= & \frac{X}{2M-1}.
\end{align*}
Next, we show the claim when $X > M$:
\begin{align*}
& \rb{1-\frac{1}{M}}f(X-1) + \frac{1}{M} (f(X-M) + 1) \\
\ge & \rb{1 - \frac{1}{M}}\frac{X-1}{2M-1} + \frac{1}{M}\rb{\frac{X-M}{2M-1}+1} \\
= & \frac{MX}{M (2M-1)} \\
= & \frac{X}{2M-1}.
\end{align*}
\end{proof}
Thus, we have that $f(|OPT|) \ge \frac{|OPT|}{2M-1}$. As all machines are identical, the expected value of the schedule is the sum of their expected \textsc{Maximum-IS}\xspace values. Thus, the expected schedule size is at least $\frac{M |OPT|}{2M-1}$. Using an $\alpha$-approximation for each of these \textsc{Maximum-IS}\xspace subproblems yields a $(2-1/M)\alpha$-approximation, as advertised.
Note that this bound is tight as $n$ approaches infinity. Consider an instance with $n$ jobs, where job $i$ starts at time $i$ and ends at time $i+M$. If we simulate the classic greedy algorithm for \textsc{Maximum-IS}\xspace on a machine, it will see $M$ jobs in expectation until it sees one that is assigned to it (the expectation of a geometric random variable). To use this interval, the $M-1$ jobs after it cannot be used (they all intersect). Thus, for every job in the solution, in expectation the machine needed to throw away $2M-2$ other jobs, and thus as $n$ approaches infinity the expected schedule size approaches $\frac{Mn}{2M-1}$.
\section{Scheduling with Additional Machines}\label{section:additional-machines}
Up until now, we have focused on maintaining an approximation of the optimal solution given that we can use $M$ machines. We pose a new question that adds an additional dimension. What if we buy more machines, and compare ourselves to the optimal solution that uses the original number of machines we had? That is, perhaps it is difficult to maintain an approximately optimal solution with $M$ machines, compared to the optimal solution that uses $M$ machines. Instead, we may be willing to buy more machines so that we have e.g. $2M$ machines, with the goal of then having a solution that has approximately as much value as the optimal solution using $M$ machines. In terms of notation, we define an $(\alpha, \beta)$-scheduling algorithm as an algorithm that returns a schedule using $\beta M$ machines with total value at least $\frac{OPT(M)}{\alpha}$, where $OPT(M)$ denotes the optimal solution using $M$ machines. For example, we have already presented dynamic $((1+\varepsilon),1)$-scheduling algorithms for some variants of interval scheduling. Note that, because of diminishing returns for adding additional machines, it is very possible that an $(\alpha, \beta)$-scheduling guarantee is stronger than the guarantee of an approximation algorithm for using $\beta M$ machines with a slightly worse approximation ratio.
Naturally, our approaches for elevating \textsc{Maximum-IS}\xspace solutions to scheduling solutions perform better with additional machines.
\begin{lemma}
The greedy \textsc{Maximum-IS}\xspace approach provides a $((1+\varepsilon), \log{\frac{1}{\varepsilon}})$-scheduling algorithm. \textbf{Note that this result holds for the analogous coloring problem.}
\end{lemma}
\begin{proof}
TBD. Essentially, the remaining suboptimality is multiplied by $(1-\frac{1}{M})$ at every step.
\end{proof}
\begin{lemma}
The random machine approach provides an unweighted $(x,k)$-scheduling algorithm.
\end{lemma}
\begin{proof}
Solve recurrence TODO
\end{proof}
\begin{lemma}
The random machine approach provides a weighted $(x,k)$-scheduling algorithm.
\end{lemma}
\begin{proof}
Permutations of 2 and cover, TODO
\end{proof}
Moreover, we propose this may provide an interesting dimension in the online interval scheduling problem (online interval scheduling with a fixed number of machines has been studied before in \cite{?}). In this model, we receive intervals in order of their arrival time and must immediately decide whether or not to schedule them.
\begin{lemma}
There exists an online $(1,4\log{(\Delta)} \log{(w)})$-scheduling algorithm where $\Delta$ is the ratio of the shortest and longest interval lengths and $w$ is the ratio of lowest and highest interval weights.
\end{lemma}
\begin{proof}
\end{proof}
\section{Algorithm for $P(Q)$ of \cite{henzinger2020dynamic}}
\label{section:pq}
In our work, we use $P(Q)$ of \cite{henzinger2020dynamic} as a black-box to generate $Z(Q)$. \newtext{At a high level, $P(Q)$ is a set of weighted points where the points correspond to starting and ending times of jobs chosen by some greedy method.} But for our purposes, all we rely on is that $P(Q)$ has the following properties:
\begin{itemize}
\item For each $Q$, it holds that $w(P(Q)) \le 10 \cdot w(OPT(Q))$.
\item For each range $[L,R]$ within a cell $Q$, it holds that $w(P(Q)[L,R]) \ge \frac{w(OPT(Q,[L,R]))}{10}$.
\item Maintaining all $P(Q)$ every update takes $O(\log(w) \cdot \log(n) \cdot \log^{4} (N))$ time.
\end{itemize}
For completeness, we outline an overview of how $P(Q)$ is maintained. For a more rigorous explanation (including analysis), we encourage readers to read \cite{henzinger2020dynamic} (in particular, Section 3). Our explanation of the algorithm for obtaining $P(Q)$, designed by \cite{henzinger2020dynamic}, will closely mirror theirs in Section 3, although we will reframe terminology to focus only on the 1-dimensional setting (the approach works more generally for multiple dimensions). Also note that this approach yields a $(4+\varepsilon) 2^d$-approximation, but we will focus on $\varepsilon=1$ and $d=1$ so that this is simply a $10$-approximation.
First, we outline the primary intuition of $P(Q)$. Consider a global algorithm where one processes all jobs in increasing order of size. \newtext{(The fact that we consider jobs in the increasing order will be crucial in the explanation below.)} Then, we use the current job $J$ if and only if all the jobs it intersects have total weight of at most $w(J)/2$. This can be shown to be an $O(1)$-approximation. Henzinger et al.~use a similar approach that is easier to simulate with a data structure.
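The global small-to-large greedy just described can be sketched directly (our own minimal version for intuition, with eviction of displaced jobs as in the per-cell procedure described later; the data-structure simulation of Henzinger et al.\ avoids this explicit scan):

```python
def smallest_first_greedy(jobs):
    """jobs: list of (start, end, weight). Process jobs in increasing
    order of length; take a job J iff the currently kept jobs intersecting
    J have total weight at most w(J)/2, evicting those kept jobs."""
    kept = []
    for s, e, w in sorted(jobs, key=lambda j: j[1] - j[0]):
        hit = [k for k in kept if k[0] < e and s < k[1]]  # intersecting kept jobs
        if sum(k[2] for k in hit) <= w / 2:
            kept = [k for k in kept if k not in hit]
            kept.append((s, e, w))
    return kept
```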
\textbf{Reminder:} Recall that $Q_{root}$ is the root cell that contains the time range $[0,N]$, and every cell $Q$ has children $Q_L$ and $Q_R$ whose time ranges correspond to the left and right half of $Q$'s time range, respectively. Also recall how each job is assigned to a cell $Q$ such that the job's length is approximately $\varepsilon$ fraction of $Q$'s time range. As we selected $\varepsilon=1$, this means each job's length is $\Theta(1)$ fraction of its assigned cell's time range. $C'(Q)$ denotes all jobs assigned to some cell $Q$, and $C(Q)$ denotes all jobs assigned to $Q$ or a descendant of $Q$.
\paragraph{A greedy method.} We simulate a greedy method, one that considers jobs in increasing order of length, in a bottom-up fashion. More precisely, we make decisions in the descendants of a cell $Q$ before making decisions about jobs in $C'(Q)$. At every cell $Q$, we will ultimately decide a set $\overline{C}(Q) \subseteq C'(Q)$ of jobs to \newtext{\emph{potentially}} include in \newtext{our $10$-approximation}. We will include a job from $\overline{C}(Q)$ \newtext{in the final $10$-approximate solution} iff it does not intersect any job later chosen to be in $\overline{C}(Q')$ for any ancestor $Q'$ of $Q$. Because we assign jobs to cells such that they make up a $\Theta(1)$ fraction of the cell's time range, this enables us to simulate the small-to-large greedy by looking at $Q$'s descendants first (because they will have strictly smaller job lengths) and then making decisions for $Q$.
\paragraph{A motivation for designing $P(Q)$.} Now, we detail how to decide $\overline{C}(Q)$ for a cell $Q$ given that we have processed all of $Q$'s descendants. Intuitively, we do this iteratively by repeatedly finding the smallest job $J$ such that $J \in C'(Q)$ and the greedy method would choose $J$ (i.e., the total weight of \newtext{already chosen} jobs intersecting $J$ is at most $w(J)/2$).
We call a job \emph{addable} if it would be chosen by the greedy method (or more correctly, if it matches conditions we define next). \newtext{It is computationally expensive to} calculate the smallest addable job exactly, but we follow this intuition approximately. In particular, it is desirable that our algorithm be much faster than iterating over all possible jobs $J$ to check which jobs are addable.
To design a faster method for selecting addable jobs, we instantiate a data structure $P(Q)$ for the cell $Q$. $P(Q)$ will contain a set of \emph{weighted points}; note that $P(Q)$ does not contain a set of jobs, but rather weighted points corresponding to a set of jobs. In particular, for every job $J' \in \overline{C}(Q')$ for some $Q'$ that is a descendant of $Q$, the starting time and ending time of $J'$ are added to $P(Q)$ with weight $w(J')$.
It can be shown that instead of directly checking if a job $J$ is addable by calculating the total weight of all jobs $J$ intersects, one can use the total weight of points of $P(Q)$ contained within it. However, it still is not immediately clear how to use this to make updates efficient.
\paragraph{Efficient processing.} Utilizing $P(Q)$, we introduce an auxiliary grid structure $Z'(Q)$ that will partition the time range of $Q$. In particular, $Z'(Q)$ is a set of points of time, that we will call \emph{grid endpoints}. We choose the grid endpoints of $Z'(Q)$ such that in the time range between any two consecutive grid endpoints $(z'_i, z'_{i+1})$, that we call a \emph{grid slice}, the total weight of points of $P(Q)$ within the grid slice is $O(\frac{w(P(Q))}{\log(N)})$.\footnote{Having $\log(N)$ in the fraction is to account for the loss of approximation across all $\log{N}$ levels of our data structure. The approximation analysis would still hold if $\log(N)$ is replaced by any larger factor, although in that case the running time would increase.}
\newtext{
To determine whether a job $J$ is addable or not, we will ask whether there exists a segment $[z'_i, z'_j]$, with $z'_i, z'_j \in Z'(Q)$, such that $J$ is contained within this segment and the weight of points of $P(Q)$ inside $[z'_i, z'_j]$ is not significantly larger than the weight of points inside $J$.
This in turn enables us to answer whether $J$ is addable or not by inspecting only a single segment of $Z'(Q)$.} Hence, to list all addable jobs we iterate over all pairs of grid endpoints of $Z'(Q)$ and use a data structure to find the shortest/smallest job contained within those grid endpoints with at least some particular weight. This is much faster than iterating over all jobs and manually checking if they are addable.
We now provide additional details on how we perform search within $Z'(Q)$. We define $Z'(Q)$ by using weighted quantiles of $P(Q)$ such that $|Z'(Q)| = O(\log N)$. Then, at each iteration we iterate over all pairs of endpoints in $Z'(Q)$ to form segments $[z'_i, z'_j]$. For each $[z'_i, z'_j]$, we identify the smallest length job inside the segment with weight at least $2 \cdot w(P(Q)[z'_i,z'_j])$ (if any such job exists).
To eliminate the weight aspect from our problem, we round job weights to powers of $(1+\varepsilon)$ (in this case, powers of $2$ as $\varepsilon=1$), and maintain a separate data structure for each rounded weight. Searching for the smallest job inside some range is done with a standard balanced binary search tree data structure. After finding the smallest addable job $J$, we add $J$'s endpoints to $P(Q)$, add $J$ to $\overline{C}(Q)$, add the endpoints of $J$ to $Z'(Q)$, and remove any jobs in $\overline{C}(Q)$ that intersect with $J$ from $\overline{C}(Q)$, along with removing their endpoints from $P(Q)$ and $Z'(Q)$. We repeat until there is no addable job.
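Choosing grid endpoints by weighted quantiles can be sketched as follows (a simplified stand-alone version of ours; in the actual structure the number of slices $k$ would be $\Theta(\log N)$):

```python
def weighted_quantile_grid(points, k):
    """points: list of (time, weight) with distinct times; returns grid
    endpoints such that the weight of points strictly inside each grid
    slice is at most w(P)/k (a sketch of how Z'(Q) is chosen)."""
    points = sorted(points)
    budget = sum(w for _, w in points) / k
    grid, acc = [], 0.0
    for t, w in points:
        if acc + w > budget:      # this point would overflow the slice:
            grid.append(t)        # cut the grid here and start a new slice
            acc = 0.0
        acc += w
    return grid
```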
\cite{henzinger2020dynamic} show that $P(Q)$ maintains the guarantees we outlined at the beginning of this section (which our algorithm uses as a black-box), and prove that $P(Q)$ can be maintained in $O(\log w \cdot \log n \cdot \log^{4} N)$ time.
\section*{Acknowledgements}
We thank Benjamin Qi (MIT) for helpful discussions. S.~Mitrovi\' c was supported by the Swiss NSF grant No.~P400P2\_191122/1, NSF award CCF-1733808, and FinTech@CSAIL. R.~Rubinfeld was supported by the NSF TRIPODS program (awards CCF-1740751 and
DMS-2022448), NSF award CCF-2006664, and FinTech@CSAIL.
\bibliographystyle{alpha}
\section{Introduction}
The discovery of more than 760 exoplanets to date \citep{b18} has revolutionized our understanding of the architecture of planetary systems. One of the surprising key observations is the large eccentricity of exoplanet orbits. Compared to Jupiter's eccentricity of 0.05 and to the Earth's 0.017, the median eccentricity of exoplanet orbits is 0.2. Various dynamical mechanisms were proposed to account for this departure from the solar system's planetary standard. These mechanisms include: planet-planet scattering \citep{scatt3,scatt1,scatt2}, three-body secular Kozai oscillations \citep{koz2,koz3,koz1}, mean motion resonances \citep{res1,res2,res3}, stellar encounters \citep{stenc} and excitation induced by stellar jets (\cite{b19} hereafter Paper I, \cite{rj8}). Although some mechanisms are more efficient than others, attention has recently been devoted to planet-planet scattering owing to the claim that this mechanism reproduces the observed eccentricity distribution of exoplanet orbits. However, the eccentricity distribution produced from planet-planet scattering only approximates the observed distribution if the latter is cut off at an eccentricity of 0.2 \citep{scatt2}. Smaller planetary orbital eccentricities were attributed to the other possible mechanisms or a combination thereof. The threefold increase in the number of planets between 2008 and 2012 has provided better statistics and decreased the median eccentricity from 0.3 to 0.2 (or 0.23) if only planets with periods larger than 5 (or 20) days are included to account for tidal evolution bias. The excess of planets in the current enlarged sample with eccentricities smaller than 0.3 with respect to the Rayleigh distribution produced by planet-planet scattering \citep{scatt2} requires that the eccentricity cutoff applied to the observed distribution be increased to 0.3 leaving more than half the planet population with stirred orbits outside the scope of applicability of planet-planet scattering.
In this paper, we revisit the excitation of planetary orbits that results from momentum loss through stellar jets (Paper I) and attempt to assess quantitatively the effects of this mechanism. Orbital excitation through stellar jets is based on jet-counterjet asymmetry that has been observed in a significant fraction of star-disk systems \citep{b10,b11,b12,b14,b13,hh7, rj1,rj2, rj3, rj4, rj5}. As the planet orbits around the star and the inner disk, it sees that system accelerating away from it owing to asymmetric momentum loss. Excitation from a smooth time-varying jet-induced acceleration is a secular process that requires that the jet axis be inclined with respect to the disk's plane. It was shown in Paper I that the maximum eccentricity achieved is proportional to the sine of the mutual inclination of the planetary orbital normal and the jet axis. In this paper, we develop a realistic modeling of momentum loss as a time-variable stochastic process that results in a zero mean stellar acceleration and restrict our attention to systems where the jet system's axis is perpendicular to the initial planetary plane and secular excitation is absent. Momentum loss models include periodic or random polarity reversals that may be associated with the magnetic polarity reversals of the parent star and the inner star-disk interface. In section 2, we recall the basics of orbital excitation through stellar jets. In section 3, we characterize orbital excitation as a function of the variability time scale and acceleration standard deviation as well as in the presence of mutual planetary perturbations. We identify a fundamental excitation resonance between the planet orbital period and the variability timescale and show that it is an efficient excitation mechanism for both periodic and random reversal jet profiles at constant variability timescale. In section 4, we examine resonance crossing by modeling the time dependence of the variability time scale. 
In particular, it is found that resonance crossing is an efficient excitation mechanism for periodic polarity reversal profiles but not for random polarity reversal profiles. In the solar system, we find that resonance crossing with periodic polarity reversal and a time variability timescale that increases from 0.5 to 11 years (the current solar cycle half period) is able to reproduce Jupiter's and Saturn's orbital configuration and their inclination by $6^\circ$ with respect to the solar equator. Section 5 contains concluding remarks.
\section{Orbital excitation by asymmetric jet momentum loss}
Stellar jet asymmetry is observed in an increasing number of systems as the ejection velocities of the jet and counterjet differ by about a factor of 2 \citep{b10,b11,b12,b14,b13,hh7, rj1,rj2, rj3, rj4, rj5}. Jet launching regions (JLRs) are confined to the inner part of the disk with estimates from 0.01 AU for the X-wind model to a few AU for disk-wind models (e.g. 1.6 AU for RW Auriga, \citealt{b15}). The integrated momentum loss over the launching region accelerates the center of mass of the star-disk system which coincides with the star's center for axisymmetric star-disk systems. Gauss's theorem for the gravitational potential of the form $r^{-1}$ implies that as the planet orbits around the star and the inner disk, it sees that system accelerating away from it. The planet's orbital evolution may be described by the equation (\cite{rj7} hereafter Paper II, \cite{rj6}):
\begin{equation}
\frac{{\rm d} {\bm v}}{{\rm d} t}=-\frac{GM}{|{\bm x}|^3}\,{\bm
x} +{\bm A}, \label{motion}
\end{equation}
where ${\bm x}$ and ${\bm v}$ are the relative position and velocity of the planet with respect to the star and $G$ is the
gravitational constant. The mass $M$ is that of the star augmented by that of the inner disk and corresponds to the total mass contained in the JLR. Away from the JLR, the acceleration ${\bm A}$ that results from momentum loss is not a function of the distance to the star as it is the ratio of the integrated momentum loss over the jet launching region to the total mass $M$. If planets enter the JLR, they will be subject to an acceleration that is a monotonically increasing function of the distance to the star reaching the constant value $A$ at the outer edge of the JLR. Such a situation does not concern us in the present work. The effect of the disk's mass and its variations in $M$ may be neglected because the mass of the inner disk is small compared to the mass of the star and also because the mass loss is small and amounts to a correspondingly small dynamical effect given by the Jeans radial migration rate $\dot
r/r=10^{-8}$\,yr$^{-1}$ for $\dot M=10^{-8} M_\odot$\,yr$^{-1}$ \citep{jeans}. An order of magnitude estimate of the acceleration is given by:
\begin{equation}
A \sim 10^{-13}\,
\left(\frac{\dot M}{10^{-8} M_\odot\, \mbox{\rm yr}^{-1}}\right)\,
\left(\frac{v_e}{300 \,\mbox{\rm km\,s}^{-1}}\right)\,
\left(\frac{M_\odot}{M}\right) \mbox{\rm km\,s}^{-2}. \label{acc-mag}
\end{equation}
where $\dot M$ is the mass loss rate and $v_e$ is the jet's ejection velocity \citep{b9}. This estimate is an instantaneous lower bound on the total momentum loss, as we lack long-time-span observations of stellar jets as well as observations of the jet engine within a few AU of the star. Although small, the acceleration amplitude (\ref{acc-mag}) was shown to be a possible origin for the large eccentricities and secular resonances of extrasolar planets' orbits, provided the planets formed in a jet-sustaining disk (Paper I). The planet's orbital excitation time associated with the acceleration $A$ is given by $T_A=GMT/(3a^2|A|)$, where $T$ and $a$ are the orbital period and semimajor axis of the planet. When $T_A\gg T$, orbital excitation is adiabatic and the secular eccentricity increase is given by $e(t)= |\sin(\pi t/T_A)\,\sin I_0|$, where $t$ is time and $I_0$ is the inclination of the jets' axis to the planet's initial orbital normal (Paper I). This shows that eccentricity excitation does not occur when $I_0$ vanishes. In deriving this eccentricity evolution expression, the acceleration was assumed to be constant while the jets are active. When the jet-induced acceleration increases, as during the initial launching of the jets, or decreases, as in their final stages of activity, it was shown that the part of the disk outside the jet launching region expands, contracts and heats up as its state of minimum energy deviates from the usual star-disk mid-plane to a sombrero-shaped profile curved along the direction of acceleration (Paper II).
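The order-of-magnitude estimate (\ref{acc-mag}) and the adiabatic condition $T_A\gg T$ can be checked numerically. A minimal sketch, using rounded solar constants and a Jupiter-like orbit (all numerical choices below are illustrative, not from the text):

```python
import numpy as np

# Check of the estimate A ~ (Mdot/M) * v_e in Eq. (2)
yr = 3.156e7                      # seconds per year
Mdot_over_M = 1e-8 / yr           # mass loss rate over stellar mass, s^-1
v_e = 300.0                       # jet ejection velocity, km/s
A = Mdot_over_M * v_e             # acceleration, km/s^2
print(f"A ~ {A:.1e} km/s^2")      # of order 1e-13 km/s^2

# Excitation time T_A = G M T / (3 a^2 |A|) for a Jupiter-like orbit
GM = 1.327e11                     # km^3 s^-2, solar value
AU = 1.496e8                      # km
a = 5.2 * AU
T = 2 * np.pi * np.sqrt(a**3 / GM)   # orbital period, s
T_A = GM * T / (3 * a**2 * A)
print(f"T_A / T ~ {T_A / T:.1e}")    # T_A >> T: the adiabatic regime
```

The ratio $T_A/T$ comes out several orders of magnitude above unity, consistent with the adiabatic excitation assumed in Paper I.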
In the next section, we will find out under what circumstances it is possible for a jet system that is orthogonal to the planetary orbits (i.e. $I_0=0$) to excite them significantly. By modeling realistic time-variable jets (in contrast to the smooth models of Papers I and II), we show that there is a resonance between the variability timescale and the planet orbital period that may lead to large eccentricity and inclination as well as significant outward radial migration.
\section{Planet-jet variability resonance}
Stellar jets are time-variable processes \citep{var1, var4,var2, var3, b7,varn2,varn1,varn3,varn4,varn5}. Variability is attributed to variations of the ejection velocities of the jet and counterjet, possibly arising from mass loading at the base of the jets or from a variable magnetic field of the star and the star-disk interface.
As we lack long duration observations of stellar jets (the first such observations were made in 1994), modeling realistic asymmetric momentum loss from the jet system requires some assumptions. We choose the jet axis to be orthogonal to the initial planetary orbital plane. We model the time variations of the asymmetric momentum loss as a stochastic process. The acceleration that results from momentum loss is drawn from a normal distribution with zero mean and finite standard deviation $\sigma_A$ after a time $\tau$ that we call the variability time scale. Acceleration remains constant for the duration $\tau$ \footnote{If a planet enters the JLR, then $\sigma_A$ will depend on the star-planet distance. This situation does not occur in this work owing to the small accelerations involved.}. The mean acceleration is set to zero because we assume that the observed asymmetry in stellar jets is equally time variable in its direction. It is indeed reasonable to assume that an excess of linear momentum loss from one side of the accretion disk may in time be reversed by loss from the other side especially if the asymmetry's origin is related to the magnetic field configuration which may be subject to polarity reversals such as those observed in the solar cycle and other stars \citep{mc2,mc6,mc7, mc3,mc4,mc1, mc5,mc8}.
We consider two possible time variations for the momentum loss process. The acceleration may reverse periodically with a period equal to the variability time scale $\tau$, or it may reverse randomly. Periodic-reversal acceleration profiles are constructed from a normal distribution that is forced to reverse with the period $\tau$. For random reversals, the latter forcing is turned off. The equations of motion (\ref{motion}) are integrated for a given acceleration profile. For all simulations we monitor the residual velocity $V$ that the star acquires at the end of the simulation. We use enough profiles for a given parameter set so that the mean residual velocity and its standard deviation are less than a few km\,s$^{-1}$, which is much smaller than the stellar velocity dispersion in the Galaxy of order $\sim 30$\,km\,s$^{-1}$ \citep{r5}.
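The construction of the two kinds of piecewise-constant acceleration profiles described above can be sketched as follows (the function name and the array representation are our own choices, not the paper's implementation):

```python
import numpy as np

def acceleration_profile(t_end, tau, sigma_A, periodic, rng):
    """Piecewise-constant acceleration: one normal draw per interval of length tau.

    periodic=True forces the sign to reverse every tau (periodic polarity reversal);
    otherwise the sign of each fresh draw is random (random polarity reversal).
    """
    n = int(np.ceil(t_end / tau))
    draws = rng.normal(0.0, sigma_A, n)      # zero mean, std sigma_A
    if periodic:
        draws = np.abs(draws) * (-1.0) ** np.arange(n)
    return draws

rng = np.random.default_rng(42)
prof = acceleration_profile(t_end=60.0, tau=6.0, sigma_A=1e-13, periodic=True, rng=rng)
```

With `periodic=True` consecutive values always have opposite signs; with `periodic=False` reversals occur at random multiples of $\tau$, so the acceleration can stay on one side of the disk plane for longer than $\tau$.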
When quoting the acceleration standard deviation, $\sigma_A$, we prefer to use the more physical Keplerian boundary semimajor axis $a_{\rm kplr}$, defined as the distance from the star where its gravitational pull balances the secular excitation by the acceleration $\sigma_A$ (Paper I, \citealt{rj6}). Beyond this distance, orbits are no longer bound to the star. This distance is obtained by equating the excitation time $T_{\sigma_A}$ to the local orbital period $T$ and is given by:
\begin{equation}
a_{\rm kplr}=\left(\frac{GM}{3\sigma_A}\right)^\frac{1}{2}\simeq 10^3\,
\left(\frac{2\times 10^{-12}\ {\rm km}\, {\rm s}^{-2}}{\sigma_A}\right)^\frac{1}{2}\, \left(\frac{M}{M_\odot}\right)^\frac{1}{2}\, {\rm AU}.\label{akplr}
\end{equation}
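The numerical value quoted in Equation (\ref{akplr}) can be verified directly; a short sketch with rounded solar constants:

```python
import numpy as np

GM_sun = 1.327e11        # km^3 s^-2
AU = 1.496e8             # km
sigma_A = 2e-12          # km s^-2, the reference value in Eq. (3)
a_kplr = np.sqrt(GM_sun / (3 * sigma_A)) / AU
print(f"a_kplr ~ {a_kplr:.0f} AU")   # close to the quoted 10^3 AU
```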
Figures 1 and 2 show the results of two simulations of a Jupiter-mass planet located 5 AU away from a solar-mass star subject to an acceleration from a variable jet system with an acceleration standard deviation $\sigma_A$ corresponding to $a_{\rm kplr}=200$ AU and a variability time scale $\tau=6$ years. Figures 1 and 2 correspond to periodic and random acceleration reversals respectively (see the acceleration profile panels therein). The excitation of inclination, $I$, is found to be more important than that of eccentricity, $e$, and radial migration, $\Delta a$. The reason is as follows: although the acceleration is perpendicular to the planet's orbit, the orbit does not at any given time lie in the surface of least energy of the accelerated star. It was shown in Paper II that in the presence of acceleration, least energy orbits remain circular but their orbital plane no longer includes the star. These orbits hover above the star's equator and lie on a sombrero-shaped surface that is curved along the direction of acceleration. As the sombrero profile varies stochastically with the acceleration, inclination is systematically excited. It is only through the conservation of the vertical component of angular momentum that the orbital semimajor axis and eccentricity are forced to change. It was shown that these changes are of second order with respect to the perturbation. Except for the early orbital excitation phase, little difference is seen between the outcomes of periodic and random acceleration. Figures 1 and 2, however, each show only one realization of a stochastic process. We estimate the number of runs required to reach a significant conclusion by simulating the planet's evolution over $10^6$ years with 1000 periodic reversal acceleration profiles with the same variability timescale and acceleration standard deviation.
Figure 3 shows the dependence of the mean values and standard deviations of radial migration, eccentricity, inclination and residual velocity $V$ of the system in Figure 1 on the number of simulations. It is seen that more than 100 simulations are required to ascertain the probable outcome of a given parameter set. In particular, the average inclination is about $5^\circ$, roughly half that of Figure 1. We therefore choose, for a given parameter set, to integrate 500 acceleration profiles in order to determine the excitation outcome. A similar conclusion was reached for random reversal simulations.
We now address the dependence of excitation on the variability time scale. Figure 4 shows the orbital elements and residual velocities for the reference Jupiter planet at 5 AU after $10^6$ years orbiting the sun subject to a periodic reversal acceleration with a standard deviation of $a_{\rm kplr}=200$ AU. Orbital excitation extrema are found near the resonance of the variability timescale, $\tau$, with half the orbital period, $T/2$. Maxima occur at half integer periods while minima occur at integer periods. For instance, if the current solar cycle of 22 years (with inversion timescale $\tau=11$\,years) had influenced the polarity reversal of the early solar system's jets, then Jupiter at 5.2\,AU (period 11 years) would have been sitting near a location of minimum excitation. As the solar cycle period is likely to have evolved from a smaller period than 22 years, resonance crossing is likely to have excited the planet's orbit. This point is detailed in the next section. We note that whereas eccentricity peaks at the resonance $\tau=5.6$ years, inclination and semimajor axis migration peak slightly before it. Also, unlike the eccentricity excitation maxima, which are periodic with respect to the variability timescale, those of radial migration and inclination excitation decrease significantly with increasing variability timescale. Figure 5 shows how excitation evolves in time at $10^3$, $10^4$, $10^5$ and $10^6$ years, indicating its power-law-type dependence. We estimate this dependence as follows. The maximum inclination excitation during a duration $\tau$ when the acceleration is constant is obtained by solving the vertical equation of motion in
(\ref{motion}) under the assumption that $\sin I\ll 1$, where $I$ denotes inclination. It is found that $z={Aa^3}/{GM}\,(1-\cos nt)$ and therefore the average inclination increment is $\delta I= (a/a_{\rm kplr})^2/3$. Assuming excitation to be a random walk in inclination with increments $\delta I$ that take place $t/\tau$ times, we find an inclination amplitude of:
\begin{equation}
I={f(\tau)}\left(\frac{t}{\tau}\right)^\frac{1}{2} \left(\frac{a}{a_{\rm kplr}}\right)^2 \label{scaling}
\end{equation}
where $f$ is a function that depends on the variability timescale $\tau$. We checked that this scaling has the right dependence on time and also on $a_{\rm kplr}$ by simulating a smaller acceleration corresponding to $a_{\rm kplr}=300$ AU (Figure 6). In Paper II, we showed that the conservation of the vertical component of angular momentum, which forces the mean orbital radius to change in order to compensate for the increase of the vertical motion $\delta I$, leads to radial migration $\delta a$ and eccentricity increments $\delta e$ proportional to $(a/a_{\rm kplr})^4$. We checked that these scalings apply to the final excitation amplitudes of eccentricity and radial migration. Random polarity reversal simulations have similar scalings, but their amplitudes (e.g. $f(\tau)$ in Equation (\ref{scaling})) are much larger than those of periodic polarity reversal (Figure 7). This difference is due to the fact that whereas $\tau$ is both the acceleration amplitude variability timescale and the polarity reversal timescale for periodic reversal simulations, it is only the minimum polarity reversal timescale and the acceleration amplitude variability timescale for random reversal simulations. For the latter simulations, the acceleration stays on the same side of the disk plane for longer than $\tau$ and can excite the planets more strongly than in the corresponding case with periodic reversal.
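The $\sqrt{t/\tau}$ factor in Equation (\ref{scaling}) reflects the random-walk assumption; a toy check that the RMS of $N$ random $\pm\delta I$ increments grows as $\delta I\sqrt{N}$ (increment size and ensemble counts are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
delta_I = 0.01                   # inclination increment per interval tau (arbitrary)
n_steps, n_walks = 400, 5000     # t/tau steps, ensemble size
steps = rng.choice([-delta_I, delta_I], size=(n_walks, n_steps))
I_final = np.sum(steps, axis=1)
rms = np.sqrt(np.mean(I_final**2))
print(rms, delta_I * np.sqrt(n_steps))   # the two values agree to a few percent
```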
Mutual planet perturbations are known to affect orbital excitation by asymmetric momentum loss. In the case of the secular excitation regime studied in Paper I, mutual gravitational interaction between two planets works against excitation by asymmetric momentum loss through secular perturbations of the orbits. If the jet-induced acceleration is not strong enough to impose pericenter alignment during excitation, the eccentricity and inclination amplitudes may be reduced significantly with respect to their values obtained without mutual perturbation. With stochastic excitation the situation is different and the process of pericenter alignment is irrelevant, as there is no preferred excitation direction as far as the planet's orbit is concerned. In effect, the inclination and eccentricity increments acquired at each stochastic change of the acceleration amplitude redefine a new excitation direction for the orbit, which is consequently random. We therefore expect mutual planet perturbations to reinforce the resonant excitation by stochastic asymmetric momentum loss because the minimum excitation amplitude of the inner planet will not correspond to an excitation minimum for the outer planet. Entrainment by mutual interactions may therefore increase the excitation amplitude of the inner planet. Figure 8 shows the results of the interaction of a Jupiter-mass planet at 5\,AU and a Saturn-mass planet at 8.5\,AU under the influence of periodic polarity reversal momentum loss with an acceleration standard deviation of $a_{\rm kplr}=200\,$AU. Each point corresponds to 500 acceleration profiles, each integrated over $10^6$ years. The outer planet was not set at 9\,AU in anticipation of the outward migration that orbital excitation will generate. The excitation minima of eccentricity and inclination (but not the semimajor axis) for the inner planet are erased.
The amplitude dependence follows to a certain extent the envelope of both inner and outer resonances, respectively at $\tau=5.6$ and $12.4$ years (based on the initial semimajor axes). We also note that the orbits' relative inclination ($\sim 2^\circ$--$4^\circ$) is quite large compared to that of Jupiter and Saturn in the solar system ($\sim1^\circ$), whereas their mean inclination ($\sim 5^\circ$--$7^\circ$) is near the observed value of $6^\circ$. Applying a random polarity reversal stochastic acceleration to the Jupiter-Saturn pair results in larger excitation amplitudes (as seen in Figure 7) but with a similar dependence on the variability timescale. For conciseness, we do not report the corresponding figures and move to the more physically relevant phenomenon of resonance crossing, where the variability timescale is itself time-variable.
\section{Time-dependent variability timescales and resonance crossing}
The identification of the planet-jet variability resonance using a constant variability timescale allowed us to characterize quantitatively the excitation amplitudes of the eccentricity, inclination, and semimajor axis migration. In reality, however, it is likely that there are several variability timescales associated with momentum loss, some of which may themselves vary with time. This would occur particularly if the variability timescale is related to the polarity reversals of the magnetic field of the star-disk interface within the jet-launching region. Stellar polarity reversal cycles are observed in several stars and seem to be correlated with the rotation rate, as solar-mass stars rotating faster than the Sun tend to have shorter polarity reversal times \citep{mc5}. For instance, spectropolarimetric observations using Zeeman Doppler imaging showed that the planet hosting star $\tau$~Bootis has cyclic polarity reversals with $\tau=1$ year \citep{mc2}, although such a cycle has yet to be confirmed by X-ray observations and optical spectra of the star \citep{mc8}. Numerical simulations of the generation of magnetic fields in young rotating stars confirm the correlation between star rotation and magnetic cycle period and showed that a solar mass star rotating five times faster than the Sun had polarity reversals with $\tau=4$ years instead of the solar value $\tau=11$ years \citep{mc1}. As a star evolves toward slower rotation and hence toward larger magnetic cycle periods, the surrounding planets will cross the corresponding planet-jet variability resonances. We assess the effect of resonance crossing on orbital excitation by modeling the evolution of a Jupiter-mass planet at 5 AU under stochastic momentum loss whose variability time scale $\tau$ increases from $\tau=0.5$ to 11 years. As the outcome of resonance crossing depends on how fast resonance is traversed, we use the time it takes $\tau$ to reach 11 years, denoted $t_r$, as our parameter of crossing velocity.
For the time dependence of $\tau$ we experimented with various exponential and power-type laws. Here we report on two time profiles. The first is the square root law, which comes up naturally if the variability timescale is assumed to increase at a fixed rate $\Delta \tau$ after a duration equal to $\tau$. In this case $2\tau=[(\Delta \tau+ 2\tau_i)^2+ 8 \Delta\tau t ]^{1/2}-\Delta \tau$, where $\Delta \tau=(\tau_f^2-\tau_i^2)/2t_r$, and $\tau_i=0.5$ years and $\tau_f=11$ years are the initial and final values. The exponential profile is given as $\tau= \tau_f-(\tau_f-\tau_i)\exp(-5t/t_r)$. For a time-variable timescale, convergence of the stochastic simulations towards a mean, for a given acceleration parameter set, is achieved after a few hundred runs. We therefore increased the number of simulations to 2000, each integrated for the duration $t_r$. Figure 9 shows the amplitudes obtained from resonance crossing for a single Jupiter-mass planet as a function of the duration $t_r$ (we remind the reader that each curve point corresponds to a different time evolution of the variability time scale $\tau$). Unlike the case of constant variability time scales, it is periodic and not random polarity reversal that achieves the strongest excitation amplitudes (compare upper and lower row amplitudes). Maximum inclination for random reversal simulations is even smaller than that with constant variability timescales (Figure 7). For periodic polarity reversal simulations, resonance crossing is evident through two features. The first is the ``sudden'' increase of eccentricity and inclination amplitudes as well as outward semimajor axis migration. This shows that fast crossing of the planet-jet variability resonance is an inefficient mechanism to excite planet orbits: for the square root (exponential) law, resonance crossing occurs around $0.25 \,t_r$ ($0.14\,t_r$).
The second feature is the large excitation amplitudes compared to the corresponding values for constant variability timescales (Figures 4 and 5). Maximum inclination for a constant variability timescale with $a_{\rm kplr}=200$ AU is of order $2^\circ$ and $5^\circ$ after $10^5$ and $10^6$ years respectively, whereas it peaks at $25^\circ$ for a time-variable $\tau$. Acceleration strength affects resonance crossing mainly through the onset of excitation, as the excitation amplitudes remain more or less comparable. For the square root law, strong excitation by resonance crossing is triggered after $t_r=1.2\times 10^5$ years for $a_{\rm kplr}=200$ AU and is delayed until $t_r=3.7 \times 10^5$ years for $a_{\rm kplr}=300$ AU.
The choice of a relaxation time of $t_r/5$ in the exponential law makes resonance crossing inherently faster than that of the square root law, explaining why strong excitation is triggered only after $t_r=3 \times 10^5$ years for the same acceleration strength ($a_{\rm kplr}=200$ AU). In effect, the crossing velocity at resonance for the exponential law reads $d\tau/dt=2.5 \, \tau_f/t_r$ whereas it is given as $d\tau/dt=\tau_f/t_r$ for the square root law. The resonance crossing velocity that triggers excitation for the square root law at $t_r= 1.2\times 10^5$ years is $d\tau/dt = 10^{-4}$ and corresponds to the observed longer time $t_r$ for the exponential law. The exponential law however has different excitation amplitudes, especially that of eccentricity. As the crossing velocity becomes smaller with increasing $t_r$, inclination excitation and orbital migration decrease slowly from their peak values whereas eccentricity excitation becomes independent of crossing velocity. Going back to the random polarity reversal simulations, we note that excitation amplitudes increase monotonically with decreasing crossing velocity (increasing $t_r$) and that, unlike periodic polarity reversal, excitation is weaker for the exponential law.
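The two variability-timescale laws and their resonance-crossing epochs can be evaluated directly; a sketch using the values from the text (the function names and the grid resolution are our own choices, and the resonance value $\tau=5.6$ years is that of the planet at 5 AU):

```python
import numpy as np

tau_i, tau_f = 0.5, 11.0                       # initial and final timescales, years

def tau_sqrt(t, t_r):
    # square root law: 2*tau = sqrt((d + 2*tau_i)**2 + 8*d*t) - d
    d = (tau_f**2 - tau_i**2) / (2 * t_r)
    return 0.5 * (np.sqrt((d + 2 * tau_i)**2 + 8 * d * t) - d)

def tau_expo(t, t_r):
    # exponential law with relaxation time t_r / 5
    return tau_f - (tau_f - tau_i) * np.exp(-5 * t / t_r)

t_r = 1.2e5                                    # years
t = np.linspace(0.0, t_r, 200001)
tau_res = 5.6                                  # resonance with half the orbital period
frac_sqrt = t[np.argmax(tau_sqrt(t, t_r) >= tau_res)] / t_r
frac_expo = t[np.argmax(tau_expo(t, t_r) >= tau_res)] / t_r
print(frac_sqrt, frac_expo)                    # near 0.25 and 0.14, as quoted above
```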
We assess the effect of mutual planet interactions on orbital excitation using the two-planet model of the previous section. Figure 10 shows the excitation amplitudes of Jupiter and Saturn that were initially on circular orbits at 5 AU and 8.5 AU respectively. As with the single planet simulations, random polarity reversal leads to smaller excitation amplitudes. For periodic polarity reversal simulations, mutual planet interactions modify the excitation amplitudes of the inner planet, as maximum inclination is reduced by $\sim 40\%$ whereas maximum eccentricity is increased by $\sim 50\%$ regardless of acceleration strength and of the time-dependence law of the timescale. The outer planet has an interesting response to stochastic excitation in that: first, its inclination excitation profile and amplitude are similar to those of the inner planet alone (Figure 9). Second, its eccentricity excitation amplitude is an order of magnitude larger than that of the constant timescale simulations (Figure 8). Third, for slow resonance crossing velocities, the planet may migrate several hundreds of AU away from the star. For periodic polarity reversal simulations, the mean relative inclination of the two planets is small before resonance crossing is able to strongly excite the orbits, as well as slightly afterwards. For smaller resonance crossing velocities $d\tau/dt$, the relative inclination becomes almost as large as the excited inclination of the inner planet with respect to the star's equator, and is correlated with significant outward migration. For random polarity reversal, relative inclination increases steadily with decreasing crossing velocity whereas substantial outward migration is absent.
We can use the excitation amplitudes of Figure 10 to locate the actual orbits of Jupiter and Saturn and constrain the momentum loss process in the early solar system. The relative inclination of the two giant planets, which is less than $1^\circ$, disfavors random polarity reversal momentum loss, as small relative inclinations are correlated with much smaller eccentricities than those observed. We are left with periodic polarity reversal momentum loss. To determine the resonance crossing velocity, or equivalently $t_r$, we combine the small relative inclination of Jupiter and Saturn with the observed inclination of Jupiter's orbit with respect to the Sun's equator ($6^\circ$), which essentially defines the invariable plane of the solar system as Jupiter is its most massive body. The current orbits would then be obtained shortly after the onset of resonance crossing excitation with the square root law near $t_r=1.2$ to $1.3\times 10^5$ years and $a_{\rm kplr}=200$ AU. The semimajor axis migration of the planets brings their initially smaller orbits to their current sizes. Choosing a weaker acceleration standard deviation with $a_{\rm kplr}=300$ AU and $t_r= 3.5 \times 10^5$ years would produce a larger semimajor axis than Saturn's current orbit, only because the planet was started at 8.5 AU. In this respect, our results about the solar system should be regarded as a demonstration of principle of how stochastic momentum loss can produce the current orbits of Jupiter and Saturn. In this sense, this demonstration is quite encouraging if we recall that each amplitude shown in Figure 10 is an average over 2000 momentum loss profiles.
In order to constrain the momentum loss process more precisely, all planets need to be included, as well as minor bodies, in particular those that are dynamically decoupled from the solar system's planets, such as the dwarf planet Sedna, which may be accounted for by the significant outward migration from smaller orbits produced by stochastic momentum loss.
\section{Conclusion}
In this work we examined quantitatively the excitation of planetary orbits by stellar jet stochastic momentum loss that on average does not accelerate the star. We modeled momentum loss using two main parameters, the acceleration standard deviation and the variability timescale, along with two polarity reversal modes, random and periodic. In particular, we did not invoke a prior inclination of the jet axis, as it was taken to be perpendicular to the initial planetary orbits. Whereas secular excitation by asymmetric momentum loss requires such an inclination (Paper I), stochastic momentum loss does not and may achieve far greater amplitudes than secular excitation. Stochastic momentum loss is efficient at the resonance of the planet's period with the variability timescale. Random polarity reversal appears to cause greater excitation for constant variability timescales, but it fails to compete with periodic polarity reversal when the variability timescale is time dependent. If polarity reversal is related to the magnetic field of the star-disk interface, then the reversal timescale will increase during the braking of the star's rotation, as indicated by observations of solar-type stars and numerical simulations of young stars' magnetic fields. As the variability timescale increases, resonance crossing by the planets' orbits may excite them significantly. We have characterized such excitation and showed that the greatest diversity of orbital outcomes occurs with periodic polarity reversal and is determined by how fast the planet-jet variability resonance is crossed. The smallest crossing velocities produce the most extreme systems. In particular, planets can migrate several hundred AU away from the planet forming region around a solar mass star.
Periodic polarity reversal stochastic momentum loss can statistically explain the current configuration of the solar system's Jupiter and Saturn, and particularly the hitherto unknown origin of the inclination of Jupiter's orbit with respect to the solar equator. Although our study focused on planetary systems, stellar companions of a jet-sustaining star are affected similarly by stochastic momentum loss. Perhaps the most promising result in this work is the possible link between stellar magnetic cycles and the dynamical architecture of planetary companions. This may prove a valuable tool to constrain the magnetic history of planet-hosting or binary stars and to understand the observed diversity of planetary systems.
\acknowledgements
The author thanks the reviewer for useful comments. The numerical calculations in this work were done at the high performance computing center M\'esocentre {\sc sigamm} hosted at the Observatoire de la C\^ote d'Azur.
\newpage
\section{Introduction}
The Bayesian approach has been widely applied to inverse problems and parameter inference models \cite{dashti,kaipio,stuart,stuart1,tarantola,marzouk1,marzouk2,yan1,yan2,yan3,yang}. It provides a handy framework for data analysis in real engineering problems \cite{beck,hadidi,yuen}, based on Bayes' theorem,
\begin{align}\label{in1.1}
\mu^y(dx)=\frac{f(y| x)\mu(dx)}{\int f(y| x)\mu(dx)},
\end{align}
where the distribution $\mu$ characterizes prior knowledge about the unknown parameter $x$ and $f(y| x)$ determines the likelihood function. In real applications, two key aspects of the Bayesian method challenge engineers and researchers: the choice of the prior distribution and the acceleration of the simulation. First, the prior information is encoded before the measured data are obtained. This means that one needs an initial understanding of the unknown parameter and must make some assessment according to experience and the actual situation. However, this is usually a challenging task in some real problems, e.g., reservoir characterization, non-destructive inspection, CT, etc. This prompts us to explore broader and more suitable prior distributions for specific problems. In \cite{stuart,stuart1}, some prior distributions, e.g., Gaussian, uniform and Besov priors, have been discussed for ill-posed operator equations. In this paper, we introduce the q-Gaussian distribution, a q-analogue of the Gaussian distribution, into the study of inverse problems. More information on the q-Gaussian distribution is provided below.
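As a concrete illustration of Bayes' theorem (\ref{in1.1}), a discretized one-dimensional posterior with a standard Gaussian prior and an identity forward map (all numerical values below are illustrative choices, not from the text):

```python
import numpy as np

x = np.linspace(-4.0, 4.0, 801)
dx = x[1] - x[0]
prior = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)   # standard Gaussian prior mu
y, sigma = 1.2, 0.5                                 # datum and noise level
lik = np.exp(-0.5 * ((y - x) / sigma)**2)           # likelihood f(y|x)
post = lik * prior
post /= post.sum() * dx                             # normalizing integral of Eq. (1)
post_mean = (x * post).sum() * dx
print(post_mean)                                    # conjugate result: y/(1+sigma^2) = 0.96
```

For this Gaussian-Gaussian pair the posterior is available in closed form, which makes the grid computation easy to verify.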
On the other hand, in the Bayesian framework parameters are frequently estimated by Markov chain Monte Carlo (MCMC) sampling techniques, which typically have slow convergence. In fact, MCMC methods sequentially evaluate the posterior probability density at many different points in the parameter space, and the forward model must be solved for each sampled parameter to determine the likelihood function. This is a computationally intensive undertaking (e.g., the solution of a system of PDEs).
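A minimal random-walk Metropolis sketch showing that every MCMC step costs one forward-model evaluation; the quadratic `forward` is a stand-in of our own for an expensive PDE solve:

```python
import numpy as np

n_evals = [0]

def forward(u):
    # stand-in for an expensive forward simulation; count each call
    n_evals[0] += 1
    return u**2

def log_posterior(u, y=1.0, sigma=0.1):
    # Gaussian likelihood times a standard Gaussian prior (up to a constant)
    return -0.5 * ((y - forward(u)) / sigma)**2 - 0.5 * u**2

rng = np.random.default_rng(1)
u = 0.5
lp = log_posterior(u)
samples = []
for _ in range(5000):
    u_prop = u + 0.2 * rng.standard_normal()
    lp_prop = log_posterior(u_prop)
    if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
        u, lp = u_prop, lp_prop
    samples.append(u)
print(n_evals[0])                              # one forward solve per step, plus the initial one
```

When `forward` is a PDE solve, this per-step cost is exactly what surrogate and spectral-approximation methods aim to remove.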
Therefore, numerical acceleration algorithms are key for real applications. One important class of acceleration methods reduces the computational cost of solving a statistical inverse problem \cite{frangos} by reducing the cost of forward simulations, reducing the dimension of the input space, or reducing the number of samples.
The q-Gaussian distribution, as an analogue of the Gaussian distribution, has been discussed by many authors (see \cite{bryc,bozejko1,bozejko2,leeuwen} and references therein) and is widely used in quantum physics \cite{simon1,simon2}. In classical probability, the central limit theorem shows that the standardized sum of $n$ classically independent, identically distributed random variables converges to a Gaussian random variable as $n$ goes to infinity. This result depends on the commutative notion of independence. However, this conventional commutative relation is unsuitable for some real applications, and some extensions are necessary.
In \cite{bozejko1}, Bo\.{z}ejko and Speicher generalize commutative independence in a deformation of Brownian motion by introducing a parameter $q\in [-1, 1]$: $q=1$ corresponds to the classical case, $q=-1$ to anti-commutative independence, and $q=0$ to free independence. These notions of independence are used to characterize some quantum physics phenomena \cite{maassen,meyer,speicher,voiculescu}.
The density function of the q-Gaussian distribution is represented by an infinite series, and the truncation error of its partial sums is discussed in \cite{szablowski}.
Compared with classical Gaussian random variables, q-Gaussian variables for $-1<q<1$ are bounded, which allows us to model bounded physical parameters, e.g., the diffusion coefficients in heat conduction problems or the order of a fractional diffusion equation. In addition, for large $q$ (greater than some constant $q_0$), the density functions of the q-Gaussian distribution are unimodal, while for small $q$ they are bimodal. Bimodal probability distributions have important applications in economic and natural problems. For more discussion of the q-Gaussian distribution, one can refer to \cite{bozejko1,bozejko2,szablowski}.
Numerical acceleration has long been a concern of scientists and engineers. As stated above, in statistical inference problems the main acceleration ideas include improving the sampling efficiency, reducing the dimensionality of the input parameter, and reducing the evaluation cost of the forward problem. For improving sampling efficiency, see \cite{christen,cui,higdon,efendiev}. In \cite{cui}, Cui et al. integrate the reduced-order model construction process into an adaptive MCMC algorithm, in which the reduced-order model is used to increase the efficiency of MCMC sampling.
For reducing the dimensionality, a standard method is to expand the unknown parameter in its Karhunen-Lo\'{e}ve expansion according to the given prior knowledge \cite{dashti,stuart} and truncate the expansion to a partial sum. The expansion coefficients of the truncated series are viewed as the substitute for the unknown.
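A sketch of this dimension reduction for a Gaussian prior with an exponential covariance kernel (the grid, correlation length, and truncation order are illustrative choices of our own):

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)   # exponential covariance kernel
vals, vecs = np.linalg.eigh(C)                        # discrete Karhunen-Loeve eigenpairs
order = np.argsort(vals)[::-1]                        # sort eigenvalues descending
vals, vecs = vals[order], vecs[:, order]

M = 10                                                # truncation order
xi = np.random.default_rng(0).standard_normal(M)      # the M coefficients replace the field
field = vecs[:, :M] @ (np.sqrt(vals[:M]) * xi)        # truncated KL realization
captured = vals[:M].sum() / vals.sum()
print(captured)                                       # fraction of prior variance retained
```

The infinite-dimensional unknown is thus replaced by the $M$ coefficients $\xi_1,\dots,\xi_M$.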
For reducing the evaluation cost of the forward problem, one usually tries to transform the complex forward model into a simplified or coarsened version, e.g., by model reduction methods, or to construct an approximation or `surrogate' of the forward problem. A lot of research has been devoted to these fields, for instance model reduction and surrogate-based approaches \cite{arridge,frangos,galbally,jin,lieberman,manzoni}, generalized polynomial chaos (gPC) methods \cite{marzouk1,marzouk2,marzouk3,xiu1,xiu2,yan1,yan2} and Gaussian process regression methods \cite{kennedy,rasmussen,stuart2}.
Recently, building on the surrogate method for the forward model, some authors proposed a more direct surrogate algorithm, the spectral likelihood approximation method \cite{nagel}. This approach does not replace the forward model directly, but instead replaces the likelihood function with an orthogonal polynomial expansion. In this approach, the polynomial chaos expansion (PCE) has a clearer mathematical interpretation.
In this paper, we consider a spectral likelihood approximation approach based on q-Hermite polynomials, which are orthogonal with q-Gaussian distribution weight.
This paper is organized as follows. In Section 2, we introduce some basic knowledge about the q-Gaussian distribution and q-Hermite polynomials, and give a convergence rate for the truncated q-Hermite polynomial expansion. In Section 3, Bayesian inversion with a q-Gaussian prior is stated.
In Section 4, we consider the polynomial chaos expansion of the likelihood function based on q-Hermite polynomials.
In Section 5, we analyze the Kullback-Leibler divergence of the two approximation processes: the likelihood approximation and the prior approximation.
Two numerical examples are given in Section 6.
\section{Preliminaries}
In this section, we first give some basic concepts and notation and then analyze the convergence rate of the truncated q-Hermite polynomial expansion. We discuss only the one-dimensional case in this section; the multi-dimensional case is a direct extension.
Denote for $n\in \mathbb{N}_0$ and $-1<q<1$
\begin{align}\label{qh2.1}
&[n]_q:=\frac{1-q^n}{1-q}=1+q+\cdots+q^{n-1},\,\, [0]_q:=0,\\
&(a; q)_n=\prod_{k=0}^{n-1}(1-aq^k).
\end{align}
The density function $f^{(q)}(x)$ of the q-Gaussian \cite{bozejko2} is supported on the interval $[-\frac{2}{\sqrt{1-q}}, \frac{2}{\sqrt{1-q}}]$, on which
\begin{align}\label{qha2.3}
f^{(q)}(x)=\frac{1}{\pi}\sqrt{1-q}\sin\theta\prod\limits_{n=1}^\infty (1-q^n)|1-q^ne^{2i\theta}|^2
\end{align}
with $x=\frac{2}{\sqrt{1-q}}\cos\theta$, $\theta\in(0, \pi)$ and $i=\sqrt{-1}$.
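As a numerical sanity check, the infinite product in \eqref{qha2.3} can be truncated and evaluated directly (the function names below are illustrative, not from any particular library); with the normalization above, the truncated product should integrate to one and have unit variance:

```python
import numpy as np

def q_gaussian_pdf(x, q, n_factors=200):
    """q-Gaussian density on (-2/sqrt(1-q), 2/sqrt(1-q)) via the
    infinite product, truncated to n_factors factors (q^n decays fast)."""
    x = np.asarray(x, dtype=float)
    theta = np.arccos(np.clip(x * np.sqrt(1.0 - q) / 2.0, -1.0, 1.0))
    prod = np.ones_like(theta)
    for n in range(1, n_factors + 1):
        qn = q**n
        prod *= (1.0 - qn) * np.abs(1.0 - qn * np.exp(2j * theta))**2
    return np.sqrt(1.0 - q) / np.pi * np.sin(theta) * prod

def trapez(y, x):
    # plain trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

q = 0.4
L = 2.0 / np.sqrt(1.0 - q)
xs = np.linspace(-L, L, 4001)
fx = q_gaussian_pdf(xs, q)
mass = trapez(fx, xs)          # total mass, expected close to 1
var = trapez(xs**2 * fx, xs)   # second moment, expected close to [1]_q = 1
```

For $q=0$ the product collapses to the semicircle density $\sqrt{4-x^2}/(2\pi)$, which is a convenient analytic cross-check.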
The density function $f^{(q)}(x)$ admits the following expansion, together with an estimate of the truncation error \cite{szablowski}:
\begin{Lemma}\label{lemma2.1}
For $-1<q<1$, one has for $x\in [-\frac{2}{\sqrt{1-q}}, \frac{2}{\sqrt{1-q}}]$
\begin{align}\label{qh_a2.6}
f^{(q)}(x)=\frac{\sqrt{1-q}}{2\pi}\sqrt{4-(1-q)x^2}\sum_{k=1}^\infty
(-1)^{k-1}q^{\binom{k}{2}}T_{2k-2}\Big(\frac{x\sqrt{1-q}}{2}\Big),
\end{align}
where $T_k(x)$ is the Chebyshev polynomial of the second kind defined by
\begin{align*}
T_k(x)=\frac{\sin((k+1)\arccos x)}{\sqrt{1-x^2}}.
\end{align*}
Denote
\begin{align}\label{qh_a2.7}
f^{(q)}_J(x):=\frac{\sqrt{1-q}}{2\pi}\sqrt{4-(1-q)x^2}
\sum_{k=1}^{J-1}
(-1)^{k-1}q^{\binom{k}{2}}T_{2k-2}\Big(\frac{x\sqrt{1-q}}{2}\Big).
\end{align}
Moreover, for $J\geq 4$, the following estimate holds:
\begin{align}\label{qh_a2.8}
\sup_{|x|<2/\sqrt{1-q}}|f^{(q)}(x)-f_J^{(q)}(x)|\leq \frac{|q|^{(J-1)(J-2)/2}}{\pi(1-q^2)^2}.
\end{align}
\end{Lemma}
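Lemma \ref{lemma2.1} can be checked numerically by comparing the truncated Chebyshev series $f^{(q)}_J$ with the infinite-product form of the density; the sketch below (all helper names illustrative) verifies the bound \eqref{qh_a2.8} for one choice of $q$ and $J$:

```python
import numpy as np

def q_gaussian_pdf(x, q, n_factors=200):
    """q-Gaussian density via the truncated infinite product."""
    x = np.asarray(x, dtype=float)
    theta = np.arccos(np.clip(x * np.sqrt(1.0 - q) / 2.0, -1.0, 1.0))
    prod = np.ones_like(theta)
    for n in range(1, n_factors + 1):
        qn = q**n
        prod *= (1.0 - qn) * np.abs(1.0 - qn * np.exp(2j * theta))**2
    return np.sqrt(1.0 - q) / np.pi * np.sin(theta) * prod

def cheb_U(k, t):
    """Chebyshev polynomials of the second kind via the recurrence."""
    U0, U1 = np.ones_like(t), 2.0 * t
    if k == 0:
        return U0
    for _ in range(k - 1):
        U0, U1 = U1, 2.0 * t * U1 - U0
    return U1

def f_J(x, q, J):
    """Truncated series (qh_a2.7): the first J-1 Chebyshev terms."""
    x = np.asarray(x, dtype=float)
    t = x * np.sqrt(1.0 - q) / 2.0
    s = np.zeros_like(x)
    for k in range(1, J):
        # binom(k, 2) = k(k-1)/2
        s += (-1)**(k - 1) * q**(k * (k - 1) // 2) * cheb_U(2 * k - 2, t)
    pref = np.sqrt(1.0 - q) / (2.0 * np.pi)
    return pref * np.sqrt(np.maximum(4.0 - (1.0 - q) * x**2, 0.0)) * s

q, J = 0.5, 6
xs = np.linspace(-2/np.sqrt(1-q) + 1e-6, 2/np.sqrt(1-q) - 1e-6, 2001)
sup_err = np.max(np.abs(q_gaussian_pdf(xs, q) - f_J(xs, q, J)))
bound = abs(q)**((J - 1) * (J - 2) / 2) / (np.pi * (1 - q**2)**2)
```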
Let $\mathcal{I}_q:=(\tilde{x}-\frac{2\sqrt{\Xi}}{\sqrt{1-q}}, \tilde{x}+\frac{2\sqrt{\Xi}}{\sqrt{1-q}})$ and let $\mathcal{L}^{2}_{\mu_q}:=\mathcal{L}^{2}_{\mu_q}(\mathcal{I}_q)$ be the Hilbert space of functions that are square integrable with respect to the measure
\begin{align}\label{pc4.1}
\mu_q(dx):=\frac{1}{\sqrt{\Xi}}f^{(q)}(\frac{x-\tilde{x}}{\sqrt{\Xi}})dx.
\end{align}
To simplify, we set $\tilde{x}=0, \Xi=1$.
The inner product $(\cdot, \cdot)_{\mathcal{L}^{2}_{\mu_q}}$ and norm $\|\cdot\|_{\mathcal{L}^{2}_{\mu_q}}$ are defined by
\begin{align}\label{pc4.2}
&(\psi_1, \psi_2)_{\mathcal{L}^{2}_{\mu_q}}=\int_{\mathcal{I}_q}\psi_1(x)\psi_2(x)\mu_q(dx), \,\, \forall \psi_1, \psi_2\in \mathcal{L}^{2}_{\mu_q},\\
&\label{pc4.3} \|\psi\|_{\mathcal{L}^{2}_{\mu_q}}=\sqrt{\int_{\mathcal{I}_q}|\psi(x)|^2\mu_q(dx)},\,\, \forall \psi\in\mathcal{L}^{2}_{\mu_q}.
\end{align}
The q-Hermite polynomials \cite{koekoek} are determined by the recurrence relation
\begin{align}\label{qh2.2}
xH_n^{(q)}(x)=H_{n+1}^{(q)}(x)+[n]_qH_{n-1}^{(q)}(x),\,\, n\geq 1
\end{align}
with $H_0^{(q)}(x)=1$ and $H_1^{(q)}(x)=x$.
They are orthogonal with respect to the measure \eqref{pc4.1}; the orthogonality relation reads
\begin{align}\label{qh2.4}
\int_{-\frac{2}{\sqrt{1-q}}}^{\frac{2}{\sqrt{1-q}}}H_n^{(q)}(x) H_m^{(q)}(x)\mu_q(dx)=\delta_{mn}[n]_{q}!,
\end{align}
where $[n]_{q}!:=[1]_q\cdots[n]_q$ and $\delta_{mn}$ is the Kronecker delta.
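The recurrence \eqref{qh2.2} is straightforward to implement in exact rational arithmetic; a minimal sketch (helper names illustrative) that generates the coefficient lists of the first few $H_n^{(q)}$, e.g., $H_2^{(q)}(x)=x^2-1$ and $H_3^{(q)}(x)=x^3-(2+q)x$:

```python
from fractions import Fraction

def q_int(n, q):
    """[n]_q = 1 + q + ... + q^{n-1}."""
    return sum((q**k for k in range(n)), Fraction(0))

def q_hermite_coeffs(N, q):
    """Coefficients (ascending powers) of H_0,...,H_N from the
    recurrence x H_n = H_{n+1} + [n]_q H_{n-1}."""
    H = [[Fraction(1)], [Fraction(0), Fraction(1)]]
    for n in range(1, N):
        xHn = [Fraction(0)] + H[n]                                  # multiply H_n by x
        prev = H[n - 1] + [Fraction(0)] * (len(xHn) - len(H[n - 1]))
        H.append([a - q_int(n, q) * b for a, b in zip(xHn, prev)])
    return H

q = Fraction(1, 2)
H = q_hermite_coeffs(4, q)   # H[n] holds the coefficients of H_n^{(q)}
```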
Define the q-difference operator $D_q$, $0<|q|<1$ \cite{koekoek}:
\begin{align}
D_qf(x)=\frac{\delta f(x)}{\delta x}, \,\, x=\cos\theta,
\end{align}
where
\begin{align}
&\delta f(e^{i\theta})=f(q^{\frac{1}{2}}e^{i\theta})-f(q^{-\frac{1}{2}}e^{i\theta}),\\
&\delta x=-\frac{1}{2}q^{-\frac{1}{2}}(1-q)(e^{i\theta}-e^{-i\theta}).
\end{align}
It follows for q-Hermite polynomials $H_n^{(q)}(x)$ that \cite{koekoek}
\begin{align}
D_qH_n^{(q)}(x)=q^{-\frac{n-1}{2}}[n]_q H_{n-1}^{(q)}(x).
\end{align}
and, more generally,
\begin{align}\label{derivative2.12}
D_q^{(k)}H_n^{(q)}(x)&=\prod_{l=1}^{k}q^{-\frac{n-l}{2}}[n-l+1]_qH_{n-k}^{(q)}(x), \,\,k=1, 2, \cdots.
\end{align}
Every $f\in \mathcal{L}^2_{\mu_q}$ has the expansion
\begin{align}\label{217f}
f(x)=\sum_{n=0}^\infty a_n H_n^{(q)}(x).
\end{align}
Denote the sum of the first $N+1$ terms of \eqref{217f} by
\begin{align}
f_N(x)=\sum_{n=0}^N a_n H_n^{(q)}(x).
\end{align}
Following a proof similar to that in \cite{augustin}, we obtain the following truncation error estimate.
\begin{prop}\label{theorem2.2}
Let $0<|q|<1$ and $k\geq 1$. If $f\in \mathcal{L}^2_{\mu_q}$ is $k$ times continuously q-differentiable, then the convergence rate
\begin{align}\label{q_ex}
\|f-f_N\|^2_{\mathcal{L}^2_{\mu_q}}\leq\frac{|q|^{\frac{(2N-1-k)k}{2}}}{\prod\limits_{l=1}^{k}[N-l+2]_q}\|D_q^{(k)}f\|^2_{\mathcal{L}^2_{\mu_q}}
\end{align}
holds. In particular, for $f(x) = H^{(q)}_{N+1}(x)$ and $k=1$, we have
\begin{align}\label{eeeee}
\|f-\sum_{n=0}^{N}a_n H_n^{(q)}\|^2_{\mathcal{L}^2_{\mu_q}}=\frac{|q|^N}{[N+1]_q}\|D_q^{(1)}f\|^2_{\mathcal{L}^2_{\mu_q}}.
\end{align}
\end{prop}
\begin{proof}
We assume $0<q<1$; for $-1<q<0$ the proof is exactly the same.
By the orthogonality of the q-Hermite polynomials, any $f\in \mathcal{L}^2_{\mu_q}$ satisfies the Parseval identity
\begin{align}\nonumber
\|f\|^2_{\mathcal{L}^2_{\mu_q}}&=(\sum_{n=0}^\infty a_n H_n^{(q)}(\cdot), \sum_{k=0}^\infty a_k H_k^{(q)}(\cdot))_{\mathcal{L}^2_{\mu_q}}\\
&=\sum_{n=0}^\infty [n]_q!a_n^2.\label{parf}
\end{align}
Using formula \eqref{derivative2.12} and some simple calculations,
we get \begin{align}
&\|D_q^{(k)}f\|^2_{\mathcal{L}^2_{\mu_q}}\nonumber\\
&=(\sum_{i=k}^\infty a_i\prod_{l=1}^k q^{-\frac{i-l}{2}}[i-l+1]_qH_{i-k}^{(q)}, \sum_{j=k}^\infty a_j\prod_{l=1}^k q^{-\frac{j-l}{2}}[j-l+1]_qH_{j-k}^{(q)})_{\mathcal{L}^2_{\mu_q}}\nonumber\\
&=\sum_{i, j=k}^\infty a_ia_j\prod_{l=1}^k q^{-\frac{i-l}{2}}[i-l+1]_q(\prod_{l=1}^k q^{-\frac{j-l}{2}}[j-l+1]_q)(H_{i-k}^{(q)}, H_{j-k}^{(q)})_{\mathcal{L}^2_{\mu_q}}\nonumber\\
&=\sum_{j=k}^\infty a_j^2[j-k]_q!(\prod_{l=1}^k q^{-\frac{j-l}{2}}[j-l+1]_q)^2\nonumber\\
&=\sum_{j=k}^\infty a_j^2[j-k]_q! q^{-\frac{(2j-1-k)k}{2}}(\prod_{l=1}^k [j-l+1]_q)^2.\label{parsevald}
\end{align}
Using the above \eqref{parf} and \eqref{parsevald} we have
\begin{align*}
&\|f-f_N\|^2_{\mathcal{L}^2_{\mu_q}}=\sum_{n=N+1}^\infty a_n^2 [n]_q!
\nonumber\\
&=\sum_{n=N+1}^\infty a_n^2 [n-k]_q! \prod_{l=1}^{k}[n-l+1]_q\nonumber\\
&=\sum_{n=N+1}^\infty a_n^2 [n-k]_q! q^{-\frac{(2n-1-k)k}{2}}\prod_{l=1}^{k}[n-l+1]_q q^{\frac{(2n-1-k)k}{2}}\\
&\leq \sum_{n=N+1}^\infty a_n^2 [n-k]_q! q^{-\frac{(2n-1-k)k}{2}}\prod_{l=1}^{k}[n-l+1]_q \frac{\prod\limits_{l=1}^{k}[n-l+1]_q}{\prod\limits_{l=1}^{k}[N-l+2]_q} q^{\frac{(2n-1-k)k}{2}}\nonumber\\
&\leq \frac{q^{\frac{(2N-1-k)k}{2}}}{\prod\limits_{l=1}^{k}[N-l+2]_q}\|D_q^{(k)}f\|^2_{\mathcal{L}^2_{\mu_q}}.\nonumber
\end{align*}
This completes the proof of \eqref{q_ex}. \\
Next, for $f(x) = H^{(q)}_{N+1}(x)$,
the q-Hermite expansion coefficients of $f(x)$ hold
\begin{align}
a_{n}=\left\{
\begin{aligned}
&1,& n= N+1, \\
&0, & \text{otherwise}.
\end{aligned}
\right.
\end{align}
The Parseval equality \eqref{parf} gives
\begin{align}
\|f-f_{N}\|^2_{\mathcal{L}^2_{\mu_q}}=[N+1]_q!.
\end{align}
The right-hand side can be rewritten as
\begin{align}
[N+1]_q!=\frac{q^{-N}[N]_q![N+1]_q^2}{[N+1]_q}q^N.
\end{align}
This together with equality \eqref{parsevald} yields \eqref{eeeee}.
\end{proof}
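The arithmetic in the equality case \eqref{eeeee} can be verified exactly: by \eqref{parsevald} with $k=1$, $\|D_q^{(1)}H_{N+1}^{(q)}\|^2_{\mathcal{L}^2_{\mu_q}}=q^{-N}[N+1]_q^2[N]_q!$, and multiplying by $q^N/[N+1]_q$ recovers $[N+1]_q!$. A minimal check in rational arithmetic (helper names illustrative):

```python
from fractions import Fraction

def q_int(n, q):
    """[n]_q = 1 + q + ... + q^{n-1}."""
    return sum((q**k for k in range(n)), Fraction(0))

def q_fact(n, q):
    """[n]_q! = [1]_q [2]_q ... [n]_q."""
    out = Fraction(1)
    for k in range(1, n + 1):
        out *= q_int(k, q)
    return out

q, N = Fraction(1, 3), 5
lhs = q_fact(N + 1, q)                               # ||H_{N+1} - f_N||^2 = [N+1]_q!
Dq_sq = q**(-N) * q_int(N + 1, q)**2 * q_fact(N, q)  # ||D_q^{(1)} H_{N+1}||^2
rhs = q**N / q_int(N + 1, q) * Dq_sq                 # right-hand side of (eeeee)
```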
\begin{remark}
For $q=0$, the $H_n^{(0)}(x)$ are rescaled Chebyshev polynomials of the second kind. When $f$ has $k$ continuous derivatives, $|f(x)-f_N(x)|=O(N^{-(k-1)})$; one can refer to \cite{gil} for more details.
\end{remark}
\begin{remark}
The estimate \eqref{q_ex} is consistent with that in \cite{augustin}, i.e.,
\begin{align}
\|f-f_N\|^2_{\mathcal{L}^2_{\mu_1}}\leq \frac{1}{\prod\limits_{l=1}^{k}(N-l+2)}\|f^{(k)}\|^2_{\mathcal{L}^2_{\mu_1}},
\end{align}
where $\mathcal{L}^2_{\mu_1}$ is the square integrable space with Gaussian weight.
It is clear that the smaller $|q|\,\, (0<|q|\leq 1)$ is, the faster $f_N$ converges to $f$ in \eqref{q_ex}, and the result improves on the classical one.
\end{remark}
\section{Bayesian inversion based on q-Gaussian prior}
An inverse problem is to find $x$, an input to a mathematical model, from a given observation $y$. We have an equation of the form
\begin{align}\label{qpr3.1}
y=\varphi(x^\dag)+\eta,
\end{align}
where $\eta$ is the data noise, $x^\dag$ is the true solution and $\varphi:\mathcal{X}\rightarrow \mathcal{Y}$ is the forward operator, with $\mathcal{X}, \mathcal{Y}$ Banach spaces. To simplify the discussion, we assume $\mathcal{X}, \mathcal{Y}$ to be the finite-dimensional spaces $\mathbb{R}^m, \mathbb{R}^n$.
The most commonly used methods for solving \eqref{qpr3.1} are regularization techniques, e.g., Tikhonov regularization, which seeks the minimizer of
\begin{align}
\label{qpr3.2}
\text{arg}\min\limits_{x\in\mathcal{X}} \|\varphi(x)-y\|_{\mathcal{Y}}^2+\|x-x_0\|_{\mathcal{X}}^2,
\end{align}
where $x_0$ is the prior guess.
In Bayesian inversion, the inverse problem \eqref{qpr3.1} is restated within a probability framework. In detail, we consider $\mathcal{X}, \mathcal{Y}$ as sample spaces. Let $X$ be a random variable in $\mathcal{X}$ and $x$ a realization of $X$ (in the sequel, we denote random variables by capital letters and their realizations by the corresponding lower-case letters). The forward operator $\varphi$ maps the probability space $(\mathcal{X}, \mathfrak{F}_{\mathcal{X}}, \mu_{\mathcal{X}})$ to the probability space $(\mathcal{Y}, \mathfrak{F}_{\mathcal{Y}}, \mu_{\mathcal{Y}})$. Here $\mathfrak{F}$ and $\mu$ with the subscripts $\mathcal{X}, \mathcal{Y}$ denote the Borel $\sigma$-algebras and the probability measures on the corresponding spaces, respectively (we omit the subscripts when no confusion arises).
Instead of finding an estimate $x$ from an observation $y$, we explore the probability distribution $\mu^y(dx):=\mu(dx|Y=y)$ of the random variable $X$ given $Y=y$. According to Bayes' formula \eqref{in1.1}, this amounts to determining the likelihood function and the prior distribution. We take the distribution of the noise $\eta$ to be Gaussian, i.e.,
\begin{align}\label{qpr3.3}
\eta\sim N(0, \Gamma),
\end{align}
where $\Gamma$ is the noise covariance matrix. In this case, the likelihood function can be written as
\begin{align}\label{qpr3.4}
f(y| x)&=\frac{1}{(2\pi)^\frac{n}{2}\sqrt{\text{det}(\Gamma)}}\exp(-\frac{\|\Gamma^{-\frac{1}{2}}(\varphi(x)-y)\|^2}{2})\nonumber\\
&:=\frac{1}{(2\pi)^\frac{n}{2}\sqrt{\text{det}(\Gamma)}}\exp(-\frac{\|\varphi(x)-y\|_{\Gamma}^2}{2}),
\end{align}
where $\text{det}$ denotes the determinant. We assume that the components of the uncertain parameter vector $X=(X_1, X_2, \cdots, X_m)$ are independent random variables $X_i$. For each component, we set its prior density to be
the q-Gaussian defined by \eqref{pc4.1} with mean $\tilde{x}_i$ and variance $\Xi_i$.
With a slight abuse of notation, we still denote the resulting product measure in the multi-dimensional case by $\mu_q(dx)$.
Therefore, the posterior density satisfies
\begin{align}\label{qpr3.5}
f^y(x)\propto f(y| x)\prod\limits_{i=1}^m \frac{1}{\sqrt{\Xi_i}}f^{(q)}(\frac{x_i-\tilde{x}_i}{\sqrt{\Xi_i}}).
\end{align}
Maximizing the posterior probability is equivalent to minimizing the following function
\begin{align}\label{qpr3.6}
\frac{1}{2}\|\varphi(x)-y\|^2_\Gamma-\sum_{i=1}^m\log(f^{(q)}(\frac{x_i-\tilde{x}_i}{\sqrt{\Xi_i}})):=\Psi(x, y)+H(x),
\end{align}
where $\Psi$ is the negative log likelihood, also called the potential function.
Since the components of $X$ are independent, the Hessian matrix of $H(x)$ is
\begin{align*}
\text{Hess}(x)&=(\frac{\partial^2 H}{\partial x_i\partial x_j})_{m\times m}
\\&=\text{diag}(\frac{\partial^2 H}{\partial x_1^2}, \frac{\partial^2 H}{\partial x_2^2}, \cdots, \frac{\partial^2 H}{\partial x_m^2}),
\end{align*}
where
\begin{align*}
\frac{\partial^2 H}{\partial x_i^2}=\frac{(\frac{df^{(q)}(x_i)}{dx_i})^2-\frac{d^2f^{(q)}(x_i)}{dx_i^2}f^{(q)}(x_i)}{(f^{(q)}(x_i))^2},\,\, i=1, 2, \cdots, m.
\end{align*}
\begin{Lemma}\cite{szablowski}
$f^{(q)}(x)$ is bimodal for $q\in (-1, q_0)$, where $q_0\approx-0.107$ is the largest real root of the equation $\sum_{k=0}^{\infty}(2k+1)^2q^{\frac{k(k+1)}{2}}=0$.
\end{Lemma}
The fact that $f^{(q)}(x)$ is unimodal for $q\geq q_0$ implies that $\frac{\partial^2 H}{\partial x_i^2}\geq 0$, so the Hessian is positive semidefinite and $H$ is a convex penalty term in \eqref{qpr3.6}. In contrast, when $-1<q<q_0$, there exist three extreme points $-a, 0, a$. Clearly $\frac{df^{(q)}}{dx}(0)=\frac{df^{(q)}}{dx}(\pm a) =0$, $\frac{d^2f^{(q)}}{dx^2}(0)>0$ and $\frac{d^2f^{(q)}}{dx^2}(\pm a)<0$. Moreover, since $f^{(q)}(x)$ is a probability density function, $f^{(q)}(x)\geq 0$. Thereby, $\frac{\partial^2 H}{\partial x^2}\mid_{x=0}<0$ and $\frac{\partial^2 H}{\partial x^2}\mid_{x=\pm a}>0$, so $H$ is a non-convex penalty function. With a non-convex constraint, the functional \eqref{qpr3.6} has multiple local minima; this case is typically hard to solve and analyze in the classical optimization framework.
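The sign pattern of the Hessian can also be observed numerically: evaluating $H(x)=-\log f^{(q)}(x)$ on a grid via the (truncated) product form of the density and taking second finite differences exhibits both signs in a bimodal case such as $q=-0.5<q_0$. A sketch with illustrative names:

```python
import numpy as np

def q_gaussian_pdf(x, q, n_factors=200):
    """q-Gaussian density via the truncated infinite product."""
    x = np.asarray(x, dtype=float)
    theta = np.arccos(np.clip(x * np.sqrt(1.0 - q) / 2.0, -1.0, 1.0))
    prod = np.ones_like(theta)
    for n in range(1, n_factors + 1):
        qn = q**n
        prod *= (1.0 - qn) * np.abs(1.0 - qn * np.exp(2j * theta))**2
    return np.sqrt(1.0 - q) / np.pi * np.sin(theta) * prod

q = -0.5                                   # below q_0 ~ -0.107, so the density is bimodal
L = 2.0 / np.sqrt(1.0 - q)
xs = np.linspace(-0.9 * L, 0.9 * L, 2001)  # interior grid, away from the endpoints
Hx = -np.log(q_gaussian_pdf(xs, q))        # the penalty term H(x)
d2 = Hx[2:] - 2.0 * Hx[1:-1] + Hx[:-2]     # second finite differences ~ h^2 H''(x)
```

Negative second differences appear near $x=0$ (the local minimum of the density) and positive ones near the two modes, matching the discussion above.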
\section{Polynomial chaos expansion of likelihood function based on q-Hermite polynomials}
\subsection{Spectral likelihood approximation}
We know that the q-Hermite polynomials $H_n^{(q)}$ are orthogonal with respect to the measure $\mu_q(dx)$. Define the multivariate polynomials
\begin{align}\label{pc4.4}
\mathcal{H}_\alpha^{(q)}(x):=H_{\alpha_1}^{(q)}(x_1)H_{\alpha_2}^{(q)}(x_2)\cdots H_{\alpha_m}^{(q)}(x_m),
\end{align}
where $\alpha=(\alpha_1, \alpha_2, \cdots, \alpha_m)\in\mathbb{N}_0^m$. Clearly, the polynomials $\mathcal{H}_\alpha^{(q)}$ are orthogonal with respect to the measure $\mu_q(dx)$ in \eqref{pc4.1}:
\begin{align}\label{pc4.5}
(\mathcal{H}_\alpha^{(q)}, \mathcal{H}_\beta^{(q)})_{\mathcal{L}^{2}_{\mu_q}}=\left\{
\begin{aligned}
&[\alpha_1]_q![\alpha_2]_q!\cdots[\alpha_m]_q!, && \alpha=\beta \\
&0, && \alpha\neq\beta.
\end{aligned}
\right.
\end{align}
The polynomials $\{\mathcal{H}_\alpha^{(q)}\}$ form a complete orthogonal system of $\mathcal{L}^{2}_{\mu_q}$. The likelihood function $f(y| x)$ is measurable with respect to the prior measure ${\mu_q(dx)}$ and can be expanded as
\begin{align}\label{pc4.6}
f(y| x)=\sum_{\alpha\in \mathbb{N}_0^m}a_\alpha^y\mathcal{H}_\alpha^{(q)}(x),
\end{align}
where the $a_\alpha^y$ are the Fourier coefficients depending on the data $y$, defined by
\begin{align}\label{pc4.7}
a_\alpha^y=\frac{(f(y|\cdot), \mathcal{H}_\alpha^{(q)}(\cdot))_{\mathcal{L}^{2}_{\mu_q}}}{[\alpha_1]_q![\alpha_2]_q!\cdots[\alpha_m]_q!}.
\end{align}
In actual numerical implementations, the expansion \eqref{pc4.6} is truncated to a finite sum
\begin{align}\label{pc4.8}
f_{\Lambda_N}(y| x)=\sum_{\alpha\in \Lambda_N}a_\alpha^y\mathcal{H}_\alpha^{(q)}(x),
\end{align}
where $\Lambda_N$ is the finite multi-index set defined by
\begin{align}\label{pc4.9}
\Lambda_N:=\{\alpha\in\mathbb{N}_0^m: \|\alpha\|_1=\sum_{i=1}^{m}|\alpha_i|\leq N\}.
\end{align}
This truncation means that we need to collect all multivariate polynomials of order $\|\alpha\|_1$ smaller than or equal to $N$. Their total number \cite{nagel,yan1,yan2} is
\begin{align}\label{pc4.10}
P=\binom{m+N}{N}=\frac{(m+N)!}{m!N!}.
\end{align}
A simple way to reduce the number of regressors relies on hyperbolic truncation sets.
For $0<l<1$, a quasinorm is defined as
$\|\alpha\|_l=(\sum_{i=1}^m |\alpha_i|^l)^{1/l}$.
The corresponding hyperbolic truncation
scheme is then given by $\varpi=\{\alpha\in\mathbb{N}_0^m\mid \|\alpha\|_l\leq N\}.$
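Both index sets are easy to enumerate explicitly; the sketch below (function names illustrative) confirms the count \eqref{pc4.10} and that the hyperbolic set is a strict subset of the total-degree set:

```python
from itertools import product
from math import comb

def total_degree_set(m, N):
    """All multi-indices alpha in N_0^m with ||alpha||_1 <= N."""
    return [a for a in product(range(N + 1), repeat=m) if sum(a) <= N]

def hyperbolic_set(m, N, l):
    """All multi-indices with quasinorm ||alpha||_l <= N, 0 < l < 1."""
    return [a for a in product(range(N + 1), repeat=m)
            if sum(ai**l for ai in a)**(1.0 / l) <= N + 1e-9]

m, N = 3, 4
full = total_degree_set(m, N)   # expected size: binomial(m+N, N)
hyp = hyperbolic_set(m, N, 0.5)
```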
Convergence in the mean-square sense is indicated in \cite{box,nagel}:
\begin{align}\label{pc4.11}
\|f(y|\cdot)-f_{\Lambda_N}(y|\cdot)\|_{\mathcal{L}^{2}_{\mu_q}}^2=\mathbb{E}^{\mu_q}((f(y| x)-f_{\Lambda_N}(y| x))^2)=\sum\limits_{\alpha\in \mathbb{N}_0^m\backslash\Lambda_N} |a_\alpha^y|^2.
\end{align}
If $f(y| x)$ is $k$ times continuously q-differentiable with respect to $x$, Proposition \ref{theorem2.2} shows that the mean-square error $\|f(y|\cdot)-f_{\Lambda_N}(y|\cdot)\|_{\mathcal{L}^{2}_{\mu_q}}\rightarrow 0$.
One can refer to \cite{augustin,muhlpfordt} for truncation errors of polynomial chaos expansions.
\subsection{Christoffel least square}
To obtain the expansion coefficients $a_\alpha^y$, we adopt the stochastic collocation algorithm \cite{marzouk2,yan1}. The collocation equations are solved by a least-squares procedure. For given data $y$, we need to find the minimizer
\begin{align}\label{leas4.12}
a_\alpha^y:=\arg\min\frac{1}{J}\sum_{j=1}^J |f(y| x^{(j)})-f_{\Lambda_N}(y| x^{(j)})|^2,
\end{align}
where $x^{(j)}=(x^{(j)}_1, x^{(j)}_2, \cdots, x^{(j)}_m)$ is a sample of $X\sim\mu_q(dx)$, i.e., $x^{(j)}_i\sim f^{(q)}(x)dx$, and $J$ is the number of samples.
In general, the sample number $J$ is greater than the number $P$ in \eqref{pc4.10}, which leads to an overdetermined linear system
\begin{align}\label{chris4.13}
A^*A a^y_\alpha=A^*b,
\end{align}
where $A=(A_{kj})=(\mathcal{H}_{\alpha^{(k)}}(x^{(j)}))$ is a $J\times P$ Vandermonde-like matrix, $a^y_\alpha=(a^y_{\alpha^{(1)}}, a^y_{\alpha^{(2)}}, \cdots, a^y_{\alpha^{(P)}})^T$ and
$b=(f(y| x^{(1)}), f(y| x^{(2)}), \cdots, f(y| x^{(J)}))^T$.
Popular techniques for solving \eqref{leas4.12} include interpolatory approaches, compressive sampling or $l^1$ regularization, and least-squares $l^2$ regularization. A new approach, called Christoffel least squares, is proposed in \cite{narayan}. This method takes the $x^{(j)}$ to be i.i.d.\ samples from another distribution $\nu(dx)$. Let $v(x)$ be the density function of $\nu(dx)$; the support of $v$ contains the support of $f^{(q)}(x)$. Instead of the least-squares problem \eqref{leas4.12}, we solve
\begin{align}\label{chr4.14}
a_\alpha^y=\arg\min \frac{1}{J}\sum_{j=1}^J\kappa_j|f(y| x^{(j)})-f_{\Lambda_N}(y| x^{(j)})|^2,
\end{align}
where $\kappa_j=\frac{f^{(q)}(x^{(j)})}{v(x^{(j)})}$. Obviously, the solution is defined by
\begin{align}\label{kres4.15}
a_\alpha^y=\arg\min\limits_{a\in\mathbb{R}^m}\|\sqrt{\mathcal{K}}\tilde{A}a-\sqrt{\mathcal{K}}\tilde{b}\|^2,
\end{align}
where $\mathcal{K}$ is a $J\times J$ diagonal matrix with entries $\mathcal{K}_{jj}=\kappa_j$, and $\tilde{A}, \tilde{b}$ are defined as in \eqref{chris4.13} with the samples replaced by draws from the distribution $\nu$.
The solution can be obtained by solving the normal equation of \eqref{kres4.15}
\begin{align}
(\tilde{A}^*\mathcal{K}\tilde{A})a_\alpha^y=\tilde{A}^*\mathcal{K}\tilde{b}.
\end{align}
The CLS algorithm chooses the weights $\kappa_j$ so that each row of $\sqrt{\mathcal{K}}\tilde{A}$ has squared $l^2$ norm equal to the constant $P$, i.e.,
\begin{align}
\kappa_j=\frac{P}{\sum_{\alpha\in\Lambda_N}\mathcal{H}_\alpha^2(x^{(j)})}.
\end{align}
The measure $\nu$ is called the pluripotential equilibrium measure. For our setting, i.e., the q-Gaussian density $f^{(q)}(x)$, it has the density function
\begin{align}
v(x)=\frac{\sqrt{1-q}}{\pi\sqrt{4-(1-q)x^2}}.
\end{align}
This is the Chebyshev density corresponding to the arcsine measure.
In the multi-dimensional i.i.d.\ case, the equilibrium measure is the product of the univariate measures.
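A minimal one-dimensional sketch of this weighted least-squares procedure: samples are drawn from the equilibrium density by inverse transform, $x=\frac{2}{\sqrt{1-q}}\cos(\pi U)$ with $U\sim\mathrm{Uniform}(0,1)$, the rows are scaled by Christoffel-style weights, and a polynomial that lies in the span is recovered exactly. All names are illustrative, and the unnormalised basis is used here, whereas the theory in \cite{narayan} is stated for the orthonormalised one:

```python
import numpy as np

def q_int(n, q):
    return (1.0 - q**n) / (1.0 - q)   # [n]_q

def q_hermite_matrix(x, N, q):
    """Design matrix with columns H_0^{(q)},...,H_N^{(q)} at the points x."""
    x = np.asarray(x, dtype=float)
    V = np.zeros((x.size, N + 1))
    V[:, 0] = 1.0
    if N >= 1:
        V[:, 1] = x
    for n in range(1, N):
        V[:, n + 1] = x * V[:, n] - q_int(n, q) * V[:, n - 1]
    return V

q, N, J = 0.5, 4, 400
rng = np.random.default_rng(0)
# inverse-transform sampling from the arcsine-type equilibrium density
xj = 2.0 * np.cos(np.pi * rng.random(J)) / np.sqrt(1.0 - q)
A = q_hermite_matrix(xj, N, q)
P = N + 1
kappa = P / np.sum(A**2, axis=1)            # Christoffel-style row weights
w = np.sqrt(kappa)
target = lambda x: x**3 - 2.0 * x + 1.0     # lies in the span of H_0..H_4
coef, *_ = np.linalg.lstsq(w[:, None] * A, w * target(xj), rcond=None)
xt = np.linspace(-2.0, 2.0, 101)
err = np.max(np.abs(q_hermite_matrix(xt, N, q) @ coef - target(xt)))
```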
\section{Convergence analysis}
In this section, we analyze the error between the exact posterior measure
\begin{align}
\mu^y:=\frac{f(y| x)\mu_q(dx)}{\int f(y| x)\mu_q(dx)}:=\frac{f(y| x)\mu_q(dx)}{\gamma}
\end{align}
and its approximation version
\begin{align}
\tilde{\mu}^y_{NJ}:=\frac{f_{\Lambda_N}(y| x)\mu_{q}^J(dx)}{\int f_{\Lambda_N}(y| x)\mu_{q}^J(dx)}:=\frac{f_{\Lambda_N}(y| x)\mu_{q}^J(dx)}{\gamma_{NJ}},
\end{align}
where $\mu_{q}^J(dx)$ is defined by replacing $f^{(q)}(x)$ in $\mu_q(dx)$ \eqref{pc4.1} with its truncated version $f^{(q)}_J(x)$.
In addition, we introduce the measure
\begin{align}
\tilde{\mu}^y_{N}:=\frac{f_{\Lambda_N}(y| x)\mu_q(dx)}{\int f_{\Lambda_N}(y| x)\mu_q(dx)}:=\frac{f_{\Lambda_N}(y| x)\mu_q(dx)}{\gamma_{N}}.
\end{align}
We will use the Kullback-Leibler (KL) divergence \cite{lu1,lu2,marzouk1,sanz} (also known as the relative entropy) of $\nu$ with respect to $\mu$ to measure the error:
\begin{align}
D_{KL}(\nu||\mu):=\left\{
\begin{aligned}
&\int \nu\log\frac{\nu}{\mu}=\mathbb{E}^\nu\log(\frac{\nu}{\mu}), & \nu\ll\mu \\
&\infty, & \text{otherwise},
\end{aligned}
\right.
\end{align}
where $\nu\ll\mu$ means that $\nu$ is absolutely continuous with respect to $\mu$. If $\mu$ and $\nu$ are two measures on a $\sigma$-algebra $\mathfrak{F}_\mathcal{X}$ of subsets of $\mathcal{X}$, we say that $\nu$ is absolutely continuous with respect to $\mu$ if $\nu(A)=0$ for any $A\in \mathfrak{F}_\mathcal{X}$ such that $\mu(A)=0$. If the measure $\nu$ is finite, i.e., $\nu(\mathcal{X})<\infty$, the property $\nu\ll\mu$ is equivalent to the following stronger statement: for any $\epsilon>0$ there is a $\delta>0$ such that $\nu(A)<\epsilon$ for every $A$ with $\mu(A)<\delta$.
In information theory, the Kullback-Leibler divergence measures the loss of information when $\nu$ is used in place of $\mu$. As seen in \cite{sanz}, the Kullback-Leibler divergence is non-negative, but since it is not symmetric and does not satisfy the triangle inequality, it is not a metric on the space of probability measures. Nevertheless, we can use it to quantify the proximity of the measures $\nu$ and $\mu$ by virtue of inequalities such as
\begin{align}\label{errr5.5}
D_{\text{TV}}(\nu, \mu):=\sup\{|\nu(A)-\mu(A)|: A\in \mathfrak{F}_\mathcal{X}\}\leq D_{KL}(\nu||\mu)^{\frac{1}{2}}
\end{align}
and
\begin{align}\label{errr5.6}
D_{\text{Hell}}^2(\nu, \mu):=\frac{1}{2}\int(\sqrt{\nu}-\sqrt{\mu})^2\leq\frac{1}{2}D_{KL}(\nu||\mu),
\end{align}
where $D_{\text{TV}}(\nu, \mu)$ and $D_{\text{Hell}}(\nu, \mu)$ are the so-called total variation metric and Hellinger metric, respectively.
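A quick discrete check of the inequalities \eqref{errr5.5} and \eqref{errr5.6} (the two distributions below are arbitrary toy examples):

```python
import numpy as np

mu = np.array([0.2, 0.5, 0.3])    # reference distribution
nu = np.array([0.25, 0.45, 0.3])  # approximating distribution, nu << mu
kl = float(np.sum(nu * np.log(nu / mu)))          # D_KL(nu || mu)
tv = 0.5 * float(np.sum(np.abs(nu - mu)))         # sup_A |nu(A) - mu(A)|
hell2 = 0.5 * float(np.sum((np.sqrt(nu) - np.sqrt(mu))**2))  # squared Hellinger
```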
For convenience, we assume the noise covariance $\Gamma=\delta^2 I$.
\begin{assumption}\cite{stuart,stuart1}\label{assumption1}
The forward operator $\varphi: \mathbb{R}^m\rightarrow\mathbb{R}^n$ satisfies the following: for every $\epsilon>0$ there exists $M=M(\epsilon)\in\mathbb{R}$ such that, for all $x$,
\begin{align*}
\|\varphi(x)\|_2\leq\exp(\epsilon\|x\|_2^2+M).
\end{align*}
\end{assumption}
\begin{Lemma}\cite{dashti}\label{lemma5.1}
Let $\varphi$ satisfy Assumption \ref{assumption1}.
Then $\Psi$ satisfies:
For every $r>0$, there exists $L=L(r)>0$
such that for all $x$ and $y\in\mathbb{R}^m$
with $\max\{\|x\|_2, \|y\|_2\}<r$,
\begin{align*}
\Psi(x, y)\leq L(r).
\end{align*}
\end{Lemma}
By the above assumption and Lemma \ref{lemma5.1}, we have the following convergence result.
\begin{thm}\label{theorem5.2} Let $\varphi$ satisfy Assumption \ref{assumption1} and
$\delta\leq \frac{\exp(-\frac{L(r)}{n})}{\sqrt{2\pi}}$. It holds that
\begin{align}
D_{KL}(\tilde{\mu}^y_{N}||\mu^y)\leq(2\pi)^{\frac{n}{2}}\exp(L(r))\mathbb{E}^{\mu_q}((f(y| x)-f_{\Lambda_N}(y| x))^2)\rightarrow 0
\end{align}
as $N\rightarrow \infty$.
\end{thm}
\begin{proof}
By the non-negativity of the relative entropy, we get
\begin{align*}
&D_{KL}(\tilde{\mu}^y_{N}||\mu^y)\leq D_{KL}(\tilde{\mu}^y_{N}||\mu^y)+D_{KL}(\mu^y||\tilde{\mu}^y_{N})
\\
&=\int\tilde{\mu}^y_{N}\log\frac{\tilde{\mu}^y_{N}}{\mu^y}
+\int\mu^y\log\frac{\mu^y}{\tilde{\mu}^y_{N}}\\
&=\int(\tilde{\mu}^y_{N}-\mu^y)\log\frac{\tilde{\mu}^y_{N}}{\mu^y}\\
&=\int(\frac{f_{\Lambda_N}(y| x)\mu_q(dx)}{\gamma_{N}}-\frac{f(y| x)\mu_q(dx)}{\gamma}
)\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\frac{\gamma}{\gamma_N}\\
&=\int\mu_q(dx)(\frac{f_{\Lambda_N}(y| x)}{\gamma_{N}}-\frac{f(y| x)}{\gamma}
)\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\\
&=\frac{1}{\gamma\gamma_N}\int\mu_q(dx)(\gamma f_{\Lambda_N}(y| x)-\gamma_N f(y| x)
)\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\\
&=\frac{1}{\gamma\gamma_N}\int\mu_q(dx)(\gamma f_{\Lambda_N}(y| x)-\gamma_N f_{\Lambda_N}(y| x)\\
&+\gamma_N f_{\Lambda_N}(y| x)-\gamma_N f(y| x)
)\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\\
&=\frac{\gamma-\gamma_N}{\gamma\gamma_N}\int\mu_q(dx)f_{\Lambda_N}(y| x)\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\\
&+\frac{1}{\gamma}\int\mu_q(dx)(f_{\Lambda_N}(y| x)-f(y| x))\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\\
&:=I_1+I_2.
\end{align*}
Since $(\gamma-\gamma_N)\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\leq 0$,
we know $I_1\leq 0$, which yields
\begin{align}
D_{KL}(\tilde{\mu}^y_{N}||\mu^y)\leq I_2.
\end{align}
Here
\begin{align}
I_2=\frac{1}{\gamma}\int\mu_q(dx)(f_{\Lambda_N}(y| x)-f(y| x))\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}.
\end{align}
When $\delta\leq \frac{\exp(-\frac{L(r)}{n})}{\sqrt{2\pi}}$, we get by Lemma \ref{lemma5.1}
\begin{align}
&f(y| x)=\frac{1}{(2\pi)^{\frac{n}{2}}\delta^n}\exp(-\frac{\|\varphi(x)-y\|_2^2}{2\delta^2})
\\
&\geq \frac{1}{(2\pi)^{\frac{n}{2}}\delta^n}\exp(-L(r))\geq 1
\end{align}
for $x\in B(0, r)$.
When $f(y| x)\geq 1$, we may also assume $f_{\Lambda_N}(y| x)\geq1$ almost everywhere. In fact, by the convergence
\begin{align}
\mathbb{E}^{\mu_q}(f_{\Lambda_N}(y| x)-f(y| x))^2\rightarrow 0, \,\, \text{as} \,\, N\rightarrow \infty,
\end{align}
we have for arbitrary $\epsilon>0$, there exists $\tilde{N}>0$, when $N>\tilde{N}$
\begin{align}
\int(f_{\Lambda_N}(y| x)-f(y| x))^2\mu_q(dx)<\epsilon.
\end{align}
Define $E_1:=\{x\mid f_{\Lambda_N}(y| x)<1\}$ and $E_2=E_1^c$.
If $$\mathfrak{m}_{\mu_q(dx)}(E_1):=\int_{E_1}\mu_q(dx)>0,$$
then we have
\begin{align}\nonumber
&\epsilon>\int(f_{\Lambda_N}(y| x)-f(y| x))^2\mu_q(dx)
\\
&=\int_{E_1}+\int_{E_2}(f_{\Lambda_N}(y| x)-f(y| x))^2\mu_q(dx)\\
&\geq \int_{E_1}(f_{\Lambda_N}(y| x)-f(y| x))^2\mu_q(dx)>0.\nonumber
\end{align}
The arbitrariness of $\epsilon$ leads to a contradiction. Therefore, we may assume $f_{\Lambda_N}(y| x)\geq 1$. Likewise, since the likelihood function $f(y| x)$ is non-negative for any $x$, we may also suppose $f_{\Lambda_N}(y| x)\geq 0$.
It then follows that
\begin{align}
\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\leq f_{\Lambda_N}(y| x)-f(y| x),\,\, \text{for} \,\, x\in B(0, r).
\end{align}
For $I_2$, we have
\begin{align*}
&I_2=\frac{1}{\gamma}\int\mu_q(dx)(f_{\Lambda_N}(y| x)-f(y| x))\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\\
&\leq \frac{1}{\gamma}(\int_{\|x\|_2\leq r}+\int_{\mathbb{R}^n\backslash B(0, r)}) \mu_q(dx)(f_{\Lambda_N}(y| x)-f(y| x))\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}\\
&\leq \frac{1}{\gamma}\int_{\|x\|_2\leq r}\mu_q(dx)(f_{\Lambda_N}(y| x)-f(y| x))^2\\
&+\frac{1}{\gamma}\int_{\mathbb{R}^n\backslash B(0, r)}\mu_q(dx)(f_{\Lambda_N}(y| x)-f(y| x))\log\frac{f_{\Lambda_N}(y| x)}{f(y| x)}.
\end{align*}
As long as $r$ is sufficiently large, we can guarantee that $\mu_q(dx)=0$ on $\mathbb{R}^n\backslash B(0, r)$; in fact, taking $r$ greater than $\frac{2}{\sqrt{1-q}}$, so that $B(0, r)\supset\mathcal{I}_q$, suffices.
The boundedness of $\gamma$ from below follows by similar reasoning:
\begin{align}
\gamma&=\int \mu_q(dx) f(y| x)\nonumber\\
&=\int_{\|x\|_2\leq r}\mu_q(dx) f(y| x)+\int_{\mathbb{R}^n\backslash B(0, r)}\mu_q(dx) f(y| x)\\
&\geq \frac{1}{(2\pi)^{\frac{n}{2}}\delta^n}\exp(-L(r)).\nonumber
\end{align}
Therefore, we have
\begin{align}
I_2\leq (2\pi)^{\frac{n}{2}}\exp(L(r))\mathbb{E}^{\mu_q}((f(y| x)-f_{\Lambda_N}(y| x))^2),
\end{align}
which yields
\begin{align}
D_{KL}(\tilde{\mu}^y_{N}||\mu^y)\leq(2\pi)^{\frac{n}{2}}\exp(L(r))\mathbb{E}^{\mu_q}((f(y| x)-f_{\Lambda_N}(y| x))^2).
\end{align}
\end{proof}
Moreover, using Lemma \ref{lemma2.1}, we get
\begin{thm}\label{theorem5.3}
When $J\rightarrow\infty$, we have
\begin{align}
D_{KL}(\tilde{\mu}^y_{NJ}||\tilde{\mu}^y_{N})\rightarrow 0.
\end{align}
\end{thm}
\begin{proof}
We only analyze the case $n=1$; for $n>1$, the independence of the components $x_i$ of $x$ allows the same argument.
As in the proof of Theorem \ref{theorem5.2}, it follows that
\begin{align}
&D_{KL}(\tilde{\mu}^y_{NJ}||\tilde{\mu}^y_{N})\leq D_{KL}(\tilde{\mu}^y_{NJ}||\tilde{\mu}^y_{N})+D_{KL}(\tilde{\mu}^y_{N}||\tilde{\mu}^y_{NJ})\nonumber\\
&\leq \frac{1}{\gamma_N}\int f_{\Lambda_N}(\mu_J-\mu)\log\frac{\mu_J}{\mu}=\frac{1}{\gamma_N}\int f_{\Lambda_N}(\mu_J-\mu)\log\frac{f^{(q)}_J(x)}{f^{(q)}(x)}\\
&=\frac{1}{\gamma_N}\int f_{\Lambda_N}f^{(q)}(x)dx\frac{f^{(q)}_J(x)-f^{(q)}(x)}{f^{(q)}(x)}\log\frac{f^{(q)}_J(x)}{f^{(q)}(x)}.\nonumber
\end{align}
Denoting $\frac{f^{(q)}_J(x)-f^{(q)}(x)}{f^{(q)}(x)}:=u$, we write
\begin{align}
D(f_J, f):=\frac{f^{(q)}_J(x)-f^{(q)}(x)}{f^{(q)}(x)}\log\frac{f^{(q)}_J(x)}{f^{(q)}(x)}=u\log(1+u).
\end{align}
By the expressions for $f^{(q)}_J(x)$ and $f^{(q)}(x)$, the common factor
$\sqrt{4-(1-q)x^2}$ has two zeros, which are the only zeros of $f^{(q)}_J(x)$ and $f^{(q)}(x)$; the remaining factors have no zeros. Hence, by Lemma \ref{lemma2.1},
\begin{align}
u=\frac{
\sum\limits_{k=J}^{\infty}
(-1)^{k-1}q^{\binom{k}{2}}T_{2k-2}\big(\frac{x\sqrt{1-q}}{2}\big)}{
\sum\limits_{k=1}^{\infty}
(-1)^{k-1}q^{\binom{k}{2}}T_{2k-2}\big(\frac{x\sqrt{1-q}}{2}\big)}\rightarrow 0 \,\, \text{uniformly as}\,\, J\rightarrow \infty.
\end{align}
Thus $D(f_J, f)=O(u^2)\rightarrow 0$ as $J\rightarrow\infty$, which gives the conclusion.
\end{proof}
Using inequalities \eqref{errr5.5} and \eqref{errr5.6}, it follows from Theorems \ref{theorem5.2} and \ref{theorem5.3} that
\begin{corollary}
Under the same conditions as in Theorem \ref{theorem5.2}, we have, as $N\rightarrow\infty$ and $J\rightarrow\infty$,
\begin{align}
D_{\text{Hell}}(\tilde{\mu}^y_{NJ}, \mu^y)\rightarrow 0, \\
D_{\text{TV}}(\tilde{\mu}^y_{NJ}, \mu^y)\rightarrow 0.
\end{align}
\end{corollary}
\section{Numerical test}
In the numerical examples, we implement the Metropolis-Hastings algorithm \cite{stuart} to sample from the posterior distribution. The algorithm targets a distribution $\mu(dx)$ with density $\pi(x)$; Algorithm MH provides the details.
\begin{tabular}{l}
\hline
{\bf Algorithm MH:} Metropolis-Hastings algorithm\\
\hline
Initialize $x^{(0)}\in\mathcal{X}$ \\
for $i=1, 2, \cdots$ do\\
\hspace{0.5cm} Propose: move $x^{(i-1)}$ to a candidate $\hat{x}$ \\
\hspace{2cm} according to a transition density $q(\hat{x}|x^{(i-1)})$.
\\
\hspace{0.5cm} Acceptance probability:
\\
\hspace{1cm}$\alpha(\hat{x}|x^{(i-1)})=\min\{1, \frac{q(x^{(i-1)}|\hat{x})\pi(\hat{x})}{q(\hat{x}|x^{(i-1)})\pi(x^{(i-1)})}\}$.
\\
\hspace{0.9cm} Draw $u\sim$ Uniform$(0, 1)$.
\\
\hspace{0.5cm} Accept the proposal $\hat{x}$ with probability $\alpha$, i.e.,
\\
\hspace{1cm} $
x^{(i)}=\left\{
\begin{aligned}
&\hat{x}, & \text{if}\,\, \alpha>u,\\
&x^{(i-1)}, & \text{otherwise}.
\end{aligned}
\right.
$\\
end for
\\
\hline
\end{tabular}
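A minimal Python sketch of Algorithm MH with a symmetric Gaussian random-walk proposal (so the proposal densities cancel in $\alpha$), targeting a standard normal density for illustration; names are illustrative:

```python
import numpy as np

def metropolis_hastings(log_pi, x0, n_iter, step, rng):
    """Random-walk Metropolis-Hastings; the symmetric Gaussian proposal
    makes the ratio q(x^{(i-1)}|x_hat)/q(x_hat|x^{(i-1)}) equal to 1."""
    x, lp = x0, log_pi(x0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        xh = x + step * rng.standard_normal()   # propose a candidate
        lph = log_pi(xh)
        # accept with probability min(1, pi(xh)/pi(x))
        if np.log(rng.random()) < lph - lp:
            x, lp = xh, lph
        chain[i] = x
    return chain

rng = np.random.default_rng(1)
raw = metropolis_hastings(lambda x: -0.5 * x**2, 0.0, 50_000, 1.0, rng)
chain = raw[5_000:]   # discard burn-in
```

The retained samples should have mean near 0 and variance near 1, the moments of the target.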
\subsection{1D problem}
We consider the problem of estimating an unknown mean $x$ from random realizations $\{y_i\}_{i=1}^n$ drawn independently from a Gaussian distribution $N(y|x, \sigma^2)$ with a given standard deviation $\sigma$. If the prior is Gaussian, the posterior is also Gaussian.
Here we instead suppose that $x$ obeys a q-Gaussian prior distribution, so the posterior is non-Gaussian. We use this simple example as the first testbed.
The likelihood function can be written as
\begin{align*}
f(y| x)&=\prod_{i=1}^n \frac{1}{(2\pi \sigma^2)^{\frac{1}{2}}}\exp(-\frac{(y_i-x)^2}{2\sigma^2})\\
&=
\frac{1}{(2\pi \sigma^2)^{\frac{n}{2}}}\exp(-\frac{\sum_{i=1}^n(y_i-x)^2}{2\sigma^2}).
\end{align*}
Thereby, we get the posterior according to Bayes' formula
\begin{align*}
f^y(x)=\frac{f(y| x)f^{(q)}(x)}{\int f(y| x)f^{(q)}(x)dx}.
\end{align*}
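On a grid, the unnormalised posterior of this example is simply the Gaussian likelihood times the q-Gaussian prior density. A minimal sketch with a standardised prior ($\tilde{x}=0$, $\Xi=1$) and illustrative toy data, not the settings of the experiment reported below:

```python
import numpy as np

def q_gaussian_pdf(x, q, n_factors=200):
    """q-Gaussian prior density via the truncated infinite product."""
    x = np.asarray(x, dtype=float)
    theta = np.arccos(np.clip(x * np.sqrt(1.0 - q) / 2.0, -1.0, 1.0))
    prod = np.ones_like(theta)
    for n in range(1, n_factors + 1):
        qn = q**n
        prod *= (1.0 - qn) * np.abs(1.0 - qn * np.exp(2j * theta))**2
    return np.sqrt(1.0 - q) / np.pi * np.sin(theta) * prod

q, sigma = 0.5, 1.0
y = np.array([1.2, 2.1, 0.8, 1.9, 1.5, 1.1, 1.7, 2.0, 1.3, 1.6])  # toy data
L = 2.0 / np.sqrt(1.0 - q)
xs = np.linspace(-L, L, 4001)
loglik = -np.array([np.sum((y - x)**2) for x in xs]) / (2.0 * sigma**2)
post = np.exp(loglik - loglik.max()) * q_gaussian_pdf(xs, q)  # unnormalised
Z = np.sum(0.5 * (post[1:] + post[:-1]) * np.diff(xs))        # trapezoidal rule
post /= Z
post_mean = np.sum(0.5 * (post[1:] * xs[1:] + post[:-1] * xs[:-1]) * np.diff(xs))
```

The posterior mean shrinks from the data mean toward the prior mean 0, as expected.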
We generate the data $y_i$ from the Gaussian distribution $N(y|x^\dag, \sigma^2)$ with true mean $x^\dag=10$ and standard deviation $\sigma=5$. For the numerical experiment, the pseudo-random numbers $$y=[15.0389;-0.6183;7.4771;3.6470;8.0871;13.2434;14.1286;4.9253;7.6447;10.6851]$$ are used as synthetic data. In the Christoffel algorithm, the density function of the pluripotential equilibrium measure $\nu$ is taken as
\begin{align}
v(x)=\frac{1}{\sqrt{\Xi}}\frac{\sqrt{1-q}}{\pi\sqrt{4-(1-q)\frac{(x-x_0)^2}{\Xi}}}.
\end{align}
The parameters $x_0$ and $\Xi$ are specified as $x_0=11.5$ and $\Xi=c(x_0-x^\dag)^2(1-q)/4$ with a fixed positive constant $c>1$. In the numerical tests, the prior density is truncated to the first $100$ terms, i.e., $J=100$. The normalization constants $\gamma_{NJ}$ and $\gamma_J$ (defined analogously to $\gamma$) are computed by the Monte Carlo method.
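As a concrete companion to this setup, the unnormalized posterior of the 1D problem can be tabulated directly on a grid. The pure-Python sketch below uses the synthetic data listed above; the q-Gaussian prior is written in the generic form $[1-(1-q)\beta (x-x_0)^2]_+^{1/(1-q)}$, and its location $x_0$ and width $\beta$ are illustrative assumptions rather than the parameters used in our experiments.

```python
import math

# Synthetic data and noise level from the text.
y = [15.0389, -0.6183, 7.4771, 3.6470, 8.0871,
     13.2434, 14.1286, 4.9253, 7.6447, 10.6851]
sigma = 5.0

def log_likelihood(x):
    return -sum((yi - x) ** 2 for yi in y) / (2.0 * sigma ** 2)

def q_gaussian_unnorm(x, q=0.5, beta=0.01, x0=10.0):
    """Unnormalized q-Gaussian density [1-(1-q) beta (x-x0)^2]_+^{1/(1-q)};
    for q < 1 its support is the interval |x - x0| < 1/sqrt((1-q) beta).
    The location x0 and width beta here are illustrative choices."""
    u = 1.0 - (1.0 - q) * beta * (x - x0) ** 2
    return u ** (1.0 / (1.0 - q)) if u > 0.0 else 0.0

# Tabulate the posterior on a grid and normalize with the trapezoidal rule;
# the prior normalization constant cancels in Bayes' formula.
h = 0.05
xs = [-10.0 + i * h for i in range(801)]
post = [math.exp(log_likelihood(x)) * q_gaussian_unnorm(x) for x in xs]
Z = h * (sum(post) - 0.5 * (post[0] + post[-1]))
post = [p / Z for p in post]
post_mean = h * sum(x * p for x, p in zip(xs, post))
```

Because the prior has compact support, the tabulated posterior vanishes identically outside that interval, which is the behavior exploited in the discussion below.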
We list the relative errors of the posterior density function in Table \ref{tab:tab1}. The results show that the spectral likelihood expansion (SLE) approximation fits the likelihood function well, and the numerical precision increases with the polynomial order. For comparison purposes, we draw some samples from the posterior distribution by means of MCMC; an independence sampling process is utilized. The likelihood function and its SLE approximation, the posterior density, and a normalized histogram of the obtained samples are shown in Fig. \ref{ex1_1}. From the results, it can be seen that the SLEs are able to approximate the likelihood function well. As depicted in \cite{nagel}, these well-fitted regions accumulate the largest proportions of the total prior probability mass. Because the q-Gaussian priors have compact support, even where the SLEs start to deviate strongly from the likelihood function, the posterior densities obtained with SLEs vanish far away from the peaks. From this viewpoint, the proposed priors are good options for fitting posterior densities with SLEs. As for the sampling, the histograms reflect the posterior densities well.
\begin{table}[h]
\centering
\caption{The relative errors for posterior density with different $q$ and orders $N$.}\label{tab:tab1}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$q$ & $N=2$ & $N=5$&$N=7$&$N=9$& $N=12$ \\
\hline
-0.5 & 0.0772&0.0065&0.0044&1.8092e-04&6.8778e-05\\
\hline
-0.2 & 0.0963&0.0077&0.0048&1.9242e-04&9.2320e-05\\
\hline
0 & 0.1083&0.0088&0.0061&1.9865e-04&5.1662e-05\\
\hline
0.2&0.1202&0.0097&0.0069&2.3369e-04&6.0067e-05
\\
\hline
0.5&0.1627& 0.0120 &0.0085 &2.6809e-04& 7.0980e-05
\\
\hline
\end{tabular}
\end{table}
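To give a feel for why the error in Table \ref{tab:tab1} decays so quickly with the order $N$, the following pure-Python sketch projects a smooth likelihood-shaped function onto an orthogonal polynomial basis and measures the truncation error. Legendre polynomials on $[-1,1]$ stand in here for the q-Hermite family, and the target bump is an arbitrary smooth test function, so the numbers are only qualitative.

```python
import math

def legendre(k, x):
    """P_k(x) via the recurrence (n+1)P_{n+1} = (2n+1)x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if k == 0:
        return p0
    for n in range(1, k):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def sle_coeffs(f, N, m=4000):
    """Spectral coefficients c_k = (2k+1)/2 * int_{-1}^1 f P_k dx (midpoint rule)."""
    h = 2.0 / m
    xs = [-1.0 + (i + 0.5) * h for i in range(m)]
    return [(2 * k + 1) / 2.0 * h * sum(f(x) * legendre(k, x) for x in xs)
            for k in range(N + 1)]

# A smooth likelihood-shaped bump; its center and width are arbitrary choices.
f = lambda x: math.exp(-4.0 * (x - 0.2) ** 2)

def sle_error(N):
    """Maximum truncation error of the degree-N expansion on a test grid."""
    c = sle_coeffs(f, N)
    grid = [-0.99 + 0.02 * i for i in range(100)]
    return max(abs(f(x) - sum(c[k] * legendre(k, x) for k in range(N + 1)))
               for x in grid)

e4, e12 = sle_error(4), sle_error(12)   # spectral decay: e12 << e4
```

Since the target is analytic, the expansion error decreases faster than any fixed algebraic rate in $N$, mirroring the rapid decay of the relative errors in the table.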
\begin{figure}[!hbt]
\centering
\subfigure[$q=-0.8$]{
\includegraphics[width=3.5cm]{liklihoo_n8_19.eps}
\includegraphics[width=3.5cm]{pde_n8_19.eps}
\includegraphics[width=3.5cm]{pde_n8_19.eps}}
\subfigure[$q=-0.5$]{
\includegraphics[width=3.5cm]{liklihoo_n5_12.eps}
\includegraphics[width=3.5cm]{pde_n5.eps}
\includegraphics[width=3cm]{pde_n5.eps}}
\subfigure[$q=-0.2$]{
\includegraphics[width=3.5cm]{liklihoo_n2_12.eps}
\includegraphics[width=3.5cm]{pde_n2.eps}
\includegraphics[width=3.5cm]{pde_n2.eps}}
\subfigure[$q=0$]{
\includegraphics[width=3.5cm]{liklihoo_0_12.eps}
\includegraphics[width=3.5cm]{pde_0.eps}
\includegraphics[width=3.5cm]{pde_0.eps}}
\subfigure[$q=0.2$]{
\includegraphics[width=3.5cm]{liklihoo_p2_12.eps}
\includegraphics[width=3.5cm]{pde_p2.eps}
\includegraphics[width=3.5cm]{pde_p2.eps}}
\subfigure[$q=0.5$]{
\includegraphics[width=3.5cm]{liklihoo_p5_12.eps}
\includegraphics[width=3.5cm]{pde_p5.eps}
\includegraphics[width=3.5cm]{pde_p5.eps}}
\caption{Left: Likelihood function and its PCE approximation with $N=12$; Middle: original model; Right: SLE model}
\label{ex1_1}
\end{figure}
\subsection{2D problem}
We consider a stationary inverse heat conduction problem governed by
\begin{align}\label{id5.1}
&-\nabla\cdot(\kappa\nabla u)=0\,\, \text{in}\,\,\Omega\subset\mathbb{R}^2,\\
&u\mid_{\gamma_1}=g(x, y),\label{id5.2}\\
&-\kappa_0\frac{\partial u}{\partial\nu}\mid_{\gamma_2}=h(x, y),\label{id5.3}
\end{align}
where $\Omega=\cup_{i=0}^n\Omega_i$ with boundary $\partial\Omega=\gamma_1\cup\gamma_2$, $\kappa$ takes the value $\kappa_i$ on $\Omega_i$, and $g, h$ are given functions. For the test example, we use the same settings as in \cite{nagel}, where $\Omega$ is the square domain $(x, y)\in (0, 1)\times(0, 0.6)$ and $\kappa_0=15, \kappa_1=32, \kappa_2=28$. The subdomains $\Omega_1, \Omega_2$ are disks of radius $0.1$ located at $(0.3, 0.3)$ and $(0.7, 0.3)$, respectively. The Dirichlet boundary condition is $g(x, 0.6)=200$, and the Neumann boundary conditions are $h(0, y)=0, h(1, y)=0, h(x, 0)=2000$. The forward problem is to find the solution $u$ for given $\kappa_1, \kappa_2$. The inverse heat conduction problem is to recover $\kappa_1, \kappa_2$ from measurements of $u$ at some fixed points in the domain $\Omega$. The domain settings are displayed in Fig. \ref{fig:1}.
The forward problem is solved by the linear finite element method, and we show the finite element solution in Fig. \ref{fig:2}. The data are acquired at $10$ scattered points distributed in the domain $\Omega$ by adding absolute error to the numerical solution of the forward problem, i.e.,
\begin{align*}
u(\vec{x}, \vec{y})=u^\dag(\vec{x}, \vec{y})+\delta*\text{randn}(10, 1),
\end{align*}
where $(\vec{x}, \vec{y})$ is the vector of the $10$ measurement-point positions. We display several numerical results in Fig. \ref{fig:3} for different $q$ and noise levels $\delta$. The results show that the smaller $\delta$ is, the closer the peak of the posterior density is to the true value, and the MCMC samples reflect the posterior density well. However, there exist some differences between the posterior density with the SLE and that of the original model away from the peak. This also causes some samples not to concentrate close to the peak.
\begin{figure}[!hbt]
\begin{tikzpicture}[xscale=1,yscale=1]
\draw [black, cyan] (0,0) to [out=90,in=80] (7,0);
\draw [black,ultra thick, cyan] (7,0) to [out=-102,in=-90] (0,0);
\draw [fill, color=gray] (2,0.2) ellipse [x radius=0.6cm, y radius=0.4cm];
\draw [fill, color=blue] (5,-0.5) ellipse [x radius=0.7cm, y radius=0.5cm];
\draw [fill, color=green] (4,1.2) ellipse [x radius=0.8cm, y radius=0.5cm];
\draw [fill, color=darkgray] (3,-1.2) ellipse [x radius=0.7cm, y radius=0.5cm];
\node at (5,0.5) {$\kappa_0, \Omega_0$};
\node at (2,0.2) {$\kappa_1, \Omega_1$};
\node at (5,-0.5) {$\kappa_2, \Omega_2$};
\node at (3,-1.2) {$\kappa_3, \Omega_3$};
\node at (4,1.2) {$\kappa_n, \Omega_n$};
\draw[black, very thick,->] (3,-2)--(2.98,-2.5);
\node at (3.7,-2.5) {$-\kappa_0\frac{\partial u}{\partial \nu}$};
\node at (3.7,2.4) {$u$};
\draw [line width=0.2cm] (3.7,2) -- (3.8,2.1);
\draw [dotted, ultra thick] (3,0) -- (3.5,0.5);
\node at (3, 2) {$\gamma_1$};
\node at (4, -2) {$\gamma_2$};
\node at (1, -1) {$\Omega$};
\draw [<->] (12,-2) -- (8,-2) -- (8,2);
\draw [ultra thick, cyan] (8, 1.8) -- (8, -2) -- (11.5,-2) -- (11.5, 1.8);
\draw [thick] (8, 1.8) -- (11.5, 1.8);
\draw [darkgray, ultra thick,fill] (9,0) circle [radius=0.4];
\draw [gray, ultra thick,fill] (10.5,0) circle [radius=0.4];
\node at (8.5, -1.2) {$\Omega$};
\node at (11, 1.2) {$\Omega_0$};
\node at (9, 0) {$\Omega_1$};
\node at (10.5, 0) {$\Omega_2$};
\node at (10,-2) {$\gamma_2$};
\node at (11.5,0) {$\gamma_2$};
\node at (8,0) {$\gamma_2$};
\node at (10,1.8) {$\gamma_1$};
\node at (12.2, -2.2) {$x$};
\node at (11.5, -2.2) {$1$};
\node at (8, 2.2) {$y$};
\node at (7.7, 1.8) {$0.6$};
\end{tikzpicture}
\caption{Heat conduction setup.}
\label{fig:1}
\end{figure}
\begin{figure}[!hbt]
\centering
\includegraphics[width=6cm]{finiteelementsolution.eps}
\caption{The finite element solution.}
\label{fig:2}
\end{figure}
\begin{figure}[!hbt]
\centering
\subfigure[$q=-0.8, \delta=0.1$]{
\includegraphics[width=6cm]{pde_n8_delta01.eps}
\includegraphics[width=6cm]{pce_n8_delta01.eps}}
\subfigure[$q=0.5, \delta=1$]{
\includegraphics[width=6cm]{pde_p5_delta1.eps}
\includegraphics[width=6cm]{pce_p5_delta1.eps}}
\subfigure[$q=0.5, \delta=0.5$]{
\includegraphics[width=6cm]{pde_p5_delta05.eps}
\includegraphics[width=6cm]{pce_p5_delta05.eps}}
\caption{The contour of posterior density and MCMC samples. Left: Original model; Right: The SLE model}
\label{fig:3}
\end{figure}
\section*{Conclusions}
We have introduced a new prior model, namely the q-Gaussian prior,
into the study of inverse problems; it is the q-analogue of the classical Gaussian distribution. Since the density function of each q-Gaussian distribution has compact support, it can be used to characterize bounded physical parameters. In order to accelerate the MCMC sampling in Bayesian inversion, we adopted a spectral likelihood approximation algorithm based on the q-Hermite polynomial chaos expansion of the likelihood function. We then proved the convergence of the posterior distribution in the framework of relative entropy when the likelihood function is replaced by its truncated PCE, and we also studied the convergence of the corresponding approximate posterior measure when the q-Gaussian prior itself is approximated. With the proposed prior and the SLE algorithm, we verified the effectiveness of the proposed method through two numerical examples.
\section{Introduction}
Recently, analog models have become very popular for studying
condensed matter physics (see \cite{Rev} for a review). Since
direct experimental probes of many important aspects of general
relativity (GR) are extremely difficult, the possibility of using
condensed matter systems, such as Bose-Einstein condensates (BEC),
to mimic certain aspects of GR could prove to be very important
\cite{Garay,Garay2,Barcelo}. These analog models provide a bridge
for interchanging conceptualizations of phenomena between various
condensed matter systems and relativistic physics \cite
{Rev,Visser,VF,Unruh,Volovik,Garay,Garay2,Barcelo,Stone}:
sometimes they illuminate aspects of general relativity, and
sometimes the machinery of differential geometry can be used to
illuminate aspects of the analog model. In this letter, we use
mathematical methods developed in the framework of differential
geometry to study the transverse force on a moving vortex in
Bose-Einstein condensates. We also mention
that these condensed matter systems can be used to simulate
topological defects characteristic of gauge theories, such as
monopoles and cosmic strings, which are considered to have played
a cosmological role in the early stages of the evolution of the
universe.
In the analog model, we are concerned with the propagation of
small collective perturbations of the condensate around a
background stationary state, instead of solving the
Gross-Pitaevskii (GP) equation with some given external potential.
By virtue of the idea that an effective Lorentzian metric governs
perturbative fluctuations, analog models can be constructed based on
acoustic propagation in an irrotational vortex. Thus we can use
the methods of general relativity to investigate the ``effective
space-time geometry'' in constant-speed-of-sound (iso-tachic)
and almost incompressible (iso-pycnal) hydrodynamical flows
\cite{VF}. With the so-called ``effective acoustic metric'' the
condensate can be regarded as a Lorentzian spacetime \cite
{Garay,Garay2,Barcelo}. It is natural that the analog of a cosmic
string in Lorentzian spacetime appears in such an effective
spacetime geometry as a vortex. Furthermore, in terms of the
energy-momentum tensor we derive the equation of motion for the
vortex and calculate the transverse force on a moving vortex in
detail. We conclude that the Magnus force can be described with
the effective acoustic metric in the framework of general relativity
without any concrete model or hypothesis.
\section{Topological vortex in the BEC}
Given the effective acoustic metric, effective gravity arises
in the BEC system \cite{Garay,Garay2,Barcelo}; i.e., we can
regard the system as a Lorentzian spacetime, as follows.
Bose--Einstein condensates are most usefully described by the nonlinear
Schr\"{o}\-dinger equation, also called the Gross--Pitaevskii equation:
\begin{equation}
i\hbar \frac \partial {\partial t}\psi (t,\vec{x})=(-\frac{\hbar ^2}{2m}%
\nabla ^2+V_{ext}(\vec{x})+\lambda |\psi (t,\vec{x})|^2)\psi (t,\vec{x}).
\label{lg1}
\end{equation}
Now use the Madelung representation~\cite{Madelung} to put the
Schr\"{o}dinger equation in ``hydrodynamic'' form:
\begin{equation}
\psi =\sqrt{\rho }\;\exp (-i\theta \;m/\hbar ).
\end{equation}
Take real and imaginary parts: The imaginary part is a continuity equation
for an irrotational fluid flow of velocity $\vec{v}\equiv \nabla \theta $
and density $\rho $; while the real part is a Hamilton--Jacobi equation
(Bernoulli equation; its gradient leads to the Euler equation).
Specifically:
\begin{equation}
\partial _t\rho +\nabla \cdot (\rho \;\nabla \theta )=0.
\end{equation}
\begin{equation}
\frac \partial {\partial t}\theta +{\frac 12}(\nabla \theta )^2+{\frac{%
\lambda \;\rho }m}-{\frac{\hbar ^2}{2m^2}}\;{\frac{\Delta \sqrt{\rho }}{%
\sqrt{\rho }}}=0.
\end{equation}
That is, the nonlinear Schr\"{o}dinger equation is completely equivalent to
irrotational inviscid hydrodynamics with a particular form for the enthalpy
\begin{equation}
h=\int \frac{dp}\rho =\frac{\lambda \rho }m,
\end{equation}
plus a peculiar derivative self-interaction:
\begin{equation}
V_Q=-{\frac{\hbar ^2}{2m^2}}\;{\frac{\Delta \sqrt{\rho }}{\sqrt{\rho }}}.
\end{equation}
The equation of state for this ``quantum fluid'' is calculated from the
enthalpy
\begin{equation}
p={\frac{\lambda \;\rho ^2}{2m}}.
\end{equation}
The corresponding speed of sound is
\begin{equation}
c_s^2=\frac{dp}{d\rho }=\frac{\lambda \rho }m.
\end{equation}
The disturbances propagate in an effective spacetime with metric $g_{\mu \nu
}$, which was shown to be of the Painleve-Gullstrand form \cite{Visser,VF}
\[
g_{00}=-\frac \rho c[c^2-v^2],\;\;g_{0i}=-\frac \rho cv_i,\;\;g_{ij}=\frac
\rho c\delta _{ij},
\]
where the velocity $c$ plays the role of the speed of light and is equal to
the sound speed for phonons. The metric has spacetime interval
\[
ds^2=\frac \rho c[-c^2dt^2+\delta _{ij}(dx^i-v^idt)(dx^j-v^jdt)],
\]
where the indices on the background velocity $v^i$ are always raised and
lowered using the flat 3-dimensional Cartesian metric, i.e. $v^i=v_i$ and $%
v^2=v^iv_i$.
Considering the physical situation in which the speed of sound is iso-tachic,
i.e., independent of position and time, we can choose coordinates that set the
speed $c$ of linear quasiparticle dispersion equal to unity. The
(3+1)-dimensional Painleve-Gullstrand metric then reads \cite{VF}
\[
g_{\mu \nu }=\rho \left[
\begin{array}{ll}
-1+v^2 & \;-\vec{v} \\
\;-\vec{v} & \;\;\;1
\end{array}
\right]
\]
The inverse metric is
\[
g^{\mu \nu }=\frac 1\rho \left[
\begin{array}{ll}
-1 & \;\;\;\;-v^j \\
-v^i & \;\;\delta ^{ij}-v^iv^j
\end{array}
\right]
\]
Since general relativity can be described as an SO(3,1) gauge theory in terms
of the tetrad field, the above BEC system should possess a similar
effective tetrad field $e_\mu ^a$ ($a$ and $\mu$ are SO(3,1) and space-time
indices, respectively). As is well known, $\omega _{\mu ab}$ is the connection of
the Lorentz-group gauge theory,
\[
D_\mu \phi _a=\partial _\mu \phi _a-\omega _{\mu ab}\phi _b,\;\;\;\;\;\mu
=0,1,2,3,\;\;\;\;\;a,b=1,2,3,4
\]
and the corresponding pure connection is defined as
\[
\omega _{abc}=e_a^\mu \omega _{\mu bc}.
\]
With this connection the vortex tensor is proposed as
\[
F_{\mu \nu }=e_{\nu a}D_\mu \omega _a-e_{\mu a}D_\nu \omega _a
\]
where $\omega _a=\omega _{bab}$. For Lorentz spacetime, the torsion should
be zero, i.e.
\[
T_{\mu \nu a}=D_\mu e_{\nu a}-D_\nu e_{\mu a}=0,
\]
then, the vortex tensor
\begin{eqnarray}
F_{\mu \nu } &=&\partial _\mu A_\nu -\partial _\nu A_\mu , \label{tensor1}
\end{eqnarray}
where $A_\mu =e_{\mu a}\omega _a$ can be regarded as a U(1) connection.
In terms of Eqs. (\ref{tensor1}) the topological charge of vortex can be
found
\begin{equation}
q=\int_\Sigma F_{\mu \nu }dx^\mu \wedge dx^\nu .
\end{equation}
As shown in \cite{DuanU1}, the U(1) gauge potential can be
decomposed in terms of the 2-dimensional unit vector field $n^A$ $(n^A=\phi ^A/\sqrt{%
\phi ^B\phi ^B})$ as
\begin{equation}
A_\mu =\frac 1{2\pi }\varepsilon _{AB}n^A\partial _\mu n^B.
\end{equation}
One can find that the charge of vortex can be expressed as
\begin{equation}
q=\frac 1{2\pi }\int_\Sigma \varepsilon _{AB}\partial _\mu n^A\partial _\nu
n^Bdx^\mu \wedge dx^\nu .
\end{equation}
Following the $\phi $-mapping theory, it can be rigorously proved that
\begin{equation}
q=\int_\Sigma \delta ^2(\vec{\phi})D_{\mu \nu }(\frac \phi x)dx^\mu \wedge
dx^\nu ,
\end{equation}
where $\Sigma $ is an arbitrary 2-dimensional surface,
\begin{equation}
x^\mu =x^\mu (u^1,u^2),
\end{equation}
and $D^{\mu \nu }(\frac \phi x)$ is the tensor Jacobian defined as
\begin{equation}
\varepsilon ^{AB}D^{\mu \nu }\left( \frac \phi x\right) =\varepsilon ^{\mu
\nu \lambda \sigma }\partial _\lambda \phi ^A\partial _\sigma \phi ^B.
\end{equation}
The integral $q$ can be rewritten with the usual Jacobian $D(\phi /u)$
\begin{equation}
q=\int_\Sigma \delta ^2(\vec{\phi})D(\frac \phi u)\sqrt{g_u}d^2u.
\end{equation}
We find that $q\neq 0$ only when
\begin{equation}
\phi ^A(\vec{x},t)=0,\;\;\;\;\;\;A=1,2.
\end{equation}
The solutions of the above equations are the vortices
\begin{equation}
S_\alpha :\;\;\;\;\;\;x^\mu =z_A^\mu (\sigma ,\tau ),\;\;\;\;\;\;\alpha
=1,2,...,l
\end{equation}
which are the world sheets of the vortices.
Using the $\phi $-mapping topological current theory we have
\begin{equation}
q=\int_\Sigma \sum_{\alpha =1}^lW_\alpha \delta ^2(u^i-z_\alpha ^i)d^2u
\end{equation}
where $z_\alpha ^i(\alpha =1,2,...,l)$ are the intersection points of
vortices $S_\alpha $ with surface $\Sigma $ and $W_\alpha $ is the winding
number. Then we find
\begin{equation}
q=\sum_{\alpha =1}^lW_\alpha .
\end{equation}
This is just the $\phi $-mapping topological current theory of
vortices in the framework of the effective acoustic geometry, which shows
that vortices appear naturally in the hydrodynamical flow and
that the charges of the vortices are topologically quantized by the winding
numbers. In other words, by the above discussion, vortices will
emerge in all systems that can be described by an effective
Lorentzian geometry.
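This quantization is easy to check numerically: discretize a closed loop around a zero of $\vec{\phi}$ and accumulate the angle increments of the unit field $n^A$. The plain-Python sketch below does this for made-up test fields; the loop radius and resolution are arbitrary choices.

```python
import math

def winding_number(phi, n=400, radius=0.5):
    """Winding number of the 2D field phi = (phi1, phi2) around the origin:
    (1/2pi) times the total angle swept by n^A along a closed loop."""
    total = 0.0
    prev = None
    for i in range(n + 1):
        t = 2.0 * math.pi * i / n
        p = phi(radius * math.cos(t), radius * math.sin(t))
        ang = math.atan2(p[1], p[0])
        if prev is not None:
            d = ang - prev
            # unwrap jumps across the branch cut of atan2
            while d > math.pi:
                d -= 2.0 * math.pi
            while d < -math.pi:
                d += 2.0 * math.pi
            total += d
        prev = ang
    return round(total / (2.0 * math.pi))

w1 = winding_number(lambda x, y: (x, y))              # single zero, W = 1
w2 = winding_number(lambda x, y: (x * x - y * y, 2 * x * y))  # W = 2
```

A field whose zero lies outside the loop gives winding number zero, consistent with the delta-function form of the topological charge.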
On the other hand, the vortex can be discussed directly with the order parameter $%
\psi (\vec{r},t)$. Starting from the Gross-Pitaevskii equation, one
can construct the vortex current from the condensed wave function $\psi $
\[
\vec{j}=\frac m\hbar \nabla \times \vec{v},
\]
where the current velocity
\[
\vec{v}=-\frac{i\hbar }{2m}(\psi ^{*}\nabla \psi -\psi \nabla \psi
^{*})/|\psi |^2,
\]
is just the background velocity.
It is well known that the condensed wave function $\psi $ can be looked upon
as a section of a complex line bundle with base manifold $M$ (in this paper $%
M=R^3\otimes R$). Denoting the condensed wave function $\psi $ as
\[
\psi (\vec{x},t)=\phi ^1(\vec{x},t)+i\phi ^2(\vec{x},t),
\]
where $\phi ^1(\vec{x})$ and $\phi ^2(\vec{x})$ are two components of a
two-dimensional vector field
\[
\vec{\phi}=(\phi ^1,\phi ^2)
\]
in the (3+1)-dimensional space-time. The vortex current in the 3-dimensional
space can be obtained
\[
j^i=\frac 1{2\pi }\varepsilon ^{ijk}\varepsilon _{AB}\partial _j
n^A\partial _k n^B,\;\;\;\;i,j,k=1,2,3
\]
where $n^A$ is the two-dimensional unit vector field of the complex scalar
field:
\[
n^A=\phi ^A/||\phi ||,\;\;\;\;||\phi ||^2=\phi ^A\phi ^A,\;\;\;A=1,2.
\]
It is clear that the topological current is identically conserved \cite
{Gelfand}, i.e.
\begin{equation}
\partial _ij^i=0. \label{conserv1}
\end{equation}
By making use of the $\phi $-mapping method, this topological current can be
rewritten in a compact form \cite{DuanZhangLi},
\begin{equation}
j^i=D^i(\frac \phi x)\delta (\vec{\phi}), \label{zero12}
\end{equation}
where $D^i(\frac \phi x)$ is the vector Jacobian of $\phi (x)$:
\[
D^i(\frac \phi x)=\frac 12\varepsilon ^{ijk}\varepsilon _{AB}\partial _j\phi
^A\partial _k\phi ^B.
\]
Thus we have the important relation between the topological current and the
condensed wave function $\psi (\vec{x})$ in the Bose-Einstein condensation
system. With this topological current, the corresponding vorticity $\Gamma
=\oint \vec{v}\cdot d\vec{l}$ is given by
\[
\Gamma =\frac hm\sum_{\alpha =1}^lW_\alpha =q\frac hm.
\]
One easily finds that the vortex in the framework of the effective acoustic
geometry is the same vortex obtained from the order parameter. This means that
we can discuss these kinds of topological defects in condensed matter
using the methods of general relativity by virtue of the effective acoustic
metric.
It is convenient to generalize the vortex current into the (3+1)-dimensional
spacetime
\[
j^{\mu \nu }=\varepsilon ^{\mu \nu \lambda \rho }F_{\lambda \rho }.
\]
It is easy to prove that
\[
j^{\mu \nu }=\delta (\vec{\phi})D^{\mu \nu }(\phi /x),
\]
where $D^{\mu \nu }(\phi /x)$ is the tensor Jacobian
\[
\varepsilon _{\mu \nu \lambda \rho }D^{\mu \nu }(\phi /x)=\varepsilon
_{AB}\partial _\lambda \phi ^A\partial _\rho \phi ^B.
\]
Based on this generalized vortex current, we can consider the
equation of motion of the vortex in the effective spacetime.
\section{Equation of motion with the energy-momentum tensor}
In the previous section, we showed that vortices exist in the effective
acoustic spacetime geometry and that their charges are quantized at the
topological level. With the vortex current $j^{\mu \nu }$, we can
define the Lagrangian in the effective spacetime
\[
L=T\sqrt{\frac 12g_{\mu \nu }g_{\lambda \rho }j^{\mu \lambda }j^{\nu \rho }}%
=T\sqrt{\frac 12j_{\mu \nu }j^{\mu \nu }}
\]
which is the generalization of Nielsen's Lagrangian, where $T$ is
a constant with dimension of $[mass]^2$. It is easy to obtain the
energy-momentum tensor
\begin{equation}
T^{\mu \nu }=T\delta (\vec{\phi})D(\phi /x)g^{IJ}\frac{\partial x^\mu }{%
\partial u^I}\frac{\partial x^\nu }{\partial u^J}, \label{net1}
\end{equation}
which shows that the tensor is supported only at the zeros of the order parameter
field $\vec{\phi}(x)$, i.e., the positions of the vortices. Since vortices
are the only quasiparticles in the flowing fluid and play the role of the
matter term in the effective acoustic geometry, Eq. (\ref{net1}) is a natural
result for the energy-momentum tensor. From the principle of least
action, or from the formula $\triangledown _\mu T^{\mu \nu }=0$, we obtain the
equation of motion
\[
\frac 1{\sqrt{-g_u}}\frac \partial {\partial u^I}(\sqrt{-g_u}g^{IJ}\frac{%
\partial x^\lambda }{\partial u^J})+\Gamma _{\mu \nu }^\lambda g^{IJ}\frac{%
\partial x^\mu }{\partial u^I}\frac{\partial x^\nu }{\partial u^J}=0,
\]
which is the basic equation we use to discuss the transverse force on the
moving vortex. If we choose the conformal gauge, the equation takes the
simple form
\begin{equation}
\partial _I\partial _Ix^\lambda +\Gamma _{\mu \nu }^\lambda \frac{\partial
x^\mu }{\partial u^I}\frac{\partial x^\nu }{\partial u^I}=0. \label{eom1}
\end{equation}
In the following, we consider a simple model in which the system includes only
one vortex and the background velocity is along the $x$-direction, i.e., the
velocity is $\vec{v}=(v_1,0,0)$. With this 3-velocity, the relevant connection
coefficients read
\begin{equation}
\Gamma _{00}^2=-\frac 12\partial _2v_1^2,\;\;\;\;\Gamma _{10}^2=\frac
12\partial _2v_1. \label{connection2}
\end{equation}
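These coefficients can be checked numerically. The sketch below builds the Painleve-Gullstrand metric for a test profile $v_1(y)$ (with $\rho=c=1$), computes the Christoffel symbols by finite differences of the metric, and reproduces $\Gamma _{00}^2=-\frac 12\partial _2v_1^2$ and $\Gamma _{10}^2=\frac 12\partial _2v_1$; the choice $v_1(y)=\sin y$ is only for testing.

```python
import math

def v1(y):                 # arbitrary test profile for v = (v1(y), 0, 0)
    return math.sin(y)

def metric(y):
    """Painleve-Gullstrand acoustic metric with rho = c = 1, coords (t,x,y,z)."""
    v = v1(y)
    g = [[0.0] * 4 for _ in range(4)]
    g[0][0] = -1.0 + v * v
    g[0][1] = g[1][0] = -v
    g[1][1] = g[2][2] = g[3][3] = 1.0
    return g

def inverse_metric(y):
    """Analytic inverse: g^00 = -1, g^0i = -v^i, g^ij = delta^ij - v^i v^j."""
    v = v1(y)
    gi = [[0.0] * 4 for _ in range(4)]
    gi[0][0] = -1.0
    gi[0][1] = gi[1][0] = -v
    gi[1][1] = 1.0 - v * v
    gi[2][2] = gi[3][3] = 1.0
    return gi

# sanity check: g * g^{-1} = identity at a sample point
g, gi = metric(0.7), inverse_metric(0.7)
assert all(abs(sum(g[i][k] * gi[k][j] for k in range(4)) - (i == j)) < 1e-12
           for i in range(4) for j in range(4))

def christoffel(lam, mu, nu, y, h=1e-5):
    """Gamma^lam_{mu nu} by central differences; only d/dy (index 2) is nonzero."""
    def dg(a, b):  # partial_2 g_{ab}
        return (metric(y + h)[a][b] - metric(y - h)[a][b]) / (2.0 * h)
    gi = inverse_metric(y)
    total = 0.0
    for s in range(4):
        d_mu = dg(s, nu) if mu == 2 else 0.0
        d_nu = dg(s, mu) if nu == 2 else 0.0
        d_s = dg(mu, nu) if s == 2 else 0.0
        total += 0.5 * gi[lam][s] * (d_mu + d_nu - d_s)
    return total

y0 = 0.7
G200 = christoffel(2, 0, 0, y0)   # expect -(1/2) d/dy v1^2 = -v1 v1'
G210 = christoffel(2, 1, 0, y0)   # expect  (1/2) d/dy v1
```

Both numerical values agree with the closed-form coefficients above to the accuracy of the finite-difference step.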
Since we discuss the transverse force on the vortex, we choose an
idealized configuration in which the vortex coordinates $x^\mu $ satisfy
x^3=\sigma ,\;x^1=x^1(\tau ),\;x^2=x^2(\tau ),
\]
and the vortex moves in the $x$-direction only
\[
v_{vortex}=\dot{x}^1.
\]
Then Eq. (\ref{eom1}) gives
\[
\partial _\tau \partial _\tau x^2+(\Gamma _{00}^2\partial _\tau x^0\partial
_\tau x^0+2\Gamma _{10}^2\partial _\tau x^1\partial _\tau x^0)=0,
\]
which can be calculated by virtue of the connection (\ref{connection2}):
\[
\stackrel{..}{x}^2-v_1\partial _2v_1+\partial _2v_1\dot{x}^1=0,
\]
i.e.
\[
\stackrel{..}{x}^2=\partial _2v_1(v_1-v_{vortex}).
\]
Thus the transverse force $F_t$ on the vortex is obtained with the effective
acoustic geometry
\[
F_t=\rho _s\stackrel{..}{x}^2=\rho _s\Omega (v_{vortex}-v_n),
\]
where $\rho _s$ is the density of the fluid, $\Omega =\partial _1v_2-\partial
_2v_1=-\partial _2v_1$ is the vorticity of the fluid, and $v_n=v_1$ is the
background velocity of the fluid. The last equation shows that a vortex moving
in the fluid experiences a transverse force, which is just the Magnus force.
\section{Conclusion}
In this paper, we discuss the vortex in the BEC using the method
of effective acoustic geometry. Instead of solving the concrete GP
equation with some given external potential $ V_{ext}(x)$, we
study the topological structure of vortices from the viewpoint of
spacetime defects. We show that the vortex appears naturally in
such an effective Lorentzian spacetime and that the charge of the vortex is
quantized at the topological level. Then, the energy-momentum
tensor is given in terms of the vortex current in the effective
spacetime, and the equation of vortex motion is derived from
the energy-momentum tensor in the framework of general relativity.
Furthermore, we consider the transverse force on the vortex with
this equation of motion. A simple model is worked out to obtain an
expression for the transverse force by virtue of the effective acoustic
metric. We find that the transverse force in the framework of effective
geometry is just the usual Magnus force on a moving vortex.
\section{Acknowledgement}
This project was supported in part by the National Natural Science
Foundation of China (NSFC-10175028), the TianYuan Mathematics Fund
(A0324661) and the China Postdoctoral Science Foundation.
\section{Introduction} \label{sec:intro}
In this paper, we consider the numerical solution of parabolic-elliptic interface problems via
the non-symmetric coupling method of MacCamy and Suri~\cite{MacCamy:1987}, which consists of a
Galerkin approximation in space
and a subsequent discretization in time by a variant of the implicit Euler method.
For ease of presentation we consider the following simple model problem:
Find $u$ and $u_e$ such that
\begin{alignat}{2}
\partial_t u -\Delta u &= \tilde f &\qquad& \text{in }\Omega\times (0,T), \label{eq:model1} \\
-\Delta u_e &= 0 &\qquad& \text{in } \Omega_e \times (0,T) \label{eq:model2} \\
\intertext{with coupling conditions across the interface given by}
u &= u_e + \tilde g &\qquad& \text{on } \Gamma \times (0,T), \label{eq:model3} \\
\partial_{n} u &= \partial_{n} u_e +\tilde h\ &\qquad& \text{on } \Gamma \times (0,T) \label{eq:model4}.
\end{alignat}
For the presentation of our results we assume that
$\Omega \subset \RR^2$ is some bounded Lipschitz domain with ${\operatorname{diam}}(\Omega)<1$; all results, however, also hold in three dimensions.
We further denote by $\Gamma:=\partial \Omega$ and $\Omega_e = \RR^2 \setminus \overline{\Omega}$ the boundary
and the complement of $\Omega$,
and by $T>0$ a fixed end time.
The co-normal derivative $\partial_{n} u = \nabla u \cdot n|_\Gamma$ is taken in direction
of the unit normal vector $n$ on $\Gamma$ pointing outward with respect to $\Omega$.
The input data for the model are $\tilde f$, $\tilde g$, and $\tilde h$.
To ensure the uniqueness of the solution, we additionally require the following initial and radiation conditions
\begin{alignat}{2}
u(\cdot,0) &= 0 &\qquad& \text{on } \Omega, \label{eq:model5}\\
u_e(x,t) &= a(t) \log|x| + \O(|x|^{-1}) &\qquad& |x| \to \infty. \label{eq:model6}
\end{alignat}
The function $a(t):[0,T]\to\RR$ is unknown and automatically determined
in the solving process, see \cref{rem:solution}.
A system of this type arises, for instance, in the modeling of eddy currents in the
magneto-quasistatic regime~\cite{MacCamy:1987}.
In our model problem we might also allow inhomogeneous initial data
and extra Dirichlet or Neumann boundaries in the interior domain.
Then the analysis in this paper holds with obvious modifications.
Using the well-known representation formula~\cite{McLean:2000-book},
the field $u_e$ in the exterior domain can be expressed via the traces $u_e|_\Gamma$
and $\phi:=\partial_{n} u_e|_\Gamma$ on the interface $\Gamma$.
This allows us to reduce the above problem to a parabolic partial differential
equation in $\Omega$ coupled to an integral equation at the boundary $\Gamma$
with $u$ and $\phi$ as the unknown fields.
Different equivalent formulations are possible here, which
lead, after discretization, to various numerical approximation schemes.
Based on the non-symmetric coupling method of Johnson and N{\'e}d{\'e}lec~\cite{Johnson:1980-1},
MacCamy and Suri~\cite{MacCamy:1987} established the well-posedness of
problem~\cref{eq:model1}--\cref{eq:model6} via the method of Galerkin approximation.
Their analysis is based on the compactness of the double layer operator which
relies on the assumption that $\Gamma$ is smooth~\cite{Costabel:1988-1}.
As a by-product of their analysis, the authors also proved quasi-optimal error estimates
in the energy norm
for general Galerkin approximations under mild assumptions on the approximation spaces, i.e.,
\begin{align*}
&\norm{u - u_h}{L^2(0,T;H^1(\Omega))} + \norm{\partial_t u - \partial_t u_h}{L^2(0,T;H^1(\Omega)')} +
\norm{\phi - \phi_h}{L^2(0,T;H^{-1/2}(\Gamma))}\\
&\quad\le C \inf_{v_h,\psi_h} \{\norm{u - v_h}{L^2(0,T;H^1(\Omega))} + \norm{\partial_t u - \partial_t v_h}{L^2(0,T;H^1(\Omega)')} \\
&\qquad\qquad\qquad+ \norm{\phi - \psi_h}{L^2(0,T;H^{-1/2}(\Gamma))}\}.
\end{align*}
Here $u_h$ and $\phi_h$ are the semi-discrete approximations of $u$ and $\phi$, respectively.
Hence, a discretization by appropriate finite and boundary elements directly leads to error estimates
with optimal order for the resulting semi-discrete schemes.
To overcome the restrictive smoothness assumption on the domain $\Omega$,
Costabel, Ervin, and Stephan~\cite{Costabel:1990} applied the symmetric coupling approach
proposed in~\cite{Costabel:1988-2} to treat the parabolic-elliptic interface problem
stated above. This allowed them to prove the well-posedness
of~\cref{eq:model1}--\cref{eq:model6} and the quasi-optimality of
Galerkin approximations also for non-smooth domains.
In addition, they investigated the subsequent time discretization by the Crank-Nicolson method and
established error estimates for the resulting fully discrete scheme.
The analysis of~\cite{Costabel:1990} is based on an
elliptic projection and corresponding error estimates in $L^2$, and therefore
relies on duality arguments; see e.g.~\cite{Varga:1971-book,Wheeler:1973}.
Due to a lack of ``adjoint consistency'' for the non-symmetric coupling method of MacCamy and Suri
these arguments cannot be used for its analysis.
Therefore, ``an analysis of a fully discretized version of their coupling scheme is not available and will
be difficult'', as argued in~\cite{Costabel:1990}.
In this paper, we close this gap in the analysis of the non-symmetric coupling
method for parabolic-elliptic interface problems.
Our main results can be summarized as follows:
\begin{itemize}
\item Based on an argument of Sayas~\cite{Sayas:2009-1}, Steinbach~\cite{Steinbach:2011}
showed that the non-symmetric coupling of the elliptic-elliptic interface problem
with a lowest order term in the interior domain
in fact leads to a coercive variational formulation; see also \cite{Erath:2017-1}.
This allows us to extend the results of~\cite{MacCamy:1987,Costabel:1990} to the non-symmetric
coupling method on non-smooth domains. In particular, we establish well-posedness
of this formulation and prove quasi-optimal error estimates for Galerkin approximations.
\item As a second step of our analysis, we also consider the time discretization of the
semi-discrete scheme of~\cite{MacCamy:1987} by a variant of the implicit Euler method.
We utilize a formulation that is fully consistent with the continuous variational
formulation and does not require additional smoothness of the solution or the data;
see~\cite{Tantardini:2014-1} for a related approach in the context of parabolic problems.
This allows us to establish well-posedness and quasi-optimal approximation properties
with respect to the energy norm under minimal smoothness assumptions on the solution.
\end{itemize}
For ease of notation, we will present the details of our analysis only for
the simple model problem~\cref{eq:model1}--\cref{eq:model6} stated above.
Our arguments, however, are quite general and can be also applied to interface problems with more general
parabolic operators and interface conditions, and in higher space dimensions.
Our approach might also be useful for the analysis of other coupling strategies;
let us refer to \cite{Aurada:2013-1} for a recent survey of possible couplings.
The remainder of the manuscript is organized as follows:
In \cref{sec:prelim}, we introduce our basic notation and assumptions.
Then we present the weak formulation of the non-symmetric coupling approach
and establish its well-posedness.
\Cref{sec:galerkin} introduces a semi-discretization of the variational
problem in space by a Galerkin approach.
Furthermore, we establish well-posedness of the semi-discrete scheme and
quasi-optimal approximation properties.
In \cref{sec:time}, we discuss the time discretization by a variant of the implicit
Euler method
and prove again quasi-optimal error estimates under minimal smoothness assumptions.
In \cref{sec:fembem}, we consider space discretization by finite and boundary
elements. Using the analysis of the previous sections,
we derive explicit error estimates for the resulting
semi-discrete and fully-discrete schemes.
For illustration of our theoretical results, we present
some numerical tests in \cref{sec:numerics}.
\section{Notation and weak formulation} \label{sec:prelim}
In this section, we first introduce some basic notation and assumptions.
Then we formulate and analyze a weak formulation of our model problem.
\subsection{Notation and basic assumptions}
Throughout the next sections, we make the following assumption on the domain:
\begin{align}
\label{as:A1}
\tag{A1}
\Omega \subset \RR^2 \text{ is a bounded Lipschitz domain and }{\operatorname{diam}}(\Omega)<1.
\end{align}
Note that ${\operatorname{diam}}(\Omega)<1$ can always be achieved by scaling.
We write $H^s(\Omega)$ and $H^s(\Gamma)$ for the usual Sobolev spaces
and denote by $H^s(\Omega)'$ and $H^{-s}(\Gamma)=H^s(\Gamma)'$
their dual spaces with respect to the duality pairing induced by $L^2$; see~\cite{Evans:2010-book,McLean:2000-book} for details.
We use $\product{\cdot}{\cdot}_\Omega$ and $\dual{\cdot}{\cdot}_\Omega$, and on the
boundary $\product{\cdot}{\cdot}_{\Gamma}$ and $\dual{\cdot}{\cdot}_{\Gamma}$
to denote the corresponding scalar products and duality pairings.
Let us recall that
\begin{align*}
\dual{\psi}{v}_{\Gamma}
\le \norm{\psi}{H^{-1/2}(\Gamma)} \norm{v}{H^{1/2}(\Gamma)}
\le C_{tr} \norm{\psi}{H^{-1/2}(\Gamma)} \norm{v}{H^1(\Omega)}
\end{align*}
for all $\psi \in H^{-1/2}(\Gamma)$ and $v \in H^1(\Omega)$ with a constant $C_{tr}>0$.
In the first and second statement, one should formally write $\gamma v$ instead of $v$,
where $\gamma : H^1(\Omega) \to H^{1/2}(\Gamma)$ denotes the trace operator.
We skip the explicit notation of the trace operator since the meaning is clear from the context.
The last inequality encodes the continuity of the trace operator.
For ease of presentation and to allow for an easy comparison of the results,
we adopt the notation of~\cite{Costabel:1990} and denote by
\begin{align*}
H&=H^1(\Omega) \qquad \text{and} \qquad B=H^{-1/2}(\Gamma)
\end{align*}
the main function spaces arising in our analysis.
Furthermore, we use
\begin{align*}
H_T=L^2(0,T;H) \qquad \text{and} \qquad B_T=L^2(0,T;B)
\end{align*}
to denote the corresponding Bochner spaces of functions on $[0,T]$ with values in $H$ and $B$, respectively.
The associated dual spaces are given by $H'=H^1(\Omega)'$ and $B'=H^{-1/2}(\Gamma)'=H^{1/2}(\Gamma)$
as well as $H_T'=L^2(0,T;H')$ and $B_T'=L^2(0,T;B')$. All spaces introduced above are Hilbert spaces if equipped
with their natural norms, e.g., $\norm{u}{H_T}^2 = \int_0^T \norm{u(t)}{H}^2\, dt$. We further use
\begin{align*}
Q_T = \set{u \in H_T}{\partial_t u \in H_T' \text{ and } u(0)=0}
\end{align*}
to denote the natural energy space for the parabolic problem with the norm
\begin{align*}
\norm{u}{Q_T}^2 := \norm{u}{H_T}^2 + \norm{\partial_t u}{H_T'}^2.
\end{align*}
This space is again a Hilbert space.
It is well-known that the space $Q_T$ is continuously
embedded in $C([0,T];L^2(\Omega))$; see, e.g.,~\cite{Evans:2010-book}.
Thus the initial value $u(0)=0$ makes sense.
\subsection{Preliminaries}
Let $(u,u_e)$ denote a sufficiently smooth solution of problem~\cref{eq:model1}--\cref{eq:model6}.
Then multiplying equation~\cref{eq:model1} with a test function $v \in H^1(\Omega)$,
integrating over $\Omega$, and using integration by parts formally lead to
\begin{align*}
\int_\Omega \partial_t u(t) v\,dx + \int_\Omega \nabla u(t)\cdot\nabla v\,dx
- \int_{\Gamma} \phi(t) v\,ds
= \int_\Omega \tilde f(t) v\,dx + \int_\Gamma \tilde h v\,ds.
\end{align*}
Here, we used equation~\cref{eq:model4} with $\phi:=\partial_{\normal} u_e|_\Gamma$
to replace the interior co-normal derivative.
For the right-hand side, we will use the shorthand notation
\begin{align}
\label{eq:f}
\dual{f}{v}_{\Omega} := \int_\Omega \tilde f v \,dx + \int_{\Gamma} \tilde h v \,ds
\end{align}
and write $f\in H'_T$.
With the representation formula for the Laplacian, we can further express
the solution for~\cref{eq:model2} and~\cref{eq:model6} in the exterior domain
$\Omega_e$ by
\begin{align}
\label{eq:repformular}
u_e(x) = \int_{\Gamma} \partial_{n_y} G(x,y) u_e(y)|_\Gamma\,ds_y
- \int_{\Gamma} G(x,y) \partial_{\normal} u_e(y)|_\Gamma\,ds_y.
\end{align}
Here $G(x,y) = - \frac{1}{2\pi} \log|x-y|$ denotes the fundamental solution
of the Laplace operator in two dimensions~\cite{McLean:2000-book}.
Upon taking the trace at the boundary $\Gamma$, writing again $\phi=\partial_{\normal} u_e|_\Gamma$,
and using the coupling condition~\cref{eq:model3}
to replace $u_e|_\Gamma$ by $u|_\Gamma$
we obtain
\begin{align}
\label{eq:g}
\V \phi + (1/2-\K) u|_{\Gamma} = (1/2 - \K) \tilde g =:g.
\end{align}
Here, $\V$ and $\K$ denote the single and double layer operators.
For sufficiently smooth functions and domains they are given by~\cite{McLean:2000-book}
\begin{align*}
(\V \psi)(x) = \int_{\Gamma} G(x,y) \psi(y) \,ds_y
\qquad \text{and} \qquad
(\K v)(x) = \int_{\Gamma} \partial_{n_y} G(x,y) v(y) \,ds_y.
\end{align*}
By assumption~\cref{as:A1} they can be extended to
bounded linear operators on $H^{-1/2}(\Gamma)$
and $H^{1/2}(\Gamma)$, respectively; see~\cref{lem:elliptic}.
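For intuition only (the analysis does not rely on it), the layer operators can be evaluated numerically on a smooth boundary. With the sign convention above one has $(\K 1)(x) = -1/2$ on a circle, consistent with $(1/2-\K)1 = 1$ in~\cref{eq:g}. A minimal sketch for the double layer operator on a circle of radius $a<1/2$, where the kernel $\partial_{n_y}G(x,y)$ is constant so that plain midpoint quadrature suffices; the function name and parameters are illustrative:

```python
import numpy as np

def double_layer_circle(n=200, a=0.4):
    """Midpoint-rule approximation of (K 1)(x) on a circle of radius
    a < 1/2 (so that diam(Omega) < 1), with the kernel d/dn_y G(x,y)
    for G(x,y) = -(1/2 pi) log|x - y|.  On a circle this kernel is
    constant, -1/(4 pi a), so plain quadrature suffices."""
    theta = (np.arange(n) + 0.5) * 2.0 * np.pi / n      # panel midpoints
    y = a * np.column_stack([np.cos(theta), np.sin(theta)])
    normals = y / a                                     # outward unit normals n_y
    x = y[0]                                            # observation point on Gamma
    d = y - x
    r2 = np.einsum('ij,ij->i', d, d)
    r2[0] = 1.0                                         # avoid 0/0; entry replaced below
    ker = -(1.0 / (2.0 * np.pi)) * np.einsum('ij,ij->i', d, normals) / r2
    ker[0] = -1.0 / (4.0 * np.pi * a)                   # limiting value at y = x
    return np.sum(ker) * (2.0 * np.pi * a / n)          # arc-length quadrature
```

On a general Lipschitz boundary the kernels are only weakly singular and a Galerkin discretization with appropriate singular quadrature is used instead; see \cref{sec:fembem}.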
\subsection{Variational formulation}
A combination of the above formulas leads to the following weak formulation,
which will be the starting point for our analysis.
\begin{problem}[Variational problem] \label{prob:variational}
Given $f \in H'_T$ and $g \in B_T'$, find $u \in Q_T$ and $\phi \in B_T$ such that
\begin{align}
\dual{\partial_t u(t)}{v}_\Omega + \product{\nabla u(t)}{\nabla v}_\Omega - \dual{\phi(t)}{v}_{\Gamma}
&= \dual{f(t)}{v}_\Omega, \label{eq:vp1}\\
\dual{(1/2-\K) u(t)|_\Gamma}{\psi}_{\Gamma} + \dual{\V \phi(t)}{\psi}_{\Gamma}
&= \dual{g(t)}{\psi}_{\Gamma} \label{eq:vp2}
\end{align}
for all test functions $v \in H=H^1(\Omega)$ and $\psi \in B=H^{-1/2}(\Gamma)$,
and for a.e. $t \in [0,T]$.
\end{problem}
\begin{remark}
\label{rem:solution}
Any sufficiently smooth solution of~\cref{eq:model1}--\cref{eq:model6}
also solves~\cref{eq:vp1}--\cref{eq:vp2} with
$\dual{f}{v}_{\Omega} = \dual{\tilde f}{v}_\Omega + \dual{\tilde h}{v}_\Gamma$ and
$\dual{g}{\psi}_\Gamma=\dual{(1/2-\K)\tilde g}{\psi}_\Gamma$ and, vice versa,
any regular solution $(u,\phi)$ of~\cref{eq:vp1}--\cref{eq:vp2}
is a classical solution of~\cref{eq:model1}--\cref{eq:model6}.
We note that $a(t)$ in~\cref{eq:model6}
can be expressed directly in terms of the normal derivative $\phi=\partial_{\normal} u_e|_\Gamma$,
once the solution $(u,\phi)$
of~\cref{eq:vp1}--\cref{eq:vp2} is known, i.e.,
$a(t)=\frac{1}{2\pi} \int_{\Gamma} \phi \,ds$.
\end{remark}
The analysis of \cref{prob:variational} is based on the following auxiliary results.
\begin{lemma} \label{lem:elliptic}
Let~\cref{as:A1} hold.
Then the linear operators $\V:H^{s-1/2}(\Gamma) \to H^{s+1/2}(\Gamma)$
and $\K : H^{s+1/2}(\Gamma) \to H^{s+1/2}(\Gamma)$, $s\in [-1/2,1/2]$, are bounded
and $\V$ is elliptic on $H^{-1/2}(\Gamma)$, i.e.,
\begin{align*}
\dual{\V \psi}{\psi}_{\Gamma} \ge C_\V \norm{\psi}{H^{-1/2}(\Gamma)}^2
\qquad \text{for all } \psi \in H^{-1/2}(\Gamma)
\end{align*}
with some $C_\V>0$ independent of $\psi$. Moreover, the bilinear form
\begin{align*}
a(u,\phi;v,\psi) := \product{\nabla u}{\nabla v}_\Omega - \dual{\phi}{v}_{\Gamma}
+ \dual{(1/2 - \K) u}{\psi}_{\Gamma} + \dual{\V \phi}{\psi}_{\Gamma},
\end{align*}
is continuous and satisfies a G\r{a}rding inequality
on $H^1(\Omega) \times H^{-1/2}(\Gamma)$, i.e.,
\begin{align*}
a(v,\psi;v,\psi) + \product{v}{v}_\Omega \ge \alpha \big(\norm{v}{H^1(\Omega)}^2 +
\norm{\psi}{H^{-1/2}(\Gamma)}^2 \big)
\end{align*}
with $\alpha>0$ independent of the functions $v \in H^1(\Omega)$ and $\psi \in H^{-1/2}(\Gamma)$.
\end{lemma}
\begin{proof}
Boundedness and ellipticity of the integral operators are well-known;
see for instance~\cite{Costabel:1988-1,McLean:2000-book}.
The G\r{a}rding inequality for the bilinear form $a(\cdot\,;\cdot)$, on the other hand, follows
directly by applying~\cite[Theorem~1]{Erath:2017-1}
with $\mathbf{A}=\mathcal{I}$, $C_{\mathbf{b}c}=1$, and $\beta=0$.
\end{proof}
Using these properties, we now prove the well-posedness of \cref{prob:variational}.
\begin{theorem}
\label{thm:wellposed}
Let~\cref{as:A1} hold. Then for any $f \in H_T'$ and $g \in B_T'$, \cref{prob:variational}
admits a unique weak solution
$(u,\phi) \in Q_T \times B_T$ and
\begin{align*}
\norm{u}{Q_T} + \norm{\phi}{B_T} \le C ( \norm{f}{H_T'} + \norm{g}{B_T'})
\end{align*}
with a constant $C>0$ that only depends on the domain $\Omega$ and the time horizon $T$.
\end{theorem}
\begin{proof}
Since $\V$ is elliptic and thus invertible, we can use~\cref{eq:vp2} to express
$\phi(t) = \S u(t) + \R g(t)$ with $\S=\V^{-1} (\K - 1/2)$ and $\R=\V^{-1}$.
Then~\cref{eq:vp1} can be reduced to
\begin{align} \label{eq:reduced}
\dual{\partial_t u(t)}{v}_\Omega + \tilde a(u(t),v) = \dual{f(t)}{v}_{\Omega} + \dual{\R g(t)}{v}_{\Gamma}
\end{align}
with the bilinear form $\tilde a(u,v) := \product{\nabla u}{\nabla v}_\Omega - \dual{\S u}{v}_{\Gamma}$.
From the G\r{a}rding inequality for the bilinear form $a(\cdot,\cdot)$
in \cref{lem:elliptic} with
$\psi = \V^{-1}(\K-1/2) v$ we deduce that for all $v\in H^1(\Omega)$
\begin{align*}
\tilde a(v,v) + \product{v}{v}_\Omega
&= a(v,\psi;v,\psi) + \product{v}{v}_\Omega
\ge \alpha \norm{v}{{H^1(\Omega)}}^2.
\end{align*}
Thus $\tilde a(u,v)$ satisfies the G\r{a}rding inequality on $H^1(\Omega)$.
Consequently, the reduced problem~\cref{eq:reduced} is uniformly parabolic.
The assertions for $u$ in~\cref{eq:reduced} then follow from standard results about variational
evolution problems, see, e.g.,~\cite[Ch.~XVIII, Par.~3]{Dautray:1992-5}
and~\cite[Part II, Sec. 7.1.2]{Evans:2010-book}.
To bound the second solution component $\phi$
we use~\cref{eq:vp2} and the ellipticity of $\V$ which gives
\begin{align*}
C_\V \norm{\phi(t)}{{H^{-1/2}(\Gamma)}}^2
&\le \dual{\V \phi(t)}{\phi(t)}_{\Gamma}
=-\dual{(1/2-\K) u(t)}{\phi(t)}_{\Gamma} + \dual{g(t)}{\phi(t)}_{\Gamma} \\
&\le \big((1/2+C_\K) C_{tr} \norm{u(t)}{H^1(\Omega)}
+ \norm{g(t)}{H^{1/2}(\Gamma)} \big) \norm{\phi(t)}{H^{-1/2}(\Gamma)}.
\end{align*}
In the last step, we used the trace inequality and
the boundedness of $\K$.
\end{proof}
\begin{corollary}
For $\tilde f \in H'_T$, $\tilde g\in B'_T$, and $\tilde h\in B_T$ our model
problem~\cref{eq:model1}--\cref{eq:model6} admits a unique weak solution $(u,\phi) \in Q_T \times B_T$
and
\begin{align*}
\norm{u}{Q_T} + \norm{\phi}{B_T}\le C( \norm{\tilde f}{H_T'} + \norm{\tilde h}{B_T} + \norm{\tilde g}{B_T'}).
\end{align*}
\end{corollary}
\begin{proof}
This follows directly from \cref{thm:wellposed} with~\cref{eq:f} and \cref{eq:g}.
\end{proof}
\section{Galerkin approximation} \label{sec:galerkin}
Let $H^h \subset H^1(\Omega)$ and $B^h \subset H^{-1/2}(\Gamma)$ be finite dimensional
subspaces. As before, we define corresponding Bochner spaces $H^h_T = L^2(0,T;H^h)$
and $B_T^h = L^2(0,T;B^h)$ and
the corresponding energy space is denoted by $Q^h_T = \set{v_h \in H^1(0,T;H^h)}{v_h(0)=0}$.
Then we consider the following Galerkin approximation of \cref{prob:variational}.
\begin{problem}[Semi-discrete problem]
\label{prob:semidiscrete}
Find $u_h \in Q^h_T$ and $\phi_h \in B^h_T$ such that
\begin{align}
\product{\partial_t u_h(t)}{v_h}_\Omega + \product{\nabla u_h(t)}{\nabla v_h}_\Omega
- \product{\phi_h(t)}{v_h}_{\Gamma} &= \dual{f(t)}{v_h}_\Omega \label{eq:vp1h}\\
\product{(1/2-\K) u_h(t)}{\psi_h}_{\Gamma} + \product{\V \phi_h(t)}{\psi_h}_{\Gamma}
&= \product{g(t)}{\psi_h}_{\Gamma} \label{eq:vp2h}
\end{align}
for all test functions $v_h \in H^h$ and $\psi_h \in B^h$, and for a.e. $t \in [0,T]$.
\end{problem}
The analysis of this Galerkin approximation can be carried out with similar arguments as used
in~\cite{Costabel:1990} and~\cite{MacCamy:1987}. In addition, we make use of \cref{lem:elliptic}
to dispense with the smoothness assumption on $\Gamma$.
For convenience of the reader and later reference,
we briefly state the main results and sketch the basic ideas of their proofs.
Due to \cref{lem:elliptic}, the well-posedness of the above problem follows
again by standard energy arguments.
\begin{lemma} \label{lem:wellposedh}
Let~\cref{as:A1} hold. Then \cref{prob:semidiscrete} has a unique solution.
Moreover,
\begin{align}
\label{eq:energyh}
\norm{u_h}{H_T} + \norm{\phi_h}{B_T} \le C \big( \norm{f}{H_T'} + \norm{g}{B_T'} \big)
\end{align}
with a constant $C>0$ that is independent of the data $f$, $g$ and the spaces $H^h,B^h$.
\end{lemma}
\begin{proof}
We proceed with similar arguments as in the proof of \cref{thm:wellposed}:
First, we use~\cref{eq:vp2h} to express $\phi_h(t) = \S_h u_h(t) + \R_h g(t)$,
where $\S_h:H^h \to B^h$ is defined by
\begin{align}
\label{eq:Sh}
\dual{\V \S_h u_h}{\psi_h}_{\Gamma}
= \dual{(\K-1/2) u_h}{\psi_h}_{\Gamma} \qquad \text{for all } \psi_h \in B^h
\end{align}
and $\R_h : H^{1/2}(\Gamma) \to B^h$ is defined by
\begin{align}
\label{eq:Rh}
\dual{\V \R_h g}{\psi_h}_{\Gamma} = \dual{g}{\psi_h}_{\Gamma} \qquad \text{for all } \psi_h \in B^h.
\end{align}
Due to the Lax-Milgram Lemma both equations~\cref{eq:Sh}--\cref{eq:Rh}
have unique solutions since
$\V:H^{-1/2}(\Gamma)\to H^{1/2}(\Gamma)$ is bounded and elliptic,
and $B^h\subset B$ is a finite dimensional and thus closed subspace.
Hence, $\S_h$ and $\R_h$ are well-defined.
Furthermore, it directly follows that $\norm{\R_h g}{H^{-1/2}(\Gamma)} \le C_\V^{-1} \norm{g}{H^{1/2}(\Gamma)}$.
Then~\cref{eq:vp1h} can again be reduced to an ordinary differential equation
\begin{align}
\label{eq:reducedproblemh}
\product{\partial_t u_h(t)}{v_h}_\Omega + \tilde a_h(u_h(t),v_h) = \dual{f(t)}{v_h}_\Omega
+ \product{\R_h g(t)}{v_h}_{\Gamma}
\end{align}
with bilinear form $\tilde a_h(u_h,v_h) = \product{\nabla u_h}{\nabla v_h}_\Omega
- \product{\S_h u_h}{v_h}_{\Gamma}$. Using \cref{lem:elliptic} with $u=v=u_h$ and
$\phi=\psi=\psi_h=\S_h u_h$, where $\S_h$ is defined by~\cref{eq:Sh}, we
obtain for all $u_h\in H^h$ that
\begin{align}
\label{eq:ahgarding}
\tilde a_h(u_h,u_h) + \product{u_h}{u_h}_\Omega
= a(u_h,\psi_h;u_h,\psi_h)+ \product{u_h}{u_h}_\Omega
\ge \alpha \norm{u_h}{H^1(\Omega)}^2.
\end{align}
Existence and uniqueness of a solution to the reduced problem~\cref{eq:reducedproblemh}
and the estimates for $\norm{u_h}{H_T}$ can again be obtained from the abstract
results of~\cite{Dautray:1992-5,Evans:2010-book}.
To estimate $\norm{\phi_h(t)}{H^{-1/2}(\Gamma)}$
we use~\cref{eq:vp2h} and the same arguments as in the proof of \cref{thm:wellposed} and get
\begin{align}
\label{eq:energyphih}
C_\V\norm{\phi_h(t)}{H^{-1/2}(\Gamma)}
\le (1/2+C_\K) C_{tr} \norm{u_h(t)}{H^1(\Omega)}
+ \norm{g(t)}{H^{1/2}(\Gamma)}.
\end{align}
\end{proof}
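In matrix form, the elimination $\phi_h = \S_h u_h + \R_h g$ used in the proof is a Schur complement of the boundary element block. A schematic sketch with synthetic matrices, verifying that the reduced system reproduces the solution of the coupled block system; no actual BEM assembly is performed, and all matrix names are illustrative stand-ins for the Galerkin matrices of $\V$, $1/2-\K$, the trace coupling, and the interior bilinear form:

```python
import numpy as np

rng = np.random.default_rng(0)
nH, nB = 8, 5

# synthetic Galerkin matrices; Vmat must be elliptic (here: s.p.d.)
Vmat = rng.standard_normal((nB, nB)); Vmat = Vmat @ Vmat.T + nB * np.eye(nB)
Cmat = rng.standard_normal((nB, nH))      # stand-in for <(1/2 - K) b_j, chi_i>
Tmat = rng.standard_normal((nH, nB))      # stand-in for the trace coupling <chi_j, b_i>
Amat = rng.standard_normal((nH, nH)); Amat = Amat @ Amat.T + nH * np.eye(nH)
f = rng.standard_normal(nH); g = rng.standard_normal(nB)

# coupled stationary system:  A u - T phi = f,   C u + V phi = g
K = np.block([[Amat, -Tmat], [Cmat, Vmat]])
u_phi = np.linalg.solve(K, np.concatenate([f, g]))

# reduced system after eliminating  phi = V^{-1} (g - C u)
Ared = Amat + Tmat @ np.linalg.solve(Vmat, Cmat)
fred = f + Tmat @ np.linalg.solve(Vmat, g)
u = np.linalg.solve(Ared, fred)
phi = np.linalg.solve(Vmat, g - Cmat @ u)
```

Both routes yield the same discrete solution; in the time-dependent setting this elimination is performed in every step.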
In order to obtain a uniform estimate also for the time derivative $\partial_t u_h$,
which is not included in~\cref{eq:energyh}, we proceed with similar arguments
as~\cite{MacCamy:1987,Costabel:1990}.
Let $P_h : L^2(\Omega) \to H^h$ denote the $L^2$-orthogonal projection defined by
\begin{align}\label{eq:projection}
\product{P_h v}{w_h}_\Omega = \product{v}{w_h}_\Omega \qquad \text{for all } w_h \in H^h.
\end{align}
We will assume that the $L^2$-projection $P_h$ is stable in $H^1(\Omega)$, i.e.,
there exists a constant $C_P>0$ such that
\begin{align}
\label{as:A2}
\tag{A2}
\norm{P_h v}{H^1(\Omega)} \le C_P \norm{v}{H^1(\Omega)} \text{ for all }v \in H^1(\Omega).
\end{align}
This imposes a mild condition on the approximation
space $H^h$, which is not very restrictive in practice; see \cref{sec:fembem} for an example.
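Computing the projection~\cref{eq:projection} in practice amounts to solving a linear system with the mass matrix of $H^h$. A one-dimensional sketch with continuous piecewise linear hat functions, used here only as a stand-in for the two-dimensional space $H^h$; the function names are illustrative:

```python
import numpy as np

def l2_projection_coeffs(v, nodes):
    """Nodal coefficients of the L2-orthogonal projection P_h v onto
    continuous piecewise linears on a 1-D mesh: solve M c = b with the
    P1 mass matrix M and load vector b_i = (v, phi_i)."""
    n, h = len(nodes), np.diff(nodes)
    M = np.zeros((n, n))
    b = np.zeros(n)
    gp = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss (exact to degree 3)
    for e in range(n - 1):
        # local P1 mass matrix on element e
        M[e:e+2, e:e+2] += h[e] / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        # local load vector by Gauss quadrature
        xm, hw = 0.5 * (nodes[e] + nodes[e+1]), 0.5 * h[e]
        xq = xm + hw * gp
        lam = (xq - nodes[e]) / h[e]            # local coordinate in [0,1]
        b[e]   += hw * np.sum(v(xq) * (1.0 - lam))
        b[e+1] += hw * np.sum(v(xq) * lam)
    return np.linalg.solve(M, b)
```

Since an affine function already lies in the space, its projection reproduces the nodal values; assumption~\cref{as:A2} concerns the stability of this map in the $H^1$-norm.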
Property (A2) and equation~\cref{eq:vp1h} can now be used
to deduce a uniform bound for the norm $\norm{\partial_t u_h}{H^1(\Omega)'}$ of the time derivative
and the following energy estimate.
\begin{lemma}[Discrete energy estimate] \label{lem:energyh}
Let~\cref{as:A1}--\cref{as:A2} hold. Then
\begin{align*}
\norm{u_h}{Q_T} + \norm{\phi_h}{B_T} \le C \big( \norm{f}{H_T'} + \norm{g}{B_T'} \big)
\end{align*}
with a constant $C>0$ independent of $f,g$ and the approximation spaces $H^h$ and $B^h$.
\end{lemma}
\begin{proof}
By definition of the dual norm and the $L^2$-projection, we obtain
\begin{align}
\label{eq:dualnorm}
\norm{\partial_t u_h(t)}{H^{1}(\Omega)'}
&= \sup_{0\not= v \in H^1(\Omega)} \frac{\product{\partial_t u_h(t)}{v}_\Omega}{\norm{v}{H^1(\Omega)}}
= \sup_{0\not= v \in H^1(\Omega)} \frac{\product{\partial_t u_h(t)}{P_h v}_\Omega}{\norm{v}{H^1(\Omega)}}.
\end{align}
Using equation~\cref{eq:vp1h}, the Cauchy-Schwarz inequality, and the trace inequality,
one can further estimate
\begin{align*}
\product{\partial_t u_h(t)}{P_h v}_\Omega \le \big(\norm{u_h(t)}{H^1(\Omega)}
+ C_{tr} \norm{\phi_h(t)}{H^{-1/2}(\Gamma)} + \norm{f(t)}{H^{1}(\Omega)'} \big) \norm{P_h v}{H^1(\Omega)}.
\end{align*}
Therefore, assumption~\cref{as:A2} yields
\begin{align*}
\norm{\partial_t u_h(t)}{H^{1}(\Omega)'}
\le C \big( \norm{u_h(t)}{H^1(\Omega)} + \norm{\phi_h(t)}{H^{-1/2}(\Gamma)}
+ \norm{f(t)}{H^{1}(\Omega)'} \big).
\end{align*}
Then the assertion of the lemma
follows by integration over time and combination with the
estimates~\cref{eq:energyh} for $\norm{u_h}{H_T}$ and $\norm{\phi_h}{B_T}$
stated in \cref{lem:wellposedh}.
\end{proof}
By combination of the previous lemmas and the variational problems defining
the continuous and the semi-discrete solution, we now obtain the following result.
\begin{theorem}[Quasi-best-approximation]
\label{thm:quasioptimality}
Let~\cref{as:A1}--\cref{as:A2} hold. Furthermore, $(u,\phi) \in Q_T \times B_T$
and $(u_{h},\phi_{h}) \in Q_T^{h} \times B_T^{h}$ denote the solutions of
\cref{prob:variational} and \cref{prob:semidiscrete}, respectively.
Then there holds that
\begin{align*}
\norm{u - u_h}{Q_T} + \norm{\phi-\phi_h}{B_T} \le C \big( \norm{u - \tilde u_h}{Q_T}
+ \norm{\phi-\tilde \phi_h}{B_T} \big)
\end{align*}
for all functions $\tilde u_h \in Q_T^h$ and $\tilde \phi_h \in B_T^h$ with a
constant $C>0$ which is independent of the problem data $f,g$ and of the spaces $H^h$ and $B^h$.
\end{theorem}
\begin{proof}
This result was first proven in~\cite{Costabel:1990}
for the symmetric coupling method.
Using \cref{lem:elliptic},
their proof can be adapted to the non-symmetric coupling as well.
For convenience of the reader and later reference, we only repeat the main arguments:
Let $\tilde u_h \in Q_T^h$ and $\tilde \phi_h \in B_T^h$ be arbitrary.
By
\begin{align*}
\norm{u - u_h}{Q_T} &\le \norm{u - \tilde u_h}{Q_T} + \norm{\tilde u_h - u_h}{Q_T} \qquad \text{and}\\
\norm{\phi - \phi_h}{B_T} &\le \norm{\phi - \tilde \phi_h}{B_T} + \norm{\tilde \phi_h - \phi_h}{B_T}
\end{align*}
we split the error into an \emph{approximation error} and a \emph{discrete error} component.
The first part already appears in the final estimate.
To estimate the discrete error components we note that the discrete problem~\cref{eq:vp1h}--\cref{eq:vp2h}
is consistent with the continuous problem~\cref{eq:vp1}--\cref{eq:vp2}.
Hence, we may write the discrete error components
$w_h = \tilde u_h - u_h$ and $\rho_h = \tilde \phi_h - \phi_h$ as the solution of the system
\begin{align}
\label{eq:discerror1}
\product{\partial_t w_h(t)}{v_h}_\Omega + \product{\nabla w_h(t)}{\nabla v_h}_\Omega
- \product{\rho_h(t)}{v_h}_{\Gamma} &= \dual{F(t)}{v_h}_\Omega\\
\label{eq:discerror2}
\product{(1/2-\K) w_h(t)}{\psi_h}_{\Gamma} + \product{\V \rho_h(t)}{\psi_h}_{\Gamma}
&= \dual{G(t)}{\psi_h}_{\Gamma}
\end{align}
for all $v_h\in H^h$ and $\psi_h\in B^h$ with the right-hand sides $F(t)$ and $G(t)$ defined by
\begin{align*}
\dual{F(t)}{v}_{\Omega}
&:= \dual{\partial_t \tilde u_h(t) - \partial_t u(t)}{v}_\Omega
+ \product{\nabla \tilde u_h(t) - \nabla u(t)}{\nabla v}_\Omega
- \dual{\tilde \phi_h(t) - \phi(t)}{v}_{\Gamma},\\
\dual{G(t)}{\psi}_{\Gamma}
&:= \dual{(1/2-\K) (\tilde u_h(t) - u(t))}{\psi}_{\Gamma}
+ \dual{\V (\tilde \phi_h(t) - \phi(t))}{\psi}_{\Gamma}
\end{align*}
for all $v\in H$ and $\psi\in B$.
With the bounds from the integral and trace operators,
the Cauchy-Schwarz inequality, and integrating with respect to time, one can see that
\begin{align*}
\norm{F}{L^2(0,T;H^1(\Omega)')}
&\le C \big( \norm{u - \tilde u_h}{Q_T} + \norm{\phi - \tilde \phi_h}{B_T}\big) \\
\norm{G}{L^2(0,T;H^{1/2}(\Gamma))}
& \le C \big( \norm{u - \tilde u_h}{H_T} + \norm{\phi - \tilde \phi_h}{B_T}\big).
\end{align*}
Note that the system~\cref{eq:discerror1}--\cref{eq:discerror2} with the right-hand sides $F$ and $G$
has the same form as~\cref{eq:vp1h}--\cref{eq:vp2h}.
Therefore, \cref{lem:energyh} applies and finally shows that
\begin{align*}
\norm{\tilde u_h - u_h}{Q_T} + \norm{\tilde \phi_h - \phi_h}{B_T}
\le C \big( \norm{u - \tilde u_h}{Q_T} + \norm{\phi - \tilde \phi_h}{B_T} \big).
\end{align*}
Together with the error splitting this completes the proof.
\end{proof}
\begin{remark}
As a direct consequence of \cref{thm:quasioptimality}, we also obtain
\begin{align*}
\norm{u - u_h}{Q_T} + \norm{\phi-\phi_h}{B_T} \le C \big( \norm{u - P_h u}{Q_T}
+ \norm{\phi-\Pi_h \phi}{B_T} \big),
\end{align*}
where $P_h:H^1(\Omega)\to H^h$ is the $L^2(\Omega)$-projection operator introduced
in~\cref{eq:projection}, $\Pi_h:H^{-1/2}(\Gamma)\to B^h$ is the $H^{-1/2}(\Gamma)$-projection
operator, and $C>0$ is independent of the data and the approximation spaces.
This allows us to obtain explicit error bounds for particular choices of
approximation spaces by using interpolation error estimates in the energy spaces;
see \cref{sec:fembem} for an example.
\end{remark}
\section{Time discretization} \label{sec:time}
For the time discretization of the Galerkin approximation,
we consider a particular one-step method that allows us to
establish quasi-optimality of a fully discrete scheme under minimal regularity assumptions.
Let us note that a similar method was used in~\cite[Sec. 4.1.]{Tantardini:2014-1} for the discretization
of a parabolic problem.
First of all, we introduce some notation which we need to formulate our time discretization scheme.
Let $0=t^0 < t^1 < \ldots < t^N = T$, $N\in\mathbb{N}$ be a partition of the time interval $[0,T]$.
Further, we denote by $\tau^n = t^n-t^{n-1}$ the local time step sizes and
set $\tau := \max_{n=1,\ldots,N } \tau^{n}$.
In this section we search for approximations $u_{h,\tau} \in Q_T^{h,\tau}$
and $\phi_{h,\tau} \in B_T^{h,\tau}$ with
\begin{align*}
Q_T^{h,\tau}
&:=\set{u \in C([0,T];H^h)}{u(0) = 0, u|_{[t^{n-1},t^n]} \text{ is linear in } t} \qquad \text{and} \\
B_T^{h,\tau}
&:= \set{\phi \in L^2(0,T;B^h)}{\phi|_{(t^{n-1},t^n]} \text{ is constant in } t}.
\end{align*}
Furthermore, for sufficiently regular functions in $t$, we denote by $v^n = v(t^n)$
the values at the grid points.
For $u_{h,\tau}\in Q_T^{h,\tau}$ the operator $\partial_t$ has to be understood
piecewise with respect to the time mesh, in particular,
\begin{align}
\label{eq:discderivative}
\partial_t u_{h,\tau}|_{(t^{n-1},t^n)}=d_\tau u_{h,\tau}^n
\qquad\text{with}\qquad d_\tau u_{h,\tau}^n := \frac{1}{\tau^n} (u_{h,\tau}^n - u_{h,\tau}^{n-1}).
\end{align}
We further introduce weighted averages
\begin{align} \label{eq:hatv}
\widehat{v}^n = \frac{1}{\tau^n} \int_{t^{n-1}}^{t^n} v(t) \omega^n(t)\,dt
\qquad\text{with }\omega^n(t)=\frac{6t-2t^n-4t^{n-1}}{\tau^n}
\end{align}
and define our fully discrete system as follows:
\begin{problem}[Full discretization]
\label{prob:fullydiscreteweight}
Find $u_{h,\tau} \in Q_T^{h,\tau}$ and $\phi_{h,\tau} \in B_T^{h,\tau}$ such that
\begin{align}
\label{eq:vp1htauweight}
\product{\widehat{\partial_t u}_{h,\tau}^n}{v_h}_\Omega
+ \product{\widehat{\nabla u}_{h,\tau}^n}{\nabla v_h}_\Omega
- \product{\widehat{\phi}_{h,\tau}^n}{v_h}_{\Gamma} &= \dual{\widehat{f}^n}{v_h}_\Omega,\\
\label{eq:vp2htauweight}
\product{(1/2-\K) \widehat{u}_{h,\tau}^n} {\psi_h}_{\Gamma}
+ \product{\V\widehat{\phi}_{h,\tau}^n}{\psi_h}_{\Gamma} &= \product{\widehat{g}^n}{\psi_h}_{\Gamma}
\end{align}
for all $v_h \in H^h \subset H^1(\Omega)$ and $\psi_h \in B^h \subset H^{-1/2}(\Gamma)$
and for all $1 \le n \le N$.
\end{problem}
\begin{remark}
\label{rem:classicalEuler}
We have chosen the piecewise linear weight function $\omega^n(t)$ in~\cref{eq:hatv} such that
for all $n\in\mathbb{N}$, $u_{h,\tau} \in Q_T^{h,\tau}$, and $\phi_{h,\tau} \in B_T^{h,\tau}$
there holds that
\begin{align}
\label{eq:identities}
\widehat{u}_{h,\tau}^n = u_{h,\tau}^n, \qquad
\widehat{\partial_t u}_{h,\tau}^n = d_\tau u_{h,\tau}^n= \frac{1}{\tau^n} (u_{h,\tau}^n - u_{h,\tau}^{n-1}),
\quad \text{ and} \quad
\widehat{\phi}_{h,\tau}^n = \phi_{h,\tau}^n.
\end{align}
Thus the discrete system \cref{prob:fullydiscreteweight} is equivalent to
\begin{align}
\label{eq:vp1htau}
\product{d_\tau u_{h,\tau}^n}{v_h}_\Omega
+ \product{\nabla u_{h,\tau}^n}{\nabla v_h}_\Omega
- \product{\phi_{h,\tau}^n}{v_h}_{\Gamma} &= \dual{\widehat{f}^n}{v_h}_\Omega,\\
\label{eq:vp2htau}
\product{(1/2-\K) u_{h,\tau}^n} {\psi_h}_{\Gamma}
+ \product{\V\phi_{h,\tau}^n}{\psi_h}_{\Gamma} &= \product{\widehat{g}^n}{\psi_h}_{\Gamma}
\end{align}
for all $v_h \in H^h \subset H^1(\Omega)$ and $\psi_h \in B^h \subset H^{-1/2}(\Gamma)$,
and for all $1 \le n \le N$.
Hence, the fully discrete scheme \cref{prob:fullydiscreteweight}
amounts to a discretization of \cref{prob:semidiscrete}
in time by a variant of the implicit Euler method,
i.e., it differs only in the right-hand side which is
treated in a special way in
order to reduce the regularity requirements on the data.
An error analysis of the coupling with
the classical implicit Euler scheme and other
time discretizations in the natural energy norm is also possible. However, one needs the usual
Taylor expansions and therefore some regularity on the data $\tilde f$, $\tilde g$, $\tilde h$,
and the solution.
\end{remark}
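In implementation terms, after eliminating $\phi_{h,\tau}^n$ as in \cref{lem:wellposedh}, each time step of~\cref{eq:vp1htau} requires the solution of one linear system with the matrix $M/\tau^n + \tilde A$, where $M$ is the mass matrix. A minimal sketch for a uniform step size and a fixed reduced system; the matrix and function names are illustrative:

```python
import numpy as np

def euler_variant_sweep(M, A, rhs_hat, tau):
    """Time stepping for the reduced system
        M d_tau u^n + A u^n = rhs_hat^n,   u^0 = 0,
    i.e. (M/tau + A) u^n = (M/tau) u^{n-1} + rhs_hat^n per step, where
    rhs_hat^n are the weighted averages of the data (uniform step tau)."""
    u = np.zeros(M.shape[0])
    history = [u]
    lhs = M / tau + A                      # constant for uniform steps
    for fn in rhs_hat:
        u = np.linalg.solve(lhs, M @ u / tau + fn)
        history.append(u)
    return np.array(history)
```

For constant data the scheme coincides with the classical implicit Euler method and approaches the stationary solution; only the treatment of rough right-hand sides differs.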
\begin{remark}
\label{rem:consistency}
By testing~\cref{eq:vp1}--\cref{eq:vp2} with $v=v_h$ and $\psi=\psi_h$,
multiplication with the weight function $\omega^n$, and
integration over the time interval $[t^{n-1},t^n]$, one can see that
\begin{align*}
\dual{\widehat{\partial_t u}^n\!}{v_h}_\Omega + \product{\widehat{\nabla u}^n\!}{\nabla v_h}_\Omega
- \dual{\widehat{\phi}^n}{v_h}_{\Gamma} &= \dual{\widehat{f}^n}{v_h}_\Omega,\\
\dual{(1/2-\K) \widehat{u}^n}{\psi_h}_{\Gamma} + \dual{\V \widehat{\phi}^n}{\psi_h}_{\Gamma}
&= \dual{\widehat{g}^n}{\psi_h}_{\Gamma}
\end{align*}
for all $v_{h} \in H^h$, $\psi_h \in B^h$, and all $1 \le n \le N$.
This shows that the fully discrete scheme~\cref{eq:vp1htauweight}--\cref{eq:vp2htauweight}
is a Petrov-Galerkin approximation and thus is
consistent with the variational problem~\cref{eq:vp1}--\cref{eq:vp2}.
\end{remark}
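The defining properties of the weight $\omega^n$, namely $\widehat{v}^n = v(t^n)$ for affine $v$ (which yields the identities in \cref{rem:classicalEuler}) and $\norm{\omega^n}{L^2(t^{n-1},t^n)}^2 = 4\tau^n$ (used in the proof of \cref{lem:wellposedhtau} below), can also be checked numerically. A small sketch using the composite midpoint rule; names are illustrative:

```python
import numpy as np

def weighted_average(v, t0, t1, nq=4000):
    """hat{v} = (1/tau) int_{t0}^{t1} v(t) omega(t) dt with the weight
    omega(t) = (6t - 2*t1 - 4*t0)/tau, via the composite midpoint rule."""
    tau = t1 - t0
    t = t0 + (np.arange(nq) + 0.5) * tau / nq
    omega = (6.0 * t - 2.0 * t1 - 4.0 * t0) / tau
    return np.sum(v(t) * omega) / nq       # (tau/nq) quadrature weight over tau

def omega_norm_sq(t0, t1, nq=4000):
    """int_{t0}^{t1} omega(t)^2 dt, again by the midpoint rule."""
    tau = t1 - t0
    t = t0 + (np.arange(nq) + 0.5) * tau / nq
    omega = (6.0 * t - 2.0 * t1 - 4.0 * t0) / tau
    return np.sum(omega**2) * tau / nq
```

In particular, the average of an affine function reproduces its value at the right endpoint $t^n$, while constants are reproduced exactly.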
In the following, we derive error estimates for the fully discrete scheme in the energy norm
by an extension of our arguments for the analysis of the Galerkin semi-discretization.
Let us start with establishing the corresponding fully discrete energy estimate.
\begin{lemma}[Well-posedness]
\label{lem:wellposedhtau}
Let~\cref{as:A1} hold and $\tau \le 1/4$.
Then for any $f\in H'_T$ and $g\in B'_T$, \cref{prob:fullydiscreteweight} admits a unique solution and
\begin{align}
\label{eq:energyht}
\norm{u_{h,\tau}}{H_T} + \norm{\phi_{h,\tau}}{B_T}
\le C e^{2N\tau} \big( \norm{f}{H_T'} + \norm{g}{B_T'}\big)
\end{align}
with a constant $C>0$ that depends only on the domain $\Omega$.
If the bilinear form $a(v,\psi;v,\psi)$ is elliptic,
the factor $e^{2N\tau}$ on the right-hand side of~\cref{eq:energyht} can be omitted.
\end{lemma}
\begin{proof}
We recall the notation of \cref{lem:wellposedh} with $\phi^n_{h,\tau}=\S_h u^n_{h,\tau}+\R_h \widehat g^n$
and $\tilde a_h(u^n_{h,\tau},v_h) = \product{\nabla u^n_{h,\tau}}{\nabla v_h}_\Omega
- \product{\S_h u^n_{h,\tau}}{v_h}_{\Gamma}$,
where $\S_h$ and $\R_h$ are defined in~\cref{eq:Sh} and~\cref{eq:Rh}, respectively.
Next we rewrite the equivalent formulation \cref{eq:vp1htau}--\cref{eq:vp2htau}
of our discrete \cref{prob:fullydiscreteweight} as
\begin{align*}
\frac{1}{\tau^{n}} \product{u_{h,\tau}^{n} - u_{h,\tau}^{n-1}}{v_h}_\Omega
+ \tilde a_h(u_{h,\tau}^{n},v_h) = \dual{\widehat{f}^{n}}{v_h}_{\Omega}
+ \product{\R_h\widehat{g}^{n}}{v_h}_{\Gamma}.
\end{align*}
By testing with $v_h = u_{h,\tau}^{n}$
and using the relation $-ab=-\frac{1}{2}a^2-\frac{1}{2} b^2+\frac{1}{2} (a-b)^2$,
we apply the Cauchy-Schwarz, trace, and Young inequalities as well as the
G\r{a}rding inequality~\cref{eq:ahgarding} for the bilinear form $\tilde a_h(\cdot,\cdot)$
to get
\begin{align*}
&\frac{1}{2\tau^n} \norm{u_{h,\tau}^n}{L^2(\Omega)}^2
-\frac{1}{2\tau^n} \norm{u_{h,\tau}^{n-1}}{L^2(\Omega)}^2
+ \frac{1}{2\tau^n} \norm{u_{h,\tau}^n - u_{h,\tau}^{n-1}}{L^2(\Omega)}^2
+ \alpha \norm{u_{h,\tau}^{n}}{H^1(\Omega)}^2 \\
&\qquad\le
\norm{u_{h,\tau}^{n}}{L^2(\Omega)}^2
+ \frac{\alpha}{2}\norm{u_{h,\tau}^{n}}{H^1(\Omega)}^2
+ \frac{1}{\alpha} \norm{\widehat{f}^{n}}{H^1(\Omega)'}^2
+ \frac{C_\V^{-1}C_{tr}^2}{\alpha} \norm{\widehat{g}^{n}}{H^{1/2}(\Gamma)}^2.
\end{align*}
Additionally, we have used $\norm{\R_h \widehat{g}^{n}}{H^{-1/2}(\Gamma)} \le
C_\V^{-1} \norm{\widehat{g}^{n}}{H^{1/2}(\Gamma)}$
for the operator $\R_h$ defined in~\cref{eq:Rh},
where $C_\V$ is the ellipticity constant of $\V$.
This shows that the problems are uniquely solvable at every time step.
Multiplying with $2\tau^n(1-2\tau)^{n-1}$, rearranging the terms,
and using the fact that $\tau^n\leq \tau\leq 1/4$,
a Gronwall argument, see, e.g.,~\cite{Wheeler:1973},
leads to
\begin{align}
\label{eq:energyht1}
\norm{u_{h,\tau}^N}{L^2(\Omega)}^2 + \alpha\sum_{n=1}^N \tau^n \norm{u_{h,\tau}^{n}}{H^1(\Omega)}^2
\le C e^{2N\tau} \sum_{n=1}^N \tau^n (\norm{\widehat{f}^{n}}{H^1(\Omega)'}^2
+\norm{\widehat{g}^{n}}{H^{1/2}(\Gamma)}^2),
\end{align}
with a constant $C>0$.
Since $u_{h,\tau}$ and $\phi_{h,\tau}$ are piecewise linear and constant, respectively,
we easily see that
\begin{align}
\label{eq:energyht2}
\norm{u_{h,\tau}}{H_T}^2\leq \frac{4}{3}\sum_{n=1}^N \tau^n \norm{u_{h,\tau}^{n}}{H^1(\Omega)}^2
\quad\text{ and }\quad
\norm{\phi_{h,\tau}}{B_T}^2\leq \sum_{n=1}^N \tau^n \norm{\phi_{h,\tau}^{n}}{H^{-1/2}(\Gamma)}^2.
\end{align}
For the right-hand side of~\cref{eq:energyht1} it follows
directly by the Cauchy-Schwarz inequality and
$\norm{\omega^n(t)}{L^2(t^{n-1},t^n)}^2=4\tau^n$ that
\begin{align}
\label{eq:energyht3}
\sum_{n=1}^N \tau^n \norm{\widehat{f}^{n}}{H^1(\Omega)'}^2
\leq 4\norm{f}{H_T'}^2\qquad\text{and}\qquad
\sum_{n=1}^N \tau^n \norm{\widehat{g}^{n}}{H^{1/2}(\Gamma)}^2
\leq 4\norm{g}{B_T'}^2.
\end{align}
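For the first of these bounds, the argument can be spelled out as follows; here we assume the representation $\widehat{f}^{n} = \frac{1}{\tau^n}\int_{t^{n-1}}^{t^n} w^n(t) f(t)\,dt$ with the weight $w^n$ fixed in~\cref{eq:hatv}. Then
\begin{align*}
\norm{\widehat{f}^{n}}{H^1(\Omega)'}
\le \frac{1}{\tau^n}\int_{t^{n-1}}^{t^n} |w^n(t)|\,\norm{f(t)}{H^1(\Omega)'}\,dt
\le \frac{1}{\tau^n}\norm{w^n}{L^2(t^{n-1},t^n)}
\Big(\int_{t^{n-1}}^{t^n}\norm{f(t)}{H^1(\Omega)'}^2\,dt\Big)^{1/2}.
\end{align*}
Squaring, multiplying by $\tau^n$, and using $\norm{w^n}{L^2(t^{n-1},t^n)}^2=4\tau^n$ gives
$\tau^n\norm{\widehat{f}^{n}}{H^1(\Omega)'}^2 \le 4\int_{t^{n-1}}^{t^n}\norm{f(t)}{H^1(\Omega)'}^2\,dt$;
summation over $n$ yields the first estimate, and the bound for $\widehat{g}^{n}$ follows in the same way.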
With~\cref{eq:vp2htau} and the same arguments as for~\cref{eq:energyphih} we get
the bound
\begin{align}
\label{eq:energyht4}
C_\V\norm{\phi^n_{h,\tau}}{H^{-1/2}(\Gamma)}
\le (1/2+C_\K) C_{tr} \norm{u^n_{h,\tau}}{H^1(\Omega)}
+ \norm{\widehat{g}^n}{H^{1/2}(\Gamma)}.
\end{align}
Now the energy estimate~\cref{eq:energyht} follows from~\cref{eq:energyht1}--\cref{eq:energyht4}.
\end{proof}
With similar arguments as used for the analysis on the semi-discrete level,
we also obtain a bound for the time derivative $\partial_t u_{h,\tau}$ of the discrete solution.
\begin{lemma}[Energy estimate]
\label{lem:fullydiscreteenergy}
Let~\cref{as:A1}--\cref{as:A2} hold and $\tau \le 1/4$. Then
\begin{align}
\label{eq:fullydiscreteenergy}
\norm{u_{h,\tau}}{Q_T} + \norm{\phi_{h,\tau}}{B_T}
\le C \big( \norm{f}{H_T'} +\norm{g}{B_T'}\big).
\end{align}
The constant $C>0$ depends only on the domain $\Omega$ and the time horizon $T$.
\end{lemma}
\begin{proof}
In view of \cref{lem:wellposedhtau}, we only have to estimate
\begin{align*}
\norm{\partial_t u_{h,\tau}}{H'_T}^2=
\sum_{n=1}^N \tau^n\norm{d_\tau u_{h,\tau}^n}{H^1(\Omega)'}^2.
\end{align*}
With similar reasoning as in \cref{lem:energyh}, we obtain
\begin{align}
\label{eq:fullydiscreteenergy1}
\norm{d_\tau u_{h,\tau}^n}{H^1(\Omega)'}
&= \sup_{0\not= v \in H^1(\Omega)}
\frac{\product{d_\tau u_{h,\tau}^n}{v}_\Omega}{\norm{v}{H^1(\Omega)}}
= \sup_{0\not= v \in H^1(\Omega)}
\frac{\product{d_\tau u_{h,\tau}^n}{P_h v}_\Omega}{\norm{v}{H^1(\Omega)}}.
\end{align}
By equation~\cref{eq:vp1htau} and the Cauchy-Schwarz inequality, we further get
\begin{align*}
\product{d_\tau u_{h,\tau}^n}{P_h v}_\Omega \le \big(\norm{u_{h,\tau}^n}{H^1(\Omega)}
+ C_{tr} \norm{\phi_{h,\tau}^n}{H^{-1/2}(\Gamma)} +\norm{\widehat{f}^{n}}{H^{1}(\Omega)'}\big)
\norm{P_h v}{H^1(\Omega)}.
\end{align*}
The $H^1$-stability assumption~\cref{as:A2} therefore yields for~\cref{eq:fullydiscreteenergy1}
\begin{align*}
\norm{d_\tau u_{h,\tau}^n}{H^1(\Omega)'}
\le C \big( \norm{u_{h,\tau}^n}{H^1(\Omega)}
+ \norm{\phi_{h,\tau}^n}{H^{-1/2}(\Gamma)} + \norm{\widehat{f}^{n}}{H^1(\Omega)'} \big).
\end{align*}
The assertion now follows by squaring this estimate, multiplying by $\tau^{n}$,
summation over $n$, and the estimates~\cref{eq:energyht1},~\cref{eq:energyht3},~\cref{eq:energyht4},
and~\cref{eq:energyht}.
\end{proof}
Now we prove the main result of this work.
\begin{theorem}[Quasi optimality of the fully discrete scheme]
\label{thm:main}
Let~\cref{as:A1}--\cref{as:A2} hold and $\tau \le 1/4$. Furthermore, $(u,\phi) \in Q_T \times B_T$
and $(u_{h,\tau},\phi_{h,\tau}) \in Q_T^{h,\tau} \times B_T^{h,\tau}$ denote the solutions of
\cref{prob:variational} and \cref{prob:fullydiscreteweight}, respectively. Then
\begin{align}
\label{eq:fullquasioptimal}
\norm{u - u_{h,\tau}}{Q_T} + \norm{\phi - \phi_{h,\tau}}{B_T}
\le C \big( \norm{u - \tilde u_{h,\tau}}{Q_T} + \norm{\phi - \tilde \phi_{h,\tau}}{B_T}\big)
\end{align}
for all functions $\tilde u_{h,\tau} \in Q_T^{h,\tau}$ and $\tilde \phi_{h,\tau} \in B_T^{h,\tau}$.
The constant $C>0$ in this estimate depends only on the domain $\Omega$ and the time horizon $T$.
\end{theorem}
\begin{proof}
The result follows with similar arguments as used in the proof of
\cref{thm:quasioptimality}.
Let $\tilde u_{h,\tau} \in Q_T^{h,\tau}$ and $\tilde \phi_{h,\tau} \in B_T^{h,\tau}$ be arbitrary.
Then we split the error
\begin{align*}
\norm{u - u_{h,\tau}}{Q_T}\leq
\norm{u-\tilde u_{h,\tau}}{Q_T}+\norm{\tilde u_{h,\tau} - u_{h,\tau}}{Q_T}, \\
\norm{\phi - \phi_{h,\tau}}{B_T}\leq
\norm{\phi - \tilde\phi_{h,\tau}}{B_T}+\norm{\tilde \phi_{h,\tau} - \phi_{h,\tau}}{B_T}.
\end{align*}
To estimate the \emph{discrete error}
we recall the consistency of the
fully discrete scheme~\cref{eq:vp1htauweight}--\cref{eq:vp2htauweight}
with the variational problem~\cref{eq:vp1}--\cref{eq:vp2}, see \cref{rem:consistency}.
Hence, the discrete error components
$w_{h,\tau}:=\tilde u_{h,\tau} - u_{h,\tau}$ and
$\rho_{h,\tau}:=\tilde \phi_{h,\tau} - \phi_{h,\tau}$ fulfill the system
\begin{align}
\label{eq:fullquasi1}
\dual{\widehat{\partial_t w}^n_{h,\tau}\!}{v_h}_\Omega + \product{\widehat{\nabla w}^n_{h,\tau}\!}{\nabla v_h}_\Omega
- \dual{\widehat{\rho}^n_{h,\tau}}{v_h}_{\Gamma} &= \dual{\widehat{F}^n}{v_h}_\Omega,\\
\label{eq:fullquasi2}
\dual{(1/2-\K) \widehat{w}^n_{h,\tau}}{\psi_h}_{\Gamma} +
\dual{\V \widehat{\rho}^n_{h,\tau}}{\psi_h}_{\Gamma}
&= \dual{\widehat{G}^n}{\psi_h}_{\Gamma}
\end{align}
for all $v_{h} \in H^h$, $\psi_h \in B^h$, and all $1 \le n \le N$
with the averaged right-hand sides $\widehat{F}$ and $\widehat{G}$ obtained from
\begin{align*}
\dual{F(t)}{v}_{\Omega}
&:= \dual{\partial_t \tilde u_{h,\tau}(t) - \partial_t u(t)}{v}_\Omega\\
&\qquad+ \product{\nabla \tilde u_{h,\tau}(t) -
\nabla u(t)}{\nabla v}_\Omega
- \dual{\tilde \phi_{h,\tau}(t) - \phi(t)}{v}_{\Gamma},\\
\dual{G(t)}{\psi}_{\Gamma}
&:= \dual{(1/2-\K) (\partial_t \tilde u_{h,\tau}(t) - \partial_t u(t))}{\psi}_{\Gamma}
+ \dual{\V (\tilde \phi_{h,\tau}(t) - \phi(t))}{\psi}_{\Gamma}
\end{align*}
for all $v\in H$ and $\psi\in B$.
Note that the system~\cref{eq:fullquasi1}--\cref{eq:fullquasi2}
has the same form as~\cref{eq:vp1htauweight}--\cref{eq:vp2htauweight}
with the right-hand sides $\widehat{F}^n$
and $\widehat{G}^n$.
Thus we can apply the energy estimate~\cref{eq:fullydiscreteenergy} of \cref{lem:fullydiscreteenergy}.
The estimates
\begin{align*}
\norm{F}{H'_T}
&\le C \big( \norm{u - \tilde u_{h,\tau}}{Q_T} + \norm{\phi - \tilde \phi_{h,\tau}}{B_T}\big), \\
\norm{G}{B'_T}
& \le C \big( \norm{u - \tilde u_{h,\tau}}{H_T} + \norm{\phi - \tilde \phi_{h,\tau}}{B_T}\big),
\end{align*}
and the error splitting complete the proof of~\cref{eq:fullquasioptimal}.
\end{proof}
\begin{remark}
The time discretization strategy can also be applied directly to the continuous
variational problem \cref{eq:vp1}--\cref{eq:vp2}. Let us denote by
\begin{align*}
Q_T^\tau &= \set{u \in Q_T}{u|_{[t^{n-1},t^n]} \text{ is linear in } t} \qquad \text{and} \\
B_T^\tau &= \set{\phi \in B_T}{\phi|_{(t^{n-1},t^n]} \text{ is constant in } t}
\end{align*}
the corresponding function spaces and let $(u_\tau, \phi_\tau) \in Q_T^\tau \times B_T^\tau$
be the respective solutions obtained by time discretization of the continuous variational problem.
The well-posedness of this time-discretized problem follows by simply
setting $Q_T^{h,\tau}=Q_T^\tau$ and $B_T^{h,\tau}=B_T^\tau$ in the above results.
As a consequence, we also obtain the quasi-optimal error bound
\begin{align*}
\norm{u - u_\tau}{Q_T} + \norm{\phi - \phi_\tau}{B_T} \le C \big( \norm{u - \tilde u_\tau}{Q_T}
+ \norm{\phi - \tilde \phi_\tau}{B_T}\big)
\end{align*}
for all $\tilde u_\tau \in Q_T^\tau$ and $\tilde \phi_\tau \in B_T^\tau$
with a constant $C$ being independent of $u$, $\phi$ and the temporal grid.
The condition~\cref{as:A2} is not required for this result to hold true.
\end{remark}
\begin{remark}
Explicit error bounds for the time discretization of the continuous and the semi-discrete
variational problem can also be obtained via the usual Taylor estimates
under some regularity
assumptions on the solution. As we will see in the next section, we obtain linear convergence
with respect to $\tau$, independently of the spatial approximation.
Furthermore, other time discretization schemes are possible here, e.g., choose
$w^n(t)=1$ in~\cref{eq:hatv}. Then the identities~\cref{eq:identities} are
\begin{align*}
\widehat{u}_{h,\tau}^n = (u_{h,\tau}^n+u_{h,\tau}^{n-1})/2, \qquad
\widehat{\partial_t u}_{h,\tau}^n = d_\tau u_{h,\tau}^n= \frac{1}{\tau^n} (u_{h,\tau}^n - u_{h,\tau}^{n-1}),
\quad \text{ and }
\widehat{\phi}_{h,\tau}^n = \phi_{h,\tau}^n,
\end{align*}
and the discrete system \cref{prob:fullydiscreteweight} becomes a variant of the Crank-Nicolson time
discretization.
\end{remark}
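To illustrate the qualitative difference between such one-step rules, the following sketch compares the classical implicit Euler and Crank-Nicolson updates on the scalar model problem $u'=\lambda u$; this is only a simplified analogue (not the coupled FEM-BEM system above), and it recovers the expected convergence orders one and two.

```python
import math

def implicit_euler(lam, u0, T, N):
    # one step: (u_n - u_{n-1}) / tau = lam * u_n
    tau = T / N
    u = u0
    for _ in range(N):
        u = u / (1.0 - tau * lam)
    return u

def crank_nicolson(lam, u0, T, N):
    # one step: (u_n - u_{n-1}) / tau = lam * (u_n + u_{n-1}) / 2
    tau = T / N
    u = u0
    for _ in range(N):
        u = u * (1.0 + 0.5 * tau * lam) / (1.0 - 0.5 * tau * lam)
    return u

lam, u0, T = -1.0, 1.0, 1.0
exact = u0 * math.exp(lam * T)
e_ie = [abs(implicit_euler(lam, u0, T, N) - exact) for N in (20, 40)]
e_cn = [abs(crank_nicolson(lam, u0, T, N) - exact) for N in (20, 40)]
rate_ie = math.log2(e_ie[0] / e_ie[1])  # observed order, close to 1
rate_cn = math.log2(e_cn[0] / e_cn[1])  # observed order, close to 2
```

Halving the time step roughly halves the implicit Euler error but reduces the Crank-Nicolson error by a factor of four.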
\section{Error estimates for a FEM-BEM discretization}
\label{sec:fembem}
In this section we discuss a space discretization with finite and boundary elements.
Together with the
time discretization of the previous section, this yields a fully discrete method which converges
uniformly and exhibits order optimal convergence rates under minimal
regularity assumptions on the solution.
We assume in the following that
\begin{align}
\label{as:A3}
\tag{A3}
&\T=\{T\} \text{ is a conforming triangulation of the domain }\Omega; \text{ see~\cite{Ciarlet:1978-book}}. \\
\label{as:A4}
\tag{A4}
&\E_{\Gamma} = \{ E \} \text{ is a segmentation of the boundary }\Gamma \text{ into straight edges}.
\end{align}
Note that conditions~\cref{as:A3} and~\cref{as:A4} in particular imply that $\Gamma$ is a polygon
and that the surface mesh $\E_{\Gamma}$ is in general decoupled from the mesh $\T|_\Gamma$ of the domain.
\begin{remark}
An analysis for curved boundaries can be found in~\cite{ErathSchorr:2017-2}.
In~\cite{Gonzalez:2006},
curved finite elements are considered for the symmetric FEM-BEM coupling in two dimensions
for a time-dependent problem.
\end{remark}
As usual we denote by $\rho_T$ and $h_T$ the inradius and the diameter of the
triangle $T\in\T$ and by $h_E$ the length of the edge $E\in\E_{\Gamma}$. We further set
$h = \max\{\max_{T} h_T,\max_{E} h_E\}$ and assume that
\begin{align}
\label{as:A5}
\tag{A5}
\begin{split}
&\text{the partition }(\T,\E_{\Gamma}) \text{ is }\eta\text{-quasi-uniform with }\eta>0, \text{ i.e. },\\
&\eta h \le \rho_T \le h_T \le h
\qquad \text{and} \qquad
\eta h \le h_E \le h
\qquad \text{for all } T \in \T , \ E \in \E_{\Gamma}.
\end{split}
\end{align}
For the Galerkin semi-discretization in space we utilize the standard approximations
\begin{align}
\label{eq:spaceHh}
H^h &= \set{v \in C(\Omega)}{v|_T \in \P^1(T) \text{ for all } T \in \T }\qquad \text{and} \\
\label{eq:spaceBh}
B^h &= \set{\psi \in L^2(\Gamma)}{\psi|_E \in \P^0(E) \text{ for all } E \in \E_{\Gamma}}
\end{align}
consisting of globally continuous and
piecewise linear functions over $\T$ and piecewise constant functions over $\E_{\Gamma}$,
respectively. We denote by
$P_h: L^2(\Omega) \to H^h$ and $\Pi_h : H^{-1/2}(\Gamma) \to B^h$ the
$L^2(\Omega)$- and the $H^{-1/2}(\Gamma)$-orthogonal projection, respectively.
\begin{lemma}
\label{lem:approxerror}
Let~\cref{as:A1} and~\cref{as:A3}--\cref{as:A5} hold. Then~\cref{as:A2} is valid with a constant $C_P$
independent of the mesh-size. Moreover, the operator $P_h$ can be extended to a bounded linear
operator on $H^1(\Omega)'$. Hence, for all $0 \le s \le 1$ and $0 \le s_e \le 3/2$ we have
\begin{align*}
\norm{u - P_h u}{H^{1}(\Omega)} &\le C h^s \norm{u}{H^{1+s}(\Omega)},\qquad u\in H^{1+s}(\Omega), \\
\norm{u - P_h u}{H^{1}(\Omega)'} &\le C h^s \norm{u}{H^{1-s}(\Omega)'},\qquad u\in H^{1-s}(\Omega)', \\
\norm{\phi - \Pi_h \phi}{H^{-1/2}(\Gamma)} &\le C h^{s_e}
\norm{\phi}{H^{-1/2+s_e}(\Gamma)},\qquad \phi\in H^{-1/2+s_e}(\Gamma).
\end{align*}
The constant $C>0$ is independent of the particular choice of the triangulation.
\end{lemma}
\begin{proof}
The assertion about $\phi$ follows from \cite[Th. 10.4]{Steinbach:2008-book}.
Validity of condition~\cref{as:A2} for these particular function spaces
has been shown in~\cite{Costabel:1990} via an inverse inequality.
Now we turn to the remaining estimates:
Let $P^1_h : H^1(\Omega) \to H^h$ be the $H^1$-orthogonal projection defined by
\begin{align*}
\product{P^1_h u}{v_h}_{H^1(\Omega)} = \product{u}{v_h}_{H^1(\Omega)} \qquad \text{for all } v_h \in H^h,
\end{align*}
and recall that $\norm{u-P^1_h u}{H^1(\Omega)} \le C' h^s \norm{u}{H^{1+s}(\Omega)}$
for $0 \le s \le 1$; see, e.g.,~\cite{Brenner:2008-book}.
Then
\begin{align*}
\norm{u - P_h u }{H^1(\Omega)}
&\le \norm{u - P_h P_h^1 u}{H^1(\Omega)} + \norm{P_h (u - P_h^1 u)}{H^1(\Omega)} \\
&\le (1 + C_P) \norm{u - P^1_h u}{H^1(\Omega)} \le (1+C_P) C' h^s \norm{u}{H^{1+s}(\Omega)},
\end{align*}
where we used the projection property of $P_h$, condition~\cref{as:A2},
and the approximation properties of $P^1_h$ in the last two steps.
By definition of the dual norm, we further have
\begin{align*}
\norm{u - P_h u}{H^{1}(\Omega)'}
&= \sup_{0\not= v \in H^1(\Omega)} \frac{\product{u - P_h u}{v}_{\Omega}}{\norm{v}{H^1(\Omega)}} \\
&= \sup_{0\not= v \in H^1(\Omega)} \frac{\product{u}{v - P_h v}_{\Omega}}{\norm{v}{H^1(\Omega)}}
\le C h \norm{u}{L^2(\Omega)}.
\end{align*}
Here we used the standard estimate $\norm{v - P_h v}{L^2(\Omega)} \le C h \norm{v}{H^1(\Omega)}$
for the $L^2$-projection in the last step.
With a similar duality argument and condition~\cref{as:A2}, one can further see that
$\norm{P_h u}{H^1(\Omega)'} \le C_P \norm{u}{H^1(\Omega)'}$ for all functions $u \in L^2(\Omega)$.
By density of $L^2(\Omega)$ in $H^1(\Omega)'$,
we can extend $P_h$ to a bounded linear operator on $H^1(\Omega)'$, and obtain
\begin{align*}
\norm{u - P_h u}{H^1(\Omega)'} \le (1+C_P) \norm{u}{H^1(\Omega)'}.
\end{align*}
Noting that $L^2(\Omega) = H^0(\Omega) = H^0(\Omega)'$ and interpolating the latter two bounds
now allows us to establish the second estimate for $u$, which completes the proof.
\end{proof}
\begin{remark}
Due to the results of~\cite{Bramble:2001} and~\cite{Bank:2014},
the assertions of \cref{lem:approxerror} also hold true on rather general shape-regular meshes
under a mild growth condition on the local mesh size. With standard arguments, these estimates
can also be generalized to polynomial approximations of higher order. All results that
are presented below thus can be extended to such more general situations.
\end{remark}
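The first-order $L^2$-approximation property underlying these estimates is easy to check numerically. The following sketch is a one-dimensional analogue of the piecewise-constant projection on a uniform grid; it measures the error in $L^2(0,1)$ rather than $H^{-1/2}(\Gamma)$, and all names are illustrative.

```python
import math

def pw_const_projection_error(f, n):
    # L2(0,1)-projection onto piecewise constants = elementwise mean;
    # the element mean and the squared error are approximated by Simpson's rule
    h = 1.0 / n
    err2 = 0.0
    for i in range(n):
        a = i * h
        mean = (f(a) + 4.0 * f(a + h / 2) + f(a + h)) / 6.0
        g = lambda x: (f(x) - mean) ** 2
        err2 += h * (g(a) + 4.0 * g(a + h / 2) + g(a + h)) / 6.0
    return math.sqrt(err2)

e1 = pw_const_projection_error(math.sin, 16)
e2 = pw_const_projection_error(math.sin, 32)
rate = math.log2(e1 / e2)  # close to 1 for a smooth function
```

Halving the mesh size halves the projection error, consistent with the $\O(h)$ bound for smooth functions.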
As a consequence of these approximation error bounds and the quasi-best approximation of
the semi-discretization, we obtain the following quantitative error estimates.
\begin{theorem}
Let~\cref{as:A1}--\cref{as:A5} hold and denote by $(u,\phi)$ and $(u_h,\phi_h)$
the solutions of \cref{prob:variational} and
\cref{prob:semidiscrete}, respectively. Then
\begin{align*}
&\norm{u - u_h}{Q_T} + \norm{\phi - \phi_h}{B_T} \\
&\quad\le C h^s \big( \norm{u}{L^2(0,T;H^{1+s}(\Omega))} + \norm{\partial_t u}{L^2(0,T;H^{1-s}(\Omega)')}
+ \norm{\phi}{L^2(0,T;H^{s-1/2}(\Gamma))}\big)
\end{align*}
for all $0 \le s \le 1$, provided that $u(t)\in H^{1+s}(\Omega)$, $\partial_t u(t) \in H^{1-s}(\Omega)'$,
and $\phi(t)\in H^{-1/2+s}(\Gamma)$ for a.e. $t\in [0,T]$.
\end{theorem}
\begin{proof}
The result follows directly from \cref{thm:quasioptimality} and \cref{lem:approxerror}.
\end{proof}
\begin{remark}
Let us emphasize that the estimate of the theorem is optimal with respect to both
the approximation properties of the spaces $Q_T^h$ and $B_T^h$ and the smoothness
requirements on the solution. Furthermore, the method even converges without any
smoothness assumptions on the
solution, i.e., for all $u \in Q_T$ and $\phi \in B_T$.
\end{remark}
For the full discretization we will also need the $L^2$-projection in time, i.e., operators
$P^\tau : L^2(0,T;L^2(\Omega)) \to Q_T^\tau$ and
$\Pi^\tau : L^2(0,T;H^{-1/2}(\Gamma)) \to B_T^\tau$. These satisfy
\begin{align*}
\norm{v - P^\tau v}{Q_T} &\le C \tau^r \big(\norm{\partial_t v}{H^r(0,T;H^1(\Omega)')}
+ \norm{v}{H^r(0,T;H^1(\Omega))}\big),\qquad 0\leq r\leq 1, \\
\norm{\psi - \Pi^\tau \psi}{B_T} &\le C \tau^r \norm{\psi}{H^r(0,T;H^{-1/2}(\Gamma))},\qquad 0\leq r\leq 1.
\end{align*}
Then we obtain the following result for the fully discrete scheme.
\begin{theorem}
\label{th:errorestimateorder}
Let~\cref{as:A1}--\cref{as:A5} hold and $\tau \le 1/4$. Further we denote by
$(u,\phi)$ and $(u_{h,\tau},\phi_{h,\tau})$ the solutions of
\cref{prob:variational} and \cref{prob:fullydiscreteweight}, respectively. Then
\begin{align*}
&\norm{u - u_{h,\tau}}{Q_T} + \norm{\phi - \phi_{h,\tau}}{B_T} \\
&\quad\le C_1 h^s \big( \norm{u}{L^2(0,T;H^{1+s}(\Omega))} + \norm{\partial_t u}{L^2(0,T;H^{1-s}(\Omega)')}
+ \norm{\phi}{L^2(0,T;H^{s-1/2}(\Gamma))} \big) \\
& \qquad \qquad + C_2 \tau^r \big( \norm{\partial_t u}{H^r(0,T;H^1(\Omega)')} + \norm{u}{H^r(0,T;H^1(\Omega))}
+ \norm{\phi}{H^r(0,T;H^{-1/2}(\Gamma))} \big)
\end{align*}
for all $0 \le s \le 1$ and $0 \le r \le 1$ with
$u\in H^r(0,T;H^{1+s}(\Omega))$, $\partial_t u\in H^r(0,T;H^{1-s}(\Omega)')$, and $\phi\in H^r(0,T;H^{-1/2+s}(\Gamma))$.
The constants $C_1,C_2>0$ depend only on the domain $\Omega$ and the time horizon $T$.
\end{theorem}
\begin{proof}
By the triangle inequality, we obtain
\begin{align*}
\norm{u - P^\tau P_h u}{Q_T} &\le \norm{u - P_h u}{Q_T}
+ \norm{P_h u - P^\tau P_h u}{Q_T},\\
\norm{\phi - \Pi^\tau \Pi_h\phi}{B_T} &\le \norm{\phi - \Pi_h\phi}{B_T}
+ \norm{\Pi_h\phi - \Pi^\tau\Pi_h\phi}{B_T}.
\end{align*}
The first term in each line can be estimated by \cref{lem:approxerror}.
Since the projection operators commute, we can change their order in the second term in each line.
Then we use the stability of the spatial projection operators guaranteed
by \cref{lem:approxerror} and the approximation properties of the
time projections $P^\tau$ and $\Pi^\tau$. We obtain
\begin{align*}
\norm{P_h u - P^\tau P_h u}{Q_T}
&\le C \tau^r \big(\norm{\partial_t u}{H^r(0,T;H^1(\Omega)')} + \norm{u}{H^r(0,T;H^1(\Omega))}\big), \\
\norm{\Pi_h\phi - \Pi^\tau \Pi_h\phi}{B_T}
&\le C \tau^r \norm{\phi}{H^r(0,T;H^{-1/2}(\Gamma))}.
\end{align*}
Now we apply \cref{thm:main}
with $\tilde u_{h,\tau}=P^\tau P_h u$ and $\tilde \phi_{h,\tau}=\Pi^\tau \Pi_h \phi$.
The estimates from \cref{lem:approxerror} for the approximation errors yield the assertion.
\end{proof}
\begin{remark}
From the previous result, we also obtain a corresponding estimate
\begin{align*}
&\norm{u - u_\tau}{Q_T} + \norm{\phi - \phi_\tau}{B_T} \\
&\le C \tau^r \big( \norm{\partial_t u}{H^r(0,T;H^1(\Omega)')}
+ \norm{u}{H^r(0,T;H^1(\Omega))} + \norm{\phi}{H^r(0,T;H^{-1/2}(\Gamma))} \big)
\end{align*}
for the approximation $(u_\tau,\phi_\tau)$ obtained by the time discretization scheme
without additional Galerkin approximation in space.
The proof of this result simply follows
by setting $Q_T^h=Q_T$, $B_T^h=B_T$ and $Q_T^{h,\tau}=Q_T^\tau$, $B_T^{h,\tau}=B_T^\tau$ in the previous theorem.
Note that the conditions~\cref{as:A2}--\cref{as:A5} are not required for this result to hold true.
\end{remark}
\begin{figure}[tbhp]
\centering
\subfigure[Mesh for \cref{subsec:bspanalytic}.]{\label{subfig:meshlshape}
\includegraphics[width=.38\textwidth]{figures/meshLshape.pdf}}
\hspace{0.075\textwidth}
\subfigure[Mesh for \cref{subsec:bspcap}.]{\label{subfig:meshcap}
\includegraphics[width=.37\textwidth]{figures/meshcap.pdf}}
\caption{The initial triangle meshes for the examples.
The bold lines are the coupling boundary (blue) and the Dirichlet boundary (red).}
\label{fig:meshes}
\end{figure}
\section{Numerical illustration}
\label{sec:numerics}
In this section we illustrate our theoretical findings by some numerical examples in $\mathbb{R}^2$
with the function spaces $H^h$ and $B^h$ defined in~\cref{eq:spaceHh} and~\cref{eq:spaceBh}, respectively.
For the implementation we use the equivalent system~\cref{eq:vp1htau}--\cref{eq:vp2htau}
instead of \cref{prob:fullydiscreteweight}, see \cref{rem:classicalEuler}.
The right-hand side is built from the model data $\tilde f$, $\tilde g$,
$\tilde h$ with~\cref{eq:f} and \cref{eq:g}, and with the aid
of the weighted average operator~\cref{eq:hatv}. For these integrals we use Gauss quadrature
in space and time.
The calculations were performed using \textsc{Matlab} utilizing some functions
from the \textsc{Hilbert}-package~\cite{HILBERT:2013-1} for assembling the matrices resulting
from the integral operators $\V$ and $\K$.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{loglogaxis}[width=0.9\textwidth,
xlabel={$1/h$}, ylabel={\small error}, font={\scriptsize},
ymin=1e-6,
legend style={font=\small, draw=none, fill=none, cells={anchor=west}, legend pos=south west}]
\addplot table [x=invmaxMeshsizeh,y=errorH1dual] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorenergyV] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorenergyVproj] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorL2] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorL2proj] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorH1semi] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorH1semiproj] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=globalEnergy] {figures/mexicanhatfemsinus19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=globalEnergyproj] {figures/mexicanhatfemsinus19112017.dat};
\logLogSlopeTriangle{0.925}{0.2}{0.49}{-1}{black}{\scriptsize};
\logLogSlopeTriangle{0.925}{0.2}{0.66}{-1}{black}{\scriptsize};
\legend{
$\norm{z_h^a}{H_T}$,
$\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$\norm{u-u_{h,\tau}}{L^2(0,T;L^2(\Omega))}$,
$\norm{\overline{u}_h-u_{h,\tau}}{L^2(0,T;L^2(\Omega))}$,
$\norm{\nabla(u-u_{h,\tau})}{L^2(0,T;L^2(\Omega))}$,
$\norm{\nabla(\overline{u}_h-u_{h,\tau})}{L^2(0,T;L^2(\Omega))}$,
$(\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$(\norm{\overline{u}_h-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$}
\end{loglogaxis}
\end{tikzpicture}
\caption{The different error components of the solutions
$u_{h,\tau}$ and $\phi_{h,\tau}$
for uniform refinement in time and space
for the smooth example in \cref{subsubsec:bsp1}.
The added energy error norms
$(\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$
and
$(\norm{\overline{u}_h-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$
show the first order convergence as predicted in \cref{th:errorestimateorder}.
}
\label{fig:errorbsp1}
\end{figure}
\subsection{Tests with analytical solutions}
\label{subsec:bspanalytic}
In the following, we discuss the convergence behaviour for
three examples with analytical solutions.
We consider the coupling problem~\cref{eq:model1}--\cref{eq:model6}
on the classical L-shape $\Omega= (-1/4, 1/4)^2\setminus [0, 1/4] \times [-1/4, 0]$
and the time interval $[0,1]$. The uniform initial triangulation
is plotted in \cref{subfig:meshlshape} with $h=0.125$. We use uniform time stepping,
starting with $\tau^n=\tau=0.05$.
Both the spatial and the temporal grid are refined uniformly and simultaneously.
For all three
examples we prescribe the same analytical solution in the exterior domain $\Omega_e$, namely
\begin{align*}
u_e(x_1,x_2,t)&=(1-t)\log \sqrt{(x_1+0.125)^2+(x_2-0.125)^2}.
\end{align*}
Note that this solution is smooth in $\Omega_e$.
With the interior solutions given below we will calculate
the right-hand side $\tilde f$ and the jumps $\tilde g$ and $\tilde h$
(from $u = u_e + \tilde g$ and
$\partial_{n} u = \partial_{n} u_e + \tilde h$)
appropriately.
For the error discussion we also consider
the $L^2$-projected analytical solutions $\overline{u}_h(t)\in H^h$ of $u(t)$
and $\overline{\phi}_h(t)\in B^h$ of $\phi(t)$ for a fixed but arbitrary $t$.
Note that the prescribed exterior solution
guarantees at least $\phi(t)\in L^2(\Gamma)$.
Hence, we may estimate the error as
\begin{align}
\label{eq:l2splitting1}
\norm{u-u_{h,\tau}}{Q_T}&\leq \norm{u-\overline{u}_h}{Q_T}+\norm{\overline{u}_h-u_{h,\tau}}{Q_T},\\
\label{eq:l2splitting2}
\norm{\phi-\phi_{h,\tau}}{B_T}&\leq \norm{\phi-\overline{\phi}_h}{B_T}
+\norm{\overline{\phi}_h-\phi_{h,\tau}}{B_T}.
\end{align}
The convergence orders of $\norm{u-\overline{u}_h}{Q_T}$ and
$\norm{\phi-\overline{\phi}_h}{B_T}$ are known a~priori.
With the discrete error $e_h(t):=\overline{u}_h(t)-u_{h,\tau}(t)$
we can estimate the non-computable dual norm
$\norm{\partial_t e_h}{H'_T}^2=\int_0^T\norm{\partial_t e_h(t)}{H'}^2\,dt$ in the following way.
Let $z_h^a\in H^h$ be the solution to the auxiliary problem
\begin{align*}
\product{\nabla z_h^a}{\nabla v_h}_{\Omega} + \product{z_h^a}{v_h}_{\Omega}
=\product{\partial_t e_h}{v_h}_{\Omega},
\end{align*}
with $v_h=P_h v$ for all $v\in H$ and $P_h$ being the $L^2$-projection introduced
in \cref{sec:fembem}.
Then the $H^1$-stability of $P_h$ and the definition of the auxiliary problem
lead to
\begin{align*}
\norm{\partial_t e_h}{H^1(\Omega)'}
&=\sup_{0\not =v\in H^1(\Omega)}\frac{\product{\partial_t e_h}{v}_{\Omega}}{\norm{v}{H^1(\Omega)}} \\
&=\sup_{0\not =v\in H^1(\Omega)}\left(\frac{\product{\partial_t e_h}{v-P_h v}_{\Omega}}{\norm{v}{H^1(\Omega)}}
+ \frac{\product{\partial_t e_h}{P_h v}_{\Omega}}{\norm{v}{H^1(\Omega)}} \right) \\
&\leq \sup_{0\not =v\in H^1(\Omega)}\frac{\norm{z_h^a}{H^1(\Omega)}
\norm{P_h v}{H^1(\Omega)}}{\norm{v}{H^1(\Omega)}} \leq C_P\norm{z_h^a}{H^1(\Omega)}
\end{align*}
with the constant $C_P>0$.
Thus $\norm{z_h^a}{H_T}$ is an upper bound for $\norm{\partial_t e_h}{H'_T}$.
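In matrix form, this auxiliary-problem trick is just the discrete Riesz representation: if $A$ denotes the Galerkin matrix of the $H^1$ inner product and $b$ the load vector of the functional, then the computable surrogate is $\sqrt{b^\top A^{-1} b}$. The following one-dimensional sketch (a simplified P1-FEM analogue on $(0,1)$, not the two-dimensional code used for the experiments; all names are illustrative) verifies this: for the functional $\ell(v)=\product{g}{v}_{H^1}$, the Riesz representative is $g$ itself.

```python
import math

def solve_tridiag(sub, diag, sup, rhs):
    # Thomas algorithm for a tridiagonal system A x = rhs
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = sup[0] / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * c[i - 1]
        c[i] = sup[i] / m if i < n - 1 else 0.0
        d[i] = (rhs[i] - sub[i] * d[i - 1]) / m
    x = [0.0] * n
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):
        x[i] = d[i] - c[i] * x[i + 1]
    return x

def h1_matrix(n):
    # P1-FEM Galerkin matrix of the H^1(0,1) inner product
    # (stiffness + mass) on a uniform grid with n elements
    h = 1.0 / n
    off = -1.0 / h + h / 6.0
    sub = [0.0] + [off] * n
    sup = [off] * n + [0.0]
    diag = [1.0/h + h/3.0] + [2.0/h + 2.0*h/3.0] * (n - 1) + [1.0/h + h/3.0]
    return sub, diag, sup

def matvec(sub, diag, sup, x):
    n = len(diag)
    return [(sub[i] * x[i-1] if i > 0 else 0.0) + diag[i] * x[i]
            + (sup[i] * x[i+1] if i < n - 1 else 0.0) for i in range(n)]

n = 32
sub, diag, sup = h1_matrix(n)
# functional ell(v) = (g, v)_{H^1} with a known nodal function g
g = [math.sin(math.pi * i / n) for i in range(n + 1)]
b = matvec(sub, diag, sup, g)          # load vector of ell
z = solve_tridiag(sub, diag, sup, b)   # discrete Riesz representative
dual_norm = math.sqrt(sum(bi * zi for bi, zi in zip(b, z)))
h1_norm = math.sqrt(sum(bi * gi for bi, gi in zip(b, g)))
```

Up to round-off, `dual_norm` coincides with the discrete $H^1$-norm of $g$, and the representative `z` coincides with `g`.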
The norm $\norm{\phi(t)-\phi_{h,\tau}(t)}{B}$ is also not computable.
Hence we may use the equivalent norm
\begin{align*}
\norm{\phi(t)-\phi_{h,\tau}(t)}{B} \sim
\norm{\phi(t)-\phi_{h,\tau}(t)}{\V} := \dual{\V(\phi(t)-\phi_{h,\tau}(t))}{\phi(t)-\phi_{h,\tau}(t)}_\Gamma^{1/2},
see~\cite{Erath:2010-phd} for details.
Thus $\norm{\phi-\phi_{h,\tau}}{L^2(0,T;\V)}$ is an equivalent norm to
$\norm{\phi-\phi_{h,\tau}}{B_T}$.
We approximate all other spatial norms by Gaussian quadrature or with the matrices from the discretization.
The time integral in the Bochner-Sobolev norms is also computed with a Gaussian quadrature.
For the energy norm we therefore present the upper bound
\begin{align*}
\norm{u-u_{h,\tau}}{Q_T}+\norm{\phi-\phi_{h,\tau}}{B_T}
&\leq (\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}
+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}.
\end{align*}
Furthermore, with respect to the error splitting~\cref{eq:l2splitting1}--\cref{eq:l2splitting2}
we also calculate the error
\begin{align*}
(\norm{\overline{u}_h-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}
+\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}
\end{align*}
with the $L^2$-projected analytical solutions $\overline{u}_h(t)\in H^h$ of $u(t)$
and $\overline{\phi}_h(t)\in B^h$ of $\phi(t)$.
\subsubsection{Smooth solution}
\label{subsubsec:bsp1}
For the first example we use the interior solution
\begin{align*}
u(x_1,x_2,t)&=\sin(2\pi t)(1-100 x_1^2-100 x_2^2)e^{-50(x_1^2+x_2^2)}.
\end{align*}
Hence, both $u$ and $u_e$ are smooth and according to
\cref{th:errorestimateorder}
we expect the optimal
convergence rate $\O(h+\tau)$
which is indeed observed in \cref{fig:errorbsp1}.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{loglogaxis}[width=0.9\textwidth,
xlabel={$1/h$}, ylabel={\small error}, font={\scriptsize},
ymin=4e-6,
legend style={font=\small, draw=none, fill=none, cells={anchor=west}, legend pos=south west}]
\addplot table [x=invmaxMeshsizeh,y=errorH1dual] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorenergyV] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorenergyVproj] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorL2] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorL2proj] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorH1semi] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=errorH1semiproj] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=globalEnergy] {figures/LShapeunsmooth19112017.dat};
\addplot table [x=invmaxMeshsizeh,y=globalEnergyproj] {figures/LShapeunsmooth19112017.dat};
\logLogSlopeTriangle{0.9}{0.2}{0.68}{-2/3}{black}{\scriptsize};
\logLogSlopeTrianglelow{0.9}{0.2}{0.25}{-2/3}{black}{\scriptsize};
\logLogSlopeTrianglelow{0.9}{0.2}{0.43}{-3/4}{black}{\scriptsize};
\legend{
$\norm{z_h^a}{H_T}$,
$\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$\norm{u-u_{h,\tau}}{L^2(0,T;L^2(\Omega))}$,
$\norm{\overline{u}_h-u_{h,\tau}}{L^2(0,T;L^2(\Omega))}$,
$\norm{\nabla(u-u_{h,\tau})}{L^2(0,T;L^2(\Omega))}$,
$\norm{\nabla(\overline{u}_h-u_{h,\tau})}{L^2(0,T;L^2(\Omega))}$,
$(\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$(\norm{\overline{u}_h-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$}
\end{loglogaxis}
\end{tikzpicture}
\caption{The different error components of the solutions
$u_{h,\tau}$ and $\phi_{h,\tau}$
for uniform refinement in time and space
for the example with a spatial generic singularity of the interior solution in \cref{subsubsec:bsp2}.
The added energy error norms
$(\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$
and
$(\norm{\overline{u}_h-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$
show the reduced convergence order as predicted in \cref{th:errorestimateorder}.
}
\label{fig:errorbsp2}
\end{figure}
\subsubsection{Generic singularity at the reentrant corner}
\label{subsubsec:bsp2}
For the second example, we choose the analytical solution
\begin{align*}
u(x_1,x_2,t) &=(1+t^2) r^{2/3} \sin(2\varphi/3)
\end{align*}
with the polar coordinates
$(x_1,x_2) = r(\cos\varphi, \sin\varphi)$, $r\in\RR_+$ and $\varphi\in [0,2\pi)$.
This solution is a classical test solution in the spatial components and exhibits
a generic singularity at the reentrant corner $(0,0)$ of $\Omega$. Note that $\Delta u = 0$
and that the function $u(\cdot,\cdot,t)$ is only
in $H^{1+2/3-\varepsilon}(\Omega)$ for $\varepsilon>0$.
As analyzed
in \cref{th:errorestimateorder} and observed
in \cref{fig:errorbsp2}
we obtain a reduced convergence rate of $\O(h^{2/3}+\tau)$.
\begin{figure}
\centering
\begin{tikzpicture}
\begin{loglogaxis}[width=0.9\textwidth,
xlabel={\small $1/\tau$=\# time intervals}, ylabel={\small error}, font={\scriptsize},
ymin=1e-6,
legend style={font=\small, draw=none, fill=none, cells={anchor=west}, legend pos=south west}]
\addplot table [x=numberTimeintervals,y=errorH1dual] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=errorenergyV] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=errorenergyVproj] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=errorL2] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=errorL2proj] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=errorH1semi] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=errorH1semiproj] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=globalEnergy] {figures/mexicanhatfem19112017.dat};
\addplot table [x=numberTimeintervals,y=globalEnergyproj] {figures/mexicanhatfem19112017.dat};
\logLogSlopeTriangle{0.925}{0.2}{0.49}{-1/3}{black}{\scriptsize};
\logLogSlopeTriangle{0.925}{0.2}{0.67}{-1}{black}{\scriptsize};
\logLogSlopeTriangle{0.925}{0.15}{0.35}{-3/2}{black}{\scriptsize};
\logLogSlopeTriangle{0.925}{0.15}{0.175}{-3/4}{black}{\scriptsize};
\legend{
$\norm{z_h^a}{H_T}$,
$\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$\norm{u-u_{h,\tau}}{L^2(0,T;L^2(\Omega))}$,
$\norm{\overline{u}_h-u_{h,\tau}}{L^2(0,T;L^2(\Omega))}$,
$\norm{\nabla(u-u_{h,\tau})}{L^2(0,T;L^2(\Omega))}$,
$\norm{\nabla(\overline{u}_h-u_{h,\tau})}{L^2(0,T;L^2(\Omega))}$,
$(\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$,
$(\norm{\overline{u}_h-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$}
\end{loglogaxis}
\end{tikzpicture}
\caption{The different error components of the solutions
$u_{h,\tau}$ and $\phi_{h,\tau}$
for uniform refinement in time and space
for the example with a singularity in the time component
of the interior solution in \cref{subsubsec:bsp3}.
The added energy error norms
$(\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$
and
$(\norm{\overline{u}_h-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\overline{\phi}_h - \phi_{h,\tau}}{L^2(0,T;\V)}$
show the reduced convergence order as predicted in \cref{th:errorestimateorder}.
}
\label{fig:errorbsp3}
\end{figure}
\subsubsection{Non-smooth function in time}
\label{subsubsec:bsp3}
The third example is less regular in time, but smooth in space, and reads
\begin{align*}
u(x_1,x_2,t) =t^{5/6} (1-100 x_1^2-100 x_2^2)e^{-50(x_1^2+x_2^2)}.
\end{align*}
Note that the function $u(x,\cdot)$ is only in $H^{4/3}(0,T)$.
According to our analysis we expect
a convergence rate of $\O(h+\tau^{1/3})$.
We plot the convergence order with respect to the number of time intervals ($=1/\tau$)
in \cref{fig:errorbsp3}.
Note that the energy norm error $\norm{u-u_{h,\tau}}{Q_T}+\norm{\phi-\phi_{h,\tau}}{B_T}$ represented by
$(\norm{u-u_{h,\tau}}{H_T}^2+\norm{z_h^a}{H_T}^2)^{1/2}+\norm{\phi - \phi_{h,\tau}}{L^2(0,T;\V)}$
seems to have a misleading convergence order of $\O(\tau)$.
The error component $\norm{z_h^a}{H_T}$, representing
the dual norm error $\norm{\partial_t(u-u_{h,\tau})}{H'_T}$, has convergence order
$\O(\tau^{1/3})$. With respect to
$\norm{\nabla (u-u_{h,\tau})}{L^2(0,T;L^2(\Omega))}$ this error component is rather small.
Hence the predicted convergence rate $\O(h+\tau^{1/3})$ would be observed asymptotically, which cannot
be visualized here due to computational restrictions.
\begin{figure}[tbhp]
\centering
\subfigure[Solution at $t=0.0125$.]{\label{fig:acap}\includegraphics[width=.3\textwidth]{figures/cap0dot0125.pdf}}
\hspace{.025\textwidth}
\subfigure[Solution at $t=0.05$.]{\label{fig:bcap}\includegraphics[width=.3\textwidth]{figures/cap0dot05.pdf}}
\hspace{.025\textwidth}
\subfigure[Solution at $t=0.4875$.]{\label{fig:ccap}\includegraphics[width=.3\textwidth]{figures/cap0dot4875.pdf}}
\vspace{.1\baselineskip}
\subfigure[Solution at $t=0.5$.]{\label{fig:dcap}\includegraphics[width=.3\textwidth]{figures/cap0dot5.pdf}}
\hspace{.025\textwidth}
\subfigure[Solution at $t=0.6$.]{\label{fig:fcap}\includegraphics[width=.3\textwidth]{figures/cap0dot6.pdf}}
\hspace{.025\textwidth}
\subfigure[Solution at $t=1.0$.]{\label{fig:gcap}\includegraphics[width=.3\textwidth]{figures/cap1dot0.pdf}}
\vspace{.25\baselineskip}
\includegraphics[width=0.5\textwidth,clip]{figures/colorbar.pdf}
\caption{Solution of the capacitor example in \cref{subsec:bspcap} at different times.}
\label{fig:cap}
\end{figure}
\begin{figure}[tbhp]
\centering
\includegraphics[width=.7\textwidth]{figures/cap3D1dot0.pdf}
\caption{Solution of the capacitor example in \cref{subsec:bspcap} at the end $T=1$.}
\label{fig:cap3D}
\end{figure}
\subsection{Quasi-electrostatic problem}\label{subsec:bspcap}
In the last example we want to apply our numerical scheme to a more practical
problem~\cite[Example 8.2]{Carstensen:1999}.
The idea behind the problem
is to model the potential of a capacitor
in an unbounded domain
with two electrodes $\Omega_{D,1}=[-0.8,-0.6]\times [-0.8,0.8]$
and $\Omega_{D,2}=[0.6,0.8]\times [-0.8,0.8]$.
For this we consider our model problem~\cref{eq:model1}--\cref{eq:model6}
with the interior domain $\Omega=(-2,2)^2\backslash\big(\Omega_{D,1}\cup \Omega_{D,2}\big)$
and the exterior domain $\Omega_e=\mathbb{R}^2\backslash [-2,2]^2$, see also \cref{subfig:meshcap}.
We choose $\tilde f=0$, $\tilde g=0$, $\tilde h=0$, and the initial field $u(\cdot,0)=0$.
Contrary to~\cref{eq:model1} we allow a diffusion coefficient in the interior domain $\Omega$
of $5$ instead of $1$.
Furthermore, we define $\Gamma_{D,1}:=\partial\Omega_{D,1}$ and $\Gamma_{D,2}:=\partial\Omega_{D,2}$.
Thus the coupling boundary reads
$\Gamma=\partial \Omega_e=\partial \Omega\backslash\big(\Gamma_{D,1}\cup\Gamma_{D,2}\big)$.
For the charge at the electrode boundaries $\Gamma_{D,1}$ and $\Gamma_{D,2}$,
which are Dirichlet boundaries in the model problem, we
choose
\begin{alignat}{2}
u(x,t)&=
\begin{cases} -1 &\text{for}\quad t<0.5 \\
\phantom{-}1 &\text{for}\quad t \geq 0.5
\end{cases}
&\qquad& \text{on } \Gamma_{D,1} \times (0,1), \\
u(x,t)&=
\begin{cases} \phantom{-}1 &\text{for}\quad t<0.5 \\
-1 &\text{for}\quad t \geq 0.5
\end{cases}
&\qquad& \text{on } \Gamma_{D,2} \times (0,1).
\end{alignat}
Hence the charges are fixed to $\pm 1$
on the Dirichlet boundary $\Gamma_{D,1} \cup \Gamma_{D,2}$,
and the polarity is reversed at $t=0.5$.
In \cref{fig:cap} we plot the interior and part of the exterior solution at different times after
$5$ uniform refinements of the triangulation \cref{subfig:meshcap}, i.e., $h=0.03125$
and $\tau=0.0015625$. We use the representation formula~\cref{eq:repformular} with the discrete solution
$u_{h,\tau}|_\Gamma$ and $\phi_{h,\tau}$ to get
the approximation of $u_e$ in $\Omega_e$.
The figure sequence shows how the electrical field is building up and evolves after the change of polarity.
Finally, we plot the solution at the end time $T=1$ in \cref{fig:cap3D}.
\section{Conclusions}
In this work we provided a refined a~priori analysis for the
semi-discretization of the non-symmetric FEM-BEM coupling
for a parabolic-elliptic interface problem.
Furthermore, the first a~priori analysis was worked out
for the full discretization
of this coupling type
in terms of the energy norm
of the solution space.
We were able to show quasi-optimality results for both
the semi- and the full discretization,
with a Galerkin method in
space and a variant of the implicit Euler method in time.
Then we utilized the piecewise linear ansatz function space
and the piecewise constant ansatz function space to approximate
the interior problem and the exterior problem, respectively.
This defines a classical non-symmetric FEM-BEM coupling approach with
first order convergence.
Note that this is the optimal convergence rate for these ansatz spaces in this norm.
However, the optimal convergence rate in the $L^2$ norm,
which usually relies on a duality argument, still remains open.
In the case of a non-symmetric approach, adjoint regularity cannot be obtained as
easily as in the symmetric case.
Thus our analysis avoided using the elliptic projection and
used the $L^2$-projection instead.
Numerical experiments confirmed the theoretical findings. In particular, they show
that our method converges even on non-convex domains with less regular data.
\bibliographystyle{alpha}
\section{Introduction}
A long-standing goal in the study of non-equilibrium systems is to generalize and implement the vast knowledge accumulated in the study of thermodynamic phase transitions \citep{Zinn-Justin_book,Yeomans,Sachdev}. In equilibrium, the relevant thermodynamic potential, e.g. the free energy, becomes non-analytic at the
transition point. For a continuous phase transition, the
thermodynamic potential is composed of a regular part and a singular universal part -- a scaling function of the relevant thermodynamic variables. The scaling function is characterized by critical exponents, which in turn classify
the physics into universality classes that depend only on the symmetry and dimensionality of the model.
Non-equilibrium systems are generally sensitive to microscopic details, boundary conditions and initial conditions. Therefore, it is appealing to find where universality can take over in non-equilibrium systems, from both a theoretical and a practical viewpoint. If universality takes over, it is tempting to assume that a coarse grained (hydrodynamic) theory can capture the singular universal behavior. The purpose of this paper is to show that this is indeed the case for an analytically tractable setup.
It has been suggested long ago to build a thermodynamic formalism for non-equilibrium systems by looking at probabilities over time realizations rather than looking
at the instantaneous energy states \citep{Ruelle}. To illustrate this idea, let us consider two particle reservoirs, coupled through a $1D$ transport channel of size $L$ -- a
common non-equilibrium setup. The hallmark of non-equilibrium in such systems is a non-vanishing current. For this reason, a natural quantity of interest is $P_t (Q)$,
the probability to observe a transfer of $Q$ particles in the system during the time interval $\left[0, t\right]$. For $t \gg 1$, the probability to observe an atypical particle transfer, i.e. different than the steady state, is usually exponentially
unlikely. Thus, the large deviation function (LDF)
is defined as $I(J) = -\frac{1}{t} \log P_t(Q)$ for large $t$,
with $J = Q/t$ -- the atypical mean current. Starting from the discovery of fluctuation theorems in the 1990s \cite{Jarzynski1997,Gallavotti1995prl}, LDFs have played an important role in the modern development of non-equilibrium
theories \cite{Touchette2009}. Since the LDF constrains the system to exhibit a mean atypical current $J$, we can define
an associated mean spatio-temporal particle occupancy in the system, where the mean is over all spatio-temporal evolutions that support the particle transfer $Q$~\cite{Chertrite2013}.
Similarly to thermodynamic phase transitions, dynamical phase transitions (DPTs) are defined as non-analytic points in the LDF. A variety of DPTs are identified in a broad range of non-equilibrium systems, such as in high-dimensional chaotic chains \cite{tailleur2007probing,bouchet2014,laffargue2013}, kinetically constrained glass models \cite{garrahan2007,Hedges2009,Pitard2011,Speck2012,Bodineau2012a,Limmer2014,Nemoto2017prl} or active self-propelled particles \cite{Cagnetta2017,Whitelam2018,nemoto2018optimizing}. The transition is manifest in e.g. a dramatic change in the mean spatio-temporal particle occupancy \citep{Bertini2005,Appert-Rolland2008,Bodineau2005,Bodineau2007,Baek2016b,Shpielberg2017b}.
In this paper, we focus on 1D diffusive processes that are symmetric under the exchange of particles and vacancies. In this case, it is known that the observed particle occupancy becomes independent of both space and time in a range determined by the critical value $J_C$ \citep{Appert-Rolland2008,Bodineau2005,Bodineau2007,Baek2016b,Shpielberg2017b}. We show that the singular part of the LDF is universal, irrespective of microscopic details and boundary conditions. Namely, $I(J)= I_{\textnormal{reg}}+I_{\textnormal{sing}}$, where
\begin{equation}
I_{\textnormal{sing}} = \frac{1}{L^{2+\alpha}}{\phi}(\delta u L^\beta) \label{eq:singular exponents}
\end{equation}
such that $\delta u $ is a universal parameter that vanishes as $J \rightarrow J_c$.
In order to exhibit the universality and find the scaling exponents $\alpha,\beta$, we employ analytical tools and corroborate the results numerically. First, we use the macroscopic fluctuation theory (MFT). The MFT is a hydrodynamic theory of diffusive systems. It has been used to obtain various results, e.g. current fluctuations, non-equilibrium fluctuation-induced forces, escape times of interacting particles, statistics of tagged particles in single-file diffusion \cite{Agranov2018,Aminov15,Krapivsky2015,Bouchet2016,Akkermans2013,Bodineau2004,Bodineau2008,Tizon16a} and many more
\cite{Hurtado2011b,Prados2011,Prados2012,Prados2013,Lasanata2016}. The predictions are exact, up to $1/L$ corrections. The second approach relies on an exact solution of a microscopic model -- the simple symmetric exclusion process (SSEP) \citep{Mallick2015,Chou2011,Derrida2007} -- via the Bethe ansatz. The Bethe ansatz
allows one to determine the energy eigenstates of many-body integrable quantum systems \cite{korepin_bogoliubov_izergin_1993},
as well as to evaluate current statistics of non-equilibrium systems \citep{izergin1984,Derrida2004,DeGier2005}. The SSEP is an important model in the study of classical and quantum non-equilibrium systems \cite{Bernard2018,Kamenev2011,Derrida2004,Akkermans2013}.
By using both methods, we evaluate $1/L^2$ corrections. Close to the transition, the leading singular behavior allows to obtain the scaling exponents $\alpha,\beta$.
The singular term in the LDF is sub-leading. However, it becomes more dominant for higher and higher derivatives of the LDF with respect to $J_C-J$ (equivalently $\delta u$). Then, as shown in Fig.~\ref{fig:third_order}, the third derivative is sufficient to capture the universal behavior.
\begin{figure*}
\begin{center}
\includegraphics[width=0.32\textwidth]{CGF_0th}
\includegraphics[width=0.32\textwidth]{CGF_1st}
\includegraphics[width=0.32\textwidth]{CGF_2nd}
\includegraphics[width=0.32\textwidth]{CGF_3rd}
\includegraphics[width=0.32\textwidth]{CGF_4th}
\includegraphics[width=0.32\textwidth]{CGF_5th}
\caption{\label{fig:third_order}
The $d$-th derivative with respect to the universal parameter $u$ of the cumulant generating function $\mu(\lambda)$, rescaled by $L^{\alpha-d\beta }$, is plotted as a function of $\delta u L^\beta$
for the weakly asymmetric exclusion process (WASEP) on a ring \citep{Mallick2015,Chou2011,Derrida2007}. The cumulant generating function $\mu(\lambda)$, as defined in the text, is a Legendre-Fenchel transform of the LDF and carries the same scaling behavior for the singular term. For the third derivative, the universal function becomes dominant over the non-universal part even for a relatively small system size $L$. The convergence to a scaling function $\tilde \phi(r)$ is convincing already for small systems $L=14,16,18,20$, with $\alpha = 1/3,\beta = 2/3$, for the $d=3$ derivative.
As mentioned in Section~\ref{sec:numerical verification}, the singular point $r=0$ may be shifted, especially for small systems.
In the WASEP, as detailed in Appendix~\ref{sec:app:smatrix}, each site is occupied by at most one particle. Particles hop to a left or right empty neighbor with rate $\exp(1\pm E/L)$, where we take $E=10$ here. To get this figure, we diagonalize the corresponding biased matrix~\cite{Lebowitz1999,Garrahan2009} numerically. See Section~\ref{sec:numerical verification} for more details. }
\end{center}
\end{figure*}
\section{The macroscopic fluctuation theory}
To unveil the universal structure of DPTs in non-equilibrium diffusive systems, we introduce the MFT.
Taking the limit $t,L \rightarrow \infty$
with the fixed diffusive scaling $t/L^2$, we define
rescaled coordinates: $\tau = t/L^2$ and $x\in \left[0,1 \right] $. At these
scales, the coarse grained density $\rho(x,\tau )$ is assumed to be a smoothly varying function. Here, we focus only on
processes that conserve particles in the bulk. The
current density $j(x,\tau )$ allows one to write the continuity
equation
\begin{equation}
\partial_\tau \rho = -\partial_x j.
\label{eq:continuity equation}
\end{equation}
At the steady state, diffusive processes satisfy Fick's law
$j = \mathfrak{J}(\rho)$, where $\mathfrak{J}(\rho) = -D(\rho)\partial_x \rho + \sigma(\rho) E$.
For a vanishing field $E$, Fick's law in \eqref{eq:continuity equation} gives the steady state diffusion equation, with $D$ the diffusion coefficient. The conductivity $\sigma$ is a measure of the response to an external field $E$. Generally, $D,\sigma$ are density-dependent.
The fluctuating hydrodynamics approach posits that
Fick's law can be extended to a dynamical Langevin equation \begin{equation}
j(x,\tau) = \mathfrak{J}(\rho(x,\tau)) + \sqrt{\frac{\sigma(\rho(x,\tau))}{L}} \xi(x,\tau),
\label{eq:Langevin eq}
\end{equation}
where $\xi(x,\tau)$ is a Gaussian white noise.
The strength of the noise in diffusive systems $\sqrt{\sigma/L}$ is tuned to be consistent with the Einstein relation \cite{Derrida2007}. The dynamics of diffusive systems is thus expressed through $D$ and $\sigma$ only.
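As an illustration (not part of the original analysis), the Langevin equation \eqref{eq:Langevin eq} can be integrated directly on a ring with an Euler-Maruyama scheme, placing the conserved noise on the bonds. The grid size, time step and system size below are illustrative choices, and the transport coefficients are those of the SSEP:

```python
import numpy as np

# Euler-Maruyama sketch of fluctuating hydrodynamics on a ring for the
# SSEP transport coefficients D = 1, sigma(rho) = 2 rho (1 - rho).
# Grid size nx, time step dt and system size L are illustrative choices.
rng = np.random.default_rng(1)
nx, dt, nsteps, L = 64, 1e-5, 2000, 2000
dx = 1.0 / nx
rho = np.full(nx, 0.5)                          # start from half filling
for _ in range(nsteps):
    sigma = np.clip(2.0 * rho * (1.0 - rho), 0.0, None)
    # spatio-temporal white noise: variance 1/(dx dt) per cell and step
    xi = rng.standard_normal(nx) / np.sqrt(dx * dt)
    # current on the bond between sites i and i+1: Fick's law plus noise
    j = -(np.roll(rho, -1) - rho) / dx + np.sqrt(sigma / L) * xi
    # discrete continuity equation; conserves sum(rho) up to round-off
    rho -= dt * (j - np.roll(j, 1)) / dx
```

Because the noise enters through the current, the discrete update conserves the total mass exactly (up to round-off), mirroring the bulk conservation assumed in the text.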
From the Langevin equation \eqref{eq:Langevin eq} and by using the Martin-Siggia-Rose formalism \cite{Martin1973}, the fundamental result of the MFT is derived. Namely, we find that
the probability to observe a history $\lbrace \rho,j \rbrace$ of the system during time $\left[0,T=t/L^2\right]$ is given by
\begin{eqnarray}
\label{eq:fund eq}
\mathcal{P} (\lbrace \rho,j \rbrace) & \sim & \exp \left(-L \, \mathcal{I}_{\left[0,T\right]}(\rho,j) \right),
\\ \nonumber
\mathcal{I}_{\left[0,T\right]}(\rho,j) &=& \intop^1 _0 dx \intop^T _0 d\tau \, \frac{(j+D\partial_x \rho-\sigma E)^2}{2\sigma},
\end{eqnarray}
where the continuity equation is implicitly assumed. The MFT becomes exact for $L\rightarrow \infty$.
Indeed, trying to extract microscopic details from a hydrodynamic theory is usually an ill-fated attempt.
Notice that any observable obtained through \eqref{eq:fund eq} will be dominated by the saddle-point since $L$ is large.
It is convenient to define the rescaled LDF $\Phi(J)=L I(J)$, so that $\Phi$ is $L$ independent to leading order. Then, using the MFT to calculate the LDF of diffusive systems boils down to solving the minimization problem
\begin{equation}
\Phi(J)= \frac{1}{T} \min_{j,\rho} \mathcal{I}_{\left[0,T\right]}(\rho,j),
\label{eq:LDF formal}
\end{equation}
where $\lbrace \rho,j \rbrace$ satisfy the continuity equation \eqref{eq:continuity equation} and the macroscopic particle transfer $Q = L^2 \int dx d\tau j $ for a large diffusive time $T \gg 1$ \footnote{We implicitly assume that particles do not accumulate in the system. See \cite{Hirschberg2015} for a contrary case. }. Since we consider non-equilibrium processes, boundary conditions usually strongly impact the results. Here, we consider periodic boundary conditions and boundary driven processes -- where the system is coupled to two particle reservoirs with densities $\rho_{l,r}$ respectively. The reservoirs' state is assumed to be unaffected by the interaction with the system.
For a periodic system, the integrated number of particles is conserved. Therefore, one requires that $\int dx \, \rho(x,\tau)$ is fixed for any $\tau$. Moreover, $\rho(x=0,\tau)=\rho(x=1,\tau)$. For boundary driven processes, $\rho(x=\lbrace 0,1\rbrace ,\tau )$ is fixed to the boundary values
$\rho_{l,r}$.
\section{Dynamical phase transitions}
Finding a solution to \eqref{eq:LDF formal} is hard even for simple models. It requires solving a partial differential non-linear problem with constraints. In \cite{Bodineau2004}, it was conjectured that the optimal density profile in current fluctuations is time-independent, which is the so-called additivity principle. For $1D$ systems, it implies $j(x,\tau)=J,\rho(x,\tau)=\rho(x)$. The particle transfer and continuity constraints are relaxed and
the variational principle (\ref{eq:fund eq}) is simplified to
\begin{equation}
\Phi_{\textnormal{AP}}(J) = \min_{\rho(x)} \int dx \, \frac{(J+D\partial_x \rho - \sigma E)^2}{2\sigma}.
\label{eq:AP LDF}
\end{equation}
This is clearly a significant improvement, as the solution of \eqref{eq:AP LDF} requires solving a non-linear ordinary differential equation \citep{Bodineau2004,Imparato2009,Bertini2005}. See \cite{Shpielberg2016,Bertini2005,Bertini2006} for discussions on the validity of the additivity principle.
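To make \eqref{eq:AP LDF} concrete, the functional can also be minimized numerically on a grid. The sketch below (illustrative grid size, boundary densities and optimizer choice, not taken from the analysis above) treats the boundary-driven SSEP with $D=1$, $\sigma=2\rho(1-\rho)$ and $E=0$; at the typical Fick current $J=\rho_l-\rho_r$ the minimizer should recover the linear profile with a vanishing functional:

```python
import numpy as np
from scipy.optimize import minimize

# Discretized additivity-principle functional for the boundary-driven
# SSEP: D = 1, sigma(rho) = 2 rho (1 - rho), E = 0.  Grid size and
# boundary densities are illustrative choices.
nx = 41
x = np.linspace(0.0, 1.0, nx)
rho_l, rho_r = 0.8, 0.2

def ap_functional(rho_int, J):
    """Midpoint-rule discretization of  int dx (J + rho')^2 / (2 sigma)."""
    rho = np.concatenate(([rho_l], rho_int, [rho_r]))
    drho = np.diff(rho) / np.diff(x)
    rho_mid = 0.5 * (rho[1:] + rho[:-1])
    sigma = 2.0 * rho_mid * (1.0 - rho_mid)
    return np.sum((J + drho) ** 2 / (2.0 * sigma) * np.diff(x))

# At the typical (Fick) current the optimal profile is linear and the
# functional vanishes, consistent with Phi_AP(J_typ) = 0.
J_typ = rho_l - rho_r
res = minimize(ap_functional, 0.5 * np.ones(nx - 2), args=(J_typ,),
               method="L-BFGS-B",
               bounds=[(1e-3, 1.0 - 1e-3)] * (nx - 2))
```

For atypical $J$ the same routine returns the discretized $\Phi_{\textnormal{AP}}(J)$, which is what makes the additivity principle computationally attractive compared to the full space-time problem.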
However, even for a time-independent density profile, the solution need not be unique, which usually gives rise to a DPT.
For periodic systems, translational symmetry suggests a spatially invariant density profile, so that $\rho(x) \rightarrow \rho $, which fixes the solution. This constant solution can be overtaken by a traveling wave solution, as was shown in \cite{Bodineau2005,Appert-Rolland2008,Espigares2013}, amounting again to a DPT.
\section{Finite size corrections}
From now on, we focus only on models with dynamics that satisfy particle-hole symmetry. This implies that odd derivatives of $D,\sigma$ w.r.t. $\rho$ vanish at $\rho=1/2$. The solution of \eqref{eq:AP LDF} for periodic boundary conditions with mean density $\rho=1/2$, as well as for a boundary driven process with $\rho_{l,r} = 1/2$, clearly bears a special symmetry. Assuming the additivity principle, the constant density solution $\rho(x)=1/2$ is a solution for any $J$, which results in $\Phi_{\textnormal{AP}}(J) = J^2 / 2\sigma$. From here on, $D, \sigma$ and their derivatives are always evaluated at $\rho=1/2$.
Taking small fluctuations around this solution, namely $\rho(x,t) \rightarrow 1/2 + \delta \rho $ and $j(x,t) \rightarrow J + \delta j $ allows to explore the finite size corrections and whether the solution is indeed optimal. Note that $\delta \rho,\delta j$ have to satisfy the continuity equation and the integrated current constraint. For a boundary driven case the fluctuations can be recast using the Fourier representation
\begin{eqnarray}
\delta \rho &=& \frac{1}{2}\sum_{k,\omega} k \sin (kx) (a_{k,\omega} e^{i\omega \tau} + a^\star _{k,\omega} e^{-i\omega \tau}),
\\ \nonumber
\delta j &=& \frac{1}{2} \sum_{k,\omega} i\omega \cos (kx) (a_{k,\omega} e^{i\omega \tau} - a^\star _{k,\omega} e^{-i\omega \tau}).
\label{eq:Foureir rep}
\end{eqnarray}
where $k=\pi n$ and $\omega = \frac{2\pi}{T}m$ for $n,m\in \mathbb{Z}$. Moreover, $a^\star _{k,\omega} = a _{k,-\omega} $ and $a _{-k,\omega} = a _{k,\omega} $. We remark that the $k,\omega$ values have a finite cutoff of the order $\lvert k \rvert \sim L$ and $\lvert \omega \rvert \sim L^2$ for the hydrodynamic theory to be valid. Let us keep that in mind, and denote these cutoffs by $k_{\max},\omega_{\max}$ (for periodic boundary conditions a slightly different representation is required, see \cite{Appert-Rolland2008}).
To find $\Delta \Phi \equiv \Phi - \Phi_{\textnormal{AP}} $, we rescale $a_{k,\omega}\rightarrow a_{k,\omega}/\sqrt{L}$ and obtain a perturbative Landau-like theory as
\begin{equation}
\Delta \Phi =
-\frac{1}{T L} \log
\prod_{k\geq 0,\omega\geq 0} \int d^2 a_{k,\omega} e^{-\int dx d\tau \, \sum_{j\geq 2} \frac{S_j}{L^{1-j/2}}}
\end{equation}
with the Gaussian term $S_2 = \sum_{k,\omega} f (k,\omega) \lvert a_{k,\omega} \rvert ^2
$ such that $f = \frac{\omega^2}{2\sigma} +\frac{D^2}{2\sigma} k^2 (k^2 -2u) $ with $u= \epsilon \frac{J^2 - E^2 \sigma^2 }{16D^2 \sigma}\sigma'' $ and $\epsilon = 4(1) $ for boundary driven (periodic) systems. The higher order terms $S_i$, explicitly detailed in the appendix \ref{sec:app:driven corr }, were considered here only for the boundary driven case.
Evaluating $\Delta \Phi$ boils down to performing a perturbation theory for Gaussian integrals. Let us define $\Delta \Phi = \sum_{j=1,2,...}L^{-j} \Phi_j $. We then find
\begin{equation}
\Phi_1 = d D \mathcal{F}(u) + c J^2 ,
\label{eq:Phi1}
\end{equation}
where $c$ is a constant that depends on the cutoffs and cannot be evaluated from a hydrodynamic theory, $d=\frac{1}{8} \, (1)$ for boundary driven (periodic) systems and $\mathcal{F}$ is
\begin{equation}
\mathcal{F}(u) = -4 \sum_{n=1,2,...} \left[ n \pi \sqrt{n^2 \pi^2 -2u } - n^2 \pi^2 + u \right],
\label{eq:F singular}
\end{equation}
already recovered in this context \citep{Appert-Rolland2008,Imparato2009,Baek2018} as well as others \cite{Beisert:2005mq,Gromov:2005gp}.
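The sum in \eqref{eq:F singular}, with the summand taken as a whole (which is necessary for convergence), is straightforward to evaluate numerically. A sketch follows; the truncation level is an arbitrary choice, and the summand is rewritten in the algebraically equivalent form $u - 2un\pi/(\sqrt{n^2\pi^2-2u}+n\pi)$ to avoid the cancellation of two large terms:

```python
import numpy as np

def F(u, nmax=200_000):
    """Truncated evaluation of
        F(u) = -4 * sum_{n>=1} [ n pi sqrt(n^2 pi^2 - 2u) - n^2 pi^2 + u ].
    The summand is rewritten to avoid cancellation; it decays like 1/n^2,
    so the truncation error is O(1/nmax).  Real-valued for u <= pi^2/2."""
    k = np.pi * np.arange(1, nmax + 1, dtype=float)
    summand = u - 2.0 * u * k / (np.sqrt(k * k - 2.0 * u) + k)
    return -4.0 * np.sum(summand)

u_star = np.pi ** 2 / 2   # derivatives of F blow up as u -> u_star
```

One checks directly that $\mathcal F(0)=0$ and that finite-difference slopes steepen as $u\to u^\star$, reflecting the square-root singularity.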
For $u= u^\star = \pi^2/2$, $\mathcal F(u)$ is non-analytic and its derivatives diverge. This singularity has been discussed as the onset of a DPT~\citep{Appert-Rolland2008,Imparato2009,Baek2018}. It also implies the breakdown of the perturbation theory close to the transition point (see appendices \ref{sec:app:non-convexity} and \ref{sec:app:periodic corr}), i.e., all the higher order perturbation coefficients $\Phi_j$ diverge at this point. The singular part of the $1/L$ correction is universal -- independent of microscopic details and fully captured by the macroscopic $D,\sigma$. Therefore, it is natural to assume that a singular universal function, just like in \eqref{eq:singular exponents}, emerges from the sum of all the singular corrections. To obtain the scaling exponents $\alpha,\beta$, it is sufficient to find the dominant singular behavior of $\Phi_2$, as shown below.
The $1/L^2$ correction is cumbersome and littered
with non-universal terms, depending on microscopic details (see appendix \ref{sec:app:driven corr }).
Focusing on the leading singular term and defining $\delta u = u^\star - u$, we find that
\begin{equation}
\label{eq:Phi2 sing}
\Phi_2 = \frac{15 \pi ^4 \left(D\sigma ''-2 \sigma D''\right)}{16 D \delta u} + \mathcal{O} \left(\frac{1}{\sqrt{\delta u}}\right)
\end{equation}
as $\delta u \rightarrow 0$. We expect periodic systems to yield a similar leading term. Let us now evaluate the critical exponents.
\section{The scaling function}
We have shown that the finite size corrections diverge at the critical point $u^\star$.
For a continuous phase transition, we expect (to leading order)
\begin{equation}
L \Delta \Phi (J) = \frac{1}{L^{\alpha}} \phi\left(\delta u L^\beta \right) + \textnormal{non universal terms}. \label{eq:scaling form}
\end{equation}
Here, $\phi(r)$ is the scaling function and the non-universal terms are of order 1. From \eqref{eq:Phi1}, \eqref{eq:F singular}, \eqref{eq:Phi2 sing},
we find that the leading singular term is of the form $
\phi_0 \sqrt{\delta u} + \frac{\phi_1}{L \delta u } + \mathcal{O}(\frac{1}{L^2})
$ where $\phi_{0,1}$ are constants. To keep the scaling \eqref{eq:scaling form}, we find that $\alpha = \beta/2 = 1-\beta$. This leads to the exponents $\alpha = 1/3, \beta = 2/3$.
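In detail, the two conditions follow from matching each singular term separately against the scaling form:
\begin{align*}
\frac{1}{L^{\alpha}}\left(\delta u\, L^{\beta}\right)^{1/2} \sim \sqrt{\delta u}
\;&\Longrightarrow\; \alpha = \frac{\beta}{2},\\
\frac{1}{L^{\alpha}}\,\frac{1}{\delta u\, L^{\beta}} \sim \frac{1}{L\,\delta u}
\;&\Longrightarrow\; \alpha + \beta = 1,
\end{align*}
whose unique solution is indeed $\alpha = 1/3$, $\beta = 2/3$.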
\section{Bethe ansatz for the SSEP}
\label{subsec:2nd order - Bethe}
To test whether the critical exponents are indeed universal, we corroborate our result by analyzing the finite size corrections of an integrable microscopic model -- the SSEP. The SSEP is defined by setting $E=0$ in the WASEP (See the caption of Fig.~\ref{fig:third_order} or the appendix \ref{sec:app:smatrix}). Macroscopically, it corresponds to $D=1,\sigma = 2 \rho (1-\rho)$.
Note that, since $u$ is always negative in this case, the singularity of $\mathcal F(u)$ is not attained for real values of $J$. Yet the analysis can still teach us about the formal structure of the universality, through the poles appearing in the perturbation coefficients.
For a technical reason, instead of trying to find the LDF $I(J)$, we equivalently consider the cumulant generating function (CGF) $G(s)= \lim_{t\rightarrow\infty}\frac{1}{t}\log\sum_Q e^{-sQ} P_t (Q)$. Note that the CGF is a Legendre-Fenchel transform of the LDF. For diffusive processes, it is natural to define the rescaled CGF $\mu(\lambda)=L G(s)$, where $\lambda = sL$, similarly to the rescaled LDF $\Phi(J)$.
For a Markov process, the CGF is the ground state energy (lowest eigenvalue) of an operator $H$ associated to the Markov matrix \cite{Derrida2004} (see appendix \ref{sec:app:smatrix}). This property makes the CGF appealing from both a numerical and a theoretical perspective, as we shall see in the following.
For the SSEP, the CGF $G(s)$ corresponds to the ground state energy of a quantum spin chain operator \cite{Derrida2004}
\begin{equation}
\begin{aligned} \label{eq:SSEPhamiltonian}
H = & \frac{L}{2}-\frac{1}{2}\sum_{i=1}^{L}\bigl[\cosh s \left( \sigma_{i}^{x}\sigma_{i+1}^{x}+\sigma_{i}^{y}\sigma_{i+1}^{y}\right)+\sigma_{i}^{z}\sigma_{i+1}^{z}\\
&-i \sinh s \left( \sigma_{i}^{x}\sigma_{i+1}^{y}+\sigma_{i}^{y}\sigma_{i+1}^{x}\right) \bigr]\,.
\end{aligned}
\end{equation}
Its eigensystem can be exactly determined via the Bethe ansatz \cite{korepin_bogoliubov_izergin_1993}. In the coordinate formulation of the Bethe ansatz,
each particle is described by a plane wave, with the interactions embodied in a pairwise factorizable scattering matrix \cite{sutherland2004beautiful}.
Considering $N$ particles on a ring of size $L$, such that the mean density $\rho =N/L \in(0,1) $, the corresponding wave-function is parametrized by the complex parameters $\{\xi_i\}_{i=1}^{N}$, which are quantized according to the so-called Bethe equations
\begin{equation}
\xi_{i}^{L} = \prod_{\substack{j=1 \\ j\neq i}}^{N}\left[-\frac{e^{s}-2\xi_i+e^{-s}\xi_i \xi_j}{e^{s}-2\xi_j+e^{-s}\xi_i \xi_j} \right]\,.
\label{eq:BAE}
\end{equation}
The eigenvalues are expressed in terms of the solutions of these equations through
\begin{equation}
G(s) = -2 N+e^{-s}\sum_{j=1}^{N}\xi_j+e^{s}\sum_{j=1}^{N}\frac{1}{\xi_j}\,.
\end{equation}
Note that the ground state is shifted here by $2N$ to obtain $G(s)$.
Based on Bethe ansatz, the CGF was already calculated to order $1/L$ \cite{Appert-Rolland2008}. Using an alternative method based on the Baxter equation \cite{Gromov:2005gp}, we compute the $1/L^2$ corrections and numerically validate our results.
Generally, solving \eqref{eq:BAE} analytically is infeasible for arbitrary finite $N$ and $L$, but in the thermodynamic limit, where $N,L\rightarrow \infty$, the equations become tractable.
The key observation is that under the change of variables
$\xi_i = e^{s} (z_i +i/2)/(z_i - i/2)$,
the Bethe equations \eqref{eq:BAE} become
\begin{equation} \label{eq:XXX}
\left(\frac{z_i+i/2}{z_i-i/2}\right)^{L} =e^{-\lambda}\,\prod_{\substack{j=1 \\ j\neq i}}^{N}\frac{z_i-z_j+i}{z_i-z_j-i}\;\;\;\; i=1,\dots,N.
\end{equation}
Eqs.\eqref{eq:XXX} are precisely the Bethe equations for the twisted XXX$_{1/2}$ spin-chain with $\lambda$ playing the role of the twist \cite{Sklyanin:1988yz}.
Finite size corrections to the spectrum of spin chains of this type are well studied. In particular, a powerful method based on the so-called Baxter equation was developed in \cite{Gromov:2005gp} and applied for the closely related $sl(2)$ spin chain. Based on the results of \cite{Gromov:2005gp}, we determine the finite size corrections of the SSEP to order $1/L^2$. Namely, we determine $\mu_0, \mu_1$ and $\mu_2$, where we have defined $\mu(\lambda) =\sum_{i=0}^{\infty} \mu_i L^{-i}$.
The expressions for $\mu_0$ and $\mu_1$ (see appendix \ref{sec:app:periodic corr}) agree with the previously obtained results \cite{Appert-Rolland2008}.
The full expression for $\mu_2$, which is one of our main results, is lengthy and is given in the appendix.
These theoretical predictions of $\mu_{0,1,2}$ are confirmed in the appendix by comparing to a population dynamics algorithm \cite{Giardina2006xxx,tailleur2007probing,Hurtado2009,Giardina2011JSP,Nemoto_Bouchet_Jack_Lecomte,Nemoto2017,Ray2018PRL,klymko2018rare,Brewer2018}.
The interesting part of $\mu_2$ arises at its strongest singularity. For illustration, we consider the lowest mode in
\eqref{eq:mu2_exact} of the appendix,
namely $k=1$, and we find the following singular behavior
\begin{equation}
\mu_2 \sim \frac{2 \pi ^4 \left(\theta ^2-1\right)}{\delta u \,\theta ^2}+\mathcal{O}\left( \frac{1}{\sqrt{\delta u}}\right)\,,
\label{eq:G2 corr}
\end{equation}
where we have introduced $\delta u \equiv \frac{1}{8}\theta^2 \lambda^2 + \frac{\pi^2}{2}$ and $\theta = 2 \sqrt{\rho(1-\rho)}$.
Under the continuation to complex values of $\lambda$, we find simple poles at the positions $\lambda = \pm \frac{2 i \pi }{\theta }$.
We then get the same type of singularity in $\delta u$ as in the hydrodynamics analysis, i.e., ($\mu_2 = \mathcal O(1/\delta u)$). Combined with the results of the MFT, we deduce that the $1/L^2$ correction diverges with $1/\delta u$.
\section{Numerical verification \label{sec:numerical verification}}
It is numerically hard to single out the singular universal function $\phi$ from the (unknown) non-universal terms. However, differentiation accentuates the singular term, as detailed below.
In terms of CGF $\mu(\lambda)$, the scaling form eq.(\ref{eq:scaling form})
is written as
\begin{equation}
L \left [ \mu(\lambda) - \mu_{\rm AP}(\lambda) \right ] = \frac{1}{L^{\alpha}} \tilde \phi (\delta u L^{\beta}) + {\rm non \ universal \ terms},
\end{equation}
where $\delta u = \pi^2/2 - u$ and
\begin{equation}
u(\lambda)=\mu_{\rm AP}(\lambda) \frac{\sigma^{\prime\prime}}{ 8 D^2},
\end{equation}
\begin{equation}
\mu_{\rm AP}(\lambda)=\lim_{L\rightarrow \infty}\mu(\lambda).
\end{equation}
To derive these expressions, we have used $J=-\mu^{\prime}_{\rm AP}(\lambda)$ \footnote{Note that a shift in the critical point is also possible due to finite size effects. We do not discuss this point further.}.
This scaling form indicates that the higher order derivatives of $\mu(\lambda)$ with respect to $\delta u$ are dominated by the universal function $\tilde \phi$. More precisely,
\begin{equation}
L^{\alpha - d \beta + 1}\mu^{(d)}(\lambda) \big |_{r=\delta u L^{\beta}} = \tilde \phi(r) + {\mathcal O}(L^{\alpha - d \beta + 1}),
\label{SM:scaling_}
\end{equation}
where $\mu^{(d)}$ is the $d$-th derivative of $\mu$ with respect to $\delta u$ and $r=\delta u L^{\beta}$ is the scaling variable. We can thus see that a sufficiently high derivative order $d$ (more precisely, $d=3$ given $\alpha = 1/3$, $\beta = 2/3$) allows us to neglect the $\mathcal{O}(L^{\alpha - d \beta + 1})$ correction. Using the method detailed in the next paragraph, we have numerically probed $\mu(\lambda)$ for the WASEP to search for the universal scaling function.
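The exponent bookkeeping behind the choice $d=3$ can be checked directly: with $\alpha=1/3$ and $\beta=2/3$, the correction exponent $\alpha - d\beta + 1$ first becomes negative at $d=3$ (a small sketch using exact rational arithmetic):

```python
from fractions import Fraction

alpha, beta = Fraction(1, 3), Fraction(2, 3)
for d in range(1, 5):
    corr = alpha - d * beta + 1      # exponent of the O(L^...) correction
    print(d, corr, corr < 0)         # negative => correction vanishes as L grows
```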
We plot the left-hand side of Eq.~(\ref{SM:scaling_}) in Fig.~\ref{fig:third_order}. One can clearly see that the curves start to overlap from the third-order derivative, supporting the prediction of the scaling exponents.
To obtain $\mu(\lambda)$ and its derivatives, we numerically diagonalize the s-biased operator $L^s_{\mathcal C,\mathcal C^{\prime}}$, whose explicit expression is detailed as (\ref{eq:appen_Lscc}) in Appendix~\ref{sec:app:smatrix}.
In order to obtain the derivatives of the CGF in a stable manner, we use the following method:
We denote the eigenvalue equation of $L^s$ by
\begin{equation}
L^s \varsigma = G(s) \varsigma,
\end{equation}
where $\varsigma$ is the right eigenvector associated with the principal eigenvalue $G(s)$. To get the first order derivative, we numerically solve the following equation
\begin{equation}
(L^s)^{\prime} \varsigma + L^s \varsigma^{\prime} = G^{\prime}(s) \varsigma + G(s) \varsigma^{\prime},
\end{equation}
together with the eigenvalue equation.
Similarly, to get the second order derivatives, we add another equation $(L^s \varsigma)^{\prime\prime} = (G(s) \varsigma)^{\prime\prime}$ to these equations. Higher-order derivatives can be calculated with the same strategy. Thanks to this method, we do not have to rely on finite-difference schemes, which amplify the estimation error of the higher-order derivatives.
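The first-derivative step can be sketched numerically for a symmetric toy operator (the actual $s$-biased WASEP matrix is not reproduced here; the function name and the random matrices are illustrative assumptions). The differentiated eigenvalue equation is solved jointly with the eigenvalue equation, a normalization row $\varsigma\cdot\varsigma^{\prime}=0$ making the bordered linear system invertible when the principal eigenvalue is simple:

```python
import numpy as np

def eigen_derivative(L, dL):
    """Given L^s and its s-derivative dL, return (G'(s), dvarsigma/ds) by
    solving (L - G) v' - G' v = -dL v together with the normalization v.v' = 0."""
    w, v = np.linalg.eigh(L)              # symmetric toy case: real spectrum
    G, s_vec = w[-1], v[:, -1]            # principal eigenpair
    n = L.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = L - G * np.eye(n)
    M[:n, n] = -s_vec                     # unknown G' multiplies -v
    M[n, :n] = s_vec                      # normalization row: v . v' = 0
    rhs = np.zeros(n + 1)
    rhs[:n] = -dL @ s_vec
    sol = np.linalg.solve(M, rhs)
    return sol[n], sol[:n]

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A = (A + A.T) / 2
B = rng.standard_normal((6, 6)); B = (B + B.T) / 2
s = 0.3
Gp, _ = eigen_derivative(A + s * B, B)    # G'(s) for the toy family L(s) = A + sB
```

In the symmetric case the result can be cross-checked against the Hellmann--Feynman identity $G^{\prime} = \varsigma\cdot(L^{\prime}\varsigma)$.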
\section{Discussion}
We have probed the LDF (CGF) of the current in diffusive systems using a hydrodynamical theory, a Bethe ansatz approach and numerical simulations. For dynamics with particle-hole symmetry, a singular scaling function with universal exponents is observed. This implies that near the transition, macroscopic fluctuations dominate and hydrodynamic theories are sufficient to observe the critical behavior \cite{Baek2018,Gerschenfeld2011}. Thus, it is understood that non-equilibrium systems are prone to universality, where not only microscopic bulk dynamics, but also boundary condition details may be washed away.
Our observation leads us to conjecture similar scaling exponents for current fluctuations in an infinite chain, starting from step initial conditions \cite{Derrida2009,derrida2009a,Krapivsky2012}. Consider an infinite $1D$ chain, where at time $t=0$ the sites $i\leq 0$ have mean density $\rho_l$ and the sites $i>0$ have mean density $\rho_r$, with Bernoulli distribution (no correlations between the sites). For the SSEP, the CGF was completely determined (see Eq.~2 in \cite{derrida2009a}). One can notice that for $\rho_{l,r}=\frac{1}{2}$, the CGF becomes singular for the unphysical value $\lambda = \pm i \pi$, similarly to the value obtained for the boundary driven setup. Given this similar structure, it is appealing to conjecture that the universal scaling found here carries over to the infinite-chain setup as well.
While the universality class here involves diffusive processes with particle-hole symmetry, it is tempting to check whether the exponents remain valid outside the range of validity currently considered, e.g., in models of ballistic or anomalous transport. The nonlinear fluctuating hydrodynamics theory \cite{Spohn2014} may allow one to detect the universality class in these regimes.
Two more remarks are in order for the scaling function. Notice that for the SSEP on a ring, with the mean density $\rho=1/2$, the singular terms vanish in \eqref{eq:G2 corr}, as do the subleading diverging terms (see appendix \ref{sec:app:periodic corr}). This does not imply that the singular behavior changes as the density is changed by an infinitesimal amount; we expect that a similar scaling will be recovered at the next order of the expansion. Secondly, as is verified for the SSEP on a ring, the diverging term in \eqref{eq:G2 corr} is a simple pole even without particle-hole symmetry in the density. Therefore, the critical exponents do not change for periodic boundary conditions, irrespective of the symmetry. It would be interesting to find whether the scaling exponents remain the same even when the particle-hole symmetry is broken for boundary driven processes. To verify that, it is necessary to find continuous DPTs in boundary driven processes which are analytically tractable. Unfortunately, such transitions are not expected to support a constant density profile that enables the direct perturbation theory performed here \cite{Shpielberg2017a}.
\begin{acknowledgments}
We thank Y. Baek, N. Gromov, O. Hirschberg, V. Kazakov and Elsen Tjhung for fruitful discussions. We especially thank B. Derrida for many insightful remarks. OS acknowledges the support of ANR-14-CE25-0003. The work of JC was supported by the
People Programme (Marie Curie Actions) of the European Union's Seventh
Framework Programme FP7/2007-2013/ under REA Grant Agreement No 317089
(GATIS), by the European Research Council (Programme ``Ideas''
ERC-2012-AdG 320769 AdS-CFT-solvable), from the ANR grant StrongInt
(BLANC- SIMI- 4-2011). This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the program Investissements d'Avenir supervised by the Agence
Nationale pour la Recherche.
This work was also granted access to the HPC resources of CINES/TGCC under the allocation 2018-A0042A10457 made by GENCI.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Introduction}
In this paper we construct the first examples of finitely generated, residually finite groups $G$ whose outer automorphism groups are finitely generated and not recursively presentable. Indeed, we construct a continuum, so $2^{\aleph_0}$, of such groups $G$ with pairwise non-isomorphic outer automorphism groups. Our construction is motivated by a question of Bumagin and Wise, who asked if every countable group $Q$ could be realised as the outer automorphism group of a finitely generated, residually finite group $G_Q$.
In this paper we solve a finite-index version of this question for $Q$ finitely generated and residually finite, and the aforementioned result then follows. Bumagin and Wise solved the question for $Q$ finitely presented \cite{BumaginWise2005}. In previous work, the author gave a partial solution for $Q$ finitely generated and recursively presentable \cite[Theorem A]{logan2015outer}, and a complete solution for these groups assuming that there exists a ``malnormal version'' of Higman's embedding theorem \cite[Theorem B]{logan2015outer}.
\p{Residually finite groups}
A group $G$ is residually finite if for all $g\in G\setminus\{1\}$ there exists a homomorphism $\phi_g:G\rightarrow F_g$ where $F_g$ is finite and where $\phi_g(g)\neq1$.
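The definition can be illustrated concretely for $G=\mathbb{Z}$, where the separating finite quotients are the reductions modulo a suitable $n$ (a minimal sketch; the helper name is ours):

```python
def separating_modulus(g):
    # For G = Z and g != 0, the reduction Z -> Z/nZ with n = |g| + 1
    # sends g to a nonzero element, since n cannot divide g.
    assert g != 0
    return abs(g) + 1

for g in (1, -4, 100):
    n = separating_modulus(g)
    assert g % n != 0    # the image of g in Z/nZ is nontrivial
```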
Residual finiteness is a strong finiteness property. For example, finitely presentable, residually finite groups have soluble word problem, while finitely generated, residually finite groups are Hopfian \cite{Malcev1940}.
Our main result, which is Theorem~\ref{corol:mainresult}, contrasts with these ``nice'' properties as it implies that finitely generated groups can have very complicated symmetries.
Fundamental to this paper is the existence of finitely generated, residually finite groups which are not recursively presentable. Bridson-Wilton \cite[Section 2]{bridson2015triviality} point out that the existence of such groups follows from work of Slobodsko\v\i \cite{Slobodskoi1981undecidability}. The ``continuum'' statement in the main result, Theorem~\ref{corol:mainresult}, relies on the fact that there is a continuum of such groups, which is due to Minasyan-Ol'shanskii-Sonkin \cite[Theorem 4]{minasyan2009periodic}.
To see that the existence of such groups is fundamental to our argument, suppose that every finitely generated, residually finite group is recursively presentable, and let $G$ be a finitely generated, residually finite group with finitely generated outer automorphism group. Then $\operatorname{Aut}(G)$ is finitely generated and residually finite \cite{Baumslag}, and hence is recursively presentable. Therefore, as the kernel of $\operatorname{Aut}(G)\rightarrow \operatorname{Out}(G)$ is finitely generated (because $\operatorname{Inn}(G)\cong G/Z(G)$), $\operatorname{Out}(G)$ is also recursively presentable. Hence, the existence of finitely generated, residually finite groups which are not recursively presentable is necessary for our argument.
\p{The main construction}
The main result of this paper, the result stated in the abstract, is Theorem~\ref{corol:mainresult}. This theorem follows from a more general construction, Theorem~\ref{thm:maintheorem}, which relates to the outer automorphism groups of HNN-extensions of
certain groups. Theorem~\ref{thm:maintheorem} yields the following two corollaries, each of which individually solves Bumagin and Wise's question up to finite index for $Q$ finitely generated and residually finite. A \emph{triangle group} $T_{i, j, k}:=\langle a, b; a^i, b^j, (ab)^k\rangle$ is called hyperbolic if $i^{-1}+j^{-1}+k^{-1}<1$.
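The hyperbolicity condition is a simple arithmetic check (an illustrative sketch; exact rational arithmetic avoids rounding issues):

```python
from fractions import Fraction

def is_hyperbolic(i, j, k):
    # T_{i,j,k} is hyperbolic iff 1/i + 1/j + 1/k < 1
    return Fraction(1, i) + Fraction(1, j) + Fraction(1, k) < 1

print(is_hyperbolic(2, 3, 7))   # True: the (2,3,7) triangle group
print(is_hyperbolic(2, 3, 6))   # False: Euclidean case
print(is_hyperbolic(2, 3, 5))   # False: spherical (finite) case
```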
\begin{corollary}
\label{corol:triangle}
Fix a hyperbolic triangle group $H:=T_{i, j, k}$. Then every finitely-generated group $Q$ can be embedded as a finite index subgroup of the outer automorphism group of an HNN-extension $G_Q$ of $H$, where $G_Q$ is residually finite if $Q$ is residually finite.
\end{corollary}
The following corollary is satisfied by a random group, in the sense of Gromov \cite{gromov1996geometric,ollivier2005january}, at density $<1/6$ \cite{dahmani2011random,ollivier2011cubulating}.
\begin{corollary}
\label{corol:random}
Fix a hyperbolic group $H$ which has Serre's property FA and which acts properly and cocompactly on a $\operatorname{CAT}(0)$ cube complex. Then every finitely-generated group $Q$ can be embedded as a finite index subgroup of the outer automorphism group of an HNN-extension $G_Q$ of $H$, where $G_Q$ is residually finite if $Q$ is residually finite.
\end{corollary}
The main result of the paper is the following. By a \emph{continuum} we mean a set of cardinality $2^{\aleph_0}$, that is, of cardinality equal to that of the real numbers $\mathbb{R}$.
\begin{thmletter}
\label{corol:mainresult}
There exists a continuum of finitely generated, residually finite groups whose outer automorphism groups are pairwise non-isomorphic finitely generated, non-recursively-presentable groups.
\end{thmletter}
We prove Theorem~\ref{corol:mainresult} by noting the existence of a continuum of finitely generated, residually finite groups which are not recursively presentable, and then applying Theorem~\ref{thm:maintheorem} (or rather, either of the above corollaries) to these groups.
\p{Outline of the paper}
In Section~\ref{sec:prelim} we give two preliminary results on a certain class of HNN-extensions, which we call ``inner'' HNN-extensions. These are Theorem~\ref{thm:Prelim1}, which describes a certain subgroup of the outer automorphism group of an inner HNN-extension, and Proposition~\ref{thm:Prelim2}, which classifies the residual finiteness of a certain class of inner HNN-extensions.
In Section~\ref{sec:main} we prove our main results, Theorems~\ref{corol:mainresult}~and~\ref{thm:maintheorem}.
In Section~\ref{sec:Qs} we prove a result for finitely presented (rather than finitely generated) residually finite groups.
\p{Acknowledgments}
The author would like to thank Steve Pride and Tara Brendle for many helpful discussions about the work surrounding the paper, and Henry Wilton for ideas which led to Corollary~\ref{corol:random}.
\section{Two preliminary results}
\label{sec:prelim}
Our construction of Theorem~\ref{thm:maintheorem}, which leads to the main result, applies two preliminary results on \emph{inner HNN-extensions}, which are HNN-extensions where the action of the stable letter on the associated subgroup(s) is an inner automorphism of the base group. Such an HNN-extension $G$ has the following form (up to isomorphism).
\[
G\cong \langle H, t; k^t=k, k\in K\rangle
\]
The first result of this section, Theorem~\ref{thm:Prelim1}, relates to the outer automorphism groups of inner HNN-extensions, while the second result, Proposition~\ref{thm:Prelim2}, relates to their residual finiteness.
\p{First preliminary result} The first preliminary result, Theorem~\ref{thm:Prelim1}, tells us about a subgroup of the outer automorphism group of an inner HNN-extension. This subgroup, denoted $\operatorname{Out}^H(G)$, is the subgroup which consists of those outer automorphisms $\Phi$ with a representative $\phi\in\Phi$ which fixes $H$ setwise, $\phi(H)=H$.
\[
\operatorname{Out}^H(G)=\{\Phi\in\operatorname{Out}(G):\text{ there exists }\phi\in\Phi \text{ such that }\phi(H)=H\}
\]
Theorem~\ref{thm:Prelim1} gives, under certain conditions, the isomorphism class of this subgroup up to finite index. We write $A\leq_f B$ to mean that $A$ is a finite index subgroup of $B$.
\begin{theorem}\label{thm:Prelim1}
Let $G$ be an inner $HNN$-extension of $H$ with associated subgroup $K\lneq H$.
If $V$ is a subgroup of $H$ such that $K\leq V\leq N_H(K)$ and such that $V\cap Z(H)=1$ then $V/K$ embeds into $\operatorname{Out}^H(G)$.
In addition, if $V\leq_f N_H(K)$ and if both $\operatorname{Out}(H)$ and $C_H(K)$ are finite then this embedding is with finite index.
\end{theorem}
\begin{proof}
Let $\operatorname{Out}_H(G)$ denote the subgroup of $\operatorname{Out}(G)$ consisting of those outer automorphisms $\Phi$ with a representative $\phi$ which fixes $H$ setwise and which sends $t$ to a word containing precisely one $t$-term. The result holds for $\operatorname{Out}_H(G)$ in place of $\operatorname{Out}^H(G)$ \cite[Theorem A \& Lemma 5.2]{logan2015HNN}. Then $\operatorname{Out}_H(G)=\operatorname{Out}^H(G)$ by a result of M. Pettet \cite[Lemma 2.6]{pettet1999automorphism}.
\end{proof}
\p{Second preliminary result} The second result applied in Theorem~\ref{thm:maintheorem} is a criterion for residual finiteness of inner HNN-extensions. Ate\c{s}-Logan-Pride actually prove a more general version of this result \cite{AtesPride}.
We use the fact that a finite index subgroup $F$ of a group $G$ is residually finite if and only if $G$ is residually finite implicitly throughout the proof of this theorem. To prove this equivalence, note that subgroups of residually finite groups are clearly residually finite; for the other direction, rewrite the definition of a residually finite group using normal subgroups (corresponding to the kernels of the homomorphisms $\phi_g$), and note that every finite index subgroup of $F$ contains a finite index subgroup which is normal in $G$.
\begin{proposition}[Ate\c{s}-Logan-Pride \cite{AtesPride}]
\label{thm:Prelim2}
Let $G$ be an inner $HNN$-extension of a group $H$ with non-trivial associated subgroup $K\lneq H$.
Suppose $H$ is finitely generated and residually finite, and suppose that $N_H(K)$ has finite index in $H$. Then $G$ is residually finite if and only if $N_H(K)/K$ is residually finite.
\end{proposition}
Our application of Proposition~\ref{thm:Prelim2} only uses the ``if'' direction, and not the ``only if'' direction.
\begin{proof}
Firstly, $N_H(K)/K$ embeds into $\operatorname{Aut}(G)$ \cite[Proposition 5.3]{logan2015HNN}, hence $G$ is residually finite only if $N_H(K)/K$ is residually finite \cite{Baumslag}.
For the other direction, note that the HNN-extension $G$ is residually finite if for all finite sets $\{g_1, \ldots, g_n\}$ with $g_i\in H\setminus K$ there exists some finite index normal subgroup $N$ of $H$, $N\unlhd_f H$, such that $g_iK\cap N$ is empty for all $i\in\{1, \ldots, n\}$ \cite[Lemma 4.4]{BaumslagTretkoff}. We prove that this condition holds under the conditions of this lemma. To do this, we find for each such $g_i$ a normal subgroup $N_i$ of finite index in $H$ such that $g_iK\cap N_i$ is empty. Then, the finite-index subgroup $N:=\cap N_i$ has the required properties. There are two cases: $g_i\not\in N_H(K)$, and $g_i\in N_H(K)$.
Suppose $g_i\not\in N_H(K)$. Take the normal subgroup $N_i$ to be the intersection of the (finitely many) conjugates of $N_H(K)$. Then $hK\cap N_i$ is non-empty if and only if $h\in N_H(K)$, and hence $g_iK\cap N_i$ is empty.
Suppose $g_i\in N_H(K)$. Then $g_iK\neq K$ and because $N_H(K)/K$ is residually finite there exists a map $\psi_i: N_H(K)/K\rightarrow F_i$, such that $F_i$ is finite and $g_iK$ is not contained in the kernel of $\psi_i$. Therefore, there exists a map $\widetilde{\psi_i}: N_H(K)\rightarrow N_H(K)/K\xrightarrow{\psi_i} F_i$ such that $g_i$ is not contained in the kernel of $\widetilde{\psi_i}$, and take $N_i$ to be the kernel of the map $\widetilde{\psi_i}$. Then, $g_iK\cap N_i$ is empty by construction.
\end{proof}
\section{The proof of the main result}
\label{sec:main}
In this section we prove Theorems~\ref{corol:mainresult}~and~\ref{thm:maintheorem}. Recall that Theorem~\ref{corol:mainresult} is the main result of this paper.
\begin{thmletter}
\label{thm:maintheorem}
Fix a group $H$ such that $H$ is
\begin{enumerate}
\item\label{list:Hyp} hyperbolic,
\item\label{list:rf} residually finite, and
\item\label{list:large} large, (that is, $H$ contains a finite index subgroup $V$ which surjects onto $F_2$),
\end{enumerate}
and such that $H$ has
\begin{enumerate}[resume]
\item\label{list:FA} Serre's property FA, and
\item\label{list:tfSub} a torsion-free subgroup $U$ of finite index.
\end{enumerate}
Then every finitely-generated group $Q$ can be embedded as a finite index subgroup of the outer automorphism group of an HNN-extension $G_Q$ of $H$, where $G_Q$ is residually finite if $Q$ is residually finite.
\end{thmletter}
Note that (\ref{list:Hyp}) implies (\ref{list:rf}) if and only if (\ref{list:Hyp}) implies (\ref{list:tfSub}) \cite{kapovich2000equivalence}.
\begin{proof}
We give the construction, and then we prove that the required properties hold.
The group $G_Q$ is an inner HNN-extension, $G_Q=\langle H, t; k^t=k, k\in K\rangle$. Specifying the associated subgroup $K$ completes the construction. Let $N$ be a subgroup of $H$ such that $V/N\cong F_2$, with $V$ as in the statement of the theorem. Note that we can assume $V$ is torsion-free: for $U$ the torsion-free subgroup of finite index, the image of $V\cap U$ under the quotient map $V\rightarrow V/N$ is free and non-abelian, so rewrite $V:=V\cap U$. Then, for every natural number $n$ it holds that $H$ contains a torsion-free finite-index subgroup $V_n$ which maps onto $F_n$; this can be seen by applying the correspondence theorem to the fact that the free group on two generators contains finite-index free subgroups of arbitrary rank.
Let $Q$ be a finitely generated group. Then take a presentation $\langle X; \mathbf{r}\rangle$ of $Q$ with $2\leq|X|<\infty$ and $\mathbf{r}$ non-empty, and so $V_n$ maps onto $Q$ with $n:=|X|$. Take $K$ to be the subgroup of $V_n$ (and so of $H$) associated with the kernel of this map, so $V_n/K\cong Q$. Note that because $V_n$ has finite index in $H$,
we have that $V_n\leq_f N_H(K)\leq_f H$.
We now prove that the required properties hold. As $N_H(K)$ has finite index in $H$, Proposition~\ref{thm:Prelim2} implies that $G_Q$ is residually finite if $Q$ is residually finite. We now prove that $Q$ can be embedded as a finite index subgroup into $\operatorname{Out}(G_Q)$. We show that the conditions of Theorem~\ref{thm:Prelim1} are satisfied, with $V:=V_n$, and so $Q$ embeds with finite index into $\operatorname{Out}^H(G_Q)$.
The result then follows because $H$ has Serre's property FA, which implies that $\operatorname{Out}^H(G_Q)=\operatorname{Out}(G_Q)$ \cite[Lemma~2.1]{logan2015HNN}. Now, $\operatorname{Out}(H)$ is finite as the base group $H$ is a hyperbolic group with Serre's property FA \cite{levitt2005automorphisms}. Next, $K$ is non-cyclic because the map $V_n\rightarrow V_n/K$ factors through a non-cyclic free group (by assumption the set of relators $\mathbf{r}$ in the presentation for $Q$ is non-empty), and so $C_H(K)$ is finite as $H$ is hyperbolic.
By construction we have $K\leq V_n\leq_f N_H(K)$, and finally $V_n\cap Z(H)=1$ as $V_n$ is torsion-free by construction while $Z(H)$ is finite as $H$ is hyperbolic.
\end{proof}
We now prove Corollaries~\ref{corol:triangle}~and~\ref{corol:random}.
\begin{proof}[Proof of Corollary~\ref{corol:triangle}]
For $H$ a hyperbolic triangle group the properties (\ref{list:rf})--(\ref{list:tfSub}) are well-known to hold \cite{baumslag1987generalized,trees,feuer1971torsion}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{corol:random}]
The required properties follow from Agol's theorem \cite{agol2012virtual}.
\end{proof}
We now prove the main result of this paper, Theorem~\ref{corol:mainresult}. Recall that by a \emph{continuum} we mean a set of cardinality $2^{\aleph_0}$ ($=|\mathbb{R}|$).
\begin{proof}[Proof of Theorem~\ref{corol:mainresult}]
Begin by noting that there exists a continuum of finitely generated, residually finite groups which are not recursively presentable \cite[Theorem 4]{minasyan2009periodic}; let $\mathcal{Q}$ be a set, with cardinality the continuum, of such groups.
Applying Theorem~\ref{thm:maintheorem} to the set $\mathcal{Q}$, we obtain a set $\mathcal{G}=\{G_Q: Q\in\mathcal{Q}\}$ which consists of finitely generated, residually finite groups whose outer automorphism groups are finitely generated but not recursively presentable. Moreover, for $G_Q\in\mathcal{G}$, $\operatorname{Out}(G_Q)$ has only countably many subgroups of finite index, and hence the set $\mathcal{G}$ contains a (subset consisting of a) continuum of groups with pairwise non-isomorphic outer automorphism groups.
\end{proof}
All the outer automorphism groups in Theorem~\ref{corol:mainresult} are residually finite. This leads us to the following question.
\begin{question}
Does there exist a finitely generated, non-recursively-presentable, non-residually-finite group $Q$ which can be realised as the outer automorphism group of a finitely generated, residually finite group $G_Q$?
\end{question}
\section{When $G_Q$ is finitely presented}
\label{sec:Qs}
We now prove a result on $\operatorname{Out}(G_Q)$ for $G_Q$ finitely presented and residually finite.
\begin{theorem}
\label{thm:fpBumaginWise}
For every finitely presented, residually finite group $Q$ there exists a finitely presented, residually finite group $G_Q$ such that $Q$ embeds into $\operatorname{Out}(G_Q)$.
\end{theorem}
\begin{proof}
A version of Rips' construction due to Wise \cite{wise2003ripsconstruction} gives a finitely presented, centerless, residually finite group $H_Q$ with a three-generated subgroup $N=\langle a, b, c\rangle$ such that $H_Q/N\cong Q$.%
\footnote{More recent work of Wise and his coauthors proves that the group $H_Q$ in Rips' original construction is also residually finite. The main practical difference is that $N$ can then be taken to be two-generated \cite{rips1982subgroups}.}
Then the HNN-extension $G_Q=\langle H_Q, t; a^t=a, b^t=b, c^t=c\rangle$ is residually finite by Proposition~\ref{thm:Prelim2}, while $Q\cong H_Q/N$ embeds into $\operatorname{Out}(G_Q)$ by Theorem~\ref{thm:Prelim1}, with $V:=H_Q=N_{H_Q}(N)$.
\end{proof}
Note that the groups $Q$ in Theorem~\ref{thm:fpBumaginWise} can be taken to be any group which embeds into a finitely presentable, residually finite group.
We know nothing about the embedding $Q\hookrightarrow \operatorname{Out}(G_Q)$ in Theorem~\ref{thm:fpBumaginWise}.
Indeed, Theorem~\ref{thm:fpBumaginWise} is similar to a result of Wise, who proved the analogous theorem for finitely generated groups $G_Q$ by proving that $G/N$ embeds into $\operatorname{Out}(N)$ \cite[Corollary 3.3]{wise2003ripsconstruction}. Bumagin and Wise altered Rips' construction to make Wise's embedding an isomorphism \cite{BumaginWise2005}. It may be possible to similarly alter the construction of Theorem~\ref{thm:fpBumaginWise} to answer the following question. Note that if $Q$ is finitely generated and $G_Q$ is finitely presented and residually finite then $Q$ must be recursively presentable \cite[Proposition 3.4]{logan2015outer}.
\begin{question}
Can every finitely presented group $Q$ be realised as the outer automorphism group of some finitely presented, residually finite group $G_Q$? And for $Q$ finitely generated and recursively presentable?
\end{question}
\bibliographystyle{amsalpha}
\section{Introduction}
Abdominal Aortic Aneurysm (AAA), an enlargement of the abdominal aorta by more than 50\% of its normal diameter, occurs increasingly often among elderly people \cite{sakalihasan2005abdominal}. The rupture of an AAA carries an 85\%--90\% fatality rate \cite{kent2014abdominal}. Fenestrated Endovascular Aortic Repair (FEVAR) is a minimally invasive surgery for AAA, where a deployment catheter carrying a compressed stent graft is inserted via the femoral artery, advanced through the vasculature and deployed subsequently at the AAA position. Three typical stent grafts - iliac, thoracic and fenestrated stent grafts - are shown in Figure~\ref{fig:intro}(a), \ref{fig:intro}(b) and \ref{fig:intro}(c) respectively. In FEVAR, an accurate alignment of stent graft fenestrations or scallops (as shown in Figure~\ref{fig:intro}c) to aortic branches, i.e., the renal arteries, is necessary for connecting branch stent grafts into aortic branches \cite{cross2012fenestrated}. Although several robot-assisted systems have been developed to facilitate the FEVAR procedure, e.g., the Magellan system (Hansen Medical, CA, USA), the current navigation technique is still based on 2D fluoroscopic images, which are insufficient for 3D-to-3D alignment. Supplying 3D navigation for either the AAA or the fenestrated stent graft would improve the navigation.
\begin{figure}[th]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./Introduction}}}
\caption{Illustration of iliac stent graft (a), thoracic stent graft (b), fenestrated stent graft (c), marker number and different stent segment status (d).}
\label{fig:intro}
\end{figure}
For 3D AAA navigation, a skeleton-based as-rigid-as-possible approach was proposed to adapt a 3D pre-operative AAA shape to intra-operative position of the deployment device from two fluoroscopic images for recovering the 3D AAA shape \cite{toth2015adaption}. A skeleton instantiation framework for AAA with a graph matching method and skeleton deformation was introduced to instantiate the 3D AAA skeleton from a single 2D fluoroscopic image \cite{zheng20183d}.
For offering 3D navigation for fenestrated stent grafts, many methods have been implemented. The 3D stent shape was recovered from a 2D X-ray image via registration and optimization in \cite{demirci20113d}, but without estimating the graft or the angle and position of fenestrations or scallops. A 3D shape instantiation framework with stent graft modelling and the Robust Perspective-n-Point (RPnP) method was proposed to instantiate the 3D shape of a fully-compressed stent graft \cite{zhoustent}. The work in \cite{zhoustent} was then used to recover the 3D shape of each stent segment (as shown in Figure~\ref{fig:intro}b) with customized markers, while Focal U-Net and graft gap (as shown in Figure~\ref{fig:intro}b) interpolation were proposed to semi-automatically segment customized markers and recover the whole 3D shape of fully-deployed stent grafts in \cite{zhou2018real_ral}. Equally-weighted Focal U-Net was also proposed for automatic marker segmentation in \cite{zhou2018towards_iros} to improve the automation of the 3D shape instantiation framework. However, the method by Zhou et al. could not instantiate the 3D shape of a partially-deployed stent segment, as the 3D marker references required by the RPnP method are unknown.
The method proposed in this paper aims to obtain the deformation pattern between partially-deployed and fully-deployed stent segments using deep learning based methods. General artificial neural networks could be applied to this task, but with a very large parameter search space. The relationship between each two markers is not uniform, and the topological structure is non-Euclidean, so the classical convolutional kernel, and thus convolutional neural networks, cannot be used for this problem. A novel convolution on an undirected simple graph, called spectral graph convolution, was described in \cite{shuman2012emerging}. A Graph Convolutional Network (GCN) with a locally connected architecture was then proposed in \cite{bruna2013spectral} with $\mathcal{O}(n)$ parameters for each layer, based on the spectrum of the graph Laplacian, and was validated on the MNIST dataset. Furthermore, a more efficient GCN with localized spectral convolution on a graph was proposed in \cite{defferrard2016convolutional}, reducing the parameter number to $\mathcal{O}(K)$, where $K<n$ is the localized filter size, with improved performance on the MNIST dataset and reduced computational complexity. Another construction of GCN was proposed in \cite{kipf2016semi} with a first-order approximation of spectral graph convolutions for large-scale architectures, but with less capacity for the same layer number compared to \cite{defferrard2016convolutional}.
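To make the localized spectral filtering of \cite{defferrard2016convolutional} concrete, a Chebyshev-polynomial filter with $K$ parameters can be sketched as follows (a minimal NumPy illustration under the common approximation $\lambda_{\max}\approx 2$ for the normalized Laplacian; it is not the network trained in this paper):

```python
import numpy as np

def normalized_laplacian(A):
    # L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A
    d = A.sum(1)
    D = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    return np.eye(len(A)) - D @ A @ D

def cheb_conv(A, X, theta):
    """Localized spectral filter sum_k theta_k T_k(L~) X with K = len(theta)
    parameters, where T_k are Chebyshev polynomials and L~ = L - I
    (rescaling with lambda_max ~ 2 for the normalized Laplacian)."""
    Lt = normalized_laplacian(A) - np.eye(len(A))
    Tx_prev, Tx = X, Lt @ X               # T_0(L~)X = X, T_1(L~)X = L~X
    out = theta[0] * Tx_prev
    if len(theta) > 1:
        out = out + theta[1] * Tx
    for k in range(2, len(theta)):
        Tx_prev, Tx = Tx, 2 * Lt @ Tx - Tx_prev   # Chebyshev recurrence
        out = out + theta[k] * Tx
    return out
```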
An Adapted GCN based on the architecture in \cite{defferrard2016convolutional} is proposed for predicting the 3D marker references of a partially-deployed stent segment from the 3D fully-deployed markers, which bridges the gap in utilizing the RPnP method for 3D shape instantiation of partially-deployed stent segments. The coarsening layers are removed and the softmax function at the network end is replaced with a linear mapping. The derived 3D marker references are integrated into a previously developed 3D shape instantiation framework \cite{zhou2018real_ral}, with the customized marker placement, stent segment modelling and the RPnP method, to achieve 3D shape instantiation for partially-deployed stent segments. The pipeline is shown in Figure~\ref{fig:pipeline}. Three stent grafts with a total of 26 different stent segments were used for the validation. Details regarding the methodology and experimental setup are in Section~\ref{sec:method}. Results, with an average angular error of about $7^\circ$ and an average mesh distance error of around 2mm, are stated in Section~\ref{sec:result}. Discussion and conclusion are presented in Section~\ref{sec:discussion} and Section~\ref{sec:conclusion} respectively.
\section{Methodology}
\label{sec:method}
In this section, we introduce the proposed Adapted GCN for predicting 3D marker references, while briefly introducing the stent segment modelling and 3D shape instantiation to facilitate the understanding of the whole framework. Experimental setup is also demonstrated.
\begin{figure}[th]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./pipeline}}}
\caption{Pipeline for shape instantiation of partially-deployed stent segment from a single fluoroscopic image and the 3D CT scan of fully-deployed stent graft}
\label{fig:pipeline}
\end{figure}
\subsection{Partially-deployed Stent Segment Modelling}
\label{sec:stent_model}
In practice, the parameters of a stent segment, including the height and the diameters at the fully-deployed and fully-compressed states, can be obtained from the fenestrated stent graft and deployment catheter design. In this paper, as the stent grafts had been compressed and deployed multiple times during experiments, the practical parameters differ from the ideally designed ones and were measured manually.
In \cite{zhou2018real_ral}, a stent graft was modelled as a cylinder fitted by a series of concentric circles with a finite set of vertices $\mathcal{V}$ of coordinates $\textit{\textbf{V}}\in\mathbb{R}^{3\times(360h/0.1{\rm mm})}$. The coordinate of each circle vertex is defined as $(r {\rm cos}\theta~r {\rm sin}\theta~h)^\top$. In this paper, each partially-deployed stent segment is modelled as a cone with the diameters and the height of this segment.
Different from the fully-deployed stent segments in \cite{zhou2018real_ral}, the diameters of a partially-deployed stent segment are determined not only by the designed deployed size but also by the compression diameter $r_{\rm fc}\in\mathbb{R}_+$ and the gap width $w_{\rm g}\in\mathbb{R}_+$. In the experiments, the diameter of the deployed side of a partially-deployed stent segment, $r_{\rm pd}\in\mathbb{R}_+$, is set as the value designed for the fully-deployed state $r_{\rm fd}\in\mathbb{R}_+$:
\begin{equation}
r_{\rm pd}:=r_{\rm fd}
\end{equation}
and the diameter of its compressed side $r_{\rm pc}\in\mathbb{R}_+$ is set as the minimum of the deployed diameter and the sum of the compressed diameter and twice the gap width:
\begin{equation}
r_{\rm pc}:={\rm min}\{r_{\rm fc}+2w_{\rm g}, r_{\rm fd}\}
\end{equation}
Using the diameters of the deployed side and the compressed side, a cone shape can be modelled for the partially-deployed stent segment.
Following \cite{zhou2018real_ral}, the circle vertices are connected with their neighbouring vertices regularly into triangular faces, resulting in a mathematically modelled stent segment mesh. Fenestrations or scallops are modelled by removing the corresponding vertices and triangular faces. The resolution of the height $h$ was set as 0.1mm and that of the rotation angle $\theta$ was set as $1^\circ$. A set of five customized markers is sewn on each stent segment. With known pre-operative 3D reference marker positions (3D marker references) and corresponding intra-operative 2D marker positions (2D marker references), the 3D intra-operative pose of the marker set, which is also the 3D intra-operative pose of the stent segment, can be recovered by the RPnP method \cite{zhou2018real_ral}. Details regarding this part are briefly introduced in Section~\ref{sec:instantiation}.
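The cone modelling above can be illustrated with a minimal NumPy sketch (function names hypothetical; the linear interpolation of the radius along the segment height is an assumption, as the text only specifies the two end diameters):

```python
import numpy as np

def partially_deployed_radii(r_fd, r_fc, w_g):
    """Deployed- and compressed-side radii as in the equations above:
    r_pd = r_fd and r_pc = min(r_fc + 2 * w_g, r_fd)."""
    return r_fd, min(r_fc + 2.0 * w_g, r_fd)

def cone_vertices(r_deployed, r_compressed, height, dh=0.1, dtheta_deg=1.0):
    """Vertex grid of a conical stent-segment model. The radius is
    (assumed) linearly interpolated from the compressed side (h = 0) to
    the deployed side (h = height); each circle vertex is
    (r cos(theta), r sin(theta), h), matching the parametrisation above."""
    heights = np.arange(0.0, height + dh, dh)
    thetas = np.deg2rad(np.arange(0.0, 360.0, dtheta_deg))
    verts = []
    for h in heights:
        r = r_compressed + (r_deployed - r_compressed) * h / height
        verts.extend((r * np.cos(th), r * np.sin(th), h) for th in thetas)
    return np.asarray(verts)
```

Triangular faces would then be formed by regularly connecting neighbouring vertices of adjacent rings, as in \cite{zhou2018real_ral}.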
Unlike the work in \cite{zhoustent} and \cite{zhou2018real_ral} for fully-compressed and fully-deployed stent graft, where 3D marker references are known from computed tomography (CT) scan or stent graft design, 3D marker references for partially-deployed stent segment are unknown due to the unpredictability of the deployment process.
\subsection{Adapted GCN}
\label{sec:coordinate_prediction}
With known pre-operative 3D fully-deployed marker positions $\textit{\textbf{Y}}_{\rm f}^{\rm l}=(\textit{\textbf{y}}_{{\rm f}1}^{\rm l}~\cdots~\textit{\textbf{y}}_{{\rm f}5}^{\rm l})\in\mathbb{R}^{3\times5}$, an Adapted GCN for regressing pre-operative 3D marker references of partially-deployed stent segment $\textit{\textbf{Y}}_{\rm p}^{\rm l}\in\mathbb{R}^{3\times5}$ is proposed based on \cite{defferrard2016convolutional}. Original GCNs in \cite{defferrard2016convolutional} and \cite{kipf2016semi} were for classification tasks, while in this paper, the coarsening layers are removed and the softmax function at the network end is replaced by linear mapping.
\subsubsection{Data Pre-processing}
To focus the Adapted GCN training on learning the deformation between $\textit{\textbf{Y}}_{\rm f}^{\rm l}$ and $\textit{\textbf{Y}}_{\rm p}^{\rm l}$, in the training data, markers' coordinates for fully-deployed stent segment $\textit{\textbf{Y}}_{\rm f}^{\rm l}$ are standardized in local frame with the transformation:
\begin{equation}
\textit{\textbf{t}}_{\rm l}^{\rm g}:=\sum_{i=1}^5{({\textit{\textbf{y}}_{\rm f}^{\rm g}}_i)}
\end{equation}
\begin{equation}
\textit{\textbf{R}}_{\rm l}^{\rm g}:=
\begin{pmatrix}
\textbf{\textit{v}}_1/\|\textbf{\textit{v}}_1\|_2&\textbf{\textit{v}}_2/\|\textbf{\textit{v}}_2\|_2&\textbf{\textit{v}}_3/\|\textbf{\textit{v}}_3\|_2
\end{pmatrix}
\end{equation}
where $\textbf{\textit{v}}_1:={\textit{\textbf{y}}_{\rm f}^{\rm t}}_1$, $\textbf{\textit{v}}_2:=({\textit{\textbf{y}}_{\rm f}^{\rm t}}_1\times{\textit{\textbf{y}}_{\rm f}^{\rm t}}_2)$, $\textbf{\textit{v}}_3:= (\textbf{\textit{v}}_1 \times \textbf{\textit{v}}_2)$ and $\textit{\textbf{Y}}_{\rm f}^{\rm t}:=\textit{\textbf{R}}_{\rm l}^{\rm g} \textit{\textbf{Y}}_{\rm f}^{\rm l}$. $\times$ between two vectors represents the cross product. Then the transformation between global frame and local frame can be represented by:
\begin{equation}
\textit{\textbf{Y}}_{\rm f}^{\rm g}=\textit{\textbf{R}}_{\rm l}^{\rm g}\textit{\textbf{Y}}_{\rm f}^{\rm l}+\textit{\textbf{t}}_{\rm l}^{\rm g}\otimes(\textbf{1})_{1\times5}
\end{equation}
where $\otimes$ is the Kronecker product and $(\textbf{1})_{1\times5}$ is a $1\times5$ matrix of ones.
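A minimal NumPy sketch of this standardization (function name hypothetical) builds the rotation from the cross-product construction above and the translation as the sum over the five markers, as written in the equations; note the resulting rotation is orthonormal by construction since $\textbf{\textit{v}}_2\perp\textbf{\textit{v}}_1$ and $\textbf{\textit{v}}_3\perp\textbf{\textit{v}}_1,\textbf{\textit{v}}_2$:

```python
import numpy as np

def local_frame(Y):
    """Sketch: translation t and orthonormal rotation R from a 3x5 marker
    matrix Y. Assumptions: the frame axes are built from the first two
    marker vectors via cross products, and the translation is the plain
    sum over the five markers, following the equations in the text."""
    t = Y.sum(axis=1)                    # translation vector t_l^g
    v1 = Y[:, 0]
    v2 = np.cross(Y[:, 0], Y[:, 1])      # perpendicular to the first two markers
    v3 = np.cross(v1, v2)                # completes a right-handed frame
    R = np.stack([v1 / np.linalg.norm(v1),
                  v2 / np.linalg.norm(v2),
                  v3 / np.linalg.norm(v3)], axis=1)
    return R, t
```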
Before training the network, the ground truth of the markers' coordinates for each partially-deployed stent segment in the local frame $\textit{\textbf{Y}}_{\rm p}^{\rm l}$ is obtained by aligning the detected 3D markers' coordinates in the global frame $\textit{\textbf{Y}}_{\rm p}^{\rm g}$ to the markers of the corresponding fully-deployed stent segment in the local frame $\textit{\textbf{Y}}_{\rm f}^{\rm l}$ via singular value decomposition (SVD): $\textit{\textbf{U}}_{\rm svd}\Sigma{\textit{\textbf{V}}}_{\rm svd}^\top=\textit{\textbf{Y}}_{\rm p}^{\rm g}{\textit{\textbf{Y}}_{\rm f}^{\rm l}}^{\top}$.
The aligned markers' coordinates for each partially-deployed stent segment are thus calculated with the mapping $f:(\mathbb{R}^{3\times 5},\mathbb{R}^{3\times 5})\to\mathbb{R}^{3\times 5}$ defined as:
\begin{equation}
\textit{\textbf{Y}}_{\rm p}^{\rm l}=f(\textit{\textbf{Y}}_{\rm p}^{\rm g},\textit{\textbf{Y}}_{\rm f}^{\rm l}):=\textit{\textbf{R}}_{\rm p}^{\rm f}\textit{\textbf{Y}}_{\rm p}^{\rm g}+\textit{\textbf{t}}_{\rm p}^{\rm f}
\end{equation}
where
$
\textit{\textbf{R}}_{\rm p}^{\rm f}:=\textit{\textbf{V}}_{\rm svd}\textit{\textbf{U}}_{\rm svd}^\top
$ and
$
\textit{\textbf{t}}_{\rm p}^{\rm f}:=\sum_{i=1}^5{({\textit{\textbf{y}}_{\rm f}^{\rm l}}_i)}-\textit{\textbf{R}}_{\rm p}^{\rm f}\sum_{i=1}^5{({\textit{\textbf{y}}_{\rm p}^{\rm g}}_i)}
$
are the rotation matrix and translation vector of the transformation.
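A NumPy sketch of the mapping $f$ (function name hypothetical) is given below. Two deviations from the equations above are assumptions for a self-contained example: the markers are centred before the SVD and the translation uses centroids (means) rather than the plain sums written in the text, which is the standard Kabsch-style fit, and a determinant check guards against reflections:

```python
import numpy as np

def align(Y_src, Y_ref):
    """Sketch of f: rigidly align markers Y_src (3xn, global frame) to
    Y_ref (3xn, local frame) via an SVD of the covariance, with
    R = V U^T as in the text."""
    c_src = Y_src.mean(axis=1, keepdims=True)
    c_ref = Y_ref.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Y_src - c_src) @ (Y_ref - c_ref).T)
    R = Vt.T @ U.T                       # R_p^f = V U^T
    if np.linalg.det(R) < 0:             # reflection guard (assumption)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_ref - R @ c_src                # t_p^f
    return R @ Y_src + t, R, t
```

For noise-free markers related by a rigid transform, this recovers the reference configuration exactly.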
\subsubsection{Spectral Graph Convolution}
Different from the conventional convolutional kernels used in Euclidean space, a GCN employs spectral graph convolution on a graph \cite{shuman2012emerging}. The spectral graph Fourier transform and its inverse are defined as:
\begin{equation}
\tilde{\textit{\textbf{Y}}}=\mathcal{F}_\mathcal{G}(\textit{\textbf{Y}}):=\textit{\textbf{U}}^\top \textit{\textbf{Y}},\quad \textit{\textbf{Y}}=\mathcal{F}_\mathcal{G}^{-1}(\tilde{\textit{\textbf{Y}}})=\textit{\textbf{U}} \tilde{\textit{\textbf{Y}}}
\end{equation}
where $\mathcal{G}=(\mathcal{V},\mathcal{E},{\textit{\textbf{W}}})$ is an undirected simple graph with $n=5$ nodes, representing the coordinates of the five customized markers, $\mathcal{V}$ is a finite set of $|\mathcal{V}|=n$ vertices, $\mathcal{E}\subseteq \mathcal{V}\times\mathcal{V}$ is a set of edges, $\textit{\textbf{W}}\in\mathbb{R}^{n\times n}$ is the weighted adjacency matrix, $\textit{\textbf{Y}}$ is the coordinate values defined on the nodes, and the Fourier basis $\textit{\textbf{U}}$ is the eigenvector matrix of graph $\mathcal{G}$'s normalized Laplacian matrix $\textit{\textbf{L}}\in\mathbb{R}^{5\times5}$: $\textit{\textbf{L}}=\textit{\textbf{U}}\Lambda\textit{\textbf{U}}^{-1}$, where $\Lambda={\rm diag}(\lambda_0~\cdots~\lambda_{n-1})\in\mathbb{R}^{n\times n}$ is the diagonal matrix of eigenvalues. The normalized Laplacian matrix is defined as:
\begin{equation}
\textit{\textbf{L}}:=\textit{\textbf{D}}^{-0.5}(\textit{\textbf{D}}-\textit{\textbf{W}})\textit{\textbf{D}}^{-0.5}
\end{equation}
where $\textit{\textbf{D}}\in\mathbb{R}^{n\times n}$ is the diagonal degree matrix. As the normalized Laplacian matrix is a symmetric positive semi-definite matrix, $\textit{\textbf{U}}^\top=\textit{\textbf{U}}^{-1}$. The spectral graph convolution on graph $\mathcal{G}$ can then be defined as:
\begin{equation}
\label{eq:convolution}
(\textit{\textbf{g}}_\vartheta*\textit{\textbf{Y}})_\mathcal{G}:=\mathcal{F}^{-1}_\mathcal{G}\big(\mathcal{F}_\mathcal{G}(\textit{\textbf{g}}_\vartheta)\mathcal{F}_\mathcal{G}(\textit{\textbf{Y}})\big)=\textit{\textbf{U}}\tilde{\textit{\textbf{g}}}_\vartheta\textit{\textbf{U}}^\top\textit{\textbf{Y}}
\end{equation}
where $\tilde{\textit{\textbf{g}}}_\vartheta$ is the convolutional kernel (also known as the filter in \cite{defferrard2016convolutional}) and $\vartheta$ denotes the trainable parameters. A non-parametric kernel is defined as $\tilde{\textit{\textbf{g}}}_\vartheta(\Lambda)={\rm diag}(\vartheta)$ \cite{bruna2013spectral}, where $\vartheta\in\mathbb{R}^n$. There are multiple approaches for parametrizing a localized filter; a polynomial parametrization was introduced in \cite{defferrard2016convolutional}: $\tilde{\textit{\textbf{g}}}_\vartheta(\Lambda)=\sum_{k=0}^{K-1}{\vartheta_k (\Lambda)^k}$, where ${(\Lambda)^k}$ is the $k$th power of $\Lambda$. Because $\textit{\textbf{U}}^\top=\textit{\textbf{U}}^{-1}$, this polynomial parametric kernel converts (\ref{eq:convolution}) into:
\begin{equation}
(\textit{\textbf{g}}_\vartheta*\textit{\textbf{Y}})_\mathcal{G}=\tilde{g}_\vartheta(\textit{\textbf{L}})\textit{\textbf{Y}}=\sum_{k=0}^{K-1}{\vartheta_k (\textbf{\textit{L}})^k}\textit{\textbf{Y}}
\end{equation}
where $K\in\mathbb{Z}_+$ is the kernel size and $\vartheta\in\mathbb{R}^K$, which reduces the learning complexity to $\mathcal{O}(K)$, compared with $\mathcal{O}(n)$ for the non-parametric kernel.
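As a concrete check of this equivalence, a minimal NumPy sketch (function names hypothetical; it assumes a connected graph with nonzero degrees) builds the normalized Laplacian and verifies that applying the polynomial kernel in the vertex domain, $\sum_k \vartheta_k \textit{\textbf{L}}^k\textit{\textbf{Y}}$, matches filtering the eigenvalues in the spectral domain:

```python
import numpy as np

def normalized_laplacian(W):
    """Normalized Laplacian L = D^{-1/2}(D - W)D^{-1/2} of a weighted graph."""
    d = W.sum(axis=1)
    D_is = np.diag(1.0 / np.sqrt(d))
    return D_is @ (np.diag(d) - W) @ D_is

def poly_filter(L, Y, theta):
    """Polynomial spectral filter sum_k theta_k L^k Y, applied directly in
    the vertex domain (no eigendecomposition needed)."""
    out = np.zeros_like(Y)
    Lk = np.eye(L.shape[0])
    for t in theta:
        out = out + t * (Lk @ Y)
        Lk = Lk @ L
    return out
```

Since $\textit{\textbf{L}}=\textit{\textbf{U}}\Lambda\textit{\textbf{U}}^\top$ with orthonormal $\textit{\textbf{U}}$, the two forms agree term by term.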
Furthermore, a recursive formulation of the parametric kernel was introduced in \cite{defferrard2016convolutional} to reduce the computational time. The kernel is approximated by Chebyshev polynomials:
\begin{equation}
\tilde{\textit{\textbf{g}}}_\vartheta(\Lambda)=\sum_{k=0}^{K-1}{\vartheta_k T_k(\Lambda')}
\end{equation}
where $\Lambda'=2\Lambda/\lambda_{\rm max}-\textit{\textbf{I}}_n$ and $\textit{\textbf{I}}_n$ is the $n\times n$ identity matrix. $T_k(\Lambda')=2\Lambda'T_{k-1}(\Lambda')-T_{k-2}(\Lambda')$ is the Chebyshev recursion with $T_0(\Lambda')=1$ and $T_1(\Lambda')=\Lambda'$. This kernel is used in this paper and details can be found in \cite{hammond2011wavelets}.
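The recursion can be sketched in NumPy as follows (function name hypothetical; $\lambda_{\rm max}=2$ is assumed as an upper bound for the normalized Laplacian's spectrum). Working with $\textit{\textbf{L}}'=2\textit{\textbf{L}}/\lambda_{\rm max}-\textit{\textbf{I}}_n$ directly on the signal avoids forming any matrix power or eigendecomposition:

```python
import numpy as np

def chebyshev_filter(L, Y, theta, lam_max=2.0):
    """Sketch of the Chebyshev-parametrised spectral convolution
    sum_k theta_k T_k(L') Y, with T_0 Y = Y, T_1 Y = L' Y and the
    recursion T_k Y = 2 L' (T_{k-1} Y) - T_{k-2} Y."""
    n = L.shape[0]
    Lp = 2.0 * L / lam_max - np.eye(n)   # rescaled Laplacian L'
    T_prev, T_curr = Y, Lp @ Y
    out = theta[0] * T_prev
    if len(theta) > 1:
        out = out + theta[1] * T_curr
    for k in range(2, len(theta)):
        T_prev, T_curr = T_curr, 2.0 * Lp @ T_curr - T_prev
        out = out + theta[k] * T_curr
    return out
```

Each additional Chebyshev order costs only one sparse matrix-vector product, which is the source of the speed-up over the explicit spectral form.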
\subsubsection{Network Architecture}
The numbering of the five customized markers is shown in Figure~\ref{fig:intro}(d). An undirected simple graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\textit{\textbf{W}})$ with five nodes is constructed to represent the five markers' coordinates, with the weighted adjacency matrix set according to the distance scale between markers:
\begin{equation}
\textit{\textbf{W}}=\begin{pmatrix}
0&e^{-(5/4)^2}&0&0&e^{-(5/8)^2}\\
e^{-(5/4)^2}&0&e^{-(5/4)^2}&0&0\\
0&e^{-(5/4)^2}&0&e^{-(5/4)^2}&0\\
0&0&e^{-(5/4)^2}&0&e^{-(5/4)^2}\\
e^{-(5/8)^2}&0&0&e^{-(5/4)^2}&0\\
\end{pmatrix}
\end{equation}
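This adjacency matrix can be constructed from an edge list (function name hypothetical): markers 1$\sim$5 form a ring whose consecutive edges carry weight $e^{-(5/4)^2}$, while the closing edge between markers 1 and 5 carries the larger weight $e^{-(5/8)^2}$, reflecting their smaller distance:

```python
import numpy as np

def build_W():
    """Sketch: the 5x5 weighted adjacency matrix shown above, built
    symmetrically from the ring edges (1-2, 2-3, 3-4, 4-5) and the
    closing edge (1-5); indices are zero-based."""
    w_ring, w_close = np.exp(-(5 / 4) ** 2), np.exp(-(5 / 8) ** 2)
    W = np.zeros((5, 5))
    for i, j, w in [(0, 1, w_ring), (1, 2, w_ring), (2, 3, w_ring),
                    (3, 4, w_ring), (0, 4, w_close)]:
        W[i, j] = W[j, i] = w
    return W
```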
\begin{figure}[ht]
\centering
\framebox{\parbox{2.9in}{\includegraphics[width = 2.9in]{./network}}}
\caption{Network architecture of the proposed Adapted GCN.}
\label{fig:network}
\end{figure}
The network architecture is shown in Figure~\ref{fig:network}, where the input is ${\textit{\textbf{Y}}}_{\rm f}^{\rm l}+\epsilon$ and the output is $\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm l}$, with $\epsilon\sim\mathcal{N}(0,0.1)$ being Gaussian noise. The mathematical expression for each pair of neighbouring layers can be written as:
\begin{equation}
\textit{\textbf{F}}^i=\sigma_i\big((\textit{\textbf{g}}_\vartheta*\textit{\textbf{F}}^{i-1})_\mathcal{G}\big)
\end{equation}
where $i\in[0,N+1]\cap\mathbb{Z}$, $\textit{\textbf{F}}^0$ is the input graph, $\textit{\textbf{F}}^{N+1}$ is the output graph, $\textit{\textbf{F}}^{1\sim N}$ are hidden layers, $N$ is the hidden layer number and $\sigma_i(\cdot)$ is the activation function for the $i^{\rm th}$ layer.
Eight hidden layers are used in the experiments, with 32 channels in each hidden layer. Leaky ReLU with a leak rate of 0.1 is used as the non-linear activation function for the input and hidden layers. No non-linear activation function is used in the output layer. The Chebyshev polynomial parametric kernel is used with a kernel size of 2 for each spectral convolutional layer.
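The layer recursion $\textit{\textbf{F}}^i=\sigma_i\big((\textit{\textbf{g}}_\vartheta*\textit{\textbf{F}}^{i-1})_\mathcal{G}\big)$ can be sketched in single-channel form (names hypothetical; the real network has 8 hidden layers with 32 channels each, so each layer actually carries one parameter vector per input/output channel pair):

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    """Leaky ReLU with the 0.1 leak rate used for the input/hidden layers."""
    return np.where(x > 0.0, x, alpha * x)

def gcn_forward(Y, L, thetas):
    """Sketch of the forward pass: each layer applies a K = 2 polynomial
    filter theta_0 F + theta_1 L F on the graph signal; hidden layers use
    leaky ReLU, while the output layer is a plain linear mapping."""
    F = Y
    for i, (t0, t1) in enumerate(thetas):
        F = t0 * F + t1 * (L @ F)
        if i < len(thetas) - 1:          # no non-linearity on the output layer
            F = leaky_relu(F)
    return F
```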
\subsubsection{Loss Function and Optimization}
The root mean square error between the ground truth and the output coordinates is calculated as the loss function, with a regularization term of L2 norm of the weight matrix:
\begin{equation}
\mathcal{L}=\|\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm l}-\textit{\textbf{Y}}_{\rm p}^{\rm l}\|_2+\alpha\|\vartheta\|_2
\end{equation}
Adam and Momentum Stochastic Gradient Descent (SGD) were compared for training the network. Optimization with Adam was hard to converge, and hence Momentum SGD was used as the optimizer. The learning rate was set as 0.0001 and the learning momentum was set as 0.9. The L2 norm weight $\alpha$ was set as $5\times 10^{-4}$ and the batch size was set as 10.
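The loss above can be sketched directly from the equation (function name hypothetical). Note that the equation writes the plain L2 norm of the coordinate residual, which differs from a root mean square error only by a constant factor:

```python
import numpy as np

def loss(Y_pred, Y_true, theta, alpha=5e-4):
    """Sketch of the training loss: L2 norm of the coordinate residual
    plus an L2 penalty on the trainable parameters, with the weight
    alpha = 5e-4 used in the experiments."""
    return np.linalg.norm(Y_pred - Y_true) + alpha * np.linalg.norm(theta)
```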
As the RPnP method depends only on the 3D reference marker shape and is invariant to the global 3D reference marker position, the predicted 3D marker references $\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm l}$ are also aligned to the local markers' coordinates of the fully-deployed stent segment ${\textit{\textbf{Y}}}_{\rm f}^{\rm l}$ as $f(\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm l},{\textit{\textbf{Y}}}_{\rm f}^{\rm l})$ for the transformation estimation of partially-deployed stent segments.
\subsection{3D Shape Instantiation}
\label{sec:instantiation}
With the predicted pre-operative 3D marker references from the Adapted GCN in Section~\ref{sec:coordinate_prediction} and manually labelled corresponding intra-operative 2D marker positions/references, following \cite{zhou2018real_ral}, the RPnP method \cite{li2012robust} is used to instantiate the 3D pose of intra-operative marker set including the rotation matrix $\hat{\textit{\textbf{R}}}_{\rm l}^{\rm g}\in\mathbb{R}^{3\times3}$ and translation vector $\hat{\textit{\textbf{t}}}_{\rm l}^{\rm g}\in\mathbb{R}^{3}$:
\begin{equation}
\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm g}=\hat{\textit{\textbf{R}}}_{\rm l}^{\rm g}f(\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm l},{\textit{\textbf{Y}}}_{\rm f}^{\rm l})+\hat{\textit{\textbf{t}}}_{\rm l}^{\rm g}\otimes(\textbf{1})_{1\times5}
\end{equation}
where $\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm g}$ is the instantiated intra-operative 3D marker positions for the partially-deployed stent segment. As the markers are sewn on the stent segment, $\hat{\textit{\textbf{R}}}_{\rm l}^{\rm g}$ and $\hat{\textit{\textbf{t}}}_{\rm l}^{\rm g}$ are also the rotation matrix and translation vector for the partially-deployed stent segment. After moving the mathematically modelled stent segment mesh in Section~\ref{sec:stent_model} to the same local coordinate frame, $\hat{\textit{\textbf{R}}}_{\rm l}^{\rm g}$ and $\hat{\textit{\textbf{t}}}_{\rm l}^{\rm g}$ are applied for the stent segment transformation. After a central-point-based correction, 3D shape instantiation of the partially-deployed stent segment is achieved. More details can be found in \cite{zhou2018real_ral}.
\subsection{Experiment and Validation}
\label{sec:experiment}
\subsubsection{Marker Design}
Customized stent graft markers with five different shapes were designed based on commercially-used gold markers and were manufactured on a Mlab Cusing R machine (ConceptLaser, Lichtenfels, Germany) from SS316L stainless steel powder, as shown in Figure~\ref{fig:intro}(d) with their respective numbers. The sizes are around 1$\sim$3 mm, similar to the commercial ones. These five markers were sewn on each stent segment at five non-planar places.
\begin{figure}[ht]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./setup}}}
\caption{Illustration of the experimental setup with fixing an AAA phantom under the CT scan.}
\label{fig:setup}
\end{figure}
\subsubsection{Simulation of Surgery}
Three stent grafts were used in the experiments: an iliac stent graft (Cook Medical, IN, USA) with five stent segments, 10$\sim$19mm diameters and a total height of $90$mm; a fenestrated stent graft (Cook Medical) with six stent segments, 22$\sim$30mm diameters and a total height of 117mm; and a thoracic stent graft (Medtronic, MN, USA) with 10 stent segments, a 30mm diameter and a total height of 179mm. Five AAA phantoms were modelled from CT data scanned from patients and were printed on a Stratasys Objet 3D printer (MN, USA) with VeroClear and TangoBlack colours. To simulate the practical situation in FEVAR, where the fenestrated stent graft is customized to diameters similar to those of the AAA, two suitable AAA target positions with diameters similar to that of the corresponding stent graft were selected for each stent graft, resulting in 6 experiments in total. The selected AAA phantom was fixed as shown in Figure~\ref{fig:setup}. In each experiment, a stent graft was compressed into a Captivia delivery catheter (Medtronic) with an 8mm diameter, inserted into the selected phantom and deployed segment-by-segment from the proximal end to the distal end at the target AAA position.
\subsubsection{Data Collection}
A 3D CT scan and a 2D fluoroscopic image at the frontal plane were acquired for each partially-deployed stent graft using a GE Innova 4100 (GE Healthcare, Bucks, UK) system. The stent segments at the distal end and those with odd indexes in the thoracic stent graft experiment were ignored to keep the data balanced. Thus, eight partially-deployed stent segments were scanned by CT and fluoroscopy in two different AAA phantoms for the iliac stent graft (stent segment number 1-4 and 5-8), 10 for the fenestrated stent graft (stent segment number 9-13 and 14-18), and eight for the thoracic stent graft (stent segment number 19-22 and 23-26). In addition, three CT scans were acquired for the three experiment stent grafts at the fully-deployed state to supply the 3D fully-deployed marker positions $\textit{\textbf{Y}}_{\rm f}^{\rm l}$. In practical applications, this information can be obtained from the stent graft design.
\subsubsection{Marker Position Extraction}
\label{sec:marker_detection}
Although the Equally-weighted Focal U-Net \cite{zhou2018towards_iros} was proposed to achieve automatic 2D marker segmentation and classification from intra-operative 2D fluoroscopic images, the stent grafts in this paper are in a partially-deployed state, which differs from the fully-deployed training data in \cite{zhou2018towards_iros}. The segmentation and classification results of applying the model trained in \cite{zhou2018towards_iros} to the fluoroscopic images in this paper were not accurate or satisfactory. Hence the intra-operative 2D marker positions or references $\textit{\textbf{X}}^{\rm g}=(\textit{\textbf{x}}_1^{\rm g}~\cdots~\textit{\textbf{x}}_5^{\rm g})\in\mathbb{R}^{2\times5}$ were extracted manually via $Matlab^{\textregistered}$.
The shapes of 3D stents and 3D customized markers were segmented from CT scans via ITK-SNAP and the 3D central coordinates of customized markers $\textit{\textbf{Y}}^{\rm g}=(\textit{\textbf{y}}_1^{\rm g}~\cdots~\textit{\textbf{y}}_5^{\rm g})\in\mathbb{R}^{3\times5}$ were extracted using Meshlab.
\subsubsection{Data Augmentation}
Before training the Adapted GCN with the 3D marker positions of the fully-deployed and partially-deployed stent segments, these coordinates were rotated and scaled to enlarge the training dataset. The rotations about the three axes range from $-30^\circ$ to $30^\circ$ at a resolution of $3^\circ$. The scale ratios range from $0.2$ to $11.39$ in a geometric progression with ratio $1.5$.
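A sketch of the augmentation grid (names hypothetical; only the x-axis rotation is shown, and the scale grid stops near, but not exactly at, the stated endpoint of 11.39, since $0.2\times1.5^{10}\approx11.53$):

```python
import numpy as np

def rotation_x(deg):
    """Rotation matrix about the x-axis (one of the three axes used)."""
    c, s = np.cos(np.deg2rad(deg)), np.sin(np.deg2rad(deg))
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# -30 to 30 degrees at a 3-degree resolution, per axis
angles = np.arange(-30, 31, 3)
# geometric progression of scale ratios starting from 0.2 with ratio 1.5
scales = 0.2 * 1.5 ** np.arange(11)
```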
\subsubsection{Criteria and Evaluation}
To evaluate the 3D marker references predicted by the proposed Adapted GCN, the aligned 3D marker reference prediction $f(\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm l},{\textit{\textbf{Y}}}_{\rm f}^{\rm l})$ was compared to the ground truth of the aligned partially-deployed stent segment's marker positions $\textit{\textbf{Y}}_{\rm p}^{\rm l}$ via their mean distance error, ${\rm MDE}\big(\textit{\textbf{Y}}_{\rm p}^{\rm l},f(\hat{\textit{\textbf{Y}}}_{\rm p}^{\rm l},{\textit{\textbf{Y}}}_{\rm f}^{\rm l})\big)$, which is calculated as:
\begin{equation}
{\rm MDE}(\textit{\textbf{Y}}^1,\textit{\textbf{Y}}^2)=\frac{1}{n}\sum_{i=1}^{n}{\big\|\textit{\textbf{y}}_i^1-\textit{\textbf{y}}_i^2\big\|_2}
\end{equation}
where $\textit{\textbf{Y}}^{1}$ and $\textit{\textbf{Y}}^{2}$ can be two matrices of 3D or 2D marker coordinates with the same dimension number and the same point number.
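The MDE above is a one-liner in NumPy (function name hypothetical), averaging the column-wise Euclidean distances of two coordinate matrices of matching shape:

```python
import numpy as np

def mde(Y1, Y2):
    """Mean distance error: average Euclidean distance between
    corresponding columns (points) of two 2D or 3D coordinate matrices."""
    return np.mean(np.linalg.norm(Y1 - Y2, axis=0))
```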
To evaluate marker instantiation, the registered global markers' coordinates for each partially-deployed stent segment $\hat{{\textit{\textbf{Y}}}}_{\rm p}^{\rm g}$ are compared with the ground truth ${{\textit{\textbf{Y}}}}_{\rm p}^{\rm g}$ via ${\rm MDE}\big({{\textit{\textbf{Y}}}}_{\rm p}^{\rm g},\hat{{\textit{\textbf{Y}}}}_{\rm p}^{\rm g}\big)$ in 3D and the reprojected distance error ${\rm MDE}\big({{\textit{\textbf{X}}}}_{\rm p}^{\rm g},\hat{{\textit{\textbf{X}}}}_{\rm p}^{\rm g}\big)$ in 2D, where $\hat{{\textit{\textbf{X}}}}_{\rm p}^{\rm g}$ is the projected 2D coordinate from the estimated 3D global coordinate $\hat{{\textit{\textbf{Y}}}}_{\rm p}^{\rm g}$, calculated by $\hat{{\textit{\textbf{X}}}}_{\rm p}^{\rm g}=g(\hat{{\textit{\textbf{Y}}}}_{\rm p}^{\rm g})$ with mapping $g:\mathbb{R}^{3\times n}\to\mathbb{R}^{2\times n}$:
\begin{equation}
\label{eq:projection_points}
g(\textit{\textbf{Y}})=\begin{pmatrix}
\textit{\textbf{p}}_1^\top\textit{\textbf{Y}}^{\rm h}\oslash\textit{\textbf{p}}_3^\top\textit{\textbf{Y}}^{\rm h} \\ \textit{\textbf{p}}_2^\top\textit{\textbf{Y}}^{\rm h}\oslash\textit{\textbf{p}}_3^\top\textit{\textbf{Y}}^{\rm h}
\end{pmatrix}
\end{equation}
where $\textit{\textbf{P}}=\begin{pmatrix} \textit{\textbf{p}}_1 & \textit{\textbf{p}}_2 & \textit{\textbf{p}}_3 \end{pmatrix}^\top\in\mathbb{R}^{3\times4}$ is the projection matrix, $\oslash$ is element-wise (Hadamard) division, and $\textit{\textbf{Y}}^{\rm h}=(\textit{\textbf{y}}_1^{\rm h}~\cdots~\textit{\textbf{y}}_n^{\rm h})=(\textit{\textbf{Y}}^\top~(\textbf{1})_{n\times1})^\top\in\mathbb{R}^{{4}\times n}$ is the homogeneous form of the 3D coordinates.
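The mapping $g$ is a standard homogeneous pinhole projection and can be sketched as (function name hypothetical):

```python
import numpy as np

def project(Y, P):
    """Sketch of g: project 3D points Y (3xn) with a 3x4 projection
    matrix P, appending a row of ones for homogeneous coordinates and
    dividing the first two rows element-wise by the third."""
    Yh = np.vstack([Y, np.ones((1, Y.shape[1]))])   # homogeneous form Y^h
    q = P @ Yh
    return q[:2] / q[2]
```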
To evaluate 3D shape instantiation for each partially-deployed stent segment, the distance between the instantiated partially-deployed stent segment mesh and the corresponding ground truth was measured using the Matlab function $point2trimesh$ \cite{point2trimesh}. The marker angle was estimated as the angle of the nearest vertex on the constructed stent segment. The mean absolute angle difference between the predicted markers and the ground truth was used to measure the angle error.
\subsubsection{Cross Validation}
Three-fold cross validation was performed with the data divided by stent graft. For example, when testing the stent segments of the iliac stent graft, the data from the fenestrated and thoracic stent grafts were used for training.
\section{Results}
\label{sec:result}
In this section, the experimental results for the validation of the proposed method are presented, including the 3D distance errors in marker prediction, the 2D re-projected and 3D distance errors in marker instantiation, and the angular and mesh errors in stent segment shape instantiation.
\subsection{Prediction of 3D Marker References}
\label{sec:result_GCN}
\begin{figure}[ht]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./deform_error}}}
\caption{Mean$\pm$std 3D distance of the initial variation and mean$\pm$std 3D distance error of 3D marker reference prediction with the proposed Adapted GCN.}
\label{fig:deform_error}
\end{figure}
The mean 3D distance between the predicted 3D marker references and the ground truth (labelled Adapted GCN), and the initial mean 3D distance between the 3D fully-deployed markers and the ground truth (labelled initial variation), for the 26 partially-deployed stent segments are shown in Figure~\ref{fig:deform_error}. The mean 3D distance achieved by the Adapted GCN is significantly lower than the initial variation, especially for the fenestrated and thoracic stent grafts (stent segment number 9$\sim$26), demonstrating the effectiveness of the proposed Adapted GCN for 3D marker reference prediction. The mean 3D distances achieved by the Adapted GCN for the iliac stent graft (stent segment number 1$\sim$8) are comparable to the initial variations. This is because the diameter of the iliac stent graft is very close to that of the deployment catheter (due to limited experimental resources, only one deployment catheter was available), so there is not much difference between the fully-deployed and partially-deployed states of the iliac stent graft.
\subsection{3D Marker Instantiation}
\begin{figure}[th]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./pose_example}}}
\caption{Comparison of instantiated intra-operative 3D marker positions and the 3D ground truth (a), and comparison of 2D projections of instantiated 3D markers and the 2D ground truth (b).}
\label{fig:pose_example}
\end{figure}
The predicted 3D marker references and the manually detected 2D marker references for partially-deployed stent segment are imported into the RPnP instantiation framework \cite{zhou2018real_ral} to recover the intra-operative 3D marker positions. The instantiated intra-operative 3D marker positions and their 2D projections are compared to the corresponding ground truth, with results shown in Figure~\ref{fig:pose_example}. We can see that the instantiated marker positions are very close to the ground truth in both 3D and 2D.
\begin{figure}[th]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./pose_error}}}
\caption{Mean$\pm$std 3D (a) and 2D projected (b) distance errors of the instantiated intra-operative marker positions with the ideal (red) and practical (blue) 2D marker references as the input 2D marker reference.}
\label{fig:pose_error}
\end{figure}
Due to the imaging error caused by the fluoroscopic system, a 0.5$\sim$0.8mm deviation exists between the manually detected 2D marker references, named practical 2D marker references, and the 2D marker references projected from the ground truth 3D marker references, named ideal 2D marker references. Both of these 2D marker references were used with the predicted 3D marker references to instantiate the intra-operative 3D marker positions. The 3D and 2D re-projected distance errors for the 26 partially-deployed stent segments are shown in Figure~\ref{fig:pose_error}. An average 2D distance error of 1.58mm and an average 3D distance error of 1.98mm are achieved. The small accuracy gap in Figure~\ref{fig:pose_error} between using practical and ideal 2D marker references indicates the robustness of the instantiation framework to the imaging error introduced by the fluoroscopic system.
\begin{figure}[th]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./mesh_example}}}
\caption{Two comparison examples of instantiated meshes of partially-deployed stent segment and 3D makers from predicted 3D and practical 2D marker references, compared with the estimated stent segment ground truth and the 3D marker ground truth.}
\label{fig:mesh_example}
\end{figure}
\begin{figure}[th]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./stent_example}}}
\caption{Two comparison examples of instantiated meshes of partially-deployed stent segment from predicted 3D and practical 2D marker references, compared with the corresponding stent ground truth segmented from CT scan.}
\label{fig:stent_example}
\end{figure}
\subsection{3D Shape Instantiation of Partially-deployed Stent Segment}
As the graft fabric could not be imaged via CT, the ground truth of the partially-deployed stent segment was estimated by registering the mathematical model in Section~\ref{sec:stent_model} onto the ground truth 3D marker references. Two comparison examples of the instantiated partially-deployed stent segment and the estimated ground truth are shown in Figure~\ref{fig:mesh_example}. Two comparison examples of the instantiated partially-deployed stent segment and the real ground truth represented by the CT stent scan are shown in Figure~\ref{fig:stent_example}. We can see that a reasonable 3D shape instantiation is achieved.
\begin{figure}[th!]
\centering
\framebox{\parbox{3.3in}{\includegraphics[width = 3.3in]{./mesh_error}}}
\caption{Mean$\pm$std angular and 3D mesh distance error of instantiated meshes of partially-deployed stent segment with ideal and practical 2D marker references as the input 2D marker reference.}
\label{fig:mesh_error}
\end{figure}
The mean angular error between the instantiated intra-operative 3D markers and the ground truth is shown in Figure~\ref{fig:mesh_error}(a). An average angular error of $7^\circ$ is achieved which is larger than the average angular error of $4^\circ$ in \cite{zhou2018real_ral}. This is reasonable, as 3D marker references in this paper are unknown and are predicted by training an Adapted GCN. The mean angular error for iliac stent graft (stent segment number 1$\sim$8) is larger than that for the fenestrated and thoracic stent graft (stent segment number 9$\sim$26) due to the same reason stated in Section~\ref{sec:result_GCN}. The mean distance error between the instantiated stent segment mesh and the ground truth is shown in Figure~\ref{fig:mesh_error}(b). An average distance error of 1$\sim$3mm is achieved which is comparable to the average distance error of 1$\sim$3mm in \cite{zhou2018real_ral}. The iliac stent graft (stent segment number 1$\sim$8) experiences lower mean distance error than the fenestrated and thoracic stent graft (stent segment number 9$\sim$26), as its size is smaller.
\begin{table*}[th]
\centering
\caption{The overall performance of marker reference prediction, marker instantiation and 3D shape instantiation on the six experiments, via mean 3D distance error (3D dist.), mean 2D projected distance error (2D dist.), angular error (Ang. error) and mesh distance error (Mesh dist.).}
\label{tab:results}
\begin{tabular}{ccccccccc}
\hline
\multicolumn{3}{c}{Stent graft} &iliac &iliac &fenestrated &fenestrated &thoracic &thoracic \\
\multicolumn{3}{c}{Stent segment number} &1-4 &5-8 &9-13 &14-18 &19-22 &23-26 \\
\hline
\multirow{2}{*}{Marker references} &\multirow{2}{*}{3D dist. (mm)}&Initial Variation &1.5152 &0.9772 &5.2585 &5.5062 &5.0397 &4.9839 \\ \cline{3-9}
& &Adapted GCN &1.2490 &1.2374 &1.6595 &1.8378 &1.5935 &1.4778 \\
\hline
\multirow{4}{*}{Marker instantiation} &\multirow{2}{*}{2D dist. (mm)} &Ideal 2D Marker Reference &1.3247 &1.8414 &1.3421 &2.1870 &1.3989 &1.2145 \\ \cline{3-9}
& &Practical 2D Marker Reference &1.3300 &1.8671 &1.3101 &2.2328 &1.2607 &1.4742 \\ \cline{2-9}
&\multirow{2}{*}{3D dist. (mm)} &Ideal 2D Marker Reference &1.8196 &1.8120 &2.0238 &2.1100 &2.2377&1.9398 \\ \cline{3-9}
& &Practical 2D Marker Reference &1.8505 &1.8085 &2.0629 &2.1285 &2.1495 &1.8948 \\
\hline
\multirow{4}{*}{Shape Instantiation}&\multirow{2}{*}{Ang. error ($^\circ$)} &Ideal 2D Marker Reference &10.9250 &7.1725 &5.2060 &7.8280 &6.3175&5.1775 \\ \cline{3-9}
& &Practical 2D Marker Reference &11.1625 &5.9375 &5.6240 &8.0560 &7.2200 &5.2250 \\ \cline{2-9}
&\multirow{2}{*}{Mesh dist. (mm)} &Ideal 2D Marker Reference &1.1530 &0.9841 &1.8910 &2.0721 &2.3562 &2.4084 \\ \cline{3-9}
& &Practical 2D Marker Reference &1.1688 &0.9992 &1.8803 &2.0800 &2.3579 &2.4122 \\
\hline
\end{tabular}
\end{table*}
Furthermore, the 3D distance error for 3D marker reference prediction, the 2D projected and 3D distance errors for intra-operative 3D marker instantiation, and the angular and distance errors for 3D shape instantiation of partially-deployed stent segments are detailed for each experiment in Table~\ref{tab:results}.
For instantiating each stent segment on a computer with a CPU of $Intel^{\textregistered}$ Core(TM) i7-4790 @3.60GHz$\times$8, the computational time is around 7ms using Matlab. The 3D marker reference prediction in Tensorflow on a $Nvidia^{\textregistered}$ Titan Xp GPU costs around 0.8ms for each stent segment. The training of Adapted GCN takes approximately 5 hours. The implemented code was written based on the work of \cite{kipf2016semi}.
\section{Discussion}
\label{sec:discussion}
In this paper, a 3D shape instantiation approach based on a previously proposed framework \cite{zhou2018real_ral} is presented for partially-deployed stent segments from a single intra-operative 2D fluoroscopic image. It is validated on three commonly used stent grafts with five different AAA phantoms. The mean distance errors of instantiated stent segments are around 1$\sim$3mm and the mean angular errors of instantiated markers are around $5^\circ \sim 11^\circ$.
Without knowing pre-operative 3D marker references, the Adapted GCN is introduced into the previous shape instantiation framework \cite{zhou2018real_ral} and achieves reasonable 3D marker reference prediction (an average 3D distance error of 1.5mm for the fenestrated and thoracic stent grafts) from 3D fully-deployed markers. However, the 3D marker reference prediction for the iliac stent graft is insufficient. The diameter of the deployment catheter used in the experiments is almost the same as that of the iliac stent graft, resulting in a partially-deployed 3D marker set whose shape is almost the same as the fully-deployed one. In the cross validation for the iliac stent graft, the Adapted GCN was trained on the fenestrated and thoracic stent graft data to learn the partially-deployed deformation. The trained model is therefore not suitable for predicting 3D marker references for the iliac stent graft, which does not experience an obvious partially-deployed deformation.
In the training of the Adapted GCN, batch normalization and dropout were also explored, but both methods decreased the accuracy. One potential reason for the degraded performance with batch normalization is that networks for regression tasks are sensitive to the scale of the feature values, and hence batch normalization would need to be applied differently in this task. Future work is essential to confirm the feasibility of batch normalization and dropout in the proposed Adapted GCN.
The errors of 3D marker or shape instantiation using ideal and practical 2D marker references are very similar in Figure~\ref{fig:pose_error} and Figure~\ref{fig:mesh_error}, implying that the proposed framework is insensitive to the imaging errors caused by the fluoroscopic system. Instantiating a partially-deployed stent segment mainly includes three steps: marker segmentation, which costs 0.1s on a Nvidia Titan Xp GPU \cite{zhou2018towards_iros}; 3D marker reference prediction, which costs 0.8ms; and 3D shape instantiation, which costs 7ms. The total computational time is less than 0.11s, which could potentially achieve real-time performance, as the typical frame rate for clinical usage is around 2$\sim$5 frames per second.
In the future, this work could be combined with the 3D shape instantiation for fully-deployed \cite{zhou2018real_ral} and fully-compressed \cite{zhou2018towards_iros} stent segments to build a system of real-time 3D shape instantiation for stent grafts at any state. The Equally-weighted Focal U-Net could be retrained and integrated into the instantiation framework for improving the automation.
\section{CONCLUSIONS}
\label{sec:conclusion}
A 3D shape instantiation framework for partially-deployed stent segments was proposed in this paper, including stent segment modelling, 3D marker reference prediction, 3D marker instantiation and 3D shape instantiation. Only a single fluoroscopic image with minimal radiation is required as the intra-operative input. The Adapted GCN is introduced to explore the variation pattern of 3D markers and to provide the 3D marker references for 3D marker instantiation. Compared with the previous relevant work, the proposed framework focuses on the difficulties of predicting the stent segment shape at the partially-deployed state and achieves a comparable accuracy.
\addtolength{\textheight}{-12cm}
\section*{Acknowledgement}
The authors would like to thank the support of NVIDIA Corporation for the donation of the Titan Xp GPU used for this research.
\bibliographystyle{IEEEtran}
\section{The model}\label{The model}
Recent chemodynamical galactic evolution models, like e.g., \cite{Minchev14}, \cite{vandeVoort15}, and \cite{Shen15}, can model in a self-consistent way massive mergers of galactic subsystems (causing effects like infall in simpler models), energy feedback from stellar explosions (causing effects like outflows), radial migration in disk galaxies, mixing and diffusion of matter/ISM, and the initiation of star formation depending on the local conditions resulting from the effects discussed above. In our present investigation we still utilize a more classical approach with a parametrized infall of primordial matter and a Schmidt law (\citealt{Schmidt59}) for star formation. We therefore neglect large-scale mixing effects, while we include the feedback from stellar explosions and the resulting mixing with the surrounding ISM according to a Sedov-Taylor blast wave. In this way, the model permits to keep track of the local inhomogeneities due to different CCSN ejecta. This approach allows one to grasp the main features of the impact of the first stars / stellar deaths on the evolution of the heavy-element enrichment. This approximation omits other mixing effects, e.g., spiral arm mixing (on time scales of the order of $2\cdot 10^8$ years). The main focus of this work is the investigation of the chemical evolution behaviour at low metallicities, where these effects should not have occurred yet; they are therefore left out in this first-order approximation.
We treat the galactic chemical evolution of europium (Eu), iron (Fe) and $\alpha$-elements (e.g., oxygen O), utilizing the established GCE code ''Inhomogeneous Chemical Evolution'' (ICE), created by \cite{Argast04}. A detailed description of the model can be found therein.
For the simulation, we set up a cube of $(2\,\text{kpc})^3$ within the galaxy, which is divided into $40^3$ smaller cubes of $(50\,\text{pc})^3$ each. The evolution is followed with time steps of 1My.
Primordial matter is assumed to fall into the simulation volume, obeying the form
\begin{equation}
\dot{M}(t) = a \cdot t^b \cdot e^{-t/\tau} \, ,
\end{equation}
which permits an initially rising and eventually exponentially declining infall rate. While $\tau$ and the total galaxy evolution time $t_{final}$ are fixed initially, the parameters $a$ and $b$ can instead be determined from $M_{tot}$ (the total infall mass integrated over time), defined by
\begin{equation}
M_{tot} := \int_0^{t_{final}} a \cdot t^b \cdot e^{-t/\tau} \, \mathrm{d}t \, ,
\end{equation}
and the time of maximal infall $t_{max}$, given by
\begin{equation}
t_{max} := b \cdot \tau \, .
\end{equation}
See \cite{Argast04} for an extended discussion of the infall model and table~\ref{infall parameters} for the applied parameters.
\begin{table}
\begin{tabular}{|llr|}
\hline
\hline
$M_{tot}$ & Total infall mass & $10^8 \text{M}_{\odot}$ \\
$\tau$ & time scale of infall decline & $5\cdot 10^9$yrs \\
$t_{max}$ & time of the highest infall rate & $2\cdot 10^9$yrs \\
$t_{final}$ & duration of the simulation & $13.6\cdot 10^9$yrs \\
\hline
\end{tabular}
\caption{Main infall parameters. See Argast et al. (2004) for details on the parameters.}
\label{infall parameters}
\end{table}
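As a minimal illustration of this normalisation, the two free parameters $a$ and $b$ can be obtained numerically from the fixed quantities in table~\ref{infall parameters}. The following Python sketch (not part of the ICE code; units in $\text{M}_{\odot}$ and years) uses a simple midpoint quadrature:

```python
import math

# Fixed infall parameters from the table above (units: M_sun and years).
M_TOT   = 1.0e8     # total infall mass
TAU     = 5.0e9     # time scale of infall decline
T_MAX   = 2.0e9     # time of the highest infall rate
T_FINAL = 13.6e9    # duration of the simulation

def midpoint_integral(f, lo, hi, n=100000):
    """Simple midpoint rule; adequate for this smooth integrand."""
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

# t_max = b * tau fixes the power of the initially rising part.
b = T_MAX / TAU

# a then follows from M_tot = int_0^{t_final} a t^b exp(-t/tau) dt.
a = M_TOT / midpoint_integral(lambda t: t**b * math.exp(-t / TAU), 0.0, T_FINAL)

def infall_rate(t):
    """Primordial-matter infall rate dM/dt in M_sun per year."""
    return a * t**b * math.exp(-t / TAU)
```

With the tabulated values this gives $b = 0.4$, and the rate indeed peaks at $t = t_{max}$.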
\subsection{Treating stellar births and deaths} \label{Iteration Procedure}
The main calculation loop at each time step (1My) can be described in the following way.
\begin{enumerate}
\item We scan all mass cells of the total volume and calculate the star formation rate per volume and time step ($10^6$ yrs) according to a Schmidt law with a density power $\alpha=1.5$ (see \citealt{Schmidt59}, \citealt{Kennicutt98}, \citealt{Larson91}). Dividing by the average stellar mass of a Salpeter IMF (power $-2.35$) provides the total number of stars per time step $n(t)$ created in the overall volume of our simulation.
\item Individual cells in which stars are formed are selected randomly until $n(t)$ is attained, with the selection probability scaled by the local density, so that patches of higher density, predominantly close to supernova remnants, are preferentially chosen.
\item The mass of a newly created star is chosen randomly in the range $0.1$ to $50 M_ {\odot}$, subject to the condition that the mass distribution of all stars follows a Salpeter IMF. Consequently only cells which contain more than $50 M_{\odot}$ are selected in order to prevent a bias.
\item The newly born star inherits the composition of the ISM out of which it is formed.
\item The age of each star is monitored, in order to determine the end of its lifetime, either to form a white dwarf or experience a supernova explosion (see \ref{LMS} and \ref{HMS}). A fraction of all high mass stars ($M>8M_{\odot}$), according to the probability ($P_{Jet-SN}$), is chosen to undergo a magneto-rotationally driven supernova event (see section~\ref{Jet-SN}). Type Ia supernova events are chosen from white dwarfs according to the discussion in \ref{SNIa}. The treatment of neutron star mergers follows the description in \ref{NSM_model}.
\item The composition for the ejecta of all these events is chosen according to the discussion in \ref{Nucleosynthesis sites}. They will pollute the neighbouring ISM with their nucleosynthesis products and sweep up the material in a chemically well mixed shell. We assume that an event pollutes typically $ 5\cdot 10^4 \text{M}_{\odot} $ of surrounding ISM due to a Sedov-Taylor blast-wave of $10^{51}$erg (\citealt{Ryan96}, \citealt{Shigeyama98}). This implies that the radius of a remnant depends strongly on the local density and the density of the surrounding cells.
\item In the affected surrounding cells, stars are polluted by the matter of the previously exploded star and the event specific element yields.
\end{enumerate}
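The sampling steps of this loop can be sketched in a few lines of Python. This is an illustrative reimplementation, not the ICE code; the Schmidt-law normalisation \texttt{c\_sfr} and the mean stellar mass are placeholder values:

```python
import random

random.seed(42)

ALPHA = 1.5                 # Schmidt-law density exponent
M_MIN, M_MAX = 0.1, 50.0    # stellar mass range in M_sun
GAMMA = 2.35                # Salpeter IMF power-law index

def n_stars_per_step(cell_densities, c_sfr=1.0e-3, mean_mass=0.35):
    """Total number of stars formed in one time step (Schmidt law, whole volume)."""
    sfr = c_sfr * sum(rho**ALPHA for rho in cell_densities)
    return int(sfr / mean_mass)

def sample_salpeter_mass():
    """Inverse-transform draw from a Salpeter IMF, p(m) ~ m^(-2.35)."""
    u = random.random()
    p = 1.0 - GAMMA
    return (M_MIN**p + u * (M_MAX**p - M_MIN**p)) ** (1.0 / p)

def form_stars(cell_densities, n_stars):
    """Select birth cells with probability proportional to density
    and assign each new star a Salpeter-IMF mass."""
    cells = random.choices(range(len(cell_densities)),
                           weights=cell_densities, k=n_stars)
    return [(c, sample_salpeter_mass()) for c in cells]
```

The inverse-transform draw reproduces the analytic Salpeter mean mass of $\approx 0.34\,\text{M}_{\odot}$ for the $0.1$--$50\,\text{M}_{\odot}$ range, and cells of zero density are never selected.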
The details on the above procedure will be explained in the following.
\subsection{Nucleosynthesis sites} \label{Nucleosynthesis sites}
\subsubsection{Low (LMS) and intermediate mass stars (IMS)} \label{LMS}
Low and intermediate mass stars provide a fundamental contribution to the GCE of e.g., He, C, N, F, Na and heavy s-process elements during the asymptotic giant branch (AGB) phase. For instance, most of the C and N in the solar system were made by AGB stars (e.g., \citealt{Kobayashi12}). In their hydrostatic burning phase, these stars
lock up a part of the overall mass and return most of it to the ISM during their AGB phase via stellar winds.
Since the maximum radius of these winds is orders of magnitude smaller than the output range of supernova events (e.g., radius of Crab remnant: 5.5 Ly (\citealt{Hester08}), while the diameter of the Cat's Eye Nebula is only 0.2 Ly (\citealt{Reed99})), our simulation assumes that stellar winds influence the ISM only in the local calculation cell.
AGB stars provide only a marginal s-process contribution to typical r-process elements like Eu (e.g., \citealt{Travaglio99}). In particular, for this work the s-process contribution to Eu plays a negligible role and we are not considering it here.
\subsubsection{High mass stars (HMS)} \label{HMS}
Massive stars which exceed $8 \text{M}_{\odot}$ are considered to end their life in a core-collapse supernova (CCSN, e.g., \citealt{Thielemann96}, \citealt{Nomoto97}, \citealt{Woosley02}, \citealt{Nomoto13}, \citealt{Jones13}). CCSNe produce most of the O and Mg in the chemical inventory of the galaxy. They provide an important contribution to other $\alpha$-elements (S, Ca, Ti), to all intermediate-mass elements, the iron-group elements and to the s-process species up to the Sr neutron-magic peak (e.g., \citealt{Rauscher02}). Associated to CCSNe, different neutrino-driven nucleosynthesis components might be ejected and contribute to the GCE (e.g., \citealt{Arcones13}, and references therein), possibly including the r-process.
We did not include regular CCSNe as a major source of heavy r-process elements, as recent investigations indicate strongly that the early hopes for a high entropy neutrino wind with the right properties (\citealt{Woosley94}, \citealt{Takahashi94}) did not survive advanced core collapse simulations (e.g., \citealt{Liebendorfer03}), which lead to proton-rich environments in the innermost ejecta (see also \citealt{Fischer10}, \citealt{Hudepohl10}) and rather cause a so-called $\nu p$-process (\citealt{Frohlich06a}, \citealt{Frohlich06b}, \citealt{Pruet05}, \citealt{Pruet06}, \citealt{Wanajo06}). Further investigations seem to underline this conclusion (recently revisited by \citealt{Wanajo13}), although a more advanced in-medium treatment of neutrons and protons in high-density matter causes possible changes of the electron fraction ($Y_{e}$) of the ejecta (\citealt{Martinez-Pinedo12}; \citealt{Roberts12}) and might permit a weak r-process, including small fractions of Eu. Similar effects might be possible via neutrino oscillations (\citealt{Wu14}).
For this reason we did not include regular CCSNe in our GCE simulations, although a weak r-process with small (\citealt{Honda06}-like) Eu contributions could be responsible for a lower bound of [Eu/Fe] observations (see Fig.~\ref{163}), explaining a non-detection of the lowest predicted [Eu/Fe] ratios.
Nucleosynthesis yields for HMS are taken from \cite{Thielemann96} or \cite{Nomoto97}. Assuming a typical explosion energy of $10^{51}$erg, the ejecta are mixed with the surrounding interstellar medium via the expansion of a Sedov-Taylor blast wave, which stops at a radius which contains about $ 5\cdot 10^4 \text{M}_{\odot} $ (see section~\ref{Iteration Procedure} for details on the iteration procedure).
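The dependence of the remnant size on the local density can be illustrated with a simple mass-conservation estimate for a uniform medium (the ICE code instead works on the discretised density field; a pure-hydrogen ISM is assumed here for simplicity):

```python
import math

M_SUN_G = 1.989e33      # solar mass in g
PC_CM   = 3.086e18      # parsec in cm
M_H_G   = 1.673e-24     # hydrogen-atom mass in g
M_MIX   = 5.0e4         # swept-up ISM mass in M_sun

def remnant_radius_pc(n_h):
    """Radius (pc) enclosing M_MIX of ISM with number density n_h (atoms/cm^3)."""
    rho = n_h * M_H_G * PC_CM**3 / M_SUN_G      # density in M_sun / pc^3
    return (3.0 * M_MIX / (4.0 * math.pi * rho)) ** (1.0 / 3.0)
```

For a typical $n_{\mathrm{H}} \approx 1\,\mathrm{cm}^{-3}$ this gives roughly 80~pc, i.e., a single remnant covers many $(50\,\text{pc})^3$ cells, and the radius scales as $n_{\mathrm{H}}^{-1/3}$, which is why the remnant size depends strongly on the local and surrounding densities.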
\subsubsection{Supernovae Type Ia (SNIa)} \label{SNIa}
When an IMS is newly born in a binary system, there is a probability that it has a companion in the appropriate mass range leading finally to a SNIa, following a double- or single degenerate scenario. We follow the analytical suggestion of \cite{Greggio05} and reduce the numerous degeneracy parameters to one probability ($P_{SNIa}=9 \cdot 10^{-4}$) for a newly born IMS to actually be born in a system fulfilling the prerequisites for a SNIa. Once the star enters its red giant phase, we let the system perform a SNIa-type explosion and emit the event specific yields (cf. \citealt{Iwamoto99}, model CDD2), which highly enriches the surrounding ISM with iron. For this work we use the same SNIa yields for each metallicity, consistently with the \cite{Argast04} calculations. We are aware that this choice is not optimal, since several SNIa yields including e.g., Mn and Fe depend on the metallicity of the SNIa progenitor (e.g., \citealt{Timmes03}, \citealt{Thielemann04}, \citealt{Travaglio05}, \citealt{Bravo10}, \citealt{Seitenzahl13}). On the other hand, this approximation does not have any impact on our analysis and our conclusions.
\subsubsection{Neutron Star Merger (NSM)} \label{NSM_model}
If two newly born HMS were created in a binary system, they may both individually undergo a CCSN. This can leave two gravitationally bound neutron stars (''NS'') behind. Such a system emits gravitational waves and the two NS spiral inwards towards their common center of mass within a coalescence time ($t_{coal}$) until they merge. The actual merging event is accompanied by an ejection of matter and (r-process) nucleosynthesis (\citealt{Freiburghaus99}, \citealt{Panov08}, \citealt{Korobkin12}, \citealt{Bauswein13}, \citealt{Rosswog13}, \citealt{Rosswog14}, \citealt{Eichler14}, \citealt{Wanajo14}). While all of these publications show the emergence of a strong r-process, in the mass region of Eu they partially suffer from nuclear uncertainties related to fission fragment distributions (see e.g., \citealt{Eichler14}, \citealt{Goriely13}). For our purposes we chose to utilize a total amount of r-process ejecta of $ 1.28\cdot 10^{-2} \text{M}_{\odot} $ (consistent with the $1.4 \text{M}_{\odot} + 1.4 \text{M}_{\odot}$ NS collision in \citealt{Korobkin12} and \citealt{Rosswog13}), distributed in solar r-process proportions, which for Eu leads to a total amount of $10^{-4} \text{M}_{\odot}$ per merger. This value is relatively high in comparison to other investigations in the literature.
Observational constraints for the probability of a newly born star to undergo this procedure ($P_{NSM}$) are provided by e.g., \cite{Kalogera04} who have found a NSM rate of $R_{NSM}=83.0_{-66.1}^{+209.1} \text{Myr}^{-1}$, which corresponds to a $P_{NSM}=0.0180_{-0.0143}^{+0.0453}$.
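The conversion from a galactic rate $R_{NSM}$ to a per-system probability can be checked with a back-of-the-envelope estimate. All inputs below (star formation rate, binary fraction, IMF numbers) are rough assumptions for illustration, not the values used in ICE:

```python
R_NSM     = 83.0      # NSM rate, events per Myr (Kalogera et al. 2004)
SFR       = 3.0e6     # assumed galactic star formation rate, M_sun per Myr
MEAN_MASS = 0.35      # approximate mean mass of a 0.1-50 M_sun Salpeter IMF
F_HMS     = 2.5e-3    # approximate number fraction of stars above 8 M_sun
F_BINARY  = 1.0       # assumed binary fraction among HMS

stars_per_myr = SFR / MEAN_MASS
# each candidate system consumes two newly born HMS
systems_per_myr = 0.5 * F_BINARY * F_HMS * stars_per_myr
P_NSM = R_NSM / systems_per_myr
```

With these assumptions the estimate lands within an order of magnitude of the quoted $P_{NSM}\approx 0.018$.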
The coalescence time, $P_{NSM}$ and the event specific yields are important parameters for GCE, and their influence on the GCE are subject of this paper. Concerning the coalescence time scale, it might be more realistic to use a distribution function (e.g., as in \citealt{Ishimaru15}) instead of a fixed value. We utilize this simplified procedure as a first order approach.
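If the fixed $t_{coal}$ were replaced by a distribution, a common population-synthesis assumption is a delay-time distribution $p(t)\propto 1/t$. A sketch of inverse-transform sampling from it (the bounds are illustrative assumptions):

```python
import random

random.seed(1)

T_MIN, T_MAX = 1.0e6, 1.0e10    # assumed coalescence-time range in years

def sample_coalescence_time():
    """Draw t_coal from p(t) ~ 1/t on [T_MIN, T_MAX] (log-uniform)."""
    u = random.random()
    return T_MIN * (T_MAX / T_MIN) ** u
```

For a log-uniform distribution the median is $\sqrt{T_{MIN}\,T_{MAX}}$, here $10^8$ years, i.e., of the same order as the fixed value adopted above.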
\subsubsection{Magnetorotationally driven supernovae (Jet-SNe)}
A fraction ($P_{Jet-SN}$) of high mass stars end their life as a ''magneto-rotationally driven supernova'' or magnetar, forming in the center a highly magnetized neutron star (with fields of the order $10^{15}$Gauss) and ejecting r-process matter along the poles of the rotation axis (\citealt{Fujimoto06}, \citealt{Fujimoto08}; \citealt{Winteler12}, \citealt{Moesta14}). r-process simulations for such events were first undertaken in 3D by \cite{Winteler12}. For the purpose of this work, we randomly choose newly born high mass stars to later form a Jet-SN. At the end of their life time, they explode similar to a CCSN, however with different ejecta. Based on \cite{Winteler12}, we assume an amount of $14 \cdot 10^{-5} \text{M}_{\odot}$ of europium ejected to the ISM by such an event. In this work, we study the influence of $P_{Jet-SN}$ and the specific Jet-SN yields on the GCE.
\subsection{Observed stellar abundances}
Data for the observed stars to compare our simulation results with are taken from the SAGA (Stellar Abundances for Galactic Archaeology) database (e.g., \citealt{Suda08}, \citealt{Suda11}, \citealt{Yamada13}; in particular [Eu/Fe] abundance observations are mainly from e.g., \citealt{Francois07}, \citealt{Simmerer04}, \citealt{Barklem05}, \citealt{Ren12}, \citealt{Roederer10}, \citealt{Roederer14a}, \citealt{Roederer14b}, \citealt{Roederer14c}, \citealt{Shetrone01}, \citealt{Shetrone03}, \citealt{Geisler05}, \citealt{Cohen09}, \citealt{Letarte10}, \citealt{Starkenburg13}, \citealt{McWilliam13}). From the raw data, we excluded carbon enriched metal poor stars (''CEMPs'') and stars with binary nature, since the surface abundances of such objects are expected to be affected by internal pollution from deeper layers or pollution from the binary companion.
\section{RESULTS}\label{Results}
For a general understanding of the effects of Jet-SNe and NSM on GCE, namely the parameters $P_{NSM}$, $t_{coal}$ and $P_{Jet-SN}$, we performed a number of simulations described in detail below.
\subsection{Coalescence time scale and NSM probability} \label{NSM}
As a prerequisite, we studied the influence of both coalescence time and the probability of a binary system to become a NSM. In Figure~\ref{nfo}, we present the evolution of [Eu/Fe] abundances when only NSM contribute to the enrichment. The results can be summarized as follows.
\begin{enumerate}
\item A smaller coalescence time scale leads to an enrichment of europium at lower metallicities; a larger coalescence time scale shifts this enrichment to higher metallicities.
\item A higher NSM probability leads to a quantitatively higher enrichment, combined with an onset at lower metallicities.
\end{enumerate}
These effects can be explained in the following way.
\begin{enumerate}
\item When binary neutron star systems take longer to coalesce, the time between the CCSNe of the two stars and the NSM event is longer. During this delay, further nucleosynthesis events occur in the galaxy, enriching the ISM with metals. Thus, when the NSM event finally takes place, the surrounding stars have already developed a higher [Fe/H] abundance, shifting the corresponding model stars towards higher [Fe/H]. This implies an overall shift of the europium enrichment towards higher metallicities.
\item With more binary systems becoming NSM, the amount of europium produced per time step is larger, since every event produces the same amount of r-process elements. This leads to a higher [Eu/Fe] abundance compared to simulations with a lower NSM probability. As the fraction of NSM systems is higher while the CCSN rate stays constant, larger amounts of europium are produced while the surrounding medium evolves as before, which also leads to a higher europium abundance at lower [Fe/H]. Both effects shift the [Eu/Fe] curve to higher values at the same [Fe/H].
\end{enumerate}
All these results are consistent with the earlier conclusions by \cite{Argast04}, stating that it is extremely difficult to reproduce the observed [Eu/Fe] ratios at metallicities [Fe/H]$<-2.5$ by NSM alone. A potential solution would be that the preceding supernovae which produced the two neutron stars of the merging system mix their ejecta with more extended amounts of the ISM. We utilized the results following a Sedov-Taylor blast wave of $10^{51}$erg, which pollutes of the order of $ 5\cdot 10^4 \text{M}_{\odot} $ of ISM until the shock is stopped. \cite{vandeVoort15} assumed (in their standard case) mixing with more than $10^6 \text{M}_{\odot} $ of ISM (\citealt{Shen15} utilized $ 2\cdot 10^5 \text{M}_{\odot} $ in a similar approach). This produces an environment with a substantially lower [Fe/H] into which the NSM ejecta enter. Thus, it is not surprising that in such a case the Eu enrichment by NSM sets in at lower metallicities. The higher resolution run shown in Fig.~4 of \cite{vandeVoort15} agrees with our results. Thus, the major question is whether such a mixing with the ISM, enlarged by almost two orders of magnitude, can be substantiated. We will discuss these aspects further in section~\ref{Conlusion and Discussion}.
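The effect of the assumed mixing mass can be made explicit with a simple dilution estimate. Assuming a single preceding CCSN ejects $\sim 0.1\,\text{M}_{\odot}$ of iron into pristine gas and adopting a solar iron mass fraction of $\sim 1.3\cdot 10^{-3}$ (both numbers are illustrative), the metallicity of the polluted region is:

```python
import math

M_FE_EJECTA = 0.1       # assumed iron yield of one CCSN, M_sun
X_FE_SUN    = 1.3e-3    # approximate solar iron mass fraction

def feh_after_one_sn(m_mix):
    """[Fe/H] of previously pristine gas after one CCSN pollutes m_mix (M_sun)."""
    return math.log10((M_FE_EJECTA / m_mix) / X_FE_SUN)
```

With $m_{mix} = 5\cdot 10^4\,\text{M}_{\odot}$ this gives [Fe/H]$\,\approx -2.8$, while $10^6\,\text{M}_{\odot}$ gives $\approx -4.1$: enlarging the mixing mass by a factor of 20 lowers the metallicity floor by 1.3~dex, which is precisely why NSM enrichment can set in at lower [Fe/H] in such setups.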
\begin{figure}
\includegraphics[width=84mm]{nfo.ps}
\caption{Influence of coalescence time scale and NSM-probability on Eu-Abundances in GCE. Magenta stars represent observations. Red dots correspond to model star abundances as in Argast et al. (2004). The coalescence time scale of this event is $10^8$ years and the probability $P_{NSM}$ is set to $4{\cdot}10^{-4}$. Green dots illustrate the effect on the abundances if the coalescence time scale of NSM is shorter (around $10^6$ years). Blue dots show the abundance change if the probability of HMS binaries to later merge in a NSM is increased to $4{\cdot}10^{-2}$ (cf. subsection~\ref{NSM}).}
\label{nfo}
\end{figure}
\subsection{Probability of Jet-SNe} \label{Jet-SN}
The contribution of Jet-SNe to the galactic Eu abundance differs from that of NSM. Since Jet-SNe explode directly from a massive star, they contribute much earlier to the chemical evolution than NSM. Since at these early times the interstellar matter is distributed more inhomogeneously than at later evolution stages of the galaxy, high [Eu/Fe] abundances are possible in individual stars. This leads to a large spread in the abundances towards lower metallicities. Considering Jet-SNe, the parameter with the highest impact on GCE for such rare events, similar to NSM events but ''earlier'' in metallicity, is the probability of a massive star to actually become a Jet-SN. A lower probability leads to a smaller overall [Eu/Fe] abundance, while a higher probability leads to larger abundances. However, we also recognize a larger spread in abundances in models with lower probability. This comes from the fact that the high yield of the event only sets an upper limit on the abundances. The rarer an event is, the more stars remain unpolluted, and for longer. This results in a larger spectrum of abundances in stars and therefore in a larger spread in [Eu/Fe] ratios.
Note from Fig.~\ref{Jet-SN1} and Fig.~\ref{Jet-SN2} that Jet-SNe might explain the abundances at low metallicities better than NSM. Thus, while Jet-SNe alone could be an explanation for the low-metallicity observations, there is clear evidence for NSM events and therefore we have to examine the combination of both. Whether the apparently too high concentration of model stars with low [Eu/Fe] values at metallicities $-3<$[Fe/H]$<-2$ in comparison to observations is related to an observational bias, or whether we require another additional source, will be discussed in the following sections.
\begin{figure}
\includegraphics[width=84mm]{301,401.ps}
\caption{Influence of increased Jet-SN probabilities on Eu-Abundances in GCE. Magenta stars represent observations. Green dots represent model star abundances based on Winteler et al. (2012), the Jet-SN probability has been chosen to follow the observations at [Fe/H]$>-1.5$. A good value seems to be $0.1{\%}$~of HMS to end up in a Jet-SN. Note that this model fails to reproduce the observed abundances at lower metallicities. Blue dots illustrate the effect on the abundances if the Jet-SN probability is increased to 1\%. This model better reproduces the observed abundances at lower metallicities, but clearly fails at higher ones.}
\label{Jet-SN1}
\end{figure}
\begin{figure}
\includegraphics[width=84mm]{301,311,312.ps}
\caption{Same as Figure~\ref{Jet-SN1}, but with decreased probabilities. Red dots are the same as green dots in Fig.~\ref{Jet-SN1} with Jet-SN probability of $0.1 {\%}$; Green and blue dots represent a Jet-SN probability of $10^{-4}$ and $2\cdot 10^{-5}$, respectively. From the comparison of these models, we can see how decreased event probability shifts the abundance curve down. We also remark an increase of the spread in abundances when the probability is lowered. The rarer a high yield event is, the larger is the spread in abundances.}
\label{Jet-SN2}
\end{figure}
\subsection{Combination of sites} \label{combination}
If both sites (Jet-SNe and NSM) are considered to contribute to the galactic europium abundances, their contributions overlap. Therefore, parameters which reproduce the observed [Eu/Fe] abundances have to be searched for. As described in section~\ref{NSM}, NSM contribute at a delayed stage to the GCE and in our simulations are unable to reproduce europium abundances at metallicities [Fe/H]$<-2.5$. Jet-SNe, however, contribute europium early, but only in those regions where they occurred, and cause a larger spread in the [Eu/Fe] values (cf. section~\ref{Jet-SN}). We have to test whether it is possible to use the same parameters as in sections~\ref{NSM} and \ref{Jet-SN}, since the full combination of both events could lead to an overproduction of elements. We can use the earlier parameter explorations to tune the simulated abundance pattern in order to match the observations.
In the following, we will discuss two possible cases:
\begin{enumerate}
\item $P_{NSM}=3.4\cdot 10^{-4}$, $P_{Jet-SN}=0.3\% $, $t_{coal}=1$My (hereafter model Jet+NSM:A). The results for model Jet+NSM:A in comparison with observations are shown in Figure~\ref{23}. This model provides a reasonable explanation of the observations at lower and higher metallicities, but there is an overproduction of europium between $-2<$ [Fe/H] $<-1$. We conclude that larger coalescence time scales and larger probabilities are necessary regarding NSM, and a lower probability of Jet-SNe is necessary, in order to flatten and lower the modelled abundance curve.
\item $P_{NSM}=3.8\cdot10^{-4}$, $P_{Jet-SN}=0.1\%$, $t_{coal}=10$My (model Jet+NSM:B). The results for model Jet+NSM:B in comparison with observations are shown in Figure~\ref{163}. This model explains the main features of the abundance curve quite well: the spread at low metallicities, the first confinement of the spread at [Fe/H]${\approx}-2$, the plateau between [Fe/H]${\approx}-2$ and [Fe/H]${\approx}-0.6$, and the second confinement of the spread at [Fe/H]$\approx -0.2$. However, there still seem to be difficulties at [Fe/H]${\approx}-2$: the scatter towards low [Eu/Fe] ratios seems to be a bit too broad. This spread might be slightly reduced by additional mixing terms (e.g., spiral arm mixing) or by an additional source providing ratios of [Eu/Fe]$=-1$, which we did not consider in this work.
\end{enumerate}
\begin{figure}
\includegraphics[width=84mm]{23.ps}
\caption{Evolution of Eu-abundances in GCE including both Jet-SNe and NSM as r-process sites. Magenta stars represent observations, whereas blue dots represent model stars. Model (Jet+NSM:A) parameters are $P_{NSM}=3.4\cdot 10^{-4}$, $P_{Jet-SN}=0.3\% $, $t_{coal}=1$My.}
\label{23}
\end{figure}
\begin{figure}
\includegraphics[width=84mm]{163.ps}
\caption{Same as Figure~\ref{23}, but with a different parameter set (ModelJet+NSM:B). Magenta stars represent observations (with observational errors; however, magenta stars at low metallicities which carry only horizontal errors represent upper limits). Blue dots represent model stars with $P_{NSM}=3.8\cdot10^{-4}$, $P_{Jet-SN}=0.1\%$, $t_{coal}=10$My.}
\label{163}
\end{figure}
Considering Fig.~\ref{23} and Fig.~\ref{163}: while the results from both models Jet+NSM:A and Jet+NSM:B can reproduce the observed spread of [Eu/Fe] in the early galaxy, model Jet+NSM:B seems to better fit the overall [Eu/Fe] vs. [Fe/H] distribution. On the other hand, the evolution of the [Eu/Fe] ratio at low metallicity depends on both the r-process production and the Fe production in CCSNe (see Section~\ref{inhom} and the discussion). In Fig.~\ref{23,163euh}, we compare the results for the \emph{enrichment history} of europium in the galaxy according to the Jet+NSM:A and Jet+NSM:B models with observations. While the [Eu/H] vs. [Fe/H] ratios predicted by model Jet+NSM:B are in agreement with the observations, model Jet+NSM:A seems to be ruled out.
\begin{figure}
\includegraphics[width=84mm]{23,163euh.ps}
\caption{Enrichment history for models Jet+NSM:A and Jet+NSM:B (cf. Figure~\ref{23} and \ref{163} for evolution plots). Magenta stars represent observations, whereas blue dots represent model stars as per Figure~\ref{163} (Model Jet+NSM:B). Red dots representing the enrichment history of the simulation as per Figure~\ref{23} (Model Jet+NSM:A) do not suit the observational data.}
\label{23,163euh}
\end{figure}
\section{The importance of inhomogeneities} \label{inhom}
\subsection{Inhomogeneities in GCE}
From observations of [Eu/Fe] in the early galaxy, one of the main features is a spread in the abundance ratios. Our model is able to reproduce this spread, mainly because of the inhomogeneous pollution of matter. In Fig.~\ref{ih}, we illustrate the effect of applying such an inhomogeneous model. For this purpose, we perform a cut through the xy-plane of the simulation volume at specific time steps. These time steps are marked in the top panel of Fig.~\ref{ih}, in order to provide the reader with a quick glance of the extent of the inhomogeneous element distribution at the corresponding metallicities. For each marker, we provide the complete density field at this specific time step in the middle and lower panels (cf. figure caption for details). We show the extent of the inhomogeneities in the middle left panel, for the first marker in the upper panel of the figure. At this time step we can see, by counting the ''bubble''-style patterns, that at least three supernovae must have taken place before the snapshot. Since such events give rise to nucleosynthesis, the abundances of metals inside such a supernova remnant bubble are higher than outside. A star being born \emph{inside} such a remnant will inherit more metals than a star born \emph{outside}. Therefore, in the early stages of galactic evolution the stellar abundances are strongly affected by the location \emph{where} a star is born.
Considering much later stages of the evolution (e.g., the lower right panel of Fig.~\ref{ih}, corresponding to the fourth marker of the upper panel), the supernova remnants have a large overlap. Numerous supernova explosions have contributed nucleosynthesis products all over the galaxy. This leads to an averaged distribution of abundances, including different events and an integral over the initial mass function of stars. Therefore, it resembles a ''mixed'' phase of galactic evolution, in which the elements have been homogenised over the whole volume. At this stage of the evolution, it is no longer relevant \emph{where} a star is born. As a consequence, there are smaller differences in the abundances of metals in stars, and a \emph{confinement} of the spread in the abundances of chemical elements is obtained at later stages of the chemical evolution. Becoming more and more homogeneous, the [Eu/Fe] value converges to the value obtained by integrating the event yields over the whole IMF.
\begin{figure}
\includegraphics[width=84mm]{163ih.ps}
\caption{The top panel shows the same GCE-model as in Fig.~\ref{163} (Model Jet+NSM:B), but without observations; the red markers refer to the positions where a cut through the xy-plane of the simulation volume is performed for the density determination. The middle and lower panels show the density distribution through these planes. The middle left panel corresponds to the density profile at the position of the leftmost marker ``A'' (approximately 180 million years (My) have passed in the simulation), the middle right panel to the second marker ``B'' ($\approx 290$ My), the lower left panel to the third marker ``C'' ($\approx$ 2 Gy) and the lower right panel to the rightmost marker ``D'' ($\approx$ 12 Gy).}
\label{ih}
\end{figure}
\subsection{Instantaneous Mixing Approximation}
A number of recent chemical evolution studies have relaxed the ``instantaneous mixing approximation'' (I.M.A., e.g., \citealt{Chiappini01}, \citealt{Recchi01}, \citealt{Spitoni09}). The I.M.A. simplifies a chemical evolution model in terms of mass movement: all event ejecta are assumed to mix with the surrounding ISM instantaneously. Such approaches always result in an average value of element ratios for each [Fe/H].
Therefore, in the I.M.A. scheme all stars at a given time inherit the same abundance patterns of elements, and it is impossible to reproduce a scatter in the galactic abundances, which seems to be a crucial ingredient at low metallicities. Indeed, instead of a spread of values only one value is obtained for each metallicity. We recalculate the best-fit model (Jet+NSM:B, cf. Fig.~\ref{163}) with the I.M.A.; the result can be found in Fig.~\ref{163hom}. The I.M.A. approach may be used to study chemical evolution trends with a lower computational effort, but Figure~\ref{ih} shows that reproducing the spreads in abundance ratios due to local inhomogeneities requires more complex codes, such as the ICE code adopted for this work.
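Under the I.M.A., each stellar generation effectively injects the IMF-integrated yield, so every star at a given [Fe/H] inherits one and the same ratio. A minimal sketch of such an IMF average is given below; the yield law `toy_yields` is a hypothetical stand-in (Eu only from massive progenitors, a fixed Fe yield per event), not the tables used in this work.

```python
def imf_averaged_ratio(yields, alpha=2.35, m_lo=10.0, m_hi=50.0, n=400):
    """Average yield ratio y_Eu/y_Fe over a Salpeter IMF phi(m) ~ m^-alpha,
    using midpoint-rule integration over the CCSN progenitor mass range."""
    dm = (m_hi - m_lo) / n
    num = den = 0.0
    for i in range(n):
        m = m_lo + (i + 0.5) * dm
        y_eu, y_fe = yields(m)
        w = m ** (-alpha) * dm
        num += w * y_eu
        den += w * y_fe
    return num / den

def toy_yields(m):
    """Hypothetical per-event yields (Msun); illustrative numbers only."""
    y_fe = 0.07                      # Fe from every CCSN
    y_eu = 1e-7 if m > 25.0 else 0.0  # Eu only from massive progenitors
    return y_eu, y_fe

ratio = imf_averaged_ratio(toy_yields)
print(ratio)  # a single number: the unique Eu/Fe every star would inherit
```

This single IMF-averaged number is exactly why the I.M.A. produces one value per metallicity rather than a spread.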
\begin{figure}
\includegraphics[width=84mm]{163hom.ps}
\caption{Same GCE-model as in Fig.~\ref{163} (Model Jet+NSM:B); however, the I.M.A. is applied instead of inhomogeneous evolution. One is able to observe a trend in the abundance evolution; however, the scatter in the abundance pattern is not present anymore (cf. Section~\ref{inhom} for further discussion). The kink at about [Fe/H]$=-2.5$ is related to the delay time after which NSMs set in and contribute to Eu as well. This figure can also be compared to Fig.~2 in Matteucci et al. (2014), which shows the contribution of NSMs alone for various merger delay times and Eu production yields, and Fig.~5 in Vangioni et al. (2015) [mergers alone being indicated by black lines]. Thus, also in this approach it is evident that [Eu/Fe] at the low(est) metallicities cannot be explained by NSMs alone.}
\label{163hom}
\end{figure}
While inhomogeneous GCE codes can explain the spread in r-process elements, there is the question whether they might predict a far too large spread for other elements (e.g., $\alpha$-elements) at low metallicities (with present stellar yields from artificially induced CCSN explosion models). Such effects can also be seen in Fig.~1 of \cite{vandeVoort15} for [Mg/Fe], spreading by more than 1 dex, while observations seem to show a smaller spread of up to $0.5$ dex. This can be related to the amount of supernova ejecta being mixed with the ISM (see the discussion above and in Section~\ref{Conlusion and Discussion}: a more extended mixing reduces this spread), but it can also be related to the supernova nucleosynthesis yields, which were never tested before in such inhomogeneous GCE studies.
From general considerations of chemical evolution studies, it is found that there are large uncertainties in GCE results, particularly from the influence of stellar yields (e.g., \citealt{Romano10}). In Fig.~\ref{527.511_O}, we show the results of model Jet+NSM:B using the CCSN yields from \cite{Nomoto97} and \cite{Nomoto06}, which confirms a large spread in [O/Fe], similar to that found by \cite{vandeVoort15} for [Mg/Fe]. However, present supernova yields are the result of artificially induced explosions with constant explosion energies of the order of $10^{51}$ erg. If we consider that explosion energies might increase with the compactness of the stellar core (i.e., progenitor mass, e.g., \citealt{Perego15}), the heavier $\alpha$-elements and Fe might be enhanced as a function of progenitor mass. On the other hand, O, Ne, and Mg yields are dominated by hydrostatic burning and also increase with progenitor mass (e.g., \citealt{Thielemann96}). This could permit more constant $\alpha$/Fe ratios over a wide mass range, although the total amount of ejecta differs (increases) as a function of progenitor mass. This scenario does not take into account all the complexity and the multi-dimensional nature of the CCSN event (e.g., \citealt{Hix14}, and references therein) that should be considered, but it may be interesting to test its impact in our GCE simulations. In Figs.~\ref{527.511_O}~and~\ref{527.512_mg}, we show the results for tests where we:
\begin{enumerate}
\item replace the \cite{Nomoto97} iron yields by \emph{ad-hoc} yields, fitting, however, the observed SN1987A iron production;
\item keep the same CCSN rate as in the previous models;
\item adopt the parameters to study the r-process nucleosynthesis of Model Jet+NSM:B, obtaining the same [Eu/H] ratio.
\end{enumerate}
Based on the adopted CCSN yields, this opens the possibility to minimize the spread in $\alpha$-elements at low metallicities, while keeping the spread in the r-process element evolution. Therefore, the spread of [O/Fe] obtained from GCE simulations for the early galaxy is strongly affected by the uncertainties in the stellar yields, and it is difficult to disentangle them from more intrinsic GCE uncertainties. This means that at this stage it is not obvious whether an overestimation of the observed [O/Fe] spread is a problem of the ICE code; the observations could rather provide a constraint on stellar yields. In particular, the use of realistic, self-consistent explosion energies might reduce the spread at low metallicities to a large extent.
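The effect of letting the Fe yield track explosion energy (and thus progenitor mass) can be sketched as follows. Both yield laws are purely illustrative stand-ins, not the tables used in the models above: the point is only that when Fe rises with mass at the same rate as the hydrostatic O yield, the star-to-star [O/Fe] spread collapses.

```python
import math

def o_fe_spread(fe_yield, masses=(13, 15, 20, 25, 30, 40)):
    """Spread (max - min, in dex) of log10(y_O / y_Fe) across progenitor
    masses. The hydrostatic O yield rises steeply with mass (toy power law)."""
    def y_o(m):
        return 0.05 * (m / 13.0) ** 3  # Msun of O; illustrative only
    logs = [math.log10(y_o(m) / fe_yield(m)) for m in masses]
    return max(logs) - min(logs)

def constant_fe(m):
    return 0.07                        # fixed 1-Bethe-style Fe yield

def scaled_fe(m):
    return 0.07 * (m / 13.0) ** 3      # Fe tracking O (and explosion energy)

print(o_fe_spread(constant_fe), o_fe_spread(scaled_fe))
```

With a constant Fe yield the toy O/Fe ratios span well over a dex across the progenitor mass range, while the mass-dependent Fe yield removes the spread entirely, mirroring the behaviour of the \emph{ad-hoc} yields in Figs.~\ref{527.511_O}~and~\ref{527.512_mg}.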
Another fundamental point is related to the discussion in Section~\ref{Results} concerning [Eu/Fe]. At this stage, we consider [Eu/H] as more constraining to study the r-process nucleosynthesis compared to [Eu/Fe], since Fe yields from CCSNe are affected by large uncertainties. Therefore, the model Jet+NSM:B is recommended compared to Jet+NSM:A (see also Fig.~\ref{23,163euh}).
\begin{figure}
\includegraphics[width=84mm]{527.511_O.ps}
\caption{Same GCE-model as in Fig.~\ref{163} (Model Jet+NSM:B). Red dots show the abundance evolution of oxygen when Nomoto et al. (1997) yields are employed, while blue dots represent a far narrower spread at low metallicities if \emph{ad hoc} yields are applied (which would still need to be optimized to obtain a better agreement with the metallicity evolution between $-1< [\text{Fe}/\text{H}]<0$). Note that the downturn at high metallicities is shifted to higher [Fe/H] values. This is probably due to an overestimate of the total IMF-integrated Fe production, which should be improved with realistic self-consistent explosion models and their iron yields. While the delay time scale for SNIa is unchanged, earlier CCSNe produce more iron, thus shifting the whole abundance curve. Here we only want to show how changes to possibly more realistic, progenitor-mass dependent explosion energies can improve the [$\alpha$/Fe] spread, while the [r/Fe] spread is conserved. (Cf. Section~\ref{inhom} for further discussion.)}
\label{527.511_O}
\end{figure}
\begin{figure}
\includegraphics[width=84mm]{163,512_Mg.ps}
\caption{Same consideration as in Fig.~\ref{527.511_O}, however with magnesium instead of oxygen. GCE-model as in Fig.~\ref{163} (Model Jet+NSM:B). Red dots show the abundance evolution of magnesium when Thielemann et al. (1996) / Nomoto et al. (1997) yields are employed, while blue dots represent a far narrower spread at low metallicities if \emph{ad hoc} yields are applied. (Cf. Section~\ref{inhom} for further discussion.)}
\label{527.512_mg}
\end{figure}
\section{Conclusion and discussion}\label{Conlusion and Discussion}
The main goal of this paper was to reproduce the solar europium abundance as well as the evolution of [Eu/Fe] vs. [Fe/H] throughout the evolution of the galaxy.
For this reason we have studied the influence of two main r-process sites (NSM and Jet-SNe) on the GCE.
Our simulations were based on the inhomogeneous chemical evolution (ICE) model of \cite{Argast04}, with updated nucleosynthesis input for the two sites considered, their respective occurrence frequencies / time delays, and a model resolution of $(50\,\mathrm{pc})^3$. The main conclusions are that:
\begin{enumerate}
\item The production of heavy r-process matter in NSM has been evident for many years (see \citealt{Freiburghaus99} and many later investigations, up to \citealt{Korobkin12}, \citealt{Rosswog13}, \citealt{Bauswein13}, \citealt{Rosswog14}, \citealt{Just14}, \citealt{Wanajo14}, \citealt{Eichler14}, \citealt{Mendoza15}). Our implementation of NSM in the inhomogeneous chemical evolution model ``ICE'' can explain the bulk of the Eu (r-process) contributions in the galaxy for [Fe/H]$>-2.5$, but has problems explaining the amount and the spread of [Eu/Fe] at lower metallicities. This is in agreement with the initial findings of \cite{Argast04}. Recent SPH-based studies by \cite{vandeVoort15} make use of a mixing of the ejecta with $3 \cdot 10^6 M_\odot$, while a further study by \cite{Shen15} utilizes a mixing with $2 \cdot 10^5M_\odot$ up to $8 \cdot 10^5M_\odot$.
The mixing volume we utilize, based on the Sedov-Taylor blast wave approach, would correspond to a subgrid resolution in these studies, but this treatment is essential for the outcome. Initially mixing with a larger amount of matter leads to smaller [Fe/H] ratios in the gas into which the r-process material is injected.
We have tested such differences in mixing volumes/masses also within our ICE approach. Fig.~\ref{Sweepup} shows the results we obtain when changing from the Sedov-Taylor blast wave approach to a mixing mass of $2 \cdot 10^5 M_\odot$ (as in \citealt{Shen15}), and we can see that we essentially reproduce their results. On the other hand, a higher-resolution test in section 3.1 of \cite{vandeVoort15} is essentially in agreement with our results presented above. Thus, these differences are not based on the differences in sophistication of the multi-D hydrodynamics approach, which permits modelling energy feedback from supernovae, outflows and infall; they can rather be linked directly to the mixing volumes of supernova ejecta. This requires further studies in order to understand whether there exist physical processes (on the timescale of the delay between the supernova explosions and the merger event) which permit a mixing beyond the Sedov-Taylor blast wave approach.
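The sensitivity to the mixing mass can be made quantitative with the standard Sedov-Taylor relation $R(t) = 1.15\,(E t^2/\rho)^{1/5}$, for which the swept-up mass is $(4\pi/3)R^3\rho$. The parameter values below (explosion energy, ambient density, remnant age) are illustrative choices, not the exact ones used in ICE:

```python
import math

MSUN = 1.989e33   # g
YR = 3.156e7      # s

def sweepup_mass(e_erg=1e51, n_cm3=1.0, t_yr=3e4, mu=1.4):
    """Swept-up mass (Msun) of a Sedov-Taylor remnant of age t_yr
    expanding into a uniform medium of number density n_cm3 (mean
    molecular weight mu per hydrogen-mass particle)."""
    rho = mu * 1.673e-24 * n_cm3                          # g cm^-3
    r = 1.15 * (e_erg * (t_yr * YR) ** 2 / rho) ** 0.2    # cm
    return (4.0 / 3.0) * math.pi * r ** 3 * rho / MSUN

m_st = sweepup_mass()   # of order 10^3 Msun for these parameters
# Diluting a fixed ejecta mass in a larger mixing mass lowers [Fe/H] by
# roughly log10(M_mix / M_ST) dex:
shift_dex = math.log10(2e5 / m_st)
print(m_st, shift_dex)
```

For these parameters the Sedov-Taylor mass is of order $10^3 M_\odot$, so replacing it by a fixed $2 \cdot 10^5 M_\odot$ mixing mass dilutes the same ejecta by roughly two orders of magnitude in [Fe/H], consistent with the leftward shift seen in Fig.~\ref{Sweepup}.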
\begin{figure}
\includegraphics[width=84mm]{sweepup.ps}
\caption{Effect of a slightly increased sweep-up mass on GCE. Magenta stars represent observations. Red dots show model stars as per our reference model Jet+NSM:B. Blue dots represent a model where every CCSN pollutes $2\cdot 10^5 M_{\odot}$ of ISM. The dominant effect of this increased sweep-up mass is to decrease the scatter of abundances and to shift the abundance curve towards lower metallicities.}
\label{Sweepup}
\end{figure}
\item The production of heavy r-process elements in a rare species of CCSNe with fast rotation rates and high magnetic fields, causing (fast) jet ejection of neutron-rich matter along the poles, was first postulated by \cite{Cameron01}. This was followed up in rotationally symmetric 2D calculations (\citealt{Fujimoto06}, \citealt{Fujimoto08}) and the first 3D calculations by \cite{Winteler12}. These calculations still depend on unknown rotation velocities and magnetic field configurations before collapse; however, they agree with the observations of magnetars, i.e., neutron stars with magnetic fields of the order of $10^{15}$ Gauss, which make up about $1\%$ of all observed neutron stars. Further 3D calculations by \cite{Moesta14} and recent 2D calculations by \cite{Nishimura15} might indicate that not all events leading to such highly magnetized neutron stars are able to eject the heaviest r-process elements in solar proportions. Thus, probably less than 1\% of all CCSNe end as Jet-SNe with a full r-process.
When introducing Jet-SNe with ejecta as predicted by \cite{Winteler12}, they can fill in the missing Eu at lower metallicities and reproduce the spread in [Eu/Fe], in agreement with the recent findings of \cite{Cescutti14}. We find that a fraction of $0.1\%$ of all CCSNe ending in this explosion channel provides the best fit. This would mean that not all, but only a fraction of, the magnetar events which produce the highest-magnetic-field neutron stars are able to eject a main r-process composition of the heaviest elements, as discussed above.
Our conclusion is that both sites acting in combination provide the best scenario for understanding [Eu/Fe] observations throughout galactic history, with typical probabilities for NSM formations and (merging) delay times as well as probabilities for Jet-SNe.
As a side effect, we realized that present supernova nucleosynthesis yield predictions, based on induced explosions with a single explosion energy throughout the whole mass range of progenitor stars, bear a number of uncertainties. While apparently too large scatters of $\alpha$/Fe ratios are obtained in inhomogeneous chemical evolution models when utilizing existing nucleosynthesis predictions from artificial explosions with energies of 1 Bethe ($10^{51}$ erg), this might not be due to the chemical evolution model. Such deficiencies can be cured by assuming larger mixing masses with the ISM for supernova explosions (\citealt{vandeVoort15}, \citealt{Shen15}), or by the introduction of an artificial floor of abundances based on IMF-integrated yields of CCSNe for metallicities at [Fe/H]$=-4$, but it could in fact just be due to the non-existence of self-consistent CCSN explosion models. We have shown that an explosion energy dependence on the compactness of the Fe-core, related to the main-sequence mass, could solve this problem as well by modifying the nucleosynthesis results. Therefore, self-consistent core-collapse calculations with explosion energies varying with progenitor mass, and possibly other properties like rotation, are highly needed. Although we have obtained good agreement with the observed Eu abundances, the true origin of the r-process elements might still require additional insights. The present investigation may be used to put constraints on the yields, as well as on essential properties and occurrence frequencies of the sites.
There exist a number of open questions not addressed in the present paper, related to the production sites (items a and b below) and to the true chemical evolution of the galaxy (item c).
\begin{enumerate}
\item As discussed in detail in section~\ref{HMS}, we did not include ``regular'' CCSNe from massive stars as contributors to the main or strong r-process, producing the heaviest elements in the Universe. However, as already mentioned in section~\ref{HMS}, there exists the chance of a weak r-process, producing even Eu in a Honda-style pattern in such events. This could provide the correct lower bound of [Eu/Fe] in Fig.~\ref{163} and would be consistent with the recent findings of \cite{Tsujimoto14}.
\item We did not include the effect of NS-BH mergers in the present paper. They would result in similar ejecta per event as NS-NS mergers (\citealt{Korobkin12}), but their occurrence frequencies bear high uncertainties (\citealt{Postnov14}). \cite{Mennekens14} provide a detailed account of their possible contribution and also discuss their role in global r-process nucleosynthesis. One major difference with respect to our treatment of NSM in chemical evolution relates to the fact that (if the black hole formation is not causing a hypernova event but rather occurs without nucleosynthesis ejecta) only one CCSN pollutes the ISM with Fe before the merger event, in comparison to two CCSNe. This would lead to a smaller [Fe/H] ratio in the ISM which experiences the r-process injection, and thus to an ``earlier'' appearance of high [Eu/Fe] ratios in galactic evolution. If we assume that BH formations are as frequent as supernova explosions, an upper limit of the effect would be that all NS-NS mergers are replaced by BH-NS mergers, moving the [Eu/Fe] features to lower metallicities by a factor of $2$. However, the lower main-sequence mass limit for BH formation is probably of the order of $20M_\odot$, and only a small fraction of core collapses end in black holes. Therefore, we do not expect that the inclusion of NS-BH mergers shifts the entries by more than $0.15$ in Fig.~\ref{nfo}. This by itself would not be a solution in terms of making only compact (i.e. NS-NS and NS-BH) mergers responsible for the r-process at very low metallicities.
\item There have been suggestions that the Milky Way in its present form resulted from merging subsystems with a different distribution of masses. Such ``dwarf galaxies'' will have experienced different star formation rates. It is known that different star formation rates can shift the relation [X/Fe] as a function of [Fe/H]. If the merging of such subsystems is completed by the time when type Ia supernovae start to be important, the relation [X/Fe]=f([Fe/H]) will be uniform at and beyond [Fe/H]$>-1$, but it can be blurred between the different systems at low metallicities, possibly leading also to a spread in the onset of high [Eu/Fe] ratios at low metallicities (\citealt{Ishimaru15}). The result depends on the treatment of outflows, should in principle be tested in inhomogeneous models, and should also already be present in the simulations of \cite{vandeVoort15} and \cite{Shen15}. But it surely requires further investigations to fully test the impact of NSM on the r-process production in the early galaxy.
\end{enumerate}
\end{enumerate}
Future studies will probably require a distribution of delay times for NSM events, a test of the possible contributions by BH-NS mergers, a better understanding of yields, and improvements in understanding mixing processes after supernova explosions and during galactic evolution. Testing the full set of element abundances from SNe Ia and CCSNe as well as the two sources discussed above, in combination with extended observational data, will provide further clues to understanding the evolution of galaxies.
\section{Acknowledgements}
The authors thank Dominik Argast for providing his GCE code ``ICE'' for our investigations.
MP acknowledges support from the Swiss National Science Foundation (SNF) and the ``Lend\"ulet-2014'' Programme of the Hungarian Academy of Sciences (Hungary).
BW and FKT are supported by the European Research Council (FP7) under ERC Advanced Grant Agreement No. 321263 - FISH, and by the Swiss National Science Foundation (SNF). The Basel group is a member of the COST Action NewCompStar.
We also thank Almudena Arcones, John Cowan, Gabriel Martínez-Pinedo, Lyudmila Mashonkina, Francesca Matteucci, Tamara Mishenina, Nobuya Nishimura, Igor Panov, Albino Perego, Tsvi Piran, Nikos Prantzos, Stephan Rosswog, and Tomoya Takiwaki for helpful discussions during the Basel Brainstorming workshop; Camilla J. Hansen, Oleg Korobkin, Yuhri Ishimaru, and Shinya Wanajo for discussions at ECT* in Trento, and Freeke van de Voort for providing details about their modelling.
We would also like to acknowledge productive cooperation with our Basel colleagues Kevin Ebinger, Marius Eichler, Roger K\"appeli, Matthias Liebend\"orfer, Thomas Rauscher, and Christian Winteler. Finally, we would also like to thank an anonymous referee who provided useful and constructive suggestions and helped improve the paper.
In \cite{pEqualsTref}, Malliaris and Shelah introduce the notion of cofinality spectrum problems; these are essentially models of a weak fragment of arithmetic. To each cofinality spectrum problem $\mathbf{s}$ they associate cardinals $\mathfrak{p}_{\mathbf{s}}$ and $\mathfrak{t}_{\mathbf{s}}$, which measure certain saturation properties of $\mathbf{s}$. In their Central Theorem 9.1, they prove that $\mathfrak{t}_{\mathbf{s}} \leq \mathfrak{p}_{\mathbf{s}}$; moreover, under mild conditions on $\mathbf{s}$ (in \cite{pEqualsT2}, they note that exponentiation is sufficient), equality occurs. Malliaris and Shelah then derive two applications of this: first, they prove that $SOP_2$ theories are maximal in Keisler's order, and second, they prove that $\mathfrak{p} = \mathfrak{t}$. The latter application resolves the most longstanding open problem in the theory of cardinal invariants of the continuum, and we give a self-contained treatment in this paper. We discuss the first application in \cite{InterpOrdersUlrich}.
The main difficulty encountered by readers of \cite{pEqualsTref} is in the definition of cofinality spectrum problems; these are rather convoluted objects, but in fact they are not necessary to the proof. All that is needed is some fragment of $ZFC$ with transitive set models. $ZFC^-$ ($ZFC$ without powerset) is convenient for our purposes. The reader comfortable with mild large cardinals should feel free to replace $ZFC^-$ by $ZFC$ (or more).
A model of $ZFC^-$ is $\omega$-nonstandard if it contains nonstandard natural numbers. To every $\omega$-nonstandard $\hat{V} \models ZFC^-$ we will associate a pair of cardinal invariants $\mathfrak{p}_{\hat{V}}$ and $\mathfrak{t}_{\hat{V}}$. The reader familiar with cofinality spectrum problems may verify that any $\omega$-nonstandard $\hat{V} \models ZFC^-$ determines a cofinality spectrum problem $\mathbf{s}$, and that $\mathfrak{p}_{\hat{V}} = \mathfrak{p}_{\mathbf{s}}$ and $\mathfrak{t}_{\hat{V}} = \mathfrak{t}_{\mathbf{s}}$, following the proof of Claim 10.19 of \cite{pEqualsTref}.
We now give an overview of our proof that $\mathfrak{p}= \mathfrak{t}$. First, in Section~\ref{pVLeqTvSec}, we show that for every $\omega$-nonstandard $\hat{V} \models ZFC^-$, $\mathfrak{p}_{\hat{V}} \leq \mathfrak{t}_{\hat{V}}$; Malliaris and Shelah prove this in \cite{pEqualsTref} in the context of ultrapower embeddings, and in \cite{pEqualsT2} they note that it holds for cofinality spectrum problems with exponentiation. We also give a useful condition for when a partial type $p(x)$ over $\hat{V}$ of cardinality less than $\mathfrak{p}_{\hat{V}}$ is realized in $\hat{V}$. Next, in Section~\ref{pVEqualsTvSec} we show that for every $\omega$-nonstandard $\hat{V} \models ZFC^-$, $\mathfrak{p}_{\hat{V}} = \mathfrak{t}_{\hat{V}}$.
Finally, in Section~\ref{CardInvariants}, we prove $\mathfrak{p}=\mathfrak{t}$, loosely following Malliaris and Shelah: first, note that it follows immediately from the definitions that $\mathfrak{p} \leq \mathfrak{t}$, so we suppose that $\mathfrak{p} < \mathfrak{t}$ to get a contradiction. We are free to suppose that $\mathfrak{t} = 2^{\aleph_0} = 2^{<\mathfrak{t}}$, since we can Levy-collapse $2^{<\mathfrak{t}}$ to $\mathfrak{t}$ without adding sequences of length less than $\mathfrak{t}$. We are then able to construct a sufficiently generic ultrafilter $\mathcal{U}$ on $\mathcal{P}(\omega)$, such that if we set $\hat{V} = V^\omega/\mathcal{U}$ for some or any transitive $V \models ZFC^-$, then $\mathfrak{p}_{\hat{V}} \leq \mathfrak{p}$ and $\mathfrak{t} \leq \mathfrak{t}_{\hat{V}}$. This contradicts our earlier result that $\mathfrak{p}_{\hat{V}} =\mathfrak{t}_{\hat{V}}$. We manage to avoid reference to a hard theorem of Shelah involving peculiar cuts \cite{ptComment}.
We remark that Moranarocca gives a proof of $\mathfrak{p} = \mathfrak{t}$ in \cite{pEqualsTproof2}, following an unpublished proof sketch of J. Steprans; also, Fremlin has posted a proof on his website \cite{Fremlin}, likewise based on Steprans' sketch. The main difference from Malliaris and Shelah's proof is that Steprans replaces cofinality spectrum problems by ultrapower embeddings. We prefer working with models of set theory, since the ultrapower machinery introduces unneeded notational overhead. Both of these proofs (\cite{pEqualsTproof2}, \cite{Fremlin}) use peculiar cuts.
\section{$\mathfrak{p}_{\hat{V}} \leq \mathfrak{t}_{\hat{V}}$}\label{pVLeqTvSec}
We begin with some formalities. $ZFC^-$ is $ZFC$ without powerset, but with replacement strengthened to collection, and with choice strengthened to the well-ordering principle; we consider this the standard definition, following \cite{ZFCminus}.
As some notational conventions, $\hat{V}$ will denote a model of $ZFC^-$. Whenever $\hat{V} \models ZFC^-$, we will identify $HF$ (the hereditarily finite sets) with its copy in $\hat{V}$; for example, we identify each natural number $n < \omega$ with its copy in $\hat{V}$. Other elements of $\hat{V}$ will usually be decorated with a hat; for instance we write $\hat{\omega}$ rather than $(\omega)^{\hat{V}}$; but sometimes readability takes precedence. Given $X \subseteq \hat{V}$, we say that $X$ is an internal subset of $\hat{V}$ if there is some $\hat{X} \in \hat{V}$ such that $X = \{\hat{y} \in \hat{V}: \hat{y} \,\hat{\in}\, \hat{X}\}$. In this case, we identify $X$ with $\hat{X}$ and will write $X \in \hat{V}$.
We say that $\hat{V}$ is $\omega$-standard, or is an $\omega$-model, if $\hat{\omega} = \omega$ (i.e. every natural number of $\hat{V}$ has finitely many predecessors). $\hat{V}$ will only ever denote models that are not $\omega$-models. We say that $X \subseteq \hat{V}$ is pseudofinite if there is some $\hat{X} \in \hat{V}$, finite in the sense of $\hat{V}$, such that $X \subseteq \hat{X}$. Thus if $\hat{X} \in \hat{V}$, then $\hat{X}$ is pseudofinite if and only if it is finite in the sense of $\hat{V}$.
We now make the key definitions.
\begin{definition}
If $(L, <)$ is a linear order, and $\kappa, \theta$ are infinite regular cardinals, then a $(\kappa, \theta)$-pre-cut in $L$ is a pair of sequences $(\overline{a}, \overline{b}) = (a_\alpha: \alpha < \kappa)$, $(b_\beta: \beta < \theta)$ from $L$, such that for all $\alpha < \alpha'$, $\beta < \beta'$, we have $a_\alpha < a_{\alpha'} < b_{\beta'} < b_{\beta}$. $(\overline{a}, \overline{b})$ is a cut if there is no $c \in L$ with $a_\alpha < c < b_\beta$ for all $\alpha, \beta$. Let the cut spectrum of $(L, <)$ be $\mathcal{C}(L, <) := \{(\kappa, \theta): L \mbox{ admits a } (\kappa, \theta) \mbox{ cut}\}$. Define $\mbox{cut}(L, <) = \mbox{min}\{\kappa + \theta: (\kappa, \theta) \in \mathcal{C}(L, <)\}$.
By a tree $T$ we mean a partially ordered set $(T, <)$ with meets and a minimum element $0_T$, such that the predecessors of every element are linearly-ordered. Given a tree $(T, <)$ define $\mbox{tree-tops}(T)$ to be the least (necessarily regular) $\kappa$ such that there is an increasing sequence $(s_\alpha: \alpha < \kappa)$ from $T$ with no upper bound in $T$.
Suppose $\hat{V}$ is an $\omega$-nonstandard model of $ZFC^-$. Then define $\mathcal{C}_{\hat{V}}= \mathcal{C}(\hat{\omega}, \hat{<})$, and define $\mathfrak{p}_{\hat{V}} = \mbox{cut}(\hat{\omega}, \hat{<})$. Also, let $\mathfrak{t}_{\hat{V}}$ be the minimum over all $\hat{n} < \hat{\omega}$ of $\mbox{tree-tops}(\hat{n}^{<\hat{n}}, \hat{\subset})$.
\end{definition}
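As a simple illustration of these definitions (a standard observation, our own addition rather than a claim from \cite{pEqualsTref}), one can exhibit a cut directly:

```latex
Since $\hat{V}$ is $\omega$-nonstandard, choose a decreasing sequence
$(\hat{b}_\beta : \beta < \theta)$, with $\theta$ regular, coinitial in
the nonstandard part of $\hat{\omega}$. Then
$\big((n : n < \omega),\ (\hat{b}_\beta : \beta < \theta)\big)$ is an
$(\aleph_0, \theta)$-cut in $(\hat{\omega}, \hat{<})$: any $\hat{c}$
with $n < \hat{c} < \hat{b}_\beta$ for all $n, \beta$ would be a
nonstandard element of $\hat{\omega}$ below every $\hat{b}_\beta$,
contradicting coinitiality. Hence $(\aleph_0, \theta) \in
\mathcal{C}_{\hat{V}}$; in particular $\mathcal{C}_{\hat{V}} \neq
\emptyset$, so $\mathfrak{p}_{\hat{V}}$ is well-defined.
```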
Unraveling the definitions, $\mathfrak{t}_{\hat{V}}$ is the least $\kappa$ such that there is some $\hat{n}< \hat{\omega}$ and some increasing sequence $(\hat{s}_\alpha: \alpha < \kappa)$ from $\hat{n}^{<\hat{n}}$, with no upper bound in $\hat{n}^{<\hat{n}}$. Equivalently, $\mathfrak{t}_{\hat{V}}$ is the least $\kappa$ such that there is some $\hat{n}< \hat{\omega}$ and some increasing sequence $(\hat{s}_\alpha: \alpha < \kappa)$ from $\hat{n}^{<\hat{n}}$, with no upper bound in $\hat{\omega}^{<\hat{\omega}}$; this is because if $\hat{s}$ is any upper bound in $\hat{\omega}^{<\hat{\omega}}$, then $\hat{s} \restriction_{\hat{m}}$ is an upper bound in $\hat{n}^{<\hat{n}}$, where $\hat{m}$ is the largest number below $\hat{n}$ such that $\hat{s} \restriction_{\hat{m}} \in \hat{n}^{\hat{m}}$.
The following lemma is a component of Shelah's proof in \cite{ShelahIso} that SOP theories are maximal in Keisler's order. It need not hold for cofinality spectrum problems. In Section 10 of \cite{pEqualsTref}, Malliaris and Shelah derive the lemma in the context of ultrapower embeddings, following \cite{ShelahIso}. In \cite{pEqualsT2}, Malliaris and Shelah comment that cofinality spectrum problems with exponentiation are enough.
\begin{lemma}\label{pLeqT}
Suppose $\hat{V} \models ZFC^-$ is $\omega$-nonstandard. Then
$\mathfrak{p}_{\hat{V}} \leq \mathfrak{t}_{\hat{V}}$. In fact, $(\mathfrak{t}_{\hat{V}}, \mathfrak{t}_{\hat{V}}) \in \mathcal{C}_{\hat{V}}$.
\end{lemma}
\begin{proof}
Suppose $(s_\alpha: \alpha < \kappa)$ is an increasing sequence from $\hat{n}_*^{<\hat{n}_*}$ with no upper bound, where $\kappa$ is regular. We show $(\kappa, \kappa) \in \mathcal{C}_{\hat{V}}$.
Let $\hat{<}_{lex}$ be the lexicographic ordering on $\hat{n}_*^{<\hat{n}_*}$.
Note that if $s \in \hat{n}_*^{<\hat{n}_*}$, then $s_\alpha \,^\frown(0) \, \hat{\leq}_{lex} \, s \, \hat{\leq}_{lex} \, s_\alpha \,^\frown(\hat{n}_*-1)$ if and only if $s_\alpha \subseteq s$. Since $(s_\alpha: \alpha < \kappa)$ is unbounded, it follows that $(s_\alpha\,^\frown(0): \alpha < \kappa)$ and $(s_\alpha\,^\frown(\hat{n}_*-1): \alpha < \kappa)$ form a $(\kappa, \kappa)$-cut in $(\hat{n}_*^{<\hat{n}_*}, \hat{<}_{lex})$.
In $\hat{V}$, let $\hat{\sigma}: (\hat{n}_*^{<\hat{n}_*}, \hat{<}_{lex}) \to (|\hat{n}_*^{<\hat{n}_*}|, \hat{<})$ be the order-preserving bijection. Then $(\hat{\sigma}(s_\alpha\,^\frown(0)): \alpha < \kappa)$ and $(\hat{\sigma}(s_\alpha\,^\frown(\hat{n}_*-1)): \alpha < \kappa)$ witness that $(\kappa, \kappa) \in \mathcal{C}_{\hat{V}}$.
\end{proof}
The following corresponds to Claim 2.14 of \cite{pEqualsTref}.
\begin{lemma}\label{definableTreeTops}
Suppose $\hat{V} \models ZFC^-$ is $\omega$-nonstandard.
Suppose $(\hat{T}, \hat{<})$ is a pseudofinite tree in $\hat{V}$. Then $\mbox{tree-tops}(\hat{T}, \hat{<}) \geq \mathfrak{t}_{\hat{V}}$.
\end{lemma}
\begin{proof}
There is in $\hat{V}$ a subtree of $\hat{\omega}^{<\hat{\omega}}$ which is isomorphic to $\hat{T}$; so we can suppose that $\hat{T}$ is a subtree of $\hat{\omega}^{<\hat{\omega}}$. Then $\hat{T}$ is a subtree of $\hat{n}_*^{<\hat{n}_*}$ for some $\hat{n}_* < \hat{\omega}$.
Now suppose $(s_\alpha: \alpha < \kappa)$ is an increasing sequence from $\hat{T}$ with $\kappa < \mathfrak{t}_{\hat{V}}$; we show there is an upper bound in $\hat{T}$. To see this, let $s_+$ be an upper bound of $(s_\alpha: \alpha < \kappa)$ in $\hat{\omega}^{<\hat{\omega}}$, let $\hat{n}$ be largest such that $s_+ \restriction_{\hat{n}} \, \hat{\in} \, \hat{T}$, and let $s = s_+ \restriction_{\hat{n}}$. For each $\alpha$, $s_\alpha = s_+ \restriction_{\hat{{\rm lg}}(s_\alpha)} \in \hat{T}$, so $\hat{{\rm lg}}(s_\alpha) \leq \hat{n}$ and hence $s_\alpha \subseteq s$; thus $s$ is an upper bound of the sequence in $\hat{T}$.
\end{proof}
The following theorem corresponds to Theorem 4.1 of \cite{pEqualsTref}, although there the authors must also assume $\lambda < \mathfrak{t}_{\hat{V}}$ in the absence of Lemma~\ref{pLeqT}. Note that since models of $ZFC^-$ admit pairing functions, there is no loss in only considering types in a single variable, in which each formula has only a singleton parameter.
\begin{theorem}\label{localSaturation}
Suppose $\hat{V} \models ZFC^-$ is $\omega$-nonstandard.
Suppose $p(x)= (\varphi_\alpha(x, a_\alpha): \alpha < \lambda)$ is a partial type over $\hat{V}$ of cardinality $\lambda < \mathfrak{p}_{\hat{V}}$. Suppose $\hat{X} \in \hat{V}$ is pseudofinite, and $\varphi_0(x)$ is ``$x \in \hat{X}$''. Then $p(x)$ is realized in $\hat{V}$.
\end{theorem}
\begin{proof}
Obviously this is true when $\lambda$ is finite.
Suppose the theorem is true for all $\lambda' < \lambda$; we show it is true for $\lambda$. This suffices. Write $\hat{n}_* = |\hat{X}|$.
We choose $(s_\alpha: \alpha \leq \lambda)$ an increasing sequence from $\hat{X}^{<\hat{n}_*}$, such that if we let $\hat{n}_\alpha = \hat{{\rm lg}}(s_\alpha)$, then for all $\beta < \alpha < \lambda$ and for all $\hat{n}_\beta \leq \hat{n} < \hat{n}_\alpha$, $\hat{V} \models \varphi_\beta(s_\alpha(\hat{n}), a_\beta)$. Obviously then $s_\lambda(\hat{n}_\lambda - 1)$ will realize $p(x)$.
Let $s_0 = \emptyset$. At successor stage $\alpha$, just use the hypothesis for $\lambda' = |\alpha| < \lambda$.
Suppose we have defined $(s_\alpha: \alpha < \delta)$ where $\delta \leq \lambda$. Using $|\delta| < \mathfrak{p}_{\hat{V}} \leq \mathfrak{t}_{\hat{V}}$ (by Lemma~\ref{pLeqT}), we may apply Lemma~\ref{definableTreeTops} to choose $s_+ \in \hat{X}^{<\hat{n}_*}$, an upper bound of $(s_\alpha: \alpha < \delta)$.
Let $\hat{m}_0 = \hat{{\rm lg}}(s_+)$. For $\beta \leq \delta$ we will define $\hat{m}_\beta$ so that for all $\alpha < \delta$, and for all $\beta < \beta' < \delta$, $\hat{n}_\alpha < \hat{m}_{\beta'} < \hat{m}_\beta$, and further for every $\beta \leq \delta$, we have that for every $\beta' < \beta$ and for every $\hat{n}_{\beta'} \leq \hat{n} < \hat{m}_\beta$, $\hat{V} \models \varphi_{\beta'}(s_+(\hat{n}), a_{\beta'})$. Note once we finish we can set $s_\delta = s_+ \restriction_{\hat{m}_\delta}$.
Having defined $\hat{m}_\beta$ with $\beta+1 \leq \delta$, let $\hat{m}_{\beta+1}$ be the greatest $\hat{m} < \hat{m}_\beta$ such that for all $\hat{n}_\beta \leq \hat{n} < \hat{m}$, $\hat{V} \models \varphi_\beta(s_+(\hat{n}), a_\beta)$; this works, since each $\hat{n}_\alpha$ with $\beta < \alpha < \delta$ witnesses the defining property, and so $\hat{m}_{\beta+1} > \hat{n}_\alpha$. Having defined $\hat{m}_\beta$ for all $\beta < \delta' \leq \delta$, since $\delta' \leq \delta \leq \lambda < \mathfrak{p}_{\hat{V}}$ we can choose $\hat{m}_{\delta'}$ with $\hat{n}_\alpha < \hat{m}_{\delta'} < \hat{m}_{\beta}$ for all $\alpha < \delta$, $\beta < \delta'$.
This concludes the construction.
\end{proof}
\section{$\mathfrak{p}_{\hat{V}} = \mathfrak{t}_{\hat{V}}$}\label{pVEqualsTvSec}
In this section, we prove the following theorem. It corresponds to Central Theorem 9.1 of \cite{pEqualsTref}.
\begin{theorem}\label{pVEqualsTv}Suppose $\hat{V} \models ZFC^-$ is $\omega$-nonstandard. Then $\mathfrak{p}_{\hat{V}} = \mathfrak{t}_{\hat{V}}$.
\end{theorem}
Fix $\hat{V}$ for the rest of the section.
We begin with the following theorem; it corresponds to Theorem 3.1 of \cite{pEqualsTref}.
\begin{theorem}\label{lcfWellDefined} Suppose $\kappa < \mbox{min}(\mathfrak{p}_{\hat{V}}^+, \mathfrak{t}_{\hat{V}})$ is regular. Then there is a unique regular cardinal $\lambda$ with $(\kappa, \lambda) \in \mathcal{C}_{\hat{V}}$; moreover this $\lambda$ is also unique with the property that $(\lambda, \kappa) \in \mathcal{C}_{\hat{V}}$.
\end{theorem}
\begin{proof}
We first show that there exist $\lambda_0, \lambda_1$ regular cardinals with $(\kappa, \lambda_0) \in \mathcal{C}_{\hat{V}}$ and $(\lambda_1, \kappa) \in \mathcal{C}_{\hat{V}}$. We will then show that $\lambda_0 = \lambda_1$, which suffices to prove the theorem.
For $\lambda_0$: pick $\hat{n}_*$ nonstandard, and note that by Lemma~\ref{definableTreeTops} applied to the tree $(\hat{n}_*, \hat{<})$ we can choose $(\hat{n}_\alpha: \alpha < \kappa)$ a strictly increasing sequence below $\hat{n}_*$. Let $(\hat{m}_\beta: \beta < \beta_*)$ be any strictly decreasing sequence in $(\hat{n}_*, \hat{<})$, cofinal above $(\hat{n}_\alpha: \alpha < \kappa)$, and then discard elements to replace $\beta_*$ by $\mbox{cof}(\beta_*) =: \lambda_0$.
For $\lambda_1$: I claim that we can define $(\hat{n}'_\alpha: \alpha < \kappa)$, a strictly decreasing sequence of nonstandard numbers from $\hat{\omega}$. To see that we can do this: first let $\hat{n}'_0$ be an arbitrary nonstandard natural number. Having defined $\hat{n}'_{\alpha}$, let $\hat{n}'_{\alpha+1} = \hat{n}'_\alpha - 1$. Having defined $\hat{n}'_\alpha$ for all $\alpha < \delta$ where $\delta < \kappa$ is a limit, consider the pre-cut $(n: n < \omega), (\hat{n}'_\alpha: \alpha < \delta)$. Since $\omega + \delta < \kappa \leq \mathfrak{p}_{\hat{V}}$ this cannot be a cut, so choose $\hat{n}'_\delta$ in the gap. Having constructed $\hat{n}'_\alpha$ for each $\alpha < \kappa$, we can as in the previous paragraph choose a regular $\lambda_1$ and a strictly increasing sequence $(\hat{m}'_\gamma: \gamma < \lambda_1)$, cofinal below $(\hat{n}'_\alpha: \alpha < \kappa)$.
Now to show $\lambda_0 = \lambda_1$: first, by possibly increasing $\hat{n}_*$, we can suppose $\hat{n}_* > \hat{n}'_0$, and thus each $\hat{n}_\alpha, \hat{n}'_\alpha, \hat{m}_\beta, \hat{m}'_\beta < \hat{n}_*$. Let $(\hat{T}, \hat{<})$ be the tree of all sequences $s \in (\hat{n}_* \times \hat{n}_*)^{<\hat{n}_*}$, such that for all $\hat{n} < \hat{m} < \hat{{\rm lg}}(s)$, $s(\hat{n})(0) < s(\hat{m})(0) < s(\hat{m})(1) < s(\hat{n})(1)$.
We now choose a strictly increasing sequence $(s_\alpha: \alpha < \kappa)$ from $\hat{T}$ such that for each $\alpha < \kappa$, if we set $\hat{a}_\alpha = \hat{{\rm lg}}(s_\alpha)$, then $s_{\alpha}(\hat{a}_\alpha-1) = (\hat{n}_\alpha, \hat{n}'_\alpha)$. Let $s_0 = \emptyset$; having defined $s_\alpha$, let $s_{\alpha+1} = s_\alpha \,^\frown (\hat{n}_{\alpha+1}, \hat{n}'_{\alpha+1})$. Finally, having defined $s_\alpha$ for each $\alpha < \delta$ for $\delta < \kappa$ limit, since $|\delta| < \mathfrak{t}_{\hat{V}}$ we can choose $s_+$ an upper bound for $(s_\alpha: \alpha < \delta)$. Let $\hat{n}$ be greatest so that $s_+(\hat{n})(0) < \hat{n}_\delta$ and $s_+(\hat{n})(1) > \hat{n}'_\delta$; let $s_\delta = s_+ \restriction_{\hat{n}} \,^\frown (\hat{n}_\delta, \hat{n}'_\delta)$.
Since $\kappa < \mathfrak{t}_{\hat{V}}$ we can choose $s$ an upper bound on $(s_\alpha: \alpha < \kappa)$. Choose $\lambda$ regular and $(\hat{b}_\alpha: \alpha < \lambda)$ a strictly decreasing sequence from $\hat{\omega}$, which is cofinal above $(\hat{a}_\alpha: \alpha < \kappa)$, and such that $\hat{b}_0= \hat{{\rm lg}}(s)-1$.
Then the sequences $(\hat{m}_\alpha: \alpha < \lambda_0)$ and $(s(\hat{b}_\alpha)(0): \alpha < \lambda)$ are cofinal in each other, so $\lambda_0 = \lambda$; and the sequences $(\hat{m}'_\alpha: \alpha < \lambda_1)$ and $(s(\hat{b}_\alpha)(1): \alpha < \lambda)$ are cofinal in each other, so $\lambda_1 = \lambda$.
\end{proof}
Note that in the following definition, we will eventually be proving that $\min(\mathfrak{p}_{\hat{V}}^+, \mathfrak{t}_{\hat{V}}) = \mathfrak{p}_{\hat{V}} = \mathfrak{t}_{\hat{V}}$.
\begin{definition}
For $\kappa < \min(\mathfrak{p}_{\hat{V}}^+, \mathfrak{t}_{\hat{V}})$ regular, define $\mbox{lcf}_{\hat{V}}(\kappa)$ to be the unique regular $\lambda$ with $(\kappa, \lambda) \in \mathcal{C}_{\hat{V}}$ (which is also the unique regular $\lambda$ with $(\lambda, \kappa) \in \mathcal{C}_{\hat{V}}$).
\end{definition}
Note that by definition of $\mathfrak{p}_{\hat{V}}$ there is some $\kappa \leq \mathfrak{p}_{\hat{V}}$ such that either $(\kappa, \mathfrak{p}_{\hat{V}}) \in \mathcal{C}_{\hat{V}}$ or else $(\mathfrak{p}_{\hat{V}}, \kappa) \in \mathcal{C}_{\hat{V}}$. If $\mathfrak{p}_{\hat{V}} < \mathfrak{t}_{\hat{V}}$, then $\kappa = \mbox{lcf}_{\hat{V}}({\mathfrak{p}_{\hat{V}}})$ and thus both occur.
Thus, for the contradiction, it suffices to show if $\mathfrak{p}_{\hat{V}} < \mathfrak{t}_{\hat{V}}$, then for all $\kappa \leq \mathfrak{p}_{\hat{V}}$, we have that $(\kappa, \mathfrak{p}_{\hat{V}}) \not \in \mathcal{C}_{\hat{V}}$.
The following easy case corresponds to Lemma 6.1 of \cite{pEqualsTref}.
\begin{lemma}\label{noSmallSymCuts}
Suppose $\mathfrak{p}_{\hat{V}} < \mathfrak{t}_{\hat{V}}$. Write $\kappa = \mathfrak{p}_{\hat{V}}$. Then $(\kappa, \kappa) \not \in \mathcal{C}_{\hat{V}}$.
\end{lemma}
\begin{proof}
Suppose it were, say via $(\hat{a}_\alpha: \alpha < \kappa)$, $(\hat{b}_\alpha: \alpha < \kappa)$. Write $\hat{N}_* = \hat{b}_0+1$. Let $(\hat{T}, \hat{<})$ be the tree of all sequences $s$ in $(\hat{N}_* \times \hat{N}_*)^{<\hat{N}_*}$ such that for all $\hat{n} < \hat{m} < \hat{{\rm lg}}(s)$, $s(\hat{n})(0) < s(\hat{m})(0) < s(\hat{m})(1) < s(\hat{n})(1)$. Using the techniques of the previous proofs it is easy to define $(s_\alpha: \alpha < \kappa)$ an increasing sequence from $\hat{T}$ such that if we set $\hat{n}_\alpha = \hat{{\rm lg}}(s_\alpha)$, then $s_\alpha(\hat{n}_\alpha-1) = (\hat{a}_\alpha, \hat{b}_\alpha)$. Then since $\kappa < \mathfrak{t}_{\hat{V}}$ is regular we can choose an upper bound $s$ for $(s_\alpha: \alpha < \kappa)$. Then $s(\hat{{\rm lg}}(s)-1)(0)$ is in the gap $(\hat{a}_\alpha: \alpha < \kappa), (\hat{b}_\alpha: \alpha < \kappa)$; but this was supposed to be a cut.
\end{proof}
Before finishing, we will want the following standard fact. It is listed as Fact 8.4 of \cite{pEqualsTref}.
\begin{lemma}\label{Combinatorics}
For every $\kappa$, there is some map $g: [\kappa^+]^2 \to \kappa$ such that for every $X \subseteq \kappa^+$, if $|X| = \kappa^+$ then $|g[[X]^2]| = \kappa$.
\end{lemma}
\begin{proof}
Choose $g$ so that for all $\gamma < \beta < \alpha < \kappa^+$, $g(\gamma, \alpha) \not= g(\beta, \alpha)$ (this is possible since for all $\alpha < \kappa^+$, there is an injection from $\alpha$ to $\kappa$). Suppose $X \subseteq \kappa^+$ has size $\kappa^+$. Then we can choose $\alpha \in X$ such that $|\alpha \cap X| = \kappa$. Then for all $\beta, \gamma \in \alpha \cap X$ distinct, $g(\beta, \alpha) \not= g(\gamma, \alpha)$; hence $|g[[X]^2]| = \kappa$.
\end{proof}
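The combinatorial content of this lemma can be checked mechanically in a finite toy version. Everything below is an assumption of the sketch, not part of the lemma: natural numbers stand in for ordinals, and the injections witnessing the coloring are simply taken to be the identity, so that pairs sharing their top element receive pairwise distinct colors.

```python
# Finite toy of the coloring lemma: choose g with g(gamma, alpha) != g(beta, alpha)
# whenever gamma < beta < alpha. Here g(gamma, alpha) = gamma, which is injective
# in the first argument for each fixed alpha.

def g(gamma, alpha):
    assert gamma < alpha
    return gamma

def colors_used(X):
    """All colors g takes on two-element subsets of X (pairs a < b)."""
    return {g(a, b) for a in X for b in X if a < b}

# Mirroring the proof: the pairs (gamma, max X) get pairwise distinct colors,
# so X uses at least |X| - 1 colors.
X = {3, 7, 11, 20, 42}
assert len(colors_used(X)) >= len(X) - 1
```

Here the lower bound $|X|-1$ is witnessed exactly as in the proof, by pairing every element of $X$ with its maximum.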
To finish the proof of Theorem~\ref{pVEqualsTv}, it suffices to establish the following lemma; it corresponds to Theorem 8.1 of \cite{pEqualsTref}.
\begin{lemma}\label{KeyLemma} Suppose $\mathfrak{p}_{\hat{V}} < \mathfrak{t}_{\hat{V}}$; write $\lambda = \mathfrak{p}_{\hat{V}}$, and let $\kappa < \lambda$ be regular. Then $(\kappa, \lambda) \not \in \mathcal{C}_{\hat{V}}$.
\end{lemma}
\begin{proof}
We proceed like in the proof of Lemma~\ref{noSmallSymCuts}, but with a more inspired tree $\hat{T}$. Towards a contradiction, let $(\hat{a}_\alpha: \alpha < \kappa)$, $(\hat{b}_\beta: \beta < \lambda)$ be a $(\kappa, \lambda)$-cut. Let $\hat{N}_* = \hat{b}_0+1$. Also, choose a function $g: [\kappa^+]^2 \to \kappa$ as in Lemma~\ref{Combinatorics}. Extend $g$ to a function from $[\lambda]^2$ to $\kappa$ arbitrarily.
Now, define $\hat{T}$ to be the tree of all sequences $s = (\hat{e}_{\hat{n}}, \hat{D}_{\hat{n}}, \hat{g}_{\hat{n}}: \hat{n} < \hat{n}_*) \in \hat{V}$ of length $\hat{n}_* < \hat{N}_*$, satisfying:
\begin{enumerate}
\item $(\hat{e}_{\hat{n}}: \hat{n} < \hat{n}_*)$ is a decreasing sequence with $\hat{e}_0 < \hat{N}_*$;
\item For all $\hat{n} < \hat{n}_*$, we have that $\hat{D}_{\hat{n}} \subseteq \hat{n}+1$, and $\hat{g}_{\hat{n}}: [\hat{D}_{\hat{n}}]^2 \to \hat{e}_{\hat{n}}$;
\item If $\hat{n}+1 < \hat{n}_*$ then $\hat{g}_{\hat{n}}$ and $\hat{g}_{\hat{n}+1}$ agree on $[\hat{D}_{\hat{n}} \cap \hat{D}_{\hat{n}+1}]^2$.
\end{enumerate}
So as $\hat{n}$ increases, $\hat{g}_{\hat{n}}$ is squeezing pairs from $\hat{D}_{\hat{n}}$ into the shrinking space $\hat{e}_{\hat{n}}$.
Suppose $\beta_* < \lambda$. Then say that the increasing sequence $(s_\beta: \beta \leq \beta_*)$ from $\hat{T}$ is nice if, writing $\hat{d}_\beta = \hat{{\rm lg}}(s_\beta)$ for each $\beta \leq \beta_*$ and writing $s_{\beta_*}(\hat{n}) = (\hat{e}_{\hat{n}}, \hat{D}_{\hat{n}}, \hat{g}_{\hat{n}})$ for each $\hat{n} < \hat{d}_{\beta_*}$, the following conditions are met:
\begin{enumerate}
\item[4.] For all $\beta < \beta_*$, $\hat{e}_{\hat{d}_{\beta}} = \hat{b}_\beta$;
\item[5.] For all $\beta< \beta_*$ and for all $\hat{d}_\beta < \hat{n} < \hat{d}_{\beta_*}$, $\hat{d}_\beta \in \hat{D}_{\hat{n}}$;
\item[6.] For all $\beta < \beta' < \beta_*$ and for all $\hat{d}_{\beta'} < \hat{n} < \hat{d}_{\beta_*}$, $\hat{g}_{\hat{n}}(\hat{d}_\beta, \hat{d}_{\beta'}) = \hat{a}_{g(\beta, \beta')}$.
\end{enumerate}
Also, for limit ordinals $\delta \leq \lambda$, say that the increasing sequence $(s_\beta: \beta < \delta)$ from $\hat{T}$ is nice if each proper initial segment is.
\noindent \textbf{Claim.} There is a nice sequence $(s_\beta: \beta < \lambda)$ from $\hat{T}$.
\vspace{1 mm}
Before proving the claim, we indicate why it suffices. Let $(s_\beta: \beta < \lambda)$ be a nice sequence from $\hat{T}$. Since $\lambda = \mathfrak{p}_{\hat{V}} < \mathfrak{t}_{\hat{V}}$, we can find an upper bound $s_\lambda$ to $(s_\beta: \beta < \lambda)$ in $\hat{T}$. Write $\hat{d}_\beta = {\rm lg}(s_\beta)$ for each $\beta \leq \lambda$, and write $s_\lambda(\hat{n}) = (\hat{e}_{\hat{n}}, \hat{D}_{\hat{n}}, \hat{g}_{\hat{n}})$ for each $\hat{n} < \hat{d}_{\lambda}$. The idea is to find some $\gamma < \gamma' < \kappa^+$ and some $\hat{d}_{\gamma'} < \hat{n} < \hat{n}_*$ such that $\hat{n}$ is small enough that $\hat{g}_{\hat{n}}(\hat{d}_\gamma, \hat{d}_{\gamma'}) = \hat{a}_{g(\gamma, \gamma')}$, and such that $\hat{n}$ is large enough that $\hat{e}_{\hat{n}} \leq \hat{a}_{g(\gamma, \gamma')}$. This will be a contradiction.
Formally, choose a decreasing sequence $(\hat{k}_\alpha: \alpha < \kappa)$ with $\hat{k}_0 = \hat{d}_\lambda$ so that $(\hat{d}_\beta: \beta < \lambda), (\hat{k}_\alpha: \alpha < \kappa)$ is a cut; this is possible by uniqueness of $\mbox{lcf}_{\hat{V}}(\lambda) = \kappa$. Note that for each $\gamma < \kappa^+$, we can find some $\alpha_\gamma < \kappa$ such that whenever $\hat{d}_\gamma \leq \hat{n} \leq \hat{k}_{\alpha_\gamma}$, we have that $\hat{d}_\gamma \in \hat{D}_{\hat{n}}$ (otherwise, the least $\hat{n} \geq \hat{d}_\gamma$ with $\hat{d}_\gamma \not \in \hat{D}_{\hat{n}}$ would fill the cut $(\hat{d}_\beta: \beta < \lambda), (\hat{k}_\alpha: \alpha < \kappa)$). Then we can find some $\alpha < \kappa$ such that $\Gamma := \{\gamma < \kappa^+: \alpha_\gamma = \alpha\}$ has size $\kappa^+$. Let $\alpha' < \kappa$ be large enough so that $\hat{e}_{\hat{k}_\alpha} \leq \hat{a}_{\alpha'}$ (if there were no such $\alpha'$ then $\hat{e}_{\hat{k}_\alpha}$ would fill the cut $(\hat{a}_\alpha: \alpha < \kappa), (\hat{b}_\beta: \beta < \lambda)$). Now by choice of $g$, there are $\gamma < \gamma'$ in $\Gamma$ with $g(\gamma, \gamma') \geq \alpha'$. Now $\hat{g}_{\hat{k}_\alpha}(\hat{d}_\gamma, \hat{d}_{\gamma'}) = \hat{a}_{g(\gamma, \gamma')}$ by conditions 3 and 6; but $\hat{a}_{g(\gamma, \gamma')} \geq \hat{a}_{\alpha'} \geq \hat{e}_{\hat{k}_\alpha}$, contradicting that $\hat{g}_{\hat{k}_\alpha}: [\hat{D}_{\hat{k}_\alpha}]^2 \to \hat{e}_{\hat{k}_\alpha}$.
So it suffices to prove the claim. We define our nice sequence $(s_\beta: \beta <\lambda)$ inductively. At the stage $\beta = 0$, we just set $s_0 = \emptyset$. At limit stages, there is nothing to do.
Suppose we have constructed $(s_\beta: \beta < \beta_*)$, where $\beta_*< \lambda$ is a limit ordinal (i.e. we are at the successor of a limit stage). By Theorem~\ref{localSaturation}, we can find some upper bound $s_{\beta_*}$ to $(s_\beta: \beta < \beta_*)$ in $\hat{T}$, such that if we write $\hat{d}_\beta = \hat{{\rm lg}}(s_\beta)$ for each $\beta \leq \beta_*$, and write $s_{\beta_*} = (\hat{e}_{\hat{n}}, \hat{D}_{\hat{n}}, \hat{g}_{\hat{n}}: \hat{n} < \hat{d}_{\beta_*})$, then for all $\beta < \beta_*$ and for all $\hat{d}_{\beta} < \hat{n} < \hat{d}_{\beta_*}$, $\hat{d}_\beta \in \hat{D}_{\hat{n}}$. Then $(s_\beta: \beta < \beta_* +1)$ is nice.
Finally, suppose we have constructed $(s_\beta: \beta < \beta_*+1)$ for some $\beta_*< \lambda$. Write $\hat{d}_\beta = \hat{{\rm lg}}(s_\beta)$ for each $\beta \leq \beta_*$, and write $s_{\beta_*} = (\hat{e}_{\hat{n}}, \hat{D}_{\hat{n}}, \hat{g}_{\hat{n}}: \hat{n} < \hat{d}_{\beta_*})$. Write $\hat{n} = \hat{d}_{\beta_*}$. Let $\hat{d}_{\beta_*+1} = \hat{n}+ 1$ and let $\hat{e}_{\hat{n}} = \hat{b}_{\beta_*}$. By Theorem~\ref{localSaturation} (using that $\hat{\mathcal{P}}(\hat{D}_{\hat{n}-1})$ is pseudofinite), we can find some $\hat{D} \subseteq \hat{D}_{\hat{n}-1}$ such that $\hat{d}_\gamma \in \hat{D}$ for all $\gamma< \beta_*$, and such that for all $\hat{u} \in [\hat{D}]^2$, $\hat{g}_{\hat{n}-1}(\hat{u}) < \hat{e}_{\hat{n}} = \hat{b}_{\beta_*}$. (This uses that each $\hat{g}_{\hat{n}-1}(\hat{d}_\gamma, \hat{d}_{\gamma'}) = \hat{a}_{g(\gamma, \gamma')} < \hat{b}_{\beta_*}$.) Let $\hat{D}_{\hat{n}} = \hat{D} \cup \{\hat{n}\}$. By Theorem~\ref{localSaturation} again (using that $\hat{e}_{\hat{n}}^{[\hat{D}_{\hat{n}}]^2}$ is pseudofinite), we can find $\hat{g}_{\hat{n}}: [\hat{D}_{\hat{n}}]^2 \to \hat{e}_{\hat{n}}$ extending $\hat{g}_{\hat{n}-1} \restriction_{\hat{D}}$, such that $\hat{g}_{\hat{n}}(\hat{d}_\gamma, \hat{d}_{\beta_*}) = \hat{a}_{g(\gamma, \beta_*)}$ for every $\gamma < \beta_*$. Let $s_{\beta_*+1} = s_{\beta_*} \,^\frown\, (\hat{e}_{\hat{n}}, \hat{D}_{\hat{n}}, \hat{g}_{\hat{n}})$. Then $(s_\beta: \beta < \beta_*+2)$ is nice.
\end{proof}
This concludes the proof that $\mathfrak{p}_{\hat{V}} = \mathfrak{t}_{\hat{V}}$.
\section{$\mathfrak{p} = \mathfrak{t}$}\label{CardInvariants}
We begin the final leg of the proof of $\mathfrak{p} = \mathfrak{t}$ with the relevant definitions:
\begin{definition}
\begin{itemize}
\item Given $X, Y \subset \omega$, say that $X \subseteq_* Y$ if $X \backslash Y$ is finite.
\item Given $\mathcal{B} = \{B_\alpha: \alpha < \kappa\}$, say that $\mathcal{B}$ has the strong finite intersection property if every intersection of finitely many elements of $\mathcal{B}$ is infinite. Say that $\mathcal{B}$ has a pseudo-intersection if there is some infinite $X \subset \omega$ with $X\subseteq_* B_\alpha$ for each $\alpha < \kappa$.
\item Let $\mathfrak{p}$ be the least cardinality of a family $\mathcal{B}$ of subsets of $\omega$ with the strong finite intersection property but without a pseudo-intersection.
\item Say that $(X_\alpha: \alpha < \kappa)$ is a tower if each $X_\alpha \subseteq \omega$ is infinite, and $\alpha < \beta < \kappa$ implies $X_\alpha \supseteq_* X_\beta$.
\item Let $\mathfrak{t}$ be the least cardinality of a tower with no pseudo-intersection.
\end{itemize}
\end{definition}
Obviously $\mathfrak{p} \leq \mathfrak{t}$. See \cite{CombCharSurvey} for a survey on the classical theory of cardinal invariants of the continuum.
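As a concrete toy illustration of these definitions (with $\omega$ truncated at a finite bound $N$, an assumption of the sketch): the tower of multiples of successive powers of two has the set of powers of two as a pseudo-intersection, since $2^n$ is a multiple of $2^k$ whenever $n \geq k$.

```python
# Toy illustration of towers and pseudo-intersections, truncating omega at N.
# X(k) = multiples of 2^k below N; P = powers of two below N.
# P is almost contained in every X(k): only 2^0, ..., 2^(k-1) fall outside.

N = 10_000

def X(k):
    return {n for n in range(1, N) if n % 2**k == 0}

P = {2**n for n in range(14) if 2**n < N}

for k in range(5):
    assert X(k + 1) <= X(k)                      # the X(k) form a tower
    assert P - X(k) == {2**n for n in range(k)}  # finite exceptional set
```

Of course no finite truncation can witness the *failure* of a pseudo-intersection; the sketch only illustrates $\subseteq_*$ and the tower ordering.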
To begin making connections with the previous section we observe the following lemma.
\begin{lemma}\label{pVtVEquivalents}
Suppose $\hat{V} \models ZFC^-$ is $\omega$-nonstandard. Then the following are equivalent:
\begin{itemize}
\item[(A)] $\lambda < \mathfrak{p}_{\hat{V}}$.
\item[(B)] $\lambda < \mathfrak{t}_{\hat{V}}$.
\item[(C)] Whenever $(\hat{a}_\alpha: \alpha < \lambda)$ is a family from $[\hat{\omega}]^{<\hat{\aleph}_0}$ such that for all $\alpha_0, \ldots, \alpha_{n-1} < \lambda$, $|\hat{a}_{\alpha_0} \hat{\cap} \ldots \hat{\cap} \hat{a}_{\alpha_{n-1}}|$ is nonstandard, then there is $\hat{a} \in [\hat{\omega}]^{<\hat{\aleph}_0}$ with $|\hat{a}|$ nonstandard, such that $\hat{a}\subseteq \hat{a}_\alpha$ for each $\alpha < \lambda$.
\item[(D)] Whenever $(\hat{a}_\alpha: \alpha < \lambda)$ is a descending sequence of nonempty sets from $[\hat{\omega}]^{<\hat{\aleph}_0}$, there is some $\hat{m}<\hat{\omega}$ such that $\hat{m} \in \hat{a}_\alpha$ for each $\alpha < \lambda$.
\end{itemize}
\end{lemma}
\begin{proof}
(A) and (B) are equivalent by Theorem~\ref{pVEqualsTv}, and they imply the other items by Theorem~\ref{localSaturation}. (For (A) implies (C), note that we are requiring $\hat{a} \in \hat{\mathcal{P}}(\hat{a}_0)$, a pseudofinite set.) Also, clearly (C) implies (D). So it suffices to show that (D) implies (B).
Suppose $(s_\alpha: \alpha < \lambda)$ is an increasing sequence from $\hat{n}^{<\hat{n}}$. Let $\hat{a}_\alpha = \{s \in \hat{n}^{\hat{n}-1}: s_\alpha \subseteq s\}$. Then by (D) (and applying an injection from $\hat{n}^{\hat{n}-1}$ to $\hat{n}'$ for large enough $\hat{n}'$) we can choose $s \in \hat{n}^{\hat{n}-1}$ with $s \in \hat{a}_\alpha$ for each $\alpha < \lambda$. Then $s$ is an upper bound on $(s_\alpha: \alpha < \lambda)$.
\end{proof}
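The encoding used for (D) implies (B) — an increasing chain of nodes becomes a descending chain of sets of maximal extensions, and a common member of all these sets is an upper bound of the chain — can be pictured with a finite toy. The alphabet size and string lengths below are assumptions of the sketch.

```python
# Toy of the encoding in the proof above: an increasing sequence of strings
# s_alpha becomes a descending sequence of sets a_alpha of maximal extensions;
# a common member of all a_alpha is an upper bound of the chain.
from itertools import product

n = 3            # alphabet {0, 1, 2}; "maximal" length taken to be 4 in this toy
full = [''.join(map(str, t)) for t in product(range(n), repeat=4)]

chain = ['', '1', '12']                       # increasing sequence of strings
a = [{s for s in full if s.startswith(p)} for p in chain]

assert a[0] >= a[1] >= a[2]                   # descending sets
common = set.intersection(*a)
s = sorted(common)[0]
assert all(s.startswith(p) for p in chain)    # s is an upper bound of the chain
```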
We need one more lemma. It is implicit in the proof of Claim 14.7 of \cite{pEqualsTref}.
\begin{definition}
Suppose $f, g: \omega \to [\omega]^{<\aleph_0}$ and $A \subseteq \omega$ is infinite. Then say that $f \leq_A g$ if $\{n \in A: f(n) \not \subseteq g(n)\}$ is finite.
\end{definition}
\begin{lemma}\label{pEqualsTTechnical}
Suppose $\lambda < \mathfrak{t}$ is an infinite cardinal and $A \subseteq \omega$ is infinite and $(f_\alpha: \alpha < \lambda)$ is a sequence from $([\omega]^{<\aleph_0})^\omega$ with $f_\alpha \geq_A f_\beta$ for all $\alpha < \beta < \lambda$. Suppose further that for each $\alpha < \lambda$, $\{m \in A: f_\alpha(m) = \emptyset\}$ is finite. Then there is some infinite $B \subseteq A$ and some $f: \omega \to [\omega]^{<\aleph_0}$ such that $f \leq_B f_\alpha$ for each $\alpha < \lambda$, and further $f(m) \not= \emptyset$ for each $m \in B$.
\end{lemma}
\begin{proof}
For notational simplicity, we assume $A = \omega$.
For each $\alpha < \lambda$ define $X_\alpha := \{\langle m, n\rangle: n \in f_\alpha(m)\}$; so $X_\alpha$ is an infinite subset of $\omega \times \omega$. Suppose $\alpha < \beta$; then there is $m_*$ so that for all $m\geq m_*$, $f_\alpha(m) \subseteq f_\beta(m)$. Hence $X_\alpha \backslash X_\beta \subseteq \bigcup_{m < m_*} \{m\} \times f_\alpha(m)$ is finite, so $X_\alpha \supseteq_* X_\beta$. Hence $(X_\alpha: \alpha < \lambda)$ is a tower; by hypothesis on $\lambda$ we can choose an infinite $X \subseteq \omega \times \omega$ such that $X \subseteq_* X_\alpha$ for each $\alpha < \lambda$. Define $f: \omega \to [\omega]^{<\aleph_0}$ by $f(m) = \{n: \langle m, n \rangle \in X\}$. (Each $f(m)$ is finite because $X \subseteq_* X_0$.) Let $B = \{m < \omega: f(m) \not= \emptyset\}$. Clearly this works.
\end{proof}
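The translation in this proof — a function $f: \omega \to [\omega]^{<\aleph_0}$ becomes the set $X_f = \{\langle m, n\rangle : n \in f(m)\}$, so that $\leq_A$ turns into almost-inclusion of subsets of $\omega \times \omega$ — can be checked on a finite truncation. The particular functions and the truncation bound below are assumptions of the sketch, not part of the proof.

```python
# Toy check (truncated at M) that f <= g pointwise translates to X_f contained
# in X_g, and that finitely many pointwise failures give a finite difference.

M = 200

def X(f):
    """Encode f: omega -> finite sets as a set of pairs, truncated at M."""
    return {(m, n) for m in range(M) for n in f(m)}

f = lambda m: set(range(m // 2))        # f(m) = {0, ..., m//2 - 1}
g = lambda m: set(range(m))             # g(m) = {0, ..., m - 1}
assert X(f) <= X(g)                     # f(m) subset of g(m) everywhere

h = lambda m: set(range(m)) if m >= 5 else {m + 1}
assert len(X(h) - X(g)) == 5            # finitely many exceptions: m = 0..4
```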
\newpage
Finally, the following is Theorem 14.1 of \cite{pEqualsTref}.
\begin{theorem}\label{pEqualsT}
$\mathfrak{p} =\mathfrak{t}$.
\end{theorem}
\begin{proof}
We know that $\mathfrak{p} \leq \mathfrak{t}$; suppose towards a contradiction that $\mathfrak{p} < \mathfrak{t}$. We can suppose that $\mathfrak{t} = 2^{\aleph_0} = 2^{<\mathfrak{t}}$ since if we force by the Levy collapse of $2^{<\mathfrak{t}}$ to $\mathfrak{t}$, this adds no new sequences of reals of length less than $\mathfrak{t}$, and so does not affect the values of $\mathfrak{p}$ and $\mathfrak{t}$. So henceforth we assume this.
Our aim is to build a special ultrafilter $\mathcal{U}$ on $\omega$, such that if we set $\hat{V} =V^\omega / \mathcal{U}$ for some or any transitive $V \models ZFC^-$, then $\mathfrak{p}_{\hat{V}} \leq \mathfrak{p}$ and $\mathfrak{t}_{\hat{V}} \geq \mathfrak{t}$. In view of $\mathfrak{p}_{\hat{V}} = \mathfrak{t}_{\hat{V}}$ this clearly suffices for the contradiction.
Inductively choose a tower $(A_\gamma: \gamma < \mathfrak{t})$ so that:
\begin{itemize}
\item[(1)] (This is the definition of tower) Each $A_\gamma$ is infinite and $\gamma < \gamma' < \mathfrak{t}$ implies $A_{\gamma'} \subseteq_* A_\gamma$.
\item[(2)] For each $A \subseteq \omega$, there is some $\gamma< \mathfrak{t}$ such that either $A_\gamma \subseteq_* A$ or else $A_\gamma \subseteq_* \omega \backslash A$;
\item[(3)] Suppose $(f_\alpha: \alpha < \lambda)$ is a sequence from $([\omega]^{<\aleph_0})^\omega$ of length $\lambda < \mathfrak{t}$, and for some $\gamma < 2^{\aleph_0}$, we have that for all $\alpha < \beta < \lambda$, $f_\beta \leq_{A_{\gamma}} f_\alpha$.
Then for some $\gamma_* \geq \gamma$, there is some $f: \omega \to [\omega]^{<\aleph_0}$ such that $\{m \in A_{\gamma_*}: f(m) = \emptyset\}$ is finite, and such that $f \leq_{A_{\gamma_*}} f_\alpha$ for each $\alpha < \lambda$.
\end{itemize}
This is straightforward, using Lemma~\ref{pEqualsTTechnical} and that $(2^{\aleph_0})^{<\mathfrak{t}} = \mathfrak{t}$. Let $\mathcal{U} $ be the set of all $A \subset \omega$ such that $A_\gamma \subseteq_* A$ for some $\gamma < 2^{\aleph_0}$. Then $\mathcal{U}$ is a nonprincipal ultrafilter, by (1) and (2). Let $V$ be a transitive model of $ZFC^-$; for instance, we can take $V = \textrm{HC}$, the set of hereditarily countable sets. Let $\hat{V} = V^\omega/\mathcal{U}$.
\vspace{1 mm}
\noindent \textbf{Claim 1.} $\mathfrak{p}_{\hat{V}} \leq \mathfrak{p}$.
\noindent \emph{Proof of Claim 1.} Suppose $\lambda < \mathfrak{p}_{\hat{V}}$; we show $\lambda < \mathfrak{p}$. Let $\{B_\alpha: \alpha < \lambda\}$ be a family of subsets of $\omega$ with the strong finite intersection property. Define $f_\alpha: \omega \to [\omega]^{<\aleph_0}$ by $f_\alpha(m) = B_\alpha \cap m$; let $\hat{a}_\alpha = [f_\alpha]_\mathcal{U}$. So each $\hat{a}_\alpha \in [\hat{\omega}]^{<\hat{\aleph}_0}$, and $\{\hat{a}_\alpha: \alpha < \lambda\}$ satisfies the hypothesis of Lemma~\ref{pVtVEquivalents}(C). Since $\lambda < \mathfrak{p}_{\hat{V}}$ there is $\hat{a} \in [\hat{\omega}]^{<\hat{\aleph}_0}$ with $|\hat{a}|$ nonstandard, and with $\hat{a} \subseteq \hat{a}_\alpha$ for each $\alpha < \lambda$.
Write $\hat{a} = [f]_{\mathcal{U}}$. For each $\alpha < \lambda$ there is some $\gamma < \mathfrak{t}$ such that $f \leq_{A_\gamma} f_\alpha$. Since $\mathfrak{t}$ is regular, we can choose $\gamma_*$ large enough so that $f \leq_{A_{\gamma_*}} f_\alpha$ for each $\alpha < \lambda$. Define $B \subseteq \omega$ by $B = \bigcup_{m \in A_{\gamma_*}} f(m)$. $B$ is infinite since $\{m \in A_{\gamma_*}: |f(m)| \geq n\} \in \mathcal{U}$ for each $n < \omega$. Also, suppose $\alpha < \lambda$; choose $m_*$ large enough so that $f(m) \subseteq f_\alpha(m)$ for every $m \in A_{\gamma_*} \backslash m_*$. Then $B \backslash B_{\alpha} \subseteq \bigcup_{m \in A_{\gamma_*} \cap m_*} f(m)$ is finite, so $B \subseteq_* B_\alpha$. This shows that $\lambda < \mathfrak{p}$, concluding the proof of the claim.
\vspace{1 mm}
\noindent \textbf{Claim 2.} $\mathfrak{t}_{\hat{V}} \geq \mathfrak{t}$.
\noindent \emph{Proof of Claim 2.} Let $\lambda < \mathfrak{t}$ be given; we show $\lambda < \mathfrak{t}_{\hat{V}}$. It suffices to show (D) from Lemma~\ref{pVtVEquivalents} holds. So let $(\hat{a}_\alpha: \alpha < \lambda)$ be a descending sequence of nonempty sets from $[\hat{\omega}]^{<\hat{\aleph}_0}$; write $\hat{a}_\alpha = [f_\alpha]_{\mathcal{U}}$.
Note that for each $\alpha < \beta < \lambda$ there is some $\gamma$ with $f_\alpha \geq_{A_\gamma} f_\beta$, and for each $\alpha < \lambda$ there is some $\gamma$ with $\{m \in A_\gamma: f_\alpha(m) = \emptyset\}$ finite. Since $\mathfrak{t}$ is regular we can choose $\gamma_*$ large enough so that $f_\alpha \geq_{A_{\gamma_*}} f_\beta$ for all $\alpha < \beta < \lambda$, and such that $\{m \in A_{\gamma_*}: f_\alpha(m) = \emptyset\}$ is finite for each $\alpha < \lambda$.
By item (3) of the construction we can find $\gamma \geq \gamma_*$ and $f: \omega \to [\omega]^{<\aleph_0}$ such that $f(m) \not= \emptyset$ for all but finitely many $m \in A_\gamma$, and $f \leq_{A_{\gamma}} f_\alpha$ for each $\alpha < \lambda$. Let $\hat{a} = [f]_{\mathcal{U}}$; then $\hat{a}$ is nonempty, and since $A_\gamma \in \mathcal{U}$ and $f \leq_{A_\gamma} f_\alpha$ we have $\hat{a} \subseteq \hat{a}_\alpha$ for each $\alpha < \lambda$; so any $\hat{m} \in \hat{a}$ is as desired.
\end{proof}
\section{}
Particulate dispersions have long been subjected to external
fields as a means to separate different constituents; in particular,
sedimentation is important not only for analytical but also for preparative
purposes \cite{manoharan2003}.
For bulk systems, successful separation depends crucially upon avoiding
hydrodynamic instabilities. The development of microfluidics
\cite{squires2005} has made it possible to exploit the suppression
of turbulence at small lengthscales in order to design novel separation
devices \cite{huh2007}; on the other hand, this significantly increased
stability against mechanical perturbations severly limits mixing needed for
many `lab-on-a-chip' applications.
Often strong external fields \cite{elmoctar2003}
or complex fabrication \cite{simonnet2005} are required
to produce the hydrodynamic instabilities needed for efficient mixing.
Experiments \cite{segre1997,royall2007} and computer simulations
\cite{padding2008} which study velocity fluctuations have played a crucial
role in our understanding of how dispersions respond to external driving
fields, in particular to gravity. The motion of a solute particle
is characterised by a Peclet number $Pe=\tau_{D}/\tau_{S}$, which
is the ratio between the time $\tau_{D}$ it takes a particle to diffuse
its own radius and the time $\tau_{S}$ it takes to sediment the same distance.
A Peclet number of order unity is the dividing line between colloidal ($Pe\lesssim1$)
and granular systems ($Pe\gg1$), i.e. $Pe$ measures the importance of gravity
relative to Brownian motion. All attempts at a quantitative description of sedimentation to date have considered a homogeneously
distributed dispersion as the initial state.
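For orientation, both time scales follow from standard estimates: the Stokes--Einstein diffusivity $D=k_BT/6\pi\eta a$ and the Stokes settling velocity $v_S=2\Delta\rho\, g a^2/9\eta$, which give $Pe = 4\pi\,\Delta\rho\, g\, a^4/(3 k_B T)$, growing as the fourth power of the particle radius $a$. The sketch below uses illustrative, assumed parameter values, not those of the experiment.

```python
# Illustrative estimate of the Peclet number Pe = tau_D / tau_S for a sphere of
# radius a: tau_D = a^2 / D with D = k_B T / (6 pi eta a) (Stokes-Einstein),
# tau_S = a / v_S with v_S = 2 (rho_c - rho_s) g a^2 / (9 eta) (Stokes settling).
# Parameter values (density mismatch, temperature, viscosity) are assumptions.
import math

def peclet(a, delta_rho=100.0, g=9.81, kT=4.11e-21, eta=1.0e-3):
    D   = kT / (6 * math.pi * eta * a)           # diffusivity, m^2/s
    v_s = 2 * delta_rho * g * a**2 / (9 * eta)   # settling velocity, m/s
    return (a**2 / D) / (a / v_s)                # = a * v_s / D

# The viscosity cancels: Pe = 4 pi delta_rho g a^4 / (3 kT), so Pe ~ a^4.
assert math.isclose(peclet(2e-6), 16 * peclet(1e-6))
print(f"Pe at a = 1 micron: {peclet(1e-6):.2f}")
```

With these assumed values, $Pe$ is of order unity at a radius of one micron, i.e. right at the colloidal/granular borderline mentioned above.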
For preparative purposes, on the other hand, starting with
a particle-rich layer on top of pure solvent is more relevant as it
enables the separation of particles depending on their sedimentation
coefficient. However, this configuration is unstable with respect to gravity.
The particle velocities become correlated, which leads to emergent
density fluctuations and consequently more rapid sedimentation than
Stokes' flow alone. It is well known that suspensions at many practically
relevant particle concentrations develop this Rayleigh-Taylor (RT) like instability.
On the one hand, this provides an avenue by which the system may be successfully
mixed; on the other hand, this very mixing leads, chaotically,
to a scenario in which separation does not occur.
For stable separation, it is essential to avoid
the RT instability. It is possible to use a density gradient to counteract
the instabilities \cite{manoharan2003}.
The `original' Rayleigh-Taylor instability, which occurs if a heavy, immiscible fluid
layer is placed on top of a lighter one has been intensively studied
for the case of a simple Newtonian fluid both by theory \cite{chandrasekhar},
simulation \cite{kadau2004} and experiment, and
is observed in granular matter \cite{voeltz2001,vinningland2007_1},
in surface-tension dominated colloid-polymer mixtures \cite{aarts2005}
and in a suspension of dielectric particles exposed to an ac electric field gradient \cite{zhao2008}.
Here we consider a suspension of colloidal hard spheres
(without surface tension) of microfluidic dimensions, in
which we have access to all relevant length scales, from the single particle level
to the full system. A systematic study of sedimentation in an
\emph{inhomogeneous} system is presented. We employ three approaches:
experiment, computer simulation and theory. The experimental realisation is
provided by confocal microscopy at the single-particle level
\cite{vanblaaderen1995}, while the simulation is a particle-based mesoscale
technique \cite{malevanets1999} which captures the direct interactions between
the colloidal particles, and, crucially, the solvent which mediates the
hydrodynamic interactions and whose backflow drives the RT instability.
Our results at short times are modelled with a linear stability
analysis \cite{chandrasekhar}.
The RT instability is thought of as a fluctuation in the interface
between two fluids. Since in a hard-sphere suspension there is no phase
separation, we consider a continuous density profile, albeit rapidly varying. To capture the
lateral fluctuations, we consider the stability of this density and
associated pressure profile against fluctuations of wavelength $\lambda$
in a horizontal plane perpendicular to gravity. We consider a slit
geometry of height $L$ which is sketched in Fig. \ref{figSnapShots}\textbf{a}.
In the absence of surface tension, the fluctuations of all wavelengths
are in principle unstable, but short wavelength fluctuations are washed
out by diffusion of the colloidal particles and so do not grow exponentially
\cite{duff1962,kurowski1995}.
Our linear stability analysis, which reveals the stable and fast-growing wavelengths
of fluctuations, is based on a continuum hydrodynamics approach where the colloidal
dispersion is considered as an incompressible one-component fluid with inhomogeneous mass density
$\rho(x)$ and corresponding kinematic viscosity $\nu(x)$ as obtained from the
Saito representation \cite{ladd1990}. The spatially varying
density profile is given by $\rho(x)=\phi(x)\rho_{c}+(1-\phi(x))\rho_{s}$,
where $\rho_{c}$ and $\rho_{s}$ are the mass densities of the colloidal
particles and the solvent. The colloidal packing fraction profile
$\phi(x)$ is an input from an equilibrated simulation for inverted gravity.
The stability of the initial density $\rho(x)$
and pressure $p(x)$ profiles against perturbations
$\delta\rho\propto\delta p\propto\exp\left(i(k_{y}y+k_{z}z)+n(k)t\right)$
with wave number $k=(k_{y}^{2}+k_{z}^{2})^{1/2}$ in the $yz$ plane
and growth rate $n$ is calculated via the linearized Navier-Stokes
equations \cite{chandrasekhar} resulting in the eigenvalue problem
\begin{eqnarray}
&n\{(\rho u_{x}')'-\rho k^{2}u_{x}\}=\{\nu(u_{x}'''-k^{2}u_{x}')+\nu'(u_{x}''+k^{2}u_{x})\}'\nonumber\\
&-k^{2}\{\frac{g}{n}\rho'u_{x}+2\nu'u_{x}'+\nu(u_{x}''-k^{2}u_{x})\}
\end{eqnarray}
with the spatial derivative $\dots'=d\dots/dx$, the strength of
the gravitational field $g$ and the fluid velocity field in gravity
direction $u_{x}(x)$. For a system confined between two rigid walls
we impose $u_{x}=0$ along with the no-slip boundary conditions $du_{x}/dx=0$ at $x=0,L$.
We account for colloid diffusion by the correction term $n^{*}(k)=n(k)-Dk^{2}$ \cite{duff1962,kurowski1995},
with diffusion constant $D=k_{B}T/3\pi\eta_{s}\sigma$ ($\sigma$ colloid diameter) and dynamic solvent
viscosity $\eta_{s}$.
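The diffusive cutoff enters only as a post-processing of the dispersion relation. The following is our own minimal sketch (not code from this work; all numerical values are placeholders chosen so the result is easy to check) of the Stokes--Einstein constant and the correction $n^{*}(k)=n(k)-Dk^{2}$:

```python
import numpy as np

def stokes_einstein_D(kBT, eta_s, sigma):
    """Stokes-Einstein diffusion constant D = kB*T / (3*pi*eta_s*sigma)."""
    return kBT / (3.0 * np.pi * eta_s * sigma)

def corrected_growth_rate(k, n_of_k, D):
    """Diffusion-corrected dispersion relation n*(k) = n(k) - D*k**2."""
    return n_of_k - D * k**2

# Placeholder inputs: a linear, uncorrected n(k), and parameters tuned
# so that D = 1 in these reduced units.
k = np.linspace(0.0, 2.0, 2001)
n = 0.5 * k
D = stokes_einstein_D(kBT=1.0, eta_s=2.0, sigma=1.0 / (6.0 * np.pi))
n_star = corrected_growth_rate(k, n, D)
k_max = k[np.argmax(n_star)]   # fastest-growing wavenumber after the cutoff
```

With these placeholder numbers $D=1$, so the corrected curve $0.5k-k^{2}$ peaks at $k=0.25$: diffusion turns a monotonically growing $n(k)$ into a dispersion relation with a well-defined fastest-growing mode, as discussed below.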
In our computer simulation, which includes solvent-mediated momentum transfer between
the colloidal particles, we consider a suspension of $N=15,048$ hard sphere particles
of mass $M$ immersed in a bath of typically $N_{s}=14,274,843$ solvent particles of mass $m$
and number density $n_s=N_{s}/V$. The solvent particles are subjected to
multi-particle collision dynamics \cite{malevanets1999,lamura2001}, which consists
of two steps. In the streaming step, solvent particles move ballistically
for time $\delta t$. In the collision step, particles are sorted
in cubic cells of size $a$, and their velocities relative to the
center-of-mass velocity of each cell are rotated by an angle $\alpha$
around a random axis. We employed the parameters
$\delta t=0.2\sqrt{ma^2/k_BT}$, $\alpha=3\pi/4$, $n_sa^3=5$
and $M=167m$ in order to achieve the hierarchy of time scales
and the same hydrodynamic numbers as in the experiment, see Ref. \cite{padding2008,ripoll2004} for details.
To enforce no-slip boundary condition on the colloid surface
and the confining walls a stochastic-reflection method \cite{inoue2002}
is applied. Statistical averages for time-dependent quantities are performed
over $200$ independent configurations.
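The two MPCD steps described above can be sketched compactly. The code below is our own simplified illustration (equal-mass point solvent in a fully periodic box; it omits the random grid shift, the confining walls and the coupling to the colloids used in the actual simulations), with the streaming time and rotation angle defaulting to the quoted reduced values $\delta t=0.2$ and $\alpha=3\pi/4$:

```python
import numpy as np

def mpcd_step(pos, vel, box, a=1.0, dt=0.2, alpha=3 * np.pi / 4, rng=None):
    """One multi-particle collision dynamics step for equal-mass solvent
    particles: ballistic streaming, then rotation of the velocities
    relative to each cubic cell's centre-of-mass velocity by an angle
    alpha around a random axis (stochastic rotation dynamics)."""
    rng = np.random.default_rng() if rng is None else rng
    pos = (pos + vel * dt) % box                      # streaming step
    dims = tuple(int(b // a) for b in box)
    idx3 = tuple((pos // a).astype(int).T)            # sort into cells
    cell = np.ravel_multi_index(idx3, dims, mode='wrap')
    new_vel = vel.copy()
    for cid in np.unique(cell):
        members = np.flatnonzero(cell == cid)
        u = vel[members].mean(axis=0)                 # cell COM velocity
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)                  # random unit axis
        dv = vel[members] - u
        # Rodrigues rotation of the relative velocities by angle alpha
        dv_rot = (dv * np.cos(alpha)
                  + np.cross(axis, dv) * np.sin(alpha)
                  + axis * (dv @ axis)[:, None] * (1 - np.cos(alpha)))
        new_vel[members] = u + dv_rot                 # collision step
    return pos, new_vel
```

By construction each cell's momentum and kinetic energy are conserved in the collision step, which is what makes this cheap solvent a valid hydrodynamic medium.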
\begin{figure}
\includegraphics[width=8.6cm]{figSnapShots2}
\caption{\textbf{a}, A schematic illustrating the spatial parameters
$\sigma$, $\lambda$ and $L$. \textbf{b-e}, Simulation snapshots of a system
which contains $N=33,858$ colloidal particles and $N_{s}=32,118,397$ solvent particles
(not displayed) in a simulation box with dimensions $L/\sigma=18$
and $L_{y}/\sigma=L_{z}/\sigma=81$. The value of the Peclet number
is $Pe=1.6$. \textbf{b-d}, Time series of the system at time $t/\tau_{S}=3.2$
$(\textbf{b}),$ $6.4$ $(\textbf{c}),$ $9.6$ $(\textbf{d})$. The
snapshots are slices of thickness $2\sigma$ done in the $xy$ plane.
\textbf{e}, Slice of thickness $2\sigma$ in the $yz$ plane at time
$t/\tau_{S}=9.6$. The height of the $yz$ plane is $x/L=2/3$, as
indicated by the dashed line in (\textbf{d}). \textbf{f-i}, Experimental
realisation of the Rayleigh-Taylor-like instability.
\textbf{f-h}, Time series of images taken with
a confocal microscope in the $xy$ plane for the parameters $\phi=0.15$,
$Pe=1.1$ and $L_{x}/\sigma=18$ at times $t/\tau_{S}=1.43$ $(\textbf{f}),$
$5.5$ $(\textbf{g}),$ $11.22$ $(\textbf{h})$. \textbf{i}, Slice
in the $yz$ plane at a height $x/L=2/3$ (indicated by the dashed
line in (\textbf{h})) at time $t/\tau_{S}=11.22$. In $(\textbf{f-i})$
the scale bar denotes 40 $\mu$m.}
\label{figSnapShots}
\end{figure}
In our single-particle level confocal microscopy experiments we used polymethylmethacrylate colloids sterically stabilised
with polyhydroxy-stearic acid. The colloids
were labeled with the fluorescent dye coumarine and had a diameter
$\sigma=2.8$ $\mu$m with around 4\% polydispersity as determined
by static light scattering. To almost match the colloid refractive
index we used a solvent mixture of cis-decalin and cyclohexyl bromide
(CHB), which we tuned to yield different Peclet numbers, owing to
changes in the degree of density mismatch between colloids and solvent.
Specifically, $Pe=1.1$ and $Pe=2.4$ correspond to 80\% and 87.5\% CHB by
weight, respectively. The characteristic time to diffuse a radius is
$\tau_D \approx 29$ s. The data were collected
on a Leica SP5 confocal microscope, fitted with a resonant scanner,
at a typical scan-rate of around 10 s per 3D data set. Prior equilibration
was achieved by placing the suspension overnight such that it sedimented
across a thin (typically $50$ $\mu$m) capillary. The capillary
was then inverted, and the evolution under sedimentation was followed.
We begin our discussion by presenting snapshots of the system,
in Fig. \ref{figSnapShots}\textbf{b-e} from computer simulation,
and in Fig. \ref{figSnapShots}\textbf{f-i} from confocal microscopy.
The similarity is remarkable, and we note that, at the very
least, our simulation qualitatively reproduces the experiment.
For a quantitative comparison, we consider the
dispersion relation of growth rate against wavenumber in
Fig. \ref{figInitialEvolution}\textbf{b}.
The time evolution in the development of the RT instability with
a characteristic wavelength is clear. While snapshots in the gravity
plane (Fig. \ref{figSnapShots}\textbf{b, c, d, f, g,} and
\textbf{h}) illustrate the overall process of sedimentation, snapshots
in the horizontal $yz$ plane show the transient pattern or network-like structure
that results from the RT instability (Fig. \ref{figSnapShots}\textbf{e}
and \textbf{i}). At later times, the network structure decays and
a laterally homogeneous density profile develops where the colloids
start to form a layer at the bottom of the cell which becomes more
compact with time. The time evolution is shown in detail in the Movies 1-4,
see EPAPS Document No. [].
The linear stability analysis predicts the existence of the initially fastest growing wavelengths
in the RT instability. We plot the results of the linear stability
analysis for a range of slit widths $L$ keeping $Pe$ fixed in Fig. \ref{figInitialEvolution}\textbf{a},
and for a variety of Peclet numbers keeping $L$ fixed in
Fig. \ref{figInitialEvolution}\textbf{b} and \textbf{c}. The dimensionless growth rates,
$n\tau_{D}$, are plotted as a function of wave number $k\sigma=2\pi\sigma/\lambda$,
where $\lambda$ is the wavelength of the fluctuations as indicated
in Fig. \ref{figSnapShots}\textbf{a}. Without diffusion, fluctuations at all wave numbers
are unstable, as shown by the solid lines in Fig. \ref{figInitialEvolution}\textbf{a,
b} and \textbf{c}. Due to diffusion, we find that growth
rates at higher wavevectors are suppressed as expected, i.e., diffusion
destroys the Rayleigh-Taylor instability at sufficiently small wavelengths.
We find excellent agreement between the theory with diffusion and
both simulation and experimental data, up to $k\sigma\approx1$,
which is surprising for a coarse-grained continuum description. With
decreasing wall separation $L$, the growth rate $n_{max}$ decreases
and $k_{max}$ increases, see Fig. \ref{figInitialEvolution}\textbf{a}.
Since the fluid velocity in the gravity direction decreases as $e^{-kx}$,
where $x$ is the distance from the interface, only long wavelength
undulations feel the presence of the walls \cite{chandrasekhar,mikaelian1996}.
Figure \ref{figInitialEvolution}\textbf{b} and \textbf{c} show that
for fixed $L$, driving the sedimentation more strongly by increasing
the Peclet number leads to an increase in the wave number
of the fastest-growing undulation $k_{max}$, and the corresponding
growth rate $n_{max}$.
\begin{figure}
\includegraphics[width=8.6cm]{figInitialEvolution}
\caption{\textbf{a-c} Growth rate $n\tau_{D}$ versus wave number
$k\sigma=2\pi\sigma/\lambda$. \textbf{a},
Simulation results of $n(k)$ for different wall separation distances
$L/\sigma=18,12,9$ and fixed $Pe=1.6$. \textbf{b}, Simulation and
experimental results of $n(k)$ for different Peclet numbers $Pe=4.8,1.6,1.1,0.8,0.4$
and fixed $L/\sigma=18$. \textbf{c}, Experimental results of $n(k)$
for different Peclet numbers $Pe=2.42,1.21$ and fixed $L/\sigma=36$.
The open symbols are the results obtained from simulation, filled
symbols are experimental results, solid lines represent the solutions from the
instability analysis and the dashed lines are the same numerical solutions plus
the diffusion correction. First moment of the colloid density
$\langle x\rangle/L$ (\textbf{d}),
(\textbf{f}), (\textbf{h}) and second moment of the colloid density
$\sigma_{x}/L$ (\textbf{e}), (\textbf{g}), (\textbf{i}) versus time
$t/\tau_{S}$. Solid lines indicate simulation data whereas the dashed
lines indicate experimental data. \textbf{d,e}, $L/\sigma=18,12,9$
and $Pe=1.6$. \textbf{f,g}, $Pe=4.8,1.6,1.1,0.8,0.4$ and $L/\sigma=18$.
\textbf{h,i}, $Pe=2.42,1.21$ and $L/\sigma=36$.}
\label{figInitialEvolution}
\end{figure}
So far we have considered only the linear regime of the instability,
which is valid at small times, when the amplitude of the fluctuations
is smaller than the wavelength. Our experiments and simulations permit
detailed access to all relevant time- and length-scales in the non-linear
regime, where the colloids form foam-like structures in the (confined)
$xz$ plane (Fig. \ref{figSnapShots}\textbf{c},\textbf{g})
and a network-like structure in the $yz$ plane (Fig. \ref{figSnapShots}\textbf{e},\textbf{i})
appears. Apparently, both continue to exhibit the characteristic length scale $\lambda_{max}=2\pi/k_{max}$
of the fastest growing wavelength in the linear regime.
\begin{figure}
\includegraphics[width=8.6cm]{fig3melting}
\caption{Time series of images taken with a confocal microscope in the $xy$
plane for the parameters $\phi=0.15$, $Pe=2.42$ and $L_{x}/\sigma=36$ at times
$t/\tau_{S}=3.1$ $(\textbf{a}),$ $8.1$ $(\textbf{c}),$ $24.6$ $(\textbf{d})$
and $56$ $(\textbf{e})$. The crystalline layers are clearly visible.
$(\textbf{b})$ is a slice in the $yz$ plane approximately in the middle
of the colloidal crystal in $(\textbf{a})$. The secondary instability occurs
at $t/\tau_S\approx20-30$, see $(\textbf{d})$.
In $(\textbf{a,c-e})$ the scale bar denotes 40 $\mu$m and 20 $\mu$m
in $(\textbf{b})$.}
\label{figMelting}
\end{figure}
In order to quantify the different regimes of the instability we use
the first moment of the density $\langle x\rangle$, i.e. the centre
of mass of the colloid coordinates and the second moment of the density
$\sigma_{x}^{2}=\langle x^{2}\rangle-\langle x\rangle^{2}$. Here, $\langle x\rangle$
is a measure of the degree of sedimentation, while $\sigma_{x}$ quantifies
the extent to which the instability spreads out the colloids in the
gravity direction. Three regimes are clearly visible in Fig. \ref{figInitialEvolution}
\textbf{d, f, h, e, g} and \textbf{i}.
Firstly we find the linear regime in which the flat interface develops
undulations and hence $\langle x\rangle$ slowly decreases and $\sigma_{x}$
slowly increases, secondly the non-linear regime where `droplets'
of colloid-rich material fall to the bottom and therefore $\langle x\rangle$
sharply decreases and $\sigma_{x}$ sharply increases, and thirdly
the regime in which the colloids start to form a layer at the bottom
of the cell which becomes more compact with time under settling as
can be seen from the slow decrease of both $\langle x\rangle$ and
$\sigma_{x}$. Clear agreement between simulation and experimental data
can be seen from Fig. \ref{figInitialEvolution} \textbf{f} and \textbf{g}.
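The two diagnostics are simple to evaluate; a sketch (our own, for illustration) acting on an array of colloid heights:

```python
import numpy as np

def density_moments(x):
    """Centre of mass <x> and spread sigma_x = sqrt(<x^2> - <x>^2)
    of the colloid height distribution."""
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    # guard against tiny negative round-off under the square root
    sigma = np.sqrt(max((x ** 2).mean() - mean ** 2, 0.0))
    return mean, sigma
```

Tracking these two numbers over the stored configurations directly reproduces the three regimes described above: slow drift, sharp drop of $\langle x\rangle$ with a spike in $\sigma_x$, then slow compaction.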
In the case of a rather large slit width $L/\sigma=36$, there is
a sufficiently high sediment for a region of colloidal crystal to form,
see Fig.~\ref{figMelting} and EPAPS Document No. [] for Movie 5. Since the
crystal has a finite (albeit small) yield stress, the only flow we
observe initially occurs in a thin fluid layer between the crystal and the lower
solvent region via narrow vertical tubes, in marked contrast
to Fig. \ref{figSnapShots} \textbf{b-d}.
The crystal melts layer by layer until finally it
becomes sufficiently thin that it peels off the wall in a second instability,
which leads to a change of slope for the $Pe=2.42$ line in
Fig. \ref{figInitialEvolution}\textbf{h} and \textbf{i} at $t/\tau_{S}\approx20-30$,
see Fig. \ref{figMelting} \textbf{d}, until most of the particles have sedimented down
(Fig. \ref{figMelting} \textbf{e}).
This observation of driven surface melting at the single
particle level has the potential to provide new insight into this
poorly understood phase transition under non-equilibrium conditions.
Using state-of-the-art simulation and experimental techniques, we
have presented a quantitative analysis of a hydrodynamic instability
in a colloidal system at a microfluidic lengthscale. Our
results show excellent agreement between experiment and simulation,
showing that the latter accurately describes the fundamentally
and practically important phenomena caused by hydrodynamic instabilities.
Furthermore, by employing a simple theoretical treatment to the initial
linear behaviour, we find considerable predictive power. The theory can
flexibly be used to predict conditions for separation and mixing.
We also note that the theory reveals even the length scale of the
network structure that results from the instability.
We finally emphasise that the full access to, and accuracy at, all relevant
length scales in this problem allowed for the observation of novel phenomena,
not yet explored further, such as the inverse-gravity-induced crystal melting.
\begin{acknowledgments}
We acknowledge ZIM for computing time.
A. W. thanks E. W. Laedke and G. Lehmann for help.
We acknowledge A. A. Louis and J. T. Padding for discussions.
The authors are grateful to Didi Derks for a kind gift
of PMMA colloids.
A. W., R. G. W., G. G., H. L. and A. v. B. thank the DFG/FOM for support
in particular via SFB TR6 (projects A3, A4 and D3).
C. P. R. acknowledges the Royal Society for Funding.
H. T. acknowledges a grant-in-aid from MEXT.
\end{acknowledgments}
\section{Introduction}
Let $L$ be an even lattice of signature $(l, 2)$. The Borcherds lift \cite{Borcherds} is a multiplicative lifting that takes vector-valued modular forms on $\mathrm{SL}_2(\mathbb{Z})$ with poles at cusps for a finite Weil representation attached to $L$ as input and yields automorphic forms for the orthogonal group of $L$. The \emph{Borcherds products} constructed in this way have many interesting properties. For example, in a neighborhood of any cusp, they are represented by a converging infinite product in which the exponents are Fourier coefficients of their input function. Moreover, the zeros and poles of any Borcherds product lie on rational quadratic divisors and their orders can be read immediately off of the Fourier expansion of the input function. Of particular importance are Borcherds products of \emph{singular weight} $l / 2 - 1$, as the Fourier expansion of any singular-weight modular form is supported only on vectors of norm zero, implying massive cancellation when the product is expanded and suggesting a deeper underlying structure. Indeed, many of the known singular-weight products arise as Weyl denominators of generalized Kac--Moody algebras (see for example \cite{Scheithauer}).
There are no known examples of singular-weight Borcherds products on lattices $L$ as above with odd $l \ge 5$. More generally, to the authors' knowledge, no examples of Borcherds products of any half-integral weight appear in the literature for lattices with $l \ge 4$.
In this note we will show that for every $l \in \mathbb{N}$ there exist even lattices of signature $(l, 2)$ that admit holomorphic Borcherds products of half-integral weight. Among even lattices that split two hyperbolic planes, we prove that lattices that admit Borcherds products of half-integral weight have a very simple characterization. (Recall that a lattice $L$ splits two hyperbolic planes if it can be written in the form $L = K \oplus 2U$, where $K$ is an even lattice and where $U$ is unimodular of signature $(1, 1)$, and $2U = U \oplus U$.)
\begin{thrm} Let $L$ be an even lattice of the form $L = K \oplus 2U$ where $K$ is positive-definite and $U$ is unimodular of signature $(1, 1)$ and let $\langle -, - \rangle : L \times L \rightarrow \mathbb{Z}$ be its inner product. The following are equivalent: \\ (i) $L$ admits holomorphic Borcherds products of half-integral weight; \\ (ii) $\langle x,y \rangle \in 8 \mathbb{Z}$ for all $x,y \in K$.
\end{thrm}
\textbf{Acknowledgments.} The authors thank Jan Bruinier and Eberhard Freitag for helpful discussions. H. Wang is grateful to the Max Planck Institute for Mathematics in Bonn for its hospitality and financial support. B. Williams is supported by a fellowship of the LOEWE focus group Uniformized Structures in Arithmetic and Geometry.
\section{Preliminaries}
For lattices that split two hyperbolic planes, the Borcherds lift can be conveniently expressed in terms of Jacobi forms. Suppose $K$ is an even positive-definite lattice with dual lattice $K'$. A \emph{weakly holomorphic Jacobi form} of weight $k \in \mathbb{Z}$ and lattice index $K$ is a holomorphic function $\phi : \mathbb{H} \times (K \otimes \mathbb{C}) \rightarrow \mathbb{C}$ satisfying $$\phi \Big( \frac{a \tau + b}{c \tau + d}, \frac{\mathfrak{z}}{c \tau + d} \Big) = (c \tau + d)^k \mathbf{e}\Big( \frac{c}{c \tau + d} Q(\mathfrak{z})\Big) \phi(\tau, \mathfrak{z})$$ and $$\phi(\tau, \mathfrak{z} + \lambda \tau + \mu) = \mathbf{e}(-Q(\lambda)\tau - \langle \lambda, \mathfrak{z} \rangle) \phi(\tau, \mathfrak{z})$$ for all $M = \begin{psmallmatrix} a & b \\ c & d \end{psmallmatrix} \in \mathrm{SL}_2(\mathbb{Z})$ and $\lambda, \mu \in K$. Here $\mathbb{H}$ is the upper half-plane and $\mathbf{e}(x) = e^{2\pi i x}$. The qualifier \emph{weakly holomorphic} means that the Fourier expansion may be a Laurent series in the variable $q$: $$\phi(\tau, \mathfrak{z}) = \sum_{n \gg -\infty} \sum_{\ell \in K'} c(n, \ell) q^n \zeta^{\ell}, \; \; \zeta^{\ell} := e^{2\pi i \langle \ell, \mathfrak{z} \rangle}.$$ We do not impose any condition on $\ell$; however, the transformation laws imply that for any fixed $n$ only finitely many terms appear in the sum $\sum_{\ell \in K'} c(n,\ell) \zeta^{\ell}$.
Now suppose $\phi$ has weight $0$ and integral Fourier coefficients. The \emph{Borcherds lift} of $\phi$ is a meromorphic modular form of weight $c(0, 0) / 2$ on the Type IV domain attached to the lattice $L = K \oplus 2U$. In terms of the tube domain model $$\mathbb{H}_K = \{Z = (\tau, \mathfrak{z}, w) \in \mathbb{H} \times (K \otimes \mathbb{C}) \times \mathbb{H}: \; Q(\mathrm{im}(\mathfrak{z})) < (\mathrm{im} \, \tau) \cdot (\mathrm{im}\, w) \},$$ writing $q = e^{2\pi i \tau}$, $s = e^{2\pi i w}$, and $r^{\ell} = e^{2\pi i \langle \ell, \mathfrak{z} \rangle}$, the Borcherds lift of $\phi$ is represented locally as the product $$q^A r^B s^C \prod_{(n, \ell, m) > 0} (1 - q^n r^{\ell} s^m)^{c(nm, \ell)},$$ where $(n, \ell, m) > 0$ is a positivity condition with respect to a \emph{Weyl chamber} and $(A, B, C)$ is the associated \emph{Weyl vector}. (See Theorem 4.2 of \cite{Gritsenko} for details.)
This formulation and Borcherds' vector-valued modular forms are related by the theta decomposition. If we write $$\phi(\tau, \mathfrak{z}) = \sum_{\gamma \in K' / K} f_{\gamma}(\tau) \Theta_{K, \gamma}(\tau, \mathfrak{z}), \; \text{where} \; \Theta_{K, \gamma}(\tau, \mathfrak{z}) = \sum_{\ell \in K + \gamma} q^{Q(\ell)} \zeta^{\ell},$$ then the association $$\phi \leftrightarrow F(\tau) = \sum_{\gamma \in K' / K} f_{\gamma}(\tau) \mathfrak{e}_{\gamma}$$ (where $\mathfrak{e}_{\gamma}$ are the basis elements of $\mathbb{C}[K'/K]$) defines an isomorphism between weakly holomorphic Jacobi forms of weight $0$ and weakly holomorphic vector-valued modular forms of weight $-\mathrm{rank}(K) / 2$ for the Weil representation for $K$. The latter may be interpreted as input functions in Borcherds' sense after identifying the Weil representations for $K$ and $L = K \oplus 2U$.
\section{Proof of Theorem 1}
(i) $\Rightarrow$ (ii): Suppose $\Psi$ is the Borcherds lift of a weakly holomorphic Jacobi form $$\phi(\tau, z) = \sum_{n \gg -\infty} \sum_{\ell \in K'} c(n, \ell) q^n \zeta^{\ell},$$ and let $N$ be the greatest common divisor of all inner products of vectors in $K$. Borcherds' congruence (\cite{Borcherds}, Theorem 11.2; a simpler proof for Jacobi forms is given in Corollary 4.5 of \cite{Wang}) implies $$N \sum_{\ell \in K'} c(0, \ell) \equiv 0 \, (\text{mod} \, 24).$$ By definition, each term $c(0, \ell)$ in the above sum is integral. If $\Psi$ has half-integral weight, then $c(0,0)$ is odd. Since $c(0, \ell)$ is symmetric under $\ell \mapsto -\ell$, it follows that $\sum_{\ell \in K'} c(0, \ell)$ is odd and therefore $8 | N$. \\
(ii) $\Rightarrow$ (i): The existence proof is constructive. We proceed in two steps. \\
Step 1. We first construct a Borcherds product of half-integral weight when $K = 8\mathbb{Z}^N$; in other words, a Gram matrix for $K$ is given by the diagonal matrix $\mathrm{diag}(8,...,8)$. Recall (e.g. \cite{Borcherds1}, Example 5 of section 15; or \cite{GN}, Example 2.3) that there is a holomorphic product of weight $1/2$ attached to the lattice $K = 8 \mathbb{Z}$. The input function as a weak Jacobi form can be given as a quotient of theta functions:
$$
\phi_{0,4}(\tau, z) = \frac{\vartheta(\tau, 3z)}{\vartheta(\tau, z)} = \zeta + 1 + \zeta^{-1} + O(q),
$$
where $q = e^{2\pi i \tau}$, $\zeta = e^{2\pi i z}$, and (by Jacobi's triple product)
\begin{align*} \vartheta(\tau, z) &= \sum_{n = -\infty}^{\infty} \left( \frac{-4}{n} \right) q^{\frac{n^2}{8}} \zeta^{\frac{n}{2}} \\ &=q^{\frac{1}{8}}(\zeta^{\frac{1}{2}}-\zeta^{-\frac{1}{2}})\prod_{n=1}^\infty(1-q^n\zeta)(1-q^n\zeta^{-1})(1-q^n). \end{align*}
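As a quick self-contained check of the quoted expansion (our own illustration; it uses nothing beyond the triple product above, in which the $q^{1/8}$ and $(1-q^{n})$ factors cancel in the quotient), one can expand $\vartheta(\tau,3z)/\vartheta(\tau,z)$ as a truncated Laurent series in $q$ and $\zeta$:

```python
from collections import defaultdict

def mul(a, b, qmax):
    """Multiply two truncated series stored as {(n, l): c}, where the
    monomial is q**n * zeta**l; terms beyond q**qmax are dropped."""
    out = defaultdict(int)
    for (n1, l1), c1 in a.items():
        for (n2, l2), c2 in b.items():
            if n1 + n2 <= qmax:
                out[(n1 + n2, l1 + l2)] += c1 * c2
    return {k: v for k, v in out.items() if v != 0}

def phi_04(qmax=4):
    """q-expansion of theta(tau,3z)/theta(tau,z) from the triple product:
    the q**(1/8) and (1-q**n) factors cancel, leaving the prefactor
    zeta + 1 + zeta**(-1) times four factors for each n >= 1."""
    phi = {(0, 1): 1, (0, 0): 1, (0, -1): 1}
    for n in range(1, qmax + 1):
        num = mul({(0, 0): 1, (n, 3): -1}, {(0, 0): 1, (n, -3): -1}, qmax)
        # geometric-series inverses of (1 - q**n zeta**(+-1)), truncated
        inv_p = {(j * n, j): 1 for j in range(qmax // n + 1)}
        inv_m = {(j * n, -j): 1 for j in range(qmax // n + 1)}
        phi = mul(mul(phi, num, qmax), mul(inv_p, inv_m, qmax), qmax)
    return phi
```

Truncating at $q^{4}$, the constant term is indeed $\zeta+1+\zeta^{-1}$, and the expansion is symmetric under $z\to-z$, as it must be since $\vartheta$ is odd in $z$.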
For any $N \in \mathbb{N}$, we obtain a weak Jacobi form of weight $0$ and index $8 \mathbb{Z}^N$ by repeatedly taking the direct product of $\phi_{0,4}$ with itself: $$\phi_N(\tau, z_1,...,z_N) := \phi_{0,4}(\tau, z_1) \phi_{0,4}(\tau, z_2) \cdots \phi_{0,4}(\tau, z_N).$$ The Borcherds lift $\Psi_N$ of $\phi_N$ is then a (meromorphic) Borcherds product of weight $1/2$. By a theorem of Bruinier (\cite{Bruinier}, Theorem 1.3) $\Psi_N$ is a quotient of two holomorphic Borcherds products; one of these is of half-integral weight. \\
Step 2. Let $L$ be any even lattice satisfying (ii). The rescaled lattice $K(1/8)$ is integral (not necessarily even) and embeds in a positive-definite Type I unimodular lattice of rank at most $l + 1$ by Corollary 8 of \cite{CS}. It follows that $K$ is embedded in an even lattice $M$ lying in the same genus as $8\mathbb{Z}^N$ for some $N$. Since even lattices in the same genus yield equivalent Jacobi forms, we obtain from Step 1 a holomorphic half-integral weight Borcherds product attached to $M \oplus 2U$. By \cite{Ma}, its quasi-pullback to $L$ is a Borcherds product of half-integral weight.
\section{Examples}
Every step of the above proof can be made constructive, but the half-integral weight products constructed in this way tend to have rather large weight and complicated divisors. Therefore it seems worthwhile to present a few simpler examples of half-integral weight Borcherds products on lattices of rank greater than $5$.\\
In this section we give two examples of half-integral weight Borcherds products associated to lattices of signature $(4, 2)$. These can also be interpreted as Hermitian modular forms of degree two over the Eisenstein and Gaussian integers, respectively. In both cases we give only the principal part of the input form as a vector-valued modular form (which determines the weight and divisor). The two examples were computed in SAGE \cite{sagemath}.
\begin{ex} Take $K = \mathbb{Z}^2$ with Gram matrix $\begin{psmallmatrix} 16 & 8 \\ 8 & 16 \end{psmallmatrix}$. The lattice $L = K \oplus 2 U$ admits (at least) four holomorphic Borcherds products of weight $9/2$. These four forms are conjugates under the symmetries of the discriminant form. The principal part of one of the forms is
\begin{align*}
9 \mathfrak{e}_{(0,0)} &+ q^{-1/24} (4 \mathfrak{e}_{\pm (11/12, 1/24)} + 4 \mathfrak{e}_{\pm (1/24, 1/24)} + 4 \mathfrak{e}_{\pm (1/24, 11/12)} - \mathfrak{e}_{\pm (7/12, 5/24)} \\
&- \mathfrak{e}_{\pm (5/24, 5/24)} - \mathfrak{e}_{\pm (5/24, 7/12)} + 2 \mathfrak{e}_{\pm (5/12, 7/24)} + 2 \mathfrak{e}_{\pm (7/24, 7/24)} + 2 \mathfrak{e}_{\pm (7/24, 5/12)}\\
& + \mathfrak{e}_{\pm (1/12, 11/24)} + \mathfrak{e}_{\pm (11/24, 11/24)} + \mathfrak{e}_{\pm (11/24, 1/12)}) + q^{-1/8} (3 \mathfrak{e}_{\pm (1/8, 0)}\\
& + 3 \mathfrak{e}_{\pm (0, 1/8)}+ 3 \mathfrak{e}_{\pm (1/8, 7/8)}) + q^{-1/6} (\mathfrak{e}_{\pm (1/6, 5/12)} + \mathfrak{e}_{\pm (5/12, 5/12)} \\
&+ \mathfrak{e}_{\pm (5/12, 1/6)} + 2 \mathfrak{e}_{\pm (5/6, 1/12)} + 2\mathfrak{e}_{\pm (1/12, 1/12)} + 2 \mathfrak{e}_{\pm (1/12, 5/6)}),
\end{align*}
where $\mathfrak{e}_{\pm \gamma}$ represents $\mathfrak{e}_{\gamma} + \mathfrak{e}_{-\gamma}$.
\end{ex}
\begin{ex} Take $K = \mathbb{Z}^2$ with Gram matrix $\begin{psmallmatrix} 8 & 0 \\ 0 & 8 \end{psmallmatrix}.$ The lattice $L = K \oplus 2U$ admits two holomorphic Borcherds products of weight $7/2$. These can be interpreted as Hermitian modular forms for a level $4$ subgroup over the Gaussian integers. These forms are conjugate under an involution of the discriminant form. The principal part of one of the forms is \begin{align*} 7 \mathfrak{e}_{(0, 0)} &+ q^{-1/16} ( 3\mathfrak{e}_{\pm (1/8, 0)} + 3\mathfrak{e}_{\pm (0, 1/8)} + \mathfrak{e}_{\pm (1/2, 1/8)} + \mathfrak{e}_{\pm (1/8, 1/2)}) \\ &+ q^{-1/8} (\mathfrak{e}_{\pm (1/8, 1/8)} + \mathfrak{e}_{\pm (1/8, 7/8)}) \\ &+q^{-1/4} (\mathfrak{e}_{\pm (1/4, 0)} + \mathfrak{e}_{\pm (0, 1/4)}). \end{align*}
\end{ex}
\bibliographystyle{plainnat}
\bibliofont
\section{Introduction}\label{sect:introduction}
The concept of universal behavior in physical systems is very fruitful and has been successfully spread to other quantitative sciences.
At the theoretical level quantum and statistical field theories are important tools to study the approach to criticality of physical systems,
where most of the details of the microscopic interactions are washed off by the presence of a second order phase transition and universal features emerge.
In fact, only the degrees of freedom involved, the symmetries and the dimensionality of the system play a crucial role.
In the so-called theory space of all quantum field theories (QFTs) these universal features appear as special points corresponding to critical theories in which the correlation length diverges and physics is nontrivial at all scales. These special QFTs are often lifted to become conformal field theories (CFTs), i.e.\ scale invariance together with Poincar\'e symmetry is promoted to full conformal invariance~\cite{Luty:2012ww}.
Below their upper critical dimension, which is defined as the dimensionality above which a QFT exhibits Gaussian exponents,
these QFTs are generally interacting.
In order to investigate these systems nonperturbatively one can resort to numerical techniques, when such an approach is feasible.
These investigations might, for example, take the form of Monte-Carlo simulations which are based on the application of the Metropolis algorithm on a lattice~(see for example \cite{lattice} and references therein).
From our -- admittedly theoreticians' -- point of view simulations are useful to benchmark the results obtained with alternative analytical, sometimes approximate, methods.
Historically the most recognized analytical method to investigate theories which exhibit a second order phase transition is the renormalization group (RG) approach~\cite{Cardy:1996xt}, especially after the impetus given by the pioneering work of Wilson~\cite{Wilson:1971bg}.
Along this line both perturbative and nonperturbative RG approaches can be employed.
The latter has acquired the name of exact RG, which is typically formulated at functional level in terms of flow equations for the Wilsonian action~\cite{Wegner:1972ih,Polchinski:1983gv}
or for the 1PI effective average action~\cite{Wetterich:1992yh,Morris:1994ie}.
Perturbative investigations have been around since the early days~\cite{Wilson:1971dc,Wilson:1973jj}
and have mainly led to results expressed in the $\epsilon$-expansion
and its resummation below the upper critical dimension of a QFT~\cite{epsilon}.
In general, RG methods, especially if nonperturbative, can give access to global flows, i.e.\ flows which cover the full theory space.
Consequently such flows have an enormous amount of information but require an equal amount of difficult computations.
One usually needs to resort to some approximations in order to obtain tractable RG equations in the full theory space,
but reasonably precise results can be nevertheless obtained, as shown for example
in the computation of the critical exponents of the {\tt Ising} universality class and its $O(N)$-symmetric extensions~\cite{Guida:1998bx}.
The investigation of several different universality classes -- new and old -- continues to this day
and certainly will not lose momentum any time soon~\cite{Pelissetto:2000ek,Gracey:2015tta, Gracey:2016mio,Gracey:2017oqf}.
Soon after the early developments of the Wilsonian action,
it has been observed that the perturbative RG too can be conveniently formulated at functional level~\cite{Jack:1982sr,ODwyer:2007brp}.
In this approach, later referred to as the functional perturbative RG, one constructs beta functions which encode
the scale dependence of several couplings at the same time
and obtains results for several quantities in a more efficient way.
This method has also another important advantage which is relevant for this paper:
it can be used with a lot of generality in that very little of the system under investigation must be specified a priori.
For example, it can be used for theories with rather arbitrary interacting potential (as we will do in this paper),
as well as for families of scalar theories with both unitary and nonunitary interactions~\cite{Codello:2017hhh,Codello:2017epp},
and even for higher derivative theories in which the derivative interactions are present at criticality~\cite{Safari:2017irw,Safari:2017tgs}.
Since it may also happen that internal symmetries emerge at critical points~\cite{Brezin:1973jt, Zia:1974nv,Michel:1983in, TMTB,Osborn:2017ucf},
this method can be taken as a starting point to investigate theories not constrained a priori by any symmetry, even including supersymmetry \cite{Wallace:1975ez,Gies:2017tod}.
In the past few years alternative methods based on conformal invariance have gained considerable popularity
and shown increasing success.
The general strategy of these methods is to focus on the critical points in the space of all theories,
assume that scale invariance is promoted to be local,
and consequently exploit the enhanced conformal symmetry of the system.
It is no overstatement to say that if one assumes that critical points are CFTs, even in $d>2$,
there is a significant advantage when computing close-to-criticality quantities
because of the constraints on correlators imposed by conformal symmetry.
These ideas are at the base of the conformal bootstrap approach~\cite{Rattazzi:2008pe,ElShowk:2012ht}, which follows an early suggestion by A.~Polyakov and is based on the consistency conditions that are obtained by rewriting
the conformal partial wave expansion in two of the $s$, $t$ and $u$ channels thanks to operator product expansion (OPE) associativity.
This method was employed in the analysis of some critical theories and is currently giving
the most precise evaluation of the critical exponents of the universality class of the Ising model~\cite{Simmons-Duffin:2015qma},
and is able to deal with various symmetry groups (see for example \cite{Stergiou:2018gjj}).
Nevertheless, in order to push the analysis to the best accuracy, a good amount of computing power is required
even for the conformal bootstrap.
In this light, CFT methods have taken the stage as consistent and numerically effective substitutes of both lattice and RG methods at criticality.
Besides the numerical achievements of the conformal bootstrap,
several analytic realizations of the underlying idea have been developed, including some
which involve perturbative expansions in small parameters such as $\epsilon$.
Among these methods we mention those based on the singularity structure of conformal blocks~\cite{Gliozzi:2016ysv} and their Mellin representation~\cite{Gopakumar:2016wkt}, and on the large spin expansions~\cite{Alday:2016njk,Alday:2017zzv,Henriksson:2018myn}.
In this work we concentrate on a CFT-based method which determines the conformal data
of a theory in the $\epsilon$-expansion by requiring consistency between
the Schwinger-Dyson equation (SDE), related to a general action at criticality,
and conformal symmetry in the Gaussian limit $\epsilon\to0$ \cite{Rychkov:2015naa}.
We refer to this method as SDE+CFT for brevity
and briefly discuss some properties of both SDE and CFT in the following paragraphs.
The interplay of these properties is the essential building block of this paper's analysis.
The SDE, which at the leading order are nothing but the generalizations of the classical equations of motion,
can constrain at the operatorial (or functional) level the correlators of a given QFT.
Contact terms at separate points are absent, and any insertion of the equations of motion in a correlator
constructed with a string of operators provides a nontrivial relation among correlators.
In particular this is also true if the QFT is a CFT,
so that for any state of the CFT and for any list of operators $O_i$
resulting from the representation of the conformal group
one has the relation
\begin{align}
\left\langle\frac{\delta S}{\delta\phi}(x) \, O_1(y) \, O_2 (z) \dots\right\rangle = 0,
\label{prop}
\end{align}
where $S$ is the conformally invariant action. Furthermore, conformal symmetry greatly constrains the correlators appearing in the above equation, even in $d>2$.
Adopting a basis $O_a$ of normalized scalar primary operators with scaling dimensions $\Delta_a$,
the two point correlators have the following form:
\begin{align}\label{cft-2pf}
\braket{O_a(x) O_b(y)} =\frac{\delta_{ab}} {|x-y|^{2 \Delta_a}} .
\end{align}
The three-point correlator for scalar primary operators is also strongly constrained by conformal symmetry and reads
\begin{align}\label{cft-3pf}
\braket{O_a(x) O_b(y) O_c(z)} =\frac{C_{abc} }{|x-y|^{\Delta_{a}+\Delta_{b}-\Delta_{c}}
|y-z|^{\Delta_{b}+\Delta_{c}-\Delta_{a}} |z-x|^{\Delta_{c}+\Delta_{a}-\Delta_{b}}} \,,
\end{align}
where $C_{abc}$ are the structure constants of the CFT.
Thanks to the power of conformal symmetry, a
CFT is completely and uniquely determined by providing a basis of primary operators $O_a(x)$,
the scaling dimensions $\Delta_a$ and the structure constants $C_{abc}$, which together are known as \emph{CFT data}.
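As a quick illustration of how rigid these forms are, one can check numerically that the three-point structure in Eq.~\eqref{cft-3pf} rescales homogeneously under a global dilatation $x\to\lambda x$ with overall weight $\lambda^{-(\Delta_a+\Delta_b+\Delta_c)}$. The sketch below is purely illustrative (plain Python; the scaling dimensions and the one-dimensional positions are made-up sample values, since only the distance dependence matters for this check):

```python
import math

def three_pt(x, y, z, Da, Db, Dc, C=1.0):
    """Conformal three-point function of scalar primaries; positions are taken
    on a line since only the distance dependence matters for this check."""
    return C / (abs(x - y)**(Da + Db - Dc)
                * abs(y - z)**(Db + Dc - Da)
                * abs(z - x)**(Dc + Da - Db))

Da, Db, Dc = 0.518, 1.413, 0.518     # illustrative scaling dimensions
x, y, z, lam = 0.3, 1.7, -2.2, 3.0
lhs = three_pt(lam*x, lam*y, lam*z, Da, Db, Dc)
rhs = lam**(-(Da + Db + Dc)) * three_pt(x, y, z, Da, Db, Dc)
assert math.isclose(lhs, rhs, rel_tol=1e-12)
print("homogeneous under dilatations")
```

The total weight follows because the three exponents sum to $\Delta_a+\Delta_b+\Delta_c$.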
The idea of the SDE+CFT approach is to move below the upper critical dimension $d_c$, above which the theory is Gaussian,
and interpolate the nontrivial correlators shown above with those of the trivial Gaussian theory as a function of the critical coupling.
The consistency of this interpolation determines the leading order corrections in $\epsilon$ of some conformal data
when one exploits further relations between operators that are primary only in the Gaussian limit.
In this work we restrict ourselves to the information we can extract from the analysis of two- and three-point functions,
which is the current state-of-the-art of the approach.
Investigations based on this approach
have been applied up to now to scalar theories with and without $O(N)$ symmetry~
\cite{Rychkov:2015naa,Basu:2015gpa,Nakayama:2016cim,Nii:2016lpa,Hasegawa:2016piv},
and extended to unitary and nonunitary families of multicritical single-scalar theories \cite{Codello:2017qek,Codello:2017epp,Safari:2017irw}.
Here we shall make a step further and apply this method to the study of multicritical scalar theories with multiple (say $N$ different) fields $\phi_i$,
and with a generic interaction encoded in the potential
\begin{equation}
V=\frac{1}{m!} V_{i_1 \cdots i_m}\, \phi_{i_1} \cdots \phi_{i_m},
\label{mcpotential}
\end{equation}
where a sum over repeated indices is understood and no symmetry properties are considered. From the above form of the potential the upper critical dimension is determined to be $d_c=2m/(m-2)$.
In general the number of independent monomials is $\binom{m+N-1}{m}$ and for each one of them one can introduce a coupling.
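For concreteness, the counting $\binom{m+N-1}{m}$ is just the number of multisets of size $m$ drawn from $N$ field labels; a one-line stdlib check (the values of $m$ and $N$ below are chosen only for illustration):

```python
from math import comb

def n_couplings(m, N):
    """Number of independent degree-m monomials in N fields,
    i.e. multisets of size m drawn from N symbols."""
    return comb(m + N - 1, m)

# Cubic interactions of two fields: phi1^3, phi1^2 phi2, phi1 phi2^2, phi2^3
print(n_couplings(3, 2))  # → 4
```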
Moreover, the quadratic part of the action (the standard kinetic term in this case) is invariant under linear $SO(N)$ field transformations $\phi\to R\phi$,
which relate equivalent theories describing the same physics; this imposes further constraints on the couplings that parametrize inequivalent theories.
This can be analyzed with group theoretical methods~\cite{Zia:1974nv,Michel:1983in, TMTB},
also introducing invariants on the space of couplings under such field redefinitions~\cite{Osborn:2017ucf} in terms of which any universal quantity is expected to be expressed.
Discrete symmetries such as permutations can also be taken into account.
In the main text we are able to write explicit eigenvalue equations that depend functionally on the potential
and from which many universal conformal data at the leading nontrivial perturbative order can be extracted.
In particular we derive expressions for the anomalous dimensions of the fields, the anomalous dimensions of the quadratic composite operators, and for several classes of structure constants.
When treating even unitary models with $m=2n$ we are able to extend the procedure
to an infinite tower of higher order composite operators besides the quadratic ones.
For all even models
we write the equations which fix, at the leading order,
possible critical potentials in terms of the parameter $\epsilon$.
It turns out that these equations coincide with the fixed point equations
obtained for a generic potential using the functional perturbative RG approach.
This provides further insight on how some information of one approach (RG) is encoded in the other (CFT) and vice versa.
Likewise, for the cubic nonunitary model with $m=3$ we obtain, in complete analogy to the single field case,
results for anomalous dimensions of the fields, for the quadratic composite operators, and for some structure constants.
We are also able to fix the critical potentials as functions of $\epsilon$. As in the even case,
the critical conditions coincide with those of the functional perturbative RG approach.
For higher order nonunitary models with $m=2n-1$ and $n>2$ we are not able
to find enough constraints on the critical potential to set it in terms of $\epsilon$,
which is again a situation in complete analogy to the single-field case.
We then specialize our very general results by giving a more explicit form to the potential that is constrained by symmetry.
As an interesting example, we choose the symmetry to be the permutation group $S_q$ acting on the fields with $q=N+1$
and we study it in detail. This symmetry group corresponds to Potts-like field theories, which include as special cases the standard
field-theoretical cubic realization of the Potts universality class, the reduced Potts model, and -- in principle -- infinitely many
generalizations.
Despite being much less constraining than $O(N)$ symmetry
(which nevertheless emerges as an effective symmetry for some fixed points),
the group tensor structures appearing in the potential can be naturally factorized,
thus strongly reducing the number of independent parameters.
The Potts models~\cite{Potts:1951rk,Baxter:2000ez,Zia:1975ha,Nienhuis:1979mb,Wu:1982ra,Fortuin:1971dw,Zinati:2017hdy,Delfino:2017biz} are quite ubiquitous in statistical mechanics: Several interesting models can be obtained
if one takes analytic continuations of $q$. The most relevant continuations for this paper are to the value $q=1$,
which is related to models of percolation, and to $q=0$,
which is related to the random cluster model known as spanning forest \cite{Deng:2006ur}.\footnote{In the absence of a better nomenclature,
we refer to all continuum field-theoretical models with $S_q$ symmetry as ``Potts models''.
As it will be shown in the paper, these include the standard universality class that describes the standard lattice Potts model at criticality,
as well as some of its multicritical generalizations which we classify by the order of the critical interaction in the corresponding potentials.
}
The easiest way to construct an $S_q$-invariant potential interaction is to follow a standard vector representation of the $S_q$ group.
We concentrate on the Landau-Ginzburg description of Potts models which have upper critical dimensions $d_c>3$,
and therefore can be nontrivial in $d=3$.
This restricts our specific investigation to the cubic \cite{Amit:1976pz}, quartic \cite{Rong:2017cow}, and quintic potentials for which we obtain some universal conformal data, recovering as usual several RG results.
``Multi-field generality'' and ``functional description'' are ingredients that
bring this work close, in spirit, to that of Osborn \& Stergiou \cite{Osborn:2017ucf},
in which several similar questions are addressed using multiloop perturbative RG methods instead of the CFT+SDE technique.
Our work should also be regarded as a companion to a forthcoming paper
devoted to the study of multi-field multicritical Potts models with functional perturbative RG techniques \cite{CSVZ4}.
While the CFT+SDE methods used in this work are still limited to the leading order of the $\epsilon$-expansion,
their value lies in the fact that they outline the importance of conformal invariance at criticality
and they facilitate the computation of conformal data and, more generally, of the OPE.
In the next section, which is the most important of the paper,
we apply the CFT+SDE technique to a multi-field multicritical model with a general potential,
providing in several subsections general expressions for the conformal data in terms of the potential.
We shall then introduce in Section~\ref{sect:potts_models} the Potts model and discuss its field representations
and group invariants, along with some useful relations,
and introduce the relevant Landau-Ginzburg representation we shall later use. Having imposed the $S_q$ symmetry
we also introduce operators on the space of quadratic fields that project them into irreducible representations with definite anomalous dimensions, which we also give.
In Section~\ref{sect:potts-cft} we present the analysis and the results for the specialized cubic, (restricted) quartic,
and quintic Potts universality classes. We then present our conclusions.
The paper ends with two appendices. The first contains some useful relations for free theory correlators
which are used extensively in the text.
The second includes three parts reporting, in order: the reduction relations for $S_q$ symmetric tensors,
some computational details for the quintic model, and a few useful RG results~\cite{CSVZ4} needed for the quintic model.
\section{CFT data from classical equations of motion: general results}\label{sect:sde-cft-analysis}
The first part of this paper is devoted to an analysis of general scalar theories with $N$ fields, where no symmetry is imposed on the model.
In the Landau-Ginzburg description, these models are expressed by the following action
\begin{equation}
S=\int \! d^dx \left[{\textstyle{\frac{1}{2}}}\,\partial \phi_i \cdot \partial \phi_i + V(\phi) \right] \,,
\label{general_model}
\end{equation}
and are therefore characterized by a standard kinetic term and by interactions induced by a generic local potential with an upper critical dimension $d_c$.
For multicritical potentials such as in Eq.~\eqref{mcpotential}, of degree $m$ in the fields, one has the relation $d_c=2m/(m-2)$.
We shall in general adopt the perturbative $\epsilon$-expansion below the critical dimension, $d=d_c-\epsilon$. All the fields have the same canonical dimension $\delta=d/2-1=2/(m-2)-\epsilon/2$.
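These relations are easy to tabulate; the following sketch (exact rational arithmetic at $\epsilon=0$, with an illustrative choice of $m$ values) lists $d_c$ and the critical canonical dimension $\delta_c=2/(m-2)$ for the lowest multicriticalities:

```python
from fractions import Fraction

def d_c(m):
    """Upper critical dimension of a degree-m interaction."""
    return Fraction(2 * m, m - 2)

def delta_c(m):
    """Critical canonical field dimension, delta = d/2 - 1 evaluated at d = d_c."""
    return d_c(m) / 2 - 1

for m in (3, 4, 5, 6, 8):
    print(f"m = {m}:  d_c = {d_c(m)},  delta_c = {delta_c(m)}")
```

In particular $m=3$ gives $d_c=6$, $m=4$ gives $d_c=4$, and $m=6$ gives $d_c=3$, matching the cases studied below.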
The method we employ is based on the use of the Schwinger-Dyson equations (SDE) combined with the assumption of conformal symmetry of the critical model. Critical information is extracted from the study of two- and three-point correlators, whose functional form is completely fixed in terms of the conformal data. Our analysis gives access to some of these conformal data at leading order in the $\epsilon$-expansion (different quantities can have different leading powers of $\epsilon$).
We devote separate subsections to the computation of the field anomalous dimensions, the critical exponents of the mass operators, the critical exponents of all higher-order operators
for even models, and some structure constants (or OPE coefficients). Finally we shall show, for the case $m=3$ corresponding to $d_c=6$, the case $m=4$ corresponding to $d_c=4$, and then for general models with even $m>4$, how the CFT constraints together with the Schwinger-Dyson equations can be used to fix the critical theory, i.e.\ the $\epsilon$ dependence of the couplings present in the potential. In particular we show that these constraints are exactly the same as the fixed-point conditions of the beta functions which appear in the functional perturbative RG approach~\cite{ODwyer:2007brp, Codello:2017hhh,Codello:2017epp, Osborn:2017ucf}.
We stress that the results given in this section are general and, as such, depend on the generic potential $V$ which defines a multicritical model. We shall then restrict ourselves in the next Sections to specific models having in particular the $S_q$ symmetry.
\subsection{Field anomalous dimension}\label{sect:eta-cft}
The SDE-based computation of the anomalous dimension for multi-field scalar theories follows closely that of the single-field case \cite{Codello:2017qek}. Let us consider the general multi-field action \eqref{general_model} which leads to the equation of motion
\begin{equation} \label{eom}
\Box \phi_i = V_i,
\end{equation}
where lower indices on the potential refer to its field derivatives, as in the special case \eqref{mcpotential}. We shall use the parameter $n=m/2$ to label the families of multicritical models,
$n$ being the power of the classically marginal interaction $\phi^{2n}$ in the theory \cite{ODwyer:2007brp,Codello:2017hhh,Codello:2017epp}. Notice that for the cubic and quintic models, which we shall study in detail later in Section~\ref{sect:potts-cft}, $n$ is a half-odd number.
In general the fields $ \phi_i $ are not necessarily scaling operators and there can be $N$ different anomalous dimensions associated to the true scaling fields, which correspond to the defining primaries of the CFT.
These are related to the scaling fields $ \tilde\phi_i $ through a linear transformation which leaves the kinetic term invariant
\begin{equation}
\tilde\phi_i = R_{il}\phi_l, \qquad R^TR=1.
\end{equation}
Then, having a definite scaling, the two-point function of the fields $\tilde\phi_i$ takes the following form
\begin{equation}
\langle \tilde\phi_i(x) \tilde\phi_j(y)\rangle = \frac{\tilde c\delta_{ij}}{|x-y|^{2\Delta_i}} ,
\label{2Pc}
\end{equation}
where $\tilde c$ is a constant and $\Delta_i$ is the dimension of the field $\tilde \phi_i$.
Notice that the matrix $R$ can always be chosen so as to also diagonalize within the subspace of fields having the same dimension. In terms of the scaling fields one may also define $V(\phi_i)=\tilde V(\tilde \phi_i)$.
The scaling dimension $\Delta_i$ is the sum of the field canonical dimension $\delta$ and anomalous dimension $\gamma_i$
\begin{equation}
\Delta_i = \delta + \gamma_i, \qquad \delta = \frac{1}{n-1} - \frac{\epsilon}{2}.
\end{equation}
One can also notice that the composite operator $V_i(\phi)$ has scaling dimension $\Delta_i+2$, when interactions are turned on below the upper critical dimension, which means that a recombination of conformal multiplets takes place.
One can find the anomalous dimension $\gamma_i$ by solving a simple equation obtained, as in the single-field case,
by applying $\Box_x\Box_y$ to Eq.~\eqref{2Pc}. One gets
\begin{equation} \label{2}
\langle \tilde V_a(x) \tilde V_b(y) \rangle = \Box^2_x\frac{\tilde c}{|x-y|^{2\Delta}}\;\delta_{ab},
\end{equation}
where the equations of motion have been used on the l.h.s. The r.h.s.\ of this equation can be computed straightforwardly and at leading order gives
\begin{equation}
\Box^2_x\frac{\tilde c \delta_{ab}}{|x-y|^{2\Delta}} = \frac{16\Delta(\Delta-\delta)(\Delta+1)(\Delta+1-\delta)}{|x-y|^{2\Delta+4}} \tilde c \delta_{ab}\;\stackrel{\mathrm{LO}}{=}\; \frac{16\delta_c(\delta_c+1)\gamma}{|x-y|^{2\delta_c+4}} c \delta_{ab},
\end{equation}
where $\delta_c$ is the critical ($\epsilon=0$) value of $\delta$ defined in Appendix \ref{free} and $c$ is the free theory value of $\tilde c$ given by Eq.~\eqref{c}.
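The fourth-order coefficient above follows from iterating the radial identity $\Box\,r^{a}=a(a+d-2)\,r^{a-2}$ twice. A minimal stdlib-only sketch can spot-check the resulting polynomial identity in $\Delta$ and $d$ at rational points:

```python
from fractions import Fraction

def box_coeff(a, d):
    """Coefficient of r**(a-2) in the d-dimensional Laplacian of r**a,
    using Box r^a = a*(a + d - 2) r^(a-2) away from the origin."""
    return a * (a + d - 2)

def box2_coeff(Delta, d):
    """Coefficient of |x|**(-2*Delta - 4) in Box^2 |x|**(-2*Delta)."""
    a = -2 * Delta
    return box_coeff(a, d) * box_coeff(a - 2, d)

def paper_coeff(Delta, d):
    """16 Delta (Delta - delta)(Delta + 1)(Delta + 1 - delta) with delta = d/2 - 1."""
    delta = Fraction(d, 1) / 2 - 1
    return 16 * Delta * (Delta - delta) * (Delta + 1) * (Delta + 1 - delta)

# spot-check the identity at a few rational points
for Delta in (Fraction(3, 2), Fraction(7, 5), Fraction(2)):
    for d in (3, 4, 6):
        assert box2_coeff(Delta, d) == paper_coeff(Delta, d)
print("identity verified")
```

Note that the coefficient vanishes at $\Delta=\delta$, i.e.\ in the free theory, as it must.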
For the calculation of the l.h.s one can Taylor expand the potential and use \eqref{2pf-free} to get at leading order
\begin{equation} \label{vavb}
\langle \tilde V_a(x) \tilde V_b(y) \rangle \;\stackrel{\mathrm{LO}}{=}\; \sum_\ell \frac{1}{\ell!} \frac{c^\ell}{|x-y|^{2\ell\delta_c}}\;
\tilde V_{ai_1\cdots i_\ell}\, \tilde V_{bi_1\cdots i_\ell}\Big|_{\phi=0}\,.
\end{equation}
For the multicritical model $m=2n$ (with $n$ integer or half-odd) this expression picks out only the $\ell =2n-1$ term in the sum.
In this case, noticing that $2\delta_c+4=2(2n-1)\delta_c$, Eq.~\eqref{2} at leading order gives the anomalous dimension of the field $\tilde\phi_a$
\begin{equation} \label{eta-cft}
\gamma_a \,\delta_{ab} \stackrel{\mathrm{LO}}{=} \frac{(n-1)^2}{8(2n)!}\,c^{2(n-1)}\;\tilde V_{ai_1\cdots i_{2n-1}}\, \tilde V_{bi_1\cdots i_{2n-1}}\Big|_{\phi=0}.
\end{equation}
Notice that in the Potts models which we shall consider later on, symmetry properties enforce the r.h.s to be proportional to $\delta_{ab}$. Written in terms of the original field the above equation becomes
\begin{equation}
\gamma_a \delta_{ab} = \frac{(n-1)^2}{8(2n)!}\,c^{2(n-1)} R^T_{ac} V_{ci_1i_2\cdots i_{2n-1}} V_{di_1i_2\cdots i_{2n-1}} R_{db},
\end{equation}
which means that the matrix of anomalous dimensions is
\begin{equation} \label{adm}
\boxed{\gamma_{ab} = \frac{(n-1)^2}{8(2n)!}\,c^{2(n-1)} V_{ai_1i_2\cdots i_{2n-1}} V_{bi_1i_2\cdots i_{2n-1}},}
\end{equation}
and that $R$ is the matrix that diagonalizes it. In the rest of the paper we drop the tilde on the fields and the potential and always assume, unless otherwise stated, to work in the diagonal basis.
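As a sanity check of Eq.~\eqref{adm}, one can perform the contraction for a simple vertex. The sketch below (plain Python with exact rationals; the $O(N)$-symmetric quartic vertex and the values of $g$, $c$, $N$ are illustrative assumptions, not a choice made in this section) reproduces the proportionality $\gamma_{ab}\propto\delta_{ab}$ with coefficient $(N+2)\,g^2c^2/576$, which reduces to the single-field value $g^2c^2/192$ at $N=1$:

```python
from fractions import Fraction
from itertools import product
from math import factorial

N = 3                      # number of fields (illustrative)
g = Fraction(1)            # quartic coupling (illustrative)
c = Fraction(1)            # free two-point normalization (illustrative)
n = 2                      # quartic model, m = 2n = 4

def delta(i, j):
    return 1 if i == j else 0

def V(i, j, k, l):
    # O(N)-symmetric quartic vertex: (g/3)(d_ij d_kl + d_ik d_jl + d_il d_jk)
    return g * Fraction(delta(i, j)*delta(k, l) + delta(i, k)*delta(j, l)
                        + delta(i, l)*delta(j, k), 3)

prefactor = Fraction((n - 1)**2, 8 * factorial(2*n)) * c**(2*(n - 1))
gamma = [[prefactor * sum(V(a, i, j, k) * V(b, i, j, k)
                          for i, j, k in product(range(N), repeat=3))
          for b in range(N)] for a in range(N)]

expected = Fraction(N + 2) * g**2 * c**2 / 576
assert all(gamma[a][b] == (expected if a == b else 0)
           for a in range(N) for b in range(N))
print(gamma[0][0])  # → 5/576 for N = 3
```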
Let us make an aside comment here. In the RG analysis of physical systems close to criticality
the approach to criticality is controlled by parameters such as the temperature, the simplest example being the Ising model with quartic interaction. In this model, using for example dimensional regularization, one has to tune to zero the coupling of the mass operator, which is relevant at criticality. In the multi-field case, in order to reach such a condition for all fields while tuning only one parameter, one is forced to require that all the ``bare masses'' coincide. This requirement is equivalent to the so-called zero-trace property of generic quartic interactions, $v_{ijkk}=v \delta_{ij}$~\cite{Brezin:1973jt}, which implies that at the fixed point
all the anomalous dimensions are equal. In single-field multicritical models, where there is more than one relevant operator, the approach to criticality can be controlled by introducing further tuning parameters. Requiring the same number of parameters in the multi-field case to control the approach to criticality, one is forced to impose further conditions on the critical potential so that all the relevant operators one would like to tune to zero have the same bare mass and bare couplings.
In our general CFT approach of this paper we are not concerned with these extra requirements and keep the arguments as general as possible, imposing no symmetry on the model.
\subsection{Quadratic operators}\label{ss:qo}
We will now move on to the computation of the critical exponents corresponding to the mass operators $\phi_i\phi_j$. It is useful for this purpose to review first how the critical exponent $\gamma_2$ of the operator $\phi^2$ is obtained in single-field scalar theories \cite{Codello:2017qek}, where the critical potential includes only the marginal interaction $V(\phi)=\frac{g}{(2n)!}\phi^{2n}$. Here $n$ is either an integer or a half odd number. For $n\neq 2$ this is done by applying the operator $\Box_x\Box_y$ to the three-point function $\langle\phi(x) \,\phi(y)\, \phi^{2}(z) \rangle$ and calculating it at leading order in two ways: first by using the SDE
\begin{equation} \label{eqn-sf-1}
\langle \square_x\phi(x) \,\square_y\phi(y)\, \phi^2(z) \rangle
\,\stackrel{\mathrm{LO}}{=}\, \frac{g^{2}}{(2n-1)!^2} \frac{C^{\mathrm{free}}_{2n-1,2n-1,2}}{|x-y|^4|x-z|^{2\delta_c}|y-z|^{2\delta_c}}\,,
\end{equation}
and second by direct application of the operator $\Box_x\Box_y$ to the expression for the three-point function
\begin{equation} \label{eqn-sf-2}
\square_x\square_y \langle \phi(x) \,\phi(y)\, \phi^2(z) \rangle
\,\stackrel{\mathrm{LO}}{=}\, C^{\mathrm{free}}_{1,1,2} \,\frac{8(n-2)(\gamma_2 - 2\gamma)}{(n-1)^2} \frac{1}{|x-y|^4|x-z|^{2\delta_c}|y-z|^{2\delta_c}}\,.
\end{equation}
The value of $\gamma_2$ is obtained by equating these two. The two structure constants in the free theory given above can be found from the general formula \eqref{c3_free}. In the multi-field case the situation is slightly different, as there is more than one mass operator $\phi_i\phi_j$. So, although these do not mix with derivative operators or operators with more fields, they can mix among themselves. One can assign an (anomalous) scaling dimension only to particular combinations of them that form a scaling operator. Suppose for instance that the combination
\begin{equation}
S_{pq}\; \phi_p\phi_q
\end{equation}
makes a scaling operator, where $S_{pq}$ is a tensor symmetric in its pair of indices. Let us denote the anomalous dimension of this operator by $\gamma^S_2$, where the label $S$ refers to the particular choice of $S_{pq}$. Then the two equations for the single-field case \eqref{eqn-sf-1} and \eqref{eqn-sf-2} are generalized respectively to
\begin{equation} \label{bibjpq}
\langle \square_x\phi_i(x) \,\square_y\phi_j(y)\; [S_{pq}\, \phi_p\phi_q](z) \rangle
\,\stackrel{\mathrm{LO}}{=}\, \frac{V_{i\,i_1\cdots i_{2n-1}}V_{j\,j_1\cdots j_{2n-1}}}{(2n-1)!^2} \frac{C^{\mathrm{free}}_{i_1\cdots i_{2n-1},\, j_1\cdots j_{2n-1},\, pq}\,S_{pq}}{|x-y|^4|x-z|^{2\delta_c}|y-z|^{2\delta_c}},
\end{equation}
\begin{equation} \label{bbijpq}
\square_x\square_y \langle \phi_i(x) \,\phi_j(y)\; [S_{pq}\,\phi_p\phi_q](z) \rangle
\,\stackrel{\mathrm{LO}}{=}\,\frac{8(n-2)(\gamma^S_2 - \gamma^i-\gamma^j)}{(n-1)^2} \frac{C^{\mathrm{free}}_{i,j,pq}\,S_{pq} }{|x-y|^4|x-z|^{2\delta_c}|y-z|^{2\delta_c}},
\end{equation}
where the coupling $g$ has been replaced by the $2n$th derivative of the potential, which is evaluated at $\phi=0$, as will be understood in the rest of the paper. The two free structure constants in \eqref{bibjpq} and \eqref{bbijpq} are defined respectively as the coefficients in the correlation functions
\begin{equation}
\left\langle [\phi_{i_1}\cdots \phi_{i_{2n-1}}](x) [\phi_{j_1}\cdots \phi_{j_{2n-1}}](y) [\phi_p\phi_q](z) \right\rangle, \qquad
\left\langle \phi_i(x) \phi_j(y) [\phi_p\phi_q](z) \right\rangle,
\end{equation}
in the free theory. These structure constants in the free theory are related to their single-field analogues as
\begin{equation}
C^{\mathrm{free}}_{i,j,pq} = C^{\mathrm{free}}_{1,1,2} \; \delta^i_{(p}\delta^j_{q)}, \qquad
C^{\mathrm{free}}_{i_1\cdots i_{2n-1},\, j_1\cdots j_{2n-1},\, pq} = C^{\mathrm{free}}_{2n-1,2n-1,2} \; (\delta^{i_1}_{p}\delta^{j_1}_{q}\, \delta^{i_2}_{j_2}\cdots \delta^{i_{2n-1}}_{j_{2n-1}})\,,
\end{equation}
where in the last expression the Kronecker deltas are enclosed in parentheses, indicating that the $i_l$, $j_l$ and the $p,q$ indices are separately symmetrized. These notations are introduced in more generality in Appendix~\ref{free}. Equating the two equations \eqref{bibjpq} and \eqref{bbijpq} and simplifying a bit leads to
\begin{equation} \label{crit-cft}
(\gamma^S_2 - \gamma^i-\gamma^j) \; S_{ij} = \frac{(n-1)^2c^{2(n-1)}}{8(n-2)(2n-2)!}\, V_{i\,p\,i_2\cdots i_{2n-1}}V_{j\,q\,i_2\cdots i_{2n-1}} \,S_{pq}.
\end{equation}
In models where all anomalous dimensions $\gamma^i=\gamma$ are equal, it is clear from \eqref{eta-cft} that $S_{ij} = \delta_{ij}$ is always an eigenvector with eigenvalue $\gamma_2^1$ given by
\begin{equation} \label{gamma20-eta}
\gamma_2^1 = 2\frac{n^2-1}{n-2}\eta,
\end{equation}
where $\eta=2\gamma$. Eq.~\eqref{crit-cft} may be written entirely in terms of the potential using \eqref{eta-cft}. The result is
\begin{empheq}[box=\fbox]{align}
\gamma^S_2\, S_{ij} &= \frac{(n-1)^2c^{2(n-1)}}{8(n-2)(2n-2)!}\, V_{i\,p\,i_2\cdots i_{2n-1}}V_{j\,q\,i_2\cdots i_{2n-1}} \,S_{pq} \,\,
\nonumber\\
&\,+ \frac{(n-1)^2c^{2(n-1)}}{8(2n)!}\, V_{i\,i_1\cdots i_{2n-1}}V_{p\,i_1\cdots i_{2n-1}} \,S_{pj}\,\, \label{gammaS2}
\\
&\,+ \frac{(n-1)^2c^{2(n-1)}}{8(2n)!}\, V_{j\,i_1\cdots i_{2n-1}}V_{p\,i_1\cdots i_{2n-1}} \,S_{pi} \nonumber
\end{empheq}
Solving this eigenvalue equation one can find the scaling operators and their anomalous dimensions. There is one important exception to the above equation, namely the operator $V_{ijk}\phi^j\phi^k$ for the case $n=3/2$. This operator, despite having a definite scaling property, is not a primary but a descendant operator. In this case the anomalous dimension, which we can call $\gamma^i_2$, does not satisfy \eqref{gammaS2} and is obtained instead from the exact relation $\gamma^i_2 = \gamma_i + \epsilon/2$.
For the special case of $n=2$ the situation is different. In this case, for the single-field theory information on the anomalous dimension is obtained by comparing the two expressions \cite{Codello:2017qek}
\begin{equation}
\langle \square_x\phi(x) \,\phi(y)\, \phi^2(z) \rangle
\,\stackrel{\mathrm{LO}}{=}\, \frac{g}{3!} \frac{C^{\mathrm{free}}_{3,1,2}}{|x-y|^2|x-z|^{2\delta_c+2}|y-z|^{2\delta_c-2}},
\end{equation}
\begin{equation}
\square_x \langle \phi(x) \,\phi(y)\, \phi^2(z) \rangle
\,\stackrel{\mathrm{LO}}{=}\,\frac{2C^{\mathrm{free}}_{1,1,2} \,(\gamma_2 - 2\gamma) }{|x-y|^2|x-z|^{2\delta_c+2}|y-z|^{2\delta_c-2}}.
\end{equation}
The multi-field generalization of these two equations is
\begin{equation}
\langle \square_x\phi_i(x) \,\phi_j(y)\, [S_{ab}\,\phi_{a}\phi_{b}](z) \rangle
\,\stackrel{\mathrm{LO}}{=}\, \frac{V_{ipql}}{3!} \frac{C^{\mathrm{free}}_{pql,j,ab}\,S_{ab}}{|x-y|^2|x-z|^{2\delta_c+2}|y-z|^{2\delta_c-2}},
\end{equation}
\begin{equation}
\square_x \langle \phi_i(x) \,\phi_j(y)\; [S_{ab}\,\phi_{a}\phi_{b}](z) \rangle
\,\stackrel{\mathrm{LO}}{=}\,\frac{2C^{\mathrm{free}}_{i,j,ab} \,S_{ab}\,(\gamma^S_2 - \gamma_i-\gamma_j)}{|x-y|^2|x-z|^{2\delta_c+2}|y-z|^{2\delta_c-2}}.
\end{equation}
Equating the two gives
\begin{equation}
2C^{\mathrm{free}}_{1,1,2}\,\delta^{i}_{(a}\delta^j_{b)} \,S_{ab}\,(\gamma^S_2 - \gamma_i-\gamma_j) = \frac{1}{3!}V_{ipql}\, C^{\mathrm{free}}_{3,1,2}\,\delta^{(p}_j\delta^q_{(a}\delta^{l)}_{b)}\,S_{ab}\,.
\end{equation}
After some simplification, and noticing that $\gamma_i$ are of second order in the potential, this equation reduces to
\begin{equation} \label{crit-cft-2}
\boxed{
\gamma^S_2 \, S_{ij} = \frac{c}{4} V_{ijab}\,S_{ab},
}
\end{equation}
indicating that $S_{ab}$ is an eigenvector of $V_{ijab}$ in this case.
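To see Eq.~\eqref{crit-cft-2} at work, one can diagonalize it for a concrete vertex. Choosing the $O(N)$-symmetric quartic vertex (an illustrative assumption, since this section imposes no symmetry), the singlet $S_{ij}=\delta_{ij}$ and the traceless symmetric tensors come out as eigenvectors with eigenvalues $cg(N+2)/12$ and $cg/6$ respectively. A minimal stdlib-only sketch:

```python
from fractions import Fraction

N, g, c = 3, Fraction(1), Fraction(1)   # illustrative values

def delta(i, j):
    return 1 if i == j else 0

def V(i, j, a, b):
    # O(N)-symmetric quartic vertex (illustrative choice)
    return g * Fraction(delta(i, j)*delta(a, b) + delta(i, a)*delta(j, b)
                        + delta(i, b)*delta(j, a), 3)

def gamma2_action(S):
    """The map S_ij -> (c/4) V_ijab S_ab of the n = 2 eigenvalue equation."""
    return [[c / 4 * sum(V(i, j, a, b) * S[a][b]
                         for a in range(N) for b in range(N))
             for j in range(N)] for i in range(N)]

# singlet S = delta: eigenvalue c g (N+2)/12
S1 = [[Fraction(delta(i, j)) for j in range(N)] for i in range(N)]
assert gamma2_action(S1) == [[c*g*Fraction(N + 2, 12) * S1[i][j]
                              for j in range(N)] for i in range(N)]

# a traceless symmetric direction, S = diag(1, -1, 0, ...): eigenvalue c g / 6
S2 = [[Fraction(delta(i, j) * (1 if i == 0 else -1 if i == 1 else 0))
       for j in range(N)] for i in range(N)]
assert gamma2_action(S2) == [[c*g*Fraction(1, 6) * S2[i][j]
                              for j in range(N)] for i in range(N)]
print("eigenvalues:", c*g*Fraction(N + 2, 12), "and", c*g*Fraction(1, 6))
```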
The analyses leading to equations \eqref{gammaS2} and \eqref{crit-cft-2} cannot be pushed further unless we specify the model, or at least its symmetry. These two equations are studied further and solved in Section \ref{sect:potts_models} for Potts field theories in particular dimensions. Another possible way to reduce the complexity of a general analysis would be to restrict the number of fields, so as to deal with a finite (possibly small) number of couplings even in the absence of a symmetry, and consequently with a limited number of possible critical theories satisfying the set of equations which we shall derive in Section~\ref{sec:FPcond}.
\subsection{Higher order composite operators: recurrence relation and its solution} \label{ss:rr}
So far we have discussed the anomalous dimensions of the fields $\phi_i$ and those of the quadratic operators for a general multicritical scalar model with integer or half-odd $n$. For the unitary ``even'' critical theories with
integer $n$ we move on and seek scaling operators of arbitrary order $k$ and their anomalous dimensions. For simplicity of notation we sometimes use the following abbreviation
\begin{equation}
{\cal S}_{k}=S_{i_1 \cdots i_k} \phi_{i_1} \cdots \phi_{i_k},
\label{scaling_op}
\end{equation}
the anomalous dimensions of which we denote by $\gamma^S_k$. Let us also note that at the leading perturbative order we are considering here, \eqref{scaling_op} is the most general form the scaling operators can take, while from the next-to-leading order derivative operators start mixing as well. In the RG language this translates to the fact that in the beta function of the potential the leading order contribution comes only from the potential itself, and the contribution from the wave-function renormalization appears only at next-to-leading order, and so on for higher derivative operators.
In analogy with the analysis for single-field models detailed in \cite{Codello:2017qek} we shall study the three point correlation functions
\begin{equation} \label{box-1kk1}
\langle \square_x\phi_i(x) \, {\cal S}_{k}(y) {\cal S}_{k+1}(z) \rangle\,.
\end{equation}
This quantity can be evaluated at leading order by using the equations of motion, which gives
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \langle \square_x\phi_i(x) \,[S_{j_1\cdots j_k}\phi_{j_1}\cdots\phi_{j_k}](y)\,[S_{l_1\cdots l_{k+1}}\phi_{l_1}\cdots \phi_{l_{k+1}}](z) \rangle
\nonumber\\
&\stackrel{\mathrm{LO}}{=}& \frac{1}{(2n-1)!} \langle [V_{ii_1\cdots i_{2n-1}} \phi_{i_1}\cdots \phi_{i_{2n-1}}](x) \,[S_{j_1\cdots j_k}\phi_{j_1}\cdots\phi_{j_k}](y)\, [S_{l_1\cdots l_{k+1}}\phi_{l_1}\cdots \phi_{l_{k+1}}](z) \rangle \nonumber\\
&\stackrel{\mathrm{LO}}{=}& \frac{C^{\mathrm{free}}_{2n-1,k,k+1}}{(2n-1)!} \, \frac{V_{ii_1\cdots i_rj_1\cdots j_s}S_{j_1\cdots j_s l_1\cdots l_t}S_{l_1\cdots l_ti_1\cdots i_r}}{|x-y|^2|y-z|^{2k\delta_c-2}|z-x|^{2\delta_c+2}}.
\end{eqnarray}}
In the last step, for the leading order contribution, we have used Eq.~\eqref{3pfree} where we rename $l_{12}=r$, $l_{23}=s$ and $l_{31}=t$ which are defined as
\begin{equation} \label{rst-cond}
\left\lbrace
\begin{array}{l}
r+s=2n-1 \\
s+t = k \\
t+r = k+1 \\
\ea
\right.
\qquad \Rightarrow \qquad
\left\lbrace
\begin{array}{l}
r= n \\
s= n-1 \\
t= k+1-n \\
\ea
\right.
\end{equation}
On the other hand, applying the box directly we get, again at leading order
\begin{equation}
\langle \square_x\phi_i(x) \, {\cal S}_{k}(y) {\cal S}_{k+1}(z) \rangle\stackrel{\mathrm{LO}}{=} C^{\mathrm{free}}_{1,k,k+1} \, \frac{2(\gamma^S_{k+1}-\gamma^S_k-\gamma_i)}{n-1} \frac{S_{ii_1\cdots i_k}S_{i_1\cdots i_k}}{|x-y|^2|y-z|^{2k\delta_c-2}|z-x|^{2\delta_c+2}}.
\end{equation}
Comparing these two and using
\begin{equation}
C^{\mathrm{free}}_{2n-1,k,k+1} = \frac{(2n-1)!\,k!\,(k+1)!}{(n-1)!\,n!\,(k-n+1)!}\,c^{n+k}, \qquad
C^{\mathrm{free}}_{1,k,k+1} = \frac{k!(k+1)!}{k!} c^{k+1} = (k+1)! c^{k+1},
\end{equation}
we obtain a recurrence relation for the critical exponents
\begin{equation} \label{rr}
(\gamma^S_{k+1}-\gamma^S_k-\gamma_i) S_{ii_1\cdots i_k}S_{i_1\cdots i_k} = c_{n,k} V_{ii_1\cdots i_rj_1\cdots j_s}S_{j_1\cdots j_s l_1\cdots l_t}S_{l_1\cdots l_ti_1\cdots i_r}\,,
\end{equation}
where for convenience we have introduced the quantity
\begin{equation}
c_{n,k} = \frac{n-1}{2(2n-1)!}\frac{C^{\mathrm{free}}_{2n-1,k,k+1}}{C^{\mathrm{free}}_{1,k,k+1}} =
\frac{c^{n-1}}{2(n-2)!n!} \frac{k!}{(k-n+1)!}.
\label{cnk}
\end{equation}
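The equality between the defining ratio and the closed form in \eqref{cnk} can be checked numerically. The sketch below (our own illustration, with $c$ set to one) assumes the standard Wick-counting form of the free-theory coefficients, $C^{\mathrm{free}}_{k_1,k_2,k_3}=k_1!\,k_2!\,k_3!/(l_{12}!\,l_{23}!\,l_{31}!)$:

```python
from fractions import Fraction
from math import factorial as f

def C_free(k1, k2, k3):
    # Assumed Wick-counting form of the free coefficient (c = 1):
    # l12, l23, l31 count the propagators joining each pair of operators.
    l12 = (k1 + k2 - k3) // 2
    l23 = (k2 + k3 - k1) // 2
    l31 = (k3 + k1 - k2) // 2
    return Fraction(f(k1) * f(k2) * f(k3), f(l12) * f(l23) * f(l31))

for n in range(2, 8):
    for k in range(n - 1, 3 * n):
        # defining ratio of c_{n,k} ...
        ratio = Fraction(n - 1, 2 * f(2 * n - 1)) \
            * C_free(2 * n - 1, k, k + 1) / C_free(1, k, k + 1)
        # ... versus the closed form
        closed = Fraction(f(k), 2 * f(n - 2) * f(n) * f(k - n + 1))
        assert ratio == closed
print("c_{n,k} closed form verified")
```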
From the constraints \eqref{rst-cond} it is clear that the smallest $k$ for which the r.h.s.\ of \eqref{rr} does not vanish is $k=n-1$, which means that the anomalous dimensions are linear in the couplings starting from $\gamma^S_n$, while all the lower ones, $\gamma^S_k$ with $k<n$, are at least quadratic. So we start from Eq.~\eqref{rr} with $k=n-1$. The anomalous dimensions $\gamma^S_{n-1}$ and $\gamma_i$ are of higher order and can be omitted, giving
\begin{equation} \label{gamma-n}
\gamma^S_n S_{ii_1\cdots i_{n-1}}S_{i_1\cdots i_{n-1}} \stackrel{\mathrm{LO}}{=} c_{n,n-1} V_{ii_1\cdots i_nj_1\cdots j_{n-1}}S_{j_1\cdots j_{n-1}}S_{i_1\cdots i_n}\,,
\end{equation}
where we have used the fact that $r=n$, $s=n-1$ and $t=0$ in this case. On the r.h.s. of this equation we rename the indices as $j_1\cdots j_{n-1} \to i_1\cdots i_{n-1}$ and $i_1\cdots i_{n} \to j_1\cdots j_{n}$ and then on both sides of the equation we rename $i \to i_n$. We observe then that the coefficients of $S_{i_1\cdots i_{n-1}}$ are independent of it and since such tensors form a complete basis for symmetric $(n-1)$-index tensors one can drop $S_{i_1\cdots i_{n-1}}$ on both sides. This leads, after permuting some indices thanks to symmetry, to
\begin{equation} \label{gamma-S-n}
\gamma^S_n S_{i_1\cdots i_n} = c_{n,n-1} V_{i_1\cdots i_nj_1\cdots j_n}S_{j_1\cdots j_n},
\end{equation}
which is an eigenvalue equation for operators with $n$ fields. Let us now move to the next case $k=n$. This time Eq.~\eqref{rr} gives
\begin{equation}
(\gamma^S_{n+1}-\gamma^S_n) S_{ii_1\cdots i_n}S_{i_1\cdots i_n} \stackrel{\mathrm{LO}}{=}
c_{n,n} V_{ii_1\cdots i_nj_1\cdots j_{n-1}}S_{j_1\cdots j_{n-1}l}S_{li_1\cdots i_n}\,.
\end{equation}
Here the coefficients are not independent of $S_{i_1\cdots i_n}$ because of $\gamma^S_n$, but one can eliminate it using Eq.~\eqref{gamma-S-n}. This gives
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\hspace{-0.7cm}
\gamma^S_{n+1} S_{ii_1\cdots i_n}S_{i_1\cdots i_n} &=& \gamma^S_n S_{ii_1\cdots i_n}S_{i_1\cdots i_n} + c_{n,n} V_{ii_1\cdots i_nj_1\cdots j_{n-1}}S_{j_1\cdots j_{n-1}l}S_{li_1\cdots i_n}
\nonumber\\
&=& c_{n,n-1} V_{i_1\cdots i_nj_1\cdots j_n} S_{ii_1\cdots i_n} S_{j_1\cdots j_n} + c_{n,n} V_{ii_1\cdots i_nj_1\cdots j_{n-1}}S_{li_1\cdots i_n}S_{j_1\cdots j_{n-1}l}
\nonumber\\
&=& c_{n,n-1} V_{j_1\cdots j_ni_1\cdots i_n} S_{ij_1\cdots j_n} S_{i_1\cdots i_n} + c_{n,n} V_{ij_1\cdots j_ni_1\cdots i_{n-1}}S_{i_nj_1\cdots j_n}S_{i_1\cdots i_n},
\end{eqnarray}}%
where in the last step several indices have been renamed conveniently to make $S_{i_1\cdots i_n}$ appear in all terms as on the l.h.s.
Since the coefficients are now independent of $S_{i_1\cdots i_n}$ one can drop this tensor, provided that we symmetrize the indices $i_1\cdots i_n$ (when needed, we denote such an operation by enclosing the indices in round brackets), obtaining
\begin{equation}
\gamma^S_{n+1} S_{i_1\cdots i_{n+1}} = c_{n,n-1} V_{j_1\cdots j_ni_1\cdots i_n} S_{i_{n+1}j_1\cdots j_n} + c_{n,n} V_{j_1\cdots j_ni_{n+1}(i_1\cdots i_{n-1}}S_{i_n)j_1\cdots j_n} \,.
\end{equation}
This equation is therefore manifestly symmetric in the indices $i_1\cdots i_n$, but one may wonder if the r.h.s is symmetric in all $i_1\cdots i_{n+1}$ indices as expected from the l.h.s. At first sight this does not seem clear, but it turns out that the two coefficients $c_{n,n-1}$ and $c_{n,n}$ have just the right ratio to make the r.h.s symmetric. In fact from Eq.~\eqref{cnk} one has $c_{n,n}=nc_{n,n-1}$ and we can write
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\gamma^S_{n+1} S_{i_1\cdots i_{n+1}} &=& c_{n,n-1}\left(V_{j_1\cdots j_ni_1\cdots i_n} S_{i_{n+1}j_1\cdots j_n} + n V_{j_1\cdots j_ni_{n+1}(i_1\cdots i_{n-1}}S_{i_n)j_1\cdots j_n}\right)
\nonumber\\
&=& (n+1)c_{n,n-1} V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1})j_1\cdots j_n}
\nonumber\\
&=& (c_{n,n-1}+c_{n,n}) V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1})j_1\cdots j_n}
\nonumber\\
&=& \frac{2}{n} c_{n,n+1} V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1})j_1\cdots j_n} \,. \label{gamma-n+1}
\end{eqnarray}}%
This gives an eigenvalue equation for scaling operators of the next level, i.e. operators with $n+1$ fields. We would like to generalize this result to operators with an arbitrary number of fields. Indeed it is natural to guess that for $l\geq 0$ the following relation holds
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\gamma^S_{n+l} S_{i_1\cdots i_{n+l}} &=& (c_{n,n-1}+c_{n,n}+\cdots+c_{n,n-1+l}) V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l})j_1\cdots j_n}
\nonumber\\
&=& \frac{l+1}{n} c_{n,n+l} V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l})j_1\cdots j_n} \label{gamma-kc}\,,
\label{gamma_gen}
\end{eqnarray}}%
where in the second line we have used
\begin{equation}
\sum_{i=0}^l c_{n,n-1+i} = \frac{l+1}{n} c_{n,n+l},
\end{equation}
which, using the definition \eqref{cnk}, is equivalent to the following identity which can be easily checked
\begin{equation}
\sum_{i=0}^l \frac{(n-1+i)!}{i!} = \frac{(l+n)!}{n\,l!} = \frac{l+1}{n} \frac{(l+n)!}{(l+1)!}\,.
\end{equation}
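This is a factorial form of the hockey-stick sum, which lends itself to a short numerical check (ours) over a range of $n$ and $l$:

```python
from math import factorial as f

# Check sum_{i=0}^{l} (n-1+i)!/i! == (l+n)!/(n*l!) over a range of n and l.
for n in range(2, 10):
    for l in range(0, 10):
        lhs = sum(f(n - 1 + i) // f(i) for i in range(l + 1))
        assert lhs * n * f(l) == f(l + n)
print("factorial identity verified")
```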
The relation~\eqref{gamma_gen} can be proved by induction. Having shown the $l=0$ case \eqref{gamma-n}, and assuming the formula holds for a given $l$, Eq.~\eqref{rr} for $k=n+l$ gives
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\gamma^S_{n+l+1} S_{ii_1\cdots i_{n+l}}S_{i_1\cdots i_{n+l}} &=& \gamma^S_{n+l} S_{ii_1\cdots i_{n+l}}S_{i_1\cdots i_{n+l}} + c_{n,n+l} V_{ii_1\cdots i_nj_1\cdots j_{n-1}}S_{j_1\cdots j_{n-1}l_1\cdots l_{l+1}}S_{l_1\cdots l_{l+1}i_1\cdots i_n}
\nonumber\\[2mm]
&=& c_{n,n+l} \frac{l+1}{n} V_{j_1\cdots j_ni_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l}j_1\cdots j_n}S_{ii_1\cdots i_{n+l}}
\nonumber\\
&+& c_{n,n+l} V_{ii_1\cdots i_nj_1\cdots j_{n-1}}S_{j_1\cdots j_{n-1}l_1\cdots l_{l+1}}S_{l_1\cdots l_{l+1}i_1\cdots i_n}
\nonumber\\[2mm]
&=& c_{n,n+l} \frac{l+1}{n} V_{i_1\cdots i_nj_1\cdots j_n}S_{i_1\cdots i_{n+l}}S_{ij_1\cdots j_n i_{n+1}\cdots i_{n+l}}
\nonumber\\
&+& c_{n,n+l} V_{il_1\cdots l_ni_1\cdots i_{n-1}}S_{i_1\cdots i_{n+l}}S_{i_n\cdots i_{n+l}l_1\cdots l_n}\,,
\end{eqnarray}}%
where again in the last step indices have been conveniently renamed in order to make $S_{i_1\cdots i_{n+l}}$ appear in all terms as that on the l.h.s. Dropping this tensor and setting $i=i_{n+l+1}$ we get
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\gamma^S_{n+l+1} S_{i_1\cdots i_{n+l+1}}
&=& c_{n,n+l} \frac{l+1}{n} V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l}) i_{n+l+1}j_1\cdots j_n}
\nonumber\\
&+& c_{n,n+l} V_{i_{n+l+1}l_1\cdots l_n(i_1\cdots i_{n-1}}S_{i_n\cdots i_{n+l})l_1\cdots l_n}\,,
\end{eqnarray}}%
where some necessary symmetrizations have been introduced.
We can now manipulate this expression further to show that the r.h.s is symmetric in $i_1\cdots i_{n+l+1}$
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\gamma^S_{n+l+1} S_{i_1\cdots i_{n+l+1}}\hspace{-1.5cm}&{}&\nonumber\\
&=& \frac{1}{n} c_{n,n+l} \left((l+1) V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l}) i_{n+l+1}j_1\cdots j_n}
+n V_{i_{n+l+1}l_1\cdots l_n(i_1\cdots i_{n-1}}S_{i_n\cdots i_{n+l})l_1\cdots l_n}\right)
\nonumber\\
&=& \frac{1}{n} c_{n,n+l} \frac{(n+l+1)!}{(n+l)!} V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l} i_{n+l+1})j_1\cdots j_n}
\nonumber\\
&=& \frac{n+l+1}{n} c_{n,n+l} V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l} i_{n+l+1})j_1\cdots j_n}
\nonumber\\
&=& \frac{l+2}{n} c_{n,n+l+1} V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l} i_{n+l+1})j_1\cdots j_n} \,.
\end{eqnarray}}%
This completes the induction and proves \eqref{gamma-kc}, which can also be written more explicitly as
\begin{equation}
\boxed{\gamma^S_{n+l} S_{i_1\cdots i_{n+l}} = \frac{(n-1)c^{n-1}}{2n!^2} \frac{(n+l)!}{l!}\, V_{j_1\cdots j_n(i_1\cdots i_n}S_{i_{n+1}\cdots i_{n+l})j_1\cdots j_n} \label{gamma-k}\,.}
\end{equation}
Solving this eigenvalue equation one can find both the scaling operators and their anomalous dimensions at leading order, $O(\epsilon)$ in this case, for any $l\ge 0$.
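As a cross-check of the bookkeeping, the prefactor in this boxed formula should coincide with $\frac{l+1}{n}\,c_{n,n+l}$ of \eqref{gamma-kc}. A short numerical verification (ours, with the common factor $c^{n-1}$ stripped from both sides):

```python
from fractions import Fraction
from math import factorial as f

# Compare the prefactor of the boxed master formula with (l+1)/n * c_{n,n+l},
# both with the overall c**(n-1) removed.
for n in range(2, 9):
    for l in range(0, 9):
        boxed = Fraction(n - 1, 2 * f(n) ** 2) * Fraction(f(n + l), f(l))
        recur = Fraction(l + 1, n) * Fraction(f(n + l), 2 * f(n - 2) * f(n) * f(l + 1))
        assert boxed == recur
print("master-formula prefactor verified")
```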
A comment is in order here. The starting point of the derivation of \eqref{gamma-k} was to use the structure of the three-point function
\begin{equation}
\langle \phi_i(x) \, {\cal S}_{k}(y) {\cal S}_{k+1}(z) \rangle\,
\end{equation}
and apply one box operator $\Box_x$ to it. In fact for $k=2n-2$ and $k=2n-1$ the structure that we have used for this three-point function may not be valid, because in each case one of the operators ${\cal S}_{k}$ or ${\cal S}_{k+1}$ appearing in the three-point function can be a descendant rather than a primary operator. This means that the recurrence relation \eqref{rr} is not necessarily valid for $k=2n-2$ and $k=2n-1$, so that Eq.~\eqref{gamma-k} holds at least up to $l=n-2$, from which one can obtain the anomalous dimension
$\gamma^S_{2n-2}$. However, as we will see later, the validity of the recurrence relation \eqref{rr} for $k=2n-2$ and $k=2n-1$ can be shown in a different way, implying that \eqref{gamma-k} is true for all $l\geq 0$.
In fact, in the two cases $k=2n-2$ and $k=2n-1$, when the operator ${\cal S}_{2n-1}$ is a descendant operator, one can instead use the e.o.m.\ to write the quantity \eqref{box-1kk1} as
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \Box_x \Box_y \langle \phi^i(x)\phi^j(y){\cal S}_{2n-2}(z) \rangle, \qquad k=2n-2 \label{bbijS1} \\[1mm]
&& \Box_x \Box_y \langle \phi^i(x)\phi^j(y){\cal S}_{2n}(z) \rangle, \hspace{12.9mm} k=2n-1 \label{bbijS2}\,.
\end{eqnarray}}%
The first one \eqref{bbijS1} gives another eigenvalue equation for $\gamma^S_{2n-2}$, which we already have from \eqref{gamma-k}. Comparing the two gives the functional fixed point equation for all multi-field multicritical even models. The details are given in Section~\ref{ss:gem}.
The anomalous dimensions $\gamma^S_{2n-1}$ corresponding to descendant operators are obtained from
an identity relating the scaling dimensions of the descendant operators to those of the fields $\phi^i$, and can be shown to satisfy \eqref{gamma-k} for $k=2n-2$. Finally, the second equation \eqref{bbijS2} gives the missing eigenvalue equation for $\gamma^S_{2n}$.
In other words we will obtain a relation for $\gamma^S_{2n}$ where, instead of the tensor $S_{i_1\cdots i_{2n-1}}$, the descendant structure $V_{i i_1\cdots i_{2n-1}}$ is present with the corresponding anomalous dimension.
This is exactly what is needed to complete the space of composite operators of order $2n-1$ in the recurrence relation~\eqref{rr}.
These will be discussed in detail in Section~\ref{ss:mprr}.
\subsection{Structure constants} \label{ssec:ope}
Apart from the anomalous dimensions of the quadratic and higher order operators that we have discussed so far, conformal symmetry along with the equations of motion provide information on several classes of leading order structure constants. This has been shown in the single-field case in \cite{Codello:2017qek}. It is straightforward to extend the computation of structure constants of single-field theories
to the multi-field case. In this section we make such a generalization and provide compact formulas for some structure constants in multicritical and multi-field even or odd models.
\subsubsection{Generalization of $C_{1,2p,2q-1}$} \label{ssec:c12p2q-1}
Consider for instance general even models for which the multicriticality label $n$ is an integer.
Several sets of structure constants have been computed in the single-field case in \cite{Codello:2017qek}, for example
\begin{equation} \label{c12p2q-single}
C_{1,2p,2q-1} = \frac{g}{(2n-1)!} \frac{(n-1)^2}{4(p-q)(p-q+1)}C^{\mathrm{free}}_{2n-1,2p,2q-1},
\end{equation}
which is valid in the range $q+p \geq n$, $q-p \geq 1-n$ and $q-p \neq 0,1$.
By arguments similar to those of the previous section this can be straightforwardly generalized to the multi-field case as
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{c12p2q-1}
C_{i,j_1\cdots j_{2p},k_1\cdots k_{2q-1}} &=& \frac{V_{ii_1\cdots i_{2n-1}}}{(2n-1)!} \frac{(n-1)^2}{4(p-q)(p-q+1)}C^{\mathrm{free}}_{i_1\cdots i_{2n-1},j_1\cdots j_{2p},k_1\cdots k_{2q-1}} \nonumber \\
&=& \frac{V_{ii_1\cdots i_{2n-1}}}{(2n-1)!} \frac{(n-1)^2}{4(p-q)(p-q+1)} C^{\mathrm{free}}_{2n-1,2p,2q-1} \nonumber\\
&\times & (\delta^{i_1}_{j_{s+1}} \cdots \delta^{i_r}_{j_{2p}} \, \delta^{j_1}_{k_{t+1}} \cdots \delta^{j_s}_{k_{2q-1}} \, \delta^{k_1}_{i_{r+1}} \cdots \delta^{k_t}_{i_{2n-1}}) \,,
\end{eqnarray}}%
where $q,p$ are constrained as in the single-field case, and the integers $r,s,t$ satisfy the relation
\begin{equation}
2n-1= r+t, \quad 2p = r+s, \quad 2q-1 = s+t.
\end{equation}
The parentheses in the third line enclosing the deltas indicate that the $i_l$s, the $j_l$s and the $k_l$s are separately symmetrized.
To obtain the structure constant that is defined as the coefficient appearing in the three-point function
\begin{equation} \label{3pf-even}
\left\langle \phi_i(x) {\cal S}_{2p}(y) \tilde{{\cal S}}_{2q-1}(z) \right\rangle
\end{equation}
one needs to contract the symmetric tensors $S,\tilde{S}$ with \eqref{c12p2q-1} and find
\begin{equation} \label{ciuv-even}
\boxed{
C_{\phi_i {\cal S}_{2p} \tilde{{\cal S}}_{2q-1}} = \frac{V_{il_1\cdots l_r k_1\cdots k_t}\,S_{j_1\cdots j_s l_1 \cdots l_r} \, \tilde{S}_{k_1\cdots k_t j_1 \cdots j_s}}{(2n-1)!} \frac{(n-1)^2}{4(p-q)(p-q+1)}C^{\mathrm{free}}_{2n-1,2p,2q-1}.
}
\end{equation}
Notice that the tensors $S,\tilde{S}$ in \eqref{3pf-even} must be chosen such that the corresponding operators have a definite scaling, i.e. they satisfy \eqref{gamma-k}. They also must not be descendant operators, which can only occur for $\tilde{S}_{2n-1}$, that is when $q=n$.
The free theory structure constant in the above equation is proportional to $c^{n+p+q-1}$, according to the general formula (A.8) of \cite{Codello:2017qek}. The CFT normalization requires rescaling the fields as $\phi = \sqrt{c}\hat\phi$, such that the two-point function of $\hat\phi$ is normalized to unity. With this normalization the $c$ factors in the free structure constants are removed and, defining $\hat V(\hat\phi)=V(\phi)$, a factor of $c^{-1}$ appears on the r.h.s.\ of the new equation of motion, as if $V\rightarrow V/c$ in \eqref{eom}; indeed the chain rule gives $\hat V_i = \sqrt{c}\, V_i$, while $\Box\hat\phi_i = \Box\phi_i/\sqrt{c}$. Also, in terms of the rescaled field the $2n$th field derivative of the potential will be $c^n$ times the original one. More explicitly
\begin{equation}
\Box \hat \phi_i = \frac{1}{c} \hat V_i, \qquad
\hat V_{i_1\cdots i_{2n}} = c^n V_{i_1\cdots i_{2n}}.
\end{equation}
Combining these two we find that in the CFT normalization, dropping the hat on the new fields and potential, one must remove the $c$ factors in the free structure constants and make the replacement $V_{i_1\cdots i_{2n}}\rightarrow c^{n-1} V_{i_1\cdots i_{2n}}$. We shall make such a choice of normalization in Section~\ref{sect:potts-cft}, when studying some Potts models,
but only when we give explicit $\epsilon$ dependent expressions for the structure constants.
\subsubsection{Generalization of $C_{1,2p,2q}$ and $C_{1,2p-1,2q-1}$} \label{ssec:c12p2q}
Consider now general odd models which include the cubic and quintic models that we are especially interested in and write the half-odd multicriticality label as $n=\ell+1/2$ where $\ell$ is an integer.
For scalar theories with a single degree of freedom the following structure constant was computed in \cite{Codello:2017qek}
\begin{equation} \label{c12p2q-single-2}
C_{1,2p,2q} = \frac{g}{(2\ell)!} \frac{(2\ell-1)^2}{4(4(p-q)^2-1)}C^{\mathrm{free}}_{2\ell,2p,2q},
\end{equation}
which is valid only in the range $q+p\geq \ell$ and $|q-p|\leq \ell$. In the multi-field case this generalizes to
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{c12p2q}
C_{i,j_1\cdots j_{2p},k_1\cdots k_{2q}} &=& \frac{V_{ii_1\cdots i_{2\ell}}}{(2\ell)!} \frac{(2\ell-1)^2}{4(4(p-q)^2-1)}C^{\mathrm{free}}_{i_1\cdots i_{2\ell},j_1\cdots j_{2p},k_1\cdots k_{2q}} \\
&=& \frac{V_{ii_1\cdots i_{2\ell}}}{(2\ell)!} \frac{(2\ell-1)^2}{4(4(p-q)^2-1)}C^{\mathrm{free}}_{2\ell,2p,2q}(\delta^{i_1}_{j_{s+1}} \cdots \delta^{i_r}_{j_{2p}} \, \delta^{j_1}_{k_{t+1}} \cdots \delta^{j_s}_{k_{2q}} \, \delta^{k_1}_{i_{r+1}} \cdots \delta^{k_t}_{i_{2\ell}}) \nonumber
\end{eqnarray}}%
with $q,p$ constrained as in the single-field case and where the integers $r,s,t$ satisfy the relation
\begin{equation}
2\ell= r+t, \quad 2p = r+s, \quad 2q = s+t.
\end{equation}
The structure constant defined as the coefficient of the three point function
\begin{equation} \label{3pf}
\left\langle \phi_i(x) {\cal S}_{2p}(y) \tilde{{\cal S}}_{2q}(z) \right\rangle
\end{equation}
is obtained by contracting \eqref{c12p2q} with $S,\tilde S$, which are symmetric tensors that satisfy \eqref{gamma-k} and therefore give rise to scaling operators. This leads to
\begin{equation} \label{ci2p2q}
\boxed{
C_{\phi_i {\cal S}_{2p} \tilde{{\cal S}}_{2q}} = \frac{V_{il_1\cdots l_r k_1\cdots k_t}\,S_{j_1\cdots j_s l_1 \cdots l_r} \, \tilde{S}_{k_1\cdots k_t j_1 \cdots j_s}}{(2\ell)!} \frac{(2\ell-1)^2}{4(4(p-q)^2-1)}C^{\mathrm{free}}_{2\ell,2p,2q}.
}
\end{equation}
Notice also that these operators must not be descendants. This can only occur for ${\cal S}_{2\ell}$ and $\tilde{\cal S}_{2\ell}$, that is when $p=\ell$ or $q=\ell$.
Finally, the structure constants $C_{1,2p-1,2q-1}$ and consequently its multi-field generalization are given respectively by \eqref{c12p2q-single} and \eqref{ci2p2q} after making the shift $p\rightarrow p-\frac{1}{2}$ and $q\rightarrow q-\frac{1}{2}$,
that is, one can immediately write
\begin{equation} \label{ci2p-12q-1}
\boxed{
C_{\phi_i {\cal S}_{2p-1} \tilde{{\cal S}}_{2q-1}} = \frac{V_{il_1\cdots l_r k_1\cdots k_t}\,S_{j_1\cdots j_s l_1 \cdots l_r} \, \tilde{S}_{k_1\cdots k_t j_1 \cdots j_s}}{(2\ell)!} \frac{(2\ell-1)^2}{4(4(p-q)^2-1)}C^{\mathrm{free}}_{2\ell,2p-1,2q-1},
}
\end{equation}
where now $q,p$ fall in the range $q+p\geq \ell+1$ and $|q-p|\leq \ell$, and the integers $r,s,t$ satisfy the relation
\begin{equation}
2\ell= r+t, \quad 2p-1 = r+s, \quad 2q-1 = s+t.
\end{equation}
\subsubsection{Generalization of $C_{1,1,1}$} \label{ssec:c111}
Again, for odd models where $n=\ell+1/2$ it is straightforward to find the generalization of the OPE coefficient $C_{1,1,1}$ to the multi-field case. The case $\ell=1$ requires a separate treatment: it can be extracted from the computation of the previous subsection by setting $p=q=1$ in Eq.~\eqref{ci2p-12q-1}. For all other values of $\ell$ the coefficient is obtained by evaluating the following expressions at leading order
\begin{equation}
\square_x\square_y\square_z\langle\phi_i(x)\phi_j(y)\phi_k(z)\rangle \,\stackrel{\mathrm{LO}}{=}\, \frac{2^8\ell(\ell-1)}{(2\ell-1)^6} \frac{C_{\phi_i \phi_j \phi_k}}{|x-y|^{\delta_c+2}|x-z|^{\delta_c+2}|y-z|^{\delta_c+2}},
\end{equation}
\begin{eqnarray}
\langle\square_x\phi_i(x)\square_y\phi_j(y)\square_z\phi_k(z)\rangle &\,\stackrel{\mathrm{LO}}{=}&\,\frac{V_{i\,a_1\cdots a_\ell\,b_1\cdots b_\ell}V_{j\,b_1\cdots b_\ell\,c_1\cdots c_\ell}V_{k\,c_1\cdots c_\ell\,a_1\cdots a_\ell}}{(2\ell)!^3}\times \nonumber\\
&{}&\frac{C^\mathrm{free}_{2\ell,\,2\ell,\,2\ell}}{|x\!-\!y|^{2\ell\delta_c}|x\!-\!z|^{2\ell\delta_c}|y\!-\!z|^{2\ell\delta_c}},
\end{eqnarray}
and equating the two, recalling that $2\ell\delta_c = \delta_c + 2$. This gives
\begin{equation} \label{cijk}
\boxed{
C_{\phi_i \phi_j \phi_k} = \frac{(2\ell-1)^6c^{3\ell}}{2^8\ell(\ell-1)\ell!^3}\,V_{i\,a_1\cdots a_\ell\,b_1\cdots b_\ell}V_{j\,b_1\cdots b_\ell\,c_1\cdots c_\ell}V_{k\,c_1\cdots c_\ell\,a_1\cdots a_\ell},
}
\end{equation}
where we have directly used $C^\mathrm{free}_{2\ell,\,2\ell,\,2\ell} =(2\ell)!^3c^{3\ell}/\ell!^3$.
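The value used for $C^\mathrm{free}_{2\ell,\,2\ell,\,2\ell}$ follows from the Wick-counting form of the free coefficients with $l_{12}=l_{23}=l_{31}=\ell$; a short check (ours, with $c$ set to one):

```python
from math import factorial as f

def C_free(k1, k2, k3):
    # Assumed Wick-counting form (c = 1); the l's count the propagators
    # joining each pair of operators.
    l12 = (k1 + k2 - k3) // 2
    l23 = (k2 + k3 - k1) // 2
    l31 = (k3 + k1 - k2) // 2
    return f(k1) * f(k2) * f(k3) // (f(l12) * f(l23) * f(l31))

for ell in range(1, 10):
    # here l12 = l23 = l31 = ell, and the c-exponent would be 3*ell
    assert C_free(2 * ell, 2 * ell, 2 * ell) == f(2 * ell) ** 3 // f(ell) ** 3
print("C_free(2l,2l,2l) verified")
```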
\subsubsection{Generalization of $C_{1,1,2k}$} \label{ssec:c112k}
The generalization of the structure constant $C_{1,1,2k}$ to the multi-field case is defined as the coefficient appearing in the three-point correlation function
\begin{equation}
\left\langle \phi_i(x) \phi_j(y) {\cal S}_{2k}(z) \right\rangle
\end{equation}
where the operator ${\cal S}_{2k}$ with $2k$ fields is
a scaling operator satisfying \eqref{gamma-k}. Using the result of \cite{Codello:2017qek} and following the arguments of the previous sections, this is straightforwardly calculated to be
\begin{empheq}[box=\fbox]{align}
C_{\phi_i \phi_j {\cal S}_{2k}} &= \frac{(n-1)^4c^{2n+k-1}}{16k(k-1)(k-n)(k-n+1)} \frac{(2k)!}{k!^2(2n-k-1)!^2} \,\,\nonumber\\
&\times V_{ii_1\cdots i_{2n-k-1}a_1\cdots a_k}V_{ji_1\cdots i_{2n-k-1}b_1\cdots b_k}S_{a_1\cdots a_k \,b_1\cdots b_k}. \label{ciju}
\end{empheq}
As in the single-field case, in this equation $n$ is either an integer or a half-odd number and $k$ is constrained to the range $2\leq k \leq 2n-1$ and $k\neq n,n-1$.
\subsection{``Fixed point'' equation from CFT}\label{sec:FPcond}
We conclude this section showing in general how the constraints imposed by conformal symmetry on two and three point functions together with the use of the Schwinger-Dyson equations can fix the possible critical theories at leading order in $\epsilon$.
We shall follow a path which is slightly different from the one employed in~\cite{Nii:2016lpa, Codello:2017qek}, and do not directly rely on the conditions on the scaling dimensions of descendant operators from the equation of motion (when the interactions are turned on below the critical dimension).
Interestingly enough, we find conditions which can be simplified to match exactly the fixed point condition of the RG approach in its functional form~\cite{ODwyer:2007brp, Codello:2017hhh,Codello:2017epp, Osborn:2017ucf, CSVZ4}, which we dubbed the functional perturbative RG approach. It is well known that, in general, fixed point equations admit solutions characterized by internal symmetries not necessarily realized away from criticality, giving a scenario in which critical theories can have a higher level of symmetry, or an emergent symmetry. Therefore all the discussions in the literature with RG techniques regarding possible symmetry enhancements at criticality~\cite{Zia:1974nv,Michel:1983in, TMTB,Osborn:2017ucf} are directly applicable also in this CFT perturbative framework, at least in the cases shown below, i.e.\ all unitary multicritical models and the one with a cubic potential.
\subsubsection{The $d_c=6$ case} \label{ss:fp-cubic}
The only odd model that we are able to analyze in this respect is the one corresponding to $n=3/2$. Let us consider the three-point function of the scaling fields $\phi_i$, which takes the following form
\begin{equation}
\langle \phi_i(x) \phi_j(y) \phi_k(z)\rangle = \frac{C_{\phi_i\phi_j\phi_k}}{|x-y|^{\Delta_i+\Delta_j-\Delta_k}|y-z|^{\Delta_j+\Delta_k-\Delta_i}|x-z|^{\Delta_i+\Delta_k-\Delta_j}} , \quad C_{\phi_i\phi_j\phi_k} = -\frac{c^2}{4} V_{ijk},
\end{equation}
where the structure constant has been computed by setting $\ell=p=q=1$ in \eqref{ci2p-12q-1}.
Acting with three Laplacians on the general scaling form one obtains at leading order
\begin{equation}
\Box_x\Box_y\Box_z\langle \phi_i(x)\phi_j(y)\phi_k(z)\rangle \stackrel{\mathrm{LO}}{=} \frac{32(\epsilon\!-\!2(\gamma_i\!+\!\gamma_j\!+\!\gamma_k))}{|x\!-\!y|^4|y\!-\!z|^4|x\!-\!z|^4}C_{\phi_i\phi_j\phi_k} = \frac{8c^2(2(\gamma_i\!+\!\gamma_j\!+\!\gamma_k)\!-\!\epsilon)}{|x\!-\!y|^4|y\!-\!z|^4|x\!-\!z|^4} V_{ijk},
\label{eqcub3box2}
\end{equation}
while using the SDE one gets
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\langle \Box_x\phi_i(x) \Box_y\phi_j(y) \Box_z\phi_k(z)\rangle &\stackrel{\mathrm{LO}}{=}&
\frac{1}{2!^3} V_{iab}V_{jcd}V_{kef} \langle [\phi_a\phi_b](x) [\phi_c\phi_d](y)[\phi_e\phi_f](z)\rangle
\nonumber\\
&=&\frac{c^3\, V_{iab} V_{jbc}V_{kca}}{|x-y|^4|y-z|^4|x-z|^4}.
\label{eqcub3box1}
\end{eqnarray}}%
Equating the expressions on the right hand side of Eqs.~\eqref{eqcub3box2} and~\eqref{eqcub3box1} one finds
\begin{equation} \label{fp-6}
\boxed{
8(2(\gamma_i+\gamma_j+\gamma_k)-\epsilon) V_{ijk} = c\, V_{iab} V_{jbc} V_{kca}.
}
\end{equation}
Making the replacement $V\rightarrow 8 V/\sqrt{c}$ to accord with our RG conventions this becomes
\begin{equation}
(2(\gamma_i+\gamma_j+\gamma_k)-\epsilon)V_{ijk} = 8\, V_{iab} V_{jbc} V_{kca}.
\label{CFTd6cond}
\end{equation}
One can easily verify that this is nothing but the functional fixed point equation obtained from RG, written in the diagonal basis. To see this, let us take a look at the leading order, i.e.\ cubic, beta function, which in terms of dimensionless variables reads
\begin{equation}
\beta_v = -d v +\frac{d-2}{2}\phi_i v_i + \phi_i \gamma_{ij} v_j-\frac{2}{3} v_{ij} v_{jl} v_{li}\,.
\end{equation}
Taking the third field-derivative and setting the result to zero we get the fixed point equation
\begin{equation}
0 = -d v_{ijk} +3\frac{d-2}{2} v_{ijk} + \gamma_{ia} v_{ajk}+\gamma_{ja} v_{aik}+\gamma_{ka} v_{aij}-4 v_{iab} v_{jbc} v_{kca}\,.
\end{equation}
Setting $d=6-\epsilon$ one has
\begin{equation}
0 = -\frac{\epsilon}{2} v_{ijk} + \gamma_{ia} v_{ajk}+\gamma_{ja} v_{aik}+\gamma_{ka} v_{aij}-4 v_{iab} v_{jbc} v_{kca}\,,
\end{equation}
which, in the diagonal basis where $\gamma_{ij}\rightarrow \gamma_i\delta_{ij}$, matches the condition found in Eq.~\eqref{CFTd6cond}.
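Indeed, the two conditions differ only by an overall factor of two; an elementary single-field check (ours, with arbitrary illustrative values of $\epsilon$, $\gamma$ and $g=v_{111}$):

```python
from fractions import Fraction

# The RG condition  -eps/2 g + 3 gamma g - 4 g^3 = 0  and the CFT condition
# (6 gamma - eps) g = 8 g^3  differ by an overall factor of 2 (single field).
eps, gam, g = Fraction(3, 7), Fraction(1, 5), Fraction(2, 9)  # arbitrary values
rg = -eps / 2 * g + 3 * gam * g - 4 * g ** 3
cft = (6 * gam - eps) * g - 8 * g ** 3
assert 2 * rg == cft
```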
\subsubsection{The $d_c=4$ case}
Suppose that $S_{ij}\phi_i\phi_j$ is a scaling operator with anomalous dimension $\gamma^S_2$. We have seen that in $d_c=4$ quite generally the matrix $S_{ij}$ satisfies the eigenvalue equation \eqref{crit-cft-2}. We can write on the one hand
\begin{equation}
\square_x\square_y \langle \phi_i(x) \,\phi_j(y)\, [S_{kl}\phi_k\phi_l](z) \rangle
\stackrel{\mathrm{LO}}{=} C^{\mathrm{free}}_{112} \, \frac{4\gamma^S_2(\epsilon-\gamma^S_2)}{|x-y|^{4}|x-z|^{2}|y-z|^{2}} S_{ij},
\end{equation}
while on the other hand, using the SDE, we obtain
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\langle \square_x\phi_i(x) \,\square_y\phi_j(y)\, [S_{kl}\phi_k\phi_l](z) \rangle
&=& \frac{1}{3!^2}V_{iabc} V_{jdef}\langle [\phi_a\phi_b\phi_c](x) \,[\phi_d\phi_e\phi_f](y)\, [S_{kl}\phi_k\phi_l](z) \rangle \nonumber\\
&\stackrel{\mathrm{LO}}{=}& \frac{C^{\mathrm{free}}_{332}}{3!^2} \, \frac{V_{pqik} V_{pqjl} S_{kl}}{|x-y|^{4}|x-z|^{2}|y-z|^{2}}.
\end{eqnarray}}%
Taking into account that $C^{\mathrm{free}}_{332} = 3!^2c^4$ and $C^{\mathrm{free}}_{112} = 2c^2$,
one then finds
\begin{equation}
c^4 V_{pqik} V_{pqjl} S_{kl} = 8c^2\gamma^S_2(\epsilon-\gamma^S_2) S_{ij}.
\end{equation}
In order to rewrite this condition in a simpler form, we can perform some manipulations, eliminating the explicit dependence on the anomalous dimension of the quadratic operator using \eqref{crit-cft-2}
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
c^4 V_{pqik} V_{pqjl} S_{kl} &=& 8c^2\left[\epsilon\frac{c}{4} V_{ijkl}-\gamma^S_2\frac{c}{4} V_{ijkl}\right] S_{kl} \nonumber\\
&=& 8c^2\left[\epsilon\frac{c}{4} V_{ijkl}-\left(\frac{c}{4}\right)^2 V_{ijpq} V_{pqkl}\right] S_{kl} \,.
\end{eqnarray}}%
The coefficient tensors on both sides are now independent of $S_{kl}$, and the equation holds for any symmetric matrix $S_{kl}$. One can therefore drop $S_{kl}$, after symmetrizing the coefficients in $kl$. Simplifying the result one obtains
\begin{equation} \label{fp-4}
\boxed{
V_{pqik}V_{ljpq} + V_{pqil}V_{kjpq} + V_{ijpq} V_{pqkl} -\epsilon\,\frac{4}{c}\, V_{ijkl} =0\,.}
\end{equation}
Making the rescaling $V\rightarrow 4V/c$ to match the RG normalization removes the $c/4$ factors and we finally get
\begin{equation} \label{fp-cft-4}
V_{pqik} V_{ljpq}+ V_{pqil} V_{kjpq} + V_{ijpq} V_{pqkl} -\epsilon V_{ijkl} =0\,,
\end{equation}
which is nothing but the functional fixed point equation from RG.
Indeed, recalling the functional perturbative RG beta function for the potential at leading order,
written in dimensionless variables,
\begin{equation}
\beta_v = -d v +\frac{d-2}{2}\phi_i v_i
+\frac{1}{2} v_{ij} v_{ij}\,
\end{equation}
and taking four field derivatives, one obtains
\begin{equation}
0 = -d v_{ijkl} +2(d-2) v_{ijkl} +
v_{pqij} v_{pqkl}+v_{pqik} v_{pqjl}+v_{pqil} v_{pqjk}\,.
\end{equation}
Setting $d=4-\epsilon$ one then finds
\begin{equation} \label{fp-rg-4}
v_{pqij} v_{pqkl}+v_{pqik} v_{pqjl}+v_{pqil} v_{pqjk}
-\epsilon v_{ijkl}=0\,,
\end{equation}
in agreement with the result \eqref{fp-cft-4} from conformal symmetry.
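As a minimal sanity check (ours), reduce \eqref{fp-rg-4} to a single field: the three quadratic terms collapse to $3v_{1111}^2$, giving $3v_{1111}^2=\epsilon\,v_{1111}$, whose nontrivial root $v_{1111}=\epsilon/3$ is the familiar leading-order Wilson-Fisher coupling in these conventions:

```python
from fractions import Fraction

# Single-field reduction of the quartic fixed point equation:
# 3*lam**2 - eps*lam = 0 has the nontrivial root lam = eps/3.
eps = Fraction(1, 100)  # illustrative value of epsilon
lam = eps / 3
assert 3 * lam ** 2 - eps * lam == 0
assert lam != 0
```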
\subsubsection{General even models} \label{ss:gem}
We pointed out in Section~\ref{ss:rr} that the anomalous dimension $\gamma^S_{2n-2}$, which is given by the solution \eqref{gamma-k} to the recurrence relation \eqref{rr}, can also be obtained in a different way, and that the consistency of the two results gives rise to the fixed point conditions for all multicritical even models. In this section we show this in detail. The first step is to consider the multi-field generalization of the structure constant $C_{1,2p,2q-1}$ given in \eqref{ciuv-even}. For $p=n-1$ and $q=1$ this is obtained by applying $\Box_x$ to the following three-point function
\begin{equation}
\left\langle \phi_i(x)\phi_j(y) {\cal S}_{2n-2}(z)\right\rangle,
\end{equation}
and evaluating it once by direct application of $\Box_x$ and once by using the e.o.m. For this case, in the notation of Eq.~\eqref{ciuv-even} we have
\begin{equation}
\left\lbrace
\begin{array}{l}
t+r=2n-1 \\
r+s = 2n-2 \\
s+t = 1 \\
\end{array}
\right.
\qquad \Rightarrow \qquad
\left\lbrace
\begin{array}{l}
r= 2n-2 \\
s= 0 \\
t= 1 \\
\end{array}
\right.
\end{equation}
which leads to the following expression
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
C_{\phi_i\phi_j\mathcal{S}_{2n-2}} &=& \frac{(n-1)^2}{(2n-1)!} \frac{C^{\mathrm{free}}_{2n-1,2n-2,1}}{4(n-2)(n-1)} V_{il_1\cdots l_{2n-2}j} S_{l_1\cdots l_{2n-2}}
\nonumber\\
&=& \frac{(n-1)c^{2n-1}}{4(n-2)} V_{ijl_1\cdots l_{2n-2}} S_{l_1\cdots l_{2n-2}} \,,
\end{eqnarray}}%
where we have used $C^{\mathrm{free}}_{2n-1,2n-2,1} = (2n-1)! c^{2n-1}$.
Let us now apply two boxes to the above three-point function, as suggested in Section~\ref{ss:rr}. The result coming from the use of the SDE is
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \left\langle \Box_x \phi_i(x)\Box_y\phi_j(y) [S_{l_1\cdots l_{2n-2}} \phi_{l_1}\!\cdots \phi_{l_{2n-2}}](z)\right\rangle
\nonumber\\
&=& \frac{1}{(2n-1)!^2} V_{ii_1\cdots i_{2n-1}}V_{jj_1\cdots j_{2n-1}} \left\langle [\phi_{i_1} \!\cdots \phi_{i_{2n-1}}](x)[\phi_{j_1} \!\cdots \phi_{j_{2n-1}}](y) [S_{l_1\cdots l_{2n-2}} \phi_{l_1}\!\cdots \phi_{l_{2n-2}}](z)\right\rangle
\nonumber\\
&=& \frac{1}{(2n-1)!^2} V_{ii_1\cdots i_{n-1}j_1\cdots j_n}V_{jj_1\cdots j_nl_1\cdots l_{n-1}}S_{l_1\cdots l_{n-1}i_1\cdots i_{n-1}} \frac{C^{\mathrm{free}}_{2n-1,2n-1,2n-2}}{|x-y|^{2\delta_c+2}|y-z|^2|z-x|^2}
\nonumber\\
&=& \frac{(2n-2)!}{(n-1)!^2n!} V_{ii_1\cdots i_{n-1}j_1\cdots j_n}V_{jj_1\cdots j_nl_1\cdots l_{n-1}}S_{l_1\cdots l_{n-1}i_1\cdots i_{n-1}} \frac{c^{3n-2}}{|x-y|^{2\delta_c+2}|y-z|^2|z-x|^2},
\end{eqnarray}}%
where we have used
\begin{equation}
C^{\mathrm{free}}_{2n-1,2n-1,2n-2} = \frac{(2n-1)!^2(2n-2)!}{(n-1)!^2n!} c^{3n-2}
\end{equation}
and the counting of the indices in the third line comes from
\begin{equation}
\left\lbrace
\begin{array}{l}
r+s = 2n-1 \\
s+t = 2n-1 \\
t+r = 2n-2 \\
\end{array}
\right.
\qquad \Rightarrow \qquad
\left\lbrace
\begin{array}{l}
r = n-1 \\
s = n \\
t = n-1 \\
\end{array}
\right.
\end{equation}
On the other hand, direct application of the boxes gives
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\Box_x \Box_y \left\langle \phi_i(x)\phi_j(y) {\cal S}_{2n-2}(z)\right\rangle \hspace{-2cm} && \nonumber\\
&=&
\Box_x \Box_y \frac{C_{\phi_i\phi_j\mathcal{S}_{2n-2}}}{|x-y|^{(4-2n)\delta-\gamma^S_{2n-2}}|y-z|^{(2n-2)\delta+\gamma^S_{2n-2}}|z-x|^{(2n-2)\delta+\gamma^S_{2n-2}}}
\nonumber\\
&=& \frac{8(n-2)}{(n-1)^2} \left((n-1)\epsilon-\gamma^S_{2n-2}\right)\frac{C_{\phi_i\phi_j\mathcal{S}_{2n-2}}}{|x-y|^{2\delta_c+2}|y-z|^2|z-x|^2} \,.
\end{eqnarray}}%
Equating the two results and simplifying a bit leads to
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \frac{2}{n-1}\left((n-1)\epsilon-\gamma_{2n-2}^S\right) V_{ijl_1\cdots l_{2n-2}}S_{l_1\cdots l_{2n-2}}
\nonumber\\
&=& \frac{(2n-2)!c^{n-1}}{n!(n-1)!^2} V_{ii_1\cdots i_{n-1}j_1\cdots j_n} V_{j j_1\cdots j_n l_1\cdots l_{n-1}} S_{l_1\cdots l_{n-1}i_1\cdots i_{n-1}}\,.
\end{eqnarray}}%
One can now eliminate $\gamma_{2n-2}^S$ using the formula \eqref{gamma-k} for $l=n-2$. This gives
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
(n-1)\epsilon\, V_{ijl_1\cdots l_{2n-2}}S_{l_1\cdots l_{2n-2}} &=& V_{ijl_1\cdots l_{2n-2}} \frac{n-1}{n} c_{n,2n-2} V_{j_1\cdots j_n(l_1\cdots l_n} S_{l_{n+1}\cdots l_{2n-2})j_1\cdots j_n}
\nonumber\\
&+& \frac{(2n-3)!c^{n-1}}{n!(n-2)!^2} V_{ii_1\cdots i_{n-1}j_1\cdots j_n} V_{j j_1\cdots j_n l_1\cdots l_{n-1}} S_{l_1\cdots l_{n-1}i_1\cdots i_{n-1}}
\nonumber\\[2mm]
&=& c_{n,2n-2}\frac{n-1}{n} V_{ijl_1\cdots l_{2n-2}} V_{j_1\cdots j_nl_1\cdots l_n} S_{l_{n+1}\cdots l_{2n-2}j_1\cdots j_n}
\nonumber\\
&+& c_{n,2n-2} V_{ii_1\cdots i_{n-1}j_1\cdots j_n} V_{j j_1\cdots j_n l_1\cdots l_{n-1}} S_{l_1\cdots l_{n-1}i_1\cdots i_{n-1}}
\nonumber\\[2mm]
&=& c_{n,2n-2}\frac{n-1}{n} V_{ijj_1\cdots j_nl_{n+1}\cdots l_{2n-2}} V_{j_1\cdots j_nl_1\cdots l_n} S_{l_1\cdots l_{2n-2}}
\nonumber\\
&+& c_{n,2n-2} V_{il_n\cdots l_{2n-2}j_1\cdots j_n} V_{j j_1\cdots j_n l_1\cdots l_{n-1}} S_{l_1\cdots l_{2n-2}}\,,
\end{eqnarray}}%
where in the second equation we have used the relation
\begin{equation}
c_{n,2n-2} = \frac{c^{n-1}}{2(n-2)!n!} \frac{(2n-2)!}{(n-1)!} = \frac{(2n-3)!c^{n-1}}{n!(n-2)!^2}
\end{equation}
and in the third equality we have renamed the dummy indices so that the indices on $S_{l_1\cdots l_{2n-2}}$ match those on the l.h.s. Now we can drop $S_{l_1\cdots l_{2n-2}}$ from both sides and symmetrize the remaining tensors in $l_1\cdots l_{2n-2}$:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
(n-1)\epsilon\, V_{ijl_1\cdots l_{2n-2}} &=& c_{n,2n-2} \frac{n-1}{n} V_{j_1\cdots j_nij(l_{n+1}\cdots l_{2n-2}} V_{l_1\cdots l_n)j_1\cdots j_n}
\nonumber\\
&+& c_{n,2n-2} V_{j_1\cdots j_ni(l_n\cdots l_{2n-2}} V_{l_1\cdots l_{n-1})j j_1\cdots j_n}
\nonumber\\[3mm]
&=& c_{n,2n-2}\frac{1}{2n^2}\, 2n(n-1) V_{j_1\cdots j_nij(l_{n+1}\cdots l_{2n-2}} V_{l_1\cdots l_n)j_1\cdots j_n}
\nonumber\\
&+& c_{n,2n-2}\frac{1}{2n^2}\, 2n^2 V_{j_1\cdots j_ni(l_n\cdots l_{2n-2}} V_{l_1\cdots l_{n-1})j j_1\cdots j_n} \nonumber\\[3mm]
&=& c_{n,2n-2}\frac{1}{2n^2}\, \frac{(2n)!}{(2n-2)!} V_{j_1\cdots j_n(ijl_{n+1}\cdots l_{2n-2}} V_{l_1\cdots l_n)j_1\cdots j_n}
\nonumber\\[3mm]
&=& \frac{(n-1)(2n)!c^{n-1}}{4n!^3} V_{j_1\cdots j_n(ijl_{n+1}\cdots l_{2n-2}} V_{l_1\cdots l_n)j_1\cdots j_n} \,.
\end{eqnarray}}%
With a simple manipulation we have seen that the tensor on the r.h.s is not only symmetric in $l_1\cdots l_{2n-2}$ but also in $ijl_1\cdots l_{2n-2}$ as expected from the l.h.s.
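The factorial identity for $c_{n,2n-2}$ used above, together with the final prefactor $(n-1)(2n)!\,c^{n-1}/4n!^3$, can be verified with exact rational arithmetic; a small stdlib check (ours):

```python
from fractions import Fraction as Fr
from math import factorial as f

def c_n_2nm2(n):
    """c_{n,2n-2} (coefficient of c^(n-1)), first form quoted in the text."""
    return Fr(f(2*n - 2), 2 * f(n - 2) * f(n) * f(n - 1))

def c_n_2nm2_alt(n):
    """Equivalent closed form (2n-3)!/(n!(n-2)!^2)."""
    return Fr(f(2*n - 3), f(n) * f(n - 2)**2)

def final_prefactor(n):
    """Coefficient (n-1)(2n)!/(4 n!^3) appearing in the last line above."""
    return Fr((n - 1) * f(2*n), 4 * f(n)**3)
```

The last combinatorial step, $c_{n,2n-2}\cdot\frac{1}{2n^2}\frac{(2n)!}{(2n-2)!}$, indeed reproduces the final prefactor for all $n$.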
This leads to the following conditions on the couplings of the potential
\begin{equation} \label{fp-even}
\boxed{
0 = (1-n)\epsilon\, V_{i_1\cdots i_{2n}} + \frac{(n-1)(2n)!}{4n!^3}c^{n-1} V_{j_1\cdots j_n(i_1\cdots i_n} V_{i_{n+1}\cdots i_{2n})j_1\cdots j_n}.}
\end{equation}
It is interesting to point out that comparing this fixed point equation with the recurrence relation \eqref{gamma-k} for $l=n$ one immediately notices that $S_{i_1\cdots i_{2n}}=V_{i_1\cdots i_{2n}}$, corresponding to the classically marginal operator
\begin{equation} \label{V}
V_{i_1\cdots i_{2n}} \phi_{i_1} \cdots \phi_{i_{2n}},
\end{equation}
is always an eigenvector with eigenvalue
\begin{equation} \label{gamma-V-2n}
\gamma^V_{2n} = 2(n-1)\epsilon.
\end{equation}
In RG terms, the dimension of the coupling corresponding to the operator \eqref{V} is therefore given at leading order by $\theta^V_{2n} = d-2n \delta - \gamma^V_{2n} = -(n-1)\epsilon$ which is a negative number, hence indicating that the fixed point is infrared stable along the direction of the operator \eqref{V}. The complete stability analysis of the solutions to \eqref{fp-even} requires solving the eigenvalue equation \eqref{gamma-k} for $l=n$.
In Eq.~\eqref{fp-even} one can also make the rescaling $V\rightarrow 4V/\left((n-1)c^{n-1}\right)$, as is customary in RG computations
\begin{equation}
0 = (1-n)\epsilon\, V_{i_1\cdots i_{2n}} + \frac{(2n)!}{n!^3} V_{j_1\cdots j_n(i_1\cdots i_n} V_{i_{n+1}\cdots i_{2n})j_1\cdots j_n}\,.
\end{equation}
This is indeed nothing but the functional fixed point equation for a general even model with multicriticality label $n$ derived from RG. It is obtained by taking the $2n$th field derivative of the leading order beta functional
\begin{equation}
\beta_v = -d v +\frac{d-2}{2}\phi_i v_i +\frac{1}{n!} v_{j_1\cdots j_n} v_{j_1\cdots j_n}\,,
\end{equation}
and setting $d=2n/(n-1)-\epsilon$. It might be worth mentioning that the anomalous dimensions of higher order operators obtained in Section~\ref{ss:rr} can be extracted from the above beta functional. One simply needs to deform the potential as $v\rightarrow v+ \delta v$, linearize the r.h.s in the deformation $\delta v$, take $n+l$ field derivatives (where $l\geq 0$), and evaluate the result at the fixed point, where only the $2n$th derivative of the potential persists. The tensor coefficient of $\delta v_{i_1\cdots i_{n+l}}$ then gives the stability matrix, which coincides with \eqref{gamma-k}, of course after a suitable rescaling of the potential. Let us also note that since the $V_{i_1 \cdots i_m}$ constitute the most general set of couplings at criticality, the equations \eqref{fp-6}, \eqref{fp-4} and \eqref{fp-even} are completely general at leading order and admit all possible fixed points, while at higher orders in perturbation theory these equations are corrected by higher powers of the interactions $V_{i_1 \cdots i_m}$.
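For a single field ($N=1$) the fixed-point condition and the beta functional above can be cross-checked numerically. The sketch below (ours; it assumes the single-field reduction $v=V\phi^{2n}/(2n)!$, for which the symmetrizations are trivial) verifies that the same $V_*$ solves both, and that for $n=2$ it reduces to the familiar leading-order Wilson--Fisher value $V_*=\epsilon/3$ in this normalization.

```python
from math import factorial as f

def beta_coeff(n, V, eps):
    """phi^(2n) coefficient of beta_v = -d v + (d-2)/2 phi v' + (1/n!)(v^(n))^2
    for a single field with v = V phi^(2n)/(2n)! and d = 2n/(n-1) - eps."""
    d = 2*n/(n - 1) - eps
    return (-d + n*(d - 2)) * V / f(2*n) + V**2 / f(n)**3

def fp_even(n, V, eps):
    """Single-field version of the boxed fixed-point equation (rescaled form)."""
    return (1 - n)*eps*V + f(2*n)/f(n)**3 * V**2

def V_star(n, eps):
    """Non-Gaussian root, V* = (n-1) eps n!^3/(2n)!."""
    return (n - 1)*eps*f(n)**3 / f(2*n)
```

Both residuals vanish at $V_*$ for a range of $n$, confirming that the boxed equation is the $2n$th field derivative of the beta functional.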
\subsection{The missing pieces in the recurrence relation} \label{ss:mprr}
\subsubsection{The case $k=2n-2$}
As discussed towards the end of Section~\ref{ss:rr}, the reasonings in that section do not justify the validity of \eqref{rr} for $k=2n-2$ when the operator ${\cal S}_{2n-1}$ is a descendant operator. Indeed for the multi-field models, not all operators ${\cal S}_{2n-1}$ are primaries. The descendant ones take the form of the r.h.s of the equation of motion at the critical point
\begin{equation}
\Box_x \phi_i = \frac{1}{(2n-1)!} V_{ii_1\cdots i_{2n-1}} \phi_{i_1}\cdots \phi_{i_{2n-1}}\,.
\end{equation}
This means that the descendant operators form a set of $N$ operators, of degree $2n-1$ in the fields, labeled by the index $i$:
\begin{equation} \label{do}
\mathcal{V}_i = V_{ii_1\cdots i_{2n-1}}\phi_{i_1}\cdots \phi_{i_{2n-1}}, \quad i=1,2,\cdots , N\,.
\end{equation}
We therefore label the corresponding anomalous dimensions by the index $i$, and denote them from now on as $\gamma^i_{2n-1}$. Let us now insert this into Eq.~\eqref{gamma-k} (setting also $l=n-1$) to see what we get
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\gamma^i_{2n-1} V_{ii_1\cdots i_{2n-1}}
&=& \frac{(n-1)(2n)!c^{n-1}}{4n!^3}\, V_{j_1\cdots j_n(i_1\cdots i_n}V_{i_{n+1}\cdots i_{2n-1})ij_1\cdots j_n}
\nonumber\\
&=& \frac{(n-1)(2n)!c^{n-1}}{4n!^3}V_{j_1\cdots j_n(i_1\cdots i_n}V_{i_{n+1}\cdots i_{2n-1}i)j_1\cdots j_n},
\end{eqnarray}}%
where in the second line the $i$ index is taken into the symmetrizing parenthesis. This can be done because
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
V_{j_1\cdots j_n(i_1\cdots i_n}V_{i_{n+1}\cdots i_{2n-1})ij_1\cdots j_n} &=& \frac{1}{2}\left[V_{j_1\cdots j_n(i_1\cdots i_n}V_{i_{n+1}\cdots i_{2n-1})ij_1\cdots j_n} + V_{j_1\cdots j_ni(i_1\cdots i_{n-1}}V_{i_n\cdots i_{2n-1})j_1\cdots j_n} \right]
\nonumber\\
&=& V_{j_1\cdots j_n(i_1\cdots i_n}V_{i_{n+1}\cdots i_{2n-1}i)j_1\cdots j_n}.
\end{eqnarray}}%
Now on the r.h.s of the above equation one can use the ``fixed point'' equation \eqref{fp-even} derived in the previous section to obtain
\begin{equation}
\gamma^i_{2n-1} V_{ii_1\cdots i_{2n-1}} = \frac{(n-1)(2n)!c^{n-1}}{4n!^3}V_{j_1\cdots j_n(i_1\cdots i_n}V_{i_{n+1}\cdots i_{2n-1}i)j_1\cdots j_n} = (n-1)\epsilon V_{ii_1\cdots i_{2n-1}}.
\end{equation}
We therefore have
\begin{equation} \label{gamma-2n-1}
\gamma^i_{2n-1} = (n-1)\epsilon,
\end{equation}
consistent with $\Delta^\mathcal{V}_i=\Delta_i+2$, where $\Delta^\mathcal{V}_i$ is the scaling dimension of $\mathcal{V}_i$ defined in \eqref{do}. So the anomalous dimensions $\gamma^i_{2n-1}$ corresponding to descendant operators also satisfy \eqref{gamma-k}. This means that \eqref{rr} is valid for $k=2n-2$ for any ${\cal S}_{2n-1}$.
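The consistency statement $\Delta^\mathcal{V}_i=\Delta_i+2$ can be checked with exact arithmetic by keeping track of dimensions as pairs $(\text{const},\ \text{coefficient of }\epsilon)$; a small stdlib sketch (ours), using $\delta=(d-2)/2$ with $d=2n/(n-1)-\epsilon$ and the leading-order rule that an operator of degree $k$ in the fields has dimension $k\delta+\gamma_k$:

```python
from fractions import Fraction as F

def delta_phi(n):
    """Leading-order field dimension delta = (d-2)/2 at d = 2n/(n-1) - eps,
    stored as (constant part, coefficient of eps)."""
    return (F(1, n - 1), F(-1, 2))

def dim_descendant(n):
    """Dimension of the descendant V_i = V_{i i1...i_{2n-1}} phi_{i1}...phi_{i_{2n-1}}:
    (2n-1)*delta + gamma, with gamma^i_{2n-1} = (n-1)*eps as derived above."""
    a, b = delta_phi(n)
    return ((2*n - 1)*a, (2*n - 1)*b + (n - 1))
```

The check confirms that $(2n-1)\delta+(n-1)\epsilon = \delta+2$ holds exactly, order by order in $\epsilon$.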
\subsubsection{The case $k=2n-1$}
The final missing piece in the recurrence relations \eqref{rr} is the case $k=2n-1$. We briefly pointed out earlier in Section~\ref{ss:rr} that this missing information comes from an analysis of the three-point function
\begin{equation}
\left\langle \phi_i(x)\phi_j(y) {\cal S}_{2n}(z)\right\rangle,
\end{equation}
when two box operators $\Box_x \Box_y$ are applied to it. But before that, we need to do the same analysis when only one operator $\Box_x$ is applied. This gives the multi-field generalization of the structure constant $C_{1,1,2n}$ which is also obtained by evaluating Eq.~\eqref{ciuv-even} for $p=n$ and $q=1$.
This gives the structure constant
\begin{equation}
C_{\phi_i\phi_j\mathcal{S}_{2n}} = \frac{n-1}{4n} \frac{C^{\mathrm{free}}_{2n-1,1,2n}}{(2n-1)!} V_{il_1\cdots l_{2n-1}} S_{jl_1\cdots l_{2n-1}}
= \frac{(n-1)c^{2n}}{2} V_{il_1\cdots l_{2n-1}} S_{jl_1\cdots l_{2n-1}}\,,
\end{equation}
where we have used $C^{\mathrm{free}}_{2n-1,1,2n} =(2n)! c^{2n}$.
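The simplification of the prefactor, $\frac{n-1}{4n}\frac{(2n)!}{(2n-1)!}=\frac{n-1}{2}$, is immediate but easy to spot-check with exact rationals (ours):

```python
from fractions import Fraction as Fr
from math import factorial as f

def prefactor(n):
    """(n-1)/(4n) * C^free_{2n-1,1,2n}/(2n-1)!, with c set to 1."""
    return Fr(n - 1, 4*n) * Fr(f(2*n), f(2*n - 1))
```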
Let us now apply two boxes to the above three-point function. As usual we evaluate this once using the SDE and once by applying the box operators directly. The SDE method gives
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \left\langle \Box_x \phi_i(x)\Box_y\phi_j(y) \mathcal{S}_{2n}(z)\right\rangle
\\
&=& \frac{1}{(2n-1)!^2} V_{ii_1\cdots i_{2n-1}}V_{jj_1\cdots j_{2n-1}} \left\langle [\phi_{i_1}\cdots \phi_{i_{2n-1}}](x)[\phi_{j_1}\cdots \phi_{j_{2n-1}}](y) \mathcal{S}_{2n}(z)\right\rangle
\nonumber\\
&=& \frac{1}{(2n-1)!^2} V_{ii_1\cdots i_nj_1\cdots j_{n-1}}V_{jj_1\cdots j_{n-1}l_1\cdots l_n}S_{l_1\cdots l_ni_1\cdots i_n} \frac{C^{\mathrm{free}}_{2n-1,2n-1,2n}}{|x-y|^2|y-z|^{2\delta_c+2}|z-x|^{2\delta_c+2}}
\nonumber\\
&=& \frac{(2n)!}{(n-1)!n!^2}V_{ii_1\cdots i_nj_1\cdots j_{n-1}}V_{jj_1\cdots j_{n-1}l_1\cdots l_n}S_{l_1\cdots l_ni_1\cdots i_n} \frac{c^{3n-1}}{|x-y|^2|y-z|^{2\delta_c+2}|z-x|^{2\delta_c+2}},\nonumber
\end{eqnarray}}%
where we have used
\begin{equation}
C^{\mathrm{free}}_{2n-1,2n-1,2n} = \frac{(2n-1)!^2(2n)!}{(n-1)!n!^2} c^{3n-1}\,.
\end{equation}
Direct application of the boxes instead leads to
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\Box_x \Box_y \left\langle \phi_i(x)\phi_j(y) \mathcal{S}_{2n}(z)\right\rangle &=& \Box_x \Box_y \frac{C_{\phi_i\phi_j\mathcal{S}_{2n}}}{|x-y|^{-2-\gamma^S_{2n}}|y-z|^{2+2\delta+\gamma^S_{2n}}|z-x|^{2+2\delta+\gamma^S_{2n}}}
\nonumber\\
&{}&\hspace{-2cm}= \frac{8n(\gamma^S_{2n}-(n-1)\epsilon)}{(n-1)^2} \frac{C_{\phi_i\phi_j\mathcal{S}_{2n}}}{|x-y|^2|y-z|^{2\delta_c+2}|z-x|^{2\delta_c+2}} .
\end{eqnarray}}%
Comparing the two results gives rise to the following eigenvalue equation
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \frac{8n(\gamma^S_{2n}-(n-1)\epsilon)}{(n-1)^2} \frac{(n-1)c^{2n}}{2} V_{il_1\cdots l_{2n-1}} S_{jl_1\cdots l_{2n-1}}
\nonumber\\
&=& \frac{(2n)!c^{3n-1}}{(n-1)!n!^2}V_{ii_1\cdots i_nj_1\cdots j_{n-1}}V_{jj_1\cdots j_{n-1}l_1\cdots l_n}S_{l_1\cdots l_ni_1\cdots i_n},
\end{eqnarray}}%
which can be further simplified using Eq.~\eqref{gamma-2n-1} to replace the term $(n-1)\epsilon$ on the l.h.s with the anomalous dimension of the descendant operators, and also using the definition \eqref{cnk}
\begin{equation}
(\gamma^S_{2n}-\gamma^i_{2n-1}) V_{il_1\cdots l_{2n-1}} S_{jl_1\cdots l_{2n-1}} = c_{n,2n-1}
V_{ii_1\cdots i_nj_1\cdots j_{n-1}}V_{jj_1\cdots j_{n-1}l_1\cdots l_n}S_{l_1\cdots l_ni_1\cdots i_n}.
\end{equation}
This is precisely the missing piece in our recurrence relation \eqref{rr}, that is, the case $k=2n-1$ when the operator of order $2n-1$ is the descendant of $\phi_i$, obtained from the equation of motion. This implies that Eq.~\eqref{gamma-k} is valid for all $l\geq 0$.
\section{Potts models}\label{sect:potts_models}
We now consider a particular family of theories characterized by the $S_q$ symmetry,
the Potts model \cite{Potts:1951rk},
which has been introduced as a spin-lattice model that generalizes the Ising model.
Let $\left\{\sigma_l\right\}$ be a spin configuration on a regular lattice ${\cal L}$, with sites $l \in {\cal L}$, in which each spin can take $q$ different values $\sigma_l=1,\dots,q$.
The model can be characterized by the microscopic Hamiltonian
\begin{eqnarray}\label{eq:hamiltonian}
{\cal H} &=& -J \sum_{\left<lr\right>} \delta_{\sigma_l,\sigma_r}
\end{eqnarray}
in which the summation extends only to nearest-neighbor spins in the lattice.
The Kronecker delta
ensures that only neighboring spins with the same value
contribute to the energy of the model. The net effect is that the model is ferromagnetic if $J>0$
and anti-ferromagnetic if $J<0$. The Hamiltonian \eqref{eq:hamiltonian} is invariant under
the action of the group $S_q$ of permutations of $q$ objects which acts globally on the set of $q$ spin states.
The model is a fundamental actor in the theory of phase transitions because for $J>0$ it can exhibit
either a first or a second order phase transition, depending on both the value of $q$ and the dimensionality $d$ of the lattice.
There is an alternative formulation on the lattice based on random clusters~\cite{Fortuin:1971dw}, equivalent for $q\ge 2$, which has the advantage of allowing for an analytic continuation in $q$.
A natural expectation is that the critical physics of the $q$-state Potts model can be captured
by a suitable field-theoretic realization of an $S_q$-invariant model,
and that the renormalization group flow of such a model admits either a Gaussian fixed point if the phase transition is first order (for values of $q$ above a certain dimension-dependent threshold $q_c(d)$), or a non-Gaussian fixed point if the phase transition is second order. In the latter case one expects the universal features of the model to be described by a CFT also for $d>2$, if scale invariance is lifted to conformal invariance.~\footnote{Very recently some arguments linking 2d complex CFTs to weakly (small latent heat) first order phase transitions have been presented~\cite{Gorbenko:2018ncu,Gorbenko:2018dtm}.}
Several RG analyses of the Potts model, including the specific analytic continuations to $q=1$ (percolation) and $q=0$ (spanning forests), are available in the literature. The analytic continuation can be performed within a chosen representation of the $S_q$ discrete symmetry group. Perturbatively, the standard approach is based on the $\epsilon$-expansion below the upper critical dimension~\cite{Zia:1975ha}. A first attempt to study the model within the Wilsonian exact RG was made in~\cite{Zinati:2017hdy}. In $d=2$ several exact results are available~\cite{Baxter:2000ez,Nienhuis:1979mb,Delfino:2017biz}. See also~\cite{Wu:1982ra} for a review of the Potts models.
\subsection{Zoology of $S_q$-invariant interactions}\label{sect:invariant_interactions}
For the purpose of constructing QFTs of Potts models, with $S_q$-invariant interactions,
let us first describe a useful representation by introducing a set of $q$ vectors $e^\sigma$ which point in the directions
of the vertices of an $N$-simplex, i.e.\ a simplex in ${\mathbb R}^N$, for $N=q-1$.
The set of vectors satisfies the following properties
\begin{eqnarray}
&& e^\sigma\cdot e^{\sigma'} =
\sum_{i=1}^{N} e_i^\sigma e_i^{\sigma'} = (N+1) \delta_{\sigma,\sigma'}-1 \\
&&\sum_{\sigma=1}^{N+1} e_i^\sigma =0\,,
\qquad
\sum_{\sigma=1}^{N+1} e_i^\sigma e_j^{\sigma} = (N+1) \delta_{ij}\,.
\end{eqnarray}
These relations also determine the vectors $e^\sigma$ uniquely, up to rotations and $S_q$ transformations. We can use the vectors $e^\sigma$ to find a representation of the Kronecker delta
\begin{eqnarray}
\delta_{\sigma,\sigma'} &=&
\frac{1+e^\sigma\cdot e^{\sigma'}}{q}
\end{eqnarray}
which reflects a manifest invariance under the action of the group in the $N$-dimensional space ${\mathbb R}^N$.
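These defining relations, together with the representation of the Kronecker delta, are easy to verify numerically. The construction below is ours (embed the vertices in ${\mathbb R}^q$ and rotate to intrinsic coordinates via Gram--Schmidt); only the asserted properties come from the text.

```python
import math

def simplex_vectors(q):
    """Vertices e^sigma (sigma = 1..q) of a regular (q-1)-simplex in R^(q-1),
    normalized so that e^s . e^t = q*delta_{st} - 1 (here N = q-1)."""
    N = q - 1
    # Embedding in R^q: w^s = sqrt(q)*(unit_s - ones/q) lies in the hyperplane
    # orthogonal to (1,...,1) and already has the desired Gram matrix.
    w = [[math.sqrt(q)*((1.0 if i == s else 0.0) - 1.0/q) for i in range(q)]
         for s in range(q)]
    basis = []                      # orthonormal basis of the hyperplane
    for s in range(N):
        v = list(w[s])
        for b in basis:
            c = sum(x*y for x, y in zip(v, b))
            v = [x - c*y for x, y in zip(v, b)]
        nrm = math.sqrt(sum(x*x for x in v))
        basis.append([x/nrm for x in v])
    # intrinsic R^N coordinates of the q vertices
    return [[sum(x*y for x, y in zip(w[s], b)) for b in basis] for s in range(q)]
```

With, say, $q=5$ ($N=4$), all the relations above hold to machine precision.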
This approach is also the key to the construction of all possible $S_q$-invariant field-theoretic interactions.
Indeed one can straightforwardly construct manifestly invariant interactions by considering any number of copies of
the field $\psi^\sigma = e^\sigma_i \phi_i$ and summing over the index $\sigma$ (from now on, repeated Latin indices will be summed over).
We shall restrict our attention to local nonderivative interactions, which in this case take the form
\begin{eqnarray}
\sum_{\sigma=1}^q (\psi^\sigma)^l
\end{eqnarray}
given $l\in {\mathbb N}$. Clearly, any product of any number of these interactions is also an invariant. In general we have
\begin{eqnarray}
\prod_{a=1}^p \left(\sum_{\sigma_a=1}^q (\psi^{\sigma_a})^{l_a}\right)
\end{eqnarray}
for $p\in {\mathbb N}$ and $l_a\in{\mathbb N}$ for each $a$. Expressing these invariants in terms of the basic fields $\phi_i$
allows one to write down the most general $S_q$-invariant actions.
Before showing how to use the field $\psi^\sigma$ to construct all possible $S_q$-invariant interactions let us start from the simplest nontrivial interacting action which is cubic
\begin{eqnarray}
S[\phi] &=& \int {\rm d}^d x \Bigl\{
\frac{1}{2} \sum_i \partial_\mu \phi_i \partial^\mu \phi_i + \lambda \sum_\sigma (\psi^\sigma)^3
\Bigr\}\,,
\end{eqnarray}
in which we also introduce a coupling to weight the interaction and a kinetic term for the field $\phi_i$.
Expanding the field $\psi^\sigma$ in its ``components'' we obtain a manifestly symmetric action
\begin{eqnarray}\label{eq:action}
S[\phi]
&=& \int {\rm d}^d x \Bigl\{
\frac{1}{2} \sum_i \partial_\mu \phi_i \partial^\mu \phi_i + \lambda \sum_\sigma \sum_{i,j,k} e_i^\sigma e_j^\sigma e_k^\sigma \phi_i \phi_j \phi_k
\Bigr\} \,.
\end{eqnarray}
The critical points of \eqref{eq:hamiltonian} and \eqref{eq:action} are achieved by tuning a single interaction to criticality in both cases.
In particular in the latter QFT continuous description the relevant $S_q$ symmetric mass term is tuned to zero.
We can imagine that a more complete classification of all possible $S_q$-invariant actions, which goes beyond \eqref{eq:action}
and includes in general more interactions, might serve as a tool to uncover multicritical phases that generalize \eqref{eq:hamiltonian}
through the inclusion of more order parameters.
Assuming that close to the Gaussian point the multiplet of scalar fields has canonical dimension $(d-2)/2$,
for increasing number of derivatives and powers of the fields $\phi_i$ the local interactions have increasing mass dimension.
We are interested in writing down all possible local nonderivative interactions which can be marginal in any dimension $d>3$.
It is convenient to introduce the following tensors
\begin{equation}
q^{(p)}_{i_1,\dots, i_p} = \frac{1}{N\!+\!1}Q^{(p)}_{i_1,\dots, i_p} \quad,\quad Q^{(p)}_{i_1,\dots, i_p} =\sum_\alpha e_{i_1}^\alpha \dots e_{i_p}^\alpha
\end{equation}
Notice that by construction the first two tensors can generally be simplified
\begin{eqnarray}
q^{(1)}_{i_1} &=& \sum_\alpha e_{i_1}^\alpha = 0 \,, \label{e=0} \\
q^{(2)}_{i_1 i_2} &=&\frac{1}{N\!+\!1} \sum_\alpha e_{i_1}^\alpha e_{i_2}^\alpha = \delta_{i_1 i_2}\,. \label{ee=1}
\end{eqnarray}
When instead $p\geq 3$ we cannot generally simplify $q^{(p)}$ unless we specify the order of the permutation group.
Since our interest is to deal with a generic value for $q$ (and possibly analytically continue it)
we shall not require any further property, although it is possible to treat the cases $q=1$, $2$ and $3$, for which some simplification occurs, separately.
The most general local potential action for the $N$ fields $\phi_i$ is
\begin{eqnarray} \label{action-sq}
S[\phi] &=& \int {\rm d}^d x \Bigl\{
\frac{1}{2}(\partial\phi)^2 + V(\phi)\Bigr\}\,,
\end{eqnarray}
where the potential $V$ can be written as
\begin{equation}
V(\phi)
= \sum_{p \geq 0} \frac{1}{p!} T^{(p)}_{i_1 \dots i_p} \phi_{i_1}\dots \phi_{i_p}
\end{equation}
in which the tensors up to the quintic order of interactions can be defined as
\begin{eqnarray}
T^{(2)}_{i_1 i_2} &=& \zeta_2 \,\delta_{i_1 i_2}, \label{t2} \\
T^{(3)}_{i_1 i_2 i_3} &=& \zeta_3 \,q^{(3)}_{i_1 i_2 i_3}, \label{t3} \\
T^{(4)}_{i_1 i_2 i_3 i_4} &=& \zeta_{4,1} \,\delta_{(i_1 i_2} \delta_{i_3 i_4)} + \zeta_{4,2} \,q^{(4)}_{i_1 i_2 i_3 i_4}, \label{t4} \\
T^{(5)}_{i_1 i_2 i_3 i_4 i_5} &=& \zeta_{5,1} \,\delta_{(i_1 i_2} q^{(3)}_{i_3 i_4 i_5)} + \zeta_{5,2} \,q^{(5)}_{i_1 i_2 i_3 i_4 i_5}. \label{t5}
\end{eqnarray}
The couplings from $\zeta_2$ to $\zeta_{5,2}$ have a rather straightforward meaning:
$\zeta_2$ plays the role of the mass for the multiplet $\phi_i$,
while all other couplings starting from $\zeta_3$ are genuine interactions with which one can construct a perturbative expansion.
Specifically: $\zeta_3$ is canonically marginal in $d=6$ so with it we can construct a perturbative expansion in $d=6$ and an $\epsilon$-expansion in $d=6-\epsilon$.
Likewise in $d=4$ and $d=4-\epsilon$ one has to consider a perturbative expansion in the pair
$\left\{\zeta_{4,1},\zeta_{4,2}\right\}$.
In $d=\frac{10}{3}$ one has to consider a perturbative expansion in the pair $\left\{\zeta_{5,1},\zeta_{5,2}\right\}$.
For later convenience we want to introduce a basis of the $S_q$-invariant interactions
that is related to the above definition of the tensors $T^{(p)}$.
We denote the basis with ${\cal I}_{i,j}$: the index $i$ refers to the fact that
each element ${\cal I}_{i,j}$ is a fully $S_q$-invariant product of $i$ copies of the field components $\phi$,
while $j$ parametrizes the increasing size of the tensors $q^{(i)}$ in its construction
(the presence of the tensors $q^{(i)}$ instead of the Kronecker delta represents, to some extent,
the departure from an $O(N)$ invariant theory).
The first few invariants are
\begin{equation} \label{eq:invariants-basis}
\begin{array}{lll}
{\cal I}_{2} = \phi_i\phi_i\,, & \qquad\qquad &
{\cal I}_{3} = q^{(3)}_{i_1 i_2 i_3}\, \phi_{i_1}\phi_{i_2}\phi_{i_3}\,, \\[7pt]
{\cal I}_{4,1} = \phi_i\phi_i \, \phi_j\phi_j\,, & \qquad\qquad &
{\cal I}_{4,2} = q^{(4)}_{i_1 i_2 i_3 i_4}\phi_{i_1}\phi_{i_2}\phi_{i_3}\phi_{i_4}\,, \\[7pt]
{\cal I}_{5,1} = \phi_i\phi_i \, q^{(3)}_{i_1 i_2 i_3}\phi_{i_1}\phi_{i_2}\phi_{i_3}\,, & \qquad\qquad &
{\cal I}_{5,2} = q^{(5)}_{i_1 i_2 i_3 i_4 i_5}\phi_{i_1}\phi_{i_2}\phi_{i_3}\phi_{i_4}\phi_{i_5}\,.
\ea
\end{equation}
We have arranged their second index for increasing ``departure'' from $O(N)$ symmetry (in which the only allowed invariants are powers of $\phi_i\phi_i$):
notice that while the basis operators chosen in this paper are the same as the one of \cite{Zinati:2017hdy}, the two bases differ in the way the label $j$ is assigned.
By construction some invariants are algebraically related, for example
\begin{eqnarray}
{\cal I}_{4,1} = ({\cal I}_{2})^2\,, &\qquad&
{\cal I}_{5,1} = {\cal I}_{2} {\cal I}_{3}\,.
\end{eqnarray}
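The dictionary between the $\psi^\sigma$ form and the tensor form of the invariants, ${\cal I}\sim q^{(p)}_{i_1\cdots i_p}\phi_{i_1}\cdots\phi_{i_p} = \frac{1}{q}\sum_\sigma(\psi^\sigma)^p$, can be tested on a random field configuration. A self-contained numerical sketch (ours; the simplex vertices are built by embedding in ${\mathbb R}^q$ and projecting to intrinsic coordinates):

```python
import math, random
from itertools import product

def simplex_vectors(q):
    """Vertices of the regular (q-1)-simplex with e^s . e^t = q*delta_st - 1."""
    N = q - 1
    w = [[math.sqrt(q)*((1.0 if i == s else 0.0) - 1.0/q) for i in range(q)]
         for s in range(q)]
    basis = []
    for s in range(N):
        v = list(w[s])
        for b in basis:
            c = sum(x*y for x, y in zip(v, b))
            v = [x - c*y for x, y in zip(v, b)]
        nrm = math.sqrt(sum(x*x for x in v))
        basis.append([x/nrm for x in v])
    return [[sum(x*y for x, y in zip(w[s], b)) for b in basis] for s in range(q)]

q = 5
N = q - 1
e = simplex_vectors(q)
random.seed(0)
phi = [random.uniform(-1.0, 1.0) for _ in range(N)]            # random field
psi = [sum(e[s][i]*phi[i] for i in range(N)) for s in range(q)]  # psi^sigma

def inv_psi(p):
    """(1/q) sum_sigma (psi^sigma)^p."""
    return sum(x**p for x in psi)/q

def inv_tensor(p):
    """q^(p)_{i1...ip} phi_{i1}...phi_{ip}, built from the explicit tensor."""
    tot = 0.0
    for idx in product(range(N), repeat=p):
        qp = sum(math.prod(e[s][i] for i in idx) for s in range(q))/q
        tot += qp*math.prod(phi[i] for i in idx)
    return tot
```

The checks below confirm $q^{(1)}=0$, $q^{(2)}_{ij}=\delta_{ij}$, and the equality of the two forms of the invariants for $p=3,4,5$.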
For specific values of $q$ it is possible to find even more relations among the invariants.
In particular, given a natural value of $q$ there is a finite number of independent invariants that
we can build out of the field multiplet. We come back to this point later
when specializing some results to the first few low values of $q$.
One can write useful relations to simplify contractions of such $q^{(i)}$ tensors. We present some of them in the
Appendix~B.\ref{ss:reduction}.
\subsection{Quadratic operators: Imposing $S_{N+1}$ invariance} \label{ss:qo-symm}
In later sections, having specified the model and the explicit form of the potential, we will solve the eigenvalue equations \eqref{crit-cft} and \eqref{crit-cft-2} and determine the eigenvalues $\gamma^S_2$, which are the anomalous dimensions of the quadratic operators. However, considerable information can be extracted from knowledge of the symmetry alone, that is $S_{N+1}$ in our case, without relying on the precise form of the model.
For this purpose we devote this section to understanding how much the symmetry alone can tell us about the quadratic scaling operators. As a first step, note that the $N$-dimensional space of fields $\phi_i$, or equivalently $\psi^\alpha$, carries the standard representation of $S_{N+1}$, which in Young tableau notation is nothing but the following diagram with $N+1$ boxes
{\setlength\arraycolsep{6pt}\def0.7{0.7}
\begin{equation}
\begin{array}{|c|c|c|c|c|}
\cline{1-4}
&&\multicolumn{1}{c|}{\dots}& \\ \cline{1-4}
& \multicolumn{4}{c}{} \\ \cline{1-1}
\ea
\end{equation}}%
From this, one can determine the decomposition of the symmetric product $\phi_{i}\phi_{j}$, or equivalently $\psi^{\alpha}\psi^{\beta}$, of two fields into irreducible representations. These irreducible representations are the representations carried by the quadratic scaling operators. Indeed the symmetric product of two standard representations is decomposed as
{\setlength\arraycolsep{5pt}\def0.7{0.7}
\begin{equation}
Sym\!\left(\hspace{2pt}\begin{array}{|c|c|c|c|c|}
\cline{1-4}
&&\multicolumn{1}{c|}{\dots}& \\ \cline{1-4}
& \multicolumn{4}{c}{} \\ \cline{1-1}
\ea \hspace{38pt}
\otimes \hspace{2pt}
\begin{array}{|c|c|c|c|c|}
\cline{1-4}
&&\multicolumn{1}{c|}{\dots}& \\ \cline{1-4}
& \multicolumn{4}{c}{} \\ \cline{1-1}
\ea \hspace{38pt}\right)
=
\begin{array}{|c|c|c|c|c|}
\cline{1-4}
&&\multicolumn{1}{c|}{\dots}& \\ \cline{1-4}
\ea \hspace{3pt}
\oplus \hspace{2pt}
\begin{array}{|c|c|c|c|c|}
\cline{1-4}
&&\multicolumn{1}{c|}{\dots}& \\ \cline{1-4}
& \multicolumn{4}{c}{} \\ \cline{1-1}
\ea \hspace{39pt}
\oplus \hspace{2pt}
\begin{array}{|c|c|c|c|c|}
\cline{1-4}
&&\multicolumn{1}{c|}{\dots}& \\ \cline{1-4}
&& \multicolumn{3}{c}{} \\ \cline{1-2}
\ea \\
\end{equation}}%
where all the diagrams here include $N+1$ boxes. This shows that in $S_{N+1}$ invariant theories there are only three distinct anomalous dimensions, corresponding to three, possibly degenerate, sets of scaling operators. In terms of dimensions, the above relation corresponds to the following decomposition
\begin{equation} \label{dim-decomp}
\frac{N(N+1)}{2} = 1 \oplus N \oplus \frac{(N+1)(N-2)}{2},
\end{equation}
which can be obtained from the hook length formula and specifies the degeneracy of each subspace of scaling operators with the same scaling dimension. More explicitly, one may introduce three projectors in the space of quadratic operators
\begin{equation}
\phi_i\phi_j = (P_1)_{ij,kl}\,\phi_k\phi_l + (P_2)_{ij,kl}\,\phi_k\phi_l + (P_3)_{ij,kl}\,\phi_k\phi_l.
\end{equation}
The explicit form of these projectors is given as follows
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{p1}
(P_1)_{ij,kl} &=& \frac{1}{N} \delta_{ij} \delta_{kl}, \\
(P_2)_{ij,kl} &=& \frac{1}{N-1} \left( q^{(4)}_{ijkl} - \delta_{ij} \delta_{kl} \right), \label{p2} \\
(P_3)_{ij,kl} &=& \delta_{i(k} \delta_{l)j} - \frac{1}{N-1} q^{(4)}_{ijkl} + \frac{1}{N(N-1)}\delta_{ij}\delta_{kl}. \label{p3}
\end{eqnarray}}%
These are projection operators in the sense that
\begin{equation}
(P_a P_b)_{ij,kl} = \delta_{ab} (P_a)_{ij,kl}\quad , \qquad (P_1 + P_2 +P_3)_{ij,kl} = \delta_{i(k} \delta_{l)j}.
\end{equation}
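That these are orthogonal projectors, and that their traces reproduce the dimension counting $1$, $N$ and $(N+1)(N-2)/2$ of the decomposition above, can be verified numerically; a self-contained sketch for $q=5$ (the simplex construction is ours):

```python
import math
from itertools import product

def simplex_vectors(q):
    """Vertices of the regular (q-1)-simplex with e^s . e^t = q*delta_st - 1."""
    N = q - 1
    w = [[math.sqrt(q)*((1.0 if i == s else 0.0) - 1.0/q) for i in range(q)]
         for s in range(q)]
    basis = []
    for s in range(N):
        v = list(w[s])
        for b in basis:
            c = sum(x*y for x, y in zip(v, b))
            v = [x - c*y for x, y in zip(v, b)]
        nrm = math.sqrt(sum(x*x for x in v))
        basis.append([x/nrm for x in v])
    return [[sum(x*y for x, y in zip(w[s], b)) for b in basis] for s in range(q)]

def projectors(q):
    """P1, P2, P3 of the text, as dicts over 4-tuples of R^N indices."""
    N = q - 1
    e = simplex_vectors(q)
    d = lambda a, b: 1.0 if a == b else 0.0
    q4 = {idx: sum(math.prod(e[s][i] for i in idx) for s in range(q))/q
          for idx in product(range(N), repeat=4)}
    P1, P2, P3 = {}, {}, {}
    for i, j, k, l in product(range(N), repeat=4):
        sym = 0.5*(d(i, k)*d(l, j) + d(i, l)*d(k, j))  # delta_i(k delta_l)j
        P1[i, j, k, l] = d(i, j)*d(k, l)/N
        P2[i, j, k, l] = (q4[i, j, k, l] - d(i, j)*d(k, l))/(N - 1)
        P3[i, j, k, l] = sym - q4[i, j, k, l]/(N - 1) + d(i, j)*d(k, l)/(N*(N - 1))
    return N, P1, P2, P3

def mult(N, A, B):
    """(A B)_{ij,kl} = A_{ij,ab} B_{ab,kl}."""
    return {(i, j, k, l): sum(A[i, j, a, b]*B[a, b, k, l]
                              for a in range(N) for b in range(N))
            for i, j, k, l in product(range(N), repeat=4)}
```

For $N=4$ the traces come out as $1$, $4$ and $5$, matching \eqref{dim-decomp}.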
One may find the equivalent split of the product $\psi^\alpha\psi^\beta$ and the corresponding projectors simply by transforming the Latin indices to Greek indices by contracting the above results with the vectors $e^\alpha_i$ and including appropriate powers of $N+1$.
So far we have gained general insight into the space of scaling operators. What is missing is information about the actual values of the anomalous dimensions. These may be obtained by resorting to the eigenvalue equation
\begin{equation} \label{gammaS2-general}
(\gamma^{S}_2 - \eta) \; S_{ij} = \mathcal{M}_{ij,ab}\,S_{ab},
\end{equation}
from which the explicit form of the above split also emerges, as will be shown shortly. On general grounds, for an $S_{N+1}$ invariant theory the stability matrix is a linear combination of four-index tensors constructed from the $q^{(n)}$ tensors, symmetrized in its first and second pair of indices, that is
\begin{equation} \label{M}
\mathcal{M}_{ij,ab} = \tau\, q^{(4)}_{ijab} + \rho\, \delta_{ij}\delta_{ab} + \kappa\, \delta_{i(a}\delta_{b)j}.
\end{equation}
This leaves us with only three undetermined parameters in terms of which the anomalous dimensions can be expressed. When contracted with $S_{ab}$ this gives\footnote{We keep the summation on $a,b$ implicit, while summations on $\alpha,\beta$ are made explicit.}
\begin{equation}
\mathcal{M}_{ij,ab} \,S_{ab} = \tau' \sum_\alpha (e^\alpha_a S_{ab} e^\alpha_b)\, e^\alpha_ie^\alpha_j + \rho\, \delta_{ij} S_{aa} + \kappa\, S_{ij},
\end{equation}
where $(N+1)\tau' =\tau$. If the r.h.s is to be proportional to $S_{ij}$ itself, then either\footnote{Note that summing on $\alpha$ in \eqref{ese=0} gives $S_{aa}=0$.}
\begin{equation} \label{ese=0}
e^\alpha_a S_{ab}\, e^\alpha_b=0,
\end{equation}
for any $\alpha$, or $S_{ij}$ must have the following general structure as in the first term on the r.h.s
\begin{equation} \label{s=aee}
S_{ij} =\sum_\alpha a_\alpha\, e^\alpha_ie^\alpha_j.
\end{equation}
In the first case the eigenvalue is simply equal to $\kappa$, while in the second case one can use the relation
\begin{equation}
e^\alpha_a S_{ab}\, e^\alpha_b = (N^2-1)a_\alpha + \sum_\beta a_\beta,
\end{equation}
to obtain
\begin{equation}
\mathcal{M}_{ij,ab} \,S_{ab} = \tau(N-1)S_{ij} + (\tau+\rho N)\, \delta_{ij}\sum_\beta a^\beta +\kappa S_{ij}.
\end{equation}
This equation shows that either $\sum_\beta a^\beta=0$, in which case $S_{ij}$ is an eigenvector of the matrix $\mathcal{M}_{ij,ab}$ with eigenvalue $\tau(N-1)+\kappa$, or $\sum_\beta a^\beta\neq 0$, in which case $S_{ij}$ must be proportional to the identity and therefore all $a^\beta$ must be equal (in particular $S_{ij}=\delta_{ij}$ if $a^\beta=1/(N\!+\!1)$). In this case the corresponding eigenvalue of $\mathcal{M}_{ij,ab}$ is $(\tau+\rho) N + \kappa$. So in summary the three eigenvalues for $\gamma^S_2$ and their corresponding eigenvectors are respectively
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{es1}
\gamma_2^1-\eta &=& (\tau+\rho) N + \kappa, \hspace{22.1mm} S_{ij} = \delta_{ij},
\\[6pt]
\gamma_2^2-\eta &=& \tau(N-1)+\kappa, \hspace{22.4mm} S_{ij} =\sum_\alpha a^\alpha\, e^\alpha_ie^\alpha_j, \quad \sum_\alpha a^\alpha=0, \label{es2}
\\[-2pt]
\gamma_2^3-\eta &=& \kappa, \hspace{44.5mm} S_{ij} =\sum_{\alpha,\beta} a^{\alpha\beta}\, e^\alpha_ie^\beta_j, \quad e^\alpha_p S_{pq} e^\alpha_q=0. \label{es3}
\end{eqnarray}}%
where in the last case the most general form of $S_{ij}$ has been written and the condition on the symmetric matrix $a^{\alpha\beta}$ is left implicit, to be determined shortly. Notice that the first eigenvalue is also consistent with \eqref{gamma20-eta}. As we will show below, these eigenvectors correspond to the three irreducible representations mentioned earlier. The first eigenvector \eqref{es1} clearly corresponds to the one-dimensional representation \eqref{p1}. We now bring the other two eigenvectors into a more transparent form. In the second case \eqref{es2}, the vector $a^\alpha$ is an $(N+1)$-component vector restricted to an $N$-dimensional subspace characterized by $\sum_\alpha a^\alpha=0$. A convenient basis that spans this subspace is $e^\alpha_i$, $i=1,\cdots,N$, whose elements are ensured by \eqref{e=0} to lie in the $N$-dimensional subspace. This choice makes the eigenvectors transform covariantly under rotations. Therefore there are $N$ eigenvectors $u_p$ with eigenvalue $\gamma^2_2$, which can most conveniently be written as
\begin{equation}
u_{p,ij} =\frac{1}{N\!+\!1} \sum_\alpha e^\alpha_p e^\alpha_ie^\alpha_j = q^{(3)}_{pij}, \qquad p=1,\cdots N. \label{up}
\end{equation}
Notice that $u_{p,ij}$ is symmetric in all three indices. Let us also set $u_{0,ij} = \delta_{ij}$ to make the notation uniform. This $N$-dimensional representation labelled by a single index is equivalent to the double-index redundant description \eqref{p2}. One can also verify that $q^{(3)}_{ijk}(P_2)_{jk,lm}=q^{(3)}_{ilm}$ and $\delta_{jk}(P_2)_{jk,lm}=0$.
One may similarly find a convenient basis for the set of $a^{\alpha\beta}$ in \eqref{es3} that satisfy the condition \eqref{ese=0}. Let us first notice that the correspondence between the symmetric matrices $S_{ij}$ and $a^{\alpha\beta}$ which have respectively $N(N+1)/2$ and $(N+1)(N+2)/2$ independent components is certainly not one-to-one. However one notices that the expression of $S_{ij}$ in terms of $a^{\alpha\beta}$ is redundant under the transformation
\begin{equation}
a^{\alpha\beta} \rightarrow a^{\alpha\beta} - \sigma^\alpha - \sigma^\beta,
\end{equation}
which, by choosing $\sigma^\alpha$ appropriately, allows us to set $\sum_\beta a^{\alpha\beta} =0$ for any $\alpha$. This reduces the number of independent components to $N(N+1)/2$ and makes the correspondence to $S_{ij}$ one-to-one. Now, the condition \eqref{ese=0} on the general $a^{\alpha\beta}$ is
\begin{equation}
(N+1)^2a^{\gamma\gamma} - 2(N+1)\sum_\alpha a^{\gamma\alpha} + \sum_{\alpha,\beta} a^{\alpha\beta} =0
\end{equation}
for fixed $\gamma$, which upon imposing $\sum_\beta a^{\alpha\beta} =0$, i.e. fixing the redundancy, gives the extra $N+1$ conditions $a^{\gamma\gamma} =0$. By making a proper ansatz in terms of $e^\alpha_i$ one may find a convenient (i.e. covariant) basis spanning the matrices $a^{\alpha\beta}$ subject to the constraints
\begin{equation}
\sum_\alpha a^{\gamma\alpha} =0, \quad a^{\gamma\gamma}=0,
\end{equation}
for any $\gamma$. Plugging these basis elements (labeled by two indices $p,q$) back into \eqref{es3} leads to the set of $(N+1)(N-2)/2$ independent eigenvectors $u_{pq,ij}$ labeled by $p,q$ and given explicitly by
\begin{equation} \label{upq}
u_{pq,ij} = q^{(4)}_{pqij} - (N-1) \delta_{i(p} \delta_{q)j} - \frac{1}{N} \delta_{pq} \delta_{ij},
\end{equation}
which span a subspace orthogonal to the one generated by $\delta_{ij}$ and $q^{(3)}_{pij}$. The corresponding projector is clearly proportional to the projection operator \eqref{p3}. This analysis shows that the space of quadratic operators $\phi^i\phi^j$, which has a degenerate spectrum at the classical level, is split by the leading quantum corrections into the three eigenspaces
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
u_{0,ij}\,\phi^i\phi^j &=& \phi\!\cdot\!\phi, \label{U0} \\[5pt]
u_{p,ij}\,\phi^i\phi^j &=& q^{(3)}_{pij}\phi^i\phi^j, \label{U1} \\
u_{pq,ij}\,\phi^i\phi^j &=& q^{(4)}_{pqij}\phi^i\phi^j - (N-1) \phi^p\phi^q - \frac{1}{N} \delta_{pq} \phi\!\cdot\!\phi. \label{U2}
\end{eqnarray}}%
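The counting and orthogonality statements above can be checked numerically. The following sketch, again using the explicit hyper-tetrahedron realization of $e^\alpha_i$ at an arbitrary sample $N$, verifies that the $u_{pq,ij}$ of \eqref{upq} are orthogonal to both $\delta_{ij}$ and the $q^{(3)}_{pij}$, and that they span a space of dimension $(N+1)(N-2)/2$.

```python
import numpy as np

N = 4                                  # sample value
M = N + 1
amb = np.sqrt(M) * (np.eye(M) - np.ones((M, M)) / M)
e = amb @ np.linalg.qr(amb.T)[0][:, :N]                # hyper-tetrahedral e^alpha_i
q3 = np.einsum('ai,aj,ak->ijk', e, e, e) / M
q4 = np.einsum('ai,aj,ak,al->ijkl', e, e, e, e) / M
I = np.eye(N)
# delta_{i(p} delta_{q)j} with weight-one symmetrization
S = 0.5 * (np.einsum('pi,qj->pqij', I, I) + np.einsum('pj,qi->pqij', I, I))
u = q4 - (N - 1) * S - np.einsum('pq,ij->pqij', I, I) / N     # u_{pq,ij} of (upq)

orth_delta = np.allclose(np.einsum('pqii->pq', u), 0)         # u . delta = 0
orth_q3 = np.allclose(np.einsum('pqij,rij->pqr', u, q3), 0)   # u . q3 = 0
# number of independent eigenvectors = rank of the flattened set {u_pq}
rank = np.linalg.matrix_rank(u.reshape(N * N, N * N))
print(orth_delta, orth_q3, rank)
```

For $N=4$ the rank is $5=(N+1)(N-2)/2$, matching the counting $N(N+1)/2-1-N$ of symmetric matrices orthogonal to $\delta_{ij}$ and the $q^{(3)}$ modes.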
It is important to note that for $S_{N+1}$ invariant theories in $d_c=6$, which have a cubic interaction of the form \eqref{t3}, the second operator \eqref{U1} is a descendant. Therefore, as pointed out in Section \ref{ss:qo}, even though \eqref{U1} is always an operator with definite scaling, in the particular case of models with a cubic critical interaction \eqref{t3} the anomalous dimension of \eqref{U1} is not given by \eqref{es2} but is obtained instead from $\gamma^2_2 = \gamma+ \epsilon/2$.
While the three sets of eigenoperators \eqref{U0}-\eqref{U2} are orthogonal to each other, the bases chosen in \eqref{up} and \eqref{upq} also make the set of scaling operators in \eqref{U1} as well as those in \eqref{U2} separately mutually orthogonal in the free theory, which further motivates such a choice. This orthogonality is demonstrated by showing that the two point functions of different operators vanish. In fact the values of the two point functions of operators with the same anomalous dimension are
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\left\langle [\phi\!\cdot\!\phi](x) [\phi\!\cdot\!\phi](y)\right\rangle &=& \frac{2N}{|x-y|^{8}} c^2, \label{es0-6} \\
\left\langle [u_{p,ij}\,\phi_i\phi_j](x) [u_{q,kl}\,\phi_k\phi_l](y)\right\rangle &=& \frac{2u_{p,ij}u_{q,ij}}{|x-y|^{8}} c^2= \frac{2(N-1)}{|x-y|^{8}}c^2\delta_{pq}, \label{es1-6} \\
\left\langle [u_{pq,ij}\,\phi_i\phi_j](x) [u_{rs,kl}\,\phi_k\phi_l](y)\right\rangle &=& \frac{2u_{pq,ij}u_{rs,ij}}{|x-y|^{8}} c^2= \frac{2(1-N)u_{pq,rs}}{|x-y|^{8}}c^2, \label{es2-6}
\end{eqnarray}}%
while the two point functions of operators with different (anomalous) dimensions clearly vanish in the free theory. In the second equation, which shows the orthogonality of the operators \eqref{U1}, we have used the contraction formula \eqref{qq}. Repeated use of \eqref{fusion} and \eqref{trace} also leads to \eqref{es2-6}. The tensor $u_{pq,rs}$ on the r.h.s.\ of Eq.~\eqref{es2-6} acts as the identity matrix on the $(N+1)(N-2)/2$-dimensional subspace spanned by the $u_{pq,ij}$, while it vanishes on the $N+1$ dimensional subspace spanned by $\delta_{ij}$ and $u_{p,ij}$ (i.e.\ it is proportional to a projection operator). Notice also that from Eq.~\eqref{es2-6} the norm of $u_{rs,kl}\,\phi_k\phi_l$ is negative for $N>1$.
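The contraction coefficients entering these two point functions can be verified numerically as well. The sketch below checks $u_{p,ij}u_{q,ij}=(N-1)\delta_{pq}$ and $u_{pq,ij}u_{rs,ij}=(1-N)u_{pq,rs}$ at an arbitrary sample $N$, using the explicit hyper-tetrahedron realization of $e^\alpha_i$.

```python
import numpy as np

N = 4                                  # sample value
M = N + 1
amb = np.sqrt(M) * (np.eye(M) - np.ones((M, M)) / M)
e = amb @ np.linalg.qr(amb.T)[0][:, :N]
q3 = np.einsum('ai,aj,ak->ijk', e, e, e) / M
q4 = np.einsum('ai,aj,ak,al->ijkl', e, e, e, e) / M
I = np.eye(N)
S = 0.5 * (np.einsum('pi,qj->pqij', I, I) + np.einsum('pj,qi->pqij', I, I))
u = q4 - (N - 1) * S - np.einsum('pq,ij->pqij', I, I) / N

# contraction behind (es1-6): u_p . u_q = (N-1) delta_pq
ok_q3 = np.allclose(np.einsum('pij,qij->pq', q3, q3), (N - 1) * I)
# contraction behind (es2-6): u_pq . u_rs = (1-N) u_{pq,rs}
ok_u = np.allclose(np.einsum('pqij,rsij->pqrs', u, u), (1 - N) * u)
print(ok_q3, ok_u)
```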
The projection operators \eqref{p1}-\eqref{p3} allow for a more natural, though redundant, description of the three scaling operators \eqref{U0}-\eqref{U2} which treats them on the same footing, labeling them all with two indices. In this representation the two-point functions are
\begin{equation}
\left\langle [(P_a)_{ij,pq}\,\phi_p\phi_q](x) \; [(P_b)_{kl,rs}\,\phi_r\phi_s](y)\right\rangle =
\frac{2(P_a P_b)_{ij,kl}}{|x-y|^{4\delta_c}} c^2= \frac{2\delta_{ab}(P_a)_{ij,kl}}{|x-y|^{4\delta_c}}c^2 ,
\end{equation}
which leads us to define the normalized scaling operators $\mathcal{O}^{(2)}_a$ as
\begin{equation} \label{o2}
\mathcal{O}^{(2)}_{a,ij} = \frac{1}{\sqrt{2}} (P_a)_{ij,kl}\,\phi_k\phi_l, \qquad a=1,2,3.
\end{equation}
In the first two operators $a=1,2$ the redundancy is removed by contracting them with $\delta_{ij}$ and $q^{(3)}_{ijk}$ respectively, which brings us back to the definitions \eqref{U0} and \eqref{U1} but with a different normalization. For later use let us then define
\begin{equation} \label{o0}
\mathcal{O}^{(2)}_0 = \mathcal{O}^{(2)}_{1,ij}\,\delta_{ij} = \frac{1}{\sqrt{2}}\,\phi\!\cdot\!\phi,
\end{equation}
\begin{equation} \label{ok}
\mathcal{O}^{(2)}_{k} = \mathcal{O}^{(2)}_{2,ij}\,q^{(3)}_{ijk} = \frac{1}{\sqrt{2}}\,q^{(3)}_{ijk}\phi_i\phi_j,
\end{equation}
which leads to an operator with no free index and an operator with one free index respectively. Notice that here \eqref{o0} and \eqref{ok} have not been normalized to unity. The description of the operator $\mathcal{O}^{(2)}_{3,ij}$ remains redundant and all we can say is that $\mathcal{O}^{(2)}_{3,ij}\,\delta_{ij} =0$ and $\mathcal{O}^{(2)}_{3,ij}\,q^{(3)}_{ijk}=0$, which put $N+1$ constraints on it.
The general analysis of this section can in principle be extended to higher order operators, using group theory arguments and perhaps also Eq.~\eqref{rr}. This, however, is beyond the scope of the present article.
\section{Potts models with $d_c=6,4,\frac{10}{3}$}\label{sect:potts-cft}
\subsection{Cubic Potts model}\label{sect:cubic-potts-cft}
We will study in this section the $S_{N+1}$ invariant scalar theory in $d=6-\epsilon$. In this case the critical interaction is cubic and the action takes the form
\begin{eqnarray}
S[\phi]
&=& \int {\rm d}^d x \Bigl\{
\frac{1}{2}(\partial\phi)^2
+ \frac{1}{3!}\zeta_{3} \,q^{(3)}_{i_1 i_2 i_3} \phi_{i_1}\phi_{i_2}\phi_{i_3}
\Bigr\} \label{cubic-potts}\,,
\end{eqnarray}
which thus has a single critical coupling. In the following sections we obtain the leading order critical data of this model, including anomalous dimensions and some structure constants, and determine the $\epsilon$-dependence of the critical coupling.
\subsubsection{Anomalous dimension}
For the cubic model where $n=3/2$ the general formula \eqref{eta-cft} for the field anomalous dimension, written in terms of $\eta=2\gamma$, reduces to
\begin{equation}
\eta \,\delta_{ab} = \frac{c}{96}\;V_{aij}\, V_{bij}.
\end{equation}
Inserting into this equation the critical couplings given by $V_{aij} = \zeta_3 \,q^{(3)}_{aij}$ and using Eq.~\eqref{qq} to contract the indices,
one easily finds the expression for the anomalous dimension of the cubic Potts model in terms of the coupling
\begin{equation} \label{eta-cubic}
\eta = \frac{c}{96}\, \zeta^2_3\, (N-1),
\end{equation}
in agreement with the leading order RG results of Ref.~\cite{CSVZ4}.
\subsubsection{Quadratic operators} \label{ss:fo-cubic}
For the cubic Potts model the general equation \eqref{crit-cft} which gives the critical exponents of the mass operators reduces to the following equation in which the field anomalous dimensions are equal because of symmetries
\begin{equation}
(\gamma^S_2 - \eta) \; S_{ij} = -\frac{c}{16}\, V_{i\,l\,(p}V_{q)j\,l} \,S_{pq}.
\end{equation}
This equation is consistent with the RG flow equation \cite{CSVZ4} which governs the running of the coupling $J_{ab}$ in the operator $J_{ab}\phi_a\phi_b$. This becomes evident by setting $\beta_{J_{ab}} = -\gamma^S_2 J_{ab}$ in such an equation. It remains to diagonalize the stability matrix. In fact we need to find the eigenvectors and eigenvalues of the matrix
\begin{equation}
\mathcal M_{ij, pq} \equiv -\frac{c}{16} V_{ai(p}V_{q)aj} = -\frac{c}{16}\zeta_3^2\, q^{(3)}_{a\,i(p} \,q^{(3)}{}^{a}{}_{q)j} = -\frac{c}{16} \zeta_3^2\left(q^{(4)}_{ipjq} - \delta_{i(p}\delta_{q)j}\right).
\end{equation}
This is done for a general stability matrix in Section~\ref{ss:qo-symm}. For specific models, all we need to know is the coefficients of the three terms in the stability matrix, i.e. the parameters $\tau,\rho,\kappa$ defined in \eqref{M}. In the present example these are
\begin{equation}
\tau = -\frac{c}{16}\zeta_3^2, \qquad
\rho = 0, \qquad
\kappa = \frac{c}{16}\zeta_3^2.
\end{equation}
Using the relations \eqref{es1}-\eqref{es3} and the value of $\eta$ obtained in the previous section, one can immediately write down the two eigenvalues $\gamma^1_2$ and $\gamma^3_2$ corresponding to the scaling operators $(P_1)_{ij,kl}\,\phi_k\phi_l$ and $(P_3)_{ij,kl}\,\phi_k\phi_l$. The operator $(P_2)_{ij,kl}\,\phi_k\phi_l$ instead is the exception that we discussed in Section \ref{ss:qo}, the anomalous dimension of which does not satisfy \eqref{gammaS2}. This anomalous dimension is given instead by the formula $\gamma^2_2 = \gamma + \epsilon/2$. In summary, the anomalous dimensions and their corresponding eigenoperators are
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{es0-cubic}
\gamma_2^1 &=& -\frac{5c}{96}\, (N-1) \zeta^2_3, \qquad\quad\;\;\;\, (P_1)_{ij,kl}\,\phi_k\phi_l,
\\[2pt]
\gamma_2^2 &=& -\frac{c}{48}\, (2N-5)\zeta^2_3, \qquad\quad\;\, (P_2)_{ij,kl}\,\phi_k\phi_l, \label{es1-cubic}
\\
\gamma_2^3 &=& +\frac{c}{96}\, (N+5)\zeta^2_3, \qquad\qquad (P_3)_{ij,kl}\,\phi_k\phi_l. \label{es2-cubic}
\end{eqnarray}}%
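As a cross-check, this spectrum can be reproduced by letting the stability matrix act on the three subspaces numerically. In units where $c\zeta_3^2/16=1$ the matrix is $\mathcal M_{ij,pq}=-(q^{(4)}_{ijpq}-\delta_{i(p}\delta_{q)j})$, and it should act as $-(N-1)$ on $\delta_{ij}$, as $-(N-2)$ on the $q^{(3)}$ modes (the naive eigenvalue that is not used for $\gamma^2_2$, which belongs to a descendant), and as $+1$ on the $P_3$ subspace; together with $\eta=(N-1)c\zeta_3^2/96$ this reproduces \eqref{es0-cubic} and \eqref{es2-cubic}. The sample $N$ and the explicit simplex vectors are our choices.

```python
import numpy as np

N = 4                                  # sample value
M = N + 1
amb = np.sqrt(M) * (np.eye(M) - np.ones((M, M)) / M)
e = amb @ np.linalg.qr(amb.T)[0][:, :N]
q3 = np.einsum('ai,aj,ak->ijk', e, e, e) / M
q4 = np.einsum('ai,aj,ak,al->ijkl', e, e, e, e) / M
I = np.eye(N)
sym = 0.5 * (np.einsum('ip,jq->ijpq', I, I) + np.einsum('iq,jp->ijpq', I, I))
Mstab = -(q4 - sym)                    # stability matrix in units c*zeta_3^2/16 = 1
u = q4 - (N - 1) * sym - np.einsum('pq,ij->pqij', I, I) / N   # P3-subspace basis

ok_delta = np.allclose(np.einsum('ijpq,pq->ij', Mstab, I), -(N - 1) * I)
ok_q3 = np.allclose(np.einsum('ijpq,rpq->rij', Mstab, q3), -(N - 2) * q3)
ok_p3 = np.allclose(np.einsum('ijpq,rspq->rsij', Mstab, u), u)
print(ok_delta, ok_q3, ok_p3)
```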
\subsubsection{Structure constants} \label{ss:ope-cubic}
Finally let us give some examples of structure constants in the case of the cubic model.
The general structure constants presented in Sections~\ref{ssec:c12p2q} and~\ref{ssec:c112k} can be directly applied to the case $n=3/2$. In fact if for simplicity we avoid high order operators, the simplest examples that we can immediately find are the
multi-field generalizations of $C_{122}$ and $C_{111}$, keeping in mind that the quadratic operators involved in $C_{122}$ must not be a descendant. The generalization of $C_{122}$ is obtained by \eqref{ci2p2q} upon setting $p=q=1$ and reads
\begin{equation} \label{ciuv-cft}
C_{\phi_i {\cal S}_2 \tilde{{\cal S}}_2}
= -\zeta_3 \sqrt{c}\, q^{(3)}_{ilk}\,S_{jl} \, \tilde S_{kj},
\end{equation}
where appropriate rescaling of the fields has been done to accord with the usual CFT normalization, as discussed at the end of Section \ref{ssec:c12p2q-1}. This can be compared with the OPE coefficient
determined using the renormalization group~\cite{CSVZ4}, provided suitable rescalings are done in the beta function (see \cite{Codello:2017epp}), i.e.\ the replacement $\zeta_3 \rightarrow \zeta_3 /[2(4\pi)^{3/2}]$ is made, which in terms of $c$ is simply given as $\zeta_3 \rightarrow\zeta_3 \sqrt{c}/8$. Agreement between CFT and RG results is then verified immediately at this level. In order to obtain the explicit form of these OPE coefficients we choose the scaling operators ${\cal S}_2$ and $\tilde{{\cal S}}_2$ among \eqref{o2}, but excluding the descendant operator $\mathcal{O}^{(2)}_{2,ij}$. This gives
\begin{equation}
C_{\phi_i\,\mathcal{O}_{a,pq}^{(2)} \mathcal{O}_{b,rs}^{(2)}} = -\frac{1}{2}\zeta_3 \sqrt{c}\, q^{(3)}_{ijk}\, (P_a)_{pq,jl}\,(P_b)_{rs,lk}.
\end{equation}
Here $a,b\neq 2$. These OPE coefficients vanish for the cases $(a,b)=(1,1),(1,3)$ and therefore the only nontrivial example is the following
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
C_{\phi_i\,\mathcal{O}_{3,pq}^{(2)} \mathcal{O}_{3,rs}^{(2)}} = \frac{1}{2}\zeta_3 \sqrt{c}\, &&\left[\frac{N}{(N-1)^2} q^{(5)}_{ipqrs}-\frac{1}{4}\left(\delta_{rp}q^{(3)}_{isq}+\delta_{rq}q^{(3)}_{isp}+\delta_{sp}q^{(3)}_{irq}+\delta_{sq}q^{(3)}_{irp}\right) \right. \nonumber\\
- && \left. \frac{1}{2(N-1)}\left(\delta_{ip}q^{(3)}_{qrs}+\delta_{iq}q^{(3)}_{prs}+\delta_{ir}q^{(3)}_{pqs}+\delta_{is}q^{(3)}_{pqr}\right) \right. \nonumber\\
- && \left. \frac{1}{(N-1)^2}\left(\delta_{pq}q^{(3)}_{irs}+\delta_{rs}q^{(3)}_{ipq}\right)\right].
\end{eqnarray}}%
In the next section we obtain the fixed point value of $\zeta_3$ from CFT considerations which can then be inserted into the above equations in order to get the physical $\epsilon$-dependent result.
As the second example, Eq.~\eqref{ci2p-12q-1} gives for $p,q=1$ the generalization of the single field structure constant $C_{111}$. This is given by
\begin{equation} \label{c111-cubic}
C_{\phi_i \phi_j \phi_k} = -\frac{1}{8} V_{ijk} C^{\mathrm{free}}_{211} = -\frac{c^2}{4} \zeta_3 \, q^{(3)}_{ijk} \rightarrow -\frac{1}{4} \zeta_3 \sqrt{c} \, q^{(3)}_{ijk} = - \sqrt{\frac{2\epsilon}{7-3N}} \, q^{(3)}_{ijk},
\end{equation}
where the first two expressions are obtained from Eq.~\eqref{ci2p-12q-1}, while the third expression is given in the CFT normalization where the fields are rescaled to set their two point functions to unity. In the last equation $\zeta_3 \sqrt{c}/8$ has been set to the fixed point value computed in the next section in Eq.~\eqref{zeta3-fp}.
Apart from these structure constants that are proportional to the coupling constant it is straightforward to obtain by direct calculation some nontrivial structure constants in the free theory. For instance we have
\begin{equation}
\langle \mathcal{O}^{(2)}_{1,ij}(x) \mathcal{O}^{(2)}_{1,kl}(y) \mathcal{O}^{(2)}_{1,pq}(z)\rangle
= \frac{2\sqrt{2}N^{-3}\delta_{ij}\delta_{kl}\delta_{pq}}{|x-y|^2|y-z|^2|x-z|^2} \label{O111},
\end{equation}
\begin{equation}
\langle \mathcal{O}^{(2)}_{1,ij}(x) \mathcal{O}^{(2)}_{3,kl}(y) \mathcal{O}^{(2)}_{3,pq}(z)\rangle
= \frac{2\sqrt{2}N^{-1}\delta_{ij}(P_3)_{kl,pq}}{|x-y|^2|y-z|^2|x-z|^2} \label{O133},
\end{equation}
where in the second equation repeated use of the fusion and trace rules in Appendix B.\ref{ss:reduction} has been made. Notice that three-point functions involving the descendant operator $\mathcal{O}^{(2)}_{2,ij}$ do not define CFT data. The only nontrivial three-point function left that involves the primary quadratic operators is the one with three operators $\mathcal{O}^{(2)}_{3,ij}$, which we refrain from writing explicitly because of its length.
\subsubsection{Critical coupling $\zeta_3(\epsilon)$}\label{ss:cc-cubic}
One can also fix at leading order the relation linking the coupling $\zeta_3$ to $\epsilon$. This has been done in the single-field case in \cite{Codello:2017qek},
and extended to a general multi-field model in Section \ref{ss:fp-cubic}. However, it may still be instructive to repeat the argument directly for the permutation invariant case.
Let us start from the relation
\begin{equation}
\langle \phi^i(x) \phi^j(y) \phi^k(z)\rangle = \frac{C_{ijk}}{|x-y|^2|y-z|^2|x-z|^2} , \quad C_{ijk} = -\frac{1}{4}\zeta_3 c^2 q^{(3)}_{ijk} ,
\end{equation}
where for the structure constant $C_{ijk}$ we have used the second expression in Eq.~\eqref{c111-cubic}. This is simply because here we are working with the original scalar field and not the rescaled one $\hat \phi$.
Acting with three Laplacians on the general scaling form one obtains
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\Box_x\Box_y\Box_z\langle \phi^i(x) \phi^j(y) \phi^k(z)\rangle &\stackrel{\mathrm{LO}}{=}& 32(\epsilon-6\gamma)\frac{C_{ijk}}{|x-y|^4|y-z|^4|x-z|^4}\nonumber\\
&=& 8(6\gamma-\epsilon)\zeta_3 c^2\frac{q^{(3)}_{ijk}}{|x-y|^4|y-z|^4|x-z|^4} ,
\label{eqcub3box1-2}
\end{eqnarray}}%
while using the SDE one gets
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\langle \Box_x\phi^i(x) \Box_y\phi^j(y) \Box_z\phi^k(z)\rangle &=& \frac{\zeta_3^3}{2!^3}\langle q^{(3)}_{ii_1i_2}\phi^{i_1}(x)\phi^{i_2}(x)\; q^{(3)}_{jj_1j_2}\phi^{j_1}(y)\phi^{j_2}(y)\;q^{(3)}_{kk_1k_2}\phi^{k_1}(z)\phi^{k_2}(z)\rangle \nonumber\\
&\stackrel{\mathrm{LO}}{=}& \frac{\zeta_3^3}{2!^3}\,8(N-2)c^3\frac{q^{(3)}_{ijk}}{|x-y|^4|y-z|^4|x-z|^4}.
\label{eqcub3box2-2}
\end{eqnarray}}%
Equating the right hand sides of Eqs.~\eqref{eqcub3box1-2} and~\eqref{eqcub3box2-2} and recalling the value of the field anomalous dimension \eqref{eta-cubic} gives
\begin{equation}
\frac{1}{4}\zeta_3^3\,(N-1)c^3-8\epsilon\zeta_3 c^2 = \zeta_3^3\,(N-2)c^3,
\end{equation}
which is trivially solved to give
\begin{equation} \label{zeta3-fp}
\frac{\sqrt{c}}{8}\zeta_3(\epsilon) \stackrel{\mathrm{LO}}{=} \frac{1}{2}\sqrt{\frac{2\epsilon}{7-3N}}.
\end{equation}
This is in agreement with the RG result~\cite{CSVZ4} after making the replacement $\zeta_3 \rightarrow 8\zeta_3 /\sqrt{c}$ to accord with the RG conventions. One may now use this $\epsilon$ dependence of the coupling to express all the CFT data found in the previous sections in terms of $\epsilon$. For instance, the field anomalous dimension given in \eqref{eta-cubic} and the anomalous dimension of the descendant operator $q^{(3)}_{ijk}\phi^j\phi^k$ reported in \eqref{es1-cubic} become
\begin{equation}
\gamma = \frac{1}{6}\, \frac{N-1}{7-3N}\, \epsilon, \qquad
\gamma^2_2 = \frac{2}{3}\,\frac{2N-5}{3N-7}\,\epsilon.
\end{equation}
It is interesting to note that in the large-$N$ limit these critical data tend to those of the Lee-Yang model \cite{Rong:2017cow}
\begin{equation}
\gamma = -\frac{1}{18}\, \epsilon, \qquad
\gamma^2_2 = \frac{4}{9}\,\epsilon.
\end{equation}
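These manipulations can be verified symbolically. The sketch below solves the matching condition for $c\,\zeta_3^2$ (after dividing out the common factor $\zeta_3 c^2$), checks the descendant relation $\gamma^2_2=\gamma+\epsilon/2$ against the quoted $\epsilon$-dependent expressions, and takes the large-$N$ Lee-Yang limits; the variable names are ours.

```python
import sympy as sp

N, eps = sp.symbols('N epsilon')
Z = 32 * eps / (7 - 3 * N)       # claimed value of c*zeta_3^2, i.e. (sqrt(c) zeta_3/8)^2 = eps/(2(7-3N))
# matching condition (1/4) z^3 (N-1) c^3 - 8 eps z c^2 = z^3 (N-2) c^3,
# divided by z*c^2:  Z*((N-1)/4 - (N-2)) = 8*eps
check_fp = sp.simplify(Z * ((N - 1) / 4 - (N - 2)) - 8 * eps) == 0

eta = Z * (N - 1) / 96           # field anomalous dimension (eta-cubic), eta = 2*gamma
gamma = eta / 2
g22 = -Z * (2 * N - 5) / 48      # gamma^2_2 from (es1-cubic)
check_gamma = sp.simplify(gamma - (N - 1) * eps / (6 * (7 - 3 * N))) == 0
check_g22 = sp.simplify(g22 - sp.Rational(2, 3) * (2 * N - 5) * eps / (3 * N - 7)) == 0
check_desc = sp.simplify(g22 - gamma - eps / 2) == 0   # descendant relation
# Lee-Yang large-N limits: gamma -> -eps/18, gamma^2_2 -> 4 eps/9
check_lim = (sp.simplify(sp.limit(gamma, N, sp.oo) + eps / 18) == 0
             and sp.simplify(sp.limit(g22, N, sp.oo) - sp.Rational(4, 9) * eps) == 0)
print(check_fp, check_gamma, check_g22, check_desc, check_lim)
```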
\subsection{Quartic (restricted) Potts model}\label{sect:quartic-potts-cft}
Let us now move to the quartic theory. In this case we impose, on top of permutation symmetry, also a $\mathbb{Z}_2$ symmetry, and hence refer to it as the restricted quartic Potts model \cite{Rong:2017cow}.
The action of the $S_{N+1} \times \mathbb{Z}_2$ invariant theory in $d=4-\epsilon$ is as follows
\begin{eqnarray}
S[\phi]
&=&
\int {\rm d}^d x \Bigl\{
\frac{1}{2}(\partial\phi)^2 + \frac{1}{4}\zeta_{4,1} (\phi^2)^2+ \frac{1}{4!} \zeta_{4,2} \,q^{(4)}_{i_1 i_2 i_3 i_4} \phi_{i_1}\phi_{i_2}\phi_{i_3}\phi_{i_4}\Bigr\}.
\end{eqnarray}
As already mentioned in general the equation of motion
\begin{equation}
\Box \phi_i =\zeta_{4,1} \phi^2 \phi_i +\frac{1}{3!} \zeta_{4,2} \,q^{(4)}_{i i_1 i_2 i_3} \phi_{i_1}\phi_{i_2}\phi_{i_3},
\end{equation}
shows that, when the interactions are turned on, the conformal multiplets recombine in such a way that the composite operator on the r.h.s.\ has scaling dimension $2+\Delta$, where $\Delta$ is the scaling dimension of the fields $\phi_i$, which by symmetry is the same for all of them.
\subsubsection{Anomalous dimension}
To compute the anomalous dimension for the quartic model, where $n=2$, we need the eigenvalue of the tensor quadratic in the $T^{(4)}_{abcd}$ defined in \eqref{t4}. This is given by
\begin{equation}
T^{(4)}_{abc\, p} T^{(4)}{}^{abc}{}_{q} = \left[\zeta^2_{4,1}\frac{1}{3}(N+2) +\zeta^2_{4,2}(N^2-N+1) + 2\,\zeta_{4,1}\zeta_{4,2}N\right]\delta_{pq}\,,
\end{equation}
from which the anomalous dimension follows directly upon setting $n=2$ in \eqref{eta-cft}
\begin{equation}
\eta = \frac{c^2}{96} \left[\zeta^2_{4,1}\frac{1}{3}(N+2) +\zeta^2_{4,2}(N^2-N+1) + 2\,\zeta_{4,1}\zeta_{4,2}N\right]\,.
\end{equation}
This expression agrees with the results obtained from RG analysis \cite{CSVZ4}.
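This self-contraction can be checked numerically. The sketch below assumes that the $\delta$-part of $T^{(4)}$ enters with weight-one symmetrization, $T^{(4)}_{ijkl}=\zeta_{4,1}\,\delta_{(ij}\delta_{kl)}+\zeta_{4,2}\,q^{(4)}_{ijkl}$ (our reading of \eqref{t4}, consistent with \eqref{v=t4} below), and uses arbitrary sample couplings at a sample $N$.

```python
import numpy as np

N = 4                                  # sample value
M = N + 1
amb = np.sqrt(M) * (np.eye(M) - np.ones((M, M)) / M)
e = amb @ np.linalg.qr(amb.T)[0][:, :N]
q4 = np.einsum('ai,aj,ak,al->ijkl', e, e, e, e) / M
I = np.eye(N)
A = (np.einsum('ij,kl->ijkl', I, I) + np.einsum('ik,jl->ijkl', I, I)
     + np.einsum('il,jk->ijkl', I, I)) / 3             # delta_{(ij}delta_{kl)}

z1, z2 = 0.7, -1.3                                     # arbitrary sample couplings
T = z1 * A + z2 * q4
lhs = np.einsum('abcp,abcq->pq', T, T)
coef = z1**2 * (N + 2) / 3 + z2**2 * (N**2 - N + 1) + 2 * z1 * z2 * N
ok = np.allclose(lhs, coef * I)
print(ok)
```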
\subsubsection{Quadratic operators}\label{sect:quad_op_4}
The computation of the critical exponents of the quadratic operators for the case of (restricted) quartic Potts model, which corresponds to $n=2$, is easier compared to the other cases as the eigenvalue equation \eqref{crit-cft-2} is linear in the potential.
Indeed, following closely the same strategy used for the single field $\phi^4$ theory~\cite{Codello:2017qek} and discussed in general in Section~\ref{ss:qo},
one can start from the correlator
\begin{equation}
\langle \phi_i(x) \phi_j(y) [S_{pq} \phi_p \phi_q](z)\rangle,
\end{equation}
where $[S_{pq} \phi_p \phi_q]$ must be a scaling operator with anomalous dimension $\gamma^S_2$.
Making use of the SDE on one hand we have
\begin{align}
\Box_x \braket{ \phi_i(x) \phi_j(y) [S_{pq} \phi_p \phi_q](z) }&\overset{{\rm LO}}{=} \frac{{8c^2} \gamma\, S_{ij}}{|y-z|^{2 } |z-x|^{4}}
- \frac{4c^2(2\gamma\!-\!\gamma^S_2) S_{ij}}{|x-y|^{2 }|z-x|^{4}} \,,
\label{box3pf1gamma2}
\end{align}
an expression which should match at leading order
\begin{align}
\braket{\Box_x \phi_i(x) \phi_j(y) [S_{pq} \phi_p \phi_q](z) } &=
\frac{T^{(4)}_{iabc} S_{pq}}{3!} \braket{[\phi_a\phi_b\phi_c](x) \phi_j(y) [\phi_p \phi_q](z) }\nonumber \\
&\overset{{\rm LO}}{=} T^{(4)}_{ijpq} S_{pq} \frac{c^3}{|x-y|^{2 }|z-x|^{4}} \,.
\end{align}
We can therefore deduce that at leading order $\gamma/\gamma^S_2 \to 0$, i.e.\ that $\gamma^S_2$ depends linearly on the marginal couplings, and write the eigenvalue equation
\begin{equation}
\gamma^S_2 S_{ij}=\frac{c}{4} T^{(4)}_{ijpq} S_{pq}.
\label{gamma2eq}
\end{equation}
To obtain the critical exponents one has to diagonalize the matrix
\begin{equation} \label{v=t4}
\mathcal{M}_{i_1i_2i_3i_4} = \frac{c}{4} T^{(4)}_{i_1i_2i_3i_4} = \frac{c}{4}\zeta_{4,1}\delta_{(i_1i_2} \delta_{i_3i_4)}+\frac{c}{4}\zeta_{4,2}\,q^{(4)}_{i_1i_2i_3i_4}.
\end{equation}
From this, the parameters defined in \eqref{M} are immediately read off
\begin{equation}
\tau = \frac{c}{4}\zeta_{4,2}, \qquad
\rho = \frac{c}{12}\zeta_{4,1}, \qquad
\kappa = \frac{c}{6}\zeta_{4,1}.
\end{equation}
Given the values of these parameters and the fact that the anomalous dimension is of higher order, the eigensolutions of the stability matrix can then be summarized using \eqref{es1}-\eqref{es3} as follows
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{es0-quartic}
\gamma_2^1 &=& \frac{c}{12} \zeta_{4,1} (N+2) +\frac{c}{4}\zeta_{4,2}N, \hspace{47pt} (P_1)_{ij,kl}\,\phi_k\phi_l, \label{es1-quartic}
\\[2pt]
\gamma_2^2 &=& \frac{c}{6} \zeta_{4,1}+\frac{c}{4}\zeta_{4,2}(N-1),\hspace{64pt} (P_2)_{ij,kl}\,\phi_k\phi_l,
\\
\gamma_2^3 &=& \frac{c}{6}\zeta_{4,1}, \hspace{146pt} (P_3)_{ij,kl}\,\phi_k\phi_l. \label{es2-quartic}
\end{eqnarray}}%
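Since the anomalous dimension $\eta$ is of higher order, these three eigenvalues are simply the eigenvalues of $\frac c4 T^{(4)}$ acting on the three subspaces, and they can be cross-checked numerically. In units $c/4=1$ and for arbitrary sample couplings the block below verifies the quoted values (again assuming weight-one symmetrization of the $\delta$-term in \eqref{v=t4}).

```python
import numpy as np

N = 4                                  # sample value
M = N + 1
amb = np.sqrt(M) * (np.eye(M) - np.ones((M, M)) / M)
e = amb @ np.linalg.qr(amb.T)[0][:, :N]
q3 = np.einsum('ai,aj,ak->ijk', e, e, e) / M
q4 = np.einsum('ai,aj,ak,al->ijkl', e, e, e, e) / M
I = np.eye(N)
A = (np.einsum('ij,kl->ijkl', I, I) + np.einsum('ik,jl->ijkl', I, I)
     + np.einsum('il,jk->ijkl', I, I)) / 3
u = (q4 - (N - 1) * 0.5 * (np.einsum('pi,qj->pqij', I, I)
                           + np.einsum('pj,qi->pqij', I, I))
     - np.einsum('pq,ij->pqij', I, I) / N)             # P3-subspace basis

z1, z2 = 0.7, -1.3                                     # sample couplings, units c/4 = 1
Mstab = z1 * A + z2 * q4
ok1 = np.allclose(np.einsum('ijpq,pq->ij', Mstab, I), (z1 * (N + 2) / 3 + z2 * N) * I)
ok2 = np.allclose(np.einsum('ijpq,rpq->rij', Mstab, q3), (2 * z1 / 3 + z2 * (N - 1)) * q3)
ok3 = np.allclose(np.einsum('ijpq,rspq->rsij', Mstab, u), (2 * z1 / 3) * u)
print(ok1, ok2, ok3)
```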
\subsubsection{Critical couplings $\zeta_{4,1}(\epsilon)$ and $\zeta_{4,2}(\epsilon)$}\label{ss:cc-quartic}
As in the cubic model, one can fix the $\epsilon$ dependence of the critical couplings $\zeta_{4,1}(\epsilon)$ and $\zeta_{4,2}(\epsilon)$ also in this case. This is of course a special case of the general analysis presented in Section \ref{ss:rr}, but let us take a slightly different route which can also serve as a crosscheck in this particular case. Before getting into the actual calculation let us first review the single-field case. For a single scalar field we have at leading order
\begin{equation}
\square_x\square_y \langle \phi(x) \,\phi(y)\, \phi(z)^{2} \rangle
\stackrel{\mathrm{LO}}{=} C^{\mathrm{free}}_{112} \, \frac{4(\eta-\gamma_2)(\eta +\gamma_2-\epsilon)}{|x-y|^{4}|x-z|^{2}|y-z|^{2}},
\end{equation}
which we can compare, using the SDE, to
\begin{equation}
\langle \square_x\phi(x) \,\square_y\phi(y)\, \phi(z)^{2} \rangle
\stackrel{\mathrm{LO}}{=} \frac{g^2}{(2n-1)!^2} \frac{C^{\mathrm{free}}_{2n-1,2n-1,2}}{|x-y|^{4}|x-z|^{2}|y-z|^{2}},
\end{equation}
where $g$ is the coefficient of $\frac{1}{4!}\phi^4$ in the Lagrangian. Equating the two and using the known result $\gamma_2 = cg/4$, which may be found by applying a single $\Box_x$ to the same correlator, together with the fact that $\eta=\mathcal{O}(\epsilon^2)$, one finds $cg = 4\epsilon/3$.
The generalization of this calculation to the multi-field case is slightly more subtle, because there is more than one quadratic operator and one has to pick those with a definite scaling, obtained in Section~\ref{sect:quad_op_4}. Using the projectors in the space of the scaling quadratic operators introduced in Section~\ref{ss:qo-symm} we write on one hand
\begin{equation}
\square_x\square_y \langle \phi_i(x) \,\phi_j(y)\, [(P_a)_{kl,rs}\phi_r\phi_s](z) \rangle
\stackrel{\mathrm{LO}}{=} C^{\mathrm{free}}_{112} \, \frac{4(\eta-\gamma^a_2)(\eta +\gamma^a_2-\epsilon)}{|x-y|^{4}|x-z|^{2}|y-z|^{2}} (P_a)_{kl,ij},
\end{equation}
while on the other hand, using the SDE, we obtain
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \langle \square_x\phi_i(x) \,\square_y\phi_j(y)\, [(P_a)_{kl,rs}\phi_r\phi_s](z) \rangle
\nonumber\\
&=& \frac{1}{3!^2}\langle [T^{(4)}_{ii_1i_2i_3}\phi_{i_1}\phi_{i_2}\phi_{i_3}](x) \,[T^{(4)}_{jj_1j_2j_3}\phi_{j_1}\phi_{j_2}\phi_{j_3}](y)\, [(P_a)_{kl,rs}\phi_r\phi_s](z) \rangle \nonumber\\
&\stackrel{\mathrm{LO}}{=}& \frac{C^{\mathrm{free}}_{332}}{3!^2} \, \frac{T^{(4)}_{pqir}T^{(4)}_{pqjs}(P_a)_{kl,rs}}{|x-y|^{4}|x-z|^{2}|y-z|^{2}}.
\end{eqnarray}}%
Equating the two expressions we get
\begin{equation}
T^{(4)}_{pqir}T^{(4)}_{pqjs}(P_a)_{kl,rs} c^4 = 8c^2 (\eta-\gamma^a_2)(\eta +\gamma^a_2-\epsilon)(P_a)_{kl,ij}
\end{equation}
and noting that $\gamma^a_2=O(\epsilon)$ while $\eta=O(\epsilon^2)$ one can finally write
\begin{equation}
c^2T^{(4)}_{pqir}T^{(4)}_{pqjs}(P_a)_{kl,rs} = 8\gamma^a_2(\epsilon-\gamma^a_2)(P_a)_{kl,ij}.
\end{equation}
This equation can be specialized in the three different subspaces for the quadratic scaling operators. For $a=1$ we have
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \frac{1}{3} \left[(N+2)\zeta^2_{4,1} +3(N^2-N+1)\zeta^2_{4,2} + 6N\zeta_{4,1}\zeta_{4,2}\right]c^2 \nonumber\\
&=& 8 \left(\frac{c}{12} \zeta_{4,1} (N+2) +\frac{c}{4}\zeta_{4,2}N\right)\left(\epsilon - \frac{c}{12}
\zeta_{4,1} (N+2) -\frac{c}{4}\zeta_{4,2}N\right).
\end{eqnarray}}%
while for $a=2$ we get
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
&& \frac{1}{9} \left[(N+6)\zeta^2_{4,1} +9(N^2-2N+2)\zeta^2_{4,2} + 6(3N-2)\zeta_{4,1}\zeta_{4,2}\right]c^2 \nonumber\\
&=& 8 \left(\frac{c}{6} \zeta_{4,1}+\frac{c}{4}\zeta_{4,2}(N-1)\right)
\left(\epsilon - \frac{c}{6} \zeta_{4,1}-\frac{c}{4}\zeta_{4,2}(N-1)\right).
\end{eqnarray}}%
and finally $a=3$ gives
\begin{equation}
\frac{1}{9} \left[(N+6)\zeta^2_{4,1} +9\zeta^2_{4,2} + 6N\zeta_{4,1}\zeta_{4,2}\right]c^2 = 8 \frac{c}{6}\zeta_{4,1}\left(\epsilon - \frac{c}{6}\zeta_{4,1}\right).
\end{equation}
Despite the fact that there are three equations and two unknown variables, one can check that picking any pair of equations gives rise to the following four solutions for the critical couplings
\begin{equation}
\frac{c}{4}\left( \zeta_{4,1}, \zeta_{4,2}\right) = \!\Biggl\{\left(0, 0 \right), \left(\frac{3 \epsilon}{N\!+\!8}, 0 \right), \frac{\epsilon}{N\!+\!3} \left(1 \,,\, \frac{1}{3} \right),
\frac{\epsilon}{N^2\!-\!5N\!+\!8} \left(1 \,,\, \frac{N\!-\!4}{3} \right)\Biggr\}
\label{FP4}
\end{equation}
which are equal to those found from RG at leading order~\cite{CSVZ4}, after making the replacement $\zeta_{4,1}\rightarrow 4\zeta_{4,1}/c$.
Therefore with this method one can determine at leading order the three different nontrivial CFTs which correspond to the scale invariant theories (RG fixed points).
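This overdetermination can be verified symbolically: in the rescaled variables $x=\frac c4\zeta_{4,1}$, $y=\frac c4\zeta_{4,2}$ the three sector equations become polynomial, and the sketch below (with symbolic $N$) checks that all four critical couplings of \eqref{FP4} solve all three of them simultaneously.

```python
import sympy as sp

N, eps, x, y = sp.symbols('N epsilon x y')
# eigenvalues of the stability matrix in the three sectors, rescaled couplings
g1 = (N + 2) * x / 3 + N * y
g2 = 2 * x / 3 + (N - 1) * y
g3 = 2 * x / 3
eqs = [sp.Rational(16, 3) * ((N + 2) * x**2 + 3 * (N**2 - N + 1) * y**2
                             + 6 * N * x * y) - 8 * g1 * (eps - g1),
       sp.Rational(16, 9) * ((N + 6) * x**2 + 9 * (N**2 - 2 * N + 2) * y**2
                             + 6 * (3 * N - 2) * x * y) - 8 * g2 * (eps - g2),
       sp.Rational(16, 9) * ((N + 6) * x**2 + 9 * y**2
                             + 6 * N * x * y) - 8 * g3 * (eps - g3)]
sols = [(0, 0),
        (3 * eps / (N + 8), 0),
        (eps / (N + 3), eps / (3 * (N + 3))),
        (eps / (N**2 - 5 * N + 8), (N - 4) * eps / (3 * (N**2 - 5 * N + 8)))]
checks = [all(sp.simplify(eq.subs({x: xs, y: ys})) == 0 for eq in eqs)
          for xs, ys in sols]
print(checks)
```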
Now that we have obtained the leading order $\epsilon$-dependence of the couplings, let us express some of the critical data that we have found in terms of $\epsilon$. These universal results are given here, as an example, at the third critical point in \eqref{FP4}. For this model the field anomalous dimension is
\begin{equation}
\eta = \frac{(N+1)(N+7)}{54(N+3)^2}\, \epsilon^2\,.
\end{equation}
Also, the anomalous dimensions of the three scaling operators are
\begin{equation} \label{es-quartic-eps}
\gamma_2^1 = \frac{2(N+1)}{3(N+3)} \epsilon,
\qquad
\gamma_2^2 = \frac{N+1}{3(N+3)} \epsilon,
\qquad
\gamma_2^3 = \frac{2}{3(N+3)} \epsilon.
\end{equation}
\subsubsection{Quartic scaling operators}
Having at our disposal the general eigenvalue equation \eqref{gamma-k} for arbitrarily high order operators in even models, we can take advantage of the permutation symmetry, which significantly constrains the form of the potential, to obtain some higher order scaling operators and their corresponding anomalous dimensions. Let us consider as an example the quartic operators, which will also be used in the next section. For simplicity we restrict ourselves to the space of invariant quartic operators, that is, linear combinations of the form
\begin{equation}
\xi_{4,1} \,\delta_{(i_1 i_2} \delta_{i_3 i_4)} + \xi_{4,2} \,q^{(4)}_{i_1 i_2 i_3 i_4}.
\end{equation}
Inserting this into \eqref{gamma-k} using the fact that $V_{i_1i_2i_3i_4} = T^{(4)}_{i_1i_2i_3i_4}$ leads to a two dimensional eigenvalue problem
\begin{equation}
\gamma^S_4
\left(
\begin{array}{c} \xi_{4,1} \\ \xi_{4,2} \end{array}
\right)
=
\left(
\begin{array}{cc}
\frac{2}{3}(N+8)\zeta_{4,1}+2N\zeta_{4,2} & 2N\zeta_{4,1}+6\zeta_{4,2} \\
4\zeta_{4,2} & 4\zeta_{4,1}+6(N-1)\zeta_{4,2}
\end{array}
\right)
\left(
\begin{array}{c} \xi_{4,1} \\ \xi_{4,2} \end{array}
\right)
\end{equation}
which can be solved easily. Below, for each of the nontrivial fixed points of \eqref{FP4} we report the two scaling operators together with their corresponding anomalous dimensions, which are obtained by solving the above eigenvalue equation. For the first nontrivial fixed point in \eqref{FP4} these are
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{o1-quart1}
\mathcal{O}^{(4)}_1 &=& \frac{1}{\sqrt{8N(N+2)}} (\phi\!\cdot\!\phi)^2,
\\ \label{o2-quart1}
\mathcal{O}^{(4)}_2 &=& \sqrt{\frac{N+2}{24N(N-2)(N^2-1)}} \left(q^{(4)}_{ijkl}\,\phi_i\phi_j\phi_k\phi_l - \frac{3N}{N+2}(\phi\!\cdot\!\phi)^2\right).
\end{eqnarray}}%
\begin{equation}
\gamma^1_4 = 2\epsilon, \qquad \gamma^2_4 = \frac{12\epsilon}{N+8}
\end{equation}
Notice that both expressions diverge when $N$ vanishes, and the last expression also blows up in the limit $N\rightarrow -1$, which is a sign that the norm of the operators inside the parentheses vanishes in these limits. At the second nontrivial fixed point the scaling operators and their anomalous dimensions are
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{o1-quart2}
\mathcal{O}^{(4)}_1 &=& \frac{1}{\sqrt{24N(N+1)(N+7)}} \left(q^{(4)}_{ijkl}\,\phi_i\phi_j\phi_k\phi_l +3(\phi\!\cdot\!\phi)^2\right),
\\ \label{o2-quart2}
\mathcal{O}^{(4)}_2 &=& \frac{1}{\sqrt{2N(N-1)(N-2)(N+7)}} \left(q^{(4)}_{ijkl}\,\phi_i\phi_j\phi_k\phi_l -\frac{N+1}{2}(\phi\!\cdot\!\phi)^2\right).
\end{eqnarray}}%
\begin{equation}
\gamma^1_4 = 2\epsilon, \qquad \gamma^2_4 = \frac{4(N+1)\epsilon}{3(N+3)}
\end{equation}
Again, both operators blow up at $N=0$, and the first operator also blows up at $N=-1$. Finally at the third nontrivial fixed point we have
{\setlength\arraycolsep{2pt}
\begin{eqnarray} \label{o1-quart3}
\mathcal{O}^{(4)}_1 &=& \frac{N-4}{\sqrt{24N(N-1)(N-2)(N^2-6N+11)}} \left(q^{(4)}_{ijkl}\,\phi_i\phi_j\phi_k\phi_l +\frac{3}{N-4}(\phi\!\cdot\!\phi)^2\right),
\\ \label{o2-quart3}
\mathcal{O}^{(4)}_2 &=& \frac{1}{\sqrt{8N(N+1)(N^2-6N+11)}} \left(q^{(4)}_{ijkl}\,\phi_i\phi_j\phi_k\phi_l -(N-2)(\phi\!\cdot\!\phi)^2\right).
\end{eqnarray}}%
\begin{equation}
\gamma^1_4 = 2\epsilon, \qquad \gamma^2_4 = \frac{2(N-1)(N-2)\epsilon}{3(N^2-5N+8)}
\end{equation}
As the reader might have already noticed, the first operators at each fixed point, i.e. operators \eqref{o1-quart1}, \eqref{o1-quart2}, \eqref{o1-quart3} all have anomalous dimension $\gamma^1_4 = 2\epsilon$. In fact, apart from a normalization factor, these operators are nothing but the potential itself evaluated at the corresponding fixed point, and according to the discussion at the end of Section \ref{ss:gem}, regardless of the fixed point, the critical operator \eqref{V} is always a scaling operator with anomalous dimension \eqref{gamma-V-2n} which in the present case of $n=2$ is equal to $2\epsilon$.
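One can also verify symbolically that the quoted anomalous dimensions are indeed the eigenvalues of the mixing matrix at each nontrivial critical point. The sketch below assumes, as the fixed-point values \eqref{FP4} suggest, that the couplings entering the matrix are the rescaled combinations $\frac c4\zeta_{4,i}$, and checks that both $2\epsilon$ and the quoted $\gamma^2_4$ are roots of the characteristic polynomial.

```python
import sympy as sp

N, eps, z1, z2 = sp.symbols('N epsilon z1 z2')
# mixing matrix of the invariant quartic operators, rescaled couplings z_i = c*zeta_{4,i}/4
Mat = sp.Matrix([[2 * (N + 8) * z1 / 3 + 2 * N * z2, 2 * N * z1 + 6 * z2],
                 [4 * z2, 4 * z1 + 6 * (N - 1) * z2]])
fps = [(3 * eps / (N + 8), 0),
       (eps / (N + 3), eps / (3 * (N + 3))),
       (eps / (N**2 - 5 * N + 8), (N - 4) * eps / (3 * (N**2 - 5 * N + 8)))]
gamma24 = [12 * eps / (N + 8),
           4 * (N + 1) * eps / (3 * (N + 3)),
           2 * (N - 1) * (N - 2) * eps / (3 * (N**2 - 5 * N + 8))]
# both 2*eps and gamma^2_4 must annihilate det(Mat - lambda*1) at each fixed point
ok = all(sp.simplify((Mat.subs({z1: a, z2: b}) - lam * sp.eye(2)).det()) == 0
         for (a, b), g in zip(fps, gamma24) for lam in (2 * eps, g))
print(ok)
```

Since the two expected eigenvalues are generically distinct, this fixes the full spectrum of the $2\times2$ problem.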
\subsubsection{Structure constants: some examples}
As an example of a structure constant for the quartic Potts model one can consider for instance the expression \eqref{ciuv-even} evaluated for $n=2$. The simplest case corresponds to $p=2$ and $q=1$, which gives a generalization of $C_{114}$. The general expression for this structure constant, when the rescaling $\phi_i \rightarrow \phi_i/\sqrt{c}$ has been done to bring the two-point functions to unity, is
\begin{equation} \label{C114}
C_{\phi_i \phi_j {\cal S}_4} = \frac{V_{iklm}\,S_{j klm}}{3!} \frac{c}{8}C^{\mathrm{free}}_{1,3,4} = \frac{c}{2}T^{(4)}_{iklm}\,S_{j klm}.
\end{equation}
In order to evaluate this explicitly one needs to choose the operator ${\cal S}_4$ appropriately, i.e. such that it is a scaling operator satisfying \eqref{gamma-k}. As pointed out earlier in Section \ref{ss:gem} and shown explicitly in the previous section, at order $\epsilon$ one of the scaling operators is always the potential itself which corresponds to taking $S_{i_1i_2i_3i_4} = T^{(4)}_{i_1i_2i_3i_4}$. Such operators have been normalized to unity and reported in equations \eqref{o1-quart1}, \eqref{o1-quart2} and \eqref{o1-quart3} respectively for the three nontrivial critical theories given in \eqref{FP4}. Using the explicit form of these scaling operators which all have eigenvalue $2\epsilon$, and inserting them into the general equation \eqref{C114} we obtain the structure constants, respectively for the three nontrivial critical points of \eqref{FP4}
\begin{equation}
C_{\phi_i \phi_j {\cal O}^{(4)}_1}
= \sqrt{\frac{N+2}{2N}} \,\frac{\epsilon \delta_{ij}}{N+8},
\end{equation}
\begin{equation}
C_{\phi_i \phi_j {\cal O}^{(4)}_1}
= \sqrt{\frac{N^2+8N+7}{6N}} \,\frac{\epsilon \delta_{ij}}{3(N+3)},
\end{equation}
\begin{equation}
C_{\phi_i \phi_j {\cal O}^{(4)}_1} = \sqrt{\frac{N^4-9N^3+31N^2-45N+22}{6N}} \,\frac{\epsilon \delta_{ij}}{3(N^2-5N+8)}.
\end{equation}
These results are in complete agreement with the RG analysis~\cite{CSVZ4}. As the next step one may be tempted to calculate the structure constants involving the operators ${\cal O}^{(4)}_2$. However, one can argue that, regardless of the critical theory, replacing the operator ${\cal S}_4$ in \eqref{C114} with ${\cal O}^{(4)}_2$, or with any other quartic operator whose eigenvalue differs from that of ${\cal O}^{(4)}_1$, makes the resulting structure constant vanish. This can be seen as follows. By symmetry arguments, the structure constant \eqref{C114}, which has two free indices, can only be proportional to $\delta_{ij}$. This means that \eqref{C114} is proportional to its trace, i.e. to the result of setting $i=j$ and summing over the index. But the trace is proportional to the two-point function of the operators $V$ and ${\cal S}_4$, which vanishes if they have different eigenvalues. This can also be checked explicitly for the three ${\cal O}^{(4)}_2$ operators given in the previous section.
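In formulas, the vanishing argument of the previous paragraph can be summarized as (with $\gamma_{{\cal S}_4}$ the eigenvalue of ${\cal S}_4$ and $N$ the number of fields)
\begin{equation}
C_{\phi_i \phi_j {\cal S}_4} \;=\; \frac{\delta_{ij}}{N}\, C_{\phi_k \phi_k {\cal S}_4}\,, \qquad
C_{\phi_k \phi_k {\cal S}_4} \;\propto\; T^{(4)}_{klmn}\, S_{klmn} \;\propto\; \langle V \, {\cal S}_4 \rangle \;=\; 0 \quad \textrm{for} \;\; \gamma_{{\cal S}_4} \neq \gamma^1_4\,.
\end{equation}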
\subsection{Quintic Potts model}\label{sect:quintic-potts-cft}
The last model that we consider is the critical quintic model which is defined in $d_c=\frac{10}{3}$. The $S_{N+1}$-invariant action takes the form
\begin{equation}
S[\phi] = \int {\rm d}^d x \Bigl\{
\frac{1}{2}(\partial\phi)^2
+ \frac{1}{5!}\Bigl(\zeta_{5,1} \delta_{(i_1 i_2} q^{(3)}_{i_3 i_4 i_5)} +\zeta_{5,2} \,q^{(5)}_{i_1 i_2 i_3 i_4 i_5}\Bigr)\phi_{i_1}\dots \phi_{i_5}
\Bigr\}\,,
\end{equation}
with two marginal couplings. In the following we give several results for the critical data in the $\epsilon$ expansion at leading order. Notice that one finally has to set $\epsilon=1/3$ in order to get the results in three dimensions.
\subsubsection{Anomalous dimension}
The quintic model corresponds to the multicriticality label $n=5/2$. In this case the computation of the anomalous dimension requires the following quadratic tensor appearing in \eqref{t5}
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
T^{(5)}_{abcd\, p} T^{(5)}{}^{abcd}{}_{q} &=& \left[\zeta^2_{5,1}\frac{3}{100}(N+18) + \zeta^2_{5,2}(N^2-N+1)+ \zeta_{5,1}\zeta_{5,2}\frac{3}{5}(3N-2)\right] N q^{(2)}_{pq} \nonumber\\
&+& \left[\zeta^2_{5,1}\frac{1}{100}(N^2+8N-42)- \zeta^2_{5,2}+ \zeta_{5,1}\zeta_{5,2}\frac{1}{5} N(N-4)\right] q^{(2)}_{pq} \nonumber\\
&+& \left[\zeta^2_{5,1}\frac{3}{50}(N-3)\right]N q^{(2)}_{pq} + \left[\zeta^2_{5,1}\frac{3}{50}(N-3)\right]q^{(2)}_{pq} \nonumber\\[3mm]
&=& \left[\zeta^2_{5,1}\frac{1}{10}(N+6) +\zeta^2_{5,2}(N^2+1) + 2\,\zeta_{5,1}\zeta_{5,2}N\right](N-1)q^{(2)}_{pq}\,.
\end{eqnarray}}%
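As a cross-check of the algebra, the line-by-line contributions in the eqnarray above can be resummed symbolically. The following sympy sketch is our own verification of the quoted factorized form, not part of the original derivation:

```python
import sympy as sp

N, z1, z2 = sp.symbols('N zeta_51 zeta_52')

# The three lines of contributions quoted above, with the overall
# tensor q^(2)_{pq} stripped off (the N q^(2) and q^(2) pieces kept separate)
line1 = (sp.Rational(3, 100)*(N + 18)*z1**2 + (N**2 - N + 1)*z2**2
         + sp.Rational(3, 5)*(3*N - 2)*z1*z2) * N
line2 = (sp.Rational(1, 100)*(N**2 + 8*N - 42)*z1**2 - z2**2
         + sp.Rational(1, 5)*N*(N - 4)*z1*z2)
line3 = sp.Rational(3, 50)*(N - 3)*z1**2 * N + sp.Rational(3, 50)*(N - 3)*z1**2

# Resummed form on the last line of the eqnarray
claimed = (sp.Rational(1, 10)*(N + 6)*z1**2 + (N**2 + 1)*z2**2
           + 2*N*z1*z2) * (N - 1)

assert sp.expand(line1 + line2 + line3 - claimed) == 0
```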
This can then be used to obtain the anomalous dimension in terms of the two critical couplings
\begin{equation} \label{eta-quintic}
\eta = \frac{3c^3}{640}\left[\zeta^2_{5,1}\frac{1}{10}(N+6) +\zeta^2_{5,2}(N^2+1) + 2\,\zeta_{5,1}\zeta_{5,2}N\right](N-1)\,.
\end{equation}
This expression agrees with the findings of RG analysis~\cite{CSVZ4}.
\subsubsection{Quadratic operators} \label{ss:fo-103}
Let us now turn to the critical exponents of operators of the form $S_{ab}\phi_a\phi_b$. For these to be eigenoperators in the quintic model they must satisfy
\begin{equation}
(\gamma^S_2 - \eta) \; S_{ij} =\frac{3c^3}{32}\, V_{i\,p\,i_2i_3 i_4}V_{j\,q\,i_2 i_3 i_4} \,S_{pq},
\end{equation}
which is a special case of \eqref{crit-cft} for $n=5/2$. This eigenvalue equation is consistent with the linear flow of the couplings of quadratic operators $\phi_a\phi_b$ obtained directly with RG methods~\cite{CSVZ4}.
In order to diagonalize this equation one needs to find the eigenvectors and eigenvalues of the matrix
\begin{equation} \label{M=T5T5}
\mathcal M_{ij, pq} = \frac{3c^3}{32} T^{(5)}_{abc\, ip} T^{(5)}{}^{abc}{}_{jq}, \qquad
T^{(5)}_{i_1i_2i_3i_4i_5} = \zeta_{5,1}\delta_{(i_1i_2} q^{(3)}_{i_3i_4i_5)}+\zeta_{5,2}\,q^{(5)}_{i_1i_2i_3i_4i_5}.
\end{equation}
The details of this computation are collected in Appendix~B.\ref{ss:details}. Here we report the final result, i.e. the eigenvalues
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\label{gamma21-quintinc}
\gamma_2^1 &=& \frac{63c^3}{640}\!\!\left[\frac{1}{10}(N+6)\zeta^2_{5,1} +(N^2+1)\zeta^2_{5,2} + 2N\zeta_{5,1}\zeta_{5,2}\right]\!(N-1) = 21\eta ,
\\[1mm]
\label{gamma22-quintinc}
\gamma_2^2 &=& \frac{3c^3}{640}\bigg[\frac{9\zeta_{5,1}^2}{10}(N^2+15N-26) + (21N^3-41N^2+41N-41)\zeta_{5,2}^2
\nonumber\\[-1mm]
&& \hspace{10mm} + 6(7N^2-13N+4)\zeta_{5,1}\zeta_{5,2}\bigg],
\\[1mm]
\label{gamma23-quintinc}
\gamma_2^3 &=&\! \frac{3c^3}{640}\left[\frac{3}{10}\zeta^2_{5,1}(N+14) +\zeta^2_{5,2}(N^2+2N+7) + 6\zeta_{5,1}\zeta_{5,2}N\right]\!(N-3),
\end{eqnarray}}%
respectively corresponding to the scaling operators $(P_a)_{ij,kl}\,\phi_k\phi_l$ for $a=1,2,3$. The value of $\gamma_2^1$ is again seen to be consistent with the general result \eqref{gamma20-eta}.
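The relation $\gamma_2^1 = 21\eta$ can be verified symbolically by comparing \eqref{gamma21-quintinc} with \eqref{eta-quintic}, since both share the same bracket; a minimal sympy check of our own:

```python
import sympy as sp

N, c, z1, z2 = sp.symbols('N c zeta_51 zeta_52')

# Common bracket appearing in both eq. (eta-quintic) and eq. (gamma21-quintinc)
bracket = sp.Rational(1, 10)*(N + 6)*z1**2 + (N**2 + 1)*z2**2 + 2*N*z1*z2

eta = sp.Rational(3, 640)*c**3*bracket*(N - 1)         # eq. (eta-quintic)
gamma_2_1 = sp.Rational(63, 640)*c**3*bracket*(N - 1)  # eq. (gamma21-quintinc)

assert sp.simplify(gamma_2_1 - 21*eta) == 0
```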
\subsubsection{Structure constants}
Among the general classes of structure constants that we have obtained in this work, the ones in Sections~\ref{ssec:c12p2q},~\ref{ssec:c111} and~\ref{ssec:c112k} can be applied to the case $n=5/2$. The simplest examples, which do not involve higher-order operators, are the
multi-field generalizations of $C_{122}$ and $C_{111}$. The former is given by \eqref{ci2p2q} upon setting $p=q=1$ and reads
\begin{equation} \label{c122-quintic}
C_{\phi_i {\cal S}_2 \tilde{{\cal S}}_2}
= -\frac{9}{4}\; c^{\scriptscriptstyle 3/2}\, T^{(5)}_{i\,a_1a_2\,b_1b_2}\,S_{a_1a_2} \, \tilde S_{b_1b_2},
\end{equation}
where an appropriate rescaling of the fields has been done to accord with the usual CFT normalization, as discussed at the end of Section \ref{ssec:c12p2q-1}. This can be compared with the OPE coefficient
determined using the renormalization group~\cite{CSVZ4}, provided suitable rescalings are done in the beta function.
In order to obtain the explicit form of these structure constants (OPE coefficients) we choose the scaling operators ${\cal S}_2$ and $\tilde{{\cal S}}_2$ among \eqref{o2}. This gives
\begin{equation}
C_{\phi_i\,\mathcal{O}_{a,pq}^{(2)} \mathcal{O}_{b,rs}^{(2)}} = -\frac{9}{8}\; c^{\scriptscriptstyle 3/2}\, T^{(5)}_{i\,ab\,ef}\, (P_a)_{pq,ab}\,(P_b)_{rs,ef}.
\end{equation}
These structure constants vanish for the cases $(a,b)=(1,1),(1,3)$ and for the nontrivial cases they are given as follows
\begin{equation}
C_{\phi_i\,\mathcal{O}_{1,pq}^{(2)} \mathcal{O}_{2,rs}^{(2)}} = -\frac{9((N+6)\zeta_{5,1}+10N\zeta_{5,2})}{80N}\, c^{\scriptscriptstyle 3/2} \,\delta_{pq} q^{(3)}_{irs},
\end{equation}
\begin{equation}
C_{\phi_i\,\mathcal{O}_{2,pq}^{(2)} \mathcal{O}_{2,rs}^{(2)}} = -\frac{9((4N-6)\zeta_{5,1}+5(N-1)^2\zeta_{5,2})}{40(N-1)^2}\, c^{\scriptscriptstyle 3/2}\left(q^{(5)}_{ipqrs}-\delta_{pq}q^{(3)}_{irs}-\delta_{rs}q^{(3)}_{ipq}\right),
\end{equation}
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
C_{\phi_i\,\mathcal{O}_{2,pq}^{(2)} \mathcal{O}_{3,rs}^{(2)}} = \frac{9}{40}(N-3)\zeta_{5,1} \,c^{\scriptscriptstyle 3/2} &&\left[\frac{1}{(N-1)^2} q^{(5)}_{ipqrs}-\frac{1}{N-1}\delta_{i(r}q^{(3)}_{s)pq} \right. \nonumber\\
- && \left. \frac{1}{(N-1)^2}\delta_{pq}q^{(3)}_{irs}-\frac{1}{N(N-1)^2}\delta_{rs}q^{(3)}_{ipq}\right],
\end{eqnarray}}%
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
C_{\phi_i\,\mathcal{O}_{3,pq}^{(2)} \mathcal{O}_{3,rs}^{(2)}} = \frac{9}{20}\,\zeta_{5,1} \,c^{\scriptscriptstyle 3/2} &&\left[\frac{N}{(N-1)^2} q^{(5)}_{ipqrs}-\frac{1}{4}\left(\delta_{rp}q^{(3)}_{isq}+\delta_{rq}q^{(3)}_{isp}+\delta_{sp}q^{(3)}_{irq}+\delta_{sq}q^{(3)}_{irp}\right) \right. \nonumber\\
- && \left. \frac{1}{2(N-1)}\left(\delta_{ip}q^{(3)}_{qrs}+\delta_{iq}q^{(3)}_{prs}+\delta_{ir}q^{(3)}_{pqs}+\delta_{is}q^{(3)}_{pqr}\right) \right. \nonumber\\
- && \left. \frac{1}{(N-1)^2}\left(\delta_{pq}q^{(3)}_{irs}+\delta_{rs}q^{(3)}_{ipq}\right)\right].
\end{eqnarray}}%
As discussed earlier, the indices $ij$ provide a redundant description of the set of operators.
Using the nonredundant descriptions given by \eqref{o0} and \eqref{ok} one may re-express the above structure constants apart from the last one. These are
\begin{equation}
C_{\phi_i\,\mathcal{O}_0^{(2)} \mathcal{O}_j^{(2)}} = -\frac{9}{80}\, c^{\scriptscriptstyle 3/2} (N-1)((N+6)\zeta_{5,1}+10N\zeta_{5,2})\,\delta_{ij},
\end{equation}
\begin{equation}
C_{\phi_i\,\mathcal{O}_j^{(2)} \mathcal{O}_k^{(2)}} = -\frac{9}{40}\, c^{\scriptscriptstyle 3/2}\,((4N-6)\zeta_{5,1}+5(N-1)^2\zeta_{5,2}) \, q^{(3)}_{ijk},
\end{equation}
\begin{equation}
C_{\phi_i\,\mathcal{O}_j^{(2)} \mathcal{O}_{3,kl}^{(2)}} = -\frac{9}{40}\,c^{\scriptscriptstyle 3/2}\,\zeta_{5,1} (P_3)_{ij,kl}.
\end{equation}
The structure constant $C_{111}$ can also be generalized in this case. Contrary to the cubic model for which $C_{ijk}$ was obtained from \eqref{ci2p-12q-1}, for the quintic model this is obtained from Eq.~\eqref{cijk}. Setting $\ell=2$ we get
\begin{equation}
C_{\phi_i \phi_j \phi_k} = \frac{729}{4096}\, c^{\scriptscriptstyle 9/2} \, V_{ia_1a_2b_1b_2} V_{jb_1b_2c_1c_2} V_{kc_1c_2a_1a_2}\,.
\end{equation}
Replacing the potential derivatives with the expression \eqref{t4} and contracting the indices we get
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
C_{\phi_i \phi_j \phi_k} &=& \frac{729}{4096}\, c^{\scriptscriptstyle 9/2} \left[\frac{1}{250}(11N^2+146N-240) \zeta_{5,1}^3 + (N-1)^2(N^2+2)\zeta_{5,2}^3 \right. \nonumber\\
&& \left. \hspace{15mm}+ \frac{3}{100}(N^3+88N^2-159N+54) \zeta_{5,1}^2\zeta_{5,2} \right. \nonumber\\
&& \left. \hspace{15mm}+ \frac{3}{5}(5N^3-10N^2+9N-6) \zeta_{5,1}\zeta_{5,2}^2 \right] q^{(3)}_{ijk} \,.
\end{eqnarray}}%
Let us stress that, contrary to the cubic and the quartic models, in our CFT+SDE approach, and without exploiting the RG analysis, we cannot fix the relation between the critical couplings and $\epsilon$, for the same reasons as in the single-field case. Therefore in this case we use our RG results~\cite{CSVZ4}, which are reported in Appendix
B.\ref{sec:quintic_RG}, in order to write explicit expressions in $\epsilon$.
\subsubsection{Some universal results}
We collect in this section a few examples of critical quantities expressed in terms of $\epsilon$. The results are reported for two infrared critical points: one at $N=0$, corresponding to percolation theory, and one at $N=-1$, corresponding to spanning forests. For $N=0$ we take the second critical point \eqref{n0-2} and report, as a few examples, the following critical quantities, which include the field anomalous dimension, the three anomalous dimensions of the quadratic operators, as well as two structure constants
\begin{equation}
\eta = -0.00431599 \epsilon, \quad
\begin{array}{l}
\gamma^1_2 = -0.0906358 \epsilon, \\[1mm]
\gamma^2_2 = -0.118809 \epsilon, \\[1mm]
\gamma^3_2 = -0.0906358 \epsilon,
\end{array}
\quad
\begin{array}{c}
C_{\phi_i\,\mathcal{O}_0^{(2)} \mathcal{O}_j^{(2)}} = 0.39996\, \epsilon^{\scriptscriptstyle 1/2} \,\delta_{ij},
\\
C_{\phi_i \phi_j \phi_k} = - 0.08266 \,\epsilon^{\scriptscriptstyle 3/2} \, q^{(3)}_{ijk} \,.
\end{array}
\end{equation}
For the case $N=-1$ we pick the second critical point \eqref{n1-2}. For this model the field anomalous dimension, the anomalous dimensions of the quadratic operators and the two structure constants are
\begin{equation}
\eta = -0.000509232 \epsilon, \quad
\begin{array}{l}
\displaystyle \gamma^1_2 = -0.0106939 \epsilon, \\[1mm]
\gamma^2_2 = -0.0183324 \epsilon, \\[1mm]
\gamma^3_2 = -0.0816536 \epsilon,
\end{array}
\quad
\begin{array}{c}
C_{\phi_i\,\mathcal{O}_0^{(2)} \mathcal{O}_j^{(2)}} = - 0.3708\, \epsilon^{\scriptscriptstyle 1/2} \,\delta_{ij},
\\
C_{\phi_i \phi_j \phi_k} = - 2.04362 \,\epsilon^{\scriptscriptstyle 3/2} \, q^{(3)}_{ijk} \,.
\end{array}
\end{equation}
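As a quick sanity check of the numbers quoted above, the general relation $\gamma^1_2 = 21\eta$ of \eqref{gamma21-quintinc} can be tested directly on the two sets of numerical values (a consistency check of our own, with the coefficients of $\epsilon$ copied from the equations above):

```python
# Quoted leading-order values (coefficients of epsilon) at the two
# infrared critical points discussed above
points = {
    "percolation (N=0)":      {"eta": -0.00431599,  "gamma_2_1": -0.0906358},
    "spanning forest (N=-1)": {"eta": -0.000509232, "gamma_2_1": -0.0106939},
}

for name, v in points.items():
    # gamma_2^1 = 21 * eta should hold at any quintic fixed point
    assert abs(21*v["eta"] - v["gamma_2_1"]) < 1e-6, name
```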
The $\epsilon$-dependence of the critical points presented in Appendix B.\ref{sec:quintic_RG} can also be calculated analytically, from which analytic results could be given for the above quantities. However, these are complicated expressions involving square roots, so we avoid them and present here only the approximate numerical values.
\section{Conclusions}\label{sect:conclusions}
We have employed a general method based on the use of conformal symmetry and Schwinger-Dyson equations to investigate multicritical multi-field scalar QFTs,
characterized by a critical potential of order $m$. At the leading order in the perturbative $\epsilon$-expansion,
this method gives access, in a simple way, to some universal data which includes both nontrivial anomalous dimensions and structure constants.
These results generalize the method applied to generic multicritical models of a single scalar field presented in~\cite{Codello:2017qek}.
Even without assuming any symmetry and considering only two- and three-point functions,
one can already find a considerable amount of information which includes the anomalous dimensions of the fields, the scaling dimension of the quadratic composite operators,
a tower of all-order composite operators for the ``even'' ($m$=$2n$) unitary multicritical theories,
and the explicit form of several structure constants. For $m=2n$ and $m=3$ one can also find the equations that constrain the interaction potential at criticality.
In particular we show how these constraints can be cast in exactly the same form as fixed-point conditions which could be obtained from a functional perturbative RG analysis.
Most of the general results and computational strategies presented in this work are new: while part of the results could in principle be obtained from more standard perturbative RG methods,
one of the objectives of this paper has been to show how conformal symmetry alone can give access to such critical information.
We remark here some interesting general features of our investigation. The results obtained in the first part of the paper for a general class of multi-field scalar QFTs
are derived without the use of any internal symmetry, but of course can be specialized to cases characterized by any (continuous or discrete) symmetry.
We have focused on the derivation of many universal quantities, but we stress that
also
the criticality conditions that
we have obtained on the set of possible couplings of the potential in the Landau-Ginzburg description are important by themselves.
In fact these conditions, which coincide with the fixed-point equations of the functional beta functions determined with standard perturbative RG, are in many cases expected
to lead to the emergence of some symmetries at criticality, which is a fact that has already been observed in the literature (see~\cite{Osborn:2017ucf} and references therein).
Clearly, with a growing number of fields the pattern of solutions is increasingly complex, but in principle
any multicritical multi-field scalar theory can be analyzed specializing our general framework.
Pursuing such an approach one can access -- at the perturbative level -- all the possible internal continuous
or discrete symmetries of a theory with given critical dimension $d_c$ and number of fields $N$ that is a CFT at criticality.
In other words one can expect constrained emergent symmetries at a critical point.
We have then specialized the analysis to potentials characterized by $S_q$ invariance,
which encompass the Potts model and some of its multicritical generalizations.
We have used the standard representation theory of $S_q$ to construct all the symmetric interaction terms in the multicritical potential.
Even before embarking on explicit calculations for particular models, we have explored how far we could get from a knowledge of the symmetry group alone.
We have given explicit expressions for the decomposition of the quadratic operators into scaling operators,
which carry irreducible representations of $S_q$, and we have presented model independent formulas for anomalous dimensions of such operators.
The formalism makes it possible to perform an analytic continuation of $q$ to specific values,
e.g.\ the ones of special interest in statistical mechanics: $q=1$ (percolation) and $q=0$ (spanning forest).
Therefore, as an application we have analyzed in detail all the theories which have an upper critical dimension $d_c>3$,
which are nontrivial critical models in any integer dimensions $d$ in the range $3 \le d<d_c$, and specialized $q$ to the values of interest.
The results we have found match with the ones that will be presented in a companion paper~\cite{CSVZ4} which is devoted to the study of
Potts-like field theories with functional perturbative renormalization group methods
and which puts an emphasis on those with quintic interactions ($d_c=10/3$).
The results found there are confirmed by the present investigation for all critical and multicritical Potts-like models.
In the cubic (standard Potts) and quartic (restricted Potts)
critical models one can fix, with the aid of relations based on conformal invariance,
the critical values of the couplings as a function of $\epsilon$.
Unfortunately this is not possible in the quintic case; therefore we completed the analysis of the quintic model
using the RG results of the companion paper, which for convenience we have included and briefly discussed in Appendix B.\ref{sec:quintic_RG}.
As in the main text, we have also focused on the generalizations of the percolation and spanning-forest universality classes,
for which we show that critical solutions associated with second-order phase transitions do exist, depending on the kind of multicritical theory considered.
There are several directions and extensions of this work that one can take for future investigations.
One such direction would be the inclusion of large-$N$ types of analyses.
It is not immediately clear if in this very general framework the large-$N$ expansion can be of help because we lack the constraints offered by a symmetry such as $O(N)$,
but we expect that there can be intermediate semi-general situations in which it could be useful.
We also expect that by extending the analysis to higher order correlation functions one can gain access to further relations
and information on the conformal data. In particular one could wish to extend the results to the next-to-leading nontrivial order in the $\epsilon$ expansion.
Furthermore, one could change the structure and the degrees of freedom of the theories: e.g.\ considering general tensor models or theories with fermionic \cite{Torres:2018jij} or vector fields.
Again, interesting constraints on the symmetries should be investigated, both in this framework
and in the functional perturbative RG, to get access to
their universal data.
Among these possible extensions we would like to include
the study of nonunitary higher-derivative theories, which have recently been investigated in the single-field case in \cite{Safari:2017irw,Safari:2017tgs}
and which are important to extend critical theories to higher dimensions.
A final future direction that we would like to point out is inspired by the works \cite{Gorbenko:2018ncu,Gorbenko:2018dtm}
and involves the study of ``walking transitions''. In some
systems scale invariance can be approximately realized because the renormalization group runs ``close'' to complex fixed points.
These complex CFTs are nonunitary, but otherwise fully consistent conformal theories with complex conformal data.
For these models too we expect that the general conditions of criticality derived in Section~\ref{sec:FPcond} select
the allowed internal symmetries compatible with the number of fields and the upper critical dimensions.
Furthermore, the $q$-states Potts model considered in this paper is a prototypical example for the investigation of complex CFTs because,
as a function of $q$, pairs of fixed points that are related by the reflection $\phi^i \to -\phi^i$
annihilate and morph into pairs of purely imaginary complex CFTs (structurally similar to the PT-invariance of the Lee-Yang model's potential \cite{Zambelli:2016cbw}).
Therefore, by suitably tuning $q$ and $d$ it is realistically possible to encounter scenarios in
which walking transitions are realized.
\bigskip
\noindent \emph{Acknowledgements.}\\
OZ acknowledges support from the DFG under Grant No.~Za~958/2-1.
\section{Introduction}
\label{sec:intro}
In this paper we study the recently conjectured 5d gauge theory descriptions of 6d SCFTs compactified on a circle \cite{Hayashi:2015zka}. The 6d $\mathcal{N}=(1,0)$ SCFTs have a tensor multiplet and $Sp(N)$ gauge symmetry with $N_f=2N+8$ fundamental hypermultiplets, and these can be Higgsed \cite{Kim:2015fxa} to the E-string theory \cite{Ganor:1996mu,Klemm:1996hh,Minahan:1998vr,Eguchi:2002fc}. The 5d $\mathcal{N}=1$ gauge theories have $Sp(N+1)$ gauge symmetry and $N_f=2N+8$ fundamental hypermultiplets. The 5d $Sp(N+1)$ gauge theories at $N\geq1$ with $N_f \leq 2N+6$ hypermultiplets are known to have non-trivial 5d UV fixed points \cite{Seiberg:1996bd,Intriligator:1997pq}. If $N_f \geq 2N+7$, the theories have Landau pole issues, in that the Coulomb branch moduli spaces are incomplete due to strong-coupling singularities. So for such theories to have UV fixed points, there should be physical explanations of these singularities. Since the 5d descriptions suggested in \cite{Hayashi:2015zka} have $N_f=2N+8$ flavors, beyond the bound of \cite{Seiberg:1996bd,Intriligator:1997pq}, it would be desirable to have a better understanding of how this is happening. \\
In this paper we test these novel 5d-6d dualities by studying the spectrum of instanton solitons. Instantons in the 5d gauge theories play important roles in studying 5d and 6d SCFTs \cite{Tachikawa:2015mha,Yonekura:2015ksa,Kim:2015jba}. In particular, for 5d gauge theories having 6d UV fixed points, instantons are Kaluza-Klein momenta on the compactified circle. Therefore instantons are crucial objects in 5d for understanding the 6d physics.\\
To study instantons, we use the ADHM construction engineered by a string theory brane picture. However, since the ADHM construction embeds the instanton quantum mechanics into string theory, it often contains unwanted extra degrees of freedom which are not included in the QFT that one is interested in. So when we compute instanton partition functions via the string theory engineered ADHM partition functions, the extra contributions should be subtracted to obtain the correct QFT instanton partition functions \cite{Hwang:2014uwa}. For various models one can separately compute these extra contributions from string theory considerations \cite{Hwang:2014uwa}. Unfortunately, we do not know how to compute these extra contributions for the 5d $Sp(N+1)$ gauge theories with $N_f=2N+8$ hypermultiplets. Nonetheless, one can compute the one-instanton partition function exactly. The brane system for the 5d gauge theories has an $O7^-$-plane, and the one-instanton sector is described by a half D1-brane localized at the $O7^-$-plane. One expects that there are no extra degrees of freedom in the one-instanton sector. See section 2.2 and Figure~\ref{5dbrane}. \\
Instanton partition functions compute the BPS spectrum of instantons bound to W-bosons in the Coulomb phase. Part of the 5d W-bosons uplift to 6d self-dual strings wrapping the circle, and instantons are KK momenta on these strings. So one can study the same physics from the elliptic genera of the 6d self-dual strings. We compute these elliptic genera and compare them with the one-instanton partition functions of the 5d gauge theories. We find perfect agreements, which provide nontrivial support of the proposal made in \cite{Hayashi:2015zka}. In particular, our test clarifies the physical setting of the 5d-6d dualities, by emphasizing the roles of background Wilson lines,
and also by explicitly showing the relations between various 5d and 6d parameters.\\
This paper is organized as follows. In section 2, we briefly review the E-string theory and its $Sp(N)$ generalizations. In both cases, it is crucial to consider the effects of background Wilson lines for the flavor symmetries. We compare the E-strings' elliptic genera and the 5d instanton partition functions combined with the perturbative index, and show the fugacity map between the two indices. In section 3, we compute the elliptic genera for self-dual strings in the 6d SCFT with $Sp(1)$ gauge symmetry and 10 hypermultiplets using a 2d gauge theory description. We compare this result with the one-instanton partition functions of the 5d $Sp(2)$ gauge theory with 10 fundamental hypermultiplets. In section 4, we generalize our result to 6d $Sp(N)$ gauge theories. We will see that the 5d $Sp(N+1)$ gauge group can be decomposed into $Sp(1)\times Sp(N)$, where the former $Sp(1)$ gives the 6d self-dual string structure as in the E-string theory and the latter $Sp(N)$ gives the 6d gauge group. In section 5, we conclude with some remarks on future directions.\\
\section{E-strings and their $Sp(N)$ generalizations}
\label{sec:review}
We will briefly review the E-string theory \cite{Ganor:1996mu,Klemm:1996hh,Minahan:1998vr,Eguchi:2002fc,Kim:2014dza}, and its circle compactification to the 5d $Sp(1)$ gauge theory with 8 hypermultiplets. The E-string theory and the 6d $\mathcal{N}=(1,0)$ SCFT with $Sp(1)$ gauge symmetry are well studied in \cite{Kim:2014dza,Kim:2015fxa}, and we follow their ideas. \\
First consider the type IIA brane description of the 6d $\mathcal{N}=(1,0)$ SCFT with $Sp(N)$ gauge symmetry and $N_f=2N+8$ hypermultiplets. The case with $N=0$ engineers the E-string theory.
The brane system is given in Figure~\ref{6dbrane} \cite{Brunner:1997gf,Hanany:1997gh}, and this theory is also known as the $(D_{N+4},D_{N+4})$ minimal conformal matter theory \cite{Heckman:2013pva,DelZotto:2014hpa,Heckman:2014qba,Heckman:2015bfa}. We focus on the self-dual strings which couple to the tensor multiplet in the 6d SCFT. The self-dual strings are instanton soliton strings in the 6d gauge theory, realized as D2-branes living on the D6-branes. The quiver diagram for the 2d $\mathcal{N}=(0,4)$ gauge theory living on the D2-branes is given in Figure~\ref{2dquiver}. Its SUSY and Lagrangian are studied in \cite{Kim:2014dza,Kim:2015fxa}.
The $O(n)$ vector multiplet and the symmetric hypermultiplet come from strings stretched between the D2-branes, with appropriate boundary conditions in the presence of the O8$^{-}$-plane. The hypermultiplets in the representation $(n,2N)$ come from D2-D6 strings, and the Fermi multiplets in the representation $(n,4N+16)$ come from D2-D8 strings and from D2-D6 strings across the NS5-brane. We circle compactify the theory along the $x^1$ direction.
\begin{figure}
\centering
\begin{tikzpicture}
\draw [blue,thick] (-0.05,-2) -- (-0.05,2);
\draw [dashed] (0,-2) -- (0,2);
\draw [blue,thick] (0.05,-2) -- (0.05,2);
\draw [thick] (-3.3,0.05) -- (3.3,0.05);
\draw [thick] (-3.3,0) -- (3.3,0);
\draw [thick] (-3.3,-0.05) -- (3.3,-0.05);
\draw [thick,red] (0,0.12) -- (2.5,0.12);
\draw [thick,red] (0,0.17) -- (2.5,0.17);
\filldraw [black!40,draw=black!100,thick] (2.5,0) circle (0.25cm);
\filldraw [black!40,draw=black!100,thick] (-2.5,0) circle (0.25cm);
\node at (1.1,-0.5) {$2N$ D6s};
\node at (1.1,0.5) [red] {$n$ D2s};
\node at (2.5,-0.5) [black] {NS5};
\node at (1.8,1.8) [blue] {O8$^-$ - 8 D8s};
\draw (6,2) --(6,-1.2);
\draw (4.3,1.3) --(12.3,1.3);
\node at (5.2,0.9) {D2};
\node at (5.2, 0.3) {NS5};
\node at (5.2, -0.3) {D6};
\node at (5.2,-0.9) {O8$^-$-D8};
\node at (6.5,1.7) {0}; \node at (7.1,1.7) {1}; \node at (7.7,1.7) {2}; \node at (8.3,1.7) {3}; \node at (8.9,1.7) {4}; \node at (9.5,1.7) {5}; \node at (10.1,1.7) {6}; \node at (10.7,1.7) {7}; \node at (11.3,1.7) {8}; \node at (11.9,1.7) {9};
\node at (6.5,0.9) {$\bullet$}; \node at (7.1,0.9) {$\bullet$}; \node at (7.7,0.9) {-}; \node at (8.3,0.9) {-}; \node at (8.9,0.9) {-}; \node at (9.5,0.9) {-}; \node at (10.1,0.9) {$\bullet$}; \node at (10.7,0.9) {-}; \node at (11.3,0.9) {-}; \node at (11.9,0.9) {-};
\node at (6.5,0.3) {$\bullet$}; \node at (7.1,0.3) {$\bullet$}; \node at (7.7,0.3) {$\bullet$}; \node at (8.3,0.3) {$\bullet$}; \node at (8.9,0.3) {$\bullet$}; \node at (9.5,0.3) {$\bullet$}; \node at (10.1,0.3) {-}; \node at (10.7,0.3) {-}; \node at (11.3,0.3) {-}; \node at (11.9,0.3) {-};
\node at (6.5,-0.3) {$\bullet$}; \node at (7.1,-0.3) {$\bullet$}; \node at (7.7,-0.3) {$\bullet$}; \node at (8.3,-0.3) {$\bullet$}; \node at (8.9,-0.3) {$\bullet$}; \node at (9.5,-0.3) {$\bullet$}; \node at (10.1,-0.3) {$\bullet$}; \node at (10.7,-0.3) {-}; \node at (11.3,-0.3) {-}; \node at (11.9,-0.3) {-};
\node at (6.5,-0.9) {$\bullet$}; \node at (7.1,-0.9) {$\bullet$}; \node at (7.7,-0.9) {$\bullet$}; \node at (8.3,-0.9) {$\bullet$}; \node at (8.9,-0.9) {$\bullet$}; \node at (9.5,-0.9) {$\bullet$}; \node at (10.1,-0.9) {-}; \node at (10.7,-0.9) {$\bullet$}; \node at (11.3,-0.9) {$\bullet$}; \node at (11.9,-0.9) {$\bullet$};
\end{tikzpicture}
\caption{The type IIA brane system for the 6d $\mathcal{N}=(1,0)$ $Sp(N)$ gauge theory with $N_f=2N+8$ fundamental hypermultiplets. $n$ D2-branes engineer $n$ self-dual strings. }
\label{6dbrane}
\end{figure}
\subsection{The elliptic genera of self-dual strings}
\begin{figure}
\centering
\begin{tikzpicture}
\draw [thick] (-3.1,-2) circle (0.6cm);
\filldraw [white!100!,draw=black!100,thick] (-2,-2) circle (0.8cm);
\draw [thick] (1,-2.8) rectangle (3.5,-1.2);
\draw [thick] (-2.8,-0.2) rectangle (-1.2,0.8);
\draw [thick,dashed] (-1.2,-2) -- (1,-2);
\draw [thick] (-2,-1.2) -- (-2, -0.2);
\node at (-2,-2) {$O(n)$};
\node at (-2.0,0.3) {$Sp(N)$};
\node at (2.25,-2) {$SO(4N+16)$};
\node at (-4.2,-2.4) {sym.};
\end{tikzpicture}
\caption{2d ADHM quiver diagram for the self-dual strings}
\label{2dquiver}
\end{figure}
We focus on the elliptic genera of the self-dual strings of the 6d $Sp(N)$ theories
\begin{align}
Z^{\textrm{6d},Sp(N)} = 1 + \sum_{n=1}^{\infty} w^n Z_n^{\textrm{6d},Sp(N)} \,,
\end{align}
where $w$ is the fugacity for the string winding number.
The elliptic genus of the 2d gauge theory on a torus is
\begin{align}
Z_n^{\textrm{6d},Sp(N)} = \textrm{Tr}_{RR} \left[ (-1)^F q^{2H_L} \bar{q}^{2H_R} e^{2\pi i \epsilon_1(J_1+J_R)} e^{2\pi i \epsilon_2(J_2+J_R)}
\prod_{i=1}^{N} e^{2\pi i \alpha_i G_i} \prod_{l=1}^{N_f=2N+8}e^{2\pi i m_l F_l} \right] \,.
\label{ellipticg}
\end{align}
$q \equiv e^{i \pi \tau}$ contains the complex structure $\tau$ of the torus.\footnote{We define $q \equiv e^{i \pi \tau}$ instead of the usual $q \equiv e^{2i \pi \tau}$, because the instanton fugacity in the 5d gauge theory corresponds to this definition of $q$.} $H_R \sim \{ Q , Q^{\dagger}\}$, where $Q,Q^{\dagger}$ are the $(0,2)$ supercharges of the theory. $J_1,J_2$ and $J_R$ are the Cartans of $SO(4)_{2345}$ and $SO(3)_{789} \sim SU(2)_R$. $G_i$ are the Cartans of the $Sp(N)$ gauge group of the 6d SCFT and $\alpha_i$ are the corresponding chemical potentials. $F_l$ are the Cartans of the $SO(4N+16)$ flavor symmetry and $m_l$ are the corresponding chemical potentials. The elliptic genus of $n$ E-strings is given by $Z_n^{\textrm{E-strings}} \equiv Z_n^{\textrm{6d},Sp(0)}$. \\
The elliptic genus of the 2d gauge theory \eqref{ellipticg} was studied in \cite{Benini:2013nda,Benini:2013xpa,Gadde:2013ftv}, and the E-string case (with $O(n)$ gauge group) was further studied in \cite{Kim:2014dza,Kim:2015fxa}. The elliptic genus is given by an integral over the $O(n)$ flat connections on $T^2$.
The $O(n)$ gauge group has two disconnected components $O(n)^{\pm}$, so the Wilson lines $U_1$, $U_2$ along the temporal and spatial circles each have two disconnected sectors. The discrete holonomy sectors for the $O(n)$ gauge group on $T^2$ are listed in section 3 of \cite{Kim:2014dza}. For generic $n$, the elliptic genus is given by a sum over 8 discrete sectors. The $n=1$ and $n=2$ cases are special: they are given by sums over 4 and 7 sectors, respectively.\\
The elliptic genus \eqref{ellipticg} is given by \cite{Benini:2013xpa,Kim:2014dza}
\begin{align}
Z_n^{\textrm{6d},Sp(N)} = \sum_{I} \frac{1}{|W_I|} \frac{1}{(2\pi i)^r} \oint Z_{\textrm{1-loop}}^{(I)} \,, \quad Z_{\textrm{1-loop}}^{(I)} \equiv Z^{(I)}_{\textrm{vec}}Z^{(I)}_{\textrm{sym.}}Z^{(I)}_{\textrm{Fermi}}Z^{(I)}_{\textrm{fund.}} \,.
\end{align}
The 1-loop determinants for the 2d multiplets are given by
\begin{align}
Z_{\textrm{vec}} \;&=\; \prod_{i=1}^{r} \left( \frac{2\pi \eta^2 du_i}{i} \cdot \frac{\theta_1(2\epsilon_+)}{i \eta} \right) \prod_{\alpha \in \textrm{root}} \frac{\theta_1(\alpha(u)) \theta_1(2\epsilon_+ + \alpha(u))}{i\eta^2} \,, \\
Z_{\textrm{sym hyper}} \;&=\; \prod_{\rho \in \textrm{sym}} \frac{i \eta}{\theta_1(\epsilon_1 + \rho(u))} \frac{i \eta}{\theta_1(\epsilon_2 + \rho(u))} \,, \\
Z^{SO(4N+16)}_{\textrm{Fermi}} \;&=\; \prod_{\rho \in \textrm{fund}} \prod_{l=1}^{2N+8}\frac{\theta_1(m_l + \rho(u))}{i\eta} \,, \\
Z^{Sp(N)}_{\textrm{fund hyper}} \;&=\; \prod_{\rho \in \textrm{fund}} \prod_{i=1}^{N} \frac{i \eta}{ \theta_1(\epsilon_+ + \rho(u) + \alpha_i) }\frac{i \eta}{ \theta_1(\epsilon_+ + \rho(u) - \alpha_i) } \,,
\end{align}
where $\epsilon_{\pm} \equiv \frac{\epsilon_1\pm\epsilon_2}{2}$ and $r$ is the rank of the gauge group $O(n)$. $\eta \equiv \eta(\tau)$ is the Dedekind eta function and $\theta_i(z) \equiv \theta_i(\tau,z)$ are the Jacobi theta functions.
`$I$' labels the disconnected holonomy sectors and $u_i$ are the zero modes of the 2d gauge fields along the torus. $|W_I|$ is the order of the Weyl group of $O(n)_I$ for each sector `$I$' \cite{Kim:2014dza}.
For later convenience, we will use the following fugacity notation: $t \equiv e^{2\pi i \epsilon_+}\,, \; u \equiv e^{2\pi i \epsilon_-}\,,\;v_i \equiv e^{2\pi i \alpha_i}\,, \; y_l\equiv e^{2\pi i m_l}$. The elliptic genus contains a contour integral over $u_i$, which is evaluated as a residue sum given by the Jeffrey-Kirwan residue (JK-residue) prescription \cite{Benini:2013nda,Benini:2013xpa}.\\
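For reference, the conventions we assume for these special functions (a standard choice, written with the nome $q=e^{i\pi\tau}$ used here, and $y \equiv e^{2\pi i z}$) are
\begin{align}
\eta(\tau) = q^{\frac{1}{12}} \prod_{n=1}^{\infty} \left(1-q^{2n}\right) \,, \quad
\theta_1(\tau,z) = -i q^{\frac{1}{4}} \left(y^{\frac{1}{2}}-y^{-\frac{1}{2}}\right) \prod_{n=1}^{\infty} \left(1-q^{2n}\right)\left(1-q^{2n}y\right)\left(1-q^{2n}y^{-1}\right) \,,
\end{align}
and $\theta_{2,3,4}$ are obtained from $\theta_1$ by the usual half-period shifts of $z$.\\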
The E-string elliptic genus has manifest $E_8$ global symmetry. One should turn on an $E_8$ Wilson line on a circle to obtain the 5d SYM description of E-string theory \cite{Kim:2014dza}.\footnote{This shift can be naturally understood by embedding the 6d SCFT into M-theory. Namely, to obtain the D4-D8-O8 system which realizes the 5d SYM description, one has to compactify the M5-M9 system on a circle with a Wilson line that breaks $E_8$ to $SO(16)$.} This background $E_8$ Wilson line provides the extra shift $m_8 \rightarrow m_8 - \tau$ of the chemical potential, which induces the following shift of the theta functions
\begin{align}
\theta_i(m_8) \rightarrow \pm\left( \frac{y_8}{q} \right) \theta_i(m_8) \,,
\end{align}
where we have the $(-)$ sign for $i=1,4$ and the $(+)$ sign for $i=2,3$. The overall factor $\frac{y_8}{q}$ can be absorbed by the redefinition of the string winding fugacity $w \rightarrow wqy_8^{-1}$ \cite{Kim:2014dza}. We shall observe later that the $E_8$ Wilson line effect continues to be crucial for the 6d $Sp(N)$ generalizations of the E-string theory.
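This sign pattern follows from the standard quasi-periodicity of the Jacobi theta functions under shifts of the argument by $\tau$. For instance, for $\theta_1$ one has
\begin{align}
\theta_1(m_8 - \tau) = -e^{-i\pi\tau + 2\pi i m_8}\, \theta_1(m_8) = -\left(\frac{y_8}{q}\right) \theta_1(m_8) \,,
\end{align}
and the analogous quasi-periodicities of $\theta_{2,3,4}$, which differ only in the overall sign, reproduce the $(+)$ signs for $i=2,3$ and the $(-)$ sign for $i=4$.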
\paragraph{One-string} With the effect of the $E_8$ Wilson line, the one-string elliptic genus is given by the sum of 4 discrete sectors
\begin{align}
Z_{n=1}^{\textrm{E-string}}
&= \frac{1}{2}\left( -Z_{1,[1]} +Z_{1,[2]} +Z_{1,[3]} -Z_{1,[4]} \right) \,,
\end{align}
where $Z_{1,[I]}$ for $I=1,2,3,4$ are given by
\begin{align}
Z_{1,[I]} =
-\frac{\eta^2}{\theta_1(\epsilon_1) \theta_1(\epsilon_2)} \prod_{l=1}^{8} \frac{\theta_I(m_l)}{\eta} \,.
\end{align}
In order to compare this result with the 5d instanton partition functions, we expand it in powers of $q$
\begin{align}
Z^{\textrm{E-string}}_{n=1} &= \frac{t}{(1-tu)(1-t/u)}\chi^{SO(16)}_{16}(y_i)q^0 + \frac{t}{(1-tu)(1-t/u)}\chi^{SO(16)}_{\overline{128}}(y_i)q^1 +\mathcal{O}(q^2) \,,
\end{align}
where $\chi^{SO(16)}_{\textrm{R}}$ denotes $SO(16)$ character of representation R.
\paragraph{Two-strings} At $n=2$, the elliptic genus is given by the sum of 7 sectors. We skip the details of the calculation here, because we shall see the calculation with the $Sp(N)$ generalization in Section \ref{sec:rank1}. We just report the $q$-expanded two-string result in the presence of the $E_8$ Wilson line
\begin{align}
Z^{\textrm{E-string}}_{n=2}
& = \left( -\frac{t\;(t+\frac{1}{t})}{(1-tu)(1-t/u)}+\frac{1}{2}\left( \left(\frac{t\;\chi^{SO(16)}_{16}(y_i)}{(1-tu)(1-t/u)}\right)^2+\left(\frac{t^2\;\chi^{SO(16)}_{16}(y_i^2)}{(1-t^2u^2)(1-t^2/u^2)}\right)\right)\right)q^0 \\
& \quad + \left( (\frac{t}{(1-tu)(1-t/u)})^2\chi^{SO(16)}_{16}(y_i)\chi^{SO(16)}_{\overline{128}}(y_i) - \frac{t\; (t+\frac{1}{t})}{(1-tu)(1-t/u)}\chi^{SO(16)}_{128}(y_i) \right) q +\mathcal{O}(q^2) \,.
\end{align}
\subsection{5d SYM and instanton partition functions}
\begin{figure}
\centering
\begin{tikzpicture}
\draw [thick] (-0.1,-0.1) -- (0.1,0.1);
\draw [thick] (0.1,-0.1) -- (-0.1,0.1);
\filldraw [black!40,draw=black!100,thick] (0.3,0.2) circle (0.07cm);
\filldraw [black!40,draw=black!100,thick] (0.3,-0.2) circle (0.07cm);
\filldraw [black!40,draw=black!100,thick] (-0.3,0.2) circle (0.07cm);
\filldraw [black!40,draw=black!100,thick] (-0.3,-0.2) circle (0.07cm);
\draw [thick] (-1.3,3.05) -- (-2.3,2.05) -- (-2.3,1.75) -- (-2.0,1.45) -- (-1.5,1.2) -- (-1,0.7) -- (1,0.7) -- (1.5,1.2) -- (2.0,1.45) -- (2.3,1.75) -- (2.3,2.05) -- (1.3,3.05);
\draw [thick] (-1.3,-3.05) -- (-2.3,-2.05) -- (-2.3,-1.75) -- (-2.0,-1.45) -- (-1.5,-1.2) -- (-1,-0.7) -- (1,-0.7) -- (1.5,-1.2) -- (2.0,-1.45) -- (2.3,-1.75) -- (2.3,-2.05) -- (1.3,-3.05);
\filldraw [blue!60,draw=blue!60,thick] (-1.3,3.05) circle (0.07cm);
\filldraw [blue!60,draw=blue!60,thick] (-1.3,-3.05) circle (0.07cm);
\filldraw [blue!60,draw=blue!60,thick] (1.3,3.05) circle (0.07cm);
\filldraw [blue!60,draw=blue!60,thick] (1.3,-3.05) circle (0.07cm);
\draw [thick] (-1,0.7) -- (-1,-0.7) ;
\draw [thick] (1,0.7) -- (1,-0.7) ;
\draw [thick] (1+0.5,0.7+0.5)--(-1-0.5,0.7+0.5);
\draw [thick] (1+0.5,-0.7-0.5)--(-1-0.5,-0.7-0.5);
\draw [thick] (2.0,1.45) -- (3.0,1.45);
\draw [thick] (-2.0,1.45) -- (-3.0,1.45) ;
\draw [thick] (2.0,-1.45) -- (3.0,-1.45) ;
\draw [thick] (-2.0,-1.45) -- (-3.0,-1.45) ;
\draw [thick] (2.3,1.75) -- (3,1.75) ;
\draw [thick] (-2.3,1.75) -- (-3,1.75) ;
\draw [thick] (2.3,-1.75) -- (3,-1.75) ;
\draw [thick] (-2.3,-1.75) -- (-3,-1.75) ;
\draw [thick] (2.3,2.05) -- (3,2.05) ;
\draw [thick] (-2.3,2.05) -- (-3,2.05) ;
\draw [thick] (2.3,-2.05) -- (3,-2.05) ;
\draw [thick] (-2.3,-2.05) -- (-3,-2.05) ;
\draw [thick,,red!100] (-1,0) -- (1,0) ;
\draw [thick,,red!100] (-1.65,2.7) -- (1.65,2.7) ;
\draw [thick,,red!100] (-1.65,-2.7) -- (1.65,-2.7) ;
\node [red!100] at (0.9,0.3) {$\frac{1}{2}$ D1};
\node [red!100] at (0.9,3) {D1};
\node at (0,-0.5) {O7-4 D7};
\draw [thick] [->] [red!100] (0,2.8) --(0,3.2);
\draw [thick] [->] [red!100] (0,-2.8) --(0,-3.2);
\draw (6,2) -- (6,-1.2);
\draw (4.3,1.3) --(12.3,1.3);
\node at (5.2,0.9) {D1};
\node at (5.2, 0.3) {NS5};
\node at (5.2, -0.3) {D5};
\node at (5.2,-0.9) {O7-D7};
\node at (6.5,1.7) {0}; \node at (7.1,1.7) {1}; \node at (7.7,1.7) {2}; \node at (8.3,1.7) {3}; \node at (8.9,1.7) {4}; \node at (9.5,1.7) {5}; \node at (10.1,1.7) {6}; \node at (10.7,1.7) {7}; \node at (11.3,1.7) {8}; \node at (11.9,1.7) {9};
\node at (6.5,0.9) {$\bullet$}; \node at (7.1,0.9) {-}; \node at (7.7,0.9) {-}; \node at (8.3,0.9) {-}; \node at (8.9,0.9) {-}; \node at (9.5,0.9) {-}; \node at (10.1,0.9) {$\bullet$}; \node at (10.7,0.9) {-}; \node at (11.3,0.9) {-}; \node at (11.9,0.9) {-};
\node at (6.5,0.3) {$\bullet$}; \node at (7.1,0.3) {$\bullet$}; \node at (7.7,0.3) {$\bullet$}; \node at (8.3,0.3) {$\bullet$}; \node at (8.9,0.3) {$\bullet$}; \node at (9.5,0.3) {$\bullet$}; \node at (10.1,0.3) {-}; \node at (10.7,0.3) {-}; \node at (11.3,0.3) {-}; \node at (11.9,0.3) {-};
\node at (6.5,-0.3) {$\bullet$}; \node at (7.1,-0.3) {$\bullet$}; \node at (7.7,-0.3) {$\bullet$}; \node at (8.3,-0.3) {$\bullet$}; \node at (8.9,-0.3) {$\bullet$}; \node at (9.5,-0.3) {-}; \node at (10.1,-0.3) {$\bullet$}; \node at (10.7,-0.3) {-}; \node at (11.3,-0.3) {-}; \node at (11.9,-0.3) {-};
\node at (6.5,-0.9) {$\bullet$}; \node at (7.1,-0.9) {$\bullet$}; \node at (7.7,-0.9) {$\bullet$}; \node at (8.3,-0.9) {$\bullet$}; \node at (8.9,-0.9) {$\bullet$}; \node at (9.5,-0.9) {-}; \node at (10.1,-0.9) {-}; \node at (10.7,-0.9) {$\bullet$}; \node at (11.3,-0.9) {$\bullet$}; \node at (11.9,-0.9) {$\bullet$};
\end{tikzpicture}
\caption{type IIB brane diagram for the 5d $\mathcal{N}=1$ $Sp(2)$ gauge theory with $N_f=10$ hypermultiplets. The figure shows the covering space of the $\mathbb{Z}_2$ quotient by the O7-plane (the cross in the figure). The blue dots denote 7-branes on which vertical 5-branes can end. A half D1-brane is stuck to the O7$^-$-plane. }
\label{5dbrane}
\end{figure}
Non-perturbative effects of the 5d gauge theory are essential for the duality. We first consider the general 5d $\mathcal{N}=1$ $Sp(N+1)$ gauge theories with $N_f=2N+8$ fundamental hypermultiplets. The type IIB brane diagram for the $N=1$ case is given in Figure~\ref{5dbrane}. Instantons are realized by the D1 branes living on the D5 branes.
One should carefully use the string theory engineered ADHM construction, since it contains unwanted extra degrees of freedom \cite{Hwang:2014uwa}. For example, Figure~\ref{5dbrane2} shows the brane diagram for the $Sp(N+1)$ gauge theory with $N_f=2N+6$ matters at $N=1$, which was considered in \cite{Seiberg:1996bd}. In this case, the D1 branes which can escape to infinity provide extra degrees of freedom. Their contribution to the instanton partition function can be computed separately. To obtain the correct instanton partition function, one should subtract this extra contribution from the ADHM quantum mechanical index. However, for the 5d $Sp(N+1)$ gauge theory with $N_f=2N+8$ matters, we do not know how to identify the contribution of the extra degrees of freedom to the index.\footnote{The $N_f=2N+6\,,2N+7$ cases are considered in \cite{Bergman:2015dpa}.} The extra states are supposed to be provided by the D1 branes moving vertically away from the D5 branes. We currently do not have technical control over such extra states.\\
\begin{figure}
\centering
\begin{tikzpicture}
\draw [thick] (-0.1,-0.1) -- (0.1,0.1);
\draw [thick] (0.1,-0.1) -- (-0.1,0.1);
\filldraw [black!40,draw=black!100,thick] (0.3,0.2) circle (0.07cm);
\filldraw [black!40,draw=black!100,thick] (0.3,-0.2) circle (0.07cm);
\filldraw [black!40,draw=black!100,thick] (-0.3,0.2) circle (0.07cm);
\filldraw [black!40,draw=black!100,thick] (-0.3,-0.2) circle (0.07cm);
\draw [thick] (-2.3,3) -- (-2.3,1.75) -- (-2.0,1.45) -- (-1.5,1.2) -- (-1,0.7) -- (1,0.7) -- (1.5,1.2) -- (2.0,1.45) -- (2.3,1.75) -- (2.3,3) ;
\draw [thick] (-2.3,-3) -- (-2.3,-1.75) -- (-2.0,-1.45) -- (-1.5,-1.2) -- (-1,-0.7) -- (1,-0.7) -- (1.5,-1.2) -- (2.0,-1.45) -- (2.3,-1.75) -- (2.3,-3) ;
\draw [thick] (-1,0.7) -- (-1,-0.7) ;
\draw [thick] (1,0.7) -- (1,-0.7) ;
\draw [thick] (1+0.5,0.7+0.5)--(-1-0.5,0.7+0.5);
\draw [thick] (1+0.5,-0.7-0.5)--(-1-0.5,-0.7-0.5);
\draw [thick] (2.0,1.45) -- (3.0,1.45);
\draw [thick] (-2.0,1.45) -- (-3.0,1.45) ;
\draw [thick] (2.0,-1.45) -- (3.0,-1.45) ;
\draw [thick] (-2.0,-1.45) -- (-3.0,-1.45) ;
\draw [thick] (2.3,1.75) -- (3,1.75) ;
\draw [thick] (-2.3,1.75) -- (-3,1.75) ;
\draw [thick] (2.3,-1.75) -- (3,-1.75) ;
\draw [thick] (-2.3,-1.75) -- (-3,-1.75) ;
\draw [thick,,red!100] (-1,0) -- (1,0) ;
\draw [thick,,red!100] (-2.3,2.7) -- (2.3,2.7) ;
\draw [thick,,red!100] (-2.3,-2.7) -- (2.3,-2.7) ;
\node [red!100] at (0.9,0.3) {$\frac{1}{2}$ D1};
\node [red!100] at (0.9,3) {D1};
\node at (0,-0.5) {O7-4 D7};
\draw [thick] [->] [red!100] (0,2.8) --(0,3.2);
\draw [thick] [->] [red!100] (0,-2.8) --(0,-3.2);
\end{tikzpicture}
\caption{type IIB brane diagram for the 5d $\mathcal{N}=1$ $Sp(2)$ gauge theory with $N_f=8$ hypermultiplets.}
\label{5dbrane2}
\end{figure}
However, the one-instanton sector is special, because this sector is realized by the half D1-brane stuck to the O7$^-$ plane. The half D1-brane cannot escape to infinity, so it does not contain any extra degrees of freedom. For this reason, we expect that one can study the one-instanton sector of the general 5d $Sp(N+1)$ gauge theories with $N_f=2N+8$ fundamental hypermultiplets using the ADHM description.\\
The 5d index consists of the perturbative part and the instanton part, $ Z^{\textrm{5d}} = Z^{\textrm{5d}}_{\textrm{pert}} Z^{\textrm{5d}}_{\textrm{inst}}$. The 5d instanton partition functions for the $Sp(N+1)$ gauge group with matters are well studied in \cite{Kim:2012gu,Hwang:2014uwa}. As we explained above, naive instanton partition functions can contain unwanted degrees of freedom, so one should factor out this contribution:
\begin{align}
Z_{\textrm{inst}}=\frac{Z_{\textrm{ADHM}}}{Z_{\textrm{extra}}} =1+\sum_{k=1}^{\infty}q^k Z_k^{\textrm{5d},Sp(N+1)} \,,
\end{align}
where $q$ is the instanton fugacity and $k$ is the instanton number. There is no $Z_{\textrm{extra}}$ factor for the one-instanton sector. $Z_k^{\textrm{5d},Sp(N+1)}$ is given by
\begin{align}
Z_k^{\textrm{5d},Sp(N+1)} = \textrm{Tr} \left[ (-1)^F e^{-\beta \{Q,Q^{\dagger}\}} e^{-\epsilon_1 (J_1+J_R)} e^{-\epsilon_2 (J_2 + J_R)} e^{-\alpha_i G_i} e^{-m_l F_l}\right] \,.
\label{ADHM}
\end{align}
$Q,Q^{\dagger}$ are two of the $(0,4)$ supercharges of the ADHM QM system \cite{Hwang:2014uwa}. $J_1$ and $J_2$ are the Cartan generators of the $SO(4)$ rotating $\mathbb{R}^4$. $J_R$ is the Cartan of the $SU(2)_R$ R-symmetry. $G_i$ and $F_l$ are the Cartans of the $Sp(N+1)$ gauge group and the $SO(4N+16)$ flavor symmetry group, and their conjugate chemical potentials are $\alpha_i$ and $m_l$. We will use the following fugacity conventions: $t = e^{-\epsilon_+},\; u = e^{-\epsilon_-},\; v_i= e^{-\alpha_i}$ and $y_l = e^{-m_l}$. \\
$Z_k^{\textrm{5d},Sp(N+1)}$ is given by the sum of $Z_k^{\pm}$, because the dual ADHM gauge group $O(k)$ has two disconnected sectors $O(k)^{\pm}$:\footnote{Actually, the $Sp(N+1)$ gauge theory has a $\mathbb{Z}_2$-valued $\theta$ angle because $\pi_4(Sp(N+1)) = \mathbb{Z}_2$, so its index is given by
\begin{displaymath}
Z_k^{\textrm{5d},Sp(N+1)} = \left\{
\begin{array}{ll}
\frac{1}{2} (Z_k^+ + Z_k^-) & \,, \; \theta=0 \\
\frac{(-1)^k}{2} (Z_k^+ - Z_k^-) & \,, \; \theta=\pi
\end{array} \right. \;.
\end{displaymath}
But in our case, $\theta$ is not important. Its effect can be absorbed by a redefinition of the flavor chemical potential.}
\begin{align}
Z_k^{\textrm{5d},Sp(N+1)} = \frac{1}{2} (Z_k^+ + Z_k^-) \,.
\label{5dinst}
\end{align}
Setting $k=2n+\chi$ where $\chi=0$ or $1$, $Z_k^{\pm}$ is given by
\begin{align}
Z_k^{\pm} = \frac{1}{|W|} \oint \prod_{I=1}^{n} \frac{d\phi_I}{2\pi i}Z_{\textrm{vec}}^{\pm}(\phi,\alpha_j ; \epsilon_{1,2}) \prod_{l} Z_{R_l}^{\pm}(\phi, \alpha_j, m_l ; \epsilon_{1,2}) \,,
\end{align}
where the Weyl group factor $|W|$ is given by
\begin{align}
|W|^{\chi=0}_+ = 2^{n-1} n! \,, \;|W|^{\chi=1}_+ = 2^{n} n! \,, \; |W|^{\chi=0}_- = 2^{n-1} (n-1)! \,, \; |W|^{\chi=1}_- = 2^{n} n! \,.
\end{align}
$R_l$ denotes the representation of the hypermultiplet matters. See \cite{Hwang:2014uwa} for the details.
The vector multiplet part for the $O(k)^{+}$ sector is given by
\begin{align}
Z_\textrm{vec}^{+}
&= \left[
\frac{1}{ 2\sinh\frac{\pm\epsilon_- + \epsilon_+}{2} \prod_{i=1}^{N+1} 2\sinh\frac{\pm\alpha_i +\epsilon_+}{2}} \prod_{I=1}^{n} \frac{2\sinh\frac{\pm\phi_I}{2} 2\sinh\frac{\pm\phi_I + 2 \epsilon_+}{2} }{2\sinh\frac{\pm\phi_I \pm\epsilon_-+\epsilon_+}{2}}
\right]^{\chi} \nonumber \\
& \quad \times \prod_{I=1}^{n} \frac{2\sinh\epsilon_+}{2\sinh\frac{\pm\epsilon_-+\epsilon_+}{2} \prod_{i=1}^{N+1} 2\sinh\frac{\pm\phi_I \pm\alpha_i +\epsilon_+}{2}}
\frac{\prod_{I>J}^n 2\sinh\frac{\pm\phi_I\pm\phi_J}{2} 2\sinh\frac{\pm\phi_I\pm\phi_J+2\epsilon_+}{2}}{\prod_{I=1}^n 2\sinh\frac{\pm2\phi_I\pm\epsilon_-+\epsilon_+}{2} \prod_{I>J}^n 2\sinh\frac{\pm\phi_I\pm\phi_J\pm\epsilon_-+\epsilon_+}{2}} \,,
\end{align}
and that for the $O(k)^{-}$ sector is given by
\begin{align}
Z_\textrm{vec}^{-}
&=
\frac{1}{ 2\sinh\frac{\pm\epsilon_- + \epsilon_+}{2} \prod_{i=1}^{N+1} 2\cosh\frac{\pm\alpha_i +\epsilon_+}{2}} \prod_{I=1}^{n} \frac{2\cosh\frac{\pm\phi_I}{2} 2\cosh\frac{\pm\phi_I + 2 \epsilon_+}{2} }{2\cosh\frac{\pm\phi_I \pm\epsilon_-+\epsilon_+}{2}}
\nonumber \\
& \quad \times \prod_{I=1}^{n} \frac{2\sinh\epsilon_+}{2\sinh\frac{\pm\epsilon_-+\epsilon_+}{2} \prod_{i=1}^{N+1} 2\sinh\frac{\pm\phi_I \pm\alpha_i +\epsilon_+}{2}}
\frac{\prod_{I>J}^n 2\sinh\frac{\pm\phi_I\pm\phi_J}{2} 2\sinh\frac{\pm\phi_I\pm\phi_J+2\epsilon_+}{2}}{\prod_{I=1}^n 2\sinh\frac{\pm2\phi_I\pm\epsilon_-+\epsilon_+}{2} \prod_{I>J}^n 2\sinh\frac{\pm\phi_I\pm\phi_J\pm\epsilon_-+\epsilon_+}{2}} \,,
\end{align}
for $\chi=1$ and
\begin{align}
Z_\textrm{vec}^{-}
&=
\frac{2\cosh \epsilon_+}{ 2\sinh\frac{\pm\epsilon_- + \epsilon_+}{2} 2\sinh(\pm \epsilon_-+\epsilon_+) \prod_{i=1}^{N+1} 2\sinh(\pm\alpha_i +\epsilon_+)} \prod_{I=1}^{n-1} \frac{ 2\sinh(\pm\phi_I) 2\sinh(\pm\phi_I + 2 \epsilon_+) }{2\sinh(\pm\phi_I \pm\epsilon_-+\epsilon_+)}
\nonumber \\
& \quad \times \prod_{I=1}^{n} \frac{2\sinh\epsilon_+}{2\sinh\frac{\pm\epsilon_-+\epsilon_+}{2} \prod_{i=1}^{N+1} 2\sinh\frac{\pm\phi_I \pm\alpha_i +\epsilon_+}{2}}
\frac{\prod_{I>J}^n 2\sinh\frac{\pm\phi_I\pm\phi_J}{2} 2\sinh\frac{\pm\phi_I\pm\phi_J+2\epsilon_+}{2}}{\prod_{I=1}^n 2\sinh\frac{\pm2\phi_I\pm\epsilon_-+\epsilon_+}{2} \prod_{I>J}^n 2\sinh\frac{\pm\phi_I\pm\phi_J\pm\epsilon_-+\epsilon_+}{2}} \,,
\end{align}
for $\chi=0$. Here and below, repeated $\pm$ signs in the arguments of the $\sinh$ functions mean that all sign combinations are multiplied.
For instance, \begin{align}
2\sinh(\pm a \pm b+c) \equiv 2\sinh(a+b+c)2\sinh(a-b+c)2\sinh(-a+b+c)2\sinh(-a-b+c) \,.
\end{align}
The index contribution of a fundamental hypermultiplet of mass $m$ to the $O(k)^{+}$ sector is given by
\begin{align}
Z_{\textrm{fund}}^{+} = \left(2\sinh\frac{m}{2} \right) \prod_{I=1}^n 2\sinh\frac{\pm\phi_I+m}{2} \,,
\end{align}
and to the $O(k)^{-}$ sector it is given by
\begin{align}
Z_{\textrm{fund}}^{-} = 2\cosh\frac{m}{2} \prod_{I=1}^n 2\sinh\frac{\pm\phi_I+m}{2} \,,
\end{align}
for $\chi=1$, and
\begin{align}
Z_{\textrm{fund}}^{-} = 2\sinh\frac{m}{2}\prod_{I=1}^{n-1} 2\sinh\frac{\pm\phi_I+m}{2} \,,
\end{align}
for $\chi=0$.\\
\paragraph{One-instanton: $N=0$} One can see that there is no contour integral for the one-instanton sector. The one-instanton partition function for the $Sp(1)$ gauge group with 8 fundamental matters is given by the sum of $Z_{1}^{\pm}$
\begin{align}
Z^{\textrm{5d},Sp(1)}_{k=1} & = \frac{1}{2} \left( \frac{1}{ 2\sinh\frac{\pm\epsilon_- + \epsilon_+}{2} 2\sinh\frac{\pm\alpha+\epsilon_+}{2}} \prod_{l=1}^{8} 2\sinh\frac{m_l}{2}
+ \frac{1}{ 2\sinh\frac{\pm\epsilon_- + \epsilon_+}{2} 2\cosh\frac{\pm\alpha+\epsilon_+}{2}} \prod_{l=1}^{8} 2\cosh\frac{m_l}{2} \right) \nonumber \\
&= \frac{t}{(1-tu)(1-t/u)}\frac{t^2}{(1-t^2v^2)(1-t^2/v^2)} \Big[
(t+\frac{1}{t})\chi_{128}^{SO(16)} - (v+\frac{1}{v}) \chi_{\overline{128}}^{SO(16)}
\Big] \,.
\label{5dinstresult}
\end{align}
It exhibits the $SO(16)$ global symmetry.
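The passage from the $\sinh$ form to the fugacity form in \eqref{5dinstresult} uses identities such as
\begin{align}
\frac{1}{2\sinh\frac{\pm\epsilon_-+\epsilon_+}{2}} = \frac{1}{2\sinh\frac{\epsilon_1}{2}\, 2\sinh\frac{\epsilon_2}{2}} = \frac{1}{(t+\frac{1}{t})-(u+\frac{1}{u})} = \frac{t}{(1-tu)(1-t/u)} \,,
\end{align}
which follow by writing $\epsilon_{1,2} = \epsilon_+ \pm \epsilon_-$ and $t=e^{-\epsilon_+}$, $u=e^{-\epsilon_-}$; the $SO(16)$ spinor characters then arise from expanding the products $\prod_{l=1}^{8} 2\cosh\frac{m_l}{2}$ and $\prod_{l=1}^{8} 2\sinh\frac{m_l}{2}$ in the $y_l$.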
\paragraph{Perturbative part: $N=0$} To obtain the full 6d degrees of freedom, we must include the perturbative partition function
\begin{align}
Z^{\textrm{5d},Sp(1)}_{\textrm{pert}}
&= \textrm{PE}[ \frac{t}{(1-t u)(1-t/u)}\left(-(t+\frac{1}{t}) \chi^{Sp(1)}_{\textrm{adj},+}+ \chi^{Sp(1)}_{\textrm{fund},+} \chi_{16}^{SO(16)}(y_i)\right)] \nonumber \\
&= \textrm{PE}[ \frac{t}{(1-t u)(1-t/u)}\left(-(t+\frac{1}{t})v^2 + v \chi_{16}^{SO(16)}(y_i)\right)] \,,
\label{5dpertresult}
\end{align}
where $\chi^{Sp(1)}_{\textrm{R},+}$ denotes the $Sp(1)$ character of the representation R, but only sums over positive weights. This is because our index acquires contribution only from quarks, W-bosons, and their superpartners, but not from anti-quarks or anti-W-bosons. We will use this notation throughout the paper.
The plethystic exponential of $f(x)$ is defined by
\begin{align}
\textrm{PE}[f(x)] \equiv \exp\left( \sum_{n=1}^{\infty} \frac{1}{n} f(x^n)\right)
\label{pe}
\end{align}
where $x$ collectively denotes all the fugacities.
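For example, a single bosonic letter resums into the expected multi-particle (geometric) factor,
\begin{align}
\textrm{PE}[x] = \exp\left( \sum_{n=1}^{\infty} \frac{x^n}{n} \right) = e^{-\log(1-x)} = \frac{1}{1-x} = 1 + x + x^2 + \cdots \,,
\end{align}
and the additivity property $\textrm{PE}[f+g]=\textrm{PE}[f]\,\textrm{PE}[g]$ builds the full multi-letter index \eqref{5dpertresult} from the single-letter one.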
If we expand the 5d index $Z^{\textrm{5d}}=Z_{\textrm{pert}}^{\textrm{5d}}Z_{\textrm{inst}}^{\textrm{5d}}$ in terms of the $Sp(1)$ fugacity $v$, it exactly agrees with the E-string elliptic genera, in the sense of a double expansion in the instanton fugacity $q$ and the string winding fugacity $w$.\footnote{In \cite{Kim:2014dza}, the 5d-6d relation was shown exactly up to the five-instanton and two-string order.} The 5d Coulomb VEV fugacity $v$ is identified with the 6d string winding number fugacity $w$, and the instanton fugacity $q$ becomes the string momentum fugacity $q$. Keeping in mind the 5d-6d fugacity relations and the $E_8$ Wilson line effect, we will study in the next two sections the 6d $Sp(N)$ gauge theories and their 5d $Sp(N+1)$ gauge theory descriptions.
\section{6d SCFT with $Sp(1)$ gauge symmetry}
\label{sec:rank1}
In this section, we study the circle compactified 6d SCFT with $Sp(1)$ gauge symmetry and its 5d $Sp(2)$ gauge theory description. Both theories have $N_f=10$ fundamental hypermultiplets. We confirm the duality by comparing the 5d instanton partition function and the elliptic genera of the self-dual strings in the 6d theory. The elliptic genera for the 6d $Sp(1)$ gauge theory were partially studied in \cite{Kim:2015fxa}. The main difference between \cite{Kim:2015fxa} and our computation is the presence of the $E_8$ Wilson line. The 6d theory can be Higgsed to the E-string theory. So for the duality to hold, one has to turn on a background $SO(20)$ Wilson line which reduces to the $E_8$ Wilson line after Higgsing. A natural guess is that the $SO(20)$ Wilson line will induce a shift $y_8\rightarrow y_8q^{-2}$, while leaving the other $y_l$ unchanged. Indeed, we will show that the 5d and 6d indices agree with each other after this shift.
\subsection{6d index}
To study the full structure of the 6d index, we include not only the instanton soliton string (or self-dual string) part but also the 6d perturbative part, $Z^{\textrm{6d}} = Z^{\textrm{6d}}_{\textrm{pert}}Z^{\textrm{6d}}_{\textrm{s.d}}$. The elliptic genus for the self-dual strings is given by
\begin{align}
Z^{\textrm{6d},Sp(1)}_{\textrm{s.d}} = 1 + \sum_{n=1}^{\infty} w^n Z_n^{\textrm{6d},Sp(1)}\,,
\end{align}
where $Z_n^{\textrm{6d},Sp(1)}$ is given in \eqref{ellipticg}.
The matter contents of the 2d gauge theory description for the self-dual strings are given in Figure~\ref{2dquiver}. We are considering the $N=1$ case, so there is an additional fundamental hypermultiplet contribution compared to the E-string theory. To compare the 6d index with the 5d index, we will eventually study the $q$-expanded form of the elliptic genera.
\paragraph{One-string}
The one-string elliptic genus is similar to the E-string case:
\begin{align}
Z_{n=1}^{\textrm{6d},Sp(1)}
&= \frac{1}{2}\left( -Z_{1,[1]} +Z_{1,[2]} +Z_{1,[3]} -Z_{1,[4]} \right) \,,
\end{align}
where $Z_{1,[I]}$ are given by
\begin{align}
Z_{1,[I]} =
- \frac{\eta^2}{\theta_1(\epsilon_1) \theta_1(\epsilon_2)} \cdot \prod_{l=1}^{10} \frac{\theta_I(m_l)}{\eta} \cdot \frac{\eta^2}{\theta_I(\epsilon_+ \pm \alpha)} \,,
\end{align}
again after redefining the string winding fugacity $w \rightarrow wqy_8^{-1}$.
The $q$ expansion of this index is given by
\begin{align}
Z_{n=1}^{\textrm{6d},Sp(1)}& = q^0 \frac{t}{(1-tu)(1-t/u)}\left( \chi^{SO(20)}_{20}(y_i) -(v+\frac{1}{v})(t+\frac{1}{t}) \right) \nonumber \\
&\quad + q^1 \frac{t}{(1-tu)(1-t/u)}\frac{t^2}{(1-t^2v^2)(1-t^2/v^2)} \left( (t+\frac{1}{t}) \chi^{SO(20)}_{\overline{512}}(y_i) - (v+\frac{1}{v}) \chi^{SO(20)}_{512}(y_i) \right) +\mathcal{O}(q^2) \nonumber \\
& \equiv q^0 f_1(t,u,v,y_i) + q^1 \; Z_{1}^{\textrm{inst}}+\mathcal{O}(q^2) \,,
\label{6dsp11}
\end{align}
where $f_1$ and $Z_{1}^{\textrm{inst}}$ are defined by
\begin{align}
f_1(t,u,v,y_i) &= \frac{t}{(1-tu)(1-t/u)}\left( \chi^{SO(20)}_{20}(y_i) -(v+\frac{1}{v})(t+\frac{1}{t}) \right) \,, \\
Z_{1}^{\textrm{inst}} &= \frac{t}{(1-tu)(1-t/u)}\frac{t^2}{(1-t^2v^2)(1-t^2/v^2)} \left( (t+\frac{1}{t}) \chi^{SO(20)}_{\overline{512}}(y_i) - (v+\frac{1}{v}) \chi^{SO(20)}_{512}(y_i) \right) \,.
\end{align}
\paragraph{Two-strings} Two-string elliptic genus is given by the sum of 7 discrete sectors
\begin{align}
\label{2wz1}
Z_{n=2}^{\textrm{6d},Sp(1)} = \frac{1}{2}Z_{2,[0]} +\frac{1}{4}\left( Z_{2,[1]} +Z_{2,[2]} +Z_{2,[3]} +Z_{2,[4]} +Z_{2,[5]} +Z_{2,[6]} \right) \,,
\end{align}
where $Z_{2,[I]}$ are given by
\begin{align}
Z_{2,[0]} &= \oint \eta^2 du \frac{\theta_1(2\epsilon_+)}{i \eta} \cdot \frac{\eta^6}{\theta_1(\epsilon_1)\theta_1(\epsilon_2)\theta_1(\epsilon_1 \pm 2u)\theta_1(\epsilon_2 \pm 2u)}
\cdot \prod_{l=1}^{10} \frac{\theta_1(m_l \pm u)}{\eta} \cdot \frac{\eta^4}{\theta_1(\epsilon_+ \pm \alpha \pm u)} \,, \nonumber\\
Z_{2,[I]} &= \frac{\theta_1(a_v) \theta_1( 2\epsilon_+ +a_v)}{\eta^2} \cdot \frac{\eta^6}{\theta_1(\epsilon_1 +a_v)\theta_1(\epsilon_2+a_v)\theta_1(\epsilon_1+2a_{\pm})\theta_1(\epsilon_2+2a_{\pm})} \nonumber \\
&\quad \cdot \prod_{l=1}^{10} \frac{\theta_1(m_l+a_+) \theta_1(m_l+a_-)}{\eta^2} \cdot \frac{\eta^4}{\theta_1(\epsilon_+ \pm \alpha +a_+)\theta_1(\epsilon_+ \pm \alpha +a_-)} \,, \; \textrm{for} \; I=1, \dots, 6.
\end{align}
Here $a_+,a_-,a_v(=a_++a_-)$ are given for $I=1,\dots,6$ by
\begin{align}
[I=1] : (a_+,a_-) &= (0,\frac{1}{2}) \,, & [I=2] &: (a_+,a_-) =(\frac{\tau}{2},\frac{1+\tau}{2}) \,, \nonumber \\
[I=3] : (a_+,a_-) &= (0,\frac{\tau}{2}) \,, & [I=4] &: (a_+,a_-) =(\frac{1}{2},\frac{1+\tau}{2}) \,, \\
[I=5] : (a_+,a_-) &= (0,\frac{1+\tau}{2}) \,, & [I=6] &: (a_+,a_-) =(\frac{1}{2},\frac{\tau}{2}) \,. \nonumber
\end{align}
$Z_{2,[0]}$ has a contour integral given by the JK-residue \cite{Benini:2013nda,Benini:2013xpa}. The JK-residue prescription requires summing over the residues at $u=-\frac{\epsilon_{1,2}}{2},\; -\frac{\epsilon_{1,2}}{2}+\frac{1}{2},\; -\frac{\epsilon_{1,2}}{2}+\frac{\tau}{2},-\frac{\epsilon_{1,2}}{2}+\frac{1+\tau}{2}$ from the symmetric hypermultiplet and at $u=-\epsilon_+ \pm \alpha$ from the fundamental hypermultiplet. The $SO(20)$ Wilson line shift changes the sign of $Z_{2,[I=1,2,5,6]}$
\begin{align}
\label{6dsp12}
Z_{n=2}^{\textrm{6d},Sp(1)}= \frac{1}{2}Z_{2,[0]} +\frac{1}{4}\left( -Z_{2,[1]} -Z_{2,[2]} +Z_{2,[3]} +Z_{2,[4]} -Z_{2,[5]} -Z_{2,[6]} \right) \,,
\end{align}
again after redefining the string winding fugacity $w \rightarrow wqy_8^{-1}$.
Evaluating these sectors explicitly, the $Z_{2,[I]}$ are given by
\begin{align}
Z_{2,[0]}
&= \frac{1}{2}\frac{1}{\eta^{12}\theta_1(\epsilon_1)\theta_1(\epsilon_2)} \Bigg[ \sum_{i=1}^{4}\left( \frac{\prod_{l=1}^{10}\theta_i(m_l \pm\frac{\epsilon_1}{2})}{\theta_1(2\epsilon_1)\theta_1(\epsilon_2-\epsilon_1) \theta_i(\epsilon_+ \pm\alpha \pm\frac{\epsilon_1}{2} )}+(\epsilon_1 \rightarrow \epsilon_2)\right)\nonumber \\
&+ \left( \frac{\prod_{l=1}^{10} \theta_1(m_l \pm (\epsilon_+ +\alpha))}{\theta_1(\epsilon_1 \pm(\epsilon_+ + \alpha))\theta_1(\epsilon_2 \pm (\epsilon_+ + \alpha)) \theta_1(-2\alpha)\theta_1(2\epsilon_++2\alpha)} +(\alpha \rightarrow -\alpha)\right) \Bigg] \,,
\end{align}
\begin{align}
Z_{2,[1]} &= \frac{\theta_2(0)\theta_2(2\epsilon_+)\prod_{l=1}^{10}\theta_1(m_l)\theta_2(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_2(\epsilon_1)\theta_2(\epsilon_2)\theta_1(\epsilon_+\pm\alpha)\theta_2(\epsilon_+\pm\alpha)} \,, \; \nonumber \\
Z_{2,[2]} &= \frac{\theta_2(0)\theta_2(2\epsilon_+)\prod_{l=1}^{10}\theta_3(m_l)\theta_4(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_2(\epsilon_1)\theta_2(\epsilon_2)\theta_3(\epsilon_+\pm\alpha)\theta_4(\epsilon_+\pm\alpha)} \,, \; \nonumber \\
Z_{2,[3]} &= \frac{\theta_4(0)\theta_4(2\epsilon_+)\prod_{l=1}^{10}\theta_1(m_l)\theta_4(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_4(\epsilon_1)\theta_4(\epsilon_2)\theta_1(\epsilon_+\pm\alpha)\theta_4(\epsilon_+\pm\alpha)} \,, \; \nonumber \\
Z_{2,[4]} &= \frac{\theta_4(0)\theta_4(2\epsilon_+)\prod_{l=1}^{10}\theta_2(m_l)\theta_3(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_4(\epsilon_1)\theta_4(\epsilon_2)\theta_2(\epsilon_+\pm\alpha)\theta_3(\epsilon_+\pm\alpha)} \,, \; \nonumber \\
Z_{2,[5]} &= \frac{\theta_3(0)\theta_3(2\epsilon_+)\prod_{l=1}^{10}\theta_1(m_l)\theta_3(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_3(\epsilon_1)\theta_3(\epsilon_2)\theta_1(\epsilon_+\pm\alpha)\theta_3(\epsilon_+\pm\alpha)} \,, \; \nonumber \\
Z_{2,[6]} &= \frac{\theta_3(0)\theta_3(2\epsilon_+)\prod_{l=1}^{10}\theta_2(m_l)\theta_4(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_3(\epsilon_1)\theta_3(\epsilon_2)\theta_2(\epsilon_+\pm\alpha)\theta_4(\epsilon_+\pm\alpha)} \,.
\end{align}
Finally, the $q$-expanded form of the two-string elliptic genus \eqref{6dsp12} is
\begin{align}
Z_{n=2}^{\textrm{6d},Sp(1)}
& = q^0\Bigg[
-\frac{t(t+\frac{1}{t})}{(1-tu)(1-t/u)} \nonumber \\
&+\frac{1}{2} \left(\frac{t\left( \chi^{SO(20)}_{20}(y_i) -(v+\frac{1}{v})(t+\frac{1}{t}) \right)}{(1-tu)(1-t/u)}\right)^2
+ \frac{1}{2}\left(\frac{t^2\left( \chi^{SO(20)}_{20}(y_i^2) -(v^2+\frac{1}{v^2})(t^2+\frac{1}{t^2}) \right)}{(1-t^2u^2)(1-t^2/u^2)}\right)
\Bigg] \nonumber \\
&+q^1 \Bigg[
\frac{t}{(1-tu)(1-t/u)}\frac{t^2}{(1-t^2v^2)(1-t^2/v^2)} \times
\Bigg(
(t+\frac{1}{t})(v+\frac{1}{v}) \chi^{SO(20)}_{\overline{512}}(y_i)
- (t+\frac{1}{t})^2 \chi^{SO(20)}_{512}(y_i) \nonumber \\
&+ \frac{t}{(1-tu)(1-t/u)}\left( \chi^{SO(20)}_{20}(y_i) -(v+\frac{1}{v})(t+\frac{1}{t}) \right) \times
\left( (t+\frac{1}{t}) \chi^{SO(20)}_{\overline{512}}(y_i) - (v+\frac{1}{v}) \chi^{SO(20)}_{512}(y_i) \right)
\Bigg)
\Bigg] \nonumber \\
&+\mathcal{O}(q^2) \\
& \equiv q^0 \left( f_2(t,u,v,y_i) + \frac{1}{2}\left( f_1(t,u,v,y_i)^2 + f_1(t^2,u^2,v^2,y_i^2) \right)\right) + q^1 \left( Z_{2}^{\textrm{inst}} + f_1(t,u,v,y_i) Z_{1}^{\textrm{inst}} \right) + \mathcal{O}(q^2) \,,
\end{align}
where $f_2(t,u,v,y_i)$ and $Z_{2}^{\textrm{inst}}$ are defined by
\begin{align}
f_2(t,u,v,y_i) &= -\frac{t(t+\frac{1}{t})}{(1-tu)(1-t/u)} \,, \\
Z_{2}^{\textrm{inst}} &= \frac{t}{(1-tu)(1-t/u)}\frac{t^2}{(1-t^2v^2)(1-t^2/v^2)}
\left( (t+\frac{1}{t})(v+\frac{1}{v}) \chi^{SO(20)}_{\overline{512}}(y_i) - (t+\frac{1}{t})^2 \chi^{SO(20)}_{512}(y_i) \right) \,.
\end{align}
\paragraph{Three-strings} The three-string elliptic genus is given by the sum of 8 discrete sectors:
\begin{align}
Z_{3,[1]}
& = -\oint \eta^2 du \frac{\theta_1(2\epsilon_+) \theta_1(2\epsilon_+\pm u) \theta_1(\pm u)}{i\eta^5}
\cdot \frac{\eta^{12}}{\theta_1(\epsilon_{1,2})^2 \theta_1(\epsilon_{1,2} \pm u) \theta_1(\epsilon_{1,2} \pm 2u)} \nonumber \\
&\quad \cdot \prod_{l=1}^{10} \frac{\theta_1(m_l) \theta_1(m_l \pm u) }{\eta^3}
\cdot \frac{\eta^6}{\theta_1(\epsilon_+\pm \alpha)\theta_1(\epsilon_+\pm \alpha \pm u)} \,, \\
Z_{3,[2]}
& = - \oint \eta^2 du \frac{\theta_1(2\epsilon_+) \theta_2(2\epsilon_+\pm u) \theta_2(\pm u)}{i\eta^5}
\cdot \frac{\eta^{12}}{\theta_1(\epsilon_{1,2})^2 \theta_2(\epsilon_{1,2} \pm u) \theta_1(\epsilon_{1,2} \pm 2u)} \nonumber \\
&\quad \cdot \prod_{l=1}^{10} \frac{\theta_2(m_l) \theta_1(m_l \pm u) }{\eta^3}
\cdot \frac{\eta^6}{\theta_2(\epsilon_+\pm \alpha)\theta_1(\epsilon_+\pm \alpha \pm u)} \,, \\
Z_{3,[3]}
& = - \oint \eta^2 du \frac{\theta_1(2\epsilon_+) \theta_3(2\epsilon_+\pm u) \theta_3(\pm u)}{i\eta^5}
\cdot \frac{\eta^{12}}{\theta_1(\epsilon_{1,2})^2 \theta_3(\epsilon_{1,2} \pm u) \theta_1(\epsilon_{1,2} \pm 2u)} \nonumber \\
&\quad \cdot \prod_{l=1}^{10} \frac{\theta_3(m_l) \theta_1(m_l \pm u) }{\eta^3}
\cdot \frac{\eta^6}{\theta_3(\epsilon_+\pm \alpha)\theta_1(\epsilon_+\pm \alpha \pm u)} \,, \\
Z_{3,[4]}
& = - \oint \eta^2 du \frac{\theta_1(2\epsilon_+) \theta_4(2\epsilon_+\pm u) \theta_4(\pm u)}{i\eta^5}
\cdot \frac{\eta^{12}}{\theta_1(\epsilon_{1,2})^2 \theta_4(\epsilon_{1,2} \pm u) \theta_1(\epsilon_{1,2} \pm 2u)} \nonumber \\
&\quad \cdot \prod_{l=1}^{10} \frac{\theta_4(m_l) \theta_1(m_l \pm u) }{\eta^3}
\cdot \frac{\eta^6}{\theta_4(\epsilon_+\pm \alpha)\theta_1(\epsilon_+\pm \alpha \pm u)} \,, \\
Z_{3,[I']}
& = - \frac{\theta_1(a_1+a_2)\theta_1(a_2+a_3)\theta_1(a_3+a_1) \theta_1(2\epsilon_++a_1+a_2)\theta_1(2\epsilon_++a_2+a_3)\theta_1(2\epsilon_++a_3+a_1)}{\eta^6} \nonumber \\
&\quad \cdot \frac{\eta^{12}}{\theta_1(\epsilon_{1,2}+2a_1)\theta_1(\epsilon_{1,2}+2a_2)\theta_1(\epsilon_{1,2}+2a_3) \theta_1(\epsilon_{1,2}+a_1+a_2)\theta_1(\epsilon_{1,2}+a_2+a_3)\theta_1(\epsilon_{1,2}+a_3+a_1)} \nonumber \\
&\quad \cdot \prod_{l=1}^{10} \frac{\theta_1(m_l+a_1)\theta_1(m_l+a_2)\theta_1(m_l+a_3)}{\eta^3}
\cdot \frac{\eta^6}{\theta_1(\epsilon_+ \pm \alpha+ a_1)\theta_1(\epsilon_+ \pm \alpha+ a_2)\theta_1(\epsilon_+ \pm \alpha +a_3)} \,,
\end{align}
where $a_1,a_2,a_3$ are given for $I'=1',2',3',4'$ by
\begin{align}
\label{2wdisc}
[I'=1'] &\rightarrow (a_1,a_2,a_3) = (\frac{1}{2},\frac{1+\tau}{2},\frac{\tau}{2}) \;\,, & [I'=2'] &\rightarrow (a_1,a_2,a_3) = (\frac{\tau}{2},\frac{1+\tau}{2},0) \;\,, \nonumber \\
[I'=3'] &\rightarrow (a_1,a_2,a_3) = (0,\frac{\tau}{2},\frac{1}{2}) \;\,, & [I'=4'] &\rightarrow (a_1,a_2,a_3) = (\frac{1}{2},\frac{1+\tau}{2},0) \;\,.
\end{align}
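The contour integrals above pick up Jeffrey--Kirwan residues at zeros of the Jacobi $\theta_1$ functions in the denominators. As an illustrative numerical sketch (using one common product-representation convention for $\theta_1$; overall normalizations differ between conventions, but the zero locations $z \in \mathbb{Z}+\tau\mathbb{Z}$ do not), one can locate the poles $u=-\frac{\epsilon_1}{2}+\{0,\frac{1}{2},\frac{\tau}{2},\frac{1+\tau}{2}\}$ coming from $\theta_1(\epsilon_1+2u)=0$:

```python
import cmath

def theta1(z, tau, n_max=60):
    """Jacobi theta_1(z|tau) in one common product-form convention:
    2 q^{1/8} sin(pi z) prod_{n>=1} (1-q^n)(1-q^n e^{2 pi i z})(1-q^n e^{-2 pi i z}),
    with q = e^{2 pi i tau}.  Its zeros sit exactly on the lattice z = m + n*tau."""
    q = cmath.exp(2j * cmath.pi * tau)
    val = 2 * cmath.exp(2j * cmath.pi * tau / 8) * cmath.sin(cmath.pi * z)
    for n in range(1, n_max + 1):
        val *= ((1 - q**n)
                * (1 - q**n * cmath.exp(2j * cmath.pi * z))
                * (1 - q**n * cmath.exp(-2j * cmath.pi * z)))
    return val

tau = 0.5 + 1.0j   # sample modular parameter (Im tau > 0)
eps1 = 0.137       # sample value of epsilon_1

# theta_1(eps_1 + 2u) = 0 forces 2u = -eps_1 (mod Z + tau Z), i.e. the
# integrand has poles at u = -eps_1/2 shifted by the four half-periods.
shifts = (0, 0.5, tau / 2, 0.5 + tau / 2)
vals = [abs(theta1(eps1 + 2 * (-eps1 / 2 + s), tau)) for s in shifts]
print(vals)  # all numerically zero
```

The same reasoning applied to $\theta_{2,3,4}$ (which are $\theta_1$ shifted by half-periods) accounts for the remaining pole locations.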
Each $Z_{3,[I]}$ ($I=1,\dots,4$) involves a contour integral. The non-zero JK residues come from the poles at $u=-\frac{\epsilon_{1,2}}{2},\; -\frac{\epsilon_{1,2}}{2}+\frac{1}{2},\; -\frac{\epsilon_{1,2}}{2}+\frac{\tau}{2},\;-\frac{\epsilon_{1,2}}{2}+\frac{1+\tau}{2},\;-\epsilon_+ \pm \alpha$ and $u= -\epsilon_{1,2} + \cdots$, where the $\cdots$ part is determined by $\theta_i(\epsilon_{1,2}+u)=0$. After turning on the $SO(20)$ Wilson line, the three-string elliptic genus becomes
\begin{align}
Z_{n=3}^{\textrm{6d},Sp(1)}&= \frac{1}{4}\left( -Z_{3,[1]} + Z_{3,[2]} + Z_{3,[3]} - Z_{3,[4]} \right) +\frac{1}{8} \left( -Z_{3,[1']} + Z_{3,[2']} + Z_{3,[3']}- Z_{3,[4']} \right) \nonumber \\
&= q^0 \left( \frac{1}{3}f_1(t^3,u^3,v^3,y_i^3) + \frac{1}{6}f_1(t,u,v,y_i)^3 +\frac{1}{2}f_1(t,u,v,y_i)f_1(t^2,u^2,v^2,y_i^2) + f_1(t,u,v,y_i)f_2(t,u,v,y_i) \right) \nonumber \\
& + q \left( Z_{3}^{\textrm{inst}} + f_1(t,u,v,y_i)Z_{2}^{\textrm{inst}}+\left( f_2(t,u,v,y_i) + \frac{1}{2}\left( f_1(t,u,v,y_i) + f_1(t^2,u^2,v^2,y_i^2) \right)\right) Z_{1}^{\textrm{inst}}\right) +\mathcal{O}(q^2) \,,
\label{6dsp13}
\end{align}
where $Z_{3}^{\textrm{inst}}$ is defined by
\begin{align}
&Z_{3}^{\textrm{inst}} \nonumber \\
&= \frac{t}{(1-tu)(1-t/u)}\frac{t^2}{(1-t^2v^2)(1-t^2/v^2)}
\left( (t+\frac{1}{t})(t^2+1+\frac{1}{t^2}) \chi^{SO(20)}_{\overline{512}}(y_i) - (v+\frac{1}{v})(t^2+1+\frac{1}{t^2}) \chi^{SO(20)}_{512}(y_i) \right)\,.
\end{align}
\paragraph{Perturbative index} The perturbative index of the theory on a circle is given by
\begin{align}
\hspace*{-1.3cm}Z^{\textrm{6d},Sp(1)}_{\textrm{pert}}
= \textrm{PE} \left[
\left( \frac{t}{(1-tu)(1-t/u)} \right) \left( -(t+\frac{1}{t}) \left( \chi^{Sp(1)}_{\textrm{adj},+} + \chi^{Sp(1)}_{\textrm{adj}} \frac{q^2}{1-q^2} \right) +
\left( \chi^{Sp(1)}_{\textrm{fund},+} \chi^{SO(20)}_{\textrm{fund}} + \chi^{Sp(1)}_{\textrm{fund}} \chi^{SO(20)}_{\textrm{fund}} \frac{q^2}{1-q^2} \right) \right)
\right] \nonumber \,,
\end{align}
where PE is defined in \eqref{pe}.
The first term of the index comes from the 6d W-bosons and the second term from the 6d fundamental quarks. The background $SO(20)$ Wilson line has no effect on the fields in the $SO(20)$ fundamental representation and only affects the spinor representations, so the perturbative index is unaffected by this Wilson line. In the exponent, we have only kept the contributions from BPS states with positive central charges in the regime $q \ll v \ll y_l^{\pm1}$.
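Here PE denotes the plethystic exponential defined in \eqref{pe}; assuming the standard bosonic convention $\mathrm{PE}[f(x)]=\exp\big(\sum_{n\ge 1}\frac{1}{n}f(x^n)\big)$, a minimal truncated implementation illustrates how a single-letter index exponentiates into a multi-particle index (here, $\mathrm{PE}[t]=1/(1-t)$):

```python
import sympy as sp

t = sp.symbols('t')

def plethystic_exp(f, variables, order):
    """Truncated plethystic exponential PE[f] = exp(sum_{n>=1} f(x -> x^n)/n),
    expanded as a power series in t up to the given order."""
    arg = sum(f.subs({x: x**n for x in variables}, simultaneous=True) / n
              for n in range(1, order + 1))
    return sp.series(sp.exp(arg), t, 0, order).removeO()

# A single bosonic letter t exponentiates to the multi-particle sum 1/(1-t)
single = sp.expand(plethystic_exp(t, [t], 5))
```

In the index above, the "single letter" $f$ is the whole bracket multiplying the momentum factor $\frac{t}{(1-tu)(1-t/u)}$, with all fugacities treated as variables.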
\subsection{5d index}
Everything is the same as in the $Sp(1)$ case: we only increase the gauge group rank by 1 and add two more fundamental hypermultiplets, so the index is a generalization of \eqref{5dinstresult} and \eqref{5dpertresult}.
\paragraph{Perturbative index}
\begin{align}
Z_{\textrm{pert}}^{\textrm{5d},Sp(2)}
&=\textrm{PE}\left[ \frac{t}{(1-tu)(1-t/u)}\left(-(t+\frac{1}{t}) \chi^{Sp(2)}_{\textrm{adj},+} + \chi^{Sp(2)}_{\textrm{fund},+} \chi^{SO(20)}_{\textrm{fund}} \right) \right] \nonumber \\
&=\textrm{PE}\Big[ \frac{t}{(1-tu)(1-t/u)}\Big(-(t+\frac{1}{t})
\left( v_1^2+v_2^2 +v_1 v_2 + \frac{v_1}{v_2} \right) + \left( v_1+v_2 \right) \chi^{SO(20)}_{\textrm{fund}}(y_i) \Big) \Big] \,.
\label{5dsp2pert}
\end{align}
Here $v_i$ are defined below \eqref{ADHM}, and we chose the $Sp(2)$ positive roots to be $2e_1$, $2e_2$, $e_1+e_2$ and $e_1-e_2$, where $e_1$ and $e_2$ are orthogonal unit vectors.
\paragraph{One-instanton} The one-instanton partition function is
\begin{align}
Z^{\textrm{5d},Sp(2)}_{k=1}
&= \frac{1}{2} \left( Z^{+}_{\textrm{vec}} Z^{+}_{\textrm{fund}} + Z^{-}_{\textrm{vec}} Z^{-}_{\textrm{fund}} \right) \nonumber \\
&= \frac{1}{2} \left( \frac{\prod_{j=1}^{10} 2\sinh\frac{m_j}{2}}{2\sinh\frac{\pm\epsilon_- + \epsilon_+}{2} \prod_{i=1}^{2} 2\sinh\frac{\pm \alpha_i + \epsilon_+}{2} }
+ \frac{\prod_{j=1}^{10} 2\cosh\frac{m_j}{2}}{2\sinh\frac{\pm\epsilon_- + \epsilon_+}{2} \prod_{i=1}^{2} 2\cosh\frac{\pm \alpha_i + \epsilon_+}{2} } \right) \nonumber \\
&= \frac{t}{(1-tu)(1-t/u)}\frac{t^2}{(1-t^2v_1^2)(1-t^2/v_1^2)}\frac{t^2}{(1-t^2v_2^2)(1-t^2/v_2^2)} \nonumber \\
&\times \left(
-\left( (v_1+\frac{1}{v_1})(v_2+\frac{1}{v_2}) + (t+\frac{1}{t})^2 \right) \chi^{SO(20)}_{512}(y_i)
+\left( v_1+\frac{1}{v_1} + v_2+\frac{1}{v_2} \right)(t+\frac{1}{t}) \chi^{SO(20)}_{\overline{512}}(y_i)
\right) \,.
\label{5dsp2}
\end{align}
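The step from the $\sinh/\cosh$ products to the $SO(20)$ spinor characters in the last expression uses the weight decomposition of the two $\mathbf{512}$'s: with $y_j=e^{m_j}$, the spinor weights are $(\pm\frac{1}{2},\dots,\pm\frac{1}{2})$, with the two chiralities distinguished by the parity of the number of minus signs (which chirality is called $\mathbf{512}$ versus $\overline{\mathbf{512}}$ is a convention). A minimal numerical check of the underlying identities $\prod_j 2\cosh\frac{m_j}{2}=\chi_{\textrm{even}}+\chi_{\textrm{odd}}$ and $\prod_j 2\sinh\frac{m_j}{2}=\chi_{\textrm{even}}-\chi_{\textrm{odd}}$:

```python
import itertools, math, random

random.seed(0)
m = [random.uniform(-1.0, 1.0) for _ in range(10)]  # sample chemical potentials

# SO(20) spinor weights are (+-1/2, ..., +-1/2); the two chiralities are
# distinguished by the parity of the number of minus signs.
chi_even = chi_odd = 0.0
for signs in itertools.product([+1, -1], repeat=10):
    term = math.exp(0.5 * sum(s * mi for s, mi in zip(signs, m)))
    if signs.count(-1) % 2 == 0:
        chi_even += term
    else:
        chi_odd += term

prod_cosh = math.prod(2 * math.cosh(mi / 2) for mi in m)
prod_sinh = math.prod(2 * math.sinh(mi / 2) for mi in m)

assert abs(prod_cosh - (chi_even + chi_odd)) < 1e-6
assert abs(prod_sinh - (chi_even - chi_odd)) < 1e-6
```

Each factor $e^{m_j/2}\pm e^{-m_j/2}$ distributes over the $2^{10}$ sign choices, with each minus sign contributing a factor $(-1)$ in the $\sinh$ product, which is exactly the chirality grading.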
If we set $v_1=v$, $v_2=w$ and expand $Z^{\textrm{5d}}=Z^{\textrm{5d}}_{\textrm{pert}}Z^{\textrm{5d}}_{\textrm{inst}}$ in terms of $w$, it gives the same result as the 6d index $Z^{\textrm{6d}}$. Namely, we checked that the $w$ expansion of $Z^{\textrm{5d}}$ completely agrees with \eqref{6dsp11}, \eqref{6dsp12} and \eqref{6dsp13}.
\section{Generalization to the 6d SCFTs with $Sp(N)$ gauge group }
\label{sec:general}
In the previous section, we observed that the 6d string winding fugacity $w$ corresponds to one of the fugacities for the 5d $Sp(2)$ gauge symmetry in the instanton partition function. We can generalize this observation.
The $Sp(N+1)$ group can be decomposed into $Sp(1) \times Sp(N) \subset Sp(N+1)$. We expect that the former $Sp(1) \sim SU(2)$ is responsible for the string winding fugacity, and the latter $Sp(N)$ gives the 6d gauge symmetry. We will confirm this assertion by comparing the 5d and 6d indices.
The 5d index for $Sp(N+1)$ gauge group and $N_f=2N+8$ fundamental hypermultiplets is given by
\begin{align}
\hspace*{-.5cm}Z^{\textrm{5d},Sp(N+1)}
&=\textrm{PE}\left[ \frac{t}{(1-tu)(1-t/u)}\left(-(t+\frac{1}{t}) \chi^{Sp(N+1)}_{\textrm{adj},+} + \chi^{Sp(N+1)}_{\textrm{fund},+} \chi^{SO(4N+16)}_{\textrm{fund}} \right) \right] \nonumber \\
&\quad \times \left(1 + q\left( \frac{ \prod_{I=1}^{2N+8} 2\sinh\frac{m_I}{2} }{ 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2} \prod_{i=1}^{N+1} 2\sinh\frac{\epsilon_+ \pm \alpha_i}{2}}
+\frac{ \prod_{I=1}^{2N+8} 2\cosh\frac{m_I}{2} }{ 2\cosh\frac{\epsilon_1}{2} 2\cosh\frac{\epsilon_2}{2} \prod_{i=1}^{N+1} 2\cosh\frac{\epsilon_+ \pm \alpha_i}{2}} \right) +\mathcal{O}(q^2) \right) \,.
\end{align}
The first line is the perturbative index and the second line is the one-instanton partition function.
To compare this result with the 6d index, we single out one of the Coulomb VEV fugacities, $v_{N+1}=e^{-\alpha_{N+1}} \equiv w$. Then the $Sp(N+1)$ characters can be rewritten in terms of $Sp(N)$ characters and $w$:
\begin{align}
\chi^{Sp(N+1)}_{\textrm{fund}}(v_i)
& \equiv \sum_{i=1}^{N+1} \left( v_i + \frac{1}{v_i} \right) = \chi^{Sp(N)}_{\textrm{fund}}(v_i) +\left( w+\frac{1}{w} \right) \,, \\
\chi^{Sp(N+1)}_{\textrm{adj}}(v_i)
& \equiv \frac{ \left(\chi^{Sp(N+1)}_{\textrm{fund}}(v_i) \right)^2 + \chi^{Sp(N+1)}_{\textrm{fund}}(v_i^2) }{2} \nonumber \\
& = \frac{\left( \chi^{Sp(N)}_{\textrm{fund}}(v_i) +\left( w+\frac{1}{w} \right) \right)^2 + \chi^{Sp(N)}_{\textrm{fund}}(v_i^2) +\left( w^2+\frac{1}{w^2} \right)}{2} \nonumber \\
& = \chi^{Sp(N)}_{\textrm{adj}}(v_i) + \left(w+\frac{1}{w} \right) \chi^{Sp(N)}_{\textrm{fund}}(v_i) + w^2 + 1 + \frac{1}{w^2} \,.
\end{align}
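This decomposition can be verified symbolically for, e.g., $N=2$, using only the character formula $\chi_{\textrm{adj}}=\frac{1}{2}\big(\chi_{\textrm{fund}}(v_i)^2+\chi_{\textrm{fund}}(v_i^2)\big)$ quoted above (a quick sympy sketch):

```python
import sympy as sp

v1, v2, w = sp.symbols('v1 v2 w')

def chi_fund(vs):
    # Sp(n) fundamental character: sum of v_i + 1/v_i
    return sum(v + 1/v for v in vs)

def chi_adj(vs):
    # chi_adj = ( chi_fund(v)^2 + chi_fund(v^2) ) / 2, as in the text
    return (chi_fund(vs)**2 + chi_fund([v**2 for v in vs])) / 2

lhs = chi_adj([v1, v2, w])                    # Sp(3) adjoint, with v_3 = w
rhs = (chi_adj([v1, v2])                      # Sp(2) adjoint
       + (w + 1/w) * chi_fund([v1, v2])       # mixed Sp(2) x Sp(1) piece
       + w**2 + 1 + 1/w**2)                   # pure-w piece
assert sp.simplify(lhs - rhs) == 0
```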
Then the perturbative index becomes
\begin{align}
Z_{\textrm{pert}}^{\textrm{5d},Sp(N+1)}
&= \textrm{PE}\left[ \frac{t}{(1-tu)(1-t/u)}\left(-(t+\frac{1}{t}) \chi^{Sp(N)}_{\textrm{adj},+} + \chi^{Sp(N)}_{\textrm{fund},+} \chi^{SO(4N+16)}_{\textrm{fund}} \right) \right] \nonumber \\
&\times \textrm{PE} \left[ \frac{t}{(1-tu)(1-t/u)}\left(-(t+\frac{1}{t}) w^2 + w \left( -(t+\frac{1}{t})\chi^{Sp(N)}_{\textrm{fund}} + \chi^{SO(4N+16)}_{\textrm{fund}} \right) \right) \right] \,,
\label{5dNpert}
\end{align}
where we only keep positive weights (roots) in the plethystic exponential. \\
We can expand the instanton partition function in terms of $w$:
\begin{align} \label{5dNinst}
Z^{\textrm{5d},Sp(N+1)}_{k=1} &=\frac{1}{2} \Bigg( \frac{ \prod_{I=1}^{2N+8} 2\sinh\frac{m_I}{2} }{ 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\sinh\frac{\epsilon_+ \pm \alpha_i}{2}} \frac{t}{(1-t w)(1-t/w)} \nonumber \\
&\qquad\qquad + \frac{ \prod_{I=1}^{2N+8} 2\cosh\frac{m_I}{2} }{ 2\cosh\frac{\epsilon_1}{2} 2\cosh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\cosh\frac{\epsilon_+ \pm \alpha_i}{2}} \frac{t}{(1+t w)(1+t/w)} \Bigg) \nonumber \\
& = \frac{1}{2}w \left( -\frac{ \prod_{I=1}^{2N+8} 2\sinh\frac{m_I}{2} }{ 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\sinh\frac{\epsilon_+ \pm \alpha_i}{2}}
+ \frac{ \prod_{I=1}^{2N+8} 2\cosh\frac{m_I}{2} }{ 2\cosh\frac{\epsilon_1}{2} 2\cosh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\cosh\frac{\epsilon_+ \pm \alpha_i}{2}} \right) \nonumber \\
& - \frac{1}{2}w^2 \left( t + \frac{1}{t} \right) \left( \frac{ \prod_{I=1}^{2N+8} 2\sinh\frac{m_I}{2} }{ 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\sinh\frac{\epsilon_+ \pm \alpha_i}{2}}
+ \frac{ \prod_{I=1}^{2N+8} 2\cosh\frac{m_I}{2} }{ 2\cosh\frac{\epsilon_1}{2} 2\cosh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\cosh\frac{\epsilon_+ \pm \alpha_i}{2}} \right) \nonumber \\
& + \frac{1}{2}w^3 \left( t^2 +1+ \frac{1}{t^2} \right) \left( -\frac{ \prod_{I=1}^{2N+8} 2\sinh\frac{m_I}{2} }{ 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\sinh\frac{\epsilon_+ \pm \alpha_i}{2}}
+ \frac{ \prod_{I=1}^{2N+8} 2\cosh\frac{m_I}{2} }{ 2\cosh\frac{\epsilon_1}{2} 2\cosh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\cosh\frac{\epsilon_+ \pm \alpha_i}{2}} \right) \nonumber \\
& +\cdots \,.
\end{align}
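Since all $w$-dependence in \eqref{5dNinst} comes from the factors $t/\big((1\mp tw)(1\mp t/w)\big)$ associated with the Coulomb-branch direction singled out as $w$, the coefficient pattern above follows from a simple Laurent expansion, which can be checked with sympy:

```python
import sympy as sp

t, w = sp.symbols('t w')

# All w-dependence of the k=1 partition function sits in these two factors:
f_sinh = t / ((1 - t*w) * (1 - t/w))   # multiplies the sinh-product term
f_cosh = t / ((1 + t*w) * (1 + t/w))   # multiplies the cosh-product term

s_sinh = sp.series(f_sinh, w, 0, 4).removeO()
s_cosh = sp.series(f_cosh, w, 0, 4).removeO()

# The expansion coefficients reproduce the (t+1/t) and (t^2+1+1/t^2)
# prefactors of the w^2 and w^3 terms, with the stated relative signs.
assert sp.simplify(s_sinh - (-w - (t + 1/t)*w**2 - (t**2 + 1 + 1/t**2)*w**3)) == 0
assert sp.simplify(s_cosh - (+w - (t + 1/t)*w**2 + (t**2 + 1 + 1/t**2)*w**3)) == 0
```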
Now we will compare this with the 6d index. Note that the first line of \eqref{5dNpert} is already the same as the 6d perturbative index, so the $w^0q^0$ orders clearly agree with each other.
\paragraph{One-string} Now we compare the 5d-6d results at $w^1q^0$ and $w^1q^1$ orders. The one-string elliptic genus has the following form
\begin{align}
Z_{n=1}^{\textrm{6d},Sp(N)}
&= \frac{1}{2}\left( -Z_{1,[1]} +Z_{1,[2]} +Z_{1,[3]} -Z_{1,[4]} \right) \,,
\label{6dN}
\end{align}
where $Z_{1,[I]}$ are given by
\begin{align}
Z_{1,[I]} =
- \frac{\eta^2}{\theta_1(\epsilon_1) \theta_1(\epsilon_2)} \prod_{i=1}^{N}\frac{\eta^2}{\theta_I(\epsilon_+\pm \alpha_i)}\prod_{l=1}^{2N+8} \frac{\theta_I(m_l)}{\eta} \,.
\end{align}
After making the $q$ expansion of $Z_{1,[I]}$ and replacing all chemical potentials by $z \rightarrow \frac{i z}{2\pi}$ (where $z$ denotes $\epsilon_{1,2}, \alpha_i, m_l$), one obtains
\begin{align}
Z_{1,[1]} & = \frac{ \prod_{l=1}^{2N+8} 2\sinh\frac{m_l}{2} }{ 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\sinh\frac{\epsilon_+ \pm \alpha_i}{2}}q^1 + \mathcal{O}(q^2) \,,\\
Z_{1,[2]} & = \frac{ \prod_{l=1}^{2N+8} 2\cosh\frac{m_l}{2} }{ 2\cosh\frac{\epsilon_1}{2} 2\cosh\frac{\epsilon_2}{2} \prod_{i=1}^{N} 2\cosh\frac{\epsilon_+ \pm \alpha_i}{2}}q^1 + \mathcal{O}(q^2) \,, \\
Z_{1,[3]} & = \left( \frac{\sum_{l=1}^{2N+8}2\cosh\frac{m_l}{2} -\sum_{i=1}^{N} 2\cosh\frac{\epsilon_+\pm \alpha_i}{2}} {2 \cdot 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2}} \right)q^0+F(m_l,v_i,\epsilon_i) q^1 + \mathcal{O}(q^2) \,,\\
Z_{1,[4]} & = -\left( \frac{\sum_{l=1}^{2N+8}2\cosh\frac{m_l}{2} -\sum_{i=1}^{N} 2\cosh\frac{\epsilon_+\pm \alpha_i}{2}} {2 \cdot 2\sinh\frac{\epsilon_1}{2} 2\sinh\frac{\epsilon_2}{2}} \right)q^0+F(m_l,v_i,\epsilon_i) q^1 + \mathcal{O}(q^2) \,.
\end{align}
We do not write the explicit form of $F(m_l,v_i,\epsilon_i)$, the coefficient of $q$ in $Z_{1,[3]}$ and $Z_{1,[4]}$, because these contributions cancel in the summation. Then the $w^1q^0$ term in \eqref{6dN} agrees with \eqref{5dNpert}. We have also checked that the $w^1q^1$ term agrees with the corresponding order of $Z^{\textrm{5d}}$.
\paragraph{Two-strings} We compare the 5d-6d results at $w^2q^0$ and $w^2q^1$ orders. The two-string elliptic genus is given by
\begin{align}
Z_{2,[0]} &= \oint \eta^2 du \frac{\theta_1(2\epsilon_+)}{i \eta} \cdot \frac{\eta^6}{\theta_1(\epsilon_1)\theta_1(\epsilon_2)\theta_1(\epsilon_1 \pm 2u)\theta_1(\epsilon_2 \pm 2u)}
\cdot \prod_{l=1}^{2N+8} \frac{\theta_1(m_l \pm u)}{\eta} \cdot \prod_{i=1}^{N}\frac{\eta^4}{\theta_1(\epsilon_+ \pm \alpha_i \pm u)} \,, \nonumber\\
Z_{2,[I]} &= \frac{\theta_1(a_v) \theta_1( 2\epsilon_+ +a_v)}{\eta^2} \cdot \frac{\eta^6}{\theta_1(\epsilon_1 +a_v)\theta_1(\epsilon_2+a_v)\theta_1(\epsilon_1+2a_{\pm})\theta_1(\epsilon_2+2a_{\pm})} \nonumber \\
&\quad \cdot \prod_{l=1}^{2N+8} \frac{\theta_1(m_l+a_+) \theta_1(m_l+a_-)}{\eta^2} \cdot \prod_{i=1}^{N}\frac{\eta^4}{\theta_1(\epsilon_+ \pm \alpha_i +a_+)\theta_1(\epsilon_+ \pm \alpha_i +a_-)} \,,
\end{align}
where the discrete sectors $I$ are the same as in \eqref{2wdisc}. There are additional poles from the symmetric hypermultiplets, which are given by $u_*=-\epsilon_+ \pm\alpha_i$ for all $i$. Now we can obtain the general form of the two-string elliptic genus:
\begin{align}
Z_{2,[0]}
&= \frac{1}{2}\frac{1}{\eta^{12}\theta_1(\epsilon_1)\theta_1(\epsilon_2)} \Bigg[ \sum_{i=1}^{4}\left( \frac{\prod_{l=1}^{2N+8}\theta_i(m_l \pm\frac{\epsilon_1}{2})}{\theta_1(2\epsilon_1)\theta_1(\epsilon_2-\epsilon_1)\prod_{m=1}^{N} \theta_i(\epsilon_+ \pm\alpha_m \pm\frac{\epsilon_1}{2} )}
+(\epsilon_1 \rightarrow \epsilon_2)\right)\nonumber \\
+ \sum_{n=1}^{N} & \Bigg( \frac{\prod_{l=1}^{2N+8} \theta_1(m_l \pm (\epsilon_+ +\alpha_n))}{\theta_1(\epsilon_1 \pm2(\epsilon_+ + \alpha_n))\theta_1(\epsilon_2 \pm 2(\epsilon_+ + \alpha_n)) \theta_1(-2\alpha_n)\theta_1(2\epsilon_++2\alpha_n)\prod_{\substack{m=1\\ m\neq n}}^N \theta_1(- \alpha_n \pm \alpha_m)\theta_1(2\epsilon_++\alpha_n \pm \alpha_m) } \nonumber \\
& \quad +(\alpha_n \rightarrow -\alpha_n) \Bigg)
\Bigg] \,,
\end{align}
\begin{align}
Z_{2,[1]} &= \frac{\theta_2(0)\theta_2(2\epsilon_+)\prod_{l=1}^{2N+8}\theta_1(m_l)\theta_2(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_2(\epsilon_1)\theta_2(\epsilon_2) \prod_{m=1}^N \theta_1(\epsilon_+\pm\alpha_m)\theta_2(\epsilon_+\pm\alpha_m)} \,, \; \nonumber \\
Z_{2,[2]} &= \frac{\theta_2(0)\theta_2(2\epsilon_+)\prod_{l=1}^{2N+8}\theta_3(m_l)\theta_4(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_2(\epsilon_1)\theta_2(\epsilon_2) \prod_{m=1}^N \theta_3(\epsilon_+\pm\alpha_m)\theta_4(\epsilon_+\pm\alpha_m)} \,, \; \nonumber \\
Z_{2,[3]} &= \frac{\theta_4(0)\theta_4(2\epsilon_+)\prod_{l=1}^{2N+8}\theta_1(m_l)\theta_4(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_4(\epsilon_1)\theta_4(\epsilon_2) \prod_{m=1}^N \theta_1(\epsilon_+\pm\alpha_m)\theta_4(\epsilon_+\pm\alpha_m)} \,, \; \nonumber \\
Z_{2,[4]} &= \frac{\theta_4(0)\theta_4(2\epsilon_+)\prod_{l=1}^{2N+8}\theta_2(m_l)\theta_3(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_4(\epsilon_1)\theta_4(\epsilon_2) \prod_{m=1}^N \theta_2(\epsilon_+\pm\alpha_m)\theta_3(\epsilon_+\pm\alpha_m)} \,, \; \nonumber \\
Z_{2,[5]} &= \frac{\theta_3(0)\theta_3(2\epsilon_+)\prod_{l=1}^{2N+8}\theta_1(m_l)\theta_3(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_3(\epsilon_1)\theta_3(\epsilon_2) \prod_{m=1}^N \theta_1(\epsilon_+\pm\alpha_m)\theta_3(\epsilon_+\pm\alpha_m)} \,, \; \nonumber \\
Z_{2,[6]} &= \frac{\theta_3(0)\theta_3(2\epsilon_+)\prod_{l=1}^{2N+8}\theta_2(m_l)\theta_4(m_l)}{\eta^{12}\theta_1(\epsilon_1)^2\theta_1(\epsilon_2)^2 \theta_3(\epsilon_1)\theta_3(\epsilon_2) \prod_{m=1}^N \theta_2(\epsilon_+\pm\alpha_m)\theta_4(\epsilon_+\pm\alpha_m)} \,. \; \nonumber \\
\end{align}
After plugging these into \eqref{6dsp12}, one can obtain the two-string elliptic genus.
We compared the $q$-expanded form of this elliptic genus with the 5d index for $N$ up to $N=8$, and found perfect agreement between the two results. We also checked the agreement of the three-string elliptic genus up to $N=3$.
\section{Conclusion}
\label{sec:conc}
We studied the 6d SCFTs compactified on a circle with $Sp(N)$ gauge symmetry and $N_f=2N+8$ fundamental hypermultiplets. In particular, we tested the 5d $Sp(N+1)$ gauge theory descriptions of the 6d theories. We compared the Witten indices of the 5d and 6d theories. For the 5d instanton partition function, the usual ADHM construction contains unwanted string theory degrees except in the one-instanton sector. We observe perfect agreement of the two indices in a double expansion of the string winding fugacity $w$ and the instanton fugacity $q$ up to $w^3q^1$ order. As usual, the 5d instanton charge is mapped to the 6d KK momentum mode. The 5d $Sp(N+1)$ gauge group is decomposed into $Sp(1) \times Sp(N)$, and the former $Sp(1)$ charge is mapped to the 6d self-dual string winding number. The fugacities for the latter 5d $Sp(N)$ gauge symmetry and $SO(4N+16)$ flavor symmetry are mapped to those of the 6d $Sp(N)$ gauge symmetry and $SO(4N+16)$ flavor symmetry. We have also observed that the background $SO(4N+16)$ Wilson line plays a crucial role in these 5d-6d dualities, similar to the $E_8$ Wilson line in the E-string theory. These results provide the detailed rules of the dualities proposed in \cite{Hayashi:2015zka}. \\
The natural question is what happens if we naively compute higher instanton partition functions using \eqref{5dinst}. Our naive computation shows disagreements with the result predicted by the elliptic genus of self-dual strings.
The difference between the two results must come from the extra degrees in the string-engineered ADHM construction. We hope this result gives a better understanding of the extra degrees in the brane system. \\
We can also try to check the duality between the 5d gauge theories with $Sp(2)$ and $SU(3)$ gauge groups, each having $10$ fundamental hypermultiplets \cite{Tachikawa:2015mha,Yonekura:2015ksa,Hayashi:2015fsa,Gaiotto:2015una}. The $SU(3)$ gauge theory is also conjectured to uplift to the same 6d $Sp(1)$ SCFT on a circle. Although the string theory engineered ADHM construction of the $SU(3)$ gauge theories has extra degrees too, we can circumvent this problem by introducing anti-symmetric hypermultiplets. The anti-symmetric representation is the same as the (anti-)fundamental representation in the $SU(3)$ group, so the 5d $SU(3)$ gauge theory with $10$ fundamental hypermultiplets can be regarded as the gauge theory with 8 fundamental and 2 anti-symmetric hypermultiplets. This trivial change of viewpoint affects the details of the ADHM construction. Such alternative ADHM descriptions are sometimes shown to provide a more useful description of instantons \cite{Gaiotto:2015una}. It is interesting to see if their ideas apply to our system.
\vspace{0.8cm}
\noindent{\bf\large Acknowledgements}
\noindent
The author would like to thank Yoonseok Hwang, Ki-Hong Lee, and Sangmin Lee for helpful discussions. I would especially like to thank Seok Kim for suggesting this project, valuable discussions, and careful reading of the manuscript. I also thank Sung-Soo Kim for informing me of the publication schedule of \cite{sskim}, which has an overlap with this paper. The work of YY is supported by Samsung Science and Technology Foundation under Project Number SSTF-BA1402-08 and BK 21 Plus Program.
\pagebreak
\section{Introduction}
Antiferromagnetic materials (AFM) are promising candidates for fast, robust and energy-efficient spintronic devices \cite{Baltz2018}. AFMs possess two or more magnetic sublattices, with vanishing net magnetization. This absence of magnetic stray fields enables higher bit packing density of AFMs devices compared to ferro(i)magnetic materials (FMs), enhanced robustness against interfering external magnetic fields and potentially THz switching speeds \citep{Kampfrath2011}. Especially insulating antiferromagnetic materials (iAFM) have emerged as a promising material class for the development of low power devices, because their low damping allows for the transport of spin currents over long distances \citep{Lebrun2018}.
Crucial for the implementation of AFMs as active spintronic devices is the control of the antiferromagnetic order. In recent years it has been established that current pulses through an adjacent heavy metal layer can induce a reorientation of the antiferromagnetic ordering in insulating AFMs \citep{Moriyama2018, Chen2018, Baldrati2019}. For iAFMs with strong magnetostriction, the reorientation of the N\'eel vector is dominated by a thermomagnetoelastic switching and strongly depends on the device geometry \citep{Zhang2018, Baldrati2020, Meer2021}.
For FMs the device geometry and shape-induced control of the domains is a key tool for tailoring functional device properties. In AFMs, conventional shape anisotropy caused by the magnetic dipolar interactions is not present, due to the absence of a demagnetization field. However, theoretical work on shape-induced phenomena in finite size antiferromagnets predicts an ordering of the antiferromagnetic domains, with long-range strain fields leading to the formation of shape-dependent domain structures \citep{Gomonay2007, Gomonay2014}. It has been shown that the shape-induced domains in the FM layer of an AFM/FM bilayer can for instance be used to imprint an AFM vortex state into the adjacent AFM layer \citep{Wu2011, Sort2006}. However, initial studies of patterning-induced effects in antiferromagnetic LaFeO$_3$ could not observe any changes in the domain structure after patterning different elements with etching \citep{Czekaj2007}. Later studies patterned elements via an Ar$^+$ ion implantation-based patterning technique, which resulted in antiferromagnetic structures embedded in a non-magnetic layer \citep{Folven2010, Lee2020}. This technique led to the observation of changes in the antiferromagnetic ordering near the patterning edge for LaFeO$_3$ \citep{Folven2010, Folven2011, Folven2012} and more recently La$_{0.7}$Sr$_{0.3}$FeO$_3$ \citep{Lee2020}, interpreted as an edge effect for elements patterned along the easy axis.
Further studies have suggested the exploitation of this effect in exchange bias applications of AFM/FM heterostructures \citep{Folven2015}. However, these previous investigations of patterning-induced modulations of the AFM order have been focused on the passive application of AFMs in AFM/FM bilayers. Considering the potential of antiferromagnets as active elements in spintronic devices, it is important to investigate patterning- and shape-induced effects and in particular the control of the domain configuration in AFMs without an adjacent FM layer. It is crucial to not only understand patterning-induced effects near the edge, but also the influence of shape-dependent strain on the domain structure inside a structured antiferromagnetic device. This effect would be most suitable to tailor domain configurations by the shape.
The prototypical collinear antiferromagnet NiO has been considered to be a promising candidate for an active element in spintronic applications, in contrast to LaFeO$_3$, due to the possibility of electrically controlling and reading the AFM order \citep{Hoogeboom2017, Moriyama2018, Baldrati2018a, Fischer2018} and recent observations of ultrafast currents in the THz regime in NiO/Pt bilayers \citep{Kampfrath2011, Moriyama2020}. In addition, NiO exhibits a high N\'eel temperature of 523$\,$K in the bulk \citep{Roth1960} and strong magneto-elastic coupling \citep{Aytan2017}. The latter has been used extensively to manipulate the AFM order of NiO, by growth-induced strain \citep{Kozio-Rachwa2020, Altieri2003, Alders1998}, piezoelectric substrates exerting strain \citep{Barra2021} and indirectly via current-induced heating leading to strain \citep{Meer2021}. However, the effect of shape-dependent strain on the domain structure of NiO thin films has not been explored. Considering the application of AFMs with strong magnetostriction like NiO or CoO in active spintronic devices, it is important to investigate how the geometry influences the antiferromagnetic domain configuration and how one could use different geometries to control the antiferromagnetic order.
In this work, we demonstrate the tailoring of the AFM ground state of NiO by shape-dependent strain. We study the Néel vector orientation in patterned elements by photoemission electron microscopy (PEEM) exploiting the x-ray magnetic linear dichroism (XMLD) effect for magnetic contrast. We first identify and compare the shape-induced domain structure of elements oriented along different axes before we theoretically explore how shape-induced effects can manipulate the antiferromagnetic ordering in different element geometries. Finally, we demonstrate how the modification of the shape-dependent strain by variation of the aspect ratio of our elements can be used to control the antiferromagnetic domain configuration, demonstrating thus a tool for the shape-induced control of future AFM devices.
\section{Results} To investigate shape-induced effects on the antiferromagnetic domain structure, we have grown an epitaxial NiO(10nm)/Pt(2nm) bilayer on an MgO(001) substrate and used Ar ion beam etching to pattern various elements with different orientations. Similarly prepared bilayers of NiO and Pt are currently extensively used for current-induced switching \citep{Moriyama2018, Chen2018, Baldrati2019, Zhang2018, Baldrati2020, Meer2021} and THz radiation experiments \citep{Paperfrom2020}. As depicted in Fig.\ref{fig:1}a, we have etched trenches with a width of around $1\,$\textmu m and a depth of about $20\,$nm around the desired elements. Additionally, we deposited about $1.4\,$nm of ruthenium inside the trenches to reduce the possibility of discharges during PEEM imaging \citep{Stohr1999}. To allow for a reconfiguration of the AFM domains, the sample was annealed after patterning above its Néel temperature for 10 minutes at 550$\,$K under vacuum. Measurements were carried out at the UE49-PGM/SPEEM at the BESSY II electron storage ring operated by the Helmholtz-Zentrum Berlin für Materialien und Energie \citep{Kronast2016}. \\
\begin{figure}[b]
\includegraphics{2022_4_4_Figure1.png}
\caption{\label{fig:1} (a) XMLD-PEEM image of shape-induced NiO domains inside a rectangular shaped element, with its edges oriented along the in-plane projection of the easy axis. (b) Sketch of the experimental setup and orientation of the observed different Néel vector directions $[ \pm 5 \pm 5$ $19]$ with respect to the polarization vector. An orientation along $[5 5 19]$ (yellow) is possible, but was not observed in this element.}
\end{figure}
We first measured polarization-dependent absorption spectra around the Ni \textit{L}$_3$ and \textit{L}$_2$ edge (Appendix \ref{sec_XMLDsignal}) to verify the antiferromagnetic ordering of our films at room temperature. The XMLD contrast is proportional to the orientation of the Néel vector. By studying the XMLD contrast dependence on the azimuthal angle $\gamma$ and angle of the beam-polarization $\omega$ (Appendix \ref{sec_XMLDsignal}) we can identify in Fig. \ref{fig:1}a four antiferromagnetic domains present in a rectangular element of $10\times5\,$\textmu m whose long axis is oriented along the [110] direction. Three different levels of XMLD are observed inside our element, indicating three types of domains, a larger domain in the center (domain 1 - blue), two originating from the short edges (domain 2 - green) and one narrow at the long edge (domain 3 - red). The directions of the N\'eel vector in the different domains are depicted in Fig. \ref{fig:1}b. We can observe that the in-plane projection of the N\'eel vector in all domains is oriented orthogonal to the edges of the element along [110] and $[1\bar{1}0]$.
The formation of domain 3 (red) along the edge can be attributed to localized changes of the anisotropy near the edge of the element, related to patterning-induced material property changes. However, the shape of domains 2 (green) and 1 (blue) in the center of the element cannot be understood by local changes of the anisotropy at the edge of the element, and we need to model long-range magnetoelastic interactions to understand the domain configuration.
NiO is known for its strong magnetoelastic coupling, which is responsible for the creation of internal (magnetoelastic) stresses in a magnetically ordered state. In a free standing homogeneously ordered sample these stresses induce a pronounced spontaneous strain ($u_0\propto 10^{-4}- 3\cdot 10^{-3}$ \cite{Nakahigashi1975}) characterised by the strain tensor $\hat{u}^\mathrm{spon}_0$, whose components are related to the components of the N\'eel vector $\mathbf{n}$.
In a multilayer system the internal magnetoelastic stresses are complemented by the external stresses due to the clamping of the antiferromagnetic layer by a nonmagnetic substrate \citep{Gomonay2002}. In this case the resulting strain field can be split into two parts, spontaneous (or plastic) strains $\hat{u}^\mathrm{spon}_0[\mathbf{n}(\mathbf{r})]$ associated with the distribution of the N\'eel vector (as in the absence of a substrate) and additional, elastic strains $\hat{u}^\mathrm{elast}$: $\hat{u}^\mathrm{tot}=\hat{u}^\mathrm{spon}_0+\hat{u}^\mathrm{elast}$.
To calculate the elastic strain we use an approach of elasticity theory with continuously distributed defects \cite{Teodosiu1982, Kleman1972}. In particular, we assume that in magnetic multilayers the defects originate from the incompatibility between the spontaneous strain $\hat{u}^\mathrm{spon}_0$ and the non-deformed (reference) state of a non-magnetic substrate at the NiO/substrate interface, or from incompatibility between spontaneous strain in neighboring domains. From the compatibility condition for the total strain $\varepsilon_{ijk}\varepsilon_{lmn}\partial_j\partial_m{u}^\mathrm{tot}_{kn} =0$ (where $\varepsilon_{ijk}$ is an antisymmetric Levi-Civita tensor) we obtain a set of equations for the elastic strains
\begin{equation}\label{eq_incompatibility}
\varepsilon_{ijk}\varepsilon_{lmn}\partial_j\partial_m{u}^\mathrm{elast}_{kn} =\eta_{il},
\end{equation}
in which the incompatibility tensor ${\eta}_{il}\equiv-\varepsilon_{ijk}\varepsilon_{lmn}\partial_j\partial_m{u}^\mathrm{spon}_{kn}[\mathbf{n}(\mathbf{r})]$ is calculated for a given distribution of the N\'eel vector.
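Equation \eqref{eq_incompatibility} encodes the statement that strain "charges" appear only where the spontaneous strain cannot be derived from a single-valued displacement field. A small symbolic sketch (using sympy; the displacement field below is an arbitrary test choice) verifying that any compatible strain $u_{kn}=\frac{1}{2}(\partial_k v_n+\partial_n v_k)$ has identically vanishing incompatibility:

```python
import itertools
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# A compatible strain: derived from a single-valued displacement field v(r).
v = [x**2 * y + z, sp.sin(x) * z, x * y * z]     # arbitrary test displacement
u = [[(sp.diff(v[k], coords[n]) + sp.diff(v[n], coords[k])) / 2
      for n in range(3)] for k in range(3)]

def incompatibility(u):
    """eta_il = eps_ijk eps_lmn d_j d_m u_kn (Levi-Civita contraction)."""
    eta = [[sp.Integer(0)] * 3 for _ in range(3)]
    for i, l in itertools.product(range(3), repeat=2):
        s = sp.Integer(0)
        for j, k, m, n in itertools.product(range(3), repeat=4):
            s += (sp.LeviCivita(i, j, k) * sp.LeviCivita(l, m, n)
                  * sp.diff(u[k][n], coords[j], coords[m]))
        eta[i][l] = sp.simplify(s)
    return eta

eta = incompatibility(u)
assert all(eta[i][l] == 0 for i in range(3) for l in range(3))
```

The vanishing follows because the antisymmetric $\varepsilon$-symbols contract symmetric pairs of derivatives; a non-zero $\hat{\eta}$ therefore signals strains with no underlying single-valued displacement, exactly the "defect charges" discussed in the text.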
Equation (\ref{eq_incompatibility}) is similar to equations of electrostatics, in which the incompatibility $\hat{\eta}$ plays the role of the charges and the elastic strains $\hat{u}^\mathrm{elast}$ correspond to the potentials \cite{Eshelby1956}. Moreover, similar to the electric and magnetostatic stray fields, the field of the corresponding elastic strains is long-range and therefore can stabilise an inhomogeneous distribution of the magnetic vectors. The similarity with the equations of electrostatics and magnetostatics allows for a qualitative interpretation of the magnetoelastic effects in terms of charge distributions. Here, we consider some of the effects that reinforce our intuitive reasoning through modelling, as explained below. As shown in Ref. \cite{Schmitt2020b}, we distinguish four types of T-domains with the N\'eel vector oriented along $[ \pm 5 \pm 5$ $19]$. The pairs with opposite orientation of projection on the film plane have the same in-plane components of spontaneous strain and will be treated in the further discussion as the same domain.
We start with a discussion of the origin of the domain structure in thin films and patterned elements. A single-domain continuous film of NiO is charged due to incompatibility strain charges homogeneously distributed at the interface with the non-magnetic substrate. The charge density depends on the elastic and magnetoelastic properties of the interface and is localised in a thin layer of the order of the exchange length (the characteristic length scale at which the N\'eel vector decays inside the nonmagnetic region), see Appendix~\ref{sec_theory}. These charges create an additional homogeneous strain $\hat{u}^\mathrm{elast}$ whose non-negative contribution to the energy of the NiO layer is proportional to the volume of the NiO.
\begin{widetext}
\begin{figure}[h]
\includegraphics{2022_4_4_Figure3.png}
\caption{\label{fig:3} (a) XMLD images of different NiO(10)/Pt(2) rectangular devices with varying aspect ratio. The edges of the devices are oriented along the in-plane projections of the easy axes. The arrows show the in-plane projection of the Néel vector determined from the greyscale contrast. (b) Final equilibrium state of the magnetic texture after considering magnetoelastic interactions to simulate the domain distribution for different aspect ratios. The color code indicates the direction of the Néel vector.}
\end{figure}
\end{widetext}
In a multidomain sample with equally distributed domains of all types, the average strain incompatibility and average charge density vanish. The local charge density is still non-zero and contributes to the energy of the sample. However, this contribution is proportional to the average domain volume. Hence, a small-scale multidomain structure is energetically favourable, with the domain size being limited by the positive energy of the domain walls (similar to the Kittel model in ferromagnets \citep{Kittel1949}). However, the formation of a new domain inside a single-domain region is blocked by a high energy barrier associated with the coherent rotation of a large number of magnetic moments. The energy barrier can be much lower at the sample surfaces and edges due to the additional contributions from surface energy and incompatibility charges at the corners \citep{Gomonay2002}.
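The Kittel-type argument can be summarised by a schematic energy balance; the linear growth of the charge energy with domain size and the coefficients $\sigma_\mathrm{w}$ and $c_\mathrm{me}$ below are assumptions of this sketch rather than quantities extracted from our modelling:

```latex
% Per unit film area: domain walls cost ~ sigma_w t / d, while the
% residual incompatibility-charge energy grows with the domain size d,
\begin{equation*}
 \frac{E(d)}{A} \sim \frac{\sigma_{\mathrm{w}}\, t}{d} + c_{\mathrm{me}}\, d,
 \qquad
 d^{*} \sim \sqrt{\frac{\sigma_{\mathrm{w}}\, t}{c_{\mathrm{me}}}}\,,
\end{equation*}
% where sigma_w is the domain-wall energy density, t the film thickness,
% and c_me an effective magnetoelastic stiffness; minimisation gives the
% familiar square-root (Kittel-like) scaling of the domain size with t.
```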
First, we consider the role of the surface magnetic anisotropy, which in our case favours alignment of the N\'eel vector perpendicular to the surface. For this we studied the evolution of the magnetic structure in the patterned elements with different aspect ratio cut parallel to the in-plane projection of the easy magnetic axis (see Fig. \ref{fig:3}a).
In this geometry the surface anisotropy induces the formation of dark domains along the $[{\overline{1}}10]$ edges and bright domains along the $[110]$ edges. The final texture includes two closure domains localised at the short edges and a large orthogonal domain that spreads between the two long edges. The closure domains grow from the edges due to magnetoelastic forces that act to diminish the average magnetoelastic charge of the sample. This growth is limited by the increasing energy of the domain walls. The size of the closure domains is of the order of the length of the short edge and depends on the aspect ratio of the sample (see Fig. \ref{fig:3}b). It should be noted that in the absence of magnetoelastic coupling, the closure domains would be localised in the vicinity of the short edge within a distance of several magnetic domain wall widths (see appendix Fig. \ref{Comparision}), independent of the aspect ratio of the device. Our simulations also show that the closure domains can be localised along the longer edges of the samples, as could be experimentally observed for larger patterned devices. However, both configurations (one with the closure domains along the short edges and the other along the long edges) are observed in a finite range of aspect ratios (between 1/3 and 3), for which their energies are comparable. In this case the structure of the final state depends on the initial configuration and on the kinetics of the domain growth.
To better illustrate the effect of strain, we next investigate elements oriented along the [100] and [010] axes, where even more significant effects are expected. For elements oriented along the projection of the hard axes, the domains do not align along the edges of the element but instead are centered around the corners of the element, see Fig. \ref{fig:2}a. Inside the element we observe a large green domain and two blue domains located near the top left and bottom right corners. Outside of the element we observe a domain (green arrows) and two additional domains (blue arrows) located at the top right and bottom left corners, opposite to the domains in the inner corners. If domain formation were dominated by an alignment along certain crystallographic axes, one would expect the same domains to be present at the inside and outside edges of the element. Since this is not the case, we need to consider shape-induced strain, in particular the role of incompatibility charges in the corners of the elements, to understand the origin of the domain structure.
\begin{figure}[b]
\includegraphics{2022_4_4_Figure2.png}%
\caption{\label{fig:2} (a) Antiferromagnetic domain structure of a rectangular shaped element oriented along the in-plane projections of the hard axes. (b) Simulated equilibrium state of the magnetic texture. The color code and the arrows indicate the direction of the Néel vector.}
\end{figure}
For elements cut along the in-plane projection of the hard magnetic axis, the surface anisotropy favours an orientation of the N\'eel vector along a hard magnetic axis and does not set a preferable domain type. However, the surface anisotropy sets orthogonal easy directions at neighboring edges and favours the formation of vortex-like textures of the N\'eel vectors in the vicinity of the sample corners. Such a rotation of the N\'eel vector through 90$^\circ$ is associated with an inhomogeneous rotation of the spontaneous strain $\hat{u}^\mathrm{spon}_0$ and creates an elastic vortex structure
-- so-called disclinations \cite{JohnA.SimmonsR.deWit1970,Kleman1972} -- localised in the corners. Each disclination is characterised by incompatibility charges which have opposite signs in neighboring corners (see Fig. \ref{SketchCorner}). These charges create a radially distributed field of elastic strain $\hat{u}^\mathrm{elast}$ \cite{Kleman1972} that, via magnetoelastic coupling, sets preferable directions for the N\'eel vector along the bisectors of the element. In other words, the elastic strains lower the energy barrier for a closure domain. In this case, the closure domains start to grow from the opposite corners, which carry charges of the same sign, as shown in Fig. \ref{fig:2}(b).
Interestingly, such magnetoelastic disclinations appear not only at the inner corners of the rectangular elements, but also at the corners of the outer part of the element, where the spontaneous strain rotates in the opposite direction \citep{Kim2018}.
Hence, internal and external strain charges of the same corner have opposite signs. As a result, the closure domains of the same type start to grow along the different diagonals in the internal and external regions (see Fig. \ref{fig:2}(a) and (b)).
Magnetoelastic disclinations (corner charges) are also present in the elements cut along the easy magnetic axes. In this case, the corresponding elastic strains set a preferable direction of the N\'eel vector inside the domain walls near the corners. Our calculations show a difference of the domain wall width and pinning energy in neighboring corners as a result of the strain-induced anisotropy.
In addition, the different antiferromagnetic domains are accompanied by deformations of the crystallographic structure. To maintain mechanical equilibrium, the creation of antiferromagnetic domains is therefore accompanied by destressing effects \citep{Gomonay2005,Gomonay2007}.
To demonstrate the role of incompatibility and the destressing effects in the formation and stabilisation of the domain structure, we have calculated the evolution of the domain structure for elements oriented along the in-plane projection of the easy axes, starting from an almost homogeneous state (domain 2, green) with small domains (domain 1, blue) localised at the long edges of the sample, using different values of the damping parameter (different rates of energy loss). At the initial stage the closure domains at the long edges, being pinned in the corners, grow in size and thereby reduce the average incompatibility of the sample (Appendix Fig. \ref{Movie}). In the case of slow (quasistatic) relaxation (large damping) the system evolves into a state with closure domains of type 1 along the long edges separated by domain 2. In the opposite case of small damping the closure domains merge and the final state corresponds to closure domains of type 2 localised at the short edges.
\section{Discussion}
By investigating elements with different orientations and aspect ratios etched into NiO/Pt bilayers, we identify long-range strain as the governing factor in the shape-dependent formation of antiferromagnetic domains. We observe a preferential orientation of the Néel vector perpendicular to the edges of our devices due to patterning effects. In addition, by etching a trench around our elements we introduce elastic strain into the system. We investigate the domain structures of elements oriented along the projections of the easy and hard axes and identify shape-dependent strain as responsible for the observed domain structures. We can reproduce our experimental observations by magnetoelastic modelling that accounts for the spontaneous strain, due to the distribution of the Néel vector, and the elastic strain due to contributions from the substrate and the patterning. Analogous to shape anisotropy in ferromagnets, magnetoelastic interactions in antiferromagnets are long-range and can be used to tailor the ground state of antiferromagnetic devices. For example, by choosing the right size, aspect ratio and orientation, one could use shape-induced strain to control the antiferromagnetic ground state in antiferromagnetic THz emitters to tailor and optimize their response \citep{Paperfrom2020}. In addition, the strain from the patterned device itself could be used in electrical switching of antiferromagnets to support or hinder the reorientation of the Néel vector, independent of the underlying switching mechanism.
In summary, we identify how shape-dependent strain can be used to control the antiferromagnetic ground state in NiO over several microns. Since magnetoelastic coupling is significant for several other antiferromagnets such as CoO and Hematite, shape-induced strain can be considered to be the antiferromagnetic equivalent of conventional shape-induced anisotropy in ferromagnets and provide a unique means to control antiferromagnets.
\begin{acknowledgments}
The authors thank T. Reimer for skillful technical assistance. We thank HZB for the allocation of synchrotron radiation beamtime, we thankfully acknowledge the financial support by HZB. The work has benefited from insights gained from experiments that were performed at the CIRCE beamline at ALBA Synchrotron with the collaboration of ALBA staff. L.B. acknowledges the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie Grant Agreement ARTES No. 793159. L.B. and M.K. acknowledge support from the Graduate School of Excellence Materials Science in Mainz (MAINZ) DFG 266, the DAAD (Spintronics network, Project No. 57334897 and Insulator Spin-Orbitronics, Project No. 57524834), and all groups from Mainz acknowledge that this work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), TRR 173-268565370 (Project Nos. A01, A03, A11, B02, and B12) and KAUST (OSR-2019-CRG8-4048). J.S. additionally acknowledges the Alexander von Humboldt Foundation and O.G and J.S. acknowledge the EU FET Open RIA Grant No. 766566 and the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), TRR 288-422213477 (project A09). R.R. also acknowledges support from the European Commission through the Project 734187-SPICOLOST (H2020-MSCA-RISE-2016), the European Union’s Horizon 2020 research and innovation program through the Marie Sklodowska-Curie Actions Grant Agreement SPEC No. 894006, the MCIN/AEI (RYC 2019-026915-I), the Xunta de Galicia (ED431B 2021/013, Centro Singular de Investigación de Galicia Accreditation 2019-2022, ED431G 2019/03) and the European Union (European Regional Development Fund - ERDF). M.K. acknowledges financial support from the Horizon 2020 Framework Programme of the European Commission under FET-Open Grant Agreement No. 863155 (s-Nebula). This work was also supported by ERATO “Spin Quantum Rectification Project” (Grant No. 
JPMJER1402) and the Grant-in-Aid for Scientific Research on Innovative Area, “Nano Spin Conversion Science” (Grant No. JP26103005), Grant-in-Aid for Scientific Research (S) (Grant No. JP19H05600) from JSPS KAKENHI, Japan.
\end{acknowledgments}
\section{Introduction}
A complex projective structure on a compact Riemann surface $X$ of negative Euler characteristic is a maximal atlas of holomorphic charts with values in $\mathbb{CP}^{1}$ whose transition functions are given by restrictions of M\"{o}bius transformations. Varying the Riemann surface structure on the underlying smooth surface $\Sigma$, complex projective structures collect into the moduli space of marked complex projective structures $\mathcal{CP}_{\Sigma}$ which is a holomorphic affine bundle modelled on the cotangent bundle of Teichm\"{u}ller space $\mathcal{T}_{\Sigma}.$
Furthermore, the moduli space admits a holonomy map
\begin{align}
\mathcal{CP}_{\Sigma}\rightarrow
\textnormal{Hom}(\pi_{1}(\Sigma), \textnormal{PSL}_{2}(\mathbb{C}))// \textnormal{PSL}_{2}(\mathbb{C})
\end{align}
which is a local bi-holomorphism. In this paper, we will generalize all of these results to the moduli space of $\mathrm{G}$-opers where $\mathrm{G}$ is a complex simple Lie group of adjoint type.
Given a reductive complex Lie group $\mathrm{G},$ Beilinson-Drinfeld \cite{BD91} introduced a higher rank generalization of complex projective structures, called $\mathrm{G}$-opers, which share many of their interesting properties. For $\textnormal{SL}(n, \mathbb{C})$, these objects were previously studied in the arena of $n$-th order linear ordinary differential equations by Teleman \cite{TEL59}.
A $\mathrm{G}$-oper on a Riemann surface $X$ is a triple $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega)$, where $E_{\mathrm{G}}$ is a holomorphic principal $\mathrm{G}$-bundle on $X$, $E_{\mathrm{B}}$ is a holomorphic reduction to a Borel subgroup $\mathrm{B}<\mathrm{G}$, and $\omega$ is a holomorphic flat connection on $E_{\mathrm{G}}$ which satisfies a certain non-degeneracy condition with respect to the sub-bundle $E_{\mathrm{B}}$. In the case that $\mathrm{G}=\textnormal{PSL}_{2}(\mathbb{C}),$ the notion of a $\mathrm{G}$-oper on $X$ reduces to the standard encoding of a complex projective structure on $X$ via a holomorphic flat $\mathbb{CP}^{1}$-bundle over $X$ equipped with a holomorphic section transverse to the flat structure.
Fixing a connected, closed Riemann surface $X$ of genus at least two and a complex simple Lie group $\mathrm{G}$ of adjoint type, the space of $\mathrm{G}$-opers on $X$ has a (non-unique) parameterization by the \emph{Hitchin base}
\begin{align}\label{base param}
\mathcal{B}_{X}(\mathrm{G}):=\bigoplus_{i=1}^{\ell} \textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1}).
\end{align}
Here, $\mathcal{K}$ is the canonical sheaf of holomorphic one forms on $X$ and the integers
$1=m_{1}\leq m_{2}\leq \cdots \leq m_{\ell}$ are the exponents of the Lie algebra $\mathfrak{g}$ of $\mathrm{G}.$ The situation for general semi-simple groups $\mathrm{G}$ is not very different, and amounts to taking products and discrete phenomena (see \cite{BD91}). To avoid various Lie-theoretic subtleties, in this paper we focus on the case of simple groups of adjoint type.
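As a consistency check on \eqref{base param} (a standard Riemann-Roch computation, recorded here for the reader's convenience), the dimension of the Hitchin base can be computed explicitly:

```latex
% Since deg K^{m+1} = (m+1)(2g-2) > 2g-2 for m >= 1 and g >= 2,
% Riemann-Roch gives dim H^0(X, K^{m_i+1}) = (2 m_i + 1)(g-1), so
\begin{equation*}
 \dim_{\mathbb{C}} \mathcal{B}_{X}(\mathrm{G})
 = \sum_{i=1}^{\ell} (2m_{i}+1)(g-1)
 = (g-1)\dim_{\mathbb{C}} \mathfrak{g},
\end{equation*}
% using 2(m_1 + ... + m_l) + l = dim g. For G = PSL_2(C) this recovers
% dim H^0(X, K^2) = 3g - 3.
```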
By way of the parameterization \eqref{base param} by the Hitchin base, the space of $\mathrm{G}$-opers on $X$ acquires the structure of a complex manifold, and as with complex projective structures, there is a \emph{holonomy} map to the space of gauge equivalence classes of $C^{\infty}$-flat $\mathrm{G}$-bundles on the smooth surface $\Sigma$ underlying the Riemann surface $X.$
When $X$ is a closed, connected Riemann surface of genus at least two, Beilinson-Drinfeld proved \cite{BD91}, \cite{BD05} (see also \cite{WEN16}) that the holonomy map is a proper, holomorphic Lagrangian embedding for the Atiyah-Bott-Goldman \cite{AB83}\cite{GOL84} complex symplectic structure on the moduli space of flat reductive $\mathrm{G}$-bundles.
The primary goal of this paper is to extend the above results to the setting where the Riemann surface is allowed to vary, using the theory of complex projective structures as a guide.
Let $\Sigma$ be a closed, oriented, smooth connected surface of genus at least two. A $\Sigma$-marked Riemann surface is a pair $X:=(\Sigma, J)$ comprising a complex structure $J$ on $\Sigma$ whose induced orientation agrees with the ambient orientation of $\Sigma.$ Two $\Sigma$-marked Riemann surfaces are isomorphic if they are bi-holomorphic via a diffeomorphism of $\Sigma$ isotopic to the identity. The Teichm\"{u}ller space $\mathcal{T}_{\Sigma}$ is the space parameterizing isomorphism classes of $\Sigma$-marked Riemann surfaces: $\mathcal{T}_{\Sigma}$ is a complex manifold of dimension $3g-3$ where $g$ is the genus of $\Sigma.$
Abusing notation, we sometimes refer to an element of $\mathcal{T}_{\Sigma}$ as a $\Sigma$-marked Riemann surface, and we call a $\mathrm{G}$-oper on a $\Sigma$-marked Riemann surface a $\Sigma$-marked $\mathrm{G}$-oper.
Our first theorem constructs the moduli space of $\Sigma$-marked $\mathrm{G}$-opers as a complex manifold.
\begin{theorem}\label{def space}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. Then, there is a Hausdorff complex manifold $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ parameterizing isomorphism classes of $\Sigma$-marked $\mathrm{G}$-opers and a holomorphic submersion
\begin{align}
\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}.
\end{align}
The space $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ is a fine moduli space.
\end{theorem}
To prove this result, we take a complex-analytic approach by first constructing universal Kuranishi families \cite{KUR62} deforming a given $\Sigma$-marked $\mathrm{G}$-oper. The bases of these universal families are used to simultaneously construct a topology and a coordinate atlas on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$
Varying $X$ over the Teichm\"{u}ller space of $\Sigma$ gives rise to a holomorphic vector bundle $\mathcal{B}_{\Sigma}(\mathrm{G})$ over $\mathcal{T}_{\Sigma},$ whose fiber over $X$ is the associated Hitchin base $\mathcal{B}_{X}(\mathrm{G})$. Our next theorem, which is an analogue of the parameterization result of Beilinson-Drinfeld \cite{BD91}, establishes the relation between the moduli space of $\mathrm{G}$-opers $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ and the bundle $\mathcal{B}_{\Sigma}(\mathrm{G}).$
\begin{theorem}\label{id h base}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. There is a (natural in $\mathrm{G}$) commutative diagram
\begin{center}
\begin{tikzcd}
\mathcal{CP}_{\Sigma} \arrow{r} \arrow{d}
& \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{dl} \\
\mathcal{T}_{\Sigma}.
\end{tikzcd}
\end{center}
Furthermore, every smooth section $s$ of the projection
\begin{align}
\mathcal{CP}_{\Sigma} \rightarrow \mathcal{T}_{\Sigma}
\end{align}
induces a diffeomorphism
\begin{align}
\phi_{s}:\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{B}_{\Sigma}(\mathrm{G})
\end{align}
commuting with the projections to $\mathcal{T}_{\Sigma}.$ If $s$ is holomorphic, then $\phi_{s}$ is a bi-holomorphism.
\end{theorem}
This theorem shows that the holomorphic fiber bundle $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ has a structure that resembles an affine bundle over $\mathcal{T}_{\Sigma}$ whose underlying vector bundle is $\mathcal{B}_{\Sigma}(\mathrm{G}).$ For $\mathrm{G}=\textnormal{PSL}_{2}(\mathbb{C})$, this is literally true, and it is well known that $\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\simeq \mathcal{CP}_{\Sigma}$ is a holomorphic affine bundle over $\mathcal{T}_{\Sigma}$ modeled on the cotangent bundle $T^{\star}\mathcal{T}_{\Sigma}\simeq \mathcal{B}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))$ of the Teichm\"{u}ller space of $\Sigma.$
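For the reader's convenience we recall the classical mechanism behind this affine structure in the rank-one case; the Schwarzian formula below is standard and is recorded only as an aside:

```latex
% Two projective structures on the same X, with developing maps f_1, f_2,
% differ by a holomorphic quadratic differential built from the Schwarzian
% derivative S(f) = (f''/f')' - (1/2)(f''/f')^2:
\begin{equation*}
 \big(S(f_{1})-S(f_{2})\big)\,dz^{2}\;\in\;\textnormal{H}^{0}(X,\mathcal{K}^{2}),
\end{equation*}
% and this difference realizes the affine action of
% B_X(PSL_2(C)) = H^0(X, K^2) on the fiber of CP_Sigma over X.
```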
For general $\mathrm{G}$, instead of being able to subtract arbitrary elements in a fiber over $X\in \mathcal{T}_{\Sigma}$ of $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}$, one can only subtract an arbitrary element of the fiber from a $\textnormal{PSL}_{2}(\mathbb{C})$-oper on $X.$ This is intimately related to a fact we shall discuss later, namely that $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ admits a constant rank closed holomorphic $2$-form which is degenerate if $\textnormal{rk}(G)>1$, and for which the fibers of the projection $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}$ are maximal isotropic sub-manifolds.
By the results in \cite{LS17}, if $(M,\omega)$ is a symplectic manifold and $N$ is a smooth manifold, a locally trivial fiber bundle $(M, \omega)\rightarrow N$ with Lagrangian fibers has the property that the fibers have a canonical flat affine structure.\footnote{This fact was known long before the paper \cite{LS17}, but the discussion in \cite{LS17} makes this issue very explicit. Furthermore, the content of \cite{LS17} is closely related to the circumstances addressed in this paper.} For pre-symplectic manifolds which are the total space of a maximally isotropic fibration, a weaker result is true.
In light of this, the strange \emph{partially affine} structure on the fibers of $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}$ for general $\mathrm{G}$ is ultimately a reflection of the fact that $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}$ is a maximally isotropic fibration. We hope that this discussion relieves some of the ``suspiciousness'' regarding the bijection between $\mathrm{G}$-opers on $X$ and $\mathcal{B}_{X}(\mathrm{G})$ referred to in \cite[pg.~21]{BD05}.
Now we move on to a discussion of the forgetful map from the moduli space of $\Sigma$-marked $\mathrm{G}$-opers to the space of $C^{\infty}$-flat $\mathrm{G}$-bundles on $\Sigma$. This is the map sending a $\Sigma$-marked $\mathrm{G}$-oper $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ to the $C^{\infty}$-flat $\mathrm{G}$-bundle $(E_{\mathrm{G}}, \omega)$ over $\Sigma$.
Let $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ be the set of isomorphism classes of smooth, irreducible flat $\mathrm{G}$-bundles over $\Sigma$ with isotropy equal to the center of $\mathrm{G}$. Utilizing the results of Goldman \cite{GOL84} and standard techniques of deformation theory, $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ acquires the structure of a complex symplectic manifold.
The next theorem generalizes the (independent) classical result of Earle \cite{EAR81}, Hejhal \cite{HEJ78} and Hubbard \cite{HUB81} concerning the local injectivity of the holonomy map from the moduli space of marked complex projective structures on $\Sigma$ to the space of flat bundles $\mathcal{F}_{\Sigma}^{\star}(\textnormal{PSL}_{2}(\mathbb{C})).$
\begin{theorem}\label{immersion}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. Then, the holonomy map
\begin{align}
\textnormal{H}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{F}_{\Sigma}^{\star}(\mathrm{G})
\end{align}
is a holomorphic immersion.
\end{theorem}
Our proof is similar in spirit to the proof of Hubbard \cite{HUB81}, with sheaf cohomology playing a central role. In particular, we identify a complex of sheaves $\mathcal{A}^{\bullet}$ on $X$ whose hyper-cohomology governs the deformations of $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$. We identify the derivative of the map
\begin{align}\label{immm}
\textnormal{H}:\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{F}_{\Sigma}^{\star}(\mathrm{G})
\end{align}
at the given $\mathrm{G}$-oper with the induced map in hyper-cohomology for a suitable morphism $\mathcal{A}^{\bullet}\rightarrow \mathcal{B}^{\bullet},$ where $\mathcal{B}^{\bullet}$ is the holomorphic de Rham complex of the holomorphic flat bundle $(E_{\mathrm{G}}, \omega, X).$ Working in the Dolbeault resolution, a differential-geometric calculation shows that the resulting map between hyper-cohomology groups is complex-linear and injective.
We now mention the translation of this result to the space of $\mathrm{G}$-valued homomorphisms of the fundamental group of $\Sigma.$ Let $\widetilde{\Sigma}\rightarrow \Sigma$ be a fixed universal covering of $\Sigma$ with deck group $\pi.$ For our purposes, the deck group $\pi,$ which is abstractly isomorphic to the fundamental group of $\Sigma,$ is a more convenient model.
Taking the holonomy of a flat connection yields a bi-holomorphism
\begin{align}\label{hol iso}
\textnormal{hol}: \mathcal{F}_{\Sigma}^{\star}(\mathrm{G})\rightarrow \textnormal{Hom}^{\star}(\pi, G)/G,
\end{align}
where $\textnormal{Hom}^{\star}(\pi, G)/\mathrm{G}$ is the space of conjugacy classes of irreducible\footnote{A homomorphism $\rho: \pi\rightarrow \mathrm{G}$ is irreducible if the image of $\rho$ does not lie in any proper parabolic subgroup $P<G.$} homomorphisms $\rho: \pi\rightarrow \mathrm{G}$ with centralizer equal to the center of $G.$ Therefore, Theorem \ref{immersion} in combination with \eqref{hol iso} implies that the map
\begin{align}
\textnormal{hol}\circ \textnormal{H}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow
\textnormal{Hom}^{\star}(\pi, G)/G
\end{align}
is a holomorphic immersion.
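As an orientation, we record the standard dimension count for the target (due to Goldman \cite{GOL84}); the comparison below is an aside and is not used in the proofs:

```latex
% Smooth part of the character variety for a genus-g surface, g >= 2:
\begin{equation*}
 \dim_{\mathbb{C}} \textnormal{Hom}^{\star}(\pi, G)/G = (2g-2)\dim_{\mathbb{C}}\mathrm{G},
\end{equation*}
% while dim Op_Sigma(G) = (3g-3) + (g-1) dim G from the parameterization
% by B_Sigma(G); the two agree exactly when dim G = 3, i.e. for
% G = PSL_2(C), and for higher rank the image of hol o H has positive
% codimension.
```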
Moving to symplectic geometry, Theorem \ref{immersion} equips the moduli space of $\Sigma$-marked $\mathrm{G}$-opers $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ with a closed, holomorphic $2$-form of constant rank defined as the pull-back of the Atiyah-Bott-Goldman (see \cite{GOL84}) symplectic form on $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ via the holomorphic immersion \eqref{immm}.
\begin{theorem}\label{pre sym}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type.
The space $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ admits a closed holomorphic $2$-form $\tau_{\mathrm{G}}$ of constant rank for which the fibers of the projection to $\mathcal{T}_{\Sigma}$ are maximal isotropic sub-manifolds.
\end{theorem}
A closed holomorphic $2$-form of constant rank on a complex manifold is called a complex pre-symplectic form, so Theorem \ref{pre sym} equips $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ with a complex pre-symplectic form.
We finish the discussion with the following theorem concerning the behavior of the isomorphism in Theorem \ref{id h base} with respect to the pre-symplectic structure on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$
\begin{theorem}
There is a complex pre-symplectic form $\omega_{\mathcal{B}_{G}}$ on $\mathcal{B}_{\Sigma}(\mathrm{G})$ such that, given any holomorphic Lagrangian section $s$ of
\begin{align}
\mathcal{CP}_{\Sigma} \rightarrow \mathcal{T}_{\Sigma},
\end{align}
the induced bi-holomorphism
\begin{align}
\phi_{s}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{B}_{\Sigma}(\mathrm{G})
\end{align}
satisfies $\phi_{s}^{\star}\omega_{\mathcal{B}_{G}}=\sqrt{-1}\tau_{\mathrm{G}},$ where $\tau_{\mathrm{G}}$ is the complex pre-symplectic form from Theorem \ref{pre sym}.
\end{theorem}
This generalizes a result of Kawai \cite{KAW96}, which was later clarified by
Loustau \cite{LOU15}, to the setting of $\mathrm{G}$-opers.
Lastly, we remark that all of the constructions in this paper are invariant under the mapping class group $\textnormal{Mod}(\Sigma)$ of isotopy classes of orientation-preserving diffeomorphisms of $\Sigma.$ In particular, there is a holomorphic action of $\textnormal{Mod}(\Sigma)$ on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ lifting the usual action on $\mathcal{T}_{\Sigma}$, and the \emph{holonomy map} to $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ is
$\textnormal{Mod}(\Sigma)$-equivariant.
\subsection{Conventions and content}
We close this introduction with some comments about notational conventions.
In using the dictionary between locally free sheaves and holomorphic vector bundles, a calligraphic letter such as $\mathcal{F}$ denotes a locally free sheaf of rank $n>0$, and the corresponding Roman letter $F$ denotes the associated holomorphic vector bundle of rank $n.$
The main technical tool in this paper is the theory of hyper-cohomology of complexes of sheaves. For readers unfamiliar with this theory, we recommend the beautiful book of Voisin \cite{VOI07} for an elementary introduction.
\subsection{Roadmap}
In Section \ref{G theory}, we rapidly review the theory of holomorphic connections on principal $\mathrm{G}$-bundles and fix notation which will be used throughout the paper.
Section \ref{g opers} serves as an introduction to $\mathrm{G}$-opers. In particular, we give two equivalent definitions, one of which is the original definition of Beilinson-Drinfeld \cite{BD91}, and the second of which is a translation of this definition which is more in the spirit of the theory of locally homogeneous geometric structures. After these definitions, we explain the connection between complex projective structures and $\mathrm{G}$-opers.
In Section \ref{models}, we survey the basic structure theory of $\mathrm{G}$-opers on a fixed Riemann surface $X.$ Most importantly, we construct explicit differential-geometric models of $\mathrm{G}$-opers, upon which all of our calculations depend.
We also review the formal side of the theory; the results here are all essentially due to Beilinson-Drinfeld \cite{BD91}, \cite{BD05}. However, our point of view is a bit different, and we hope that it is more accessible to the differential-geometrically minded reader.
In Section \ref{families}, we build the machinery which allows us to prove that the moduli space of $\mathrm{G}$-opers admits a complex manifold structure. This includes a discussion of Kuranishi families and the infinitesimal deformation theory of $\mathrm{G}$-opers. We close Section \ref{families} with a proof of the identifications of the moduli space of $\mathrm{G}$-opers with the bundle of Hitchin bases $\mathcal{B}_{\Sigma}(\mathrm{G}).$
In the final Section \ref{hol map pre}, we prove that the map from the moduli space of $\mathrm{G}$-opers to the moduli space of flat $\mathrm{G}$-bundles over $\Sigma$ is a holomorphic immersion. Using this result, we prove that the moduli space of $\mathrm{G}$-opers admits a natural holomorphic pre-symplectic form. Finally, we show that there is a family of identifications of the moduli space of $\mathrm{G}$-opers with $\mathcal{B}_{\Sigma}(\mathrm{G})$ that are complex pre-symplectomorphisms for a natural complex pre-symplectic form on $\mathcal{B}_{\Sigma}(\mathrm{G}).$
\textbf{Acknowledgements}: We deeply thank David Dumas, Bill Goldman, Brice Loustau and Richard Wentworth for many years of conversations. Furthermore, we are grateful to Patrick Brosnan for explaining how to use the Grothendieck-Riemann-Roch theorem.
\section{Gauge theory preliminaries}\label{G theory}
We begin with a rapid introduction to holomorphic flat $\mathrm{G}$-bundles and holomorphic reductions of structure.
For the purposes of these definitions, $\mathrm{G}$ may be taken to be any complex Lie group with Lie algebra $\mathfrak{g}$. Let $E_{\mathrm{G}}$ be a holomorphic, right principal $\mathrm{G}$-bundle over a Riemann surface $X.$ For $g\in G,$ let $R_{g}: E_{\mathrm{G}}\rightarrow E_{\mathrm{G}}$ denote the holomorphic right $\mathrm{G}$-action.
\begin{definition}
A holomorphic connection on $E_{\mathrm{G}}$ is a holomorphic $1$-form
\begin{align}
\omega: TE_{\mathrm{G}}\rightarrow \mathfrak{g},
\end{align}
which satisfies:
\begin{enumerate}
\item $R_{g}^{\star}\omega=\textnormal{Ad}(g^{-1})\circ \omega$
\item If $X\in \mathfrak{g}$ and $X^{\sharp}$ is the $\mathrm{G}$-invariant vertical vector field on $E_{\mathrm{G}}$ induced by the infinitesimal $\mathrm{G}$-action, then $\omega(X^{\sharp})=X.$
\end{enumerate}
\end{definition}
If $L: G\rightarrow \textnormal{GL}[V]$ is a holomorphic representation of $\mathrm{G}$ on a complex vector space $V$, we denote the associated holomorphic vector bundle by $E_{G}[V].$ A $V$-valued holomorphic differential $k$-form $\overline{\beta}$ on $E_{\mathrm{G}}$ is called $\mathrm{G}$-equivariant
if
\begin{align}
R_{g}^{\star}\overline{\beta}=L(g^{-1})\circ \overline{\beta}
\end{align}
for all $g\in G.$
The $k$-form $\overline{\beta}$ is \emph{horizontal} if the interior product with any vertical tangent vector $Y^{\sharp}$ on $E_{\mathrm{G}}$ satisfies
\begin{align}
\iota_{Y^{\sharp}}\overline{\beta}=0.
\end{align}
Given any $\mathrm{G}$-equivariant, horizontal holomorphic $1$-form $\overline{\beta}$, there exists a unique
$\beta\in \textnormal{H}^{0}(X,\mathcal{K}\otimes \mathcal{E}_{G}[V])$ whose pullback to $E_{\mathrm{G}}$ is equal to $\overline{\beta}.$ Here, $\mathcal{K}$ is the canonical sheaf of germs of holomorphic $1$-forms on $X.$ Throughout this article, we will implicitly identify horizontal, equivariant forms $\overline{\beta}$ with their basic companion $\beta$.
The curvature of a holomorphic connection $\omega$ is the horizontal, $\mathrm{G}$-equivariant holomorphic $2$-form
\begin{align}
F(\omega):=d\omega+\frac{1}{2}[\omega,\omega].
\end{align}
In the above definition, the bracket $[\omega, \omega]$ is the result of tensoring the wedge product of $1$-forms with the Lie bracket on $\mathfrak{g}.$ The curvature descends to a global section $F(\omega)\in \textnormal{H}^{0}(X, \Omega_{X}^{2}\otimes \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]).$ A holomorphic connection $\omega$ is \emph{flat} if $F(\omega)=0.$
Since $X$ is a Riemann surface, the vanishing of any holomorphic $2$-form on $X$ implies that a holomorphic connection $\omega$ on a holomorphic principal $\mathrm{G}$-bundle over $X$ is automatically flat.
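As an illustration (a standard computation, spelled out here for concreteness), consider the trivial bundle $E_{\mathrm{G}}=X\times \mathrm{G}.$ A holomorphic connection is then determined by a holomorphic $\mathfrak{g}$-valued $1$-form $A$ on $X$:

```latex
% Holomorphic connection on the trivial bundle X x G:
% theta denotes the Maurer-Cartan form of G and A is a
% holomorphic g-valued 1-form on X.
\begin{align*}
\omega_{(x,g)} &= \textnormal{Ad}(g^{-1})\circ A_{x}+\theta_{g},\\
F(\omega) &= dA+\tfrac{1}{2}[A,A]=0,
\end{align*}
% since dA and [A,A] are holomorphic 2-forms on the Riemann
% surface X, and every holomorphic 2-form on X vanishes.
```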
If $H<\mathrm{G}$ is a closed complex Lie subgroup and $E_{H}$ is a holomorphic reduction of structure of the bundle $E_{\mathrm{G}}$ to $H,$ then the composition
\begin{align}
TE_{H}\rightarrow TE_{\mathrm{G}} \xrightarrow{\omega} \mathfrak{g}\rightarrow \mathfrak{g}/\mathfrak{h}
\end{align}
yields
a horizontal, $H$-equivariant $1$-form
\begin{align}
\overline{\Psi}: TE_{H}\rightarrow \mathfrak{g}/\mathfrak{h}.
\end{align}
Since $\overline{\Psi}$ is horizontal and $H$-equivariant, there is a unique global section $\Psi\in \textnormal{H}^{0}(X, \mathcal{K}\otimes \mathcal{E}_{H}[\mathfrak{g}/\mathfrak{h}])$ whose pullback to $E_{H}$ agrees with $\overline{\Psi}.$ The section $\Psi$ is called the \emph{second fundamental form} of $\omega$ relative to the reduction $E_{H}.$
Next, let $\mathcal{O}\subset \mathfrak{g}/\mathfrak{h}$ be an $H\times \mathbb{C}^{*}$-invariant subset.
\begin{definition}\label{con position}
Let $E_{H}$ be a holomorphic reduction to a subgroup $H<\mathrm{G}$ of a holomorphic flat bundle $(E_{\mathrm{G}}, \omega).$ Then we say that $\omega$ has relative position $\mathcal{O}$, written $\textnormal{pos}_{E_{H}}(\omega)=\mathcal{O},$ if for all non-zero tangent vectors $v\in TX,$
the second fundamental form satisfies $\Psi(v)\in E_{H}[\mathcal{O}].$
\end{definition}
Let $\mathrm{G}$ be a connected complex semi-simple Lie group. Given a holomorphic principal $\mathrm{G}$-bundle $E_{\mathrm{G}}$ over a complex manifold $M$ equipped with a holomorphic connection $\omega,$ the pair $(E_{\mathrm{G}}, \omega)$ is \textit{irreducible} if for every proper parabolic subgroup $P<\mathrm{G}$ and every holomorphic reduction $E_{P}$ of $E_{\mathrm{G}}$, the second fundamental form of $\omega$ relative to $E_{P}$ is non-vanishing.
\section{$\mathrm{G}$-opers}\label{g opers}
In this section, we define $\mathrm{G}$-opers on a Riemann surface $X$ where $\mathrm{G}$ is a connected complex semi-simple Lie group.
\subsection{Lie theory preliminaries}
Before moving forward to the definition of $\mathrm{G}$-opers, we need some Lie-theoretic preliminaries. Choose a Borel subgroup $\mathrm{B}<\mathrm{G}$ and the corresponding Lie sub-algebra $\mathfrak{b}<\mathfrak{g}.$ Furthermore, choose a Cartan subgroup $\mathrm{H}<\mathrm{B}.$
There is an $\mathrm{H}$-invariant Lie algebra grading
\begin{align}\label{grading}
\mathfrak{g}\simeq \bigoplus_{i=-K}^{K}\mathfrak{g}_{i}
\end{align}
called the \emph{height grading}.
The height grading \eqref{grading} defines a $\mathrm{B}$-invariant filtration
\begin{align}
\mathfrak{g}^{K}\subset \mathfrak{g}^{K-1}\subset ...\subset \mathfrak{g}^{0}\subset \mathfrak{g}^{-1}\subset...\subset \mathfrak{g}^{-K}=\mathfrak{g}
\end{align}
with
\begin{align}
\mathfrak{g}^{j}=\bigoplus_{i=j}^{K} \mathfrak{g}_{i}
\end{align}
for $-K\leq j\leq K.$ In particular, $\mathfrak{g}^{0}=\mathfrak{b}.$
The induced filtration
\begin{align}\label{g filtration}
\mathfrak{g}^{-1}/\mathfrak{b}\subset\mathfrak{g}^{-2}/\mathfrak{b}\subset...\subset \mathfrak{g}/\mathfrak{b}
\end{align}
is $\mathrm{B}$-invariant and independent of the choice of Cartan sub-algebra.
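For example, take $\mathfrak{g}=\mathfrak{sl}_{2}(\mathbb{C})$ with $\mathfrak{b}$ the upper triangular Borel sub-algebra (a standard example, recorded here for orientation). In this case $K=1$ and the grading reads:

```latex
% Height grading of sl_2(C): the graded pieces are spanned by
% the standard basis vectors f, h, e.
\begin{align*}
\mathfrak{g}_{-1}&=\mathbb{C}f, &
\mathfrak{g}_{0}&=\mathbb{C}h, &
\mathfrak{g}_{1}&=\mathbb{C}e,\\
f&=\begin{pmatrix}0&0\\1&0\end{pmatrix}, &
h&=\begin{pmatrix}1&0\\0&-1\end{pmatrix}, &
e&=\begin{pmatrix}0&1\\0&0\end{pmatrix},
\end{align*}
% with induced filtration
%   g^{1} = C e  \subset  g^{0} = b  \subset  g^{-1} = g,
% so that g^{-1}/b = g/b is one-dimensional, spanned by the
% image of f.
```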
In terms of the associated flag variety $\mathrm{G}/\mathrm{B}$, there is a $\mathrm{G}$-equivariant isomorphism
\begin{align}\label{t bundle iso}
T(\mathrm{G}/\mathrm{B})\simeq G\times_{B} \mathfrak{g}/\mathfrak{b},
\end{align}
where $T(\mathrm{G}/\mathrm{B})$ is the tangent bundle of $\mathrm{G}/\mathrm{B}$ and $\mathrm{B}$ acts on $\mathfrak{g}/\mathfrak{b}$ via the adjoint action.
Using the isomorphism \eqref{t bundle iso}, the filtration \eqref{g filtration} induces a filtration
\begin{align}\label{tan fil}
T^{-1}(\mathrm{G}/\mathrm{B})\subset T^{-2}(\mathrm{G}/\mathrm{B})\subset...\subset T(\mathrm{G}/\mathrm{B})
\end{align}
of the tangent bundle of $\mathrm{G}/\mathrm{B}.$
The following is an important basic fact which we will use throughout.
\begin{theorem}\label{open orbit}
There is a unique dense, open $\mathrm{B}$-orbit $\mathcal{O}\subset \mathfrak{g}^{-1}/\mathfrak{b}$ with respect to the adjoint $\mathrm{B}$-action.
\end{theorem}
\iffalse
\begin{proof}
The centralizer $\mathfrak{h}<\mathfrak{b}$ of the regular semi-simple element $x\in \mathfrak{b}$ defining the height grading \eqref{grading} is a Cartan sub-algebra, and the $-1$-graded piece $\mathfrak{g}_{-1}$ in \eqref{grading} is the direct sum of the negative simple root spaces induced by the root space decomposition of $\mathfrak{g}$ relative to $\mathfrak{h}.$
The subset $\mathcal{O}_{-1}\subset \mathfrak{g}_{-1}$ consisting of elements whose projection onto every negative simple root space is non-zero forms a dense, open $H$-invariant subset of $\mathfrak{g}_{-1}$, and the image of the projection
\begin{align}
\mathcal{O}_{-1}\rightarrow \mathfrak{g}^{-1}/\mathfrak{b}
\end{align}
is a dense, open $\mathrm{B}$-orbit. This completes the proof.
\end{proof}
\fi
\textbf{Remark:} By the previous discussion (see \eqref{t bundle iso}), this open orbit corresponds to a sub-fiber bundle
\begin{align}\label{orbit tangent}
\mathcal{O}_{\mathrm{G}/\mathrm{B}}\subset T^{-1}(\mathrm{G}/\mathrm{B}),
\end{align}
whose fibers are open, $\mathbb{C}^{\star}$-invariant subsets of
the vector bundle $T^{-1}(\mathrm{G}/\mathrm{B}).$
\begin{definition}\label{imm position}
Let $Y$ be a Riemann surface and $f: Y\rightarrow \mathrm{G}/\mathrm{B}$ a holomorphic immersion. Then we say that $f$ has position $\mathcal{O}$ if for all non-zero vectors $v\in TY$
the differential satisfies $df(v)\in \mathcal{O}_{\mathrm{G}/\mathrm{B}}.$
\end{definition}
\subsection{$\Sigma$-marked $\mathrm{G}$-opers}
Let $\Sigma$ be a closed, connected, oriented smooth surface of genus at least two. A $\Sigma$-marked Riemann surface $X$ is a pair $X:=(\Sigma, J)$ where $J$ is a complex structure on $\Sigma$ which induces the ambient orientation of $\Sigma.$
For the following, recall Definition \ref{con position}, the open orbit from Theorem \ref{open orbit}, and fix a Borel subgroup $\mathrm{B}<\mathrm{G}.$
\begin{definition}
A $\Sigma$-marked $\mathrm{G}$-oper is a $4$-tuple $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ where
\begin{enumerate}
\item $X$ is a $\Sigma$-marked Riemann surface.
\item $(E_{\mathrm{G}},\omega)$ is a holomorphic flat $\mathrm{G}$-bundle on $X.$
\item $E_{\mathrm{B}}$ is a holomorphic reduction of $E_{\mathrm{G}}$ to $\mathrm{B}<\mathrm{G}.$
\item The relative position of $\omega$ satisfies $\textnormal{pos}_{E_{\mathrm{B}}}(\omega)=\mathcal{O}.$
\end{enumerate}
\end{definition}
Now we introduce the notion of isomorphism of $\Sigma$-marked $\mathrm{G}$-opers.
\begin{definition}
Let $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ and $(F_{\mathrm{G}}, F_{\mathrm{B}}, \eta, Y)$ be a pair of $\Sigma$-marked $\mathrm{G}$-opers. An isomorphism is a Cartesian diagram
\[
\begin{tikzcd}
E_{\mathrm{G}}\arrow{r}{\phi} \arrow{d} &
F_{\mathrm{G}} \arrow{d} \\
X \arrow{r}{f} &
Y,
\end{tikzcd}
\]
where $\phi$ is an isomorphism of holomorphic principal $\mathrm{G}$-bundles satisfying $\phi^{\star}\eta=\omega$ and
\begin{align}
\phi |_{E_{\mathrm{B}}}: E_{\mathrm{B}}\rightarrow F_{\mathrm{B}}
\end{align}
is an isomorphism of principal $\mathrm{B}$-bundles. Furthermore, $f:X\rightarrow Y$ is a biholomorphism whose underlying smooth map $f: \Sigma\rightarrow \Sigma$ is isotopic to the identity.
\end{definition}
This defines the category/groupoid $\widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})$\footnote{As we have defined it, the collection of objects in this category is not a set. This could be remedied in various ways, e.g. by working in a Grothendieck universe, or by restricting the underlying sets of the principal bundles appearing. We will make no further mention of this issue.} of $\Sigma$-marked $\mathrm{G}$-opers. We denote by $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ the set of isomorphism classes of $\Sigma$-marked $\mathrm{G}$-opers. We will call $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ the moduli space of $\Sigma$-marked $\mathrm{G}$-opers.
The Teichm\"{u}ller groupoid $\widetilde{\mathcal{T}}_{\Sigma}$ is the category whose objects are $\Sigma$-marked Riemann surfaces $X$ and morphisms $f: X\rightarrow Y$ are biholomorphisms whose underlying $C^{\infty}$-map $f:\Sigma\rightarrow \Sigma$ is isotopic to the identity. There is an obvious full functor\footnote{We will see later that this functor is faithful if and only if $\mathrm{G}$ is of adjoint type.}
\begin{align}
\widetilde{\pi}: \widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})\rightarrow \widetilde{\mathcal{T}}_{\Sigma},
\end{align}
which descends to a set map
\begin{align}
\pi: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma},
\end{align}
where $\mathcal{T}_{\Sigma}$ is the set of isomorphism classes of the groupoid $\widetilde{\mathcal{T}}_{\Sigma}.$
The set $\mathcal{T}_{\Sigma}$ is the Teichm\"{u}ller space of $\Sigma.$ It is a classical fact that $\mathcal{T}_{\Sigma}$ has the structure of a Hausdorff complex manifold of dimension $3g-3$, where $g$ is the genus of $\Sigma.$
We now introduce the equivalent notion of a $\Sigma$-marked \emph{developed} $\mathrm{G}$-oper, which is closer in spirit to the definition of a complex projective structure.
Fix once and for all a universal cover $\widetilde{\Sigma}\rightarrow \Sigma$ and denote the corresponding group of deck transformations by $\pi.$ With respect to our definitions, given a $\Sigma$-marked Riemann surface $X,$ this gives a unique isomorphism of the group of holomorphic deck transformations of the universal covering $\widetilde{X}\rightarrow X$ with $\pi.$ In what follows, we suppress this unique identification.
In the following, recall Definition \ref{imm position}.
\begin{definition}
A $\Sigma$-marked developed $\mathrm{G}$-oper is a triple $(f, \rho, X)$ where
\begin{enumerate}
\item $X$ is a $\Sigma$-marked Riemann surface.
\item $\rho:\pi\rightarrow \mathrm{G}$ is a homomorphism.
\item $f: \widetilde{X}\rightarrow \mathrm{G}/\mathrm{B}$ is a holomorphic immersion satisfying $\textnormal{pos}(f)=\mathcal{O}.$
\end{enumerate}
\end{definition}
Next comes the definition of an isomorphism of $\Sigma$-marked developed $\mathrm{G}$-opers.
\begin{definition}
An isomorphism of $\Sigma$-marked developed $\mathrm{G}$-opers $(f_{1}, \rho_{1}, X_{1})$ and $(f_{2}, \rho_{2}, X_{2})$ is a commutative diagram
\[
\begin{tikzcd}
\widetilde{X}_{1} \arrow{r}{f_{1}} \arrow{d}{\widetilde{h}} &
\mathrm{G}/\mathrm{B} \arrow{d}{L_{g}} \\
\widetilde{X}_{2} \arrow{r}{f_{2}} &
\mathrm{G}/\mathrm{B}.
\end{tikzcd}
\]
where
\begin{itemize}
\item The map $\widetilde{h}: \widetilde{X}_{1}\rightarrow \widetilde{X}_{2}$ is a $\pi$-equivariant biholomorphism such that the induced underlying $C^{\infty}$-map $h: \Sigma\rightarrow \Sigma$ is isotopic to the identity.
\item The right vertical arrow $L_{g}: \mathrm{G}/\mathrm{B} \rightarrow \mathrm{G}/\mathrm{B}$ is left translation by an element $g\in G.$
\end{itemize}
\end{definition}
As before, this defines a groupoid $\widetilde{\mathcal{DO}}\mathfrak{p}_{\Sigma}(\mathrm{G})$ of $\Sigma$-marked developed $\mathrm{G}$-opers, and the corresponding \emph{moduli space} $\mathcal{DO}\mathfrak{p}_{\Sigma}(\mathrm{G})$ of isomorphism classes of $\Sigma$-marked developed $\mathrm{G}$-opers. Note the following fact which comes immediately from the definition.
\begin{proposition}
Suppose $(f_{1}, \rho_{1}, X_{1})$ and $(f_{2}, \rho_{2}, X_{2})$ are isomorphic $\Sigma$-marked developed $\mathrm{G}$-opers. Then there exists $g\in \mathrm{G}$ such that $\rho_{1}=g\,\rho_{2}\,g^{-1}.$
\end{proposition}
The next result is the promised equivalence between our first definition of $\Sigma$-marked $\mathrm{G}$-opers and the latter notion of $\Sigma$-marked developed $\mathrm{G}$-opers: we omit the proof since it is a standard exercise in differential geometry.
\begin{theorem}\label{dop to op equivalence}
There is an equivalence of categories
\begin{align}
\widetilde{\mathcal{D}}:\widetilde{\mathcal{DO}}\mathfrak{p}_{\Sigma}(\mathrm{G}) \rightarrow \widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})
\end{align}
which descends to
a bijection
\begin{align}
\mathcal{D}:\mathcal{DO}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})
\end{align}
\end{theorem}
\textbf{Remark:} There is an obvious action of the group $\textnormal{Diff}^{+}(\Sigma)$ of orientation preserving diffeomorphisms of $\Sigma$ on these categories, and the normal subgroup $\textnormal{Diff}_{0}(\Sigma)$ of diffeomorphisms isotopic to the identity acts via isomorphisms. Therefore, the map $\mathcal{D}$ is equivariant for the induced actions of the mapping class
group $\textnormal{Mod}(\Sigma):=\textnormal{Diff}^{+}(\Sigma)/\textnormal{Diff}_{0}(\Sigma)$ on the corresponding moduli spaces.
\iffalse
\begin{proof}
This is a standard construction in differential geometry, so we only sketch the proof.
Given a developed $\mathrm{G}$-oper $(f,\rho, X),$ consider the commutative diagram
\[
\begin{tikzcd}
f^{\star}G \arrow{d} \arrow{r}
& G \arrow{d} \\
\widetilde{X} \arrow{r}{f}
& \mathrm{G}/\mathrm{B}.
\end{tikzcd}
\]
The bundle $f^{\star}\mathrm{G}$ is a $\pi$-equivariant principal $\mathrm{B}$-bundle over $\widetilde{X}.$ Therefore, $f^{\star}\mathrm{G}$ descends to a principal $\mathrm{B}$-bundle $E_{f,B}$
on $X.$ By extending the structure group to $G,$ we obtain a triple $(E_{f,G}, E_{f,B}, X)$ of a principal $\mathrm{G}$-bundle and holomorphic reduction to $\mathrm{B}$ on $X.$
Furthermore, the Maurer-Cartan form on $\mathrm{G}$ pulls back via $f$ and, again by equivariance, descends to a holomorphic flat connection $\omega_{f}$ on $E_{f,G}.$ Hence, we have defined a tuple $(E_{f,G}, E_{f,B}, \omega_{f}, X)$ from the triple $(f,\rho, X).$ We leave it to the reader to check that the corresponding tuple is a $\mathrm{G}$-oper and that the assignment is functorial.
Furthermore, given $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X),$ using the covering map $p: \widetilde{X}\rightarrow X$ we obtain $(p^{\star}(E_{\mathrm{G}}), p^{\star}(E_{\mathrm{B}}), p^{\star}\omega, \widetilde{X}).$ Since $\widetilde{X}$ is simply connected, there is a canonical $\omega$-flat trivialization
$p^{\star}(E_{\mathrm{G}})\simeq \widetilde{X}\times \mathrm{G}$ and with respect to this trivialization, the reduction
$p^{\star}(E_{\mathrm{B}})$ corresponds to a holomorphic $\rho$-equivariant map
\begin{align}
f: \widetilde{X}\rightarrow \mathrm{G}/\mathrm{B},
\end{align}
where $\rho$ is the holonomy of the flat connection $\omega.$ This defines a $\Sigma$-marked developed $\mathrm{G}$-oper $(f, \rho, X).$ This completes the sketch of the proof.
\end{proof}
\fi
Finally, we quickly describe how these equivalent categories are enhancements of the usual equivalence of categories between $C^{\infty}$-flat $\mathrm{G}$-bundles on $\Sigma$ and homomorphisms $\pi\rightarrow G.$
Let $\widetilde{\mathcal{F}}_{\Sigma}(\mathrm{G})$ be the category of $C^{\infty}$-flat $\mathrm{G}$-bundles on $\Sigma$ and $\textnormal{Hom}(\pi, \mathrm{G})$ the set of $\mathrm{G}$-valued homomorphisms of the group $\pi.$ As in Theorem \ref{dop to op equivalence} there is an equivalence of categories
\begin{align}
\widetilde{\textnormal{hol}}:\textnormal{Hom}(\pi, \mathrm{G}) \rightarrow \widetilde{\mathcal{F}}_{\Sigma}(\mathrm{G}) ,
\end{align}
where we view $\textnormal{Hom}(\pi, \mathrm{G})$ as a transformation groupoid (hence a category) under the action of $\mathrm{G}$ by conjugation.
Consider the forgetful functors
\begin{align}\label{forget 1}
\widetilde{\mathcal{DO}}\mathfrak{p}_{\Sigma}(\mathrm{G})&\rightarrow \textnormal{Hom}(\pi, \mathrm{G}) \\
(f, \rho, X) &\mapsto \rho,
\end{align}
and
\begin{align}\label{forget 2}
\widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})&\rightarrow \widetilde{\mathcal{F}}_{\Sigma}(\mathrm{G}) \\
(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X) &\mapsto (E_{\mathrm{G}}, \omega),
\end{align}
where $\omega$ is the flat $C^{\infty}$-connection on the underlying $C^{\infty}$-bundle $E_{\mathrm{G}}$ which is induced by $\omega$ and the holomorphic structure of $E_{\mathrm{G}}.$
The proof of the following theorem is straightforward.
\begin{theorem}\label{comm diagram opers}
There is a commutative diagram
\begin{center}
\begin{tikzcd}
\widetilde{\mathcal{DO}\mathfrak{p}}_{\Sigma}(\mathrm{G}) \arrow{d} \arrow{r}{\widetilde{\mathcal{D}}}
& \widetilde{\mathcal{O}}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{d} \\
\textnormal{Hom}(\pi, \mathrm{G}) \arrow{r}{\widetilde{\textnormal{hol}}}
& \widetilde{\mathcal{F}}_{\Sigma}(\mathrm{G}),
\end{tikzcd}
\end{center}
where the horizontal arrows are the previously constructed equivalences of categories and the vertical arrows are the functors defined by \eqref{forget 1} and \eqref{forget 2}.
\end{theorem}
\subsection{Opers for $\textnormal{PSL}_{2}(\mathbb{C})$} \label{sl2 opers}
As we have promised to exhibit $\mathrm{G}$-opers as a generalization of complex projective structures on Riemann surfaces, in this section we recall the basic properties of the space of complex projective structures and elucidate the relationship to $\mathrm{G}$-opers.
A complex projective structure on $\Sigma$ is a maximal atlas of charts with values in $\mathbb{CP}^{1}$ whose transition maps are given by restrictions of M\"obius transformations. In particular, any complex projective structure induces a Riemann surface structure on $\Sigma.$ We refer to such a structure as a $\Sigma$-marked complex projective structure.
Given two $\Sigma$-marked complex projective structures $Z_{1}$ and $Z_{2},$ an isomorphism is a smooth map
\begin{align}
h: Z_{1}\rightarrow Z_{2}
\end{align}
whose projective coordinate representation is locally given by a M\"obius transformation, and such that the underlying smooth map $h: \Sigma\rightarrow \Sigma$ is isotopic to the identity. The moduli space of $\Sigma$-marked complex projective structures $\mathcal{CP}_{\Sigma}$ is the set of isomorphism classes of $\Sigma$-marked complex projective structures.
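Explicitly (recalled for the reader's convenience), in an affine coordinate $z$ on $\mathbb{CP}^{1}$ the M\"obius transformations are the maps:

```latex
% Moebius transformations of CP^1 in an affine coordinate z;
% two matrices induce the same map exactly when they differ
% by a sign, so the Moebius group is PSL_2(C).
\begin{align*}
z\mapsto \frac{az+b}{cz+d},\qquad
\begin{pmatrix}a&b\\c&d\end{pmatrix}\in \textnormal{SL}_{2}(\mathbb{C}).
\end{align*}
```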
Equivalently, given a $\Sigma$-marked complex projective structure $Z,$ lifting the structure to the universal cover $\widetilde{Z}$, the coordinate charts globalize to a holomorphic immersion
\begin{align}
f: \widetilde{Z}\rightarrow \mathbb{CP}^{1},
\end{align}
which is equivariant for a homomorphism $\rho: \pi\rightarrow \textnormal{PSL}_{2}(\mathbb{C}).$
For $G=\textnormal{PSL}_{2}(\mathbb{C}),$ the flag variety $\mathrm{G}/\mathrm{B}$ is $\mathrm{G}$-equivariantly isomorphic to the complex projective line $\mathbb{CP}^{1}.$ In this case, the definition of a $\Sigma$-marked developed $\mathrm{G}$-oper yields a triple $(f,\rho, X)$ consisting of a $\Sigma$-marked Riemann surface $X,$ a homomorphism $\rho: \pi \rightarrow \textnormal{PSL}_{2}(\mathbb{C})$, and a $\rho$-equivariant local bi-holomorphism
$f: \widetilde{X}\rightarrow \mathbb{CP}^{1}.$
Therefore, a $\Sigma$-marked developed $\textnormal{PSL}_{2}(\mathbb{C})$-oper $(f,\rho, X)$ is the same as a $\Sigma$-marked complex projective structure; or, in other terminology, a locally homogeneous $(\textnormal{PSL}_{2}(\mathbb{C}), \mathbb{CP}^{1})$ geometric structure on $\Sigma$ in the sense of Ehresmann-Thurston.
Let $(f_{1},\rho_{1}, X), (f_{2}, \rho_{2}, X)$ be two $\Sigma$-marked developed $\textnormal{PSL}_{2}(\mathbb{C})$-opers. Choose a small open set $U\subset \widetilde{X}$ such that $f_{1}|_{U}$ is a biholomorphism onto $f_{1}(U).$
Then,
\begin{align}
f_{2} \circ f_{1}^{-1}: f_{1}(U)\rightarrow \mathbb{CP}^{1}
\end{align}
is a locally injective holomorphic map.
Given any holomorphic map $q: V\rightarrow \mathbb{CP}^{1}$ where $V\subset \mathbb{CP}^{1}$ is an open set, let $j_{x}^{k}(q)$ be the holomorphic $k$-jet of the map $q$ at $x\in V.$
The action of a M\"obius transformation $g\in \textnormal{PSL}(2,\mathbb{C})$ is denoted by $L_{g}: \mathbb{CP}^{1}\rightarrow \mathbb{CP}^{1}.$
A proof of the following proposition may be found in \cite{HUB06}.
\begin{proposition}\label{osc map}
There exists a unique holomorphic map
\begin{align}
\mathrm{O}_{f_{1}, f_{2}}: \widetilde{X}\rightarrow \textnormal{PSL}_{2}(\mathbb{C})
\end{align}
which satisfies
\begin{align}
j_{f_{1}(x)}^{2}(L_{\mathrm{O}_{f_{1},f_{2}}(x)})=j_{f_{1}(x)}^{2}(f_{2}\circ f_{1}^{-1}).
\end{align}
Furthermore, for every $\gamma\in \pi,$
\begin{align}
\mathrm{O}_{f_{1},f_{2}}(\gamma(x))=\rho_{2}(\gamma)\circ \mathrm{O}_{f_{1},f_{2}}(x) \circ \rho_{1}(\gamma^{-1}).
\end{align}
\end{proposition}
In the complex projective structures literature, the map $\mathrm{O}_{f_{1},f_{2}}$ is usually called the \emph{osculating} map.
By Proposition \ref{osc map}, the map
\begin{align}
\widetilde{X}\times \textnormal{PSL}_{2}(\mathbb{C}) &\rightarrow \widetilde{X} \times \textnormal{PSL}_{2}(\mathbb{C}) \\
(x, g) &\mapsto (x, \mathrm{O}_{f_{1},f_{2}}(x)g)
\end{align}
descends to an isomorphism of holomorphic principal $\textnormal{PSL}_{2}(\mathbb{C})$-bundles
\begin{align}
\mathrm{O}_{f_{1}, f_{2}}: \widetilde{X} \times_{\rho_{1}} \textnormal{PSL}_{2}(\mathbb{C}) \rightarrow \widetilde{X} \times_{\rho_{2}} \textnormal{PSL}_{2}(\mathbb{C}).
\end{align}
Let $\{f_{1}, x, e_{1}\}$ be a fixed $\mathfrak{sl}_{2}$-triple in $\mathfrak{sl}_{2}(\mathbb{C})$ such that the span of $\{x, e_{1}\}$ is equal to the upper triangular Borel sub-algebra.
Recalling the $\Sigma$-marked developed $\textnormal{PSL}_{2}(\mathbb{C})$-oper $(f_{1},\rho_{1}, X),$ the locally injective holomorphic map $f_{1}:\widetilde{X}\rightarrow \mathbb{CP}^{1}$ defines a holomorphic reduction of $\widetilde{X}\times_{\rho_{1}} \textnormal{PSL}_{2}(\mathbb{C})$ to the Borel $\mathrm{B}<\mathrm{G}:$
\begin{align}
E_{\mathrm{B}}:=\{ [(x,g)]\in \widetilde{X}\times_{\rho_{1}} \textnormal{PSL}_{2}(\mathbb{C}) \ | \ L_{g^{-1}}\circ f_{1}(x)=e\mathrm{B}\in \textnormal{PSL}_{2}(\mathbb{C})/\mathrm{B}\}.
\end{align}
Note that the definition of $E_{\mathrm{B}}$ is invariant under the \emph{right} $\mathrm{B}$-action on pairs in $\widetilde{X}\times_{\rho_{1}} \textnormal{PSL}_{2}(\mathbb{C}).$
The Borel subgroup acts on $\mathfrak{g}_{1}=\textnormal{span}_{\mathbb{C}}(e_{1}),$
and therefore the associated line bundle $E_{\mathrm{B}}[\mathfrak{g}_{1}]$ is well defined. It is a direct
consequence of the oper condition that $E_{\mathrm{B}}[\mathfrak{g}_{1}]\simeq K.$
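To indicate why, here is a sketch using only the definitions above: for $\mathfrak{sl}_{2}$ the quotient $\mathfrak{g}/\mathfrak{b}$ is one-dimensional and dual, as a $\mathrm{B}$-module, to $\mathfrak{g}_{1},$ while the oper condition says that the second fundamental form $\Psi\in \textnormal{H}^{0}(X, \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}])$ is nowhere vanishing. Hence:

```latex
% A nowhere vanishing holomorphic section trivializes a line
% bundle, so the oper condition gives:
\begin{align*}
\mathcal{O}_{X}\,\simeq\, K\otimes E_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]
\,\simeq\, K\otimes E_{\mathrm{B}}[\mathfrak{g}_{1}]^{-1}
\qquad\Longrightarrow\qquad
E_{\mathrm{B}}[\mathfrak{g}_{1}]\,\simeq\, K.
\end{align*}
```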
Given a $\Sigma$-marked Riemann surface $X,$ let $\widetilde{\mathcal{DO}\mathfrak{p}}_{X}(\textnormal{PSL}_{2}(\mathbb{C}))$ be the fiber over $X$ of the fully faithful functor
\begin{align}
\widetilde{\mathcal{DO}\mathfrak{p}}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow \widetilde{\mathcal{T}}_{\Sigma},
\end{align}
and $\mathcal{DO}\mathfrak{p}_{X}(\textnormal{PSL}_{2}(\mathbb{C}))$ the corresponding set of isomorphism classes.
Then we have the following proposition (see \cite{DUM09}).
\begin{proposition}\label{param sl2}
If $\omega_{\rho_{2}}$ is the canonically defined flat connection on $\widetilde{X} \times_{\rho_{2}} \textnormal{PSL}_{2}(\mathbb{C})$ and $\omega_{\rho_{1}}$ is the canonically defined flat connection on $\widetilde{X} \times_{\rho_{1}} \textnormal{PSL}_{2}(\mathbb{C})$, then
\begin{align}
\mathrm{O}_{f_{1}, f_{2}}^{\star}(\omega_{\rho_{2}})-\omega_{\rho_{1}} \in \textnormal{H}^{0}(X, \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}_{1}])\simeq \textnormal{H}^{0}(X, \mathcal{K}^{2}).
\end{align}
Moreover, this assignment
defines a bijection (which depends on $(f_{1}, \rho_{1})$),
\begin{align}
\mathcal{DO}\mathfrak{p}_{X}(\textnormal{PSL}_{2}(\mathbb{C}))\simeq \textnormal{H}^{0}(X, \mathcal{K}^{2}).
\end{align}
This gives the space $\mathcal{DO}\mathfrak{p}_{X}(\textnormal{PSL}_{2}(\mathbb{C}))$ the structure of an affine space with underlying vector space of translations $\textnormal{H}^{0}(X, \mathcal{K}^{2}).$
\end{proposition}
\textbf{Remark:} Soon, we will see that this is a general phenomenon for $\mathrm{G}$-opers when $\mathrm{G}$ is a complex simple Lie group of adjoint type. The above description is equivalent to the classical identification of complex projective structures with holomorphic quadratic differentials that arises from the Schwarzian derivative \cite{DUM09}. In particular, up to a constant multiple, the quadratic differential appearing in Proposition \ref{param sl2} is the Schwarzian derivative of the locally univalent (multi-valued) map $f_{2}\circ f_{1}^{-1}.$
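For the reader's convenience, we recall the formula: in a local coordinate $z,$ the Schwarzian derivative of a locally injective holomorphic map $w$ is

```latex
% Schwarzian derivative; it vanishes identically if and only
% if w is the restriction of a Moebius transformation.
\begin{align*}
S(w)=\left(\frac{w''}{w'}\right)'-\frac{1}{2}\left(\frac{w''}{w'}\right)^{2}
=\frac{w'''}{w'}-\frac{3}{2}\left(\frac{w''}{w'}\right)^{2},
\end{align*}
% and S(w) dz^2 is the associated quadratic differential,
% satisfying the cocycle property
%   S(w_1 \circ w_2) = (S(w_1)\circ w_2)(w_2')^2 + S(w_2).
```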
The question of understanding the moduli space of $\Sigma$-marked complex projective structures was analyzed by Hubbard in \cite{HUB81} where he proved the following theorem.
We adopt the language of opers here, whereas Hubbard worked directly with complex projective structures, since opers had not yet been defined at the time.
\begin{theorem}\label{Hubbard}
The moduli space $\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))$ of $\Sigma$-marked $\textnormal{PSL}_{2}(\mathbb{C})$-opers has the structure of a complex manifold of dimension $6g-6$ where $g$ is the genus of $\Sigma.$
Furthermore, the map
\begin{align}\label{cps proj}
\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C})) \rightarrow \mathcal{T}_{\Sigma}
\end{align}
is a holomorphic affine bundle.
Finally, every holomorphic section of the projection
\begin{align}
\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C})) \rightarrow \mathcal{T}_{\Sigma}
\end{align}
induces a biholomorphism
\begin{align}
\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C})) \simeq T^{*}\mathcal{T}_{\Sigma}
\end{align}
where $T^{*}\mathcal{T}_{\Sigma}$ is the cotangent bundle of Teichm\"{u}ller space.
\end{theorem}
\textbf{Remark:} There is a family of Bers' holomorphic sections of the bundle \eqref{cps proj}, each of which yields a holomorphic identification
\begin{align}
\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\simeq T^{*}\mathcal{T}_{\Sigma}.
\end{align}
A Bers' section requires the choice of a (conjugate) Riemann surface $[\overline{Y}]\in \overline{\mathcal{T}_{\Sigma}}$, and is defined by sending a Riemann surface $[X]\in \mathcal{T}_{\Sigma}$ to the complex projective structure on the \emph{top} component of the quasi-Fuchsian manifold
determined by $(X, \overline{Y})\in \mathcal{T}_{\Sigma}\times \overline{\mathcal{T}_{\Sigma}}.$
In Section \ref{families}, we will prove a generalization of Theorem \ref{Hubbard} exhibiting the space of $\mathrm{G}$-opers (for $\mathrm{G}$ complex simple of adjoint type) as a bundle over $\mathcal{T}_{\Sigma}.$
\section{Explicit Models}\label{models}
In this short section, we will give an explicit construction of $\mathrm{G}$-opers where $\mathrm{G}$ is a complex simple Lie group of adjoint type. These explicit models will be essential for many cohomological calculations we make later in the paper.
Let $\mathfrak{g}$ be the simple Lie algebra of $\mathrm{G}$ and $\ell$ denote the rank of $\mathfrak{g}.$ As always, we have fixed a Borel subalgebra $\mathfrak{b}<\mathfrak{g}.$ Recall the height grading
\begin{align}\label{height}
\mathfrak{g}\simeq \bigoplus_{i=-m_{\ell}}^{m_{\ell}} \mathfrak{g}_{i}
\end{align}
corresponding to a Cartan subalgebra $\mathfrak{h}<\mathfrak{b}.$
Fixing $e_{1}\in \mathfrak{g}_{1}$ a principal nilpotent element (i.e. an element whose projection onto every simple root space is non-vanishing), the centralizer of $e_{1}$ is an $\ell$-dimensional subspace; we fix basis vectors $e_{i}\in \mathfrak{g}_{m_{i}}$
such that $\{e_{1},...,e_{\ell}\}\subset \mathfrak{g}$ spans the centralizer of $e_{1}.$ The numbers $1=m_{1}\leq...\leq m_{\ell}$ are the exponents of $\mathfrak{g}.$
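For the classical simple Lie algebras, the exponents are as follows (a standard table, recorded here for reference; the top exponent $m_{\ell}$ is the height of the highest root):

```latex
% Exponents m_1 <= ... <= m_l of the classical simple Lie
% algebras of rank l:
\begin{align*}
A_{\ell}&:\ 1,2,\ldots,\ell, &
B_{\ell}&:\ 1,3,\ldots,2\ell-1,\\
C_{\ell}&:\ 1,3,\ldots,2\ell-1, &
D_{\ell}&:\ 1,3,\ldots,2\ell-3,\ \ell-1.
\end{align*}
```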
Next, complete the regular nilpotent element $e_{1}$ to an $\mathfrak{sl}_{2}$-triple $\{f_{1}, x, e_{1}\}\subset \mathfrak{g}.$ Then we have
$f_{1}\in \mathfrak{g}_{-1}, x\in \mathfrak{g}_{0}, e_{1}\in \mathfrak{g}_{1}.$ Therefore, since \eqref{height} is a grading,
\begin{align}\label{f1 action}
\textnormal{ad}(f_{1})(\mathfrak{g}_{i})\subset \mathfrak{g}_{i-1},
\end{align}
and
\begin{align}\label{e1 action}
\textnormal{ad}(e_{1})(\mathfrak{g}_{i})\subset \mathfrak{g}_{i+1}.
\end{align}
\textbf{Remark:} The sub-algebra generated by an $\mathfrak{sl}_{2}$-triple $\{f_{1}, x, e_{1}\}\subset \mathfrak{g}$ with $e_{1}$ principal nilpotent is called a principal three-dimensional sub-algebra.
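In one common normalization of an $\mathfrak{sl}_{2}$-triple $\{f_{1}, x, e_{1}\}$ (other conventions rescale $x$), the defining relations are:

```latex
% Bracket relations of an sl_2-triple {f_1, x, e_1}; the
% brackets respect the height grading, consistent with
% f_1 in g_{-1}, x in g_0, e_1 in g_1.
\begin{align*}
[x,e_{1}]=2e_{1},\qquad [x,f_{1}]=-2f_{1},\qquad [e_{1},f_{1}]=x.
\end{align*}
```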
The $\mathfrak{sl}_{2}$-triple $\{f_{1}, x, e_{1}\}\subset \mathfrak{g}$ induces an injective homomorphism
\begin{align}
\iota_{\mathrm{G}}: \textnormal{PSL}_{2}(\mathbb{C})\rightarrow G
\end{align}
whose image is called the principal three-dimensional subgroup. Moreover, there is a holomorphic embedding
\begin{align}
f_{G}: \mathbb{CP}^{1}\rightarrow \mathrm{G}/\mathrm{B}
\end{align}
which is $\iota_{\mathrm{G}}$-equivariant. The holomorphic map $f_{G}$ is called the principal rational curve.
Fix a $\Sigma$-marked Riemann surface $X$ and let $K^{i}$ be the $i$-th tensor power of the canonical bundle of $X.$
Consider the $C^{\infty}$-vector bundle
\begin{align}
E_{\mathrm{G}}[\mathfrak{g}]:=\bigoplus_{i=-m_{\ell}}^{m_{\ell}} K^{i}\otimes \mathfrak{g}_{i}
\end{align}
with structure group $\mathrm{G},$ where the grading of $\mathfrak{g}$ is taken from \eqref{height}.
Given a smooth section of $E_{\mathrm{G}}[\mathfrak{g}]$
\begin{align}
s=\sum_{i} \beta_{i}\otimes V_{i},
\end{align}
where $\beta_{i}\otimes V_{i}\in \mathcal{A}^{0}(X, K^{i}\otimes \mathfrak{g}_{i}),$ define a holomorphic structure on $E_{\mathrm{G}}[\mathfrak{g}]$ via the $\overline{\partial}$-operator:
\begin{align}
\overline{\partial} s=\sum_{i} \overline{\partial}_{i} \beta_{i}\otimes V_{i}+ h\cdot \beta_{i}\otimes \textnormal{ad}(e_{1})(V_{i}).
\end{align}
Above, $\overline{\partial}_{i}$ is the $\overline{\partial}$-operator on $K^{i},$ and $h$ is the Hermitian metric on $\Theta$ arising from the unique hyperbolic metric uniformizing $X$; we view $h$ as a tensor in $\mathcal{A}^{(0,1)}(X, K).$ Furthermore, the term
$h\cdot \beta_{i}$ is viewed as a tensor in $\mathcal{A}^{(0,1)}(X, K^{i+1}).$ Finally, note that this is a well-defined $\overline{\partial}$-operator by \eqref{e1 action}.
For every non-zero $i\in [-m_{\ell}, m_{\ell}]\cap \mathbb{Z},$ the Hermitian metric $h$ induces a Hermitian metric on $K^{i}$, and for $K^{0}\simeq \mathcal{O}$ we use the trivial Hermitian metric
\begin{align}
(f, g)\mapsto f\overline{g}.
\end{align}
If $\partial_{i}$ is the $(1,0)$-part of the Chern connection of the induced Hermitian metric on $K^{i},$ define an operator $\nabla$ by the formula:
\begin{align}
\nabla s=\sum_{i} \partial_{i} \beta_{i}\otimes V_{i}+ \beta_{i} \otimes \textnormal{ad}(f_{1})(V_{i}).
\end{align}
Viewing $\beta_{i}\in \mathcal{A}^{(1,0)}(X, K^{i-1}),$ this is a well-defined connection of type $(1,0)$ by \eqref{f1 action}.
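\textbf{Example:} To make the operators $\overline{\partial}$ and $\nabla$ concrete, take $\mathfrak{g}=\mathfrak{sl}_{2}(\mathbb{C})$ with homogeneous basis $(e_{1}, x, f_{1})$ and the normalization $[x,e_{1}]=e_{1},$ $[x,f_{1}]=-f_{1},$ $[e_{1},f_{1}]=x.$ Then $E_{\mathrm{G}}[\mathfrak{g}]\simeq K\oplus \mathcal{O}\oplus K^{-1},$ and writing $s=\beta_{1}\otimes e_{1}+\beta_{0}\otimes x+\beta_{-1}\otimes f_{1},$ the relations $\textnormal{ad}(e_{1})(x)=-e_{1},$ $\textnormal{ad}(e_{1})(f_{1})=x,$ $\textnormal{ad}(f_{1})(e_{1})=-x,$ and $\textnormal{ad}(f_{1})(x)=f_{1}$ give
\begin{align}
\overline{\partial} s&=(\overline{\partial}_{1}\beta_{1}-h\cdot \beta_{0})\otimes e_{1}+(\overline{\partial}_{0}\beta_{0}+h\cdot \beta_{-1})\otimes x+\overline{\partial}_{-1}\beta_{-1}\otimes f_{1}, \\
\nabla s&=\partial_{1}\beta_{1}\otimes e_{1}+(\partial_{0}\beta_{0}-\beta_{1})\otimes x+(\partial_{-1}\beta_{-1}+\beta_{0})\otimes f_{1}.
\end{align}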
\begin{proposition}
The operator $\nabla$ defines a holomorphic connection on the holomorphic vector bundle $(E_{\mathrm{G}}[\mathfrak{g}], \overline{\partial})$ and the $C^{\infty}$-connection $D=\nabla+\overline{\partial}$ is flat.
\end{proposition}
\begin{proof}
The $C^{\infty}$-connection $D=\nabla+ \overline{\partial}$ is flat if and only if
\begin{align}
F(D)=F(\nabla)+\overline{\partial}^{2}+\nabla\circ \overline{\partial} +\overline{\partial}\circ \nabla=0.
\end{align}
Because we are on a Riemann surface, $F(\nabla)=\overline{\partial}^{2}=0,$ and therefore if
\begin{align}
\nabla\circ \overline{\partial} +\overline{\partial}\circ \nabla=0,
\end{align}
we simultaneously obtain that $\nabla$ is holomorphic and $D$ is flat.
Letting
\begin{align}
s=\sum_{i} \beta_{i}\otimes V_{i},
\end{align}
we compute
\begin{align}
\nabla\circ \overline{\partial}s&=\sum_{i} \partial_{i}\overline{\partial}_{i}\beta_{i}\otimes V_{i} - \overline{\partial}_{i}\beta_{i}\otimes \textnormal{ad}(f_{1})(V_{i}) \\
&+\partial_{i+1}(h\cdot \beta_{i})\otimes \textnormal{ad}(e_{1})(V_{i})
- h\cdot \beta_{i}\otimes \textnormal{ad}(f_{1})\circ \textnormal{ad}(e_{1})(V_{i}).
\end{align}
There are some subtle identifications here; namely we must consider the following terms as elements of the following spaces:
\begin{enumerate}
\item $\overline{\partial}_{i}\beta_{i}\in \mathcal{A}^{(1,1)}(X, K^{i-1})$.
\item $\partial_{i+1}(h\cdot \beta_{i})\in \mathcal{A}^{(1,1)}(X, K^{i+1}).$
\item $h\cdot \beta_{i}\in \mathcal{A}^{(1,1)}(X, K^{i}).$
\end{enumerate}
In the other direction, we compute
\begin{align}
\overline{\partial}\circ \nabla s&=\sum_{i} \overline{\partial}_{i}\partial_{i} \beta_{i}\otimes V_{i}
+ \overline{\partial}_{i}\beta_{i}\otimes \textnormal{ad}(f_{1})(V_{i}) \\
&-h\cdot \partial_{i}\beta_{i}\otimes \textnormal{ad}(e_{1})(V_{i})
+ h\cdot \beta_{i}\otimes \textnormal{ad}(e_{1})\circ \textnormal{ad}(f_{1})(V_{i}).
\end{align}
Summing these two terms yields
\begin{align}
\nabla\circ \overline{\partial}s+ \overline{\partial}\circ \nabla s&=\sum_{i} (\partial_{i}\overline{\partial}_{i}\beta_{i}+ \overline{\partial}_{i}\partial_{i} \beta_{i}) \otimes V_{i} \\
&+ h\cdot \beta_{i}\otimes [\textnormal{ad}(e_{1}), \textnormal{ad}(f_{1})](V_{i}) \\
&+ (\partial_{i+1}(h\cdot \beta_{i})- h\cdot \partial_{i}\beta_{i})\otimes \textnormal{ad}(e_{1})(V_{i}) \\
&=\sum_{i} F(\nabla^{i})\beta_{i} \otimes V_{i} +ih\cdot \beta_{i}\otimes V_{i} \\
&+ (\partial_{i+1}(h\cdot \beta_{i})- h\cdot \partial_{i}\beta_{i})\otimes \textnormal{ad}(e_{1})(V_{i}).
\end{align}
Here, $\nabla^{i}$ is the Chern connection of the Hermitian metric on $K^{i}$ induced by the Hermitian metric on $K^{-1}$ associated to the uniformizing hyperbolic metric on $X.$
But, since the hyperbolic metric has constant curvature $-1$,
\begin{align}
F(\nabla^{i})\beta_{i}=-ih\cdot \beta_{i},
\end{align}
which implies
\begin{align}
\sum_{i} F(\nabla^{i})\beta_{i} \otimes V_{i} +ih\cdot \beta_{i}\otimes V_{i}=0.
\end{align}
Furthermore, since the metric $h$ is parallel for the Chern connection,
\begin{align}
\sum_{i} \partial_{i+1}(h\cdot \beta_{i})- h\cdot \partial_{i}\beta_{i}=0.
\end{align}
Hence,
\begin{align}
\nabla\circ \overline{\partial}s+ \overline{\partial}\circ \nabla s=0,
\end{align}
which completes the proof.
\end{proof}
The sub-bundle
\begin{align}
E_{\mathrm{B}}[\mathfrak{b}]:=\bigoplus_{i=0}^{m_{\ell}} K^{i}\otimes \mathfrak{g}_{i}
\end{align}
is $\overline{\partial}$-invariant, and thus defines a holomorphic sub-bundle of $E_{\mathrm{G}}[\mathfrak{g}].$ The following proposition is immediate.
\begin{proposition}\label{unif oper}
Let $E_{\mathrm{G}}$ be the holomorphic principal $\mathrm{G}$-bundle whose associated adjoint bundle is $E_{\mathrm{G}}[\mathfrak{g}],$ and $E_{\mathrm{B}}$ the reduction of structure whose corresponding adjoint bundle is
$E_{\mathrm{B}}[\mathfrak{b}].$ Finally, let $\omega$ be the holomorphic flat connection induced by $\nabla.$ Then $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ is a $\Sigma$-marked $\mathrm{G}$-oper whose second fundamental form is given by
\begin{align}
\Psi: \Theta&\rightarrow K^{-1}\otimes \mathfrak{g}_{-1}\simeq E_{\mathrm{B}}[\mathfrak{g}^{-1}/\mathfrak{b}]\\
\xi &\mapsto \xi\otimes f_{1}.
\end{align}
\end{proposition}
\begin{proof}
The only thing to recognize is that since $f_{1}\in \mathfrak{g}_{-1}$ is a principal nilpotent element, its image in $\mathfrak{g}^{-1}/\mathfrak{b}$ lies in $\mathcal{O},$ where $\mathcal{O}\subset \mathfrak{g}^{-1}/\mathfrak{b}$ is the unique open orbit from the proof of Theorem \ref{open orbit}. Hence,
\begin{align}
\Psi(\xi)\in E_{\mathrm{B}}[\mathcal{O}]
\end{align}
for all non-zero tangent vectors $\xi.$ This completes the proof.
\end{proof}
\textbf{Remark:} Under the correspondence with $\Sigma$-marked developed $\mathrm{G}$-opers, the $\mathrm{G}$-oper of Proposition \ref{unif oper} is the triple $(f, \rho, X)=(f_{G}\circ f_{0}, \iota_{\mathrm{G}}\circ \rho_{0}, X)$ where
\begin{align}
\rho_{0}: \pi\rightarrow \textnormal{PSL}_{2}(\mathbb{R})
\end{align}
is the Fuchsian homomorphism uniformizing $X$, and
\begin{align}
f_{0}: \widetilde{X}\rightarrow \mathbb{H}^{2}\subset \mathbb{CP}^{1}
\end{align}
is the developing map of the uniformizing hyperbolic structure on $X.$
Given a tuple $\vec{\alpha}:=(\alpha_{1},\dots,\alpha_{\ell})\in \bigoplus_{i=1}^{\ell} \textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1})$, form the new operator
\begin{align}
\nabla^{\vec{\alpha}}=\nabla + \sum_{i=1}^{\ell} \alpha_{i}\otimes \textnormal{ad}(e_{i}),
\end{align}
where we recall that $\{e_{i}\}_{i=1}^{\ell}$ is a homogeneous basis of the centralizer of $e_{1}$ with respect to the grading \eqref{height}.
Since the $\alpha_{i}$ are holomorphic and
\begin{align}
[e_{1}, e_{i}]=0
\end{align}
for all $1\leq i \leq \ell,$ it immediately follows that
\begin{align}
\sum_{i=1}^{\ell} \alpha_{i}\otimes e_{i}\in \textnormal{H}^{0}(X, \mathcal{K}\otimes \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]).
\end{align}
Therefore, $\nabla^{\vec{\alpha}}$ is a holomorphic connection on $(E_{\mathrm{G}}[\mathfrak{g}], \overline{\partial}).$ Furthermore, the second fundamental form of $\nabla^{\vec{\alpha}}$ with respect to $E_{\mathrm{B}}[\mathfrak{b}]$ is still given by the $\Psi$ of Proposition \ref{unif oper}.
Let $\omega^{\vec{\alpha}}$ denote the corresponding holomorphic connection on the principal bundle $E_{\mathrm{G}}.$ The following proposition follows immediately from Proposition \ref{unif oper}.
\begin{proposition}\label{oper param 1}
For every $\vec{\alpha}\in \bigoplus_{i=1}^{\ell} \textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1}),$ the tuple
$(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega^{\vec{\alpha}}, X)$ defines a $\Sigma$-marked $\mathrm{G}$-oper.
\end{proposition}
Proposition \ref{oper param 1} yields a well-defined map
\begin{align}
\bigoplus_{i=1}^{\ell} \textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1})\rightarrow \mathcal{O}\mathfrak{p}_{X}(\mathrm{G}),
\end{align}
which depends upon the choice of a $\textnormal{PSL}_{2}(\mathbb{C})$-oper on $X$ and a homogeneous basis of the centralizer in $\mathfrak{g}$ of $e_{1}\in \mathfrak{g}_{1}.$ In the next section, we will see that this map is a bijection.
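\textbf{Example:} For $\mathrm{G}=\textnormal{PSL}_{2}(\mathbb{C})$ we have $\ell=1$ and $m_{1}=1,$ so the map above reads
\begin{align}
\textnormal{H}^{0}(X, \mathcal{K}^{2})\rightarrow \mathcal{O}\mathfrak{p}_{X}(\textnormal{PSL}_{2}(\mathbb{C})), \qquad \alpha\mapsto \nabla^{\alpha}=\nabla+\alpha\otimes \textnormal{ad}(e_{1}).
\end{align}
This recovers the classical statement that, after fixing a base point (here the uniformizing oper), the space of $\textnormal{PSL}_{2}(\mathbb{C})$-opers on $X$ (equivalently, complex projective structures with underlying complex structure $X$) is an affine space modelled on the quadratic differentials $\textnormal{H}^{0}(X, \mathcal{K}^{2}).$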
\subsection{Parameterizing $\mathrm{G}$-opers}\label{param}
In this short section, we quickly review the parameterization of $\mathrm{G}$-opers (for $\mathrm{G}$ simple of adjoint type) on $X$ via pluri-canonical sections on $X$, see \cite{BD91} and \cite{BD05} for details. In particular, this gives a more intrinsic construction of the explicit models from Section \ref{models}.
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type and rank $\ell.$ Fix a Borel subgroup $\mathrm{B}<\mathrm{G}$ with corresponding sub-algebra $\mathfrak{b}<\mathfrak{g}.$ Let
\begin{align}
\iota_{\mathfrak{g}}: \mathfrak{sl}_{2}(\mathbb{C})\rightarrow \mathfrak{g}
\end{align}
be a principal three-dimensional sub-algebra
sending the upper triangular Borel subalgebra in $\mathfrak{sl}_{2}(\mathbb{C})$ to the fixed Borel sub-algebra $\mathfrak{b}<\mathfrak{g}$: this uniquely defines an injective homomorphism $\iota_{\mathrm{G}}: \textnormal{PSL}_{2}(\mathbb{C})\rightarrow \mathrm{G}.$
Let $\{f_{1}, x, e_{1}\}\subset \mathfrak{g}$ be the corresponding $\mathfrak{sl}_{2}(\mathbb{C})$-triple generating the image of $\iota_{\mathfrak{g}},$ and set $V=\textnormal{Ker}(\textnormal{ad}(e_{1})).$ Because $e_{1}\in \mathfrak{g}$ is a principal nilpotent element, the vector space $V$ is $\ell$-dimensional.
The regular semi-simple element $x$ induces a line decomposition
\begin{align}
V=\bigoplus_{i=1}^{\ell} V\cap \mathfrak{g}_{m_{i}}=\bigoplus_{i=1}^{\ell} V_{m_{i}},
\end{align}
where $\{m_{1},\dots,m_{\ell}\}$ are the exponents of $\mathfrak{g}.$ If $\mathrm{B}_{0}< \textnormal{PSL}_{2}(\mathbb{C})$ is the standard Borel of upper triangular matrices, then $V$ is $\mathrm{B}_{0}$-invariant, where $\mathrm{B}_{0}$ acts using the homomorphism $\iota_{\mathrm{G}}: \textnormal{PSL}_{2}(\mathbb{C})\rightarrow \mathrm{G}.$ Therefore, to any principal $\mathrm{B}_{0}$-bundle $E_{\mathrm{B}_{0}},$ there is an associated vector bundle $E_{\mathrm{B}_{0}}[V].$
The following theorem due to \cite{BD05} is a generalization of Proposition \ref{param sl2}.
\begin{theorem} \label{unique iso}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type.
Let $(E_{\mathrm{G}_{0}}, E_{\mathrm{B}_{0}}, \mu, X)$ be a $\Sigma$-marked $\textnormal{PSL}_{2}(\mathbb{C})$-oper and $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ be a $\Sigma$-marked $\mathrm{G}$-oper. Then there exists a unique
isomorphism $\phi: E_{\mathrm{B}_{0}}[\mathrm{B}]\rightarrow E_{\mathrm{B}}$ such that $\phi^{\star}\omega - \mu \in \textnormal{H}^{0}(X, \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}_{0}}[V]).$
\end{theorem}
Fixing a homogeneous basis of the graded vector space $V,$ by \cite{BD05} there is a canonical isomorphism $\mathcal{E}_{\mathrm{B}_{0}}[V]\simeq \bigoplus_{i=1}^{\ell} \mathcal{K}^{m_{i}}.$
This yields the promised parameterization of $\mathcal{O}\mathfrak{p}_{X}(\mathrm{G}).$
\begin{theorem}[\cite{BD05}]\label{param opers 2}
The map
\begin{align}
\mathcal{O}\mathfrak{p}_{X}(\mathrm{G})&\rightarrow \textnormal{H}^{0}(X, \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}_{0}}[V])\simeq \bigoplus_{i=1}^{\ell} \textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1}) \\
(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X) &\mapsto \phi^{\star}\omega-\mu
\end{align}
is a bijection. Furthermore, upon choosing the appropriate homogeneous basis of $V,$ it is inverse to the map defined by Proposition \ref{oper param 1}.
\end{theorem}
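\textbf{Example:} For $\mathrm{G}=\textnormal{PGL}_{3}(\mathbb{C}),$ the exponents are $m_{1}=1$ and $m_{2}=2,$ so Theorem \ref{param opers 2} identifies $\mathcal{O}\mathfrak{p}_{X}(\textnormal{PGL}_{3}(\mathbb{C}))$ with
\begin{align}
\textnormal{H}^{0}(X, \mathcal{K}^{2})\oplus \textnormal{H}^{0}(X, \mathcal{K}^{3}),
\end{align}
the space of pairs of a quadratic and a cubic differential on $X.$ By Riemann--Roch these summands have dimensions $3g-3$ and $5g-5,$ so $\mathcal{O}\mathfrak{p}_{X}(\textnormal{PGL}_{3}(\mathbb{C}))$ has dimension $8g-8=\textnormal{dim}_{\mathbb{C}}(\textnormal{PGL}_{3}(\mathbb{C}))(g-1).$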
\textbf{Remark:} Note that this bijection depends upon a choice of $\textnormal{PSL}_{2}(\mathbb{C})$-oper on $X$ and a choice of homogeneous basis of the graded vector space $V.$ These are exactly the choices that we had to make to construct the explicit models of Section \ref{models}. Finally, observe that Theorem \ref{param opers 2} verifies the claim following Proposition \ref{oper param 1}, namely that the explicit models yield a parameterization of the space of $\Sigma$-marked $\mathrm{G}$-opers on $X.$
\section{Kuranishi families and the global structure of $\Sigma$-marked $\mathrm{G}$-opers}\label{families}
This section develops the machinery to prove the existence of a natural complex structure on the space $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ of $\Sigma$-marked $\mathrm{G}$-opers. The approach here is standard in (the analytic approach to) deformation theory, and consists of four parts:
\begin{enumerate}
\item Develop a notion of families of $\Sigma$-marked $\mathrm{G}$-opers.
\item Identify the infinitesimal deformations and obstructions with some cohomology groups of an appropriate complex of sheaves.
\item Apply the Kuranishi method (Hodge theory, elliptic complexes, etc.) to show that every unobstructed infinitesimal deformation is tangent to an honest deformation.
\item Use these families to construct a holomorphic atlas on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$
\end{enumerate}
We will develop in detail the first two parts of this program. The third part would require a significant detour into Hodge theory, elliptic complexes, and related analytic notions; therefore we have chosen to omit these details from the paper. We will give references to completely analogous constructions in the literature from which the reader can, with some work, fill in the details of this technical part. Finally, we will carry out the final process of constructing a holomorphic atlas on
$\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$
We begin with the following definition.
\begin{definition}
A $\Sigma$-marked family of Riemann surfaces is a tuple $(\mathcal{X}, \mathcal{B},p)$
where $\mathcal{X}$ and $\mathcal{B}$ are complex manifolds such that:
\begin{enumerate}
\item $p: \mathcal{X}\rightarrow \mathcal{B}$ is a proper holomorphic submersion.
\item In the $C^{\infty}$-category, $p: \mathcal{X}\rightarrow \mathcal{B}$ is a fiber bundle with fiber $\Sigma$ and structure
group $\textnormal{Diff}_{0}(\Sigma).$
\end{enumerate}
\end{definition}
A morphism of $\Sigma$-marked families is defined as follows.
\begin{definition}
Given a pair of $\Sigma$-marked families of Riemann surfaces $(\mathcal{X}_{1}, \mathcal{B}_{1}, p_{1})$ and
$(\mathcal{X}_{2}, \mathcal{B}_{2}, p_{2})$, a morphism $F=(\phi, f)$ consists of
\begin{enumerate}
\item A Cartesian diagram
\begin{center}
\begin{tikzcd}
\mathcal{X}_{1} \arrow{r}{\phi} \arrow{d}{p_{1}}
& \mathcal{X}_{2} \arrow{d}{p_{2}} \\
\mathcal{B}_{1} \arrow{r}{f}
& \mathcal{B}_{2}
\end{tikzcd}
\end{center}
such that $\phi$ and $f$ are holomorphic maps, and such that the pair $(\phi, f),$ viewed as $C^{\infty}$-maps, defines a map of $(\Sigma, \textnormal{Diff}_{0}(\Sigma))$-fiber bundles.
\end{enumerate}
\end{definition}
In particular, the restriction of $\phi$ to any fiber is a biholomorphism isotopic to the identity. This notion makes sense exactly because we have required the fiber bundle to have fiber $\Sigma$ with structure group $\textnormal{Diff}_{0}(\Sigma).$
Let $X$ be a $\Sigma$-marked Riemann surface. A small deformation of $X$ is a germ of a $\Sigma$-marked family $\mathcal{X}\rightarrow (\mathcal{B},b)$, where $b\in \mathcal{B}$, together with a morphism
\begin{center}
\begin{tikzcd}
X \arrow{r}{\phi} \arrow{d}{p_{x}}
& \mathcal{X} \arrow{d}{p} \\
\{x\} \arrow{r}{f}
& \mathcal{B},
\end{tikzcd}
\end{center}
such that $f(x)=b.$ Note that it follows automatically from the definitions that $\phi: X\rightarrow p^{-1}(b)$ is a biholomorphism whose underlying smooth map is isotopic to the identity.
A small deformation $(\mathcal{X}, \mathcal{B}, b , p)$ of $X$ is universal if for every small deformation $(\mathcal{X}_{0}, \mathcal{B}_{0}, b_{0}, p_{0})$ of $X,$ there is a unique germ of a morphism
\begin{align}
F: (\mathcal{X}_{0}, \mathcal{B}_{0}, b_{0}, p_{0})\rightarrow (\mathcal{X}, \mathcal{B}, b , p),
\end{align}
such that the following diagram commutes
\begin{center}
\begin{tikzcd}
(X, \{x\}, x, p_{x}) \arrow{d}{\textnormal{id}} \arrow{r}
& (\mathcal{X}_{0}, \mathcal{B}_{0}, b_{0}, p_{0}) \arrow{d}{F} \\
(X, \{x\}, x, p_{x}) \arrow{r}
& (\mathcal{X}, \mathcal{B}, b , p).
\end{tikzcd}
\end{center}
The following theorem is a summary of the main results of \cite{AC09}.
\begin{theorem}
Let $X$ be a $\Sigma$-marked Riemann surface. Then there is an open set $U_{X}\subset \textnormal{H}^{1}(X, \Theta)\simeq \mathbb{C}^{3g-3}$ containing the origin and a universal $\Sigma$-marked small deformation of $X$ over $U_{X}.$
Moreover, the open sets $U_{X}\subset \textnormal{H}^{1}(X, \Theta)$ yield an atlas of holomorphic charts providing $\mathcal{T}_{\Sigma}$ with the structure of a Hausdorff complex manifold of dimension $3g-3.$
Finally, the $\Sigma$-marked universal families over $U_{X}$ glue to give a holomorphic fiber bundle $\mathcal{C}_{\Sigma}\rightarrow \mathcal{T}_{\Sigma}$ called the universal Teichm\"{u}ller curve.
\end{theorem}
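\textbf{Example:} The dimension count $\textnormal{dim}_{\mathbb{C}}\textnormal{H}^{1}(X, \Theta)=3g-3$ can be checked directly with Riemann--Roch: since $\Sigma$ has genus $g\geq 2,$ we have $\textnormal{deg}(\Theta)=2-2g<0,$ so $\textnormal{H}^{0}(X, \Theta)=0$ and
\begin{align}
\textnormal{dim}_{\mathbb{C}}\textnormal{H}^{1}(X, \Theta)=-\chi(\Theta)=-(\textnormal{deg}(\Theta)+1-g)=3g-3.
\end{align}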
Now, let $\mathcal{X}\xrightarrow{p} \mathcal{B}$ be a $\Sigma$-marked family of Riemann surfaces. Let $\widehat{E_{\mathrm{G}}}\xrightarrow{\pi} \mathcal{X}$ be a holomorphic principal $\mathrm{G}$-bundle on $\mathcal{X}.$ In order to define families of $\Sigma$-marked $\mathrm{G}$-opers, we must develop the notion of a relative holomorphic connection on $\widehat{E_{\mathrm{G}}}.$
To start, let $M$ and $N$ be complex manifolds and $f: M\rightarrow N$ a holomorphic map which is a fiber bundle in the $C^{\infty}$-category (in particular $f$ is a submersion). If $\Omega_{M}^{1}$ is the sheaf of holomorphic $1$-forms on $M$, then define the sheaf of relative holomorphic $1$-forms $\Omega_{M/N}^{1}$ via the exact sequence
\begin{align}
0\rightarrow f^{\star}\Omega_{N}^{1}\rightarrow \Omega_{M}^{1}\rightarrow \Omega_{M/N}^{1}\rightarrow 0.
\end{align}
Now, consider the following commutative diagram with exact rows and columns:
\begin{center}
\begin{tikzcd}
&
0\arrow{d}
& 0 \arrow{d}
& \\
0 \arrow{r}
& (p\circ \pi)^{\star}\Omega_{\mathcal{B}}^{1} \arrow{r}\arrow{d}
& \pi^{\star}\Omega_{\mathcal{X}}^{1} \arrow{r}\arrow{d}
&\pi^{\star}\Omega_{\mathcal{X}/\mathcal{B}}^{1} \arrow{r}
& 0 \\
&\Omega_{\widehat{E_{\mathrm{G}}}}^{1} \arrow{r}{\simeq} \arrow{d}
& \Omega_{\widehat{E_{\mathrm{G}}}}^{1} \arrow{d}
&
& \\
& \Omega_{\widehat{E_{\mathrm{G}}}/\mathcal{B}}^{1} \arrow{r}{u} \arrow{d}
& \Omega_{\widehat{E_{\mathrm{G}}}/\mathcal{X}}^{1} \arrow{d} \arrow{r}
& 0 \\
& 0
& 0
& .
\end{tikzcd}
\end{center}
Chasing this diagram, we obtain an exact sequence
\begin{align}\label{rel cotangent sequence}
0\rightarrow \pi^{\star} \Omega_{\mathcal{X}/\mathcal{B}}^{1}\rightarrow \Omega_{\widehat{E_{\mathrm{G}}}/\mathcal{B}}^{1}
\rightarrow \Omega_{\widehat{E_{\mathrm{G}}}/\mathcal{X}}^{1}\rightarrow 0.
\end{align}
Dualizing yields the exact sequence
\begin{align}\label{rel tangent sequence}
0\rightarrow \Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{X}}\rightarrow \Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{B}}\rightarrow \pi^{\star}\Theta_{\mathcal{X}/\mathcal{B}}\rightarrow 0.
\end{align}
This is an exact sequence of $\mathrm{G}$-equivariant locally free sheaves (equivalently $\mathrm{G}$-equivariant holomorphic vector bundles) on $\widehat{E_{\mathrm{G}}}.$
Now comes the definition of a relative holomorphic connection on $\widehat{E_{\mathrm{G}}}.$
\begin{definition}
A relative holomorphic connection on $\widehat{E_{\mathrm{G}}}$ is a $\mathrm{G}$-equivariant splitting of the exact sequence \eqref{rel tangent sequence}
.\end{definition}
\textbf{Remark:} If $\mathcal{B}=\{\textnormal{pt}\},$ then \eqref{rel tangent sequence} is the usual tangent sequence associated to a principal $\mathrm{G}$-bundle on $X,$ and this is the usual sheaf-theoretic definition of a holomorphic connection (see Section \ref{A bundles}).
Abusing notation, we denote by $\Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{X}}$ the holomorphic vector bundle associated to the locally free sheaf $\Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{X}}$ on $\widehat{E_{\mathrm{G}}}.$
Given $g\in \mathrm{G},$ we denote the right holomorphic $\mathrm{G}$-action by
$R_{g}: \widehat{E_{\mathrm{G}}}\rightarrow \widehat{E_{\mathrm{G}}}.$ Note that $\Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{B}}$ is a $\mathrm{G}$-equivariant sub-sheaf of $\Theta_{\widehat{E_{\mathrm{G}}}},$ and therefore the infinitesimal $\mathrm{G}$-action may be restricted to sections of $\Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{B}}.$
Perhaps a more familiar (equivalent) definition of a relative holomorphic connection on $\widehat{E_{\mathrm{G}}}$ is the following.
\begin{definition}
A relative holomorphic connection on $\widehat{E_{\mathrm{G}}}$ is a holomorphic map
\begin{align}
\overline{\omega}: \Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{B}} \rightarrow \mathfrak{g},
\end{align}
which is linear in the fibers and which satisfies:
\begin{enumerate}
\item $R_{g}^{\star}\overline{\omega}=\textnormal{Ad}(g^{-1})\circ\overline{\omega}$
for all $g\in \mathrm{G}.$
\item If $Y\in \mathfrak{g}$ and $Y^{\sharp}$ is the $\mathrm{G}$-invariant section of $\Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{X}}$ given by the infinitesimal $\mathrm{G}$-action on $\widehat{E_{\mathrm{G}}},$ then $\overline{\omega}(Y^{\sharp})=Y.$
\end{enumerate}
\end{definition}
As with ordinary connections on principal $\mathrm{G}$-bundles, relative holomorphic connections are an affine space
modelled on the vector space of holomorphic sections $\textnormal{H}^{0}(\mathcal{X}, \Omega_{\mathcal{X}/\mathcal{B}}^{1}\otimes \widehat{E_{\mathrm{G}}}[\mathfrak{g}]).$
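\textbf{Example:} To see the affine structure, suppose $\overline{\omega}_{1}$ and $\overline{\omega}_{2}$ are two relative holomorphic connections and set $\eta:=\overline{\omega}_{1}-\overline{\omega}_{2}.$ For every $Y\in \mathfrak{g}$ with fundamental vertical vector field $Y^{\sharp},$
\begin{align}
\eta(Y^{\sharp})=Y-Y=0, \qquad R_{g}^{\star}\eta=\textnormal{Ad}(g^{-1})\circ \eta,
\end{align}
so $\eta$ vanishes on $\Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{X}}$ and, by $\mathrm{G}$-equivariance, descends to a holomorphic section of $\Omega_{\mathcal{X}/\mathcal{B}}^{1}\otimes \widehat{E_{\mathrm{G}}}[\mathfrak{g}].$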
If
\begin{align}
d_{(\pi,p)}: \Omega_{\widehat{E_{\mathrm{G}}}/\mathcal{B}}^{1}\rightarrow \Omega_{\widehat{E_{\mathrm{G}}}/\mathcal{B}}^{2}
\end{align}
is the relative exterior derivative along the fibers of $p\circ \pi$, then the curvature of $\overline{\omega}$ is defined by
\begin{align}
F(\overline{\omega})=d_{(\pi, p)}\overline{\omega}+\frac{1}{2}[\overline{\omega}, \overline{\omega}],
\end{align}
and descends to a holomorphic section $F(\overline{\omega})\in \textnormal{H}^{0}(\mathcal{X}, \Omega_{\mathcal{X}/\mathcal{B}}^{2}\otimes \widehat{E_{\mathrm{G}}}[\mathfrak{g}]).$ Since $\mathcal{X}$ is a family of Riemann surfaces, $\Omega_{\mathcal{X}/\mathcal{B}}^{2}$ is the zero sheaf and such a section is necessarily zero. Therefore, every relative holomorphic connection is automatically flat.
Finally, if $\mathrm{H}<\mathrm{G}$ is a closed complex subgroup and $\widehat{E_{\mathrm{H}}}$ is a holomorphic reduction of $\widehat{E_{\mathrm{G}}}$ to $\mathrm{H},$ then the relative second fundamental form $\overline{\Psi}$ of $\overline{\omega}$ relative to $\widehat{E_{\mathrm{H}}}$ is defined as the composition
\begin{align}
\Theta_{\widehat{E_{\mathrm{H}}}/\mathcal{B}}\rightarrow \Theta_{\widehat{E_{\mathrm{G}}}/\mathcal{B}} \xrightarrow{\overline{\omega}} \mathfrak{g}\rightarrow \mathfrak{g}/\mathfrak{h}
\end{align}
and descends to a holomorphic section $\overline{\Psi}\in \textnormal{H}^{0}(\mathcal{X}, \Omega_{\mathcal{X}/\mathcal{B}}^{1}\otimes \widehat{E_{\mathrm{H}}}[\mathfrak{g}/\mathfrak{h}]).$
We now arrive at the definition of a $\Sigma$-marked family of $\mathrm{G}$-opers.
\begin{definition}\label{family g oper}
Let $\mathrm{G}$ be a connected complex semi-simple Lie group with a fixed Borel subgroup $\mathrm{B}<\mathrm{G}.$
A $\Sigma$-marked family of $\mathrm{G}$-opers consists of a tuple $(\widehat{E_{\mathrm{G}}}, \widehat{E_{\mathrm{B}}}, \mathcal{X}, \mathcal{B}, \overline{\omega})$ such that
\begin{enumerate}
\item The pair $(\mathcal{X}, \mathcal{B})$ is a $\Sigma$-marked family of Riemann surfaces.
\item $\widehat{E_{\mathrm{G}}}$ is a holomorphic, right principal $\mathrm{G}$-bundle over $\mathcal{X}$ and $\widehat{E_{\mathrm{B}}}$ is a holomorphic reduction of structure to the Borel subgroup $\mathrm{B}<\mathrm{G}.$
\item $\overline{\omega}$ is a relative holomorphic connection on $\widehat{E_{\mathrm{G}}}.$
\item For all non-zero vectors $v\in \Theta_{\mathcal{X}/\mathcal{B}},$
\begin{align}
\overline{\Psi}(v)\in \widehat{E_{\mathrm{B}}}[\mathcal{O}]
\end{align}
where $\mathcal{O}\subset \mathfrak{g}^{-1}/\mathfrak{b}$ is the unique open $\mathrm{B}$-orbit.
\end{enumerate}
\end{definition}
\textbf{Remark}: If $(\mathcal{X}, \mathcal{B})=(X, \{x\})$ is a trivial family over a point $\{x\}$, then a $\Sigma$-marked family of $\mathrm{G}$-opers is identical to a $\Sigma$-marked $\mathrm{G}$-oper. In general, this definition formalizes the notion of a $\Sigma$-marked family of Riemann surfaces where each fiber in the family is equipped with a $\mathrm{G}$-oper structure.
A small deformation of a $\Sigma$-marked $\mathrm{G}$-oper $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ is defined in the obvious way, and such a deformation is said to be universal if any other deformation is pulled back from this one by a unique morphism. Since all of these definitions are obvious adaptations of the definitions given for $\Sigma$-marked families of Riemann surfaces, we elect to not explicitly spell out the enhanced definitions here.
Finally, we can state the main theorem whose hypotheses are contained in this section. The proof of this theorem will occupy the next few sub-sections.
\begin{theorem}\label{uni def}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. Then, given any $\Sigma$-marked $\mathrm{G}$-oper $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$, there is a universal small deformation whose base $\mathcal{B}$ is of complex dimension $\textnormal{dim}_{\mathbb{C}}(\mathrm{G})(g-1)+(3g-3),$ where $g$ is the genus of $\Sigma.$
\end{theorem}
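\textbf{Example:} For $\mathrm{G}=\textnormal{PSL}_{2}(\mathbb{C}),$ the base has complex dimension $3(g-1)+(3g-3)=6g-6.$ Heuristically, $3g-3$ of these directions deform the marked Riemann surface $X,$ while the remaining $3g-3=\textnormal{dim}_{\mathbb{C}}\textnormal{H}^{0}(X, \mathcal{K}^{2})$ directions deform the oper on the fixed surface, in accordance with the parameterization of Section \ref{param}.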
\subsection{Atiyah bundles and flat connections}\label{A bundles}
In order to study the infinitesimal deformation theory of $\Sigma$-marked $\mathrm{G}$-opers, we now review the sheaf-theoretic way of thinking about connections on principal bundles.
Let $\mathrm{G}$ be a connected complex semi-simple Lie group and $E_{\mathrm{G}}$ a holomorphic principal $\mathrm{G}$-bundle over a $\Sigma$-marked Riemann surface $X.$ Consider the sub-sheaf $\mathcal{G}\subset\Theta_{E_{\mathrm{G}}}$ of the tangent sheaf of $E_{\mathrm{G}}$ whose local sections consist of $\mathrm{G}$-invariant holomorphic vector fields on $E_{\mathrm{G}}.$
By a theorem of Atiyah \cite{ATI57}, the sheaf $\mathcal{G}$ descends to a locally free sheaf on $X$: namely there exists a unique locally-free sheaf $\mathcal{A}(E_{\mathrm{G}})$ on $X$, called the \emph{Atiyah} sheaf (equivalently bundle), such that the pullback $\pi^{\star}\mathcal{A}(E_{\mathrm{G}})$ of $\mathcal{A}(E_{\mathrm{G}})$ to $E_{\mathrm{G}}$ via the projection map $E_{\mathrm{G}}\xrightarrow{\pi} X$ is equal to $\mathcal{G}.$ Note that Beilinson-Drinfeld \cite{BD05} call this the (Atiyah)-algebroid of infinitesimal symmetries of $E_{\mathrm{G}}.$
There is an exact sequence of locally-free sheaves (equivalently holomorphic vector bundles) on $X$
\begin{align}\label{atiyah sequence}
0\rightarrow \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]\rightarrow \mathcal{A}(E_{\mathrm{G}})\xrightarrow{\sigma_{\mathrm{G}}} \Theta\rightarrow 0,
\end{align}
where $\mathcal{E}_{\mathrm{G}}[\mathfrak{g}]$ is the sheaf of vertical $\mathrm{G}$-invariant vector fields on $E_{\mathrm{G}}.$ This is the sheaf of sections of the holomorphic vector bundle $E_{\mathrm{G}}[\mathfrak{g}].$
The map $\sigma_{\mathrm{G}}: \mathcal{A}(E_{\mathrm{G}})\rightarrow \Theta$ is called the \emph{symbol} map.
\textbf{Example:} If $V$ is a rank-$n$ holomorphic vector bundle over $X,$ let $\mathcal{D}_{0}^{1}(V)$ denote the sheaf of first-order differential operators on holomorphic sections of $V$ which have scalar principal symbol.
Given a local holomorphic frame $\{e_{i}\}_{i=1}^{n}$, a local section $P$ of the sheaf $\mathcal{D}_{0}^{1}(V)$
is given by an expression of the form
\begin{align}
P=\xi\otimes \textnormal{Id} + B,
\end{align}
where $\xi$ is a local section of $\Theta$ and $B$ is a local section of $\textnormal{End}(V).$
The action of $P$ on a section $s=\sum_{i=1}^{n} s^{i}\otimes e_{i}$ in this trivialization is given by
\begin{align}
P(s)=\sum_{i=1}^{n} \xi(s^{i})\otimes e_{i}+s^{i}\otimes B(e_{i}).
\end{align}
If $E_{\textnormal{GL}_{n}(\mathbb{C})}$ is the holomorphic principal $\textnormal{GL}_{n}( \mathbb{C})$-bundle associated to $V,$ then the Atiyah sequence \eqref{atiyah sequence} for $E_{\textnormal{GL}_{n}(\mathbb{C})}$ is equivalent to the exact sequence
\begin{align}\label{SES vb}
0\rightarrow \textnormal{End}(V)\rightarrow \mathcal{D}_{0}^{1}(V)\xrightarrow{\sigma_{V}} \Theta\rightarrow 0
\end{align}
where $\sigma_{V}(P)=\xi$ is the \emph{principal symbol} of the differential operator $P.$ This example explains the terminology \emph{symbol map} for the map $\mathcal{A}(E_{\mathrm{G}})\xrightarrow{\sigma_{\mathrm{G}}} \Theta.$
Finally, if $\nabla$ is a holomorphic connection on $V,$ then for any local section $\xi$ of $\Theta,$ the operator $\nabla_{\xi}$ is a local section of $\mathcal{D}_{0}^{1}(V).$ Moreover, the fact that $\nabla$ satisfies the Leibniz rule implies that $\nabla$ defines a holomorphic splitting of \eqref{SES vb}. This shows that holomorphic splittings of \eqref{SES vb} are equivalent to holomorphic connections on $V.$
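\textbf{Example:} For a holomorphic line bundle $L$ on $X,$ the sequence \eqref{SES vb} becomes
\begin{align}
0\rightarrow \mathcal{O}\rightarrow \mathcal{D}_{0}^{1}(L)\xrightarrow{\sigma_{L}} \Theta\rightarrow 0,
\end{align}
and the obstruction to a holomorphic splitting is a class in $\textnormal{H}^{1}(X, \mathcal{K})\simeq \mathbb{C},$ the Atiyah class of $L,$ which equals $\textnormal{deg}(L)$ up to a non-zero constant. Hence $L$ admits a holomorphic connection if and only if $\textnormal{deg}(L)=0,$ a special case of the Atiyah--Weil criterion \cite{ATI57}.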
Generally, in light of the fact that a holomorphic connection on $E_{\mathrm{G}}$ is equivalent to a $\mathrm{G}$-equivariant horizontal, holomorphic splitting $TE_{\mathrm{G}}\simeq \mathcal{H}\oplus \mathcal{V}$ (i.e., a $\mathrm{G}$-equivariant horizontal, holomorphic distribution), we arrive at Atiyah's \cite{ATI57} definition of a holomorphic connection on $E_{\mathrm{G}}.$
\begin{definition}
A holomorphic connection on $E_{\mathrm{G}}$ is a holomorphic splitting of the symbol map
\begin{align}
\mathcal{A}(E_{\mathrm{G}})\xrightarrow{\sigma_{\mathrm{G}}} \Theta.
\end{align}
\end{definition}
We can recast the definition of a $\mathrm{G}$-oper in this language. Namely, the locally-free sub-sheaf $\mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}]\subset \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]$ determines a locally-free sub-sheaf $\mathcal{A}^{-1}\subset \mathcal{A}(E_{\mathrm{G}})$ and a short exact sequence
\begin{align}\label{ses atiyah}
0\rightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}]\rightarrow \mathcal{A}^{-1}\xrightarrow{\sigma_{\mathrm{G}, -1}} \Theta\rightarrow 0.
\end{align}
Then, a $\mathrm{G}$-oper structure is a holomorphic splitting $\omega$ of \eqref{ses atiyah} such that
the composition
\begin{align}
\Theta\xrightarrow{\omega} \mathcal{A}^{-1}\rightarrow \mathcal{A}^{-1}/\mathcal{A}(E_{\mathrm{B}})\simeq \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}/\mathfrak{b}]
\end{align}
takes every non-zero vector in $\Theta$ into $E_{\mathrm{B}}[\mathcal{O}],$ where $\mathcal{O}\subset\mathfrak{g}^{-1}/\mathfrak{b}$ is the unique open $\mathrm{B}$-orbit.
\subsection{Deformation theory of $\mathrm{G}$-opers}
In this section we study the infinitesimal deformation theory of $\mathrm{G}$-opers. Our efforts in the previous Section \ref{A bundles} will come to fruition here as the deformation theory is nicely captured using the theory of Atiyah bundles.
Before diving in, we need some definitions. In Section \ref{families} we introduced the notion of families of $\mathrm{G}$-opers over complex manifolds. Unfortunately, this is not the right context for the study of infinitesimal deformation theory. We remedy this here. In this subsection, we assume some familiarity with complex analytic spaces. The interested reader may find all of the relevant background material in the book \cite{GLS07}.
Let $\mathbb{D}^{(n)}$ denote the complex analytic space associated to the holomorphic function
\begin{align}
f: \mathbb{C}&\rightarrow \mathbb{C} \\
z&\mapsto z^{n+1}.
\end{align}
Any complex analytic space with underlying topological space a single point is called a \emph{fat point}. As a locally ringed space,
\begin{align}
(\mathbb{D}^{(n)}, \mathcal{O}_{\mathbb{D}^{(n)}})\simeq (\{0\}, \mathbb{C}[\varepsilon]/(\varepsilon^{n+1})).
\end{align}
An $n$-th order deformation of a $\Sigma$-marked Riemann surface $X$ consists of a commutative diagram
\begin{center}
\begin{tikzcd}
X \arrow{r} \arrow{d}
& \mathcal{X} \arrow{d}{p} \\
\mathbb{D}^{(0)} \arrow{r}
& \mathbb{D}^{(n)}.
\end{tikzcd}
\end{center}
Here, all maps are maps of complex analytic spaces, and $p$ is assumed to be a flat map of complex analytic spaces. Furthermore, the top horizontal arrow is a closed embedding of complex analytic spaces.
Following the rubric of Section \ref{families} and assuming some familiarity with the theory of complex analytic spaces, there is a straightforward notion of a $\Sigma$-marked $\mathrm{G}$-oper over the family $\mathcal{X}\xrightarrow{p} \mathbb{D}^{(n)},$ and of an $n$-th order deformation of a $\Sigma$-marked $\mathrm{G}$-oper $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$. With these definitions in place, the following theorem solves the problem of infinitesimal deformations of $\mathrm{G}$-opers.
\begin{theorem}\label{kuranishi family}
Let $\Xi:=(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ be a $\Sigma$-marked $\mathrm{G}$-oper. There is a two-term complex
of locally-free sheaves
\begin{align}\label{2 term}
\mathcal{A}^{\bullet}:\mathcal{A}(E_{\mathrm{B}})\xrightarrow{[\hat{\omega}, \ ]} \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}]
\end{align}
such that,
\begin{enumerate}
\item The $0$-th hyper-cohomology $\mathbb{H}^{0}(X, \mathcal{A}^{\bullet})$ is in bijection with infinitesimal automorphisms of the $\mathrm{G}$-oper $\Xi.$
\item The $1$-st hyper-cohomology $\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})$ is in bijection with isomorphism classes of first-order deformations of the $\mathrm{G}$-oper $\Xi$ such that $0\in \mathbb{H}^{1}(X, \mathcal{A}^{\bullet})$ corresponds to the trivial first order deformation.
\item There is a quadratic obstruction map
\begin{align}
\textnormal{Ob}:\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})\rightarrow \mathbb{H}^{2}(X, \mathcal{A}^{\bullet})
\end{align}
such that a first-order deformation in $\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})$ extends to a second-order deformation if and only if its image under the map $\textnormal{Ob}$ vanishes.
\end{enumerate}
Finally, we have the following:
\begin{enumerate}
\item $\mathbb{H}^{0}(X, \mathcal{A}^{\bullet})=\{0\}.$
\item If $\mathrm{G}$ is complex simple of adjoint type, then $\textnormal{dim}_{\mathbb{C}}(\mathbb{H}^{1}(X, \mathcal{A}^{\bullet}))=(g-1)\textnormal{dim}_{\mathbb{C}}(G)+(3g-3)$ where $g$ is the genus of $\Sigma.$
\item If $\mathrm{G}$ is complex simple of adjoint type, then $\mathbb{H}^{2}(X, \mathcal{A}^{\bullet})=\{0\}.$
\end{enumerate}
\end{theorem}
\begin{proof}
Let $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ be a $\Sigma$-marked $\mathrm{G}$-oper. Recall that $\mathcal{A}^{-1}\subset \mathcal{A}(E_{\mathrm{G}})$ is the locally free sub-sheaf such that
the holomorphic flat connection $\omega$ is a holomorphic splitting of
\begin{align}\label{oper splitting}
0\rightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}]\rightarrow \mathcal{A}^{-1}\xrightarrow{\sigma_{\mathrm{G},-1}} \Theta\rightarrow 0.
\end{align}
Therefore, the holomorphic flat connection $\omega$ may be viewed as a global holomorphic section $\hat{\omega}\in \textnormal{H}^{0}(X, \mathcal{K}\otimes \mathcal{A}^{-1}).$
Locally on $X,$ we may write $\hat{\omega}=\sum_{i} \alpha_{i}\otimes u_{i}$ where the $\alpha_{i}$ are locally defined holomorphic $1$-forms on $X$ and the $u_{i}$ are local sections of $\mathcal{A}^{-1}.$ If $s$ is a local section of $\mathcal{A}(E_{\mathrm{B}}),$
then define
\begin{align}\label{bracket}
[\hat{\omega}, s]:=\sum_{i} \left(\alpha_{i}\otimes [u_{i}, s] -\mathcal{L}_{\sigma_{\mathrm{B}}(s)}\alpha_{i}\otimes u_{i}\right)
\end{align}
where $\mathcal{L}_{\sigma_{\mathrm{B}}(s)}\alpha_{i}$ is the Lie derivative of the local holomorphic $1$-form $\alpha_{i}$ along the local holomorphic vector field $\sigma_{\mathrm{B}}(s).$
The bracket \eqref{bracket} defines a sheaf map
\begin{align}\label{complex first}
\mathcal{A}(E_{\mathrm{B}})\xrightarrow{[\hat{\omega}, \ ]} \mathcal{K}\otimes \mathcal{A}^{-1}.
\end{align}
Since $\hat{\omega}$ is a splitting of \eqref{oper splitting},
\begin{align}
\sum_{i} \alpha_{i}\otimes \sigma_{\mathrm{G}, -1}(u_{i})=1
\end{align}
as a local section of $\mathcal{K}\otimes \Theta\simeq \mathcal{O}.$ Therefore
\begin{align}
0&=\mathcal{L}_{\sigma_{\mathrm{B}}(s)}\left(\sum_{i} \alpha_{i}\otimes \sigma_{\mathrm{G}, -1}(u_{i})\right) \\
&=\sum_{i} \left(\mathcal{L}_{\sigma_{\mathrm{B}}(s)}\alpha_{i}\otimes \sigma_{\mathrm{G}, -1}(u_{i})+\alpha_{i}\otimes [\sigma_{\mathrm{B}}(s), \sigma_{\mathrm{G}, -1}(u_{i})]\right).
\end{align}
Given a local section $s$ of $\mathcal{A}(E_{\mathrm{B}})$, we compute
\begin{align}
\textnormal{id}\otimes \sigma_{\mathrm{G}, -1}([\hat{\omega}, s])&=\sum_{i} \left(\alpha_{i}\otimes \sigma_{\mathrm{G}, -1}[u_{i},s]-\mathcal{L}_{\sigma_{\mathrm{B}}(s)}\alpha_{i}\otimes \sigma_{\mathrm{G},-1}(u_{i})\right) \\
&= \sum_{i}\left( \alpha_{i}\otimes \sigma_{\mathrm{G},-1}[u_{i},s]+\alpha_{i}\otimes [\sigma_{\mathrm{B}}(s), \sigma_{\mathrm{G},-1}(u_{i})]\right) \\
&=0,
\end{align}
where we have used the fact that the symbol map is a map of sheaves of Lie algebras which is functorial with respect to sub-sheaves of $\mathcal{A}(E_{\mathrm{G}}).$
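In a local coordinate the cancellation above reduces to the identity $a\,[f\partial_{z}, b\partial_{z}] + \big(\mathcal{L}_{f\partial_{z}}(a\,dz)\big)\, b = f\,(ab)^{\prime} = 0$ whenever $ab=1$. The following Python sketch (our own numerical check; the particular functions chosen are hypothetical test data) verifies this for sample values.

```python
# Local-coordinate check (our own illustration) of the cancellation above:
# with alpha = a(z) dz, sigma(u) = b(z) d/dz normalized by a*b = 1, and
# sigma_B(s) = f(z) d/dz, the two terms of id (x) sigma([omega-hat, s]) cancel:
#   a * [f d/dz, b d/dz]  +  (Lie_{f d/dz}(a dz)) * b  =  f * (a*b)' = 0.

def check_cancellation(z):
    a, da = z, 1.0                  # a(z) = z
    b, db = 1.0 / z, -1.0 / z**2    # b = 1/a, so a*b = 1
    f, df = z**3, 3 * z**2          # arbitrary test vector field f d/dz

    lie_form = f * da + df * a      # coefficient of Lie_{f d/dz}(a dz)
    bracket = f * db - b * df       # coefficient of [f d/dz, b d/dz]
    return a * bracket + lie_form * b

for z in (0.5, 1.0, 2.0, 3.0):
    assert abs(check_cancellation(z)) < 1e-9
```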
Since $\textnormal{ker}(\sigma_{\mathrm{G}, -1})=\mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}],$ the map \eqref{complex first} lifts to
\begin{align}
\mathcal{A}^{\bullet}:=\mathcal{A}(E_{\mathrm{B}})\xrightarrow{[\hat{\omega}, \ ]} \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}].
\end{align}
Proposition $4.4$ in \cite{CHE12} implies that the complex $\mathcal{A}^{\bullet}$ governs the deformation theory of the $\Sigma$-marked $\mathrm{G}$-oper $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X).$ This means that:
\begin{enumerate}
\item Infinitesimal automorphisms are in bijection with $\mathbb{H}^{0}(X, \mathcal{A}^{\bullet}).$
\item First order deformations up to isomorphism are in bijection with $\mathbb{H}^{1}(X, \mathcal{A}^{\bullet}).$
\item Obstructions to lifting a first order deformation to a second order deformation lie in $\mathbb{H}^{2}(X, \mathcal{A}^{\bullet}).$
\end{enumerate}
\textbf{Remark}: Specifically, what Chen proves in \cite{CHE12} is that given the data $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X),$ the complex $\mathcal{A}^{\bullet}$ controls the deformation theory of this tuple, where we constrain the second fundamental form of $\omega$ relative to $E_{\mathrm{B}}$ to lie in $E_{\mathrm{B}}[\mathfrak{g}^{-1}/\mathfrak{b}].$ But, since lying in the open orbit is an open condition, any small deformation satisfying this property automatically yields a deformation of the $\Sigma$-marked $\mathrm{G}$-oper $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X).$ This proves the first part of Theorem \ref{kuranishi family}.
Now consider the short exact sequence of complexes
\begin{align}\label{SES X to S}
0\rightarrow \mathcal{A}_{0}^{\bullet}\rightarrow \mathcal{A}^{\bullet}\rightarrow \Theta^{0}\rightarrow 0,
\end{align}
where $\Theta^{0}$ denotes the sheaf $\Theta$ regarded as a complex concentrated in degree $0.$ The sequence is defined by the diagram
\begin{center}
\begin{tikzcd}
0 \arrow{d}
& 0 \arrow{d} \\
\mathcal{E}_{\mathrm{B}}[\mathfrak{b}] \arrow{r}{[\hat{\omega}, \ ]} \arrow{d}
& \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}] \arrow{d} \\
\mathcal{A}(E_{\mathrm{B}}) \arrow{r}{[\hat{\omega}, \ ]} \arrow{d}{\sigma_{\mathrm{B}}}
& \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}] \arrow{d}\\
\Theta \arrow{r} \arrow{d}
&0 \\
0.
\end{tikzcd}
\end{center}
Since $\textnormal{H}^{0}(X, \Theta)=\{0\}$, the long exact sequence in hyper-cohomology implies that
$\mathbb{H}^{0}(X, \mathcal{A}_{0}^{\bullet})\simeq \mathbb{H}^{0}(X, \mathcal{A}^{\bullet}).$ But, the automorphism group of a $\mathrm{G}$-oper on $X$ is finite \cite{BD05} (equal to the center of $\mathrm{G}$), so $\mathbb{H}^{0}(X, \mathcal{A}_{0}^{\bullet})=\{0\}.$ Hence $\mathbb{H}^{0}(X, \mathcal{A}^{\bullet})=\{0\}.$
Now, we turn to the final statement of Theorem \ref{kuranishi family}. By Theorem \ref{param opers 2} and using the fact that $\mathrm{G}$ is adjoint simple, there is a parameterization of $\Sigma$-marked $\mathrm{G}$-opers on $X$ by the vector space
\begin{align}
\bigoplus_{i=1}^{\ell}\textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1}).
\end{align}
Appealing again to \cite{CHE12}, the complex $\mathcal{A}_{0}^{\bullet}$ governs the deformation theory of $\Sigma$-marked $\mathrm{G}$-opers on the fixed $\Sigma$-marked Riemann surface $X,$ and therefore
\begin{align}\label{dim count}
\textnormal{dim}_{\mathbb{C}}(\mathbb{H}^{1}(X, \mathcal{A}_{0}^{\bullet}))&=\textnormal{dim}_{\mathbb{C}}\left(\bigoplus_{i=1}^{\ell}\textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1})\right) \\
&=\textnormal{dim}_{\mathbb{C}}(G)(g-1),
\end{align}
where the last equality follows by an application of the Riemann-Roch formula.
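For concreteness (our own numerical sanity check, not part of the proof): for $m_{i}\geq 1$ and $g\geq 2$, Riemann-Roch gives $\textnormal{dim}_{\mathbb{C}}\,\textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1})=(2m_{i}+1)(g-1)$, and summing over the exponents recovers $\textnormal{dim}_{\mathbb{C}}(G)(g-1)$. The sketch below verifies this for the type-$A$ series.

```python
# Sanity check of the dimension count (our illustration): for a line bundle
# of degree d > 2g - 2 on a genus-g curve, Riemann-Roch gives h^0 = d - g + 1.
# With d = (m + 1)(2g - 2) this is (2m + 1)(g - 1); summing over the
# exponents m_1, ..., m_l of g should give dim(G) * (g - 1).

def h0_power_of_K(m, g):
    d = (m + 1) * (2 * g - 2)      # degree of K^{m+1}
    assert d > 2 * g - 2           # so h^1 vanishes
    return d - g + 1               # Riemann-Roch

for n in range(2, 7):              # sl_n: exponents 1, ..., n-1
    exponents = list(range(1, n))
    dim_G = n * n - 1
    for g in range(2, 5):
        total = sum(h0_power_of_K(m, g) for m in exponents)
        assert total == dim_G * (g - 1)
```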
We first show that $\mathbb{H}^{2}(X, \mathcal{A}_{0}^{\bullet})=\{0\}.$ This will be achieved by examining the explicit models from Section \ref{models} and an application of the Grothendieck-Riemann-Roch theorem.
Using the explicit models of Section \ref{models}, there are $C^{\infty}$-isomorphisms
\begin{align}
\mathcal{E}_{\mathrm{B}}[\mathfrak{b}]&\simeq \bigoplus_{i=0}^{m_{\ell}} \mathcal{K}^{i}\otimes \mathfrak{g}_{i} \\
\mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}]&\simeq \bigoplus_{i=-1}^{m_{\ell}} \mathcal{K}^{i+1}\otimes \mathfrak{g}_{i}.
\end{align}
With this information, the relevant ranks and degrees of the bundles in question are:
\begin{align}
\textnormal{deg}(\mathcal{E}_{\mathrm{B}}[\mathfrak{b}])&=\sum_{i=0}^{m_{\ell}}
i(2g-2)\textnormal{dim}_{\mathbb{C}}(\mathfrak{g}_{i}). \\
\textnormal{rank}(\mathcal{E}_{\mathrm{B}}[\mathfrak{b}])&=\textnormal{dim}_{\mathbb{C}}(\mathfrak{b}).\\
\textnormal{deg}(\mathcal{K}\otimes\mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}])&=\sum_{i=-1}^{m_{\ell}}
(i+1)(2g-2)\textnormal{dim}_{\mathbb{C}}(\mathfrak{g}_{i}). \\
\textnormal{rank}(\mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}])&=\textnormal{dim}_{\mathbb{C}}(\mathfrak{g}^{-1}).
\end{align}
Therefore, using \eqref{dim count} and the fact that $\mathbb{H}^{0}(X, \mathcal{A}_{0}^{\bullet})=\{0\}$, an application of the Grothendieck-Riemann-Roch theorem reveals:
\begin{align}
(1-g)\textnormal{dim}_{\mathbb{C}}(G)+\textnormal{dim}_{\mathbb{C}}(\mathbb{H}^{2}(X, \mathcal{A}_{0}^{\bullet}))&=(1-g)\textnormal{rk}(\mathcal{A}_{0}^{\bullet})+\textnormal{deg}(\mathcal{A}_{0}^{\bullet})
\\
&=(1-g)\left(\textnormal{rk}(\mathcal{E}_{\mathrm{B}}[\mathfrak{b}])-\textnormal{rk}(\mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}])\right) \\
&+ \left(\textnormal{deg}(\mathcal{E}_{\mathrm{B}}[\mathfrak{b}])-\textnormal{deg}(\mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}])\right) \\
&=(g-1)\ell -(2g-2)\textnormal{dim}_{\mathbb{C}}(\mathfrak{b}) \\
&=(1-g)\left(2\textnormal{dim}_{\mathbb{C}}(\mathfrak{b})-\ell\right) \\
&=(1-g)\textnormal{dim}_{\mathbb{C}}(G).
\end{align}
Hence, $\mathbb{H}^{2}(X, \mathcal{A}_{0}^{\bullet})=\{0\}.$
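The arithmetic in the last chain of equalities relies only on the identities $\textnormal{rk}=\textnormal{dim}$, $\textnormal{dim}_{\mathbb{C}}(\mathfrak{g}^{-1})=\textnormal{dim}_{\mathbb{C}}(\mathfrak{b})+\ell$ and $\textnormal{dim}_{\mathbb{C}}(G)=2\textnormal{dim}_{\mathbb{C}}(\mathfrak{b})-\ell$. The following Python sketch (our own check, not part of the proof) verifies the Euler-characteristic computation from the principal grading of $\mathfrak{sl}_{n}$.

```python
# Verify, for g = sl_n with its principal grading, the Euler-characteristic
# computation chi(A_0) = (1 - g) * dim(G) carried out above (our own check).
# dim g_0 = n - 1 (the Cartan); dim g_h = n - |h| for 1 <= |h| <= n - 1
# (the number of roots of height h).

def dim_piece(n, h):
    return n - 1 if h == 0 else n - abs(h)

for n in range(2, 7):
    dim_G = n * n - 1
    ell = n - 1                                    # rank of sl_n
    m_top = n - 1                                  # highest exponent
    dim_b = sum(dim_piece(n, h) for h in range(0, m_top + 1))
    assert dim_G == 2 * dim_b - ell                # dim(G) = 2 dim(b) - l
    for g in range(2, 5):
        deg1 = sum(h * (2 * g - 2) * dim_piece(n, h) for h in range(0, m_top + 1))
        rk1 = dim_b                                # rk E_B[b]
        deg2 = sum((h + 1) * (2 * g - 2) * dim_piece(n, h) for h in range(-1, m_top + 1))
        rk2 = dim_b + dim_piece(n, -1)             # dim g^{-1} = dim b + l
        chi = (1 - g) * (rk1 - rk2) + (deg1 - deg2)
        assert chi == (1 - g) * dim_G
```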
Returning to the short exact sequence of complexes \eqref{SES X to S}
\begin{align}
0\rightarrow \mathcal{A}_{0}^{\bullet}\rightarrow \mathcal{A}^{\bullet}\rightarrow \Theta^{0}\rightarrow 0,
\end{align}
and using that $\textnormal{H}^{0}(X, \Theta)=\textnormal{H}^{2}(X, \Theta)=\{0\},$ the long exact sequence derived from \eqref{SES X to S} yields the exact sequence:
\begin{align}
0\rightarrow \mathbb{H}^{1}(X, \mathcal{A}_{0}^{\bullet})\rightarrow \mathbb{H}^{1}(X, \mathcal{A}^{\bullet})\rightarrow \textnormal{H}^{1}(X, \Theta)\rightarrow 0 \rightarrow \mathbb{H}^{2}(X, \mathcal{A}^{\bullet})\rightarrow 0.
\end{align}
This simultaneously proves that
\begin{align}
\textnormal{dim}_{\mathbb{C}}(\mathbb{H}^{1}(X, \mathcal{A}^{\bullet}))=(g-1)\textnormal{dim}_{\mathbb{C}}(G)+(3g-3)
\end{align}
where $g$ is the genus of $\Sigma,$ and that $\mathbb{H}^{2}(X, \mathcal{A}^{\bullet})=\{0\}.$ This completes the proof.
\end{proof}
The above proof also establishes the following:
\begin{corollary}\label{cor sur}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type and $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ be a $\Sigma$-marked $\mathrm{G}$-oper. Then, the induced map
\begin{align}
\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})\rightarrow \textnormal{H}^{1}(X, \Theta)
\end{align}
is surjective.
\end{corollary}
Now, we can apply the Kuranishi method and obtain the following theorem.
\begin{theorem}\label{Kuranishi family}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. Given any $\Sigma$-marked $\mathrm{G}$-oper $\Xi=(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$, there is an open set $U\subset \mathbb{H}^{1}(X, \mathcal{A}^{\bullet})$ containing the origin and a universal small deformation
\begin{align}
\left(\widehat{E_{\mathrm{G}}}, \widehat{E_{\mathrm{B}}}, \overline{\omega}, \mathcal{X}, (U,0)\right)
\end{align}
of $\Xi.$
\end{theorem}
\textbf{Remark}: Since we are omitting the proof of this result, let us make some comments. The aforementioned \emph{Kuranishi method} was introduced, building on the work of Kodaira and Spencer \cite{KOD86}, by Kuranishi \cite{KUR62} where he established that if $M$ is a compact complex manifold with $H^{2}(M, \Theta_{M})=\{0\},$ then there exists a universal small deformation of $M$ parameterized by an open set in $\textnormal{H}^{1}(M, \Theta_{M})$ containing the origin. This was followed by an explosive development in deformation theory which continues to this day: we cite \cite{MAN04} for an account of the basic theory and applications.
For our purposes, the results of \cite{CHE12} establish the infinitesimal deformation theory of a $\Sigma$-marked $\mathrm{G}$-oper as discussed in the previous theorem. The question of building a small universal deformation of a pair $(E_{\mathrm{G}}, X)$ was solved in \cite{CS16}.
We make two remarks here: since $E_{\mathrm{B}}$ is a reduction of structure of $E_{\mathrm{G}},$ a deformation of $E_{\mathrm{B}}$ induces a unique deformation of $E_{\mathrm{G}},$ and thus deformations of the triple $(E_{\mathrm{B}}, \omega, X)$ are equivalent to deformations of the tuple $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X).$ This explains why $\mathcal{A}(E_{\mathrm{G}})$ does not appear in the two-term complex \eqref{2 term}.
Also, the paper \cite{CS16} studies only holomorphic vector bundles. But, since $\mathrm{G}$ is of adjoint type, the deformation theory of $E_{\mathrm{G}}$ is completely equivalent to the deformation theory of the holomorphic vector bundle $E_{\mathrm{G}}[\mathfrak{g}],$ and therefore the results of \cite{CS16} concerning holomorphic vector bundles apply with no essential modifications.
A combination of the techniques in the aforementioned papers \cite{CHE12}, \cite{CS16} yields a proof of Theorem \ref{Kuranishi family} where no new ideas are necessary. Hence, we conclude the discussion of Theorem \ref{Kuranishi family} here.
\subsection{Global structure of $\Sigma$-marked $\mathrm{G}$-opers}
In this section, we finally arrive at the proof of Theorem \ref{def space}, providing the moduli space of $\Sigma$-marked $\mathrm{G}$-opers with a canonical complex structure when $\mathrm{G}$ is a complex simple Lie group of adjoint type.
\begin{theorem}\label{global opers}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. The moduli space of $\Sigma$-marked $\mathrm{G}$-opers $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ admits the structure of a Hausdorff complex manifold of dimension $\textnormal{dim}_{\mathbb{C}}(G)(g-1)+(3g-3)$ where $g$ is the genus of $\Sigma.$
Moreover, the natural map
\begin{align}
\mathcal{P}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}
\end{align}
is a holomorphic submersion.
Finally, there is a commutative diagram
\begin{center}
\begin{tikzcd}
T_{[(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)]}\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{r}{d\mathcal{P}} \arrow{d}
& T_{[X]}\mathcal{T}_{\Sigma} \arrow{d} \\
\mathbb{H}^{1}(X, \mathcal{A}^{\bullet}) \arrow{r}
& \textnormal{H}^{1}(X, \Theta_{X})
\end{tikzcd}
\end{center}
where the lower horizontal arrow is the induced map in hyper-cohomology from the exact sequence
\begin{align}
0\rightarrow \mathcal{A}_{0}^{\bullet}\rightarrow \mathcal{A}^{\bullet}\rightarrow \Theta\rightarrow 0.
\end{align}
\end{theorem}
\begin{proof}
Let $\Xi:=(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ be a $\Sigma$-marked $\mathrm{G}$-oper. Let $(\mathcal{B}_{\Xi},0_{\Xi})\subset (\mathbb{H}^{1}(X, \mathcal{A}^{\bullet}), 0)$ be the pointed base of a $\Sigma$-marked universal small deformation of $\Xi$ given by Theorem \ref{Kuranishi family}.
Define the map
\begin{align}
\phi_{\Xi}: \mathcal{B}_{\Xi}\rightarrow \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})
\end{align}
which sends $b\in \mathcal{B}_{\Xi}$ to the $\Sigma$-marked $\mathrm{G}$-oper structure on the fiber of the universal deformation over $b\in \mathcal{B}_{\Xi}.$ As the structure group of the $\Sigma$-marked deformation of $X$ is $\textnormal{Diff}_{0}(\Sigma),$ the map $\phi_{\Xi}$ is well defined. The proof proceeds in four steps which we enumerate below.
\begin{enumerate}
\item Up to shrinking $\mathcal{B}_{\Xi},$ the map $\phi_{\Xi}$ is injective.
Suppose there exist $b_{1}, b_{2}\in \mathcal{B}_{\Xi}$ such that $\phi_{\Xi}(b_{1})=\phi_{\Xi}(b_{2}).$ Let $\Xi_{b_{1}}$ and $\Xi_{b_{2}}$ be the $\Sigma$-marked $\mathrm{G}$-opers lying over $b_{1}, b_{2}\in \mathcal{B}_{\Xi}.$
By the assumption $\phi_{\Xi}(b_{1})=\phi_{\Xi}(b_{2})$ and the definition of $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$, there is an isomorphism $F$ of $\Sigma$-marked $\mathrm{G}$-opers between $\Xi_{b_{1}}$ and $\Xi_{b_{2}}.$ Since the family over $(\mathcal{B}_{\Xi}, 0_{\Xi})$ is universal, potentially shrinking $\mathcal{B}_{\Xi},$ the isomorphism $F$ is induced by an automorphism of $\Xi.$
Since $\mathrm{G}$ is a complex simple Lie group of adjoint type and $X$ has no non-trivial automorphisms isotopic to the identity, the results of \cite{BD05} imply that there are no non-trivial automorphisms of $\Xi.$ Therefore, the only possibility is that $b_{1}=b_{2}$ and $F=\textnormal{id.}$ This proves that $\phi_{\Xi}$ is injective.
\item The collection $\{\phi_{\Xi}(\mathcal{B}_{\Xi})\}_{\Xi\in \widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})}$\footnote{Strictly speaking, we should choose a subset of objects in $\widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})$ which is surjective onto $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$ We make no further mention of this set-theoretic issue, assuming the appropriate strengthening of the axiom of choice which makes this choice possible.} forms the basis of a topology on the set $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$
Recall that given a set $S,$ a subset $\mathfrak{B}$ of the power set of $S$ is a basis for a topology $\mathfrak{T}$ if and only if the elements of $\mathfrak{B}$ cover $S$, and for every $U, V\in \mathfrak{B}$ and any $s\in U\cap V,$ there exists $s\in W\subset U\cap V$ such that $W\in \mathfrak{B}.$
That the collection $\{ \phi_{\Xi}(\mathcal{B}_{\Xi})\}_{\Xi\in \widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})}$ covers $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ is obvious. The second condition follows from the fact that the restriction of a universal deformation of $\Xi$ with base $\mathcal{B}$ to an open subset $\mathcal{B}^{\prime}\subset \mathcal{B}$ such that $0\in \mathcal{B}^{\prime}$ is still a universal deformation of $\Xi.$ Therefore, the collection $\{\phi_{\Xi}(\mathcal{B}_{\Xi})\}_{\Xi\in \widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})}$ forms the base for a topology on the set $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$.
\item The collection $\{(\phi_{\Xi}, \mathcal{B}_{\Xi})\}_{\Xi\in \widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})}$ forms a holomorphic atlas on the topological space $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$ Upon proving this, we obtain a (potentially non-Hausdorff) complex manifold structure on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$
Suppose that $\textnormal{Im}(\phi_{\Xi})\cap \textnormal{Im}(\phi_{\Xi^{\prime}})\neq \emptyset$ and let $\Omega\in \textnormal{Im}(\phi_{\Xi})\cap \textnormal{Im}(\phi_{\Xi^{\prime}}).$ We need to show that
\begin{align}
\phi_{\Xi^{\prime}}^{-1}\circ \phi_{\Xi}: \phi_{\Xi}^{-1}\left(\textnormal{Im}(\phi_{\Xi})\cap \textnormal{Im}(\phi_{\Xi^{\prime}})\right) \rightarrow \phi_{\Xi^{\prime}}^{-1}\left(\textnormal{Im}(\phi_{\Xi})\cap \textnormal{Im}(\phi_{\Xi^{\prime}})\right)
\end{align}
is holomorphic.
Let
\begin{align}
(U,0_{\Xi}):=\phi_{\Xi}^{-1}\left(\textnormal{Im}(\phi_{\Xi})\cap \textnormal{Im}(\phi_{\Xi^{\prime}})\right)\subset (\mathcal{B}_{\Xi},0_{\Xi})
\end{align}
and
\begin{align}
(V, 0_{\Xi^{\prime}}):=\phi_{\Xi^{\prime}}^{-1}\left(\textnormal{Im}(\phi_{\Xi})\cap \textnormal{Im}(\phi_{\Xi^{\prime}})\right)\subset (\mathcal{B}_{\Xi^{\prime}},0_{\Xi^{\prime}})
\end{align}
where
\begin{align}
\Omega=\phi_{\Xi}(0_{\Xi})=\phi_{\Xi^{\prime}}(0_{\Xi^{\prime}}).
\end{align}
By construction, $(V, 0_{\Xi^{\prime}})$ is the base of a universal deformation of $\Omega.$ Since $(U,0_{\Xi})$ is the base of another deformation of $\Omega,$ there exists a unique holomorphic map
\begin{align}
f: (U,0_{\Xi})&\rightarrow (V, 0_{\Xi^{\prime}}) \\
0_{\Xi}&\mapsto 0_{\Xi^{\prime}},
\end{align}
defined by the fact that $(V, 0_{\Xi^{\prime}})$ is the base of a universal deformation of $\Omega.$
By construction, $f=\phi_{\Xi^{\prime}}^{-1}\circ \phi_{\Xi}.$
This completes the proof.
\item The topology on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ is Hausdorff.
By construction, the map $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}$ is continuous, and by Theorem \ref{param opers 2}, the fibers of this map are Hausdorff with respect to the subspace topology: they are biholomorphic to $\bigoplus_{i=1}^{\ell} \textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1})$. Since $\mathcal{T}_{\Sigma}$ is Hausdorff, this implies that $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ is Hausdorff.
\end{enumerate}
This completes the proof that $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ is a Hausdorff complex manifold.
For any $\Sigma$-marked $\mathrm{G}$-oper $\Xi,$ the base $\mathcal{B}_{\Xi}$ of any universal deformation is an open subset of the complex vector space $\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})$, which by Theorem \ref{kuranishi family} is of dimension $\textnormal{dim}_{\mathbb{C}}(G)(g-1)+(3g-3).$
This establishes the equality
\begin{align}
\textnormal{dim}_{\mathbb{C}}(\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}))=\textnormal{dim}_{\mathbb{C}}(G)(g-1)+(3g-3).
\end{align}
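As a consistency check (ours, not part of the argument above): for $\mathrm{G}=\textnormal{PSL}_{2}(\mathbb{C})$ the formula gives $3(g-1)+(3g-3)=6g-6$, the dimension of the cotangent bundle of Teichm\"uller space, matching the classical identification of $\textnormal{PSL}_{2}(\mathbb{C})$-opers with complex projective structures (Theorem \ref{Hubbard}).

```python
# Consistency check (our illustration): dim Op_Sigma(G) = dim(G)(g-1) + (3g-3).
# For G = PSL_2(C), dim(G) = 3 and the formula collapses to 6g - 6.
def dim_opers(dim_G, g):
    return dim_G * (g - 1) + (3 * g - 3)

for g in range(2, 6):
    assert dim_opers(3, g) == 6 * g - 6
```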
Finally, our construction of the holomorphic structure on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ implies that the derivative of the map $\mathcal{P}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}$ identifies with the induced map in hyper-cohomology
\begin{align}\label{derivative map}
\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})\rightarrow \textnormal{H}^{1}(X, \Theta)
\end{align}
of the map of complexes
\begin{center}
\begin{tikzcd}
\mathcal{A}^{\bullet}:=\mathcal{A}(E_{\mathrm{B}})\arrow{r} \arrow{d}{\sigma_{\mathrm{B}}}
& \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}] \arrow{d} \\ \label{induced map}
\Theta \arrow{r}
& 0.
\end{tikzcd}
\end{center}
As the kernel of the aforementioned map is given by the complex
\begin{align}
\mathcal{A}_{0}^{\bullet}:= \mathcal{E}_{\mathrm{B}}[\mathfrak{b}]\rightarrow \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}],
\end{align}
the map \eqref{derivative map} extends to an exact sequence
\begin{align}
\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})\rightarrow \textnormal{H}^{1}(X, \Theta)\rightarrow
\mathbb{H}^{2}(X, \mathcal{A}_{0}^{\bullet}).
\end{align}
The hyper-cohomology group $\mathbb{H}^{2}(X, \mathcal{A}_{0}^{\bullet})$ vanishes by the proof of Theorem \ref{kuranishi family}. Therefore, the map $\mathcal{P}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}$ has a surjective complex-linear derivative at every point, and thus $\mathcal{P}$ is a holomorphic submersion (see Corollary \ref{cor sur}). This completes the proof.
\end{proof}
The proof of Theorem \ref{global opers} also implies the following result.
\begin{theorem}
The space $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ is the base of a universal family of $\Sigma$-marked $\mathrm{G}$-opers. Therefore, $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ is a fine moduli space.
\end{theorem}
\begin{proof}
The universal families lying over the bases $\mathcal{B}_{\Xi}$ used to construct charts on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ glue together to yield a universal family of $\Sigma$-marked $\mathrm{G}$-opers over $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$
\end{proof}
As another consequence of Theorem \ref{global opers} and the discussion of $\Sigma$-marked developed $\mathrm{G}$-opers in Section \ref{g opers} we obtain the following result.
\begin{theorem}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. Then the space of $\Sigma$-marked developed $\mathrm{G}$-opers has the structure of a Hausdorff complex manifold such that the set map
\begin{align}
\mathcal{D}: \mathcal{DO}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})
\end{align}
is a biholomorphism of complex manifolds.
Furthermore, the mapping class group $\textnormal{Mod}(\Sigma)$-action on each side is holomorphic and $\mathcal{D}$ is mapping class group equivariant.
\end{theorem}
\subsection{Marked $\mathrm{G}$-opers and the Hitchin base}
This very short section generalizes the identification of complex projective structures with the cotangent bundle of Teichm\"uller space from Theorem \ref{Hubbard} to the setting of $\mathrm{G}$-opers. As always in this discussion, $\mathrm{G}$ is a complex simple Lie group of adjoint type.
If $\mathrm{G}$ is a complex simple Lie group of adjoint type with Lie algebra $\mathfrak{g},$ let $\mathcal{B}_{\Sigma}(\mathrm{G})$ be the bundle over Teichm\"uller space whose fiber
over a $\Sigma$-marked Riemann surface $X$ is the space $\bigoplus_{i=1}^{\ell} \textnormal{H}^{0}(X, \mathcal{K}^{m_{i}+1}),$ and where the positive integers $\{m_{i}\}_{i=1}^{\ell}$ are the exponents of $\mathfrak{g}.$ The set $\mathcal{B}_{\Sigma}(\mathrm{G})$ has the structure of a holomorphic vector bundle over $\mathcal{T}_{\Sigma}.$
\begin{theorem}\label{h base}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. For every $C^{\infty}$-section $s$ of the projection
\begin{align}
\pi: \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow
\mathcal{T}_{\Sigma},
\end{align}
there is a commutative diagram
\begin{center}
\begin{tikzcd}
\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\arrow{r}{\phi_{s}}\arrow{d}{\mathcal{P}}
& \mathcal{B}_{\Sigma}(\mathrm{G}) \arrow{d} \\
\mathcal{T}_{\Sigma} \arrow{r}{\textnormal{id.}}
& \mathcal{T}_{\Sigma},
\end{tikzcd}
\end{center}
where $\phi_{s}$ is a diffeomorphism.
If $s$ is holomorphic, then the diffeomorphism $\phi_{s}$ is holomorphic.
\end{theorem}
\textbf{Examples}: The most obvious section $s_{\mathcal{F}}$ of $\pi: \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow
\mathcal{T}_{\Sigma}$ is given by selecting the Fuchsian uniformizing $\textnormal{PSL}_{2}(\mathbb{C})$-oper from Section \ref{models} in every fiber. The section $s_{\mathcal{F}}$ is not holomorphic, and yields a diffeomorphism
\begin{align}
\phi_{s_{\mathcal{F}}}:\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{B}_{\Sigma}(\mathrm{G})
\end{align}
which maps the sub-manifold of Fuchsian uniformizing $\mathrm{G}$-opers onto the zero section of $\mathcal{B}_{\Sigma}(\mathrm{G}).$
Holomorphic sections of $\pi: \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow
\mathcal{T}_{\Sigma}$ can be obtained utilizing Bers' simultaneous uniformization theorem (see the remark following Theorem \ref{Hubbard}), which therefore yields a family of biholomorphisms onto $\mathcal{B}_{\Sigma}(\mathrm{G})$
parameterized by $\mathcal{T}_{\Sigma}.$
\begin{proof}
Let $s$ be a section of $\pi$ and let $X\in \mathcal{T}_{\Sigma}.$ Using Theorem \ref{param opers 2} and the base-point $s(X),$ we obtain a bijection
\begin{align}
\mathcal{O}\mathfrak{p}_{X}(\mathrm{G})\simeq \mathcal{B}_{X}(\mathrm{G}).
\end{align}
By Theorem \ref{unique iso}, the dependence of this isomorphism on $X$ is determined by the regularity of the section $s$\footnote{Here, the real justification comes from inspecting the proof in \cite{BD05}: it is clear from the explicit construction of the map in \cite{BD05} that the regularity of $\phi_{s}$ matches the regularity of $s.$}. This completes the proof.
\end{proof}
\section{The holonomy map and pre-symplectic geometry} \label{hol map pre}
\subsection{The holonomy map}
Finally, we come to the study of the forgetful map from $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$
to the space of $C^{\infty}$-flat $\mathrm{G}$-bundles on $\Sigma,$ and prove that it is a holomorphic immersion when $\mathrm{G}$ is a complex simple Lie group of adjoint type.
Consider the category $\widetilde{\mathcal{F}}_{\Sigma}(\mathrm{G})$ whose objects are $C^{\infty}$-flat bundles $(E_{\mathrm{G}}, \omega)$ over $\Sigma.$ Morphisms in this category are given by commutative diagrams
\[
\begin{tikzcd}
E_{\mathrm{G}} \arrow{d} \arrow{r}{\phi}
& E_{\mathrm{G}}^{\prime} \arrow{d} \\
\Sigma \arrow{r}{h}
& \Sigma
\end{tikzcd}
\]
such that the $C^{\infty}$-map $h: \Sigma\rightarrow \Sigma$ is isotopic to the identity and $\phi$ is a smooth isomorphism of $\mathrm{G}$-bundles such that $\phi^{\star}\omega^{\prime}=\omega.$
It is well known that the natural topology on the set of isomorphism classes in $\widetilde{\mathcal{F}}_{\Sigma}(\mathrm{G})$ is non-Hausdorff, but upon restricting to a suitable sub-category we can remedy this situation.
Consider the full subcategory $\widetilde{\mathcal{F}}_{\Sigma}^{\star}(\mathrm{G})$ whose objects consist of irreducible flat $\mathrm{G}$-bundles
whose automorphism group is equal to the center of $\mathrm{G}$. Let $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ denote the set of isomorphism classes. The following theorem is well known (see \cite{GOL84}), though the discussion there deals with the equivalent question for homomorphisms $\pi_{1}(\Sigma)\rightarrow \mathrm{G}.$
\begin{theorem}
Let $\mathrm{G}$ be a connected complex semi-simple Lie group. Then, the set $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ admits the structure of a Hausdorff complex manifold of dimension $(2g-2)\textnormal{dim}_{\mathbb{C}}(\mathrm{G})$ where $g$ is the genus of $\Sigma.$
\end{theorem}
The following result of Beilinson-Drinfeld \cite{BD05} allows us to only consider the complex manifold $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G}).$
\begin{proposition}\label{autos}
Let $\mathrm{G}$ be a connected complex semi-simple Lie group and suppose $\Xi:=(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ is a $\Sigma$-marked $\mathrm{G}$-oper. Then,
\begin{enumerate}
\item The automorphism group of $\Xi$ is equal to the center of $\mathrm{G}.$
\item The induced $C^{\infty}$-flat $\mathrm{G}$-bundle $(E_{\mathrm{G}}, \omega)$ on $\Sigma$ is irreducible with automorphism group equal to the center of $\mathrm{G}.$
\end{enumerate}
\end{proposition}
Given a $\Sigma$-marked $\mathrm{G}$-oper $(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$, let $(E_{\mathrm{G}}, \omega)$ denote the corresponding $C^{\infty}$-flat $\mathrm{G}$-bundle.
By Proposition \ref{autos}, the functor
\begin{align}
\widetilde{\textnormal{H}}:\widetilde{\mathcal{O}\mathfrak{p}}_{\Sigma}(\mathrm{G})&\rightarrow \widetilde{\mathcal{F}}_{\Sigma}^{\star}(\mathrm{G}) \\
(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)&\mapsto (E_{\mathrm{G}}, \omega)
\end{align}
is fully faithful and
descends to a smooth map
\begin{align}
\textnormal{H}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{F}_{\Sigma}^{\star}(\mathrm{G}).
\end{align}
We now prove the main theorem of this article.
\begin{theorem}\label{hol immersion}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. The map
\begin{align}
\textnormal{H}:\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{F}_{\Sigma}^{\star}(\mathrm{G})
\end{align}
is a holomorphic immersion.
Moreover, there is a (natural in $\mathrm{G}$) commutative diagram
of holomorphic maps
\begin{center}
\begin{tikzcd}
\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C})) \arrow{r}{\textnormal{H}} \arrow{d}{\iota_{\mathrm{G}}}
& \mathcal{F}_{\Sigma}^{\star}(\textnormal{PSL}_{2}(\mathbb{C})) \arrow{d}{\iota_{\mathrm{G}}} \\
\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{r}{\textnormal{H}}
&\mathcal{F}_{\Sigma}^{\star}(\mathrm{G}).
\end{tikzcd}
\end{center}
\end{theorem}
\textbf{Remark:} For $\mathrm{G}=\textnormal{PSL}_{2}(\mathbb{C})$, this was proved independently (and by varying methods) by Earle \cite{EAR81}, Hejhal \cite{HEJ78} and Hubbard \cite{HUB81}. Our proof is in the spirit of Hubbard's \cite{HUB81}: in particular, we identify the differential of $\textnormal{H}$ with a certain induced map in hyper-cohomology, and use a differential-geometric argument to prove the injectivity of this map.
\begin{proof}
Let $\Xi:=(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)$ be a $\Sigma$-marked $\mathrm{G}$-oper and recall the two term complex
\begin{align}
\mathcal{A}^{\bullet}:=\mathcal{A}(E_{\mathrm{B}})\xrightarrow{[\hat{\omega}, \ ]} \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}],
\end{align}
which controls the deformation theory of $\Xi.$
First, we show that there is an injective map
\begin{align}\label{oper to flat}
\mathcal{A}^{\bullet}\rightarrow \mathcal{B}^{\bullet}
\end{align}
where
\begin{align}
\mathcal{B}^{\bullet}:= \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]\xrightarrow{[\hat{\omega}, \ ]} \mathcal{K}\otimes \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]
\end{align}
is the holomorphic de Rham complex of the holomorphic flat bundle $(E_{\mathrm{G}}, \omega, X).$
To define \eqref{oper to flat}, consider the commutative diagram
\begin{equation}
\begin{tikzcd}
\mathcal{A}(E_{\mathrm{B}})\arrow{d}{\iota} \arrow{dr}{\sigma_{\mathrm{B}}} \\
\mathcal{A}(E_{\mathrm{G}}) \arrow{r}{\sigma_{\mathrm{G}}}
& \Theta.
\end{tikzcd}
\end{equation}
where $\iota$ is the inclusion.
If $\hat{\omega}: \Theta\rightarrow \mathcal{A}(E_{\mathrm{G}})$ is the holomorphic flat connection appearing in $\Xi$, define the injective map
\begin{align}\label{to flat bundle}
\iota-\hat{\omega}\circ \sigma_{\mathrm{B}}: \mathcal{A}(E_{\mathrm{B}})\rightarrow \mathcal{A}(E_{\mathrm{G}}).
\end{align}
Since $\hat{\omega}$ is a splitting, $\sigma_{\mathrm{G}}\circ(\iota-\hat{\omega}\circ \sigma_{\mathrm{B}})=0$.
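Spelling this out, using only the commutativity $\sigma_{\mathrm{B}}=\sigma_{\mathrm{G}}\circ \iota$ of the triangle above and the splitting property $\sigma_{\mathrm{G}}\circ\hat{\omega}=\textnormal{id}_{\Theta}$:
\begin{align}
\sigma_{\mathrm{G}}\circ(\iota-\hat{\omega}\circ \sigma_{\mathrm{B}})=\sigma_{\mathrm{G}}\circ\iota-(\sigma_{\mathrm{G}}\circ\hat{\omega})\circ \sigma_{\mathrm{B}}=\sigma_{\mathrm{B}}-\sigma_{\mathrm{B}}=0.
\end{align}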
Since $\mathcal{E}_{\mathrm{G}}[\mathfrak{g}]=\textnormal{ker}(\sigma_{\mathrm{G}}),$ the map \eqref{to flat bundle} lifts to a map
\begin{align}
\Phi: \mathcal{A}(E_{\mathrm{B}})\rightarrow \mathcal{E}_{\mathrm{G}}[\mathfrak{g}].
\end{align}
Define the injective map \eqref{oper to flat} by
\begin{equation}
\begin{tikzcd}
\mathcal{A}(E_{\mathrm{B}}) \arrow{d}{\Phi} \arrow{r}{[\hat{\omega}, - ]}
& \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}] \arrow{d} \\
\mathcal{E}_{\mathrm{G}}[\mathfrak{g}] \arrow{r}{[\hat{\omega}, - ]}
& \mathcal{K} \otimes \mathcal{E}_{\mathrm{G}}[\mathfrak{g}],
\end{tikzcd}
\end{equation}
where the right vertical arrow is the inclusion of $\mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}]$ as a locally-free sub-sheaf.
Viewing the complexes $\mathcal{A}^{\bullet}$ and $\mathcal{B}^{\bullet}$ as objects in the abelian category of bounded complexes of coherent analytic sheaves over $X,$ the map \eqref{oper to flat} has a co-kernel $\mathcal{N}^{\bullet},$ and hence there is an exact sequence of complexes
\begin{align}
0\rightarrow \mathcal{A}^{\bullet}\rightarrow \mathcal{B}^{\bullet}\rightarrow \mathcal{N}^{\bullet}\rightarrow 0.
\end{align}
Fortunately, we can identify an explicit model of the complex $\mathcal{N}^{\bullet}$ consisting of locally-free sheaves. Let $\Psi: \Theta\rightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]$ be the second fundamental form of $\omega$ relative to $E_{\mathrm{B}}$ and $\mathcal{N}^{0}:=\textnormal{coker}(\Psi).$ Since $\Psi$ is an injective morphism of the corresponding bundles, $\mathcal{N}^{0}$ is a holomorphic vector bundle on $X.$
We now show that there is an exact sequence
\begin{align}\label{seq nat}
0\rightarrow \mathcal{A}(E_{\mathrm{B}})\xrightarrow{\Phi} \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]\xrightarrow{p} \mathcal{N}^{0}\rightarrow 0.
\end{align}
Consider the commutative diagram with exact top row
\begin{center}
\begin{tikzcd}
&
0
&
&
& \\
0
& \mathcal{N}^{0} \arrow{l} \arrow{u}
& \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}] \arrow{l}
& \Theta \arrow[swap]{l}{\Psi}
& 0 \arrow{l} \\
&\mathcal{E}_{\mathrm{G}}[\mathfrak{g}] \arrow{u}{p}
& \mathcal{A}(E_{\mathrm{G}}) \arrow{ur}{\sigma_{\mathrm{G}}} \arrow[dashrightarrow]{u}
&
& \\
&
& \mathcal{A}(E_{\mathrm{B}}) \arrow{ul}{\Phi} \arrow{u}{\iota} \arrow[swap]{uur}{\sigma_{\mathrm{B}}}
&
& \\
&
& 0 \arrow{u}.
&
&
\end{tikzcd}
\end{center}
The map $p:\mathcal{E}_{\mathrm{G}}[\mathfrak{g}]\rightarrow \mathcal{N}^{0}$ is defined as the composition of the surjective projections
\begin{align}
\mathcal{E}_{\mathrm{G}}[\mathfrak{g}]\rightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]\rightarrow \mathcal{N}^{0},
\end{align}
and therefore $p$ is surjective.
The vertical dashed arrow
\begin{align}
\mathcal{A}(E_{\mathrm{G}})\dashrightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]
\end{align}
is defined as the composition $\Psi\circ \sigma_{\mathrm{G}}$ so that the diagram commutes.
Note that
\begin{align}
0\rightarrow \mathcal{A}(E_{\mathrm{B}})\xrightarrow{\iota} \mathcal{A}(E_{\mathrm{G}})\dashrightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]
\end{align}
is \emph{not} exact in the middle, though the initial map $\iota$ is injective. Since $\Psi$ is injective, there \emph{is} an exact sequence
\begin{align}
0\rightarrow \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]\rightarrow \mathcal{A}(E_{\mathrm{G}})\dashrightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}].
\end{align}
By commutativity and the exactness of the top row,
$\mathcal{A}(E_{\mathrm{B}})\subset \textnormal{ker}(p).$
Since $p$ is a surjective map of holomorphic
vector bundles, $\textnormal{ker}(p)$ is a holomorphic vector bundle such that
\begin{align}
\textnormal{rk}(\textnormal{ker}(p))=\textnormal{rk}(E_{\mathrm{G}}[\mathfrak{g}])-\textnormal{rk}(\mathcal{N}^{0}).
\end{align}
Now we compute,
\begin{align}
\textnormal{rk}(\mathcal{A}(E_{\mathrm{B}}))&=\textnormal{rk}(E_{\mathrm{B}}[\mathfrak{b}])+\textnormal{rk}(\Theta) \\
&= \textnormal{rk}(E_{\mathrm{G}}[\mathfrak{g}])-\textnormal{rk}(E_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}])+\textnormal{rk}(\Theta) \\
&=\textnormal{rk}(E_{\mathrm{G}}[\mathfrak{g}])-\textnormal{rk}(\mathcal{N}^{0}) \\
&= \textnormal{rk}(\textnormal{ker}(p)).
\end{align}
Since $\mathcal{A}(E_{\mathrm{B}})\subset \textnormal{ker}(p),$ it follows that $\mathcal{A}(E_{\mathrm{B}})= \textnormal{ker}(p).$ This defines the promised exact sequence \eqref{seq nat}
\begin{align}
0\rightarrow \mathcal{A}(E_{\mathrm{B}})\xrightarrow{\Phi} \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]\xrightarrow{p} \mathcal{N}^{0}\rightarrow 0.
\end{align}
Defining $\mathcal{N}^{1}:= \mathcal{K}\otimes \mathcal{E}_{B}[\mathfrak{g}/\mathfrak{g}^{-1}],$ we obtain a short exact sequence of complexes
\[
\begin{tikzcd}
0\arrow{d}
& 0\arrow{d} \\
\mathcal{A}(E_{\mathrm{B}}) \arrow{d}{\Phi} \arrow{r}{[\hat{\omega}, -]}
& \mathcal{K}\otimes \mathcal{E}_{\mathrm{B}}[\mathfrak{g}^{-1}] \arrow{d} \\
\mathcal{E}_{\mathrm{G}}[\mathfrak{g}] \arrow{d}{p} \arrow{r}{[\hat{\omega}, -]}
& \mathcal{K} \otimes \mathcal{E}_{\mathrm{G}}[\mathfrak{g}] \arrow{d}\\
\mathcal{N}^{0} \arrow{r} \arrow{d}
& \mathcal{K}\otimes \mathcal{E}_{B}[\mathfrak{g}/\mathfrak{g}^{-1}] \arrow{d} \\
0
&0,
\end{tikzcd}
\]
where the bottom horizontal arrow is uniquely defined by exactness and commutativity.
Define the complex $\mathcal{N}^{\bullet}$ via
\begin{align}
\mathcal{N}^{\bullet}:= \mathcal{N}^{0}\rightarrow \mathcal{K}\otimes \mathcal{E}_{B}[\mathfrak{g}/\mathfrak{g}^{-1}].
\end{align}
The holomorphic de Rham complex $\mathcal{B}^{\bullet}$ is a resolution of the local system $\mathbb{E}_{G}[\mathfrak{g}]_{\omega}$ defined by the holomorphic flat connection $\omega.$ Therefore, there is a canonical isomorphism $\textnormal{H}^i(X, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})\simeq \mathbb{H}^{i}(X, \mathcal{B}^{\bullet}).$
Taking the relevant portion of the long exact sequence in hyper-cohomology yields
\begin{align}
...\rightarrow \mathbb{H}^{0}(X, \mathcal{N}^{\bullet}) \rightarrow \mathbb{H}^{1}(X, \mathcal{A}^{\bullet})\rightarrow \textnormal{H}^{1}(X, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})\rightarrow...
\end{align}
If $\mathbb{H}^{0}(X, \mathcal{N}^{\bullet})=\{0\}$, then the $\mathbb{C}$-linear map
\begin{align}
\mathbb{H}^{1}(X, \mathcal{A}^{\bullet})\rightarrow \textnormal{H}^{1}(X, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})
\end{align}
is injective. But, there is a canonical commutative diagram of $\mathbb{C}$-linear maps
\[
\begin{tikzcd}
T_{[(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)]}\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{r}{d\textnormal{H}} \arrow{d}
& T_{H([(E_{\mathrm{G}}, E_{\mathrm{B}}, \omega, X)])} \mathcal{F}_{\Sigma}^{\star}(\mathrm{G}) \arrow{d} \\
\mathbb{H}^{1}(X, \mathcal{A}^{\bullet}) \arrow{r}
& \textnormal{H}^{1}(X, \mathbb{E}_{G}[\mathfrak{g}]_{\omega}),
\end{tikzcd}
\]
where the vertical arrows are $\mathbb{C}$-linear isomorphisms. This proves that $d\textnormal{H}$ is $\mathbb{C}$-linear. Therefore, $\textnormal{H}$ is holomorphic. Hence, if $\mathbb{H}^{0}(X, \mathcal{N}^{\bullet})=\{0\},$ it follows that $\textnormal{H}$ is a holomorphic immersion.
To show that $\mathbb{H}^{0}(X, \mathcal{N}^{\bullet})=\{0\},$ we will verify the a priori stronger vanishing $\textnormal{H}^{0}(X, \mathcal{N}^{0})=\{0\}.$ Given the short exact sequence
\begin{align}\label{normal LES}
0\rightarrow \Theta\xrightarrow{\Psi} \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]\rightarrow \mathcal{N}^{0}\rightarrow 0
\end{align}
and using the fact that $\textnormal{H}^{0}(X, \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}])=0$ \cite[pg.~592]{AB83}, the vanishing of $\textnormal{H}^{0}(X, \mathcal{N}^{0})$ is equivalent to the injectivity of the map
\begin{align}\label{injective map}
\textnormal{H}^{1}(X, \Theta)\rightarrow \textnormal{H}^{1}(X, \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}])
\end{align}
induced by $\Psi$ arising in the long exact sequence of \eqref{normal LES}.
To achieve this, recall from Section \ref{models} that there is a $C^{\infty}$-bundle isomorphism
\begin{align}
E_{\mathrm{G}}[\mathfrak{g}]\simeq\bigoplus_{i=-m_{\ell}}^{m_{\ell}} K^{i}\otimes \mathfrak{g}_{i}.
\end{align}
Furthermore, in these coordinates the holomorphic structure on $E_{\mathrm{G}}[\mathfrak{g}]$ is defined by the following $\overline{\partial}$-operator (see Section \ref{models}):
\begin{align}\label{partial op}
\overline{\partial}\left(\sum_{i=-m_{\ell}}^{m_{\ell}} \beta_{i}\otimes V_{i}\right)=
\sum_{i=-m_{\ell}}^{m_{\ell}} \left(\overline{\partial}_{i} \beta_{i}\otimes V_{i}+h\cdot \beta_{i}\otimes\textnormal{ad}(e_{1})(V_{i})\right).
\end{align}
As in Section \ref{models}, $h$ is the Hermitian metric on $\Theta$ arising from the uniformizing hyperbolic metric on $X,$
and $\overline{\partial}_{i}$ is the $\overline{\partial}$-operator defining the holomorphic structure on the $i$-th pluri-canonical bundle $K^{i}.$ Finally, $e_{1}\in \mathfrak{g}_{1}$ is a principal nilpotent element.
In these coordinates, there is a $C^{\infty}$-isomorphism
\begin{align}
E_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]\simeq \bigoplus_{i=-m_{\ell}}^{-1} K^{i}\otimes \mathfrak{g}_{i},
\end{align}
and the previously defined $\overline{\partial}$-operator \eqref{partial op} defines a holomorphic structure on $E_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}].$
Moreover, by Proposition \ref{unif oper}, the second fundamental form $\Psi: \Theta\rightarrow \mathcal{E}_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]$ is given by
\begin{align}
\Theta &\rightarrow \bigoplus_{i=-m_{\ell}}^{-1} K^{i}\otimes \mathfrak{g}_{i} \\
\xi &\mapsto \xi\otimes f_{1}.
\end{align}
Here, recall that $f_{1}\in \mathfrak{g}_{-1}$ is a principal nilpotent element such that $\{f_{1},x, e_{1}\}$ form an $\mathfrak{sl}_{2}$-triple in $\mathfrak{g}$ (see Section \ref{models}).
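As an aside, the commutation relations of such a triple can be sanity-checked numerically in the defining representation of $\mathfrak{sl}_{2}(\mathbb{C})$; the following sketch is an illustration only (not part of the argument), and in particular it verifies the identity $[f_{1},x]=2f_{1}$ used at the end of the proof.

```python
# Illustrative check (not part of the proof): the sl_2 commutation
# relations for the standard triple {f, x, e} in the defining 2x2
# representation, including [f, x] = 2f used in the final step below.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def bracket(a, b):
    ab, ba = matmul(a, b), matmul(b, a)
    return [[ab[i][j] - ba[i][j] for j in range(2)] for i in range(2)]

x = [[1, 0], [0, -1]]   # semi-simple element
e = [[0, 1], [0, 0]]    # raising nilpotent
f = [[0, 0], [1, 0]]    # lowering nilpotent

assert bracket(x, e) == [[0, 2], [0, 0]]   # [x, e] = 2e
assert bracket(f, x) == [[0, 0], [2, 0]]   # [f, x] = 2f
assert bracket(e, f) == x                  # [e, f] = x
print("sl2 relations verified")
```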
We need some more Lie-theoretic preliminaries before continuing: in particular we need a basis of $\mathfrak{g}$ which is well adapted to the $\mathfrak{sl}_{2}$-triple $\{f_{1}, x, e_{1}\}$.
The above $\mathfrak{sl}_{2}$-triple induces an injective homomorphism of Lie algebras
\begin{align}
\iota_{\mathfrak{g}}: \mathfrak{sl}_{2}(\mathbb{C})\rightarrow \mathfrak{g}.
\end{align}
With respect to the induced adjoint action of $\mathfrak{sl}_{2}(\mathbb{C})$ on $\mathfrak{g}$, the Lie algebra $\mathfrak{g}$ decomposes as a sum of simple $\mathfrak{sl}_{2}(\mathbb{C})$-modules
\begin{align}
\mathfrak{g}=\bigoplus_{i=1}^{\ell} W_{i},
\end{align}
where the dimension of $W_{i}$ is $2m_{i}+1;$ this is one way to define the exponents $\{m_{i}\}_{i=1}^{\ell}$ of $\mathfrak{g}.$
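For orientation, a standard example: for $\mathfrak{g}=\mathfrak{sl}_{\ell+1}(\mathbb{C})$ the exponents are $m_{i}=i$ for $1\leq i\leq \ell$, and the dimension count is consistent:
\begin{align}
\sum_{i=1}^{\ell}(2m_{i}+1)=\ell(\ell+1)+\ell=(\ell+1)^{2}-1=\textnormal{dim}_{\mathbb{C}}\,\mathfrak{sl}_{\ell+1}(\mathbb{C}).
\end{align}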
It is a standard fact in Lie theory that there exist regular semi-simple elements $H_{i}\in W_{i}$, with $H_{1}=x,$ such that
\begin{align}
\{\textnormal{ad}^{j}(f_{1})(H_{i})\}_{j=1}^{m_{i}}
\end{align}
is a basis of
\begin{align}
W_{i}\cap \bigoplus_{k=-m_{\ell}}^{-1} \mathfrak{g}_{k}.
\end{align}
This yields a basis
\begin{align}\label{BASIS}
\{\{\textnormal{ad}^{j}(f_{1})(H_{i})\}_{j=1}^{m_{i}}\}_{i=1}^{\ell}
\end{align}
of
\begin{align}
\mathfrak{g}/\mathfrak{b}\simeq \bigoplus_{k=-m_{\ell}}^{-1} \mathfrak{g}_{k}.
\end{align}
With these Lie-theoretic preliminaries out of the way, we utilize the Dolbeault resolution and assume there exists $\mu\in \mathcal{A}^{(0,1)}(X, \Theta)$ such that
\begin{align}
\Psi(\mu)=\mu\otimes f_{1}=0\in \textnormal{H}^{(0,1)}(X, E_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]).
\end{align}
If we show that this implies $\mu=0\in \textnormal{H}^{(0,1)}(X, \Theta),$ this will prove the injectivity of the map \eqref{injective map}, thereby proving that the holonomy map $\textnormal{H}$ is an immersion.
With respect to the basis \eqref{BASIS},
\begin{align}
\mu\otimes f_{1}=0\in \textnormal{H}^{(0,1)}(X, E_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}])
\end{align}
if and only if there exist smooth sections
\begin{align}
\{\beta_{i}^{j}\}_{i=1}^{\ell} \subset \mathcal{A}^{0}(X, \mathcal{K}^{-j})
\end{align}
for $1\leq j \leq m_{\ell},$
and a smooth section
\begin{align}\label{explicit}
s=\sum_{j=1}^{m_{\ell}}\left(\sum_{i=1}^{\ell} \beta_{i}^{j}\otimes \textnormal{ad}(f_{1})^{j}(H_{i})\right)
\end{align}
of $E_{\mathrm{B}}[\mathfrak{g}/\mathfrak{b}]$
which satisfies
\begin{align}\label{zero eq}
\overline{\partial}s=\mu \otimes f_{1}.
\end{align}
Expanding \eqref{zero eq} using the explicit form \eqref{explicit} and using the induced decomposition $\mathfrak{g}/\mathfrak{b}=\bigoplus_{i=-m_{\ell}}^{-1}\mathfrak{g}_{i}$ leads to the explicit system of equations:
\begin{align}
\sum_{i=1}^{\ell} \left(\overline{\partial}_{-1}\beta_{i}^{1}\otimes \textnormal{ad}(f_{1})(H_{i}) + h\cdot \beta_{i}^{2}\otimes [e_{1}, \textnormal{ad}(f_{1})^{2}(H_{i})]\right)=\mu\otimes f_{1},
\end{align}
and
\begin{align}
\sum_{i=1}^{\ell} \left(\overline{\partial}_{-j}\beta_{i}^{j} \otimes \textnormal{ad}(f_{1})^{j}(H_{i}) + h\cdot \beta_{i}^{j+1}\otimes [e_{1}, \textnormal{ad}(f_{1})^{j+1}(H_{i})]\right)=0,
\end{align}
for all $2\leq j\leq m_{\ell}.$
We proceed by induction starting at $j=m_{\ell}.$ Since $\textnormal{ad}(f_{1})^{m_{\ell}+1}=0$, we arrive at the equation
\begin{align}
\sum_{i=1}^{\ell} \overline{\partial}_{-m_{\ell}}\beta_{i}^{m_{\ell}} \otimes \textnormal{ad}(f_{1})^{m_{\ell}}(H_{i})=0.
\end{align}
Since $\textnormal{H}^{0}(X, \mathcal{K}^{-m_{\ell}})=\{0\},$ this implies that $\beta_{i}^{m_{\ell}}=0$ for all $1\leq i\leq \ell.$
Continuing by induction, the fact that $\textnormal{H}^{0}(X, \mathcal{K}^{-j})=\{0\}$ for all $1\leq j\leq m_{\ell}$ implies
\begin{align}
\beta_{i}^{j}=0
\end{align}
for all $1\leq i \leq \ell$ and for all $2\leq j \leq m_{\ell}.$
Hence, we arrive at the final equation
\begin{align}
\sum_{i=1}^{\ell} \overline{\partial}_{-1}\beta_{i}^{1}\otimes \textnormal{ad}(f_{1})(H_{i})=\mu\otimes f_{1}.
\end{align}
But, recalling that $\textnormal{ad}(f_{1})(H_{1})=[f_{1}, x]=2f_{1},$ we obtain
\begin{align}
\label{eq 1}2\overline{\partial}_{-1} \beta_{1}^{1}&=\mu, \\
\sum_{i=2}^{\ell} \overline{\partial}_{-1} \beta_{i}^{1}\otimes \textnormal{ad}(f_{1})(H_{i})&=0.
\end{align}
Therefore, $\overline{\partial}_{-1} \beta_{i}^{1}=0$ for all $2\leq i\leq \ell$ which implies that
$\beta_{i}^{1}=0$ for all $2\leq i \leq \ell.$
Finally, remembering that $K^{-1}\simeq \Theta,$ \eqref{eq 1} implies $\mu=0\in \textnormal{H}^{(0,1)}(X, \Theta)\simeq \textnormal{H}^{1}(X, \Theta).$
This proves that the map \eqref{injective map} is injective and subsequently $\textnormal{H}^{0}(X, \mathcal{N}^{0})=0.$ This completes the proof that the map $\textnormal{H}$ is an immersion.
\iffalse
The fact that the map $H$ is a local biholomorphism if and only if $G\simeq \textnormal{PSL}_{2}(\mathbb{C})$ follows immediately from the inverse function theorem, the fact that $\textnormal{dim}_{\mathbb{C}}(G)\geq 3$, and a calculation of dimensions as follows:
\begin{align}
\textnormal{dim}_{\mathbb{C}}(\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}))&=(3g-3)+(g-1)\textnormal{dim}_{\mathbb{C}}(G) \\
&\leq (2g-2)\textnormal{dim}_{\mathbb{C}}(G)\\
&=\textnormal{dim}_{\mathbb{C}}(\mathcal{F}^{\textnormal{top},\star}_{G}(\Sigma)),
\end{align}
with equality if and only if $G\simeq\textnormal{PSL}_{2}(\mathbb{C})$ if and only if $\textnormal{dim}_{\mathbb{C}}(G)=3.$
Therefore, $H$ is a local biholomorphism if and only if $\textnormal{dim}_{\mathbb{C}}(G)=3$ if and only if $G\simeq\textnormal{PSL}_{2}(\mathbb{C})$ which completes the proof.
\fi
\end{proof}
Recall that the holonomy map identifies the complex manifold $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ with the space $\textnormal{Hom}^{\star}(\pi, \mathrm{G})/\mathrm{G}$ of conjugacy classes of irreducible homomorphisms with centralizer equal to the center of $\mathrm{G}.$ Theorem \ref{hol immersion} translates into the following statement in terms of developed $\mathrm{G}$-opers.
\begin{corollary}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type. Then the holonomy map
\begin{align}
\textnormal{H}: \mathcal{DO}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \textnormal{Hom}^{\star}(\pi, \mathrm{G})/\mathrm{G}
\end{align}
is a holomorphic immersion and there is a commutative diagram
\begin{center}
\begin{tikzcd}
\mathcal{DO}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C})) \arrow{r}{\textnormal{H}} \arrow{d}{\iota_{\mathrm{G}}}
& \textnormal{Hom}^{\star}(\pi, \textnormal{PSL}_{2}(\mathbb{C}))/\textnormal{PSL}_{2}(\mathbb{C}) \arrow{d}{\iota_{\mathrm{G}}} \\
\mathcal{DO}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{r}{\textnormal{H}}
&\textnormal{Hom}^{\star}(\pi, \mathrm{G})/\mathrm{G}.
\end{tikzcd}
\end{center}
\end{corollary}
\subsection{The pre-symplectic geometry of opers}
In this final section, we show that the space $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ admits a natural holomorphic pre-symplectic form where $\mathrm{G}$ is a complex simple Lie group of adjoint type. Furthermore, we show that the holomorphic vector bundle $\mathcal{B}_{\Sigma}(\mathrm{G})$ over $\mathcal{T}_{\Sigma}$ has a natural holomorphic pre-symplectic form for which the isomorphism of Theorem \ref{h base} is a holomorphic pre-symplectomorphism when the defining section is holomorphic and Lagrangian.
These results extend a theorem of Kawai \cite{KAW96}, which states
that the biholomorphism
\begin{align}
\mathcal{CP}_{\Sigma} \simeq T^{\star}\mathcal{T}_{\Sigma}
\end{align}
provided by a Bers' section is a complex symplectic map (up to a constant factor). See Loustau \cite{LOU15} for a
clarification and discussion of this result.
The tangent space to $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})$ at a $C^{\infty}$-flat bundle $(E_{\mathrm{G}}, \omega)$ is isomorphic to the first hyper-cohomology $\mathbb{H}^{1}(\Sigma, \mathcal{B}_{\textnormal{top}}^{\bullet})$ of the smooth de Rham complex
\begin{align}
\mathcal{B}_{\textnormal{top}}^{\bullet}:= \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]_{\textnormal{top}}\xrightarrow{[\hat{\omega}, -]} \mathcal{A}^{1}(\Sigma) \otimes \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]_{\textnormal{top}}\xrightarrow{[\hat{\omega}, -]} \mathcal{A}^{2}(\Sigma)\otimes \mathcal{E}_{\mathrm{G}}[\mathfrak{g}]_{\textnormal{top}}.
\end{align}
Here $\mathcal{A}^{i}(\Sigma)$ is the sheaf of germs of smooth complex-valued differential $i$-forms on $\Sigma$, and the subscript $\textnormal{top}$ indicates the relevant sheaves of germs of smooth sections. This map is the $C^{\infty}$-analogue of the bracket defined in Theorem \ref{hol immersion}.
The smooth de Rham complex $\mathcal{B}_{\textnormal{top}}^{\bullet}$ is a resolution of the local system $\mathbb{E}_{G}[\mathfrak{g}]_{\omega}.$ Hence, there are isomorphisms
\begin{align}
\mathbb{H}^{i}(\Sigma,\mathcal{B}_{\textnormal{top}}^{\bullet})\simeq \textnormal{H}^{i}(X, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})\simeq\mathbb{H}^{i}(X, \mathcal{B}^{\bullet}),
\end{align}
where $\mathcal{B}^{\bullet}$ is the corresponding holomorphic de Rham complex of $(E_{\mathrm{G}}, \omega, X).$
Next we introduce the Atiyah-Bott-Goldman symplectic form on $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G}).$ Note that there is an isomorphism
\begin{align}
T_{[(E_{\mathrm{G}}, \omega)]}\mathcal{F}_{\Sigma}^{\star}(\mathrm{G})\simeq
\textnormal{H}^{1}(\Sigma, \mathbb{E}_{G}[\mathfrak{g}]_{\omega}).
\end{align}
The cup product induces a non-degenerate, skew-symmetric $\mathbb{C}$-linear map
\begin{align}
\textnormal{H}^{1}(\Sigma, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})\otimes \textnormal{H}^{1}(\Sigma, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})\xrightarrow{\cup} H^{2}\left(\Sigma, \mathbb{E}_{G}[\mathfrak{g}]_{\omega}\otimes \mathbb{E}_{G}[\mathfrak{g}]_{\omega}\right).
\end{align}
Using the Killing form $\mathrm{B}$ on $\mathfrak{g}$ as a coefficient pairing defines a symmetric $\mathbb{C}$-linear map
\begin{align}
\mathrm{B}: H^{2}\left(\Sigma, \mathbb{E}_{G}[\mathfrak{g}]_{\omega}\otimes \mathbb{E}_{G}[\mathfrak{g}]_{\omega}\right)\rightarrow H^{2}(\Sigma, \mathbb{C}).
\end{align}
Finally, taking the cap product with the fundamental class of $\Sigma,$ remembering that $\Sigma$ is \emph{oriented}, defines a non-degenerate skew-symmetric $\mathbb{C}$-linear map
\begin{align}
\eta_{\mathrm{G}}: \textnormal{H}^{1}(\Sigma, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})\otimes \textnormal{H}^{1}(\Sigma, \mathbb{E}_{G}[\mathfrak{g}]_{\omega})\rightarrow \mathbb{C}.
\end{align}
Following Atiyah-Bott \cite{AB83}, Goldman proved \cite{GOL84} that $\eta_{\mathrm{G}}$ defines a non-degenerate, closed holomorphic differential $2$-form on the complex manifold $\mathcal{F}_{\Sigma}^{\star}(\mathrm{G}).$ The complex symplectic form $\eta_{\mathrm{G}}$ is called the \emph{Atiyah-Bott-Goldman} symplectic form.
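Concretely, on de Rham representatives $\alpha, \beta\in \mathcal{A}^{1}(\Sigma, E_{\mathrm{G}}[\mathfrak{g}])$, the pairing admits the familiar description
\begin{align}
\eta_{\mathrm{G}}([\alpha],[\beta])=\int_{\Sigma}\mathrm{B}(\alpha\wedge \beta),
\end{align}
where $\mathrm{B}(\alpha\wedge\beta)$ denotes the complex-valued $2$-form obtained by combining the wedge product with the Killing form.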
We now prove that for a complex simple Lie group of adjoint type, the complex manifold $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ admits a closed holomorphic $2$-form of constant rank. Such a $2$-form is called a complex \emph{pre-symplectic} form.
\begin{theorem}\label{sym form}
Let $\mathrm{G}$ be a complex simple Lie group of adjoint type and equip $\mathfrak{g}$ with the Killing form.
Then, the complex manifold $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})$ admits a complex pre-symplectic form $\tau_{\mathrm{G}}$ of constant (complex) rank $6g-6$, for which the fibers of the map
\begin{align}
\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}
\end{align}
are maximal isotropic sub-manifolds.
Furthermore, the natural holomorphic embedding
\begin{align}
\iota_{\mathrm{G}}:\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})
\end{align}
is a symplectic embedding which satisfies
\begin{align}
\iota_{\mathrm{G}}^{\star}\tau_{\mathrm{G}}=\tau_{\textnormal{PSL}_{2}(\mathbb{C})}.
\end{align}
Finally, the form $\tau_{\mathrm{G}}$ is non-degenerate if and only if $\mathrm{G}$ is
isomorphic to $\textnormal{PSL}_{2}(\mathbb{C}).$
\end{theorem}
\textbf{Remark:} The induced complex symplectic structure on $\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}(2, \mathbb{C}))$ is the usual complex symplectic structure on the moduli space of $\Sigma$-marked complex projective structures (see \cite{LOU15}).
\begin{proof}
By Theorem \ref{hol immersion}, the map
\begin{align}
\textnormal{H}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{F}_{\Sigma}^{\star}(\mathrm{G})
\end{align}
is a holomorphic immersion. Therefore, $\tau_{\mathrm{G}}:=\textnormal{H}^{\star}\eta_{\mathrm{G}}$ yields a closed holomorphic $2$-form on $\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}).$ By \cite{BD05}, the restriction of $\textnormal{H}$ to the fibers of
\begin{align}\label{projection}
\mathcal{P}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{T}_{\Sigma}
\end{align}
is a proper Lagrangian embedding. Since Lagrangian sub-manifolds are maximal isotropic sub-manifolds and $\textnormal{H}$ is an immersion, this implies that $\tau_{\mathrm{G}}$ has constant rank.
This immediately implies that the fibers of \eqref{projection} are maximal isotropic submanifolds for $\tau_{\mathrm{G}}.$
By Theorem \ref{hol immersion}, the map
\begin{align}
\textnormal{H}: \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow \mathcal{F}_{\Sigma}^{\star}(\textnormal{PSL}_{2}(\mathbb{C}))
\end{align}
is a local biholomorphism, and therefore $\tau_{\textnormal{PSL}(2,\mathbb{C})}$ is a holomorphic symplectic form.
The commutativity of the diagram
\begin{center}
\begin{tikzcd}
\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C})) \arrow{r}{\iota_{\mathrm{G}}} \arrow{d}{\textnormal{H}}
& \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{d}{\textnormal{H}} \\
\mathcal{F}_{\Sigma}^{\star}(\textnormal{PSL}_{2}(\mathbb{C})) \arrow{r}{\iota_{\mathrm{G}}}
& \mathcal{F}_{\Sigma}^{\star}(\mathrm{G})
\end{tikzcd}
\end{center}
implies that
\begin{align}
\iota_{\mathrm{G}}:\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})
\end{align}
is a symplectic embedding which satisfies
\begin{align}
\iota_{\mathrm{G}}^{\star}\tau_{\mathrm{G}}=\tau_{\textnormal{PSL}_{2}(\mathbb{C})}.
\end{align}
This proves that the rank of $\tau_{\mathrm{G}}$ is $6g-6$ and completes the proof.
\end{proof}
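The rank statement can be checked against the dimension counts $\textnormal{dim}_{\mathbb{C}}(\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}))=(3g-3)+(g-1)\textnormal{dim}_{\mathbb{C}}(\mathrm{G})$ and $\textnormal{dim}_{\mathbb{C}}(\mathcal{F}_{\Sigma}^{\star}(\mathrm{G}))=(2g-2)\textnormal{dim}_{\mathbb{C}}(\mathrm{G})$: the two agree precisely when $\textnormal{dim}_{\mathbb{C}}(\mathrm{G})=3$, i.e.\ when $\mathrm{G}\simeq \textnormal{PSL}_{2}(\mathbb{C}).$ A minimal numerical sketch of this arithmetic (function names are ours):

```python
# Compare dim_C Op_Sigma(G) = (3g-3) + (g-1)*dim_C(G) with
# dim_C F*_Sigma(G) = (2g-2)*dim_C(G).  Equality forces dim_C(G) = 3,
# matching the fact that tau_G is non-degenerate iff G = PSL_2(C).
def dim_opers(g, dim_G):
    """Complex dimension of the space of G-opers on a genus-g surface."""
    return (3 * g - 3) + (g - 1) * dim_G

def dim_flat(g, dim_G):
    """Complex dimension of the moduli of irreducible flat G-bundles."""
    return (2 * g - 2) * dim_G

for g in (2, 3, 5):
    # dim_C(G) = 3 corresponds to PSL_2(C); the rank is 6g - 6.
    assert dim_opers(g, 3) == dim_flat(g, 3) == 6 * g - 6
    # For any larger dimension the oper locus is strictly smaller.
    for d in (8, 14, 248):
        assert dim_opers(g, d) < dim_flat(g, d)
print("dimension counts consistent")
```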
Given a pre-symplectic manifold $(M, \tau)$, $M$ admits a foliation given by the (integrable) distribution $\textnormal{ker}(\tau).$ The leaf space of this foliation, if it is a manifold, is called the reduced phase space of $(M, \tau)$; it admits a canonical symplectic structure $(M^{\textnormal{red}}, \tau^{\textnormal{red}})$ such that the projection
\begin{align}
R: M\rightarrow M^{\textnormal{red}}
\end{align}
satisfies $R^{\star}\tau^{\textnormal{red}}=\tau.$
Theorem \ref{sym form} yields the following result.
\begin{corollary}\label{reduced}
The reduced phase space of the complex pre-symplectic manifold $(\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}), \tau_{\mathrm{G}})$ is canonically isomorphic to $(\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C})), \tau_{\textnormal{PSL}_{2}(\mathbb{C})}).$
\end{corollary}
\begin{proof}
This follows from a general fact in pre-symplectic geometry. Suppose $(M, \tau_{M})$ is a complex pre-symplectic manifold. Let $(N, \tau_{N})$ be a complex symplectic manifold and
\begin{align}
\iota: (N, \tau_{N}) \rightarrow (M, \tau_{M})
\end{align}
a holomorphic embedding such that $\iota^{\star}\tau_{M}=\tau_{N}.$
Suppose further that the dimension of $N$ is equal to the rank of $\tau_{M}.$ If every leaf of the foliation given by $\textnormal{ker}(\tau_{M})$ intersects $\iota(N)$ in exactly one point, then the reduced phase space of $(M,\tau_{M})$ exists, and there is a canonical pre-symplectomorphism
\begin{align}
(N, \tau_{N})\simeq (M^{\textnormal{red}}, \tau_{M}^{\textnormal{red}}).
\end{align}
Since
\begin{align}
\iota_{\mathrm{G}}: \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}_{2}(\mathbb{C}))\rightarrow \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})
\end{align}
satisfies these properties, this completes the proof.
\end{proof}
We close with a discussion of the identification
\begin{align}
\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\simeq \mathcal{B}_{\Sigma}(\mathrm{G})
\end{align}
of Theorem \ref{h base} from the point of view of pre-symplectic geometry.
Recall that there is an isomorphism of holomorphic vector bundles
\begin{align}\label{iso cot}
\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}(\Sigma)\simeq T^{\star}\mathcal{T}_{\Sigma}
\end{align}
over $\mathcal{T}_{\Sigma}$
via the identification of co-tangent vectors to $\mathcal{T}_{\Sigma}$ with holomorphic quadratic differentials.
Being the cotangent bundle of a complex manifold, $T^{\star}\mathcal{T}_{\Sigma}$ has a canonical complex symplectic structure $\omega_{\textnormal{can}}.$ The isomorphism \eqref{iso cot} induces a complex symplectic form $\omega_{\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}}$ on $\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}(\Sigma).$
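Recall that in local holomorphic coordinates $(q_{1},...,q_{3g-3})$ on $\mathcal{T}_{\Sigma}$, with dual fibre coordinates $(p_{1},...,p_{3g-3})$, the canonical symplectic structure is the standard one,
\begin{align}
\omega_{\textnormal{can}}=\sum_{i=1}^{3g-3} dp_{i}\wedge dq_{i};
\end{align}
in particular it has rank $2(3g-3)=6g-6.$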
Consider the holomorphic map of vector bundles
\begin{align}
R: \mathcal{B}_{\Sigma}(\mathrm{G})&\rightarrow \mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}(\Sigma) \\
(X,\alpha_{1},...,\alpha_{\ell})&\mapsto (X,\alpha_{1}).
\end{align}
The form $\omega_{\mathcal{B}_{G}}=R^{\star}\omega_{\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}}$ is a closed, holomorphic $2$-form on $\mathcal{B}_{\Sigma}(\mathrm{G})$ of constant rank $6g-6.$
Note that by the proof of Corollary \ref{reduced}, the couple $(\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}(\Sigma), \omega_{\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}})$ is canonically isomorphic to the reduced phase space of the complex pre-symplectic manifold $(\mathcal{B}_{\Sigma}(\mathrm{G}), \omega_{\mathcal{B}_{G}}).$
We conclude with the following theorem.
\begin{theorem}\label{sym id}
Let $s$ be a holomorphic Lagrangian section of
\begin{align}
\pi: \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}(2,\mathbb{C}))\rightarrow \mathcal{T}_{\Sigma}.
\end{align}
Then the biholomorphism
\begin{align}
\phi_{s}: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G})\rightarrow \mathcal{B}_{\Sigma}(\mathrm{G})
\end{align}
from Theorem \ref{h base}
satisfies
\begin{align}
\phi_{s}^{\star}\omega_{\mathcal{B}_{G}}=\sqrt{-1}\tau_{\mathrm{G}}.
\end{align}
\end{theorem}
\begin{proof}
Consider the commutative diagram
\begin{center}
\begin{tikzcd}
\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}) \arrow{r}{\phi_{s}} \arrow{d}{P}
& \mathcal{B}_{\Sigma}(\mathrm{G}) \arrow{d}{R}\\
\mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}(2,\mathbb{C})) \arrow{r}{\theta_{s}}
& \mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}(\Sigma)
\end{tikzcd}
\end{center}
where
\begin{align}
P: \mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}) \rightarrow \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}(2,\mathbb{C}))
\end{align}
is the canonical quotient map to the reduced phase space of the holomorphic pre-symplectic manifold $(\mathcal{O}\mathfrak{p}_{\Sigma}(\mathrm{G}),\tau_{\mathrm{G}}).$
Since $s$ is a holomorphic Lagrangian section, a theorem of Kawai \cite{KAW96}, later clarified by Loustau \cite{LOU15}, implies
\begin{align}
\theta_{s}^{\star}\omega_{\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}}=\sqrt{-1}\tau_{\textnormal{PSL}(2,\mathbb{C})}.
\end{align}
Since $P$ is the canonical quotient map to the reduced phase space,
\begin{align}
P^{\star}\circ \theta_{s}^{\star}\omega_{\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}}=P^{\star}\sqrt{-1}\tau_{\textnormal{PSL}(2,\mathbb{C})}=\sqrt{-1}\tau_{\mathrm{G}}.
\end{align}
By commutativity of the above diagram and the definition of $\omega_{\mathcal{B}_{\mathrm{G}}},$ this implies
\begin{align}
\sqrt{-1}\tau_{\mathrm{G}}&=P^{\star}\circ \theta_{s}^{\star}\omega_{\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}} \\
&= \phi_{s}^{\star}\circ R^{\star} \omega_{\mathcal{B}_{\textnormal{PSL}(2,\mathbb{C})}} \\
&= \phi_{s}^{\star} \omega_{\mathcal{B}_{\mathrm{G}}}.
\end{align}
This completes the proof.
\end{proof}
\textbf{Remark}: Every Bers section (see the remark following Theorem \ref{Hubbard}) of the projection $\pi: \mathcal{O}\mathfrak{p}_{\Sigma}(\textnormal{PSL}(2,\mathbb{C}))\rightarrow \mathcal{T}_{\Sigma}$ is holomorphic Lagrangian, and therefore we obtain a $\mathcal{T}_{\Sigma}$'s worth of holomorphic Lagrangian sections satisfying the hypotheses of Theorem \ref{sym id}.
\section{Introduction}
Nonequilibrium statistical mechanics is at the stem of important physical phenomena, many of them at the border with other sciences such as biology, chemistry, geology and even social sciences.
Differently from equilibrium statistical mechanics, there is no closed theoretical framework to deal with out-of-equilibrium systems; the theory of stochastic processes~\cite{vanKampen} is one of the natural approaches to describe them.
These systems have been traditionally modeled by stochastic differential equations, as well as through Fokker-Planck equations. Moreover, functional path integral approaches have also been introduced~\cite{WioBook2013}.
The latter approach is better suited to exploring symmetries and general formal aspects of stochastic dynamics, such as out-of-equilibrium fluctuation theorems~\cite{Kurchan1998,Corberi2007, CamillePreprint}.
Recently, clear advances in the path integral approach to multiplicative processes were made~\cite{arenas2010,arenas2012,Arenas2012-2,Miguel2015}. Multiplicative noise naturally describes inhomogeneous diffusion in which fluctuations depend on the state of the system. Concrete applications are very diverse, covering a broad range of topics such as, for instance, micromagnetism~\cite{Aron2014} and early life biology~\cite{Goldenfeld2015}.
On the other hand, extended multiplicative systems present challenging phenomena, such as noise-induced phase transitions (NIPT), stochastic resonance and pattern formation~\cite{SanchoBook,DickmanBook2005}.
Although it is possible to define order parameters and susceptibilities in nonequilibrium steady states, the classification of phase transitions into universality classes is still underdeveloped. In fact, a complete general theory of the nonequilibrium Renormalization Group is still lacking. The definition and evaluation of thermodynamical potentials, such as the Helmholtz or Gibbs free energies, are not completely developed in out-of-equilibrium statistical mechanics.
Indeed, there is an ongoing discussion about the possibility of having a thermodynamic description of nonequilibrium steady states~\cite{DickmanPreprint2015}, although some steps forward for particular systems have been achieved~\cite{Graham1998,WioBook2012,Wio2002,Wio2007,vanWijland2007,Parrondo2015}.
The main goal of this letter is to provide a general formalism to compute potentials for describing nonequilibrium stationary states reached by a multiplicative Langevin dynamics. Based on a recently introduced path integral formalism~\cite{Miguel2015}, we present nonequilibrium generating functionals, appropriate for computing order parameter correlations. We also introduce a systematic and controlled non-perturbative weak-noise approximation to compute these potentials.
We explicitly compute, in the Gaussian fluctuation approximation, the nonequilibrium potential for a wide class of Langevin dynamics, in arbitrary spatial dimensions and for any stochastic prescription, including It\^o, Stratonovich and anti-It\^o prescriptions.
This last point is essential in the study of entropy production and fluctuation theorems, since time-reversal transformations generally mix different prescriptions~\cite{Arenas2012-2}. As a concrete example, we apply the formalism to a set of models, including the simplest lattice model proposed by Van den Broeck, Parrondo and Toral (VPT)~\cite{Parrondo1994} and a related continuum model of Genovese, Mu\~noz and Sancho~\cite{Sancho1998}.
We show that our approximation procedure correctly captures the physics of NIPT and, differently from equilibrium phase transitions, we establish a deep connection between NIPT and microscopic irreversibility~\cite{crooks2000}.
\section{ Dynamical generating functionals}
We begin by considering a system of Langevin equations given by
\begin{equation}
\frac{dx_i(t)}{dt} = f_i({\bf x}(t)) +
g_{ij}({\bf x}(t))\eta_j(t) \; ,
\label{eq:LangevSystem}
\end{equation}
where $i = 1,\ldots,n$, $j = 1,\ldots,m$, ${\bf x} \in \Re^n$ and $\eta_j(t)$ are $m$ independent Gaussian white noises:
$\left\langle \eta_i(t)\right\rangle = 0$,
$\left\langle \eta_i(t) \eta_j(t')\right\rangle =\sigma^2 \delta_{ij} \delta(t-t')$, where $\sigma^2$ measures the noise intensity.
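As a concrete illustration (not part of the letter), a single-variable version of Eq.~(\ref{eq:LangevSystem}) can be integrated numerically using the standard equivalence between the $\alpha$-prescription and an It\^o equation with the noise-induced drift $\alpha\sigma^2 g(x)g'(x)$. The drift and diffusion functions below are illustrative choices, not those of any model discussed later.

```python
import math
import random

def euler_maruyama_alpha(f, g, dg, alpha, sigma, x0, dt, n_steps, rng):
    """Integrate dx/dt = f(x) + g(x) eta(t) in the alpha-prescription.

    The alpha-prescription equation is integrated as the equivalent Ito
    equation with the noise-induced drift alpha*sigma^2*g(x)*g'(x),
    using the Euler-Maruyama scheme.
    """
    x = x0
    traj = [x]
    sqdt = math.sqrt(dt)
    for _ in range(n_steps):
        drift = f(x) + alpha * sigma**2 * g(x) * dg(x)
        x = x + drift * dt + sigma * g(x) * sqdt * rng.gauss(0.0, 1.0)
        traj.append(x)
    return traj

# Illustrative choices: harmonic drift and a bounded diffusion function,
# in the Stratonovich prescription (alpha = 1/2).
rng = random.Random(42)
traj = euler_maruyama_alpha(
    f=lambda x: -x,
    g=lambda x: 1.0 / (1.0 + x**2),
    dg=lambda x: -2.0 * x / (1.0 + x**2)**2,
    alpha=0.5, sigma=0.3, x0=0.1, dt=1e-3, n_steps=5000, rng=rng)
print(len(traj))   # -> 5001
```

Averages over many such trajectories give the correlation functions that the generating functional below encodes.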
We use bold face characters for vector variables and summation over repeated indices is understood.
The drift force $f_i({\bf x})$ and the diffusion matrix $g_{ij}({\bf x})$ are, in principle, arbitrary smooth functions of ${\bf x}(t)$. Throughout this letter, we use the Generalized Stratonovich prescription, parametrized by a real number $0\le \alpha\le 1$~\cite{Arenas2012-2}. Time correlation functions can be computed by performing functional derivatives of a generating functional written in terms of functional integrals. Functional integral techniques in different discretization prescriptions have been developed in Refs.~\cite{Langouche1979,TirapeguiBook1982,Lubensky2007}. We have used a generalization of the Martin-Siggia-Rose-Janssen-deDominicis formalism~\cite{MSR1973,Janssen1976,deDominicis}, which we have recently implemented~\cite{Miguel2015} to deal with the multi-variable dynamics of Eq.~(\ref{eq:LangevSystem}) in the $\alpha$-prescription. The generating functional can be written, after integrating out auxiliary variables, in terms of a functional integral over the vector variable ${\bf x}(t)$,
\begin{equation}
Z[{\bf J}]=\int{\cal D}{\bf x}\; {\det}^{-1}(g)\; e^{-\frac{1}{\sigma^2}\left\{S[{\bf x}]-\int_{-\infty}^{\infty}dt' {\bf J}\cdot{\bf x}(t')\right\}} \; ,
\label{eq:ZS}
\end{equation}
where the ``action'' is given by
\begin{eqnarray}
\lefteqn{
S = \int_{t_i}^{t_f} \!\! dt \;\left\{ \frac{1}{2}
\left[ \dot x_\ell - \Gamma_\ell \right]
[g^2]^{-1}_{\ell m}
\left[ \dot x_m - \Gamma_m \right] + \alpha\sigma^2 \partial_k f_k \right.} \label{eq:action-alpha} \\
&+& \left.\frac{1}{2} \alpha^2 \sigma^2\left[ \partial_m g_{kj}(x) \partial_k g_{mj}(x) - \partial_m g_{mj}(x) \partial_i g_{ij}(x) \right] \right\} ,
\nonumber
\end{eqnarray}
with $\Gamma_\ell({\bf x})=f_\ell({\bf x}) - \alpha\sigma^2 g_{\ell j}({\bf x})\partial_i g_{ij}({\bf x})$.
The second and third terms of Eq.~(\ref{eq:action-alpha}), proportional to $\alpha$ and $\alpha^2$ respectively, come from the non-trivial Jacobian associated with the change of variables
$\eta_i(t)\to x_i(t)$. These terms, together with the $\alpha$-calculus rules~\cite{footnote}, are essential to implement any consistent approximation procedure.
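For orientation, one such rule, quoted here in its single-variable form (our paraphrase of the generalized chain rule discussed in Ref.~\cite{Arenas2012-2}, not a formula appearing in the letter), reads

```latex
\begin{equation}
\frac{dF(x)}{dt}=F'(x)\,\dot{x}
+\frac{(1-2\alpha)\,\sigma^{2}}{2}\,g^{2}(x)\,F''(x)\, ,
\end{equation}
```

which reduces to the ordinary chain rule in the Stratonovich case $\alpha=1/2$ and to the usual It\^o formula for $\alpha=0$.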
In Eq.~(\ref{eq:ZS}), ${\bf J}(t)$ is a vectorial source necessary to compute correlation functions.
Formally, $Z[{\bf J}(t)]$ plays the same role as the partition function in equilibrium statistical mechanics. Then, it is immediate to define the functional $F[{\bf J}(t)]=-\sigma^2\ln Z[{\bf J}]$, from which we can
compute a ``local order parameter''
\begin{equation}
M_i(t)\equiv \langle x_i(t)\rangle=-\frac{\delta F[{\bf J}]}{\delta J_i(t)}\; .
\label{eq:deltaF}
\end{equation}
The dynamical variable ${\bf M}(t)$ is not, in principle, an order parameter. However, as we will show below, its homogeneous long-time limit behaves like an actual order parameter, detecting order-disorder phase transitions.
In order to define a generating functional in terms of the order parameter, we perform a Legendre transformation in the following way,
\begin{equation}
G[{\bf M}(t)]=\int dt\; {\bf J}(t)\cdot {\bf M}(t)+F[{\bf J}(t)]\; ,
\label{eq:Legendre}
\end{equation}
where ${\bf J}(t)\equiv {\bf J}[{\bf M}(t)]$ is defined by inverting Eq.~(\ref{eq:deltaF}).
It is immediate to verify
$\delta G[{\bf M}]/\delta M_i(t)=J_i(t)$, that can be interpreted as the nonequilibrium state equation.
Assuming that, at long times, the system reaches a homogeneous stationary state, we can define the nonequilibrium potential as
\begin{equation}
G_{\rm st}(M)=\lim_{t\to \infty}\lim_{N\to \infty}\frac{1}{N} G[{\bf M}(t)]\; ,
\label{eq:Gst}
\end{equation}
where $N$ is the number of degrees of freedom and the order parameter
$M=(1/N)\lim_{t\to \infty} \sum_{i=1}^{N} \langle x_i(t)\rangle$.
The expressions in Eqs.~(\ref{eq:Legendre}) and~(\ref{eq:Gst}) are the main proposals of this letter. The potential $G_{\rm st}(M)$ is the analog of the Gibbs free energy for describing nonequilibrium stationary states. The critical behavior of the nonequilibrium state is codified in the analytical properties of $G_{\rm st}(M)$.
\section{ Weak noise expansion}
Due to the factor $1/\sigma^2$ in the exponential of Eq.~(\ref{eq:ZS}), the functional integral can be computed in the saddle-point plus Gaussian fluctuations approximation for small $\sigma$. Assuming there is only one trajectory $x^0_i(t)$ that extremizes the action, we decompose the integration variables as,
\begin{equation}
x_i(t)=x^{0}_{i}(t) + \delta x_{i}(t)\; ,
\label{eq:x0+deltax}
\end{equation}
where $x^{0}_{i}(t)$ is a solution of
\begin{equation}
\left. \frac{\delta S[{\bf x}]}{\delta x_i(t)}\right|_{{\bf x}(t)={\bf x}^0}=J_i(t)
\label{eq:x0}
\end{equation}
and $\delta x_{i}(t)$ represent small fluctuations.
Replacing Eq.~(\ref{eq:x0+deltax}) into Eq.~(\ref{eq:ZS}) and expanding in powers of $\delta x_{i}(t)$ up to quadratic order, we find, after the Gaussian integration,
\begin{equation}
F[{\bf J}]= S[{\bf x}^{0}]- \int dt \;{\bf J}\cdot {\bf x}^{0}
+\frac{\sigma^2}{2} {\rm Tr}{\ln} [{\bf S}^{(2)}]+\ldots \; ,
\label{eq:Ffl}
\end{equation}
where the components of the fluctuation matrix ${\bf S}^{(2)}$ are given by
\begin{equation}
S^{(2)}_{ij}(t,t') = \frac{\delta^{2}S[{\bf x}]}{\delta x_{i}(t)\delta x_{j}(t')}\Bigg \vert_{{\bf {\bf x}(t)}={\bf x}^{0}(t)}, \label{eq:propagator}
\end{equation}
and the ellipsis represents order $\sigma^4$ terms.
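For the reader's convenience, the one-loop term in Eq.~(\ref{eq:Ffl}) follows from the standard Gaussian functional integral over the fluctuations (a step left implicit above),

```latex
\begin{equation}
\int{\cal D}\delta{\bf x}\;
e^{-\frac{1}{2\sigma^{2}}\int dt\,dt'\,
\delta x_{i}(t)\,S^{(2)}_{ij}(t,t')\,\delta x_{j}(t')}
\;\propto\;
{\det}^{-1/2}\!\left[{\bf S}^{(2)}\right]
=e^{-\frac{1}{2}{\rm Tr}\ln {\bf S}^{(2)}}\, ,
\end{equation}
```

so that $F=-\sigma^{2}\ln Z$ acquires the contribution $(\sigma^{2}/2)\,{\rm Tr}\ln {\bf S}^{(2)}$.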
Using Eqs.~(\ref{eq:deltaF}) and~(\ref{eq:Ffl}), we find for the order parameter,
\begin{equation}
M_i(t)=
x^{0}_{i}(t) - \frac{\sigma^2}{2}{\rm Tr}\left\{ [{\bf S}^{(2)}]^{-1} \frac{\delta{\bf S}^{(2)}}{\delta J_i(t)}\right\}.
\label{eq:Mfl}
\end{equation}
For $\sigma\to 0$, the order parameter is essentially the ``classical'' solution ${\bf M}(t)={\bf x}^0(t)$, obtained by solving Eq.~(\ref{eq:x0}). Fluctuations change this result in a non-trivial way. To compute the Legendre transformation of Eq.~(\ref{eq:Legendre}), we invert Eq.~(\ref{eq:Mfl}) perturbatively in powers of $\sigma^2$. Retaining the leading order terms, we find the expression
\begin{equation}
G[{\bf M}(t)] = S[{\bf M}(t)]+ \frac{\sigma^2}{2}{\rm Tr}\ln \left\{{\bf S}^{(2)}[{\bf M}(t)]\right\} \; .
\label{eq:Gfl}
\end{equation}
Eq.~(\ref{eq:Gfl}) is the explicit expression of the generating functional of the local order parameter in the Gaussian fluctuations approximation. This general result allows the computation of the time-dependent order parameter by solving the dynamical set of equations $\delta G[{\bf M}]/\delta M_i(t)=0$, with $i=1,\ldots, N$.
At this point, it is important to stress that Eq.~(\ref{eq:Gfl}) was computed assuming that there is only one path ${\bf x}^0(t)$ which solves Eq.~(\ref{eq:x0}). In the case of multiple solutions, the method should be generalized by computing Gaussian fluctuations around each solution and summing up the contributions~\cite{WioBook2013}.
\section{ Lattice models}
Let us compute $G[{\bf M}(t)]$ for a model consisting of a set of $N$ stochastic variables whose dynamics are driven by the same drift $f(x)$ and the same diffusion function $g(x)$. The degrees of freedom are arranged in a $d$-dimensional hypercubic lattice and we consider short-range lattice couplings. Then, in Eq.~(\ref{eq:LangevSystem}), we will take $f_i({\bf x})=f(x_i)+F_i({\bf x})$ and $g_{ij}({\bf x})=g(x_i)\delta_{ij}$, where $F_i({\bf x})$ represents the lattice couplings.
In the absence of couplings, $F_i({\bf x})=0$, the system converges, at long times, to an equilibrium state. Imposing the Einstein relation $f(x)=-(1/2)g(x)^2 dV(x)/dx$, where $V(x)$ is a classical potential, the equilibrium distribution is $P_{\rm eq}\sim \exp\{-(1/\sigma^2) U_{\rm eq}(x)\}$,
with the potential $U_{\rm eq}(x)=V(x)+(1-\alpha)\ln g^2(x)$~\cite{Arenas2012-2}.
The equilibrium potential depends on the stochastic prescription $\alpha$ and, in general, it is not of the Boltzmann type, except for $\alpha=1$. Considering even potentials, $V(x)=V(-x)$, with a single minimum at $x=0$ and $g''(0)>0$, we have $\langle x\rangle=0$. This behaviour can change completely in the presence of lattice couplings. Let us consider, for instance, the simplest lattice interaction,
\begin{equation}
F_i({\bf x})=\left(\frac{D}{2d}\right) \sum_{x_j\in n(x_i)} \left(x_j-x_i\right)\; ,
\end{equation}
where $n(x_i)$ denotes the set of first neighbors of $x_i$, $D$ is the coupling constant and $d$ is the number of dimensions of the hypercubic lattice.
Due to the interactions, the fluctuation matrix $S^{(2)}_{ij}(t,t')$ is diagonal neither in time nor in the lattice indices. For this reason, ${\rm Tr}\ln {\bf S}^{(2)}$ in Eq.~(\ref{eq:Gfl}) is cumbersome to evaluate.
Fortunately, in the stationary and homogeneous limit the coefficients are constant and it is possible to diagonalize ${\bf S}^{(2)}$ by Fourier transforming in time and in the lattice indices.
While the frequency spectrum is continuous, the wave-vector modes are discrete for finite systems~\cite{Appert-Rolland2008}. Since we are interested in phase transitions, we consider an infinite system by taking the limit $N\to \infty$. In this case, the wave-vector spectrum is continuous and the Fourier transform takes the general form $S^{(2)}(\omega,\cos({\vec k}\cdot {\vec a}))$, where ${|\vec a|}$ is the lattice constant. In this representation, the trace should be computed by integrating $\ln S^{(2)}$ over $\omega$ and over ${\vec k}$ in the first Brillouin zone of the corresponding dual lattice. Since we are interested in the long distance behavior of the stationary state, we can further simplify this expression by taking the hydrodynamic limit $|{\vec k} \cdot {\vec a}|\ll 1$. This approximation naturally introduces an ultraviolet momentum cut-off proportional to $1/a$. Universal quantities should not depend on the specific value of the cut-off. However, we expect that non-universal features, such as the critical noise, should, in general, depend on it. In this limit, the fluctuation kernel takes the simpler form
$S^{(2)}(\omega,\vec k)=\omega^2+A |\vec k|^2+\Sigma$, where $A$
and $\Sigma$ are constants. Then, we can finally write the nonequilibrium potential as,
\begin{equation}
G_{\rm st}(M)= S_{\rm st}(M)+\frac{\sigma^2}{2}\int\frac{d\omega}{2\pi}\frac{ d^dk}{(2\pi)^d}
\ln\left(\omega^2+A |\vec k|^2+\Sigma\right),
\label{eq:Gstapprox}
\end{equation}
where the coefficients $A(M)$ and $\Sigma(M)$ are functions of the order parameter and can be computed from the local properties of $V(x)$ and the diffusion function $g(x)$, near $x=0$.
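For completeness, the frequency integral in Eq.~(\ref{eq:Gstapprox}) can be carried out explicitly (a standard step, spelled out here for orientation): differentiating with respect to $\Sigma$ and using

```latex
\begin{equation}
\int\frac{d\omega}{2\pi}\,\frac{1}{\omega^{2}+A|\vec k|^{2}+\Sigma}
=\frac{1}{2\sqrt{A|\vec k|^{2}+\Sigma}}\, ,
\end{equation}
```

one finds, up to an $M$-independent (divergent) constant,

```latex
\begin{equation}
G_{\rm st}(M)= S_{\rm st}(M)
+\frac{\sigma^{2}}{2}\int\frac{d^{d}k}{(2\pi)^{d}}\,
\sqrt{A|\vec k|^{2}+\Sigma}+{\rm const}\, .
\end{equation}
```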
It is important to emphasize the applicability range of
Eq.~(\ref{eq:Gstapprox}): $G_{\rm st}(M)$ is the nonequilibrium potential of the stationary state reached by the multiplicative Langevin dynamics, provided this state is homogeneous.
To study inhomogeneities, we should define a continuous local order parameter $M(x)$ in the thermodynamic limit and perform a gradient expansion to improve Eq.~(\ref{eq:Gstapprox}).
To explore possible phase transitions, we compute the inverse susceptibility in the disordered phase,
\begin{equation}
\chi^{-1}_0=\left. \frac{d^2G_{\rm st}(M)}{dM^2}\right|_{M=0}\; .
\end{equation}
To be specific, let us consider the VPT model, where we choose a harmonic oscillator potential with natural frequency $\Omega$, $V(x)=\Omega^2 x^2$, and a diffusion function $g(x)=1+x^2$.
Computing the coefficients $A$ and $\Sigma$ and exactly integrating out the frequency, the inverse susceptibility takes the form
\begin{eqnarray}
\chi_0^{-1}&=&\frac{\Omega^2}{2}\left(1+2\tilde\sigma^2\right)
\left\{1+\frac{7}{8\pi}\left(\frac{d}{D}\right) \sigma^2(1+\tilde\sigma^2)\right.\times \nonumber \\
&\times&\left.\int_0^{\Lambda(\sigma)} dx\;
\frac{x^{d-1} }{\sqrt{1+x^2}}\left(1+\frac{1}{28}\frac{1-\tilde\sigma^2}{1+\tilde\sigma^2} x^2\right)
\right\},
\label{eq:chi}
\end{eqnarray}
where $\tilde\sigma^2=2 (1-\alpha)\sigma^2/\Omega^2$ and the cut-off $\Lambda(\sigma)=(\pi/a)(2D/d(1+2\tilde\sigma^2))^{1/2}$.
The integral in Eq.~(\ref{eq:chi}) can be done in terms of hypergeometric functions. We observe that the existence of a NIPT is determined by the integral in the second line of Eq.~(\ref{eq:chi}), since the other terms are positive definite. Interestingly, this term comes from fluctuations. Thus, the physics of NIPT is correctly captured by Gaussian fluctuations. It is interesting to note that the very existence of the NIPT does not depend on the specific value of the ultraviolet cut-off. On the other hand, the position of the critical line is, in general, cut-off dependent.
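Alternatively, Eq.~(\ref{eq:chi}) is straightforward to evaluate numerically. The sketch below (not from the letter; the parameter values $\Omega=a=1$ and the quadrature scheme are illustrative choices) transcribes the formula term by term and recovers the noiseless limit $\chi_0^{-1}\to\Omega^2/2$ as $\sigma^2\to 0$.

```python
import math

def chi_inv(sigma2, D, Omega=1.0, a=1.0, alpha=0.0, d=2, n=4000):
    """Inverse susceptibility of Eq. (15), via midpoint quadrature."""
    st = 2.0 * (1.0 - alpha) * sigma2 / Omega**2            # tilde-sigma^2
    Lam = (math.pi / a) * math.sqrt(2.0 * D / (d * (1.0 + 2.0 * st)))
    c = (1.0 / 28.0) * (1.0 - st) / (1.0 + st)
    # midpoint rule for the integral over x in [0, Lam]
    h = Lam / n
    integral = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        integral += x**(d - 1) / math.sqrt(1.0 + x**2) * (1.0 + c * x**2)
    integral *= h
    brace = 1.0 + (7.0 / (8.0 * math.pi)) * (d / D) \
        * sigma2 * (1.0 + st) * integral
    return 0.5 * Omega**2 * (1.0 + 2.0 * st) * brace

# sigma^2 -> 0 recovers the noiseless value Omega^2/2
print(chi_inv(0.0, D=1.0))   # -> 0.5
```

Scanning `chi_inv` over $(\sigma^2, D)$ and locating its zeros reproduces the critical lines discussed next.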
The critical line is defined by $\chi^{-1}_0(\sigma^2, D)=0$.
We depict the results for different values of the parameters in Figure~(\ref{fig:PD}).
\begin{figure}[htb!]
\subfigure[\ $d=2$, $\Omega=1$. Continuous, dashed and dot-dashed lines correspond to $\alpha=0,1/2, 0.8$, respectively.]
{\includegraphics[scale=0.41]{PhaseDiagramalpha.eps}}
\subfigure[\ $\alpha=0$, $\Omega=1$. Continuous, dashed and dot-dashed lines correspond to $d=2,4,6$, respectively.]
{\includegraphics[scale=0.41]{PhaseDiagramd.eps}}
\caption{Phase diagram. Critical lines are defined by $\chi_0^{-1}(\sigma^2, D)=0$, with $\chi_0^{-1}$ given by Eq.~(\ref{eq:chi}). In the exterior part of each line the system is disordered, $M=0$, while in the interior, $M\neq 0$.}
\label{fig:PD}
\end{figure}
In Figure~(\ref{fig:PD}a) we plot the critical line in $d=2$ for different values of the stochastic prescription. Above a minimum lattice coupling $D_{\rm min}$, we find two continuous phase transitions. For very weak noise the system is disordered, $M=0$. At a threshold $\sigma^2_{c1}$, the system orders, $M\neq 0$, thereby breaking the $Z_2$ symmetry. For higher values of the noise, $\sigma^2_{c2}>\sigma^2_{c1}$, it gets disordered again.
Comparing the curve $\alpha=1/2$ with the results of Ref.~\cite{Parrondo1997}, we conclude that our procedure correctly describes the behavior of $\sigma^2_{c1}$. In addition, the extension of the ordered region $(\sigma^2_{c2}-\sigma^2_{c1})$ is much more accurate than in usual mean-field approximations. We observe that the ordered area in the $D-\sigma^2$ plane grows with $\alpha$. On the other hand, both $\sigma^2_{c1}$ and $\sigma^2_{c2}$ are increasing functions of $\alpha$, making the phase transition harder to reach. In fact, in the anti-It\^o prescription, $\alpha=1$, $\chi_0^{-1}$ is positive definite and there is no phase transition. In Figure~(\ref{fig:PD}b), we depict the critical line in the It\^o ($\alpha=0$) prescription for different values of the spatial dimension $d$. We observe that $\sigma^2_{c1}$ depends very weakly on $d$. In fact, for $D\to \infty$, the critical noise rapidly converges to $\sigma^2_{c1}=\Omega^2/[2(1-\alpha)]$ for any value of $d$. Conversely, $\sigma^2_{c2}$ depends strongly on dimensionality. From the integral in Eq.~(\ref{eq:chi}), we can see that the dimensionality essentially enters through the cut-off as $\Lambda\sim \sqrt{1/d}$. Thus, we can infer the following features of the dependence of the critical line on the cut-off: for greater values of $\Lambda$, the minimum coupling constant $D_{\rm min}$ decreases, making it easier to reach the phase transition. On the contrary, for a smaller cut-off, $D_{\rm min}$ rises. In the strongly coupled regime $D> D_{\rm min}$, the threshold for the NIPT, $\sigma^2_{c1}$, is quite universal, in the sense that it is very weakly dependent on the cut-off (in the same way that it is almost independent of $d$, as can be seen from Figure (\ref{fig:PD}b)). However, the ``normal'' phase transition at $\sigma^2_{c2}$ shows strongly non-universal behavior.
\section{ Time reversal and entropy production}
It is interesting to characterize the nonequilibrium steady state from the point of view of stochastic thermodynamics~\cite{seifert2008}. From this perspective, the concept of a time-reversed stochastic process is essential. For instance, it can be proved that the time-reversed evolution of a Markov diffusion process described by an It\^o stochastic differential equation is also a Markov diffusion process, however with a different drift~\cite{Haussmann1986,Millet1989}. Moreover, we have recently shown~\cite{Miguel2015} that the time-reversal transformation of a stochastic process defined by a stochastic differential equation in an arbitrary prescription $\alpha$ is perfectly well defined and is given by
\begin{equation}
{\cal T} = \left\{
\begin{array}{lcl}
{\bf x}(t) &\to & {\bf x}(-t) \\ & & \\
\alpha &\to & (1-\alpha) \\ & & \\
f_i &\to& f_i +\left(2\alpha-1\right)\; g_{k\ell}\partial_k g_{i\ell}
\end{array}
\right.
\label{eq:TimeReversal}
\end{equation}
Note that time reversal mixes different prescriptions and, for this reason, it is important to have a formalism that can deal with all prescriptions consistently.
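A simple consistency check, illustrated below in a one-dimensional setting (not part of the letter; only the $\alpha$ and drift parts of the map are implemented, since $x(t)\to x(-t)$ acts on trajectories), is that applying the transformation of Eq.~(\ref{eq:TimeReversal}) twice returns the original process.

```python
def reverse(f, g, dg, alpha):
    """One-dimensional version of the time-reversal map of Eq. (16):
    alpha -> 1 - alpha and f -> f + (2*alpha - 1)*g*g'."""
    f_rev = lambda x: f(x) + (2.0 * alpha - 1.0) * g(x) * dg(x)
    return f_rev, 1.0 - alpha

# Illustrative drift and diffusion functions.
f = lambda x: -x
g = lambda x: 1.0 + x**2
dg = lambda x: 2.0 * x

# Applying the map twice must give back the original drift and prescription.
f1, a1 = reverse(f, g, dg, alpha=0.25)
f2, a2 = reverse(f1, g, dg, a1)
print(a2, abs(f2(1.3) - f(1.3)) < 1e-12)   # -> 0.25 True
```

The cancellation works because the drift shift picks up $(2\alpha-1)$ on the first application and $(2(1-\alpha)-1)=-(2\alpha-1)$ on the second.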
With this definition, Crooks relations~\cite{crooks2000} of microscopic reversibility are satisfied in an equilibrium state reached by a multiplicative Langevin dynamics~\cite{Arenas2012-2}. Conversely, in a nonequilibrium steady state, such as the one we are studying in this letter, microscopic reversibility is broken and there is an entropy production characterizing this state.
The increase of entropy in the medium, associated with each individual stochastic trajectory~\cite{seifert2008,Toral2015}, can be defined by $\Delta s_m=S[{\bf x}]-{\cal T}S[{\bf x}]$,
where ${\cal T} S[{\bf x}]$ is the action of the time reversed process~\cite{Arenas2012-2,Miguel2015}. For an equilibrium state, the stochastic entropy is a state function,
$\Delta s_m=\Delta U_{\rm eq}=U_{\rm eq}({\bf x}_f)-U_{\rm eq}({\bf x}_i)$, where $U_{\rm eq}$ is the equilibrium potential. This is a direct consequence of microscopic reversibility.
Explicitly computing $\Delta s_m$ for the VPT model, we find
\begin{equation}
\Delta s_m=\Delta U[{\bf x}] +\sum_k\int_{t_i}^{t_f} dt\; \dot x_k \left(\frac{g'_k}{g_k^3}\right)W_{\rm os}({\bf x})\, ,
\label{eq:entropy}
\end{equation}
where $W_{\rm os}({\bf x})=(D/d) \sum_{x_j\in n(x_i)} \left(x_j-x_i\right)^2$ is the potential energy of the lattice coupling and $U({\bf x})=V(x)+W_{\rm os}({\bf x})+(1-\alpha)\ln g^2(x)$.
Clearly, $\Delta s_m$ is not a state function since it depends on the trajectory. The reason behind this behavior is that, in the presence of multiplicative noise, the lattice coupling of the VPT model breaks the Einstein condition, and the stationary state is microscopically irreversible.
We argue that microscopic irreversibility and, consequently, the breakdown of detailed-balance, is the main cause of noise-induced phase transitions.
To support this claim, let us analyze two illustrative examples. It is known that in an additive process, $g'_k=0$, there is no noise-induced phase transition. In this case, the last term of Eq.~(\ref{eq:entropy}) vanishes and $\Delta s_m$ is a state function. In other words, in the additive noise case, the steady state is an equilibrium one, detailed balance is satisfied, and no NIPT can take place.
To further support this point of view, let us consider a genuinely multiplicative process with a ``slightly'' modified lattice coupling,
\begin{equation}
F_i({\bf x})=\left(\frac{D}{2d}\right) g^2(x_i)\sum_{x_j\in n(x_i)} \left(x_j-x_i\right)\;.
\end{equation}
This is a harmonic first-neighbor interaction locally weighted by the function $g^2(x_i)$.
This coupling satisfies the Einstein relation since the total drift force can now be written as $f_k({\bf x})= -(1/2)g_k^2\partial_k U({\bf x})$. Consequently, the associated entropy is $\Delta s_m=U({\bf x}_f)-U({\bf x}_i)$, indicating that this system is microscopically reversible. Interestingly, by computing the potential $G_{\rm st}(M)$ and the inverse susceptibility $\chi^{-1}_0$, we verify that the latter is positive definite, implying that there is no phase transition in this model.
These facts strongly suggest that {\em microscopic irreversibility of the steady state is a necessary condition for noise-induced phase transitions}.
\section{Conclusions}
We have presented a path integral formalism to compute potentials for nonequilibrium steady states, reached at long times by multiplicative Langevin dynamics.
The formalism is completely general and can be applied to study, for any stochastic prescription, a variety of models presenting interesting features, such as noise-induced phase transitions, stochastic resonance and pattern formation. We have also developed a controlled weak noise expansion which correctly captures fluctuation-induced phenomena.
In particular, we have analyzed the physics of NIPT by computing the stationary state potential for a general class of lattice models. For the particular VPT model,
we have verified that the approximation developed not only captures the qualitative behavior, but also improves previous estimates of the critical line.
Finally, we have shown that {\em microscopic irreversibility is a necessary condition for NIPT phenomena}. In such cases, the steady state is characterized by the presence of currents or, equivalently, by a non-zero entropy production rate. This property of the steady state
has no relation with the initial stages of the dynamical evolution, in contrast with other interpretations~\cite{Parrondo1997}, based on the short-time behavior of the order parameter.
We believe that the results presented in this letter open many interesting possibilities to advance our understanding of out-of-equilibrium critical phenomena.
\acknowledgements
We acknowledge Daniel A. Stariolo and Horacio Wio for useful discussions.
The Brazilian agencies CNPq, FAPERJ and CAPES are acknowledged for
partial financial support. D.G.B. acknowledges ICTP for a Senior Associate award.
\section{Introduction}
\label{sec:Introduction}
Dirac and Weyl semimetals are condensed matter materials whose low-energy excitations are described by the
Dirac and Weyl equations, respectively. Generically, the corresponding materials have a band structure
where the valence and conduction bands touch at isolated points (i.e., the Dirac points and the Weyl nodes, respectively).
Theoretically, $\mathrm{A_3Bi}$
(A=Na,K,Rb) and $\mathrm{Cd_3As_2}$ were the first compounds predicted to be Dirac semimetals with topologically
protected Dirac points \cite{Fang,WangWeng}. The existence of Dirac points in $\mathrm{Cd_3As_2}$ and
$\mathrm{Na_3Bi}$ was soon confirmed experimentally via the angle-resolved photoemission spectroscopy (ARPES)
in Refs.~\cite{Borisenko,Neupane,Liu}. Weyl semimetals were first predicted in pyrochlore iridates \cite{Savrasov},
but they were discovered experimentally in TaAs, TaP, NbAs, and NbP
\cite{Tong,Bian,Qian,Long,Xu-Hasan:TaP,Xu-Hasan:NbAs,Xu-Feng:NbP,Shekhar-Nayak:2015,Wang-Zheng:2015,Zhang-Xu:2015}
(for recent reviews, see Refs.~\cite{Hasan-Huang:2017-Rev,Yan-Felser:2017-Rev,Armitage-Vishwanath:2017-Rev}).
As is well understood now, Weyl semimetals represent a topologically nontrivial phase of matter. Indeed, Weyl nodes are the monopoles of
the Berry curvature~\cite{Berry:1984} whose topological charges are directly connected with their chirality.
According to the Nielsen--Ninomiya theorem~\cite{Nielsen-Ninomiya-1,Nielsen-Ninomiya-2}, Weyl nodes
in crystals always come in pairs of opposite chirality. The corresponding nodes are separated in
momentum and/or energy. Such a nodal structure is also responsible for the existence of topologically
protected surface states, known as the Fermi arcs~\cite{Savrasov,Aji,Haldane}.
Unlike the Weyl nodes, the Dirac points are usually assumed to be topologically trivial because they
are composed of pairs of overlapping Weyl nodes of opposite chirality. By using numerical calculations
\cite{WangWeng,Fang}, however, it was found that the Dirac semimetals $\mathrm{Cd_3As_2}$ and
$\mathrm{A_3Bi}$ (A=Na,K,Rb) possess the Fermi arcs too. This was later confirmed experimentally
by the ARPES data \cite{Xu-Hasan:2015} and the observation of special surface-bulk quantum oscillations in
transport measurements \cite{Potter-Vishwanath:2014,Moll:2016}. It was argued
\cite{Yang-Nagaosa:2014,Gorbar:2014sja,Gorbar:2015waa,Yang-Furusaki:2015,Fang-Fu:2015,Kobayashi-Sato:2015,Burkov-Kim:2015} that the physical
reason for the nontrivial topological properties of $\mathrm{A_3Bi}$ (A=Na,K,Rb) is a
$Z_2$ symmetry that such materials possess. In the classification scheme proposed in Ref.~\cite{Yang-Nagaosa:2014},
such Dirac semimetals belong to the second class in which pairs of Dirac points are created by the
inversion of two bands. This is in contrast to the Dirac semimetals in the first class that possess a single
Dirac point at a time-reversal (TR) invariant momentum. As noted in Ref.~\cite{Burkov-Kim:2015}, the
presence of the $Z_2$ symmetry leads to the $Z_2$ anomaly that could affect transport properties.
The latter were recently discussed in Ref.~\cite{Rogatko:2018moa} using the hydrodynamic description.
A complementary view of the $Z_2$ symmetry in $\mathrm{A_3Bi}$ (A=Na,K,Rb) was presented in Refs.~\cite{Gorbar:2014sja,Gorbar:2015waa},
where we argued that these compounds are, in fact, hidden $Z_2$ Weyl semimetals. The discrete
symmetry of the low-energy effective Hamiltonian allows one to split all quasiparticle states into two
separate sectors, each describing a Weyl semimetal with a pair of Weyl nodes and a broken TR
symmetry. Since the $Z_2$ symmetry interchanges states from these two sectors, the TR symmetry
is preserved in the complete theory.
The degeneracy of opposite chirality states in the Dirac semimetals and the presence of the $Z_2$ symmetry
are expected to have profound consequences. The fact that the Berry curvature becomes a matrix with a
non-Abelian structure \cite{Wilczek:1984dh} could manifest itself, for example, in unusual transport properties
of the Dirac semimetals. The latter could be studied, for example, by employing the chiral kinetic theory
\cite{Son:2012wh,Stephanov:2012ki,Chen:2014cla,Manuel:2014dza} generalized to the case of
degenerate states \cite{Shindou:2005vfm,Culcer-Niu:2005,Chang:2008zza,Xiao:2009rm}.
The main motivation for this study is to investigate how the momentum-dependent gap term and the non-Abelian nature of the Berry curvature affect the quasiclassical properties of electron wave packets
in $Z_2$ Weyl semimetals. In particular, we consider the propagation of wave packets in external electric and magnetic fields.
Note that, in the absence of the
non-Abelian corrections to the Berry curvature, the semiclassical motion of chiral quasiparticles was
already considered in Ref.~\cite{Gorbar:2017dtp}, where the (pseudo-)magnetic lens was proposed.
It was found that, while the primary contribution to the spatial splitting of quasiparticles of different chirality is related to the interplay of the magnetic and strain-induced pseudomagnetic fields, the Abelian Berry curvature also plays an important, albeit auxiliary, role.
In this study, we investigate how the trajectories of the
wave packets change due to the presence of the off-diagonal gap term and the non-Abelian nature of
the Berry curvature. Of particular interest is the question as to whether the splitting of the wave packets
from different Dirac points (or, equivalently, valleys) and different chiral sectors can be achieved
without a background pseudomagnetic field.
The paper is organized as follows. In Sec.~\ref{sec:Model}, the low-energy effective model of the
Dirac semimetals $\mathrm{A_3Bi}$ (A=Na,K,Rb) and its linearized version are introduced.
We present the semiclassical equations of motion with the non-Abelian corrections in Sec.~\ref{sec:wavepacket-and-eqs}. The motion of the
electron wave packets in external electric and magnetic fields is investigated in Sec.~\ref{sec:trajectories-pm-DP}.
The results are discussed and summarized in Sec.~\ref{sec:Summary}. The expressions for the Berry connection, the Berry curvature,
and the magnetic moment of wave packets are given in Appendix \ref{sec:app-exp-expressions-lin-alpha}.
\section{Model}
\label{sec:Model}
In this section, we describe the low-energy model of the Dirac semimetals $\mathrm{A_3Bi}$ (A=Na,K,Rb)
as well as its linearized version and underlying symmetries. The corresponding quasiparticle Hamiltonian
derived in Ref.~\cite{Fang} reads as
\begin{equation}
\label{low-energy-Hamiltonian}
H(\mathbf{k}) = \epsilon_0(\mathbf{k}) I_4 + H_{4\times 4},
\end{equation}
where $I_4$ is the $4\times 4$ unit matrix, $\epsilon_0(\mathbf{k}) = C_0 + C_1k_z^2+C_2k_{\perp}^2$, $k_{\perp}=\sqrt{k_x^2+k_y^2}$, and
\begin{equation}
\label{low-energy-Hamiltonian4x4}
H_{4\times 4} =
\left( \begin{array}{cccc}
M(\mathbf{k}) & v_Fk_+ & 0 & \Delta^{*}(\mathbf{k}) \\
v_Fk_- & -M(\mathbf{k}) & \Delta^{*}(\mathbf{k}) & 0 \\
0 & \Delta(\mathbf{k}) & M(\mathbf{k}) & -v_Fk_- \\
\Delta(\mathbf{k}) & 0 & -v_Fk_+ & -M(\mathbf{k}) \\
\end{array}
\right).
\end{equation}
The matrix Hamiltonian $H_{4\times 4}$ is naturally split into $2\times2$ blocks. The diagonal blocks are defined in terms of the
quadratic function $M(\mathbf{k}) = M_0 - M_1 k_z^2-M_2k_{\perp}^2$ and $v_Fk_{\pm}$, where $k_{\pm} = k_x\pm ik_y$.
The off-diagonal blocks are determined by the function $\Delta(\mathbf{k}) = \alpha k_zk_{+}^2$ that
plays a crucial role in this study and whose physical meaning will be discussed later.
The numerical values of the parameters in Hamiltonian (\ref{low-energy-Hamiltonian}) can be determined by fitting the energy
spectrum obtained in the first-principles calculations \cite{Fang} and read
\begin{equation}
\label{model-parameters}
\begin{array}{lll}
C_0 = -0.06382~\mbox{eV},\qquad
& C_1 = 8.7536~\mbox{eV\,\AA}^2,\qquad
& C_2 = -8.4008~\mbox{eV\,\AA}^2,\\
M_0=-0.08686~\mbox{eV},\quad
& M_1=-10.6424~\mbox{eV\,\AA}^2,\qquad
& M_2=-10.3610~\mbox{eV\,\AA}^2,\\
v_F=2.4598~\mbox{eV\,\AA}.
\end{array}
\end{equation}
Note that the Fermi velocity $v_F$ is given in energy units. Since no specific value for $\alpha$, which determines the
magnitude of the off-diagonal terms, was quoted in Ref.~\cite{Fang}, we will treat it as a free, albeit small, parameter
below. In addition to the model parameters in Eq.~(\ref{model-parameters}), we will also need the transport scattering time $\tau$.
For the purposes of this study, we use $\tau \approx10^{-10}~\mbox{s}$, which is an
estimated value of the scattering time in $\mathrm{Cd_3As_2}$ \cite{Liang-Ong:2015}.
The energy eigenvalues of Hamiltonian (\ref{low-energy-Hamiltonian}) are given by the following expression:
\begin{equation}
\label{energy-dispersion}
\epsilon(\mathbf{k})=\epsilon_0(\mathbf{k}) \pm \sqrt{M^2(\mathbf{k})+v_F^2k_{\perp}^2+|\Delta(\mathbf{k})|^2}.
\end{equation}
As is clear, a nonzero $\epsilon_0(\mathbf{k})$ introduces an asymmetry between the positive (electrons) and
negative (holes) energy branches and, consequently, breaks the particle-hole symmetry. The square root term
vanishes at the two Dirac points, $\mathbf{k}^{(\pm)}_0=\left(0, 0, \pm \sqrt{m}\right)$, where $\sqrt{m}= \sqrt{M_0/M_1}$.
By using the low-energy parameters in Eq.~(\ref{model-parameters}), we find that $\sqrt{m}\approx 0.0903~\mbox{\AA}^{-1}$.
The latter defines the characteristic momentum scale in the low-energy Hamiltonian. Therefore, by equating the last two terms
under the square root in Eq.~(\ref{energy-dispersion}) and setting $k_z=k_{\perp}=\sqrt{m}$, we can estimate the characteristic value
of parameter $\alpha$, i.e.,
\begin{equation}
\label{model-alpha-def-2}
\alpha^{*} = \frac{v_F}{m} \approx 301.384~\mbox{eV}\mbox{\AA}^3.
\end{equation}
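Both characteristic scales follow directly from the fitted parameters in Eq.~(\ref{model-parameters}). A short numerical check (a sketch; the parameter values are those quoted above):

```python
import numpy as np

# Fitted low-energy parameters of A3Bi from Eq. (model-parameters)
M0, M1 = -0.08686, -10.6424   # eV, eV*Angstrom^2
v_F = 2.4598                  # eV*Angstrom

m = M0 / M1            # Angstrom^-2; positive since M0 and M1 are both negative
sqrt_m = np.sqrt(m)    # Dirac points sit at k_z = +/- sqrt(m)
alpha_star = v_F / m   # characteristic magnitude of the off-diagonal term

print(sqrt_m)      # ~0.0903 Angstrom^-1
print(alpha_star)  # ~301.4 eV*Angstrom^3
```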
In order to get a better insight into the role of the off-diagonal term in the low-energy Hamiltonian, we plot the
corresponding energy spectra for $\alpha=0$ and $\alpha=10\alpha^{*}$ in the two panels of
Fig.~\ref{fig:model-energy-full}. As expected from Eq.~(\ref{energy-dispersion}), there are two Dirac points
well separated in $k_z$. The term $\Delta(\mathbf{k})$ plays the role of a momentum-dependent gap function that
mixes eigenstates of opposite chirality. While $\Delta(\mathbf{k})$ can profoundly change the spectrum of quasiparticles
for sufficiently large $k_{\perp}$, it vanishes at the Dirac points. Thus, the upper and lower $2 \times 2$ blocks of
Hamiltonian~(\ref{low-energy-Hamiltonian4x4}) still describe quasiparticle states of opposite chirality in a sufficiently
close vicinity of the Dirac points, although, strictly speaking, the notion of chirality is rigorous only at $\alpha=0$.
As discussed in detail in Refs.~\cite{Gorbar:2014sja,Gorbar:2015waa}, the actual form of function $\Delta(\mathbf{k})$
is consistent with the discrete $Z_2$ symmetry, implying that the Dirac semimetals $\mathrm{A_3Bi}$ (A=Na,K,Rb)
are effectively the hidden $Z_2$ Weyl semimetals. The quasiparticle states of these materials can be naturally split by using the ud (up-down)
symmetry \cite{Gorbar:2014sja}
\begin{equation}
\label{model-ud-parity}
U_{\chi}=\Pi_{k_z\to-k_z} \left(
\begin{array}{cc}
I_2 & 0 \\
0 & -I_2 \\
\end{array}
\right),
\end{equation}
where $\Pi_{k_z\to-k_z}$ is the operator that changes the sign of the $z$ component of momentum and $I_2$ is the
$2\times2$ unit matrix. The TR symmetry is broken in each of the $Z_2$ sectors, which signifies the presence of the Weyl semimetal phase with the Weyl nodes separated by $2\sqrt{m}$.
Since the chirality of the nodes in different $Z_2$ sectors is opposite, the
complete model preserves the TR symmetry and has two Dirac points.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.45\textwidth]{Fig1a.eps}\hfill
\includegraphics[width=0.45\textwidth]{Fig1b.eps}\hfill
\caption{The energy spectrum of the low-energy model (\ref{low-energy-Hamiltonian})
at $\alpha=0$ (left panel) and $\alpha=10\alpha^{*}$ (right panel), where $\alpha^{*}$
is the characteristic value defined in Eq.~(\ref{model-alpha-def-2}) and $\epsilon_{+d}(0)$ is defined in Eq.~(\ref{model-epsk0}).}
\label{fig:model-energy-full}
\end{center}
\end{figure}
In order to simplify the calculations, we will omit the term $\epsilon_0(\mathbf{k})$ and linearize
Hamiltonian (\ref{low-energy-Hamiltonian}) in the vicinity of the Dirac points $\mathbf{k}^{(\pm)}_0$.
By expanding $M(\mathbf{k})$ up to the linear order in the deviation $\delta\mathbf{k}=\mathbf{k}-\mathbf{k}^{(\pm)}_0$
and performing the unitary transformation with $U_x=\mbox{diag}(\sigma_x, I_2)$, we obtain
\begin{equation}
\label{model-Hamiltonian-canonical-plus}
H_{\rm lin}^{(+)}(\tilde{\mathbf{k}})=\left(
\begin{array}{cc}
v_F\left(\tilde{k}_x\sigma_x+\tilde{k}_y\sigma_y-\tilde{k}_z\sigma_z\right) & \alpha\left(\sqrt{m}+\tilde{k}_z\right)\tilde{k}_{-}^2 \\
\alpha\left(\sqrt{m}+\tilde{k}_z\right)\tilde{k}_{+}^2 & -v_F\left(\tilde{k}_x\sigma_x+\tilde{k}_y\sigma_y-\tilde{k}_z\sigma_z\right) \\
\end{array}
\right)
\end{equation}
in the vicinity of the Dirac point at $\mathbf{k}_0^{(+)}$ and
\begin{equation}
\label{model-Hamiltonian-canonical-minus}
H_{\rm lin}^{(-)}(\tilde{\mathbf{k}})=\left(
\begin{array}{cc}
v_F\,(\tilde{\mathbf{k}}\cdot\bm{\sigma}) & -\alpha\left(\sqrt{m}-\tilde{k}_z\right)\tilde{k}_{-}^2 \\
-\alpha\left(\sqrt{m}-\tilde{k}_z\right)\tilde{k}_{+}^2 & -v_F\,(\tilde{\mathbf{k}}\cdot\bm{\sigma}) \\
\end{array}
\right)
\end{equation}
in the vicinity of the Dirac point at $\mathbf{k}_0^{(-)}$. Here $\bm{\sigma}$ are the Pauli matrices and
$\tilde{\mathbf{k}} = ( k_x,k_y,2\sqrt{M_0M_1}\delta k_z/v_F )$. In the model at hand, $2\sqrt{M_0M_1}
\approx0.78v_F$ and, consequently, the quasiparticle energy spectra near the Dirac points can be approximately treated as isotropic, i.e., $\tilde{\mathbf{k}}\approx(k_x,k_y,\delta k_z )$.
The corresponding positive branches of the energies for Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}) are given by
\begin{equation}
\label{energy-dispersion-lin}
\epsilon^{(\pm)}(\tilde{\mathbf{k}})= \sqrt{v_F^2\tilde{k}^2+\alpha^2
\left(\sqrt{m}\pm\tilde{k}_z\right)^2 k_{\perp}^4 },
\end{equation}
where the superscript labels the Dirac points at $\mathbf{k}_0^{(\pm)}$. Note that we keep the $\tilde{k}_z$
term in the off-diagonal components in Eqs.~(\ref{model-Hamiltonian-canonical-plus}) and
(\ref{model-Hamiltonian-canonical-minus}) because, as will be clear below, it is relevant for the Berry
connection and the magnetic moment, which contain derivatives with respect to the $z$ component of
momentum. Also, this term plays an important role in determining the wave packet velocity.
Obviously, the dynamics of quasiparticles can be reliably described in terms of the two independent linearized
Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}) only for
sufficiently small energies and momentum deviations, $|\delta k_z|\lesssim\sqrt{m}$.
This constraint also ensures that the internode transitions are negligible.
In order to obtain the characteristic energy scales of the low-energy region, we
calculate the height of the energy ``domes" in the full Hamiltonian (\ref{low-energy-Hamiltonian}) at
$\mathbf{k}=\mathbf{0}$, see also Fig.~\ref{fig:model-energy-full}. The corresponding values read as
\begin{equation}
\label{model-epsk0}
\epsilon_{+d}(0) = 23.0~\mbox{meV}, \qquad
\epsilon_{-d}(0) = -150.7~\mbox{meV}
\end{equation}
for positive and negative energies, respectively. Without the term $\epsilon_0(\mathbf{k})$, we have
\begin{equation}
\label{model-epsk0-pm}
\epsilon_{+d}(0)\big|_{\epsilon_0=0}=-\epsilon_{-d}(0)\big|_{\epsilon_0=0} = 86.9~\mbox{meV}.
\end{equation}
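Since $M(\mathbf{0})=M_0$ and $\Delta(\mathbf{0})=0$, the dome energies reduce to $C_0 \pm |M_0|$. A quick arithmetic check of Eqs.~(\ref{model-epsk0}) and (\ref{model-epsk0-pm}):

```python
C0, M0 = -0.06382, -0.08686  # eV, from Eq. (model-parameters)

eps_plus = C0 + abs(M0)   # positive-energy dome at k = 0
eps_minus = C0 - abs(M0)  # negative-energy dome at k = 0

print(round(1e3 * eps_plus, 1))   # 23.0 meV
print(round(1e3 * eps_minus, 1))  # -150.7 meV
print(round(1e3 * abs(M0), 1))    # 86.9 meV, the value when eps_0 is omitted
```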
In order to simplify our notations, in the following we assume that the momentum $\mathbf{k}$ is measured
from the corresponding Dirac points, i.e., we replace $\delta k_z$ with $k_z$.
The energy spectrum (\ref{energy-dispersion-lin}) at each Dirac point is doubly degenerate in energy with the
corresponding wave functions given by
\begin{eqnarray}
\label{WPE-psi-def-p}
\psi_{+, \mathbf{k}}^{(\pm)} &=& \frac{v_F k_{\perp}}{\sqrt{\left[\epsilon^{(\pm)}(\mathbf{k}) \pm v_F k_z\right]^2+v_F^2k_{\perp}^2 +\alpha^2\left(k_z\pm\sqrt{m}\right)^2k_{\perp}^4}}
\left(
\begin{array}{c}
1 \\
\frac{\epsilon^{(\pm)}(\mathbf{k})\pm v_Fk_z}{v_F k_{-}} \\
0 \\
\frac{\alpha \left(k_z\pm\sqrt{m}\right)k_{+}^2}{v_Fk_{-}} \\
\end{array}
\right), \\
\label{WPE-psi-def-m}
\psi_{-, \mathbf{k}}^{(\pm)} &=&
\frac{v_F k_{\perp}}{\sqrt{\left[\epsilon^{(\pm)}(\mathbf{k}) \mp v_F k_z\right]^2+v_F^2k_{\perp}^2 +\alpha^2\left(k_z\pm\sqrt{m}\right)^2k_{\perp}^4}}
\left(
\begin{array}{c}
0 \\
-\frac{\alpha \left(k_z\pm\sqrt{m}\right)k_{-}^2}{v_Fk_{-}} \\
1 \\
-\frac{\epsilon^{(\pm)}(\mathbf{k}) \mp v_Fk_z}{v_F k_{-}} \\
\end{array}
\right).
\end{eqnarray}
Here the upper index corresponds to the Dirac points at $\mathbf{k}_0^{(\pm)}$, which are described by the linearized Hamiltonians
(\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively. As we will see below,
this degeneracy is responsible for the non-Abelian nature of the Berry curvature in the Dirac semimetals $\mathrm{A_3Bi}$
(A=Na,K,Rb). It also implies that the semiclassical equations of motion for a degenerate case
\cite{Shindou:2005vfm,Culcer-Niu:2005,Chang:2008zza,Xiao:2009rm} should be used.
In order to describe this degeneracy, we introduce the following transformation
that can be viewed as an analog of the discrete chiral symmetry:
\begin{equation}
\label{model-ud-parity-D}
\Gamma_5=\Pi_{\alpha\to-\alpha} \left(
\begin{array}{cc}
I_2 & 0 \\
0 & -I_2 \\
\end{array}
\right).
\end{equation}
Note that $\Gamma_5$ is not a true symmetry because it does not commute with the linearized Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}).
The wave functions
$\psi_{+, \mathbf{k}}^{(\pm)}$ and $\psi_{-, \mathbf{k}}^{(\pm)}$ are the eigenstates of $\Gamma_5$, i.e., $\Gamma_5\psi_{+, \mathbf{k}}^{(\pm)}=\psi_{+, \mathbf{k}}^{(\pm)}$ and $\Gamma_5\psi_{-, \mathbf{k}}^{(\pm)}=-\psi_{-, \mathbf{k}}^{(\pm)}$, which describe the states of positive and negative chirality, respectively, in the limit $\alpha \to 0$.
In addition, the positive branch of the band energy for the linearized Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and
(\ref{model-Hamiltonian-canonical-minus}) is
\begin{equation}
\label{WPE-eps-def}
\epsilon^{(\pm)}(\mathbf{k}) = \sqrt{v_F^2 k^2 +\alpha^2 \left(\sqrt{m}\pm k_z\right)^2 k_{\perp}^4}\approx v_F k + O(\alpha^2).
\end{equation}
Henceforth, we will consider only the electron wave packets with positive energies.
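The double degeneracy of the spectrum (\ref{energy-dispersion-lin}) can be checked numerically. The sketch below builds $H_{\rm lin}^{(+)}$ of Eq.~(\ref{model-Hamiltonian-canonical-plus}), whose off-diagonal blocks are proportional to the $2\times 2$ unit matrix, and confirms that its eigenvalues are $\pm\epsilon^{(+)}(\mathbf{k})$, each occurring twice, and that the (unnormalized) column of $\psi^{(\pm)}_{+,\mathbf{k}}$ is an eigenstate. The parameter values and units ($v_F=1$) are illustrative, not the fitted ones:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

v_F, sqrt_m, alpha = 1.0, 0.09, 0.5  # illustrative units

def H_lin_plus(k):
    """Linearized Hamiltonian near the Dirac point at +sqrt(m)."""
    kx, ky, kz = k
    km, kp = kx - 1j * ky, kx + 1j * ky
    h = v_F * (kx * sx + ky * sy - kz * sz)
    c = alpha * (sqrt_m + kz)
    return np.block([[h, c * km**2 * I2], [c * kp**2 * I2, -h]])

def eps_plus(k):
    kx, ky, kz = k
    kperp2 = kx**2 + ky**2
    return np.sqrt(v_F**2 * (kperp2 + kz**2)
                   + alpha**2 * (sqrt_m + kz)**2 * kperp2**2)

k = np.array([0.03, 0.02, -0.01])
e = eps_plus(k)
evals = np.linalg.eigvalsh(H_lin_plus(k))
assert np.allclose(evals, [-e, -e, e, e])  # doubly degenerate +/- branches

# Unnormalized column of psi_{+,k} for the (+) Dirac point
kx, ky, kz = k
km, kp = kx - 1j * ky, kx + 1j * ky
psi = np.array([1.0,
                (e + v_F * kz) / (v_F * km),
                0.0,
                alpha * (kz + sqrt_m) * kp**2 / (v_F * km)])
assert np.allclose(H_lin_plus(k) @ psi, e * psi)  # eigenstate with energy e
```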
\section{Non-Abelian corrections to the equations of motion}
\label{sec:wavepacket-and-eqs}
In this section, we consider the electron wave packets in the $Z_2$ Weyl semimetals and present the corresponding equations of
motion. Since we treat the Dirac points as independent and, consequently, neglect internode mixing, the superscript $\pm$
will be omitted for all quantities in this section. An electron wave packet centered at $\mathbf{r}(t)$ and $\mathbf{q}(t)$ is
defined as a superposition of the Bloch states $\phi_{n, \mathbf{k}}=e^{i\mathbf{k}\cdot\mathbf{r}}\psi_{n,\mathbf{k}}$, i.e.,
\begin{equation}
\label{WPE-W-def}
W = \sum_{n=\pm} \int \frac{d\mathbf{k}}{(2\pi)^3} a(t,\mathbf{k}) \eta_n(t,\mathbf{k}) \phi_{n, \mathbf{k}}.
\end{equation}
Here $n=\pm$ denotes the degenerate chiral states and $a(t,\mathbf{k})$ is a normalized distribution
centered at $\mathbf{r}(t)$ and $\mathbf{q}(t)$. Finally, $\eta_n(t,\mathbf{k})$ denotes the partial
contributions or weights of the degenerate states, satisfying the normalization condition
$\sum_{n=\pm}|\eta_n(t,\mathbf{k})|^2=1$.
As is well known, the nontrivial topological properties of Weyl semimetals are captured by the monopole-like
Berry curvature \cite{Berry:1984} at the Weyl nodes. Because of the additional $\Gamma_5$-chirality degree
of freedom at each Dirac point in the $\mathrm{A_3Bi}$ (A=Na,K,Rb) semimetals, the corresponding Berry
connection is a $2\times 2$ matrix. Its elements are defined by
\begin{equation}
\label{WPE-Berry-connection-def}
\mathbf{A}_{nm}(\mathbf{q}) = -\frac{i}{2}\left(\psi_{n, \mathbf{q}}^{\dag} \partial_{\mathbf{q}} \psi_{m, \mathbf{q}} - \psi_{m, \mathbf{q}}^{\dag} \partial_{\mathbf{q}}
\psi_{n, \mathbf{q}}\right).
\end{equation}
The explicit expressions for $\mathbf{A}_{nm}$ are given by Eqs.~(\ref{exp-expressions-lin-alpha-A++})--(\ref{exp-expressions-lin-alpha-A--})
in Appendix \ref{sec:app-exp-expressions-lin-alpha}. (Note that the off-diagonal components of $\mathbf{A}_{nm}$ vanish when $\alpha=0$.)
The Berry curvature has a non-Abelian structure, i.e.,
\begin{equation}
\label{WPE-Berry-curvature-def}
\bm{\Omega}_{nm}(\mathbf{q}) = -\frac{i}{\hbar}\sum_{l=\pm}\left[(D_{\mathbf{q}})_{nl} \times (D_{\mathbf{q}})_{lm} \right]
=\frac{1}{\hbar}\left[\partial_{\mathbf{q}} \times \mathbf{A}_{nm}(\mathbf{q})\right] + \frac{i}{\hbar}\sum_{l=\pm}
\left[\mathbf{A}_{nl}(\mathbf{q})\times \mathbf{A}_{lm}(\mathbf{q})\right],
\end{equation}
where $(D_{\mathbf{q}})_{nl} = \partial_{\mathbf{q}} \delta_{nl} +i \mathbf{A}_{nl}(\mathbf{q})$ is the covariant derivative.
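In the Abelian limit $\alpha \to 0$, the diagonal part of the curvature for each $2\times 2$ block reduces to the field of a unit-charge Berry monopole at the Weyl node. This can be verified in a gauge-invariant way by summing lattice field strengths (the plaquette construction of Fukui, Hatsuda, and Suzuki) over a closed surface around the node. A minimal sketch for the upper block of $H_{\rm lin}^{(+)}$ with $v_F=1$; the sign of the resulting charge depends on the chirality convention, so only its unit magnitude is asserted:

```python
import numpy as np

def h_weyl(k):
    """Upper 2x2 block of H_lin^(+) at alpha = 0, in units v_F = 1."""
    kx, ky, kz = k
    return np.array([[-kz, kx - 1j * ky], [kx + 1j * ky, kz]])

def berry_flux_through_cube(K=0.5, N=14):
    """Total Berry flux (in units of 2*pi) of the lower band through a cube of
    half-size K around the node, from gauge-invariant plaquette phases."""
    # Each face is parametrized so that e_u x e_v points outward.
    faces = [lambda u, v: ( K, u, v), lambda u, v: (-K, v, u),
             lambda u, v: (v,  K, u), lambda u, v: (u, -K, v),
             lambda u, v: (u, v,  K), lambda u, v: (v, u, -K)]
    grid = np.linspace(-K, K, N + 1)
    total = 0.0
    for face in faces:
        states = np.empty((N + 1, N + 1, 2), dtype=complex)
        for i, u in enumerate(grid):
            for j, v in enumerate(grid):
                _, vecs = np.linalg.eigh(h_weyl(face(u, v)))
                states[i, j] = vecs[:, 0]        # lower band
        for i in range(N):
            for j in range(N):                   # oriented plaquette loop
                loop = (np.vdot(states[i, j], states[i + 1, j])
                        * np.vdot(states[i + 1, j], states[i + 1, j + 1])
                        * np.vdot(states[i + 1, j + 1], states[i, j + 1])
                        * np.vdot(states[i, j + 1], states[i, j]))
                total += np.angle(loop)
    return total / (2 * np.pi)

C = berry_flux_through_cube()
assert abs(C - round(C)) < 1e-6 and abs(round(C)) == 1  # unit monopole charge
```

Since each plaquette phase is gauge invariant, the random phases returned by the eigensolver do not affect the result.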
The components of $\bm{\Omega}_{nm}(\mathbf{q})$ are given by
Eqs.~(\ref{exp-expressions-lin-alpha-Omega++})--(\ref{exp-expressions-lin-alpha-Omega--})
in Appendix \ref{sec:app-exp-expressions-lin-alpha}. The semiclassical Hamiltonian is defined by
\begin{equation}
\label{WPE-H-cal-def}
\mathcal{H}_{nm}(\mathbf{r},\mathbf{q}) = \left[\epsilon(\mathbf{q}) -e\varphi(\mathbf{r})\right]\delta_{nm}
+\left(\mathbf{M}_{nm}(\mathbf{q})\cdot\mathbf{B}\right)
\end{equation}
and contains the band, electrostatic, as well as the magnetization energy determined by the
magnetic moment of the wave packet, i.e.,
\begin{equation}
\label{WPE-magnetic-moment-def}
\mathbf{M}_{nm}(\mathbf{q}) = i\frac{e}{2\hbar c} \left[(\partial_{\mathbf{q}}\psi^{\dag}_{n, \mathbf{q}}) \times \left\{H(\mathbf{q}) -\epsilon(\mathbf{q}) I_4 \right\}(\partial_{\mathbf{q}}\psi_{m, \mathbf{q}})\right].
\end{equation}
Here $H(\mathbf{q})$ is given by $H_{\rm lin}^{(\pm)}(\mathbf{q})$ in Eqs.~(\ref{model-Hamiltonian-canonical-plus})
and (\ref{model-Hamiltonian-canonical-minus}) for the Dirac points at $\mathbf{k}_0^{(\pm)}$, respectively, and the
band energy $\epsilon(\mathbf{q})$ is defined by Eq.~(\ref{energy-dispersion-lin}). The explicit expressions for the
components of the magnetic moment (\ref{WPE-magnetic-moment-def}) are given by
Eqs.~(\ref{exp-expressions-lin-alpha-M++})--(\ref{exp-expressions-lin-alpha-M--}) in Appendix \ref{sec:app-exp-expressions-lin-alpha}.
The equations of motion for the non-Abelian wave packet in constant external electric $\mathbf{E}$ and magnetic $\mathbf{B}$
fields are given by \cite{Culcer-Niu:2005}
\begin{eqnarray}
\label{WPE-r-eq-def}
\dot{\mathbf{r}} &=& \mathbf{v}(\mathbf{q}) + \hbar \left[\dot{\mathbf{q}} \times \bm{\Omega}(\mathbf{q})\right],\\
\label{WPE-q-eq-def}
\hbar \dot{\mathbf{q}} &=& -e\mathbf{E} - \frac{e}{c}\left[\dot{\mathbf{r}}\times\mathbf{B}\right] - \frac{\hbar \mathbf{q}}{\tau},\\
\label{WPE-eta-eq-def}
i\hbar\, \dot{\eta}_n &=& \left[\left(\mathbf{M}_{nm}(\mathbf{q})\cdot\mathbf{B}\right)
+ \hbar \left(\dot{\mathbf{q}}\cdot\mathbf{A}_{nm}(\mathbf{q})\right)\right] \eta_{m},
\end{eqnarray}
where the wave packet's velocity is defined by
\begin{eqnarray}
\label{WPE-v-mean-def}
\mathbf{v}(\mathbf{q}) &=& \frac{1}{\hbar}\sum_{n,m,l=\pm} \eta^{\dag}_n \left[(D_{\mathbf{q}})_{nl}, \mathcal{H}_{lm}(\mathbf{r},\mathbf{q})\right] \eta_m = \frac{1}{\hbar} \partial_{\mathbf{q}}\epsilon(\mathbf{q})\nonumber\\
&+& \frac{1}{\hbar}\sum_{n,m,l=\pm} \eta^{\dag}_n \left\{ \delta_{ln}\left[\partial_{\mathbf{q}}\left(\mathbf{M}_{nm}(\mathbf{q})\cdot\mathbf{B}\right)\right] + i\left[\mathbf{A}_{nl}(\mathbf{q})\left(\mathbf{M}_{lm}(\mathbf{q})\cdot\mathbf{B}\right) -\left(\mathbf{M}_{nl}(\mathbf{q})\cdot\mathbf{B}\right)\mathbf{A}_{lm}(\mathbf{q})\right]\right\} \eta_m
\end{eqnarray}
and the Berry curvature reads
\begin{equation}
\label{WPE-Omega-mean-def}
\bm{\Omega}(\mathbf{q}) = \sum_{n,m=\pm} \eta^{\dag}_n \bm{\Omega}_{nm}(\mathbf{q}) \eta_m.
\end{equation}
It is worth noting that the non-Abelian equations of motion (\ref{WPE-r-eq-def})--(\ref{WPE-eta-eq-def}) differ from those for Abelian wave packets by the presence of an additional equation for the weights of the degenerate states $\eta_n$, i.e., Eq.~(\ref{WPE-eta-eq-def}).
Note also that a phenomenological dissipative term
$\hbar \mathbf{q}/\tau$ was introduced in Eq.~(\ref{WPE-q-eq-def}). Physically, it captures the effects of
scattering of the electrons on impurities, defects, and phonons in the relaxation time approximation.
The system of equations (\ref{WPE-r-eq-def})--(\ref{WPE-eta-eq-def}) can be rewritten in a more convenient form where all derivatives
are grouped on the left-hand sides, i.e.,
\begin{eqnarray}
\label{WPE-r-eq-exp}
\dot{\mathbf{r}} \left[1-\frac{e}{c} \left(\bm{\Omega}\cdot\mathbf{B}\right)\right]
&=& \mathbf{v} -e\left[\mathbf{E}\times\bm{\Omega}\right] -\frac{e}{c} \mathbf{B} \left(\bm{\Omega}\cdot\mathbf{v}\right) -\frac{\hbar\left[\mathbf{q}\times\bm{\Omega}\right]}{\tau},\\
\label{WPE-q-eq-exp}
\hbar \dot{\mathbf{q}} \left[1-\frac{e}{c} \left(\bm{\Omega}\cdot\mathbf{B}\right)\right]
&=& \mathbf{F},\\
\label{WPE-eta-eq-exp}
i\hbar\, \dot{\eta}_n \left[1-\frac{e}{c} \left(\bm{\Omega}\cdot\mathbf{B}\right)\right] &=& \sum_{m=\pm}\left\{\left(\mathbf{F}\cdot\mathbf{A}_{nm}\right)
+\left(\mathbf{M}_{nm}\cdot\mathbf{B}\right)\left[1-\frac{e}{c} \left(\bm{\Omega}\cdot\mathbf{B}\right)\right]\right\} \eta_{m}.
\end{eqnarray}
Here we used the following short-hand notation:
\begin{equation}
\label{force}
\mathbf{F}=-e\mathbf{E} -\frac{e}{c} \left[\mathbf{v}\times \mathbf{B}\right] + \frac{e^2}{c} \bm{\Omega}\left(\mathbf{E}\cdot\mathbf{B}\right) -\frac{\hbar \mathbf{q}}{\tau} + \frac{e\hbar \bm{\Omega}}{c\tau} \left(\mathbf{q}\cdot\mathbf{B}\right).
\end{equation}
For simplicity of presentation, here we omitted the arguments of $\bm{\Omega}$, $\mathbf{v}$, $\mathbf{M}_{nm}$,
and $\mathbf{A}_{nm}$. As is easy to see, the presence of the non-Abelian corrections significantly complicates the
equations of motion. As a result, the latter can be solved only numerically. The corresponding solutions for the cases
of perpendicular and parallel electric and magnetic fields are discussed in the next section.
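The grouped form of the velocity equation can be cross-checked against the original coupled system, Eqs.~(\ref{WPE-r-eq-def}) and (\ref{WPE-q-eq-def}): treating $\dot{\mathbf{r}}$ and $\hbar\dot{\mathbf{q}}$ as the unknowns of a $6\times 6$ linear system for a random field configuration, the solution reproduces Eq.~(\ref{WPE-r-eq-exp}). A sketch in natural units ($\hbar=c=e=1$), with a small $\bm{\Omega}$ so that the system is well conditioned:

```python
import numpy as np

def cross_mat(a):
    """Matrix form of the cross product: cross_mat(a) @ b == a x b."""
    ax, ay, az = a
    return np.array([[0, -az, ay], [az, 0, -ax], [-ay, ax, 0]])

rng = np.random.default_rng(7)
v, E, B, q = (rng.normal(size=3) for _ in range(4))
Om = 0.1 * rng.normal(size=3)   # small Berry curvature for conditioning
e = c = hbar = 1.0
tau = 2.0

# Unknowns x = rdot and y = hbar*qdot:
#   x - y x Om = v
#   y + (e/c) x x B = -e E - hbar q / tau
A = np.block([[np.eye(3), cross_mat(Om)],
              [-(e / c) * cross_mat(B), np.eye(3)]])
b = np.concatenate([v, -e * E - hbar * q / tau])
x, y = np.split(np.linalg.solve(A, b), 2)

# Exact solution satisfies the original equation of motion ...
assert np.allclose(x, v + np.cross(y, Om))
# ... and the grouped form of the velocity equation
lhs = x * (1 - (e / c) * (Om @ B))
rhs = (v - e * np.cross(E, Om) - (e / c) * B * (Om @ v)
       - (hbar / tau) * np.cross(q, Om))
assert np.allclose(lhs, rhs)
```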
\section{Motion of wave packets}
\label{sec:trajectories-pm-DP}
As discussed in the previous section, the time evolution of the coordinates, momenta, and partial weights
of the wave packets is described by Eqs.~(\ref{WPE-r-eq-exp}), (\ref{WPE-q-eq-exp}), and (\ref{WPE-eta-eq-exp}).
These equations should also be supplemented with initial conditions. In view of the translation
invariance of the problem, we can, without loss of generality, set the initial coordinates of the wave packet
at the origin of the coordinate system, i.e.,
\begin{equation}
\label{trajectories-Ey-Bz-init-val-r}
\mathbf{r}(t=0)=\mathbf{0}.
\end{equation}
As for the initial value of the wave packet's momentum, it is convenient to match it with the steady-state
value determined by the electric field in the relaxation time approximation, i.e.,
\begin{equation}
\label{trajectories-Ey-Bz-init-val-q}
\mathbf{q}(t=0)= - \frac{e\tau \mathbf{E}}{\hbar}.
\end{equation}
Concerning the initial conditions for the partial weights $\eta_{\pm}$, it is natural to assume that the
wave packets are non-chiral with respect to the $\Gamma_5$ transformation (i.e., the probabilities to
find an electron in the positive and negative $\Gamma_5$-chirality states are equal), i.e.,
\begin{equation}
\label{trajectories-Ey-Bz-init-val-eta-1}
\eta_{+}(t=0)= \eta_{-}(t=0)= \frac{1}{\sqrt{2}}.
\end{equation}
For the sake of completeness, however, in Sec.~\ref{sec:trajectories-pm-DP-exact-tau-2-polarization} we will also consider
the case of the initially polarized wave packets with
\begin{equation}
\label{trajectories-pm-DP-exact-tau-2-polarization-eta-1}
\eta_{+}(t=0) = 1, \qquad \eta_{-}(t=0) = 0
\end{equation}
and
\begin{equation}
\label{trajectories-pm-DP-exact-tau-2-polarization-eta-2}
\eta_{+}(t=0) = 0, \qquad \eta_{-}(t=0) = 1.
\end{equation}
Let us begin our consideration with the case when the background magnetic field is absent, $\mathbf{B}=\mathbf{0}$.
As is easy to see, the structure of the equations of motion (\ref{WPE-r-eq-exp}), (\ref{WPE-q-eq-exp}),
and (\ref{WPE-eta-eq-exp}) drastically simplifies, i.e.,
\begin{eqnarray}
\label{WPE-r-eq-exp-B=0}
\dot{\mathbf{r}} &=& \mathbf{v} -e\left[\mathbf{E}\times\bm{\Omega}\right] -\frac{\hbar\left[\mathbf{q}\times\bm{\Omega}\right]}{\tau},\\
\label{WPE-q-eq-exp-B=0}
\hbar \dot{\mathbf{q}} &=& -e\mathbf{E} -\frac{\hbar \mathbf{q}}{\tau},\\
\label{WPE-eta-eq-exp-B=0}
i\hbar\, \dot{\eta}_n &=& -\sum_{m=\pm}\left(\left[e\mathbf{E} +\frac{\hbar \mathbf{q}}{\tau} \right]\cdot\mathbf{A}_{nm}\right)\eta_{m}.
\end{eqnarray}
For the initial conditions in Eqs.~(\ref{trajectories-Ey-Bz-init-val-r}) and (\ref{trajectories-Ey-Bz-init-val-q}),
we obtain the following analytical solution:
\begin{eqnarray}
\label{WPE-r-sol-B=0}
\mathbf{r}(t) &=& \mathbf{v} t,\\
\label{WPE-q-sol-B=0}
\mathbf{q}(t) &=& -\frac{\tau e\mathbf{E}}{\hbar},\\
\label{WPE-eta-sol-B=0}
\eta_n(t) &=& \eta_n(0),
\end{eqnarray}
which describes the inertial motion of wave packets with no mixing of the $\Gamma_5$-chirality states.
It is worth noting that this result is valid for both full and linearized Hamiltonians given in Eq.~(\ref{low-energy-Hamiltonian})
as well as Eqs.~(\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively.
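The inertial solution reflects an exact cancellation: with the steady-state momentum $\mathbf{q}=-e\tau\mathbf{E}/\hbar$, the relaxation term $-\hbar[\mathbf{q}\times\bm{\Omega}]/\tau$ in Eq.~(\ref{WPE-r-eq-exp-B=0}) cancels the anomalous velocity $-e[\mathbf{E}\times\bm{\Omega}]$, while the right-hand side of Eq.~(\ref{WPE-q-eq-exp-B=0}) vanishes identically. A short numerical check ($\hbar=e=1$, arbitrary $\mathbf{E}$ and $\bm{\Omega}$):

```python
import numpy as np

rng = np.random.default_rng(3)
E, Om = rng.normal(size=3), rng.normal(size=3)
e = hbar = 1.0
tau = 2.0

q = -e * tau * E / hbar                  # steady-state momentum
qdot = (-e * E - hbar * q / tau) / hbar  # right-hand side of the q equation
anom = -e * np.cross(E, Om) - (hbar / tau) * np.cross(q, Om)

assert np.allclose(qdot, 0)  # momentum stays at its steady-state value
assert np.allclose(anom, 0)  # anomalous and relaxation terms cancel: rdot = v
```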
As we will see below, the dynamics of wave packets becomes considerably more
complicated when an external magnetic field is present. The cases of parallel and perpendicular electromagnetic
fields are studied in the next two subsections.
\subsection{Parallel electric and magnetic fields}
\label{sec:trajectories-pm-DP-exact-tau-2-Ey-By}
In this subsection, we study the motion of wave packets in parallel electric and magnetic fields when the initial
chiral weights are equal, i.e., $\eta_{+}(t=0)=\eta_{-}(t=0)=1/\sqrt{2}$. In order to solve the equations of motion
numerically, we set $\mathbf{E}=E\hat{\mathbf{y}}$ and $\mathbf{B}=B\hat{\mathbf{y}}$, where $E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.
The position vectors $\mathbf{r}^{(\pm)}$ of the wave packets from different valleys are shown
in Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r}. We find that the momentum-dependent chirality mixing
leads to a noticeable splitting of the wave packets in the $x$ and $z$ directions that increases with time.
At the same time, the splitting in the $y$ direction is negligible. We also find that the non-Abelian terms give rise
to periodic oscillations of the wave packets around their overall linear trajectories. As we argue below, the physical
origin of such oscillations is connected with the precession of the magnetic moment.
Further, we find that the momenta of wave packets $\mathbf{q}^{(\pm)}$ evolve similarly to the coordinates.
In particular, there is a negligible
relative splitting in the $y$ components of momenta, but the $x$ and $z$ components of $\mathbf{q}^{(\pm)}$ oscillate
with time. Unlike the coordinates, however, the average splitting of the $x$ and $z$ components of momenta
does not increase with time. In this connection, we should remark that the wave packet energies never exceed the
threshold value (\ref{model-epsk0-pm}) and, thus, the numerical analysis remains within the range of applicability of
the low-energy theory.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{Fig2a.eps}\hfill
\includegraphics[width=0.45\textwidth]{Fig2b.eps}\\
\caption{The positions $\mathbf{r}^{(\pm)}$ of the wave packets as a function of time $t$ (left panel) and the splitting
$\Delta \mathbf{r}=\mathbf{r}^{(+)}-\mathbf{r}^{(-)}$ between the coordinates of the wave packets from different Dirac
points (right panel). The red, blue, and green lines correspond to the $x$, $y$, and $z$ components, respectively.
The solid and dashed lines represent the results for the wave packets described by Hamiltonians
(\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively.
We used $\alpha=0.5\alpha^{*}$, $\mathbf{E}=E\hat{\mathbf{y}}$, and $\mathbf{B}=B\hat{\mathbf{y}}$, where
$E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r}
\end{center}
\end{figure}
The trajectories of the wave packets and the probabilities $|\eta_{\pm}|^2$ for the wave packets from different
valleys to be in certain $\Gamma_5$-chirality states are presented in the left and right panels of
Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-3D}, respectively. The projections of trajectories onto the
$x$-$y$, $x$-$z$, and $y$-$z$ planes are shown in the three panels of Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-2D}.
The timescale is set to $t\leq t_{\rm max}=5~\mbox{ns}$.
In agreement with the results in Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r}, the trajectories
for the wave packets from different Dirac points are clearly separated. The origin of the splitting of the wave packets
from different valleys in the $x$ and $z$ directions, seen in Figs.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-3D}
and \ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-2D}, can be traced back to the nontrivial structure of the low-energy
Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}). It is remarkable
that the magnitude of the average splitting linearly increases with time. By making use of this fact, we estimate
that the spatial separation can reach a few micrometers for a centimeter-size crystal, provided the latter is sufficiently
clean and the quasiparticle mean free path is sufficiently long. Such a splitting could provide an observational signature
for the nontrivial wave packets dynamics in the Dirac semimetals $\mathrm{A_3Bi}$ (A=Na,K,Rb). One should note,
however, that the splitting is largely washed away in strong magnetic fields (e.g., $B=100~\mbox{G}$) when the
Lorentz force starts to dominate and causes the trajectories to overlap. [Note that the separation of the wave packets can be resolved experimentally only on spatial scales larger than the wave packet's characteristic size, i.e., $\lambda \gtrsim 2\pi/k$.]
The small spiral-like features on top of the linear separation in Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-2D} (see also the left panel in Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-3D})
can be traced back to the oscillations of the partial weights $\eta_{\pm}$. This is also confirmed by the results for
$|\eta_{\pm}|^2$ in the right panel of Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-3D}, which demonstrate
that the propagation of the wave packets is accompanied by a weakly oscillating splitting of the $\Gamma_5$-chirality.
From a physics viewpoint, these oscillations are related to the precession of the magnetic moment of the wave packet.
They are determined by the non-Abelian nature of the Berry curvature and the nontrivial structure of the magnetic
moment. From an observational viewpoint, however, these features could be very difficult to detect. Indeed, while the
oscillations could be made larger by increasing the magnetic field, the valley separation becomes weak in such
a regime.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.59\textwidth]{Fig3a.eps}\hfill
\includegraphics[width=0.39\textwidth]{Fig3b.eps}
\caption{Left panel: The trajectories of the wave packets from different Dirac points for $t\leq t_{\rm max}=5~\mbox{ns}$.
The red and blue lines represent the wave packets described by Hamiltonians (\ref{model-Hamiltonian-canonical-plus})
and (\ref{model-Hamiltonian-canonical-minus}) at $\alpha=0.5\alpha^{*}$, respectively. The black line corresponds to
the case $\alpha=0$, where the wave packets do not split.
Right panel: The time dependence of the probabilities $|\eta_{\pm}|^2$ to find the wave packets in
certain $\Gamma_5$-chirality states. The red and blue lines correspond to the wave packets described by
Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}).
The solid and dashed lines describe $|\eta_{+}|^2$ and $|\eta_{-}|^2$, respectively.
We used $\mathbf{E}=E\hat{\mathbf{y}}$ and $\mathbf{B}=B\hat{\mathbf{y}}$, where
$E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-3D}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\hspace{-0.32\textwidth}(a)\hspace{0.32\textwidth}(b)\hspace{0.32\textwidth}(c)\\[0pt]
\includegraphics[width=0.32\textwidth]{Fig4a.eps}\hfill
\includegraphics[width=0.32\textwidth]{Fig4b.eps}\hfill
\includegraphics[width=0.32\textwidth]{Fig4c.eps}
\caption{The projections of the wave packet trajectories onto the following planes: $x$-$y$ (panel a),
$x$-$z$ (panel b), and $y$-$z$ (panel c). The red and blue lines correspond to the wave packets described
by Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively.
We used $t\leq t_{\rm max}=5~\mbox{ns}$, $\alpha=0.5\alpha^{*}$, $\mathbf{E}=E\hat{\mathbf{y}}$, and
$\mathbf{B}=B\hat{\mathbf{y}}$, where $E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-Ey-By-r-2D}
\end{center}
\end{figure}
Before proceeding to the case of the perpendicular electric and magnetic fields, let us provide some
underlying reasons for the spatial valley separation of wave packets. Because of a rather complicated
structure of Eqs.~(\ref{WPE-r-eq-exp}), (\ref{WPE-q-eq-exp}), and (\ref{WPE-eta-eq-exp}), we
present only a rough qualitative description. To start with, we assume that the changes of the
partial weights are negligible, i.e., $\eta_{\pm}(t)=\mathrm{const}$. In such a case, the spiral-like motion on
top of the mostly linear splitting disappears. Then, it is easy to check that the remaining spatial
separation is driven primarily by the velocity term in Eq.~(\ref{WPE-r-eq-exp}), i.e., the first term
on the right-hand side. In fact, the separation in both $x$ and $z$ directions is related to the same $z$
component of velocity $\mathbf{v}$. While the effect of $v_z$ on the motion in the $z$ direction
is obvious, the splitting in the $x$ direction is achieved indirectly. In particular, the $x$ component of the
velocity is mainly determined by the corresponding component of the momentum, which, in turn, is
generated by the Lorentz force $e v_zB_y/c$ in Eq.~(\ref{WPE-q-eq-exp}), i.e., the second term in expression (\ref{force}). Obviously, such a splitting is induced only when $\mathbf{B}\neq \mathbf{0}$.
However, the presence of nonzero $\alpha$ in the energy dispersion relation (\ref{energy-dispersion-lin})
plays a key role as well: it gives a nonzero $v_z$ everywhere away from the Dirac points and makes
an efficient spatial separation of the wave packets possible. Thus, the valley splitting of wave packets
is in large part connected with the special form of the momentum-dependent chirality-mixing
term $\Delta(\mathbf{q})$ in the low-energy Hamiltonian.
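The mechanism described above can be illustrated by a toy numerical model. The sketch below is not the full set of semiclassical equations (\ref{WPE-r-eq-exp})--(\ref{WPE-eta-eq-exp}): it omits the Berry-curvature and magnetic-moment terms and simply assigns the two valleys a constant, opposite-in-sign $v_z$ as a stand-in for the $\alpha$-induced out-of-plane velocity. All quantities are dimensionless with $e=c=v_F=1$, and the field values are illustrative rather than the $E=200~\mbox{V/m}$ and $B=10~\mbox{G}$ used in the figures.

```python
import numpy as np

# Dimensionless toy parameters (e = c = v_F = 1); field values are illustrative.
E_FIELD = np.array([0.0, 0.05, 0.0])   # electric field along y
B_FIELD = np.array([0.0, 0.20, 0.0])   # magnetic field along y (parallel setup)

def trajectory(valley, vz0=0.1, vF=1.0, dt=1.0e-3, steps=5000):
    """Toy semiclassical trajectory of one wave packet.
    valley = +1 or -1 labels the Dirac point; vz0 is a constant,
    valley-odd z velocity standing in for the alpha-induced out-of-plane
    velocity (an assumption of this sketch, not the actual dispersion)."""
    q = np.array([0.0, 1.0, 0.0])      # initial momentum along E
    r = np.zeros(3)
    for _ in range(steps):
        v = vF * q / np.linalg.norm(q) + np.array([0.0, 0.0, valley * vz0])
        q = q + dt * (E_FIELD + np.cross(v, B_FIELD))  # dq/dt = E + v x B
        r = r + dt * v                                 # dr/dt = v
    return r
```

With $\mathbf{B}\parallel\hat{\mathbf{y}}$, the Lorentz force has the $x$ component $-v_zB_y$, so the opposite $v_z$ of the two valleys generates opposite $x$ momenta: the packets separate in the $x$-$z$ plane while drifting together along $\mathbf{E}$, in line with the qualitative picture above.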
\subsection{Perpendicular electric and magnetic fields}
\label{sec:trajectories-pm-DP-exact-tau-2-Ey-Bz}
In this subsection, we consider the motion of the wave packets in perpendicular electric and magnetic fields.
We use the same magnitudes of the electric and magnetic fields as in the previous subsection, but the
magnetic field is now in the $z$ direction, i.e., $\mathbf{B}=B\hat{\mathbf{z}}$.
Again, $\alpha=0.5\alpha^{*}$, which is sufficiently small to ensure that the relative contribution of the off-diagonal terms, quantified by $\Delta(\mathbf{q})/(v_Fq)$, would remain small for the timescales used in our numerical calculations.
Let us note also that, because of the off-diagonal gap term
$\propto\alpha \sqrt{m} k_{\pm}^2$, the dynamics for the two possible orientations of the magnetic field,
i.e., $\mathbf{B}=B\hat{\mathbf{z}}$ and
$\mathbf{B}=B\hat{\mathbf{x}}$, are not equivalent. In fact, for sufficiently large
timescales, the semiclassical approximation fails in the latter case. Therefore, in this study, we will not
discuss it.
The evolution of the positions $\mathbf{r}^{(\pm)}$ for the wave packets from different Dirac points (valleys)
is shown in Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r}. As expected in the perpendicular electric
and magnetic fields, the coordinates of the wave packets oscillate in the plane normal to $\mathbf{B}$ (i.e.,
the $x$ and $y$ coordinates), albeit with a nonharmonic pattern. We also found that the non-Abelian
terms lead to the oscillation-like motion of the wave packets along the $z$ axis, as well as to a small splitting
of the trajectories of the wave packets from different Dirac points (see the right panel in
Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r}). We checked that the wave packet momenta
oscillate too and split slightly when $\alpha\neq0$. Remarkably, however, the $z$ components of momenta
vanish. We conclude, therefore, that the slow motion of the wave packets in the $z$ direction is caused exclusively
by the non-Abelian effects. In all cases presented, we verified that the energies of the wave packets remain
sufficiently small to justify the use of the linearized model.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{Fig5a.eps}\hfill
\includegraphics[width=0.45\textwidth]{Fig5b.eps}\\
\caption{The positions $\mathbf{r}^{(\pm)}$ of the wave packets as a function of time $t$ (left panel) and the splitting
$\Delta \mathbf{r}=\mathbf{r}^{(+)}-\mathbf{r}^{(-)}$ of the coordinates of the wave packets from different Dirac points
(right panel). The red, blue, and green lines correspond to the $x$, $y$, and $z$ components, respectively.
The solid and dashed lines represent the results for the wave packets described by Hamiltonians
(\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively.
We used $\alpha=0.5\alpha^{*}$, $\mathbf{E}=E\hat{\mathbf{y}}$ and $\mathbf{B}=B\hat{\mathbf{z}}$, where
$E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r}
\end{center}
\end{figure}
We present the trajectories of the wave packets and the probabilities $|\eta_{\pm}|^2$ to find the
wave packets from different valleys in certain chiral states in the left and right panels of
Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r-3D}, respectively. The projections of the
wave packet trajectories onto the $x$-$y$, $x$-$z$, and $y$-$z$ planes are shown in the three
panels of Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r-2D}. The timescale is set to
$t\leq t_{\rm max}=0.35~\mbox{ns}$.
As is clear from the results in the left panel of Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r-3D},
the non-Abelian corrections lead to rapid oscillations of the wave packets in the $z$ direction. Note
that while the amplitude of oscillations increases, their period decreases with time. Such a behavior
suggests that the quasiclassical approximation gradually breaks down. The spatial oscillations of the
wave packets can be traced back to an oscillatory time dependence of the partial weights and disappear if
one enforces constant weights $\eta_{\pm}(t)$.
In general, the trajectories of the wave packets from
different Dirac points are asymmetric with respect to the
$x$-$y$ plane. However, given their substantial overlap (see the left panel of Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r-3D}),
we think that the valley separation cannot be easily achieved in this case. We checked, however,
that trajectories change qualitatively at sufficiently large magnetic fields and the valley separation
in the $z$ direction becomes possible at least in principle, although its magnitude is estimated to be
rather small.
It is interesting to point out that, according to the right panel in Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r-3D},
the wave packets from different Dirac points develop nonzero and opposite in sign $\Gamma_5$-chirality
polarizations. Such polarizations have an interesting oscillatory pattern with the absolute values of the
partial weights reaching almost constant values at sufficiently large timescales. In summary, while the
valley separation is weak, the deviation of the wave packets from the $x$-$y$ plane provides clear
evidence for the non-Abelian effects.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.59\textwidth]{Fig6a.eps}\hfill
\includegraphics[width=0.39\textwidth]{Fig6b.eps}
\caption{Left panel: The trajectories of the wave packets corresponding to different Dirac points or valleys for $t\leq t_{\rm max}=0.35~\mbox{ns}$.
The red and blue lines represent the wave packets described by Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}) at $\alpha=0.5\alpha^{*}$, respectively. The black line corresponds to the case $\alpha=0$, where the wave packets are not split.
Right panel: The probabilities $|\eta_{\pm}|^2$ to find the wave packets in certain $\Gamma_5$ states for $\alpha=0.5\alpha^{*}$.
The red and blue lines correspond to the wave packets described by Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}).
The solid and dashed lines describe $|\eta_{+}|^2$ and $|\eta_{-}|^2$, respectively.
We used $\mathbf{E}=E\hat{\mathbf{y}}$ and $\mathbf{B}=B\hat{\mathbf{z}}$, where
$E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r-3D}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\hspace{-0.32\textwidth}(a)\hspace{0.32\textwidth}(b)\hspace{0.32\textwidth}(c)\\[0pt]
\includegraphics[width=0.32\textwidth]{Fig7a.eps}\hfill
\includegraphics[width=0.32\textwidth]{Fig7b.eps}\hfill
\includegraphics[width=0.32\textwidth]{Fig7c.eps}
\caption{The projections of the wave packet trajectories onto the following planes:
$x$-$y$ (panel a), $x$-$z$ (panel b), and $y$-$z$ (panel c). The red solid and blue
dashed lines correspond to the wave packets described by Hamiltonians
(\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively.
We used $t\leq t_{\rm max}=0.35~\mbox{ns}$, $\alpha=0.5\alpha^{*}$,
$\mathbf{E}=E\hat{\mathbf{y}}$, and $\mathbf{B}=B\hat{\mathbf{z}}$, where
$E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-Ey-Bz-r-2D}
\end{center}
\end{figure}
\subsection{Motion of wave packets for chirally polarized initial states}
\label{sec:trajectories-pm-DP-exact-tau-2-polarization}
In this subsection, for completeness, we investigate the motion of wave packets when the initial states
are chirally polarized. We limit ourselves to the two limiting configurations given by
Eqs.~(\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-1}) and (\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-2}).
For the sake of brevity, we investigate only the most interesting case of parallel electric and
magnetic fields. The corresponding results are shown in Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-polarization-Ey-By-r-3D}
with the projections onto the $x$-$y$, $x$-$z$, and $y$-$z$ planes presented in the three panels of
Fig.~\ref{fig:trajectories-pm-DP-exact-tau-2-polarization-Ey-By-r-2D}. The timescale is limited to
$t\leq t_{\rm max}=1~\mbox{ns}$. While the probabilities to find wave packets in states with fixed
chirality are not shown, we checked that they weakly oscillate around their initial values. Just like
in the case of the non-chiral wave packets discussed in Sec.~\ref{sec:trajectories-pm-DP-exact-tau-2-Ey-By},
the physical origin of this subdominant oscillating motion can be traced to the precession of the magnetic moment.
As one can easily see from Figs.~\ref{fig:trajectories-pm-DP-exact-tau-2-polarization-Ey-By-r-3D} and
\ref{fig:trajectories-pm-DP-exact-tau-2-polarization-Ey-By-r-2D}, the trajectories of the wave packets
corresponding to different Dirac points but with the same initial $\Gamma_5$ weights are completely
split and the amplitude of the splitting increases with time. On the other hand, the wave packets with
different initial $\Gamma_5$ weights are only weakly separated. Therefore, in the case of the nonequal
initial weights~(\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-1}) and
(\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-2}), there is a rather weak splitting of the chiral
wave packets on top of the relatively large valley splitting.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.75\textwidth]{Fig8.eps}
\caption{The trajectories of the wave packets in parallel electric and magnetic fields.
The red and blue lines correspond to the wave packets for the initial weights
(\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-1}) described by Hamiltonians
(\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively.
The green and brown lines correspond to the wave packets for the initial weights
(\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-2}) described by Hamiltonians
(\ref{model-Hamiltonian-canonical-plus}) and (\ref{model-Hamiltonian-canonical-minus}), respectively.
We used $t\leq t_{\rm max}=1~\mbox{ns}$, $\alpha=0.5\alpha^{*}$, $\mathbf{E}=E\hat{\mathbf{y}}$ and $\mathbf{B}=B\hat{\mathbf{y}}$,
where $E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-polarization-Ey-By-r-3D}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\hspace{-0.32\textwidth}(a)\hspace{0.32\textwidth}(b)\hspace{0.32\textwidth}(c)\\[0pt]
\includegraphics[width=0.32\textwidth]{Fig9a.eps}\hfill
\includegraphics[width=0.32\textwidth]{Fig9b.eps}\hfill
\includegraphics[width=0.32\textwidth]{Fig9c.eps}
\caption{The projections of the wave packet trajectories onto the following planes: $x$-$y$ (panel a), $x$-$z$ (panel b), and $y$-$z$ (panel c).
The red solid and blue dashed lines correspond to the wave packets for the initial weights
(\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-1}) described by Hamiltonians (\ref{model-Hamiltonian-canonical-plus}) and
(\ref{model-Hamiltonian-canonical-minus}), respectively.
The green and brown lines correspond to their counterparts with the initial weights (\ref{trajectories-pm-DP-exact-tau-2-polarization-eta-2}).
We used $t\leq t_{\rm max}=1~\mbox{ns}$, $\alpha=0.5\alpha^{*}$, $\mathbf{E}=E\hat{\mathbf{y}}$ and $\mathbf{B}=B\hat{\mathbf{y}}$,
where $E=200~\mbox{V/m}$ and $B=10~\mbox{G}$.}
\label{fig:trajectories-pm-DP-exact-tau-2-polarization-Ey-By-r-2D}
\end{center}
\end{figure}
\section{Summary and discussions}
\label{sec:Summary}
In this paper, we investigated the dynamics of the electron wave packets in the Dirac semimetals $\mathrm{A_3Bi}$ (A=Na,K,Rb).
We showed that due to the hidden $Z_2$ Weyl nature of these materials \cite{Gorbar:2014sja}, the semiclassical motion of
the wave packets is qualitatively affected by the non-Abelian contributions when external electric $\mathbf{E}$ and magnetic
$\mathbf{B}$ fields are applied to the system. These contributions arise due to the degeneracy of the electron states and the
chirality-mixing from $\Delta(\mathbf{k})$ in the effective Hamiltonian. Unlike the usual mass (gap) term in the Dirac Hamiltonian,
$\Delta(\mathbf{k})$ is momentum-dependent and vanishes at the Dirac points. As a result, the gapless energy spectrum is
preserved and the chirality remains well defined in the close vicinity of the Dirac points. The doubly degenerate states
near each Dirac point can be classified with respect to the $\Gamma_5$ transformation. [Since at small $\Delta(\mathbf{k})$
the latter is approximately the same as the chiral transformation, we use the term chirality to classify the corresponding
states.]
It is found that when $\mathbf{E}\parallel\mathbf{B}$ and the magnetic field is sufficiently small, the trajectories of
wave packets from different valleys (or, equivalently, Dirac points) are spatially split in the plane perpendicular to the fields.
More importantly, the magnitude of the valley separation grows linearly with time. One might speculate,
therefore, that a substantial splitting could be achieved in macroscopic systems when the quasiparticle mean free
path is sufficiently large. (Note that the propagation of wave packets depends on the relative phase of the weights
whose values, however, would be difficult to control in experiments.) Interestingly, the non-Abelian corrections
allow for a spiral-like motion of the wave packets on top of the almost linear separation. As is clear, the physical
origin of such spiraling is connected with the precession of the magnetic moment. The same effect allows also
for a small oscillating chirality polarization of the wave packets. While the amplitude of the spirals is estimated
to be relatively small, the linear splitting of the trajectories due to the momentum-dependent chirality-mixing term
could reach micrometers for centimeter-size crystals. When the wave packets are initially chirality polarized,
there is a weak splitting of the chiral wave packets on top of the well-pronounced valley separation. The latter
has the same origin as for the nonpolarized wave packets. In the case of a strong magnetic field, the Lorentz force
dominates, which leads to weakly separated trajectories of the wave packets from different Dirac points.
Therefore, we believe that the setup with the parallel electric and magnetic fields allows for a spatial splitting
of the wave packets that can be, in principle, tested experimentally.
When the electric and sufficiently weak magnetic fields are perpendicular, the valley separation is negligible
for the equal initial weights of the degenerate chirality states. On the other hand, the non-Abelian
corrections lead to a well-pronounced oscillating motion of the wave packets in the direction parallel to the magnetic field.
If detected, such deviations from the usual in-plane motion could provide another signature of the non-Abelian effects.
In addition, there is also a weak chirality polarization of the states from different Dirac points, which, however,
is not easily accessible because the trajectories from different valleys are not well-split.
The situation changes at sufficiently large magnetic fields, when the trajectories from different valleys are separated
along the direction of the magnetic field. However, the separation is nonmonotonic and is estimated to be relatively weak.
Therefore, while the case of the perpendicular electric and magnetic fields contains interesting physics, it might be difficult
to realize experimentally.
It is instructive to compare the obtained results with those in Ref.~\cite{Gorbar:2017dtp}, where the valley and chirality
splitting was shown to be possible by applying a superposition of magnetic and strain-induced pseudomagnetic fields. In the absence
of the chirality-mixing term $\Delta(\mathbf{k})$ and the non-Abelian corrections to the Berry curvature, however, it was
critical to include a pseudomagnetic field. Without the latter, the right- and left-handed beams from different valleys
would overlap and form nonchiral beams that do not correspond to a certain valley. In contrast, as we showed in the
study here, the non-Abelian effects and the gap term can lead to both valley and chirality splittings even in
the absence of a pseudomagnetic field. While the effects are estimated to be rather small, they may be experimentally accessible via local probes.
\begin{acknowledgments}
The work of E.V.G. was partially supported by the Program of Fundamental Research of the
Physics and Astronomy Division of the National Academy of Sciences of Ukraine.
The work of V.A.M. and P.O.S. was supported by the Natural Sciences and Engineering Research Council of Canada.
The work of I.A.S. was supported by the U. S. National Science Foundation under Grants No.~PHY-1404232
and No.~PHY-1713950.
\end{acknowledgments}
\section{Introduction}
\label{Intro}
Astrophysical research is an important driver for advances
in computer science, especially so for high performance computing and data
intensive calculations. We are used to the continuous increase of processor power,
which steadily expands the potential of computer-based analysis.
Sensor sizes and storage capacities are rising even faster, and in recent years both have grown more quickly than \emph{Moore's Law} would predict.
Unfortunately, this trend of growing data volumes also increases the complexity of the
data management, as well as the processing, analysis and visualisation. Above a
certain level, new methods have to be applied, e.g. the management of data becomes
a task that is no longer trivial enough for a file system alone.
This challenge affects many other domains outside of
astrophysics in the same way, and finding answers is important,
since in several research areas further progress depends on
the successful processing of data volumes at the high Terabyte or Petabyte scale.
One solution for improved data management is the recent success in metadata
standardisation and the corresponding advanced protocols. In astrophysics this
approach has led to the international "Virtual Observatory" initiative, which
now allows for a fast search within extensive volumes of diverse stored
data.
But computer science itself has also researched ways to improve infrastructure
usage and simplify the processing of information.
The most compelling answer of recent years was the
massive development in Grid computing, where a new software layer is used to
connect distributed information infrastructures like clusters, storage servers
and desktops to a loose network (see: \citep{ITF09}).
Several research grid infrastructures
were successfully set up in the past years. The most impressive example is the US "TeraGrid",
funded since 2001 by the National Science Foundation. It offers over a petaflop of total
compute capabilities and many different services and gateways to thousands of US scientists.
Like the Open Science Grid, TeraGrid is based on the Globus Toolkit, enlarged
by an auxiliary software package set.
The European enterprise \emph{EGEE} (``Enabling Grids for E-SciencE'') was started
in 2004 as an EU project, funded by the European Union's research framework.
EGEE was at the beginning mostly driven by CERN's new Large Hadron Collider and its demand
for compute power. It currently combines about 40,000 CPUs and will in 2010 be transferred
into a new body called EGI (European Grid Initiative). It will then focus mostly on the role
to coordinate the collaboration of the national grid initiatives with supported middlewares
limited to gLite, UNICORE and ARC.
The German national Grid initiative was inaugurated in 2004 by the Federal Research ministry.
It has seen two main stages: \dgrid{} 1 (2005-2008) focussed on Grid applications for
fundamental sciences, whereas \dgrid{} 2 (2007-2010) mostly researched Grid use in applied sciences
and industry.
The \astrogridd{} project was part of the first \dgrid{} initiative and started
in 2005. Five major German astronomy institutes participated: AIP, AEI, MPA,
MPE, and ZAH, together with computer science groups from the ZIB Supercomputer
center and TUM. They collaborated on the common project goal: To establish
a collaborative working environment for astronomy which provides the users
with the powerful and reliable software tools and allows easy access to
compute and storage facilities for their scientific work.
To achieve this, the project aimed to:
\begin{itemize}
\item set up a grid-based infrastructure for astronomical
and astrophysical research
\item embed existing computational facilities, astronomical software applications,
data archives and instruments
\item integrate this grid infrastructure into the national \dgrid{} environment
\item provide support for other astronomical groups to join
\item strengthen international partnerships
\end{itemize}
\astrogridd{} has reached these goals in its setup phase which ended early 2009.
The most important results were the first Virtual Organisation management,
now the \dgrid{} standard (see \ref{VOM}), the integration of
special hardware into \dgrid{} (\ref{Nbo}, \ref{Rob}), and the production run of
one of the most compute-intensive scientific grid applications to date (\ref{Geo}).
We hereby present our experiences and results in some detail. The paper
is grouped into two main chapters: First the astrophysical applications
(\ref{Use}) and secondly our developments in information technology (\ref{Services}).
In the summary (\ref{Summary}) we give an outlook on our future plans.
\section{Astronomy and Grid: Astronomical Use Cases running on the
project network}
\label{Use}
Most areas of Astronomical research can profit from
e-Science concepts and grid technology in particular.
In the course of the project, a total of twenty selected astronomical pilot
applications were modified for grid use and implemented.
Use cases ranged from
compute-intensive simulations running on clusters, task-farming
jobs to explore large parameter spaces, analyzing programs accessing
astronomical databases, to complex and specific applications as
described below.
These \textit{use cases} also served to define the requirements for
\astrogridd{} components.
When considering a grid implementation for a given application, it
is decisive to weigh how time-consuming and complex
the task will be against the benefits, such as the gain in speed.
Before we describe examples in detail we will state general experiences
for different application classes.
For \emph{large simulations}, e.g. from cosmology
{\it (Mare Nostrum, \cite{URL})}, a grid environment is ideal to reduce typical obstacles.
In a grid infrastructure, a unified and standardised interface
is provided to access the grid-enabled resources of a high performance
computing center. The Grid offers a common way to
execute calculations and manage resulting data.
Also many details, such as efficient data transfer, are handled by
the Grid middleware. The need to learn details about a specific
center is minimised.
{\em Taskfarming jobs}
benefit from the grid infrastructure since there now is a multitude of
resources available to them, as shown for the Geo600-example (\ref{Geo}).
Especially applications with limited requirements can gain immensely from a grid
implementation, where many hundreds of instances can be executed concurrently.
{\em Robotic telescopes} (\ref{Rob}) serve as an example for special scientific
hardware. When combined to a worldwide
network on the basis of grid middleware, this brings important advantages to
coordinated observations.
Typical tasks for such a network are multi-wavelength campaigns or
the continuous monitoring of transient astronomical objects.
A grid based network simplifies
coordination and infrastructure management, since grid devices such as storage
servers and databases are easy to connect. Moreover, global grid schedulers can
automatically coordinate and optimise the observations.
For {\em large data sets} like the Sloan Digital Sky Survey {\it (SDSS, \cite{URL})}
or the Millenium simulation archive \citep{2005Natur.435..629S},
efficient processing poses a huge
problem. The data often have inconsistent formats
and interfaces, and the methods still vary how to define subsets and
correlate them, or even run algorithms against them.
To select data, the scientist needs access to a given database and, in most cases,
also access to additional data files. Corresponding results must be stored in
some accessible device. Since the data volumes are growing large
and the catalogues may be distributed, techniques for data discovery
searching, and transmission (data streaming) are applied,
combined with mechanisms for parallelisation and load-balancing
for the computing processes.
At this point, Grid data processing overcomes the limits of the centralised
data processing approach where so far large volumes of data are
transferred to the application that requests them.
The alternative is to distribute the data processing
within the grid and the use of storage facilities accessible via grid methods.
Whenever possible the application is executed at the location of the data.
Many solutions and design decisions, such as described in the last paragraph,
rely on the
work and standards of the Virtual Observatory.
Hence \astrogridd{} collaborates closely
with the German Astrophysical Virtual Observatory (GAVO), for example
when using GAVO's easy-to-use data access interface to $N$-body simulations.
Via GAVO's participation in the IVOA activities,
\astrogridd{} also profited from the developments
where grid middleware is used to provide VObs services.
We will continue the collaboration between \astrogridd{} and GAVO
in the creation of a virtual data center for astronomy.
To support users in the deployment of their application, we compiled an
\emph{application-to-grid} guide that illustrates the steps
to grid-enable simple applications {\it (App2Grid, \cite{URL})}.
\subsection{Compute-Intensive Generic Applications}
\label{Compute}
Many compute-intensive applications can be subdivided into
multiple small parallel tasks that can run independently,
e.g. on multiple grid resources.
This can usually be achieved by partitioning the physical properties
of the relevant parameter space. In the following, we will discuss three
such compute-intensive grid applications, namely the task-farming
use case Dynamo, NBODY6++ as an example use case with little I/O,
and the gravitational wave analysis tool GEO600.
We have found that a grid implementation for this application type
can be very beneficial and achieved within a manageable timeframe.
\subsubsection{Dynamo}
\label{Dyn}
The \emph{Dynamo} package shows how to use the
advantages of grid computing without complex programming.
Grid implementation is achieved by a shell script,
that is lean, relatively simple to understand and easy to configure.
It provides a grid connection
for the purpose of \emph{task farming} of serial programs, i.e. the
launching of many
instances of scientific software where the input differs for each run.
We call this type of application \emph{atomic}, since as a serial
calculation it requires no further communication until the results are
produced.
The scientific problem for this example is derived from the field of
Magneto-Hydro-Dynamics. Rotation and turbulence in stars,
accretion disks, and galaxies produce a magnetic field by the dynamo
effect. In the
case shown here the numerical simulation solves the induction
equation with a turbulent electromotive force (alpha tensor).
The general parameter dependence as well as the time
development of a given set are studied, with special focus on the
``flip-flop''-phenomenon of star spots \citep[see][]{dynamo_elstner2005}.
For grid task farming with varying input sets the script reads in any
number of input directories, each of which contains different data.
Together with the executable, the job is then submitted iteratively to
grid resources specified in a list and executed there.
Intermediate output can be retrieved on the fly;
a visualisation example is shown in Fig.~\ref{fig:dynamo-output}.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{img/fig1}
\caption{Example output of a Dynamo run, showing real time results of four
different grid resources}
\label{fig:dynamo-output}
\end{figure}
This solution is currently being applied to a similar use case for
GAVO.
Future upgrades of the software would probably include GridSphere
and improve the stage-in process.
The script package can be downloaded from the \astrogridd{} use case
web pages {\it (Dynamo, \cite{URL})}. Users with a demand for atomic, serial jobs
should find this solution easy to implement within \astrogridd{} or
within similar, Globus-based grids.
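A minimal sketch of this task-farming pattern could look as follows. It is written in Python rather than the original shell, the host names are hypothetical, and the \texttt{globusrun-ws} invocation is only indicative of a GT4 WS-GRAM submission: the commands are printed rather than executed, and the actual Dynamo script additionally stages the input directory to the resource.

```python
import itertools
import shlex

def assign_jobs(input_dirs, resources):
    """Pair each input directory with a grid resource, cycling through
    the resource list round-robin, as the Dynamo script iterates."""
    return list(zip(input_dirs, itertools.cycle(resources)))

def build_commands(executable, input_dirs, resources):
    """Build one submission command line per (input, resource) pair.
    The globusrun-ws flags are indicative of a GT4-style submission."""
    cmds = []
    for inp, host in assign_jobs(input_dirs, resources):
        cmds.append(f"globusrun-ws -submit -F {shlex.quote(host)} "
                    f"-c {shlex.quote(executable)} {shlex.quote(inp)}")
    return cmds

if __name__ == "__main__":
    # Dry run with hypothetical resource hosts: show what would be submitted.
    for cmd in build_commands("./dynamo", ["run01", "run02", "run03"],
                              ["grid.aip.de", "grid.aei.mpg.de"]):
        print(cmd)
```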
\subsubsection{NBODY6++ and {$\varphi$}GRAPE}
\label{Nbo}
NBODY6++ and {$\varphi$}GRAPE are two variants of a family of high-order
accurate direct $N$-body simulation codes, which are built upon the
development of a series of earlier versions (1-6) of NBODY codes
\citep{Aarseth1999}. {$\varphi$}GRAPE is the only parallel code of
this type to use the special purpose GRAPE6 hardware \citep{Harfst2007},
which was designed at the University of Tokyo to accelerate gravitational
force computations between particles \citep{Makino2003, Fukushige2005}.
While {$\varphi$}GRAPE
is just a plain direct parallel NBODY code using a 4th order Hermite integrator
with
hierarchical block time steps, NBODY6++ is a parallel version of NBODY6 (with
regularisation
of close encounters, Ahmad-Cohen neighbour scheme, and other features), which is
optimised for parallel general purpose supercomputers \citep{Spurzem1999}.
Examples of applications where gravitational forces between many bodies have to
be calculated are globular clusters,
young forming star clusters, and central dense star clusters in galactic nuclei.
Recent typical research using direct $N$-body simulations includes,
e.g., models of
galactic star clusters with many binaries \citep{Hurley2007} or
massive binary black holes embedded in dense stellar systems leading to
coalescence
and gravitational wave emission \citep{Berczik2005, Berczik2006, Berentzen2009}.
NBODY6++ and {$\varphi$}GRAPE are use cases of \astrogridd{}, which supports
their deployment and execution as jobs on its resources, using single and
parallel hardware as well as parallel hardware with special purpose GRAPE cards.
The ZAH offers the 32-node GRACE cluster {\it (GRACE, \cite{URL})} as a resource
of \astrogridd{}, with reconfigurable specialised hardware reaching a total peak
speed of 4 Teraflop/s \citep{Harfst2007, Spurzem2007, Spurzem2008}.
Another resource with GRAPE hardware integrated in the \astrogridd{} is a cluster
at the Main Astronomical Observatory in Kiev, Ukraine {\it (MAOKIEV, \cite{URL})}, also
an example of collaboration made possible on the basis of a grid Virtual
Organisation.
Submission of an NBODY job starts with a shell script preparing
an XML-based job description
which is then staged and transported
through the \astrogridd{} Globus middleware. Input data, output data and files
go along with the job submission process. Future goals are to allow the
submission of NBODY jobs through a portlet under the \astrogridd{} web portal and
an integration of the \astrogridd{} file management system to allow handling of
large datasets independent of the job staging process, see deployment
instructions and tutorial {\it (NBODY6++, \cite{URL})}.
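The XML-based job description prepared by the shell script might be assembled as in the following sketch. The element names below are a deliberate simplification, not the actual WS--GRAM RSL schema used by the middleware:

```python
import xml.etree.ElementTree as ET

def job_description(executable, args, stdin="input", stdout="output"):
    """Build a simplified XML job description for an NBODY run.
    Element names are illustrative only; the real WS-GRAM schema is
    namespaced and more elaborate."""
    job = ET.Element("job")
    ET.SubElement(job, "executable").text = executable
    for a in args:
        # One <argument> element per command-line argument.
        ET.SubElement(job, "argument").text = a
    ET.SubElement(job, "stdin").text = stdin
    ET.SubElement(job, "stdout").text = stdout
    return ET.tostring(job, encoding="unicode")
```

The resulting document would then be staged and transported through the Globus middleware together with the input files, as described above.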
\subsubsection{GEO600}
\label{Geo}
The GEO600 use case is a task farming
application. It uses the \textit{Einstein@Home} application
for analysing the data of the GEO600 Laser Interferometer near Hannover,
in order to find signals of gravitational
waves.
Einstein@Home is an ideal candidate for a grid
application because of multi-platform support, well tested software
base, simple resource requirements, built-in checkpoint and recovery
methods, adjustable run time, and linear scaling with node number.
Within the \astrogridd{} project we developed the software
for grid deployment, job statistics and the details for
constant production mode runs, such as restart
after a regular job end and cleanup of recoverable errors.
The deployment is triggered by a script which is invoked in a
Web Service Grid Resource Allocation and Management (WS--GRAM)
job to all grid machines on which the GEO600 jobs should run.
As prerequisites on the target resource only
Subversion (to retrieve the GEO600 source code) and a Perl
interpreter are necessary. All other required
software is installed during the deployment.
Depending on the number of currently pending and active tasks, the submission
script will automatically determine when to submit new tasks to a grid
resource. To establish a continuous submission scheme it is
therefore sufficient to invoke the script periodically on the
target.
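The submission decision can be sketched as a simple throttling rule: top up the queue of pending jobs without exceeding an overall cap on the resource. The concrete thresholds are assumptions, not the values used by the GEO600 script:

```python
def tasks_to_submit(pending, active, max_pending=10, max_total=100):
    """Decide how many new tasks to hand to a grid resource, keeping
    the number of queued jobs below max_pending and the total number
    of jobs on the resource below max_total (thresholds are assumed).
    Invoked periodically, this yields a continuous submission scheme."""
    if pending >= max_pending:
        return 0
    headroom = max_total - (pending + active)
    return max(0, min(max_pending - pending, headroom))
```

Calling this function from a periodic cron-style invocation on the target, as the text describes, keeps the resource saturated without flooding its batch queue.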
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{img/fig2.pdf}
\caption{GEO600 CPU time in November 2008, taken from {\it (GEO600-statistics, \cite{URL})};
x-axis: day of month, y-axis: CPU hours consumed (summed over all grid resources used)}
\label{fig:GEO600Stats}
\end{figure}
The intermediate data are stored on the execution hosts since
a central server approach would significantly slow down the job
transfer rates. The submission of the GEO600 jobs can be controlled
from a single workstation, from which the execution hosts are contacted
directly. We plan to use the \astrogridd{} scheduler Gridway for the
distributions of the GEO600 jobs in a future update.
Furthermore it is foreseen to extend the GEO600 use case to
grids based on middlewares other than Globus, such as gLite or the
Uniform Interface to Computing Resources (Unicore). This would
allow a further distribution of the Einstein@Home jobs in the grids
available.
The GEO600 use case has been running in production mode for more than a year,
and it consumes around 100\hspace*{1mm}000 CPU hours a day on \dgrid{} resources
(see Fig.~\ref{fig:GEO600Stats}).
\subsection{Advanced Applications}
\label{Advanced}
Special purpose astrophysical applications and complex tool environments
can also benefit from a grid infrastructure. We have chosen four relatively
different use cases to represent this class of
astrophysical applications and to show how we approach an implementation.
First, {\em Clusterfinder} is a use case involving
both the deployment and performance of a typical compute-intense
data analysis application and the extensive use of distributed data
resources.
{\em Cactus} shows how monitoring and steering
methods for parallel numerical simulations in the grid can be generalised,
and how a web portal can provide user-friendly
assessment of grid jobs and visualisation.
Access to {\em robotic telescopes} as a grid resource represents a unique approach
to a grid with heterogeneous elements.
Finally, the Planck Process Coordinator Workflow Engine
{\em ProC} has been grid enabled to demonstrate the power of grid computing
when applied to the complex workflow of processing the data product of a
satellite mission. It is a useful example for the handling of observational
data which may exceed the local
capabilities or must be organised to suit the demands of a locally distributed
working group.
\subsubsection{Clusterfinder}
\label{Clu}
Clusterfinder is an example of the deployment of a compute-intense
astrophysical application that uses distributed data, and of the performance
gains achievable on the grid.
The scientific purpose of Clusterfinder is to reliably identify clusters
of galaxies. It correlates the signature of X-ray images with that in catalogues
of optical observations in order to study the large scale
structure of the universe.
Scanning at optical wavelengths to look for areas
with an unusually large number of galaxies is not an
unambiguous method to identify large clusters, as the galaxies may
be spread out along the line of sight.
Also the observation of the X-ray emission of the hot gas between
galaxies will result in some false identifications as
there are many other X-ray sources.
In order to combine both sources of information, the theory of point processes is applied
to calculate the statistical likelihood of a cluster at any point in space,
and peaks in the combined likelihood are extracted into a catalogue of
galaxy clusters {\it (Clusterfinder, \cite{URL})}.
Data retrieval and the calculations can easily be parallelised as
the algorithm for any point in the sky depends only on data from
nearby points, making Clusterfinder well-suited for grid implementation.
Input of the Clusterfinder program consists of a cosmology and galaxy cluster
model, together with the grid of sky coordinates and redshifts on which the
likelihood is to be calculated.
Scanning the available data consumes about 20,000 CPU-hours per model. This
entails over two years on a single processor, but only several days when the
resources of \astrogridd{} and \dgrid{} are used.
An exploratory calculation on a smaller area can be executed on the grid in one night.
To implement Clusterfinder for a grid environment, two software tools were developed:
a ``grid-module'' handles the installation and compilation on the resource, and an
``environment'' suite ensures that the necessary files and connections are available
on any resource.
The logistics of performing Clusterfinder calculations on the grid
involves
splitting the calculation into jobs that can run in parallel,
identifying grid hosts with the capacity to accept a job at the given
time, reassembling the individual results into a coherent whole, and
documenting the internal and external conditions under which the
calculation was carried out. A single calculation is then submitted
as a Globus job and calculates a
likelihood map with a given set of parameters.
The results are collected either via the
post-staging capabilities of Globus or by direct grid transfer using
the {\tt globus-url-copy} command.
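The splitting step might look like the following sketch, which tiles a rectangular region of sky into independent jobs. The decomposition scheme is an illustrative assumption, not Clusterfinder's actual one:

```python
def split_sky(ra_range, dec_range, n_ra, n_dec):
    """Split a rectangular sky region into n_ra x n_dec tiles, each of
    which becomes one independent grid job.  This works because the
    likelihood algorithm at any point depends only on nearby data."""
    ra0, ra1 = ra_range
    dec0, dec1 = dec_range
    dra = (ra1 - ra0) / n_ra
    ddec = (dec1 - dec0) / n_dec
    return [
        {"ra": (ra0 + i * dra, ra0 + (i + 1) * dra),
         "dec": (dec0 + j * ddec, dec0 + (j + 1) * ddec)}
        for i in range(n_ra) for j in range(n_dec)
    ]
```

Each tile dictionary would parametrise one Globus job; the per-tile likelihood maps are then reassembled into the coherent whole described above.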
In the case of Clusterfinder, special consideration has been given to the
input data. The SDSS and ROSAT all sky survey
{\it (RASS, \cite{URL})} catalogues are too large to copy the
complete data set to a grid node. Therefore the
makefile controlling the Clusterfinder workflow is set up to request
just the data needed from these catalogues.
A demonstration version of Clusterfinder is available as a portal
application. The user can input coordinates and retrieve the corresponding likelihood
map. It is planned to extend this portal to provide a production version
of Clusterfinder as a grid service, including control over all the input
parameters and even the files for the cosmological model.
\subsubsection{Cactus}
\label{Cac}
The {\em Cactus Computational ToolKit (CCTK)} {\it (Cactus, \cite{URL})} is an open
source, general purpose software framework designed to solve large-scale
systems of partial differential equations on supercomputers using finite
differencing techniques.
In the Astrophysics science community Cactus is used to numerically
simulate extremely massive bodies, such as neutron stars and black holes, and
analyse the gravitational wave signal patterns emitted by these objects
as predicted by Einstein's theory of General Relativity.
In \astrogridd{} we have developed application-specific techniques for Cactus
which enable scientists to manage their simulations more efficiently and in a
more collaborative context.
Many of these methods make use
of standard grid technology internally {\it (Deliv. 6.6, \cite{URL})}.
As an example of online application monitoring and steering, users can connect
to a running Cactus simulation just like to any standard secure
Hypertext Transfer Protocol (HTTP) web service, with a web browser of their
choice.
User authentication and authorisation is based on X.509 grid certificates
(see Section \ref{VOM}).
When logged in, users can query an up-to-date status of the
simulation (e.g. the physical simulation time or
{\tt stdout/stderr} log output).
Built-in online visualisation methods are available
to analyse intermediate simulation data graphically via dynamic generation
of 1D line or 2D surface plots, thus allowing users to evaluate the quality of
the simulation while the application is still running.
Once authorised, they can also steer the simulation by interactively changing
parameters, triggering a checkpoint to be written, or by terminating the job gracefully.
Each Cactus simulation submitted to some supercomputer or grid resource
can also announce itself at startup to the \astrogridd{} information service,
by sending an RDF document with metadata uniquely describing the simulation.
The information service is then able to keep a history of all simulations
submitted by Cactus users. To access and search that simulation database we
provide
a Cactus portlet, based on {\em GridSphere} (see Section \ref{Inter}) as a
standardised web interface.
After logging into the portal, users can query the list of Cactus runs
and filter it by owner, execution host, specific parameter settings
etc. Queries are implemented as Cactus-specific GridSphere portlets
{\it (Deliv. 7.5, \cite{URL})},
allowing the user to easily navigate through the list of simulations and browse
individual query results.
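The announcement step can be sketched as the generation of a small RDF/XML document describing the simulation. The property vocabulary (`owner`, `host`) is hypothetical, not \astrogridd{}'s actual metadata schema:

```python
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
SIM = "http://www.example.org/cactus-sim#"  # hypothetical vocabulary

def simulation_metadata(sim_id, owner, host):
    """Assemble a minimal RDF/XML document announcing a simulation to
    an information service.  Property names are illustrative only."""
    ET.register_namespace("rdf", RDF)
    root = ET.Element(f"{{{RDF}}}RDF")
    desc = ET.SubElement(root, f"{{{RDF}}}Description",
                         {f"{{{RDF}}}about": sim_id})
    ET.SubElement(desc, f"{{{SIM}}}owner").text = owner
    ET.SubElement(desc, f"{{{SIM}}}host").text = host
    return ET.tostring(root, encoding="unicode")
```

A document of this shape, sent at startup, is what lets the information service keep a searchable history of all submitted simulations.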
Also available in the portal are the results of nightly Cactus integration
tests, which are performed automatically on various machines in the grid,
in order to verify the correctness of the latest development version of the
code.
\subsubsection{Robotic Telescopes}
\label{Rob}
In recent years a growing number of ground-based robotic telescopes have been
commissioned in
astronomy, due to their increased technical reliability. Robotic astronomy
allows observations from sites which may be astronomically favourable,
but are otherwise remote or even hostile for human
operators, e.g. Antarctica.
With more robotic telescopes becoming operational,
there has been increasing interest in interconnecting them.
Such a telescope network can accomplish new types of observations.
Examples are an uninterrupted observational campaign
over many hours independent of day time and weather as required in
astro-seismology, and rapid multi-wavelength observations in case of
transient events.
\astrogridd{} contributes to this development with
the {\it (OpenTel, \cite{URL})} software package.
OpenTel achieves the integration of robotic telescopes into the \astrogridd{}
infrastructure and implements a telescope network based on grid middleware.
Each telescope thus acts as an individual grid resource with its own
grid certificate. One immediate
advantage provided by grid technology is the direct connection to compute and
storage resources for data analysis and archiving. Additionally, grid user and
virtual organisation management provides a good solution for the central
management of access rights.
The metadata management relies on Stellaris
(cf. section \ref{Inf}) and the {\it (Usage Record format, \cite{URL})} of the Global
Grid Forum, transformed into RDF. The metadata is retrieved from Stellaris
using Simple Protocol and RDF Query Language (SPARQL) queries. The monitoring of
observations is similar to the
observation of jobs described below in section \ref{Mon}.
The \textit{Robotic Telescope Markup
Language} (RTML) \citep{RTML} of the \textit{Heterogeneous Telescope Network}
(HTN) \citep{HTN} serves as the protocol for observation requests.
The \textit{OpenTel Tools} package provides programs for the tasks of observation
(job) submission, cancellation, and status queries. The programs are based on
commands of the Globus Toolkit and are executed from the command line.
Further details are described in {\it (Deliv 5.3, \cite{URL})} and in the package documentation.
Several user interfaces have been
developed to simplify operation management:
the \textit{OpenTel Tools}, the \textit{Telescope Map}, the
\textit{Telescope Timeline}, a broker, and a scheduler.
The \textit{Telescope Map} is an interactive user interface shown in Fig.
\ref{fig:TelescopeMap}. It is an extension of the \astrogridd{} \textit{Resource
Map} (section \ref{Mon}) for displaying geographic locations of telescopes
and their properties such as available filters. Also displayed
are day and night regions as well as weather information.
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{img/fig3.pdf}
\caption{The Telescope Map is an interactive user interface for the selection
of telescopes. Daytime, weather conditions as well as the geographic
location and properties of the telescopes are displayed.}
\label{fig:TelescopeMap}
\end{figure}
The \textit{Telescope Timeline} is another interactive user interface useful for
monitoring {\it (Deliv. 2.7, \cite{URL})}. It is an extension of the \astrogridd{}
\textit{Timeline} (section \ref{Mon}) and displays information about executed observations with an
appearance similar to Fig. \ref{fig:JobTimeline}.
The \textit{broker} performs an automatic selection of telescopes based on the
requirements of an observation {\it (Deliv. 5.5, \cite{URL})}. Examples of
selection criteria are filters and geographic coordinates, but also dynamic
data such as the current weather conditions.
The \textit{network scheduler} generates observation schedules of the desired
duration {\it (Deliv. 5.8, \cite{URL})}.
Whenever necessary, an observation is handed over to be continued by another
telescope of the network. An example
for a 24\,h observation of the star Gliese 586A (Gl586A) in the small network of Fig.
\ref{fig:TelescopeMap} is shown in Fig. \ref{fig:Schedule}.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{img/fig4.pdf}
\caption{Altitudes versus observation time for the network schedule of a simulated
24\,h observation of Gl586A. The plot is produced by the
OpenTel scheduler. The intersections of the altitude curves provide the
time intervals for the observations by the different telescopes.
Schedules are optimised for object altitude.}
\label{fig:Schedule}
\end{figure}
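A minimal version of such altitude-optimised scheduling, assuming the per-telescope altitude curves are already sampled at common time steps; the handover rule below is a simplification of the OpenTel scheduler:

```python
def schedule(altitudes):
    """Given altitude curves sampled at common time steps
    (dict: telescope name -> list of altitudes in degrees), assign each
    step to the telescope that sees the object highest.  Steps where no
    telescope is above the horizon stay unassigned (None).  Handovers
    occur where the altitude curves intersect."""
    names = list(altitudes)
    n_steps = len(next(iter(altitudes.values())))
    plan = []
    for t in range(n_steps):
        best = max(names, key=lambda name: altitudes[name][t])
        plan.append(best if altitudes[best][t] > 0 else None)
    return plan
```

Scanning the resulting plan for changes of telescope yields exactly the handover intervals visible in the schedule plot.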
The OpenTel software has been tested with the AIP's robotic telescope STELLA-I
\citep{STELLA} and simulated networks. It is available at {\it (OpenTel, \cite{URL})}.
\subsubsection{The ProC workflow engine for scientific grid-computing}
\label{Pro}
The Process Coordinator (ProC) is a scientific workflow engine. It was originally
developed as an integral component of the software infrastructure for the Planck
Surveyor satellite mission of the European Space Agency
\citep{2000SPIE.4011....2B}.
Currently, two sets of scientific programs are being executed using the ProC,
each forming a problem-domain
specific toolbox. One is the simulation and data analysis package required for
the Planck mission and cosmic
microwave background (CMB) research \citep{2006A&A...445..373R}. The other is a
post-processing package
for GADGET-simulations of cosmic structure formation
\citep{2005MNRAS.364.1105S}, shown in Fig.
\ref{fig:applications2}. Both cosmological
research areas are expected to benefit strongly from the parallel computing
resources now being accessible
for parameter space sampling problems via the grid-enabled ProC.
\begin{figure}[!ht]
\centering
\includegraphics[width=\linewidth]{img/fig5.pdf}
\caption{ProC supported simulation: Galaxy collision calculated with the
GADGET-simulation package steered by the ProC sampling control element.}
\label{fig:applications2}
\end{figure}
The ProC software package consists of three components: a graphical workflow
editor,
a graphical user interface for workflow execution, and a workflow engine,
equipped with
an application programming interface (API) and a versatile command line interface
for expert users. The ProC is implemented platform-independently in Java and
uses the extensible markup language (XML).
One of the advantages of using the ProC, compared to simple scripting, is
its ability to automatically
recognise opportunities for the reuse of previously generated computational
results and for parallel execution
of computational units. This latter capability can exploit multiple cores on a
single processor, multiple
processors cooperating in a local cluster, or the hundreds of compute elements
offered by a dispersed grid.
With the help of the ProC Pipeline Editor the user is able to compose and modify
scientific workflows consisting of programs, data flows, and control elements of the ProC library.
Strong data-typing assures that only valid connections between modules can be
made.
The ProC's feature set includes typical control elements (e.g. loops),
a fork/join mechanism, and specialised
``sampling'' elements for the investigation
of high-dimensional parameter spaces via various algorithms. These elements
permit user-controlled parallel
execution of the same program segment on different data.
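Such a sampling element can be sketched as a fan-out of the same workflow segment over many parameter sets. Here a local thread pool stands in for the grid compute elements, which is an illustrative simplification of the ProC mechanism:

```python
from concurrent.futures import ThreadPoolExecutor

def sweep(run, parameter_sets, workers=4):
    """Execute the same workflow segment ('run') once per parameter set
    in parallel, as a sampling element would fan out jobs.  A thread
    pool stands in for grid compute elements; results come back in the
    order of the input parameter sets."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run, parameter_sets))
```

The same pattern scales from multiple cores on one processor to the hundreds of compute elements of a dispersed grid, which is the parallelism the text refers to.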
Within \astrogridd{} the ProC was grid-enabled with the help of the Grid Application
Toolkit (GAT, cf. subsection \ref{subsecGAT}).
In sample runs we used 200 compute elements simultaneously on a remote grid node.
The need to deploy non-portable scientific code to a large number of grid nodes
entailed the development of a comprehensive package of environment modules.
Upon request the ProC package is available free of charge for scientific
computing purposes.
\section{The \astrogridd{} Services}
\label{Services}
In this section we describe the architecture of our
grid implementation and explain the role of several of its components and
services.
We decided to base the astrophysical community grid
on a recent version of the Globus Toolkit (GT4), as the
most widespread and advanced middleware solution.
However, grid middleware capabilities are only
generic functions and need enhancements to be of actual use.
In more general terms the middleware serves as an abstraction layer or
translation interface. It connects the \emph{resource} (the individual hardware
and its operating system) with the \emph{grid
resource API} (application programming interface) and with a set of uniform commands
and applications, called the \emph{middleware API}. The last interface is
the one presented to the grid users and grid applications.
An operational grid thus in some ways resembles a
nonlocal operating system with enhanced capabilities, such as distributed storage
or access to connected clusters and their batch systems.
In a second step we then modified or added architecture elements as necessary
for astronomical applications.
The result is shown in Fig.~\ref{fig:architecture}.
At the resource level we find compute elements (CE), storage elements (SE), and instruments.
While compute and storage elements are common to all grids and can be properly managed by the basic
middleware,
the inclusion of instruments (e.g. robotic telescopes) is one of the additions made by
\astrogridd{}. Another addition is \astrogridd{}'s central information service Stellaris (\ref{Inf})
which stores metadata of components, services and data (yellow block in Fig.~\ref{fig:architecture}).
We further extended the middleware \emph{capabilities} for job and file management (green block in Fig.~\ref{fig:architecture})
by adding data stream management (\ref{Data}). Other components were enhanced: Monitoring
and steering were attached to the Stellaris information service (blue block in Fig.~\ref{fig:architecture}).
With our Virtual Organisation management we achieved user and group management
based on the GT4 security layer (red block in Fig.~\ref{fig:architecture}), to implement a grid that can easily be used by
collaborations to share access rights and data.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{img/fig6.pdf}
\caption{Sketch of the \astrogridd{} architecture, showing the layers and services
that are involved in a grid application, and the paths of interaction.}
\label{fig:architecture}
\end{figure}
Fig.~\ref{fig:architecture} only illustrates the architecture components.
Not shown is the underlying, interconnecting network and the security layer.
\subsection{Working with \astrogridd{} Resources}
\label{Working}
Each system with built-in security requires users and even services (hosts,
databases etc.) to authenticate themselves.
The grid uses X.509 certificates, i.e. public/private key encryption,\label{X509} for this purpose.
At least one Grid Certification Authority per country
provides such certificates for users, resources and services. With this certificate it is possible
to log onto other grid resources from any grid enabled workstation.
The following subsections
describe some details of grid and resource work.
The subsection about \emph{VOrg Management} shows how the collaborative concept of
virtual organisations and the security layer are tied together. Then, a brief overview
of the procedures for
integrating a resource into the grid is provided.
The last paragraphs introduce different interfaces
provided by \astrogridd{}.
\subsubsection{VO Management}
\label{VOM}
Virtual Organisations (VOrgs, often somewhat confusingly called \emph{VO}s)
are a central element of any grid. In some
aspects they are the grid representation of the more familiar ``group'' concept
of an operating system. A VOrg is formed by any number of users with a
common intention to share resources, data and access rights in a grid.
In \astrogridd{} any user is authenticated
by an individual X.509 certificate. However, the certificate itself does not allow
access to resources of \astrogridd{} or \dgrid{},
since that right is restricted to members of our
main VOrg ``\astrogridd{}''. Thus each user must also register for membership in
this VOrg.
To improve the registration process and administer the members, \astrogridd{}
uses a service written
by Fermilab, the \emph{Virtual Organisation Membership Registration Service},
{\it (VOMRS, \cite{URL})}. The
registration service itself is only accessible with the user's certificate
installed in the web browser. During the registration process some
of the user's work details are collected, such as name and institution.
The user also has to choose which of the available VOrgs he wants to
belong to. Upon verification by the user's institute,
the VOrg Administrator will grant the membership status.
In addition to the main VOrg,
four smaller VOrgs currently exist in \astrogridd{}. These
Sub-Organisations are used by specific institutes for internal grids,
for our robotic telescope resources and our collaboration with GAVO.
To connect the VOrg member database of \astrogridd{} with each resource, we
developed a separate
service. At each resource this service regularly queries the central VOrg
database for changes, and the resulting user list is applied to the resource's
local access management. When an accepted VOrg member then logs on to the resource and
is properly authenticated by the Globus Toolkit, he is mapped to an individual,
local UNIX user account.
Our extension to the VOMRS offers a number of options for local resource
administrators, e.g.
to import only specific VOrgs or white- or blacklist single users. The system
also supports OGSA-DAI (see Section \ref{Data}) and Unicore user formats and cluster
options. Individuals who change their
``distinguished name'' string, e.g. due to a change of institution, can be
mapped back to their former grid account. Even if there is in general no guarantee
for user data to persist in the grid, it is often
useful to re-gain an existing environment of local settings and libraries.
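The mapping step can be sketched as the generation of grid-mapfile entries from the queried member list; the local account naming scheme (`agd001`, ...) is an assumption, not AstroGrid-D's actual one:

```python
def grid_mapfile(members, blacklist=()):
    """Turn a VOrg member list (sequence of X.509 distinguished names)
    into grid-mapfile lines mapping each DN to a local UNIX account,
    skipping blacklisted users.  The account naming scheme is assumed."""
    lines = []
    count = 0
    for dn in members:
        if dn in blacklist:
            continue  # local administrators may blacklist single users
        count += 1
        lines.append(f'"{dn}" agd{count:03d}')
    return lines
```

Regenerating this file on each periodic query of the central VOrg database keeps the resource's local access management in sync, as described above.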
\astrogridd{} established the VOMRS-based solution in 2006. Since then it has
been stable in operation, managing
the roughly 100 users of \astrogridd{}.
The successful concept was then also adopted by the German \dgrid{}
where it became the standard form of user management.
\subsubsection{Resource integration}
\label{Res}
\astrogridd{} currently comprises about 20 grid resources provided
by its member institutions: computer clusters, workstations,
data storage servers, as well as a telescope server.
German astronomers apply for inclusion of a computer resource into
\astrogridd{} on an individual basis; all German academic institutions
are eligible by default. Resources of a Ukrainian institution
have also been included for collaboration.
Ideally, to bring a resource onto the grid takes about fifteen hours for
an experienced administrator. In practice more time may be required,
due to complications in networking,
retrieving certificates, and operating system peculiarities.
Why would an institute invest that work and put their valuable computer resources on
the grid? First, sharing resources between institutions involves considerable
overhead in any case: accounts have to be set up, ports opened for
special communications, etc. These problems are solved by bringing
resources onto the grid and using the tools and standard solutions it provides.
Second, on the grid, a resource
has a much wider group of users and can be used to full potential.
All steps required to bring hosts on-line as \astrogridd{} resources are
described at {\it (AGD-Globus, \cite{URL})}.
\subsubsection{Monitoring}
\label{Mon}
In a distributed, diverse grid environment, the monitoring of its parts and processes
is of central importance for users and administrators.
Monitoring can in principle be divided into two categories: resource
and job monitoring.
Resource monitoring for compute and storage resources is realised in
\astrogridd{} through the \textit{Monitoring and Discovery System} (MDS) of the
Globus Toolkit. MDS is a suite of web services to monitor and discover
resources and services on Grids. The gathered information is displayed on
the \astrogridd{} resources overview web page {\it (MDS, \cite{URL})}.
An independent monitoring mechanism has been developed for
telescope resources, which handles telescope-specific information
such as weather.
As a complementary interface to the resource list view, \astrogridd{}
has developed a \textit{resource map} as an advanced user interface for displaying collected
resource information topographically. The \textit{Telescope Map}
in Fig. \ref{fig:TelescopeMap}, discussed in Section \ref{Rob},
is a specialisation for telescopes. Both are based on the Google maps API.
When a resource is selected, additional information about its load and usage
is displayed. The information is obtained via SPARQL queries from {\it (Stellaris, \cite{URL})},
after it has been extracted from MDS, converted into RDF and uploaded to
Stellaris.
The Resource Map can be accessed at {\it (AGResourceMap, \cite{URL})}. The
software can be obtained from the \astrogridd{} web page.
Job monitoring is based on the Globus audit logging. Audit logging writes job
status information into a database. This information is translated into RDF/XML
and transferred to Stellaris.
The \astrogridd{} \textit{timeline} was developed as a plain user interface to job
information. It is based on the {\it (Simile Timeline,
\cite{URL})}. Jobs are represented by horizontal lines of length
proportional to the job duration. A colour code represents the status. For each
job, additional information such as user ID and name of executable can be
displayed. The search for information can be limited with keywords and in the public area
the details are strongly reduced for privacy reasons.
\begin{figure}[th]
\centering
\includegraphics[width=\linewidth]{img/fig7.pdf}
\caption{The Timeline is an interactive user interface for displaying status,
progress and general information of grid jobs. The top area displays hours,
each line showing the duration of a job and an identifier. A mouse click
opens an information window (inset), displaying in-depth information about the job.
In the lower areas the display scope is days and months.}
\label{fig:JobTimeline}
\end{figure}
Further details about monitoring can be found in {\it (Deliv. 5.9, \cite{URL})}.
\subsubsection{User and Developer Interfaces}
\label{Inter}
In \astrogridd{} there are four different ways available for actual grid use.
The middleware itself provides a commandline interface as well as an API for software.
The Grid Application Toolkit (GAT) provides an alternative API which hides the underlying grid
middleware and makes its use
transparent. Finally, GridSphere enables developers to quickly
develop portlets for grid applications. Neither GAT nor GridSphere
requires the installation of a grid middleware on the
submission host, and both can also be used on Windows machines.\\
{\it The Globus Commandline and API} \nopagebreak
The \astrogridd{} resources are grid enabled by Globus middleware. They can thus be
accessed via the command line interface of Globus.
This
interface allows data transfers and submission of jobs to the
grid and provides many more operations.
For applications, Globus offers a rich API for each component of the
middleware.
The {\it Grid Application Toolkit} \nopagebreak
\label{subsecGAT}
{\it (GAT, \cite{URL})} is an API which offers grid access irrespective of the
middleware which connects the resource to the grid. The GAT Engine and
preliminary adaptors have been developed as part of the EU-funded {\it (Gridlab, \cite{URL})} project.
Within the \astrogridd{} project the Java implementation, JavaGAT, is used.
\astrogridd{} added adaptors for SGE, PBS, WS--GRAM
and gLite, and recently a UNICORE adaptor
(UNICORE 6) was contributed by the DGI--2 project. JavaGAT currently
features adaptors for all grid middlewares used in \dgrid{}.
JavaGAT uses the security layers of the middleware.
\begin{figure}[ht]
\centering
\includegraphics[width=.5\textwidth]{img/fig8.pdf}
\caption{JavaGAT Architecture}
\label{abb:GATArchitecture}
\end{figure}
The availability of
``local adaptors'' enables the programmer to develop the application
logic without a connection to the grid. The developed application then has
access to all grid middlewares for which JavaGAT adaptors are available.\\
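The adaptor mechanism can be illustrated with a minimal sketch (the class and method names below are invented for illustration and do not reflect the actual JavaGAT API):

```python
# Minimal sketch of the adaptor pattern behind (Java)GAT: the application
# codes against one interface while per-middleware adaptors translate the
# calls. All class and method names here are invented for illustration.

class JobAdaptor:
    """Common interface the application programs against."""
    def submit(self, executable):
        raise NotImplementedError

class LocalAdaptor(JobAdaptor):
    """A 'local adaptor': lets developers test logic without a grid."""
    def submit(self, executable):
        return f"local run of {executable}"

class GramAdaptor(JobAdaptor):
    """Would wrap a WS-GRAM submission in a real implementation."""
    def submit(self, executable):
        return f"WS-GRAM submission of {executable}"

def run_job(adaptor, executable):
    # The application logic is identical whichever middleware is behind it.
    return adaptor.submit(executable)
```

Swapping \sourcecode{LocalAdaptor} for \sourcecode{GramAdaptor} changes the target middleware without touching the application logic, which is the property described above.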
{\it GridSphere}
Like GAT, {\it (GridSphere, \cite{URL})} was developed
as part of Gridlab in 2002. The main
goal of the portal-related work was to build a reliable, structured web interface to
support the European and global grid community.
A portal application can store the specifics of a grid job and run it from any
standard Web browser.
GridSphere is JSR 168 compliant,
and thus portlets running in
GridSphere can also run in other portal frameworks.
GridSphere comes with a variety of core portlets providing all the basic
functionality, such as profile personalisation,
layout customisation and administrative use.
The GridSphere \astrogridd{} portal offers a portlet for
Clusterfinder, and a Cactus Portlet is available at AEI.
\subsection{Components of the \astrogridd{} Architecture}
\label{Components}
The middleware of the \astrogridd{} builds on existing grid tools to
integrate diverse types of resources.
To accommodate the specific requirements of the \astrogridd{} community,
existing components were extended or substituted by newly developed ones.
However, to let other communities benefit from these developments,
we aim at generic solutions wherever possible.
The following subsections describe
(\textit{1.})~the information service Stellaris,
for central storage of all metadata and status information,
(\textit{2.})~enhanced data storage capabilities of the grid,
(\textit{3.})~grid access to data sources, efficient data transport
and data streams,
and
(\textit{4.})~options for job submission.
\subsubsection{Information Service}
\label{Inf}
\label{Stellaris}
The goal of the \astrogridd{} information service,
Stellaris~\citep{ges07_stellaris}, is to provide a uniform framework
for storage and querying of grid related information and
metadata. Typical usage scenarios result in questions such as:
\textit{Was data-set X already analysed with program Y and parameter
set Z? Where is the output data from August 12th last year? Why did
my last grid job fail? Who created the data producing the graph in
the latest issue of Science, and where can I find it?}
Within \astrogridd{}, we distinguish between four different types of
metadata: (1)~\textit{resource metadata} describes properties of the
shared resources (e.g., for a telescope: aperture, filters, CCD, capabilities), (2)~\textit{activity state} reflects the current and
logged state of activities in the grid, such as the location and
characteristics of jobs and file transfers (e.g., user, name of telescope, its location, start and end of observation, priority), (3)~\textit{application
metadata} describes the program and its input parameters (e.g., RA/Dec of the target, requested filters, etc.), and
(4)~\textit{scientific metadata}, which includes information about the
data-sets used (science project, type of data (image, table), provenance, references, etc.). In order to answer
the previously stated example questions we will often need to query
metadata of more than one of the information types. Therefore, the
integration of metadata from many different sources is a strong
requirement on the information service. We solve this problem by using
the common metadata model {\it (RDF, \cite{URL})} for all the
information types.
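The gain from a single triple-based model can be sketched with a toy pure-Python triple store and pattern query; the triples and the query helper below are illustrative stand-ins for RDF and SPARQL, not Stellaris code:

```python
# Toy triple store: one subject-predicate-object model can hold all four
# metadata types side by side, which is the point of using RDF throughout.
# The example triples and the query helper are illustrative only.
triples = [
    ("job:42",    "hasStatus",   "FAILED"),        # activity state
    ("job:42",    "ranProgram",  "clusterfinder"), # application metadata
    ("dataset:X", "analysedBy",  "clusterfinder"), # scientific metadata
    ("scope:1",   "hasAperture", "0.8m"),          # resource metadata
]

def query(pattern):
    """Match an (s, p, o) pattern, with None as a wildcard -- loosely
    analogous to a single SPARQL triple pattern."""
    return [t for t in triples
            if all(want is None or have == want
                   for have, want in zip(t, pattern))]
```

For instance, \sourcecode{query((None, "analysedBy", "clusterfinder"))} answers the question of which data sets were already analysed with that program.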
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/fig9.pdf}
\caption{The \astrogridd{} information service framework}
\label{fig:stellaris-architecture}
\end{figure}
The information system architecture in \astrogridd{} (see
Fig.~\ref{fig:stellaris-architecture}) consists of three main
components: Stellaris, the information service; data producers
(applications, grid resources, and services); and data consumers
(applications, services, and users). The Stellaris service itself is
designed around two World Wide Web Consortium (W3C) standards:
RDF for metadata
representation and {\it (SPARQL, \cite{URL})} which is used for
querying the information service. Thereby, we can benefit from
existing tools for, e.g., data integration and visualisation developed
by the web-community at large. The {\it (Stellaris software, \cite{URL})}
was developed within the \astrogridd{} project and is made
available under the Apache Open Source license.
\subsubsection{File Management}
\label{Fil}
\definecolor{sourcecomment}{gray}{0.35}
\newcommand{\sourcecomment}[1]{\textcolor{sourcecomment}{\sourcecode{#1}}}
\newcommand{\sourcecode}[1]{\texttt{#1}}
\newenvironment{sourcecodeENV}{\noindent\begin{quote}\small\ttfamily}{\end{quote}}%
\newcommand{\sourcecode{glo\-bus-url-co\-py}}{\sourcecode{glo\-bus-url-co\-py}}
\newcommand{\sourcecode{glo\-bus-rls-cli}}{\sourcecode{glo\-bus-rls-cli}}
The \astrogridd{} Data Management (ADM) has been developed as a tool for
distributed file management.
It offers access to the user's files through the concept of a virtual file
system via the command line, a web interface,
or a programming interface.
Globus contains a software tool, the
Globus Replica Location Service (RLS), which allows the user to manage file
replicas across the grid resources. We found the latter somewhat difficult
to use
with job submission through the GridWay service to an execution
host whose selection is not directly controlled by the user.
Our ADM system delivers
proper software tools to identify files and tag them with metadata independent
of
the original job execution environment. This is especially useful if the user
needs to deploy data
files required for job start and to
access files after a job execution for post-processing.
ADM uses a relational database to store a unique file descriptor, i.e.\ a logical
file iden\-ti\-fier,
for each file,
plus metadata for each file or directory, e.g.\ the owner and a timestamp logging
when
the entry was registered with the filesystem.
While file ownership and creation time\-stamp are mandatory, with ADM
transparently taking care of their
maintenance, the metadata of individual files can be extended with custom
(user-defined) properties.
ADM provides the command line client \sourcecode{adm}, including a C library,
which offers easy-to-use access
to the stored
files. Furthermore, ADM ships with a web interface for browsing the
virtual filesystem graphically.
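The scheme described above maps naturally onto a small relational schema; the following sketch uses SQLite with table and column names of our own choosing, not ADM's actual schema:

```python
import sqlite3
import time

# Minimal sketch of an ADM-like catalogue (invented schema): every file
# gets a logical identifier plus mandatory metadata (owner, registration
# timestamp) and optional custom, user-defined properties.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE files (
    logical_id TEXT PRIMARY KEY,   -- the logical file identifier
    owner      TEXT NOT NULL,      -- mandatory metadata
    registered REAL NOT NULL       -- mandatory registration timestamp
)""")
con.execute("CREATE TABLE properties (logical_id TEXT, key TEXT, value TEXT)")

def register(logical_id, owner, **properties):
    """Register a file under its logical identifier; extra keyword
    arguments become custom (user-defined) properties."""
    con.execute("INSERT INTO files VALUES (?, ?, ?)",
                (logical_id, owner, time.time()))
    con.executemany("INSERT INTO properties VALUES (?, ?, ?)",
                    [(logical_id, k, v) for k, v in properties.items()])

register("lfn:/nbody/run7/output.dat", "alice", project="NBODY6++")
```

Queries by logical identifier then resolve owner, timestamp and any custom properties regardless of where the physical file ended up after job execution.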
\subsubsection{Data base access and data stream management}
\label{Data}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/fig10.pdf}
\caption{The \astrogridd{} database management. Users can access the
data sets both interactively and with batch jobs. The actual nodes
on which the data sets reside are kept transparent to the user.}
\label{fig:databases}
\end{figure}
Access to \emph{databases} storing observational and simulation data has
become an important part of daily astronomical work. Depending on the
various application requirements and data characteristics, databases
store the actual raw measurements, results, and/or the corresponding metadata.
\astrogridd{} considers it a major task to develop database
technology further for building scalable data management
infrastructures. We are motivated by a growing number of users and especially the
expected data rates of forthcoming projects, such as the
Panoramic Survey Telescope and Rapid Response System (Pan-STARRS)
or LOFAR.
Due to the distributed nature of data sets and research groups, using a
grid-based approach is a natural choice for the astrophysics community.
The Open Grid Services Architecture---Data Access and
Integration {\it (OGSA-DAI, \cite{URL})} services enable the
integration of databases in grid environments and they are part of the
Globus grid middleware. Therefore we chose OGSA-DAI to provide database
data on resources within the \astrogridd{} and \dgrid{} infrastructure.
Fig.~\ref{fig:databases} gives an overview of the \astrogridd{} database
management.
In order to reduce the network traffic induced by distributed queries on
various data sources and to achieve load balancing within the community
grid, various load balancing techniques have been tested and
evaluated~\citep{hisbase-vldb2007,community-training,hisbase-fgcs2008,workload-aware-hisbase}.
Especially data-centric applications, such as the Clusterfinder use case
(Section~\ref{Clu}), benefit from the increased throughput introduced by
load-balancing techniques for their database accesses (in the case of
Clusterfinder to the SDSS and ROSAT databases). The database relations have
a fixed schema, which is also available via the metadata of the database
system used. Data access and manipulation is performed via the
standardised query language SQL. In the future we also plan to support the
Virtual Observatory Query Language {\it (VOQL, formerly ADQL, \cite{URL})},
a specialised query language for astronomical data based on SQL and an important
effort by the International Virtual Observatory Alliance (IVOA).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/fig11.pdf}
\caption{The \astrogridd{} data stream management. Users can publish
and subscribe to data streams and share their stream-enabled
operators using operator repositories. Internally, the data stream
services provide optimisation capabilities. An example of an operator would be a function
performing an RA/Dec transformation into various coordinate systems. Another example operator
would be a Java program listening for specific data from the data stream of an instrument source.}
\label{fig:datastreams}
\end{figure}
Another prevalent processing model for e-Science data is that of \emph{data
streams}. Sensor sources (e.g., telescopes, satellites) continuously
generate such data output. Due to the fundamental importance of these
sensors within astrophysics, we investigate efficient data stream
processing models within \astrogridd{}. An important initial processing
step of data streams is data filtering. Existing middleware structures
do not offer such a processing model (yet).
XML and XML-based protocols are the
de-facto communication standard for web services as well as for many astronomical
IVOA protocols. Therefore, \astrogridd{} uses XML-based processing
of data streams that are published by data sources and scientific
applications can subscribe to. In order to increase the reusability of
data streams for multiple subscriptions, the query processing is
performed by installing individual processing steps (\emph{operators})
within the grid network.
Running a data stream management within astrophysics requires means to
define and commonly share scientific operators based on already
implemented functionality. A reusable operator
is, e.g., a chi-squared filter with configurable thresholds for quality assurance.
Mobile operator repositories enable researchers to
provide these operators via their own institution (e.g., personal web
page) and to describe the operators with appropriate metadata in the
information service (Section~\ref{Inf}). This makes it considerably easier for
collaborating researchers to discover and reuse such existing operators.
Signing the operators with the author's grid certificate allows users to
verify the trustworthiness of the operator's source.
Techniques such as early filtering and early aggregation lead to good
results, especially in the context of multi-subscription
optimisation~\citep{KuntschkeSKR-VLDB05DEMO,KuntschkeK-LNCS2006,KuntschkeK-CIKM2006}.
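A minimal generator-based sketch of such operators (with invented names and sample values; the actual \astrogridd{} operators process XML streams) shows how early filtering composes in a pipeline:

```python
# Generator-based sketch of shareable stream operators (invented names and
# sample values): early filtering discards poor-quality samples close to
# the source, so downstream subscribers receive less data.

def chi2_filter(stream, threshold):
    """Reusable operator: keep only samples whose chi-squared value
    passes a configurable quality threshold."""
    for sample in stream:
        if sample["chi2"] <= threshold:
            yield sample

def scale_flux(stream, factor):
    """A second toy operator, showing that operators compose in pipelines."""
    for sample in stream:
        yield {**sample, "flux": sample["flux"] * factor}

source = iter([{"flux": 1.0, "chi2": 0.8},
               {"flux": 2.0, "chi2": 5.0},   # discarded by early filtering
               {"flux": 3.0, "chi2": 1.1}])
pipeline = scale_flux(chi2_filter(source, threshold=2.0), factor=10.0)
results = list(pipeline)
```

Because the operators are lazy generators, the chain processes samples one at a time and would work equally on an unbounded sensor stream.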
The \astrogridd{} data stream management (see
Fig.~\ref{fig:datastreams}) is available on all \astrogridd{} resources.
By developing data stream processing techniques for grid environments, we
moreover support the conversion from persistent data sets to streams. A
combined, integrated processing of persistent and streaming data, as
required by applications such as SED classification, is
possible and results in better performance~\citep{starglobe}.
\subsubsection{Job Management}
\label{Job}
\label{Gridway}
\astrogridd{} has implemented job management through the independently developed
{\it (GridWay, \cite{URL})} Metascheduler
on top of the standard Globus middleware layer. As a metascheduler, GridWay
enables large-scale, reliable and efficient sharing of computing resources
managed by different Local Resource Management (LRM) systems, such as the
Portable Batch System (PBS), the Sun Grid Engine (SGE), or
LSF,
within a single organisation (enterprise grid) or scattered across several
administrative
domains. In the latter case GridWay can also interact with grid middleware
other than Globus, such as UNICORE or gLite.
GridWay is meanwhile fully
integrated into the Globus open source project, adheres to the Globus philosophy and
guidelines for collaborative development, and thus welcomes code and support
contributions.
GridWay has its own set of command-line tools, such as \sourcecode{gwsubmit}, \sourcecode{gwstat},
or \sourcecode{gwhosts}, to control the available resources and one's own jobs.
GridWay can serve as a comfortable user interface to the entire grid,
similar in style to a local resource management system (LRM, queue system).
Note that resource information has to be provided through the Globus MDS
information
service and middleware to the GridWay server.
The LRM ``Fork'' means that single-processor jobs are accepted and
started by
a Unix process fork. Another available LRM is PBS (Portable Batch System) for
parallel jobs.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{img/fig12.pdf}
\caption{Flowchart of Steps to Submit NBODY Job via GridWay}
\label{abb:GridWayNBODY}
\end{figure}
Fig.~\ref{abb:GridWayNBODY} illustrates the three stages of a job run,
for the example of an NBODY calculation (Section~\ref{Nbo}). The first step is the
deployment which delivers an XML based job description as described in
Section~\ref{Nbo}. Such XML jobs can be submitted through the standard Globus
GRAM job submission interface and middleware to the GridWay host rather than directly
to the LRM of an execution host.
GridWay then receives this job through Globus, and
the GridWay Job Manager acts as a broker and scheduler. It selects an
available execution host through a matchmaking process and submits the job
to it by Globus GRAM. At present we have
implemented a simple round-robin strategy for single fork jobs; the GridWay
software in principle allows the implementation of more complex scheduling algorithms,
including user-defined parameters. It is always possible to submit jobs targeted to a
certain resource through GridWay, though this is not the desired mode of operation.
The third step is the execution and postprocessing stage, during which it has
to be ensured that the build process works properly on the target resource and
that the user receives the simulation results for postprocessing.
The two-step submission procedure with two Globus GRAM jobs connected by the GridWay server is
denoted as \emph{GridGateWay}. Note that it is also possible for the user to log on
to the GridWay host directly and use it for job submission.
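The brokering step can be sketched as follows (a toy metascheduler illustrating the round-robin strategy, not GridWay's actual implementation; the host names are invented):

```python
from itertools import cycle

# Toy metascheduler illustrating the round-robin matchmaking step (host
# names invented; a real broker would forward each job via Globus GRAM).
class RoundRobinBroker:
    def __init__(self, hosts):
        self._next_host = cycle(hosts)  # simple round-robin strategy

    def submit(self, job):
        host = next(self._next_host)
        return (job, host)              # record the (job, host) assignment

broker = RoundRobinBroker(["aip.gridnode", "ari.gridnode"])
assignments = [broker.submit(f"nbody-{i}") for i in range(4)]
```

A more complex scheduling algorithm would replace the \sourcecode{cycle} iterator with a matchmaking function over the resource information published via MDS.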
\section{Summary and Outlook}
\label{Con} \label{Summary}
\subsection*{Summary}
\astrogridd{} established a nation-wide pool of compute, data,
and instrument resources accessible for astronomers.
It also integrated special hardware compute resources
like clusters of GRAPE6 boards into the grid. The NBODY6++ use case impressively
demonstrates their exploitation in a grid environment.
Well-documented procedures explaining how to bring a resource into
the grid are available. Authentication and authorisation for the use of the grid resources are managed by the
Virtual Organisation. Moreover, the resources of
\astrogridd{} were integrated in \dgrid{}, which in turn provides access to
the resources of the whole \dgrid{} for the \astrogridd{} members.
Robotic telescopes were
also integrated into the grid as a special hardware resource, so they can be accessed like any other compute node.
A variety of typical astronomical applications was brought to the grid.
We investigated simple but compute-intensive
task-farming applications like Dynamo or GEO600 and showed that it is very easy to
run them on the grid without the need for complex reprogramming. We also
looked into more complex and data-intensive tasks, such as Clusterfinder, and
ported them to the grid.
The Clusterfinder program, e.g., is now able to scan the
entire available data for one model parameter set within several days, whereas it would
need more than two years on a single processor.
We developed a
set of high-level services: programmers can now make use of an information
service to handle metadata and to monitor jobs and resources.
Also, they can use GAT instead of interfacing a specific grid middleware directly. Moreover,
GridSphere enables user-friendly grid access with any
web browser. The ProC workflow engine supports the composition of
scientific workflows and their parallel grid execution.
Resource brokering and job scheduling are augmented in \astrogridd{} by the
GridWay Metascheduler. Thereby, more complex scheduling algorithms can be
implemented.
The \astrogridd{} Data Management ADM handles file staging
in combination with the job submission via GridWay.
It thus provides easy-to-use access to stored files and their replicas in the grid.
The integration of databases and data streams is also provided by
\astrogridd{}. Special attention is paid to optimisation techniques that
guarantee good performance for throughput as well as for response time.
Many of the services summarised above are addressed in
close collaboration with GAVO, whose focus is more on the side of the
scientific user, whereas
\astrogridd{} is solving the technical and infrastructural aspects.
Most of the German community grids, except the High Energy Physics community,
employ the Globus Middleware.
At the EU level, gLite, developed by EGEE, dominates all grid efforts, whereas internationally, the split
is roughly equal between EGEE/gLite and Globus.
A lot of effort goes into the interoperability of these
different middlewares, but some barriers still remain.
\astrogridd{} is collaborating with both EGEE/EGI
as well as the Open Science Grid (OSG).
\subsection*{Outlook}
The important next step is to enlarge the community of grid users. For this
purpose, the consulting
and support of new users have to be professionalised. We are
able to offer considerable resources in compute power and storage to the
scientific community.
There are some infrastructure elements that we would like to improve, e.g.
our methods for resource brokering and
job scheduling.
Proper and efficient handling of large amounts of data is a key
feature that the grid offers. Upcoming projects
such as LOFAR, Pan-STARRS or LSST will produce immense data
volumes whose storage, administration, and processing can no
longer be handled by local institutions. Moreover, this data is in many cases
processed in distributed, international working groups.
Grid technology is an appropriate answer to these new challenges. Due to the
parallelisation potential and the security layers of the grid, administration and access can be achieved
even at a complexity where central processing reaches its limits.
For this purpose we need a
powerful data management component to enable handling files, data
bases, and data streams in a coherent framework.
\astrogridd{} established a solid basis to cope with these future
challenges arising from forthcoming scientific needs. We look
forward to establishing our solutions as a cornerstone of German
e-Astronomy.
\
\noindent
{\bf Acknowledgments}\\
This work is supported by the German Federal Ministry of Education and
Research within the \dgrid{} initiative under contracts 01AK804[A-G].
AIP acknowledges support by EFRE, grant No.\ 9053.
ARI-ZAH acknowledges support by the Volkswagen Foundation,
grant No.\ I/80\,041-043 (Project {\sc 'GRACE'}), and by
the Ministry of Science, Research and the Arts of Baden-W\"urttemberg (Az:
823.219-439/30 and /36).
We acknowledge the special memorandum of
understanding between Astrogrid-D and the astronomical segment of
Ukrainian Academic GRID Network.
We thank Ignacio Llorente, Ruben Montero, and Tino V{\'a}zquez of Universidad
Complutense
Madrid, Spain, for help and support in installation and operation of the GridWay
service.
\section{Introduction}
\label{sec:intro}
Black hole X-ray binaries (BHXB s) that accrete at low Eddington ratios ($\lesssim 0.01 L_{\rm Edd}} % L_{Edd$, where $L_{\rm Edd}} % L_{Edd$ is the Eddington luminosity) are associated with compact radio emission from partially self-absorbed synchrotron jets \citep{blandford79, fender01, remillard06}. These relativistic outflows provide channels for accretion flows to shed angular momentum, and to transport energy out to large distances \citep[e.g.,][]{meier01, fender16, romero17, douna18}. Radio observations of relativistic jets are therefore crucial for understanding how matter is transported through, and away from, accretion flows with low mass accretion rates.
The most weakly accreting black holes reside in the `quiescent' spectral state, which we define here by a soft X-ray spectrum that can be characterised by a power-law photon index\footnote{The photon index $\Gamma$ is defined by $N_E \propto E^{-\Gamma}$, where $N_E$ is the photon number density per unit energy, $E$.}
of $\Gamma \sim 2.1$ \citep[e.g.,][]{tomsick01, kong02, corbel06, plotkin13, reynolds14}. Most BHXB s spend the majority of their time in quiescence, which usually corresponds to (0.5-10 keV) X-ray luminosities $\lx \lesssim 10^{-6} - 10^{-5} L_{\rm Edd}} % L_{Edd$ \citep{plotkin13, plotkin17}. In quiescence, a larger fraction of the radiative power emitted by the accretion flow/jet system appears to be emitted in the radio waveband \citep[e.g.,][although see \citealt{yuan05} for a prediction otherwise]{fender03, corbel13, gallo18}. This increased dominance of radio emission implies that the radio domain can be effective for discovering quiescent BHXB s \citep{maccarone05, fender13}, which would produce less biased samples of BHXB s in the Milky Way compared to the traditional method of discovering BHXB s through X-ray emission during an outburst.
Coordinated radio and X-ray surveys are indeed starting to reveal \textit{candidate} BHXB s in Milky Way globular clusters \citep{strader12, chomiuk13, miller-jones15, shishkovsky18} and in the field \citep{tetarenko16}. Such an approach is highly complementary to other strategies for discovering quiescent BHXB s, through, e.g., H$\alpha$ surveys \citep{casares18}, X-ray surveys \citep[e.g.,][]{agol02, jonker14}, and optical spectroscopic searches capable of discovering \textit{non-accreting} black hole candidates in detached binary systems \citep{giesers18, thompson18}.
At the moment, even our most sensitive radio facilities are only capable of probing the tip of the quiescent BHXB\ population. We have currently detected radio emission from only four nearby quiescent BHXB s ($\lesssim$4 kpc), including one of the most luminous known quiescent systems, V404 Cygni\ ($\lr \approx 10^{28}\,{\rm erg~s}^{-1}$ at 5 GHz; \citealt{hynes04, gallo05, rana16}), and three of the least luminous known systems (A 0620$-$00, XTE J1118+480, and MWC 656; $\lr \approx 10^{26}\,{\rm erg~s}^{-1}$; \citealt{gallo06, gallo14, dzib15, ribo17, dincer18}). Thus, we are still establishing the empirical properties of quiescent BHXB\ radio jets, which is an essential step for eventually defining reliable radio-based selection criteria.
To help inform future radio surveys, this paper focuses on quantifying the radio variability characteristics of quiescent jets. Understanding that variability level is important, as (i) it would establish whether radio variability can discriminate quiescent BHXB s from other classes of radio-emitting objects; and (ii) it would help determine how close in time radio observations must be coordinated with other multiwavelength data.
To open the quiescent radio time domain we focus on V404 Cygni, which represents the luminous end of the quiescent BHXB\ population. V404 Cygni\ contains a dynamically confirmed black hole ($9.0^{+0.2}_{-0.6} M_\odot$) orbiting a K3 III companion \citep{khargharia10} in a long $6.473 \pm 0.001$ day orbit \citep{casares92}, and it was the first transient BHXB to have an unambiguous X-ray detection in quiescence ($4\times10^{33}\,{\rm erg~s}^{-1}$ from 0.2-2.4 keV with the \textit{ROSAT} satellite; \citealt{wagner94}). It also has a well-established distance ($2.39\pm 0.14$ kpc) from radio parallax measurements \citep{miller-jones09}, which has also been measured in the optical (albeit with poorer precision) by \textit{Gaia} \citep{gaia-collaboration18}. Crucially, there already exist $\approx$10$^2$ radio observations of V404 Cygni\ in the Very Large Array (VLA) archive dating back to the 1990s. V404 Cygni\ is the \textit{only} quiescent BHXB\ with such a rich radio dataset (the next best sampled quiescent BHXB s are A0620$-$00 and MWC 656, each of which have 2-3 published radio detections; \citealt{gallo06, dzib15, ribo17, dincer18}). However, the full VLA archive has yet to be synthesized into a single variability study. The quiescent X-ray variability characteristics of V404 Cygni\ have already been well-quantified from minute through month timescales \citep[e.g.,][]{wagner94, hynes04, bradley07, bernardini14, rana16}, making V404 Cygni\ ripe for radio-to-X-ray comparisons.
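As a quick check of where V404 Cygni\ sits in Eddington-ratio terms, one can combine the quoted mass and quiescent X-ray luminosity with the standard hydrogen Eddington luminosity, $L_{\rm Edd} \approx 1.26\times10^{38}\,(M/M_\odot)\,{\rm erg~s}^{-1}$ (a back-of-the-envelope sketch; note the 0.2--2.4 keV band here differs from the 0.5--10 keV band used to define quiescence above):

```python
# Back-of-the-envelope quiescent Eddington ratio for V404 Cygni, using the
# numbers quoted above (mass from Khargharia et al. 2010, 0.2-2.4 keV
# luminosity from Wagner et al. 1994).
L_EDD_PER_MSUN = 1.26e38   # erg/s per solar mass (hydrogen Eddington limit)
M_BH = 9.0                 # black hole mass in solar masses
L_X = 4e33                 # quiescent X-ray luminosity in erg/s

L_edd = L_EDD_PER_MSUN * M_BH   # ~1.1e39 erg/s
eddington_ratio = L_X / L_edd   # ~3.5e-6
```

The result, $\lx/L_{\rm Edd} \approx 3.5\times10^{-6}$, is consistent with the quiescent range quoted above.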
In this paper we reanalyse all observations of V404 Cygni\ in quiescence with the VLA through 2015, and we also consider 14 observations with the Very Long Baseline Array (VLBA). We describe our data reductions in Section~\ref{sec:obs}. In Section \ref{sec:res} we describe the flux density variability characteristics on long (days through decades) and short (minutes to hours) timescales, and we explore radio spectra on long and short timescales. We discuss the radio jet properties in Section~\ref{sec:disc}, and in Section~\ref{sec:mwcoord} we provide recommendations on how to combat radio variability when coordinating multiwavelength observing campaigns on quiescent BHXB s and on BHXB\ candidates at luminosities comparable to V404 Cygni.
\section{Archival Observations and Data Analysis}
\label{sec:obs}
V404 Cygni\ has undergone two major outbursts during the VLA era, the first discovered on 1989 May 22 \citep{makino89}, and the second outburst on 2015 June 15 \citep{barthelmy15, kuulkers15, negoro15, younes15}. After the main 2015 outburst ended in July (see next paragraph), renewed X-ray activity was observed on 2015 December 21 \citep{malyshev15}, and V404 Cygni\ went through a mini-outburst that lasted $\sim$30 days \citep[see, e.g.,][]{kimura17, munoz-darias17, tetarenko19}.
For the 2015 outburst, from \citet{plotkin17} we consider that V404 Cygni\ re-entered quiescence around 2015 July 23, based on when the X-ray spectrum finished softening from $\Gamma \sim 1.6$ to $\Gamma \sim 2.0-2.1$ (see their Figure 1 and Table 3). We therefore exclude VLA observations on or before July 23 in this study, but we include VLA observations after July 23 (four total). Since the X-ray characteristics after July 23 appear similar to those before the outburst, we infer that the physical properties of the underlying accretion flow are no different during these 2015 observations compared to pre-outburst (and by extension, we do not expect physical differences in the radio jet pre- and post-outburst). We therefore include these four observations from 2015 for completeness. However, out of caution, we generally use different symbols/colors to mark the post-outburst observations in figures within this manuscript, and we often remove the post-outburst observations from statistical tests. We do not include any data during or after the 2015 mini-outburst.
The 1989 outburst lasted much longer than the 2015 outburst \citep[e.g.,][]{tetarenko19}. Similar quality X-ray spectral coverage of the transition into quiescence is not available from 1989, making it difficult to pinpoint when V404 Cygni\ re-entered quiescence. \citet{han92} monitored the 1989 outburst and decay with the VLA for two years, and from their Table 1 the 4.9 and 8.4 GHz flux densities remained brighter than 1 mJy even as late as 1990 September, perhaps indicating elevated accretion (for reference, typical flux densities between outbursts were 0.2--0.4 mJy, e.g., \citealt{gallo05, hynes09, rana16}). We suspect that only the final two epochs in \citet{han92} might represent the source in quiescence (taken on 1991 January 31 and 1991 May 31), but we cannot be certain. We therefore conservatively exclude all epochs already reported by \citet{han92} from our study, and we begin our dataset with a VLA epoch taken on 1991 September 25. Although, we stress that none of our results are (qualitatively) affected if we were to include the final two epochs from \citet{han92}.
We found a total of 129\ observations over five observing frequencies from 1.4 - 22.5~GHz taken with the historical VLA, i.e., before the VLA was upgraded and re-dedicated as the Karl G.\ Jansky VLA in 2012. The majority of these data are at 8.4 GHz (86\ observations). We also found five observations with the upgraded VLA from 4--8 GHz and four taken from 8--12 GHz. We also include 14\ observations with the Very Long Baseline Array (VLBA) at 5.0 GHz (13\ observations in 2014) and at 8.4 GHz (1\ observation in 2008). A summary of observations is provided in Table~\ref{tab:nobs}, and a catalog of flux densities in Table~\ref{tab:obslog}. Note that there are two observations at 8.4 GHz for which we could not obtain good calibration solutions. We include entries for those observations in our catalog (Table~\ref{tab:obslog}) for completeness, but we have omitted them from the tally of observations in Table~\ref{tab:nobs}. In total, we measure flux densities (or limits) for 150\ observations.
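The spectral indices reported in column (8) of Table~\ref{tab:obslog} are two-point measurements; as an illustration (our sketch, using the flux densities of the 1991 September 25 entry), the index and its propagated uncertainty follow directly from a pair of near-simultaneous flux densities:

```python
import math

# Two-point radio spectral index, f_nu proportional to nu**alpha, for the
# 1991 Sep 25 epoch in Table 2: 0.238 +/- 0.042 mJy at 4.9 GHz and
# 0.560 +/- 0.046 mJy at 8.4 GHz.
f1, e1, nu1 = 0.238, 0.042, 4.9
f2, e2, nu2 = 0.560, 0.046, 8.4

alpha = math.log(f2 / f1) / math.log(nu2 / nu1)
# Propagate the flux-density errors through the log-ratio:
alpha_err = math.hypot(e1 / f1, e2 / f2) / math.log(nu2 / nu1)
```

This reproduces the tabulated value of $\alpha_r = 1.59 \pm 0.36$ for that epoch.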
\begin{deluxetable}{c c c c}
\tablecaption{Number of Observations per Frequency \label{tab:nobs}}
\decimals
\tablecolumns{4}
\tabletypesize{\footnotesize}
\tablewidth{8in}
\tablehead{
\colhead{Observing Band } &
\colhead{Telescope} &
\colhead{$N_{\rm obs}$} &
\colhead{$N_{\rm det}$}
}
\colnumbers
\startdata
L /1.4 GHz & Historical VLA & 4 & 1 \\ \hline
C / 4.9 GHz & Historical VLA & 24 & 14 \\
\nodata & Upgraded VLA & 5\tablenotemark{a} & 5 \\
\nodata & VLBA & 13 & 13 \\ \hline
X / 8.4 GHz & Historical VLA & 84\tablenotemark{b} & 59 \\
\nodata & Upgraded VLA & 4\tablenotemark{a} & 4 \\
\nodata & VLBA & 1 & 1 \\ \hline
K$_{\rm U}$ / 14.9 GHz & Historical VLA & 11 & 5 \\ \hline
K / 22.5 GHz & Historical VLA & 4 & 0 \\
\enddata
\vspace{0.3cm}
\tablenotetext{a}{Four observations with the upgraded VLA post-2015 outburst were taken in subarray mode, with half of the antennas observing at C-band and the other half at X-band, which we count as separate C- and X-band observations in this table. The fifth upgraded VLA observation was taken at C-band (4-8 GHz) in 2013. We often add a 7.7 GHz flux density measurement from this C-band observation into our analysis of the X-band sample (which is not reflected within this table).}
\tablenotetext{b}{There are two additional observations in the archive at this frequency for which we could not obtain a calibration solution.}
\tablecomments{Column (1) observing band and frequency.
Column (2) the telescope.
Column (3) the number of observations.
Column (4) the number of detections.}
\end{deluxetable}
\renewcommand\arraystretch{1}
\begin{deluxetable*}{c c c C C C C C c c c }
\tablecaption{Catalog of Radio Observations \label{tab:obslog}}
\decimals
\tablecolumns{11}
\tabletypesize{\footnotesize}
\tablehead{
\colhead{Date} &
\colhead{MJD} &
\colhead{Program ID} &
\colhead{Configuration} &
\colhead{$\tau_{\rm source}$} &
\colhead{Frequency} &
\colhead{$f_\nu$} &
\colhead{$\alpha_r$} &
\colhead{Primary} &
\colhead{Secondary} &
\colhead{PI} \\%11
\colhead{} &
\colhead{} &
\colhead{} &
\colhead{} &
\colhead{(min)} &
\colhead{(GHz)} &
\colhead{(mJy)} &
\colhead{} &
\colhead{Calibrator} &
\colhead{Calibrator} &
\colhead{}
}
\colnumbers
\startdata
1991 Sep 25 & 48525.012 & AH424 & {\rm BnA} & 36.8 & 4.9 & 0.238 \pm 0.042 & 1.59 \pm 0.36 & 3C286 & 2025+337 & Han \\
\nodata & 48525.012 & AH424 & {\rm BnA} & 38.7 & 8.4 & 0.560 \pm 0.046 & \nodata & 3C286 & 2025+337 & Han \\
\nodata & 48524.984 & AH424 & {\rm BnA} & 55.7 & 14.9 & <0.695 & \nodata & 3C286 & 2025+337 & Han \\
1991 Oct 31 & 48561.022 & AH390 & {\rm BnA} & 59.8 & 4.9 & 0.321 \pm 0.028 & -0.72 \pm 0.33 & \nodata & 2025+337 & Hjellming \\
\nodata & 48561.004 & AH390 & {\rm BnA} & 46.0 & 8.4 & 0.218 \pm 0.034 & \nodata & \nodata & 2025+337 & Hjellming \\
\enddata
\tablecomments{This table is available in its entirety in the online journal. Only a portion is shown here to illustrate its form and content.
Column (1) calendar date of each observation.
Column (2) modified Julian day.
Column (3) program code for the VLA or VLBA.
Column (4) VLA configuration (or `VLBA' to denote VLBA observations).
Column (5) dwell time on V404 Cygni\ in minutes.
Column (6) observing frequency. Historical VLA observations include 100 MHz of bandwidth; observations after the upgrade include up to 1024 MHz.
Column (7) peak radio flux density at the observing frequency in column (6). All error bars are reported at the 68\% confidence level (and they include systematic errors on the flux density calibration scale); upper limits are at the 5$\sigma_{\rm rms}$ level. Blank entries indicate that we could not calibrate the observations.
Column (8) radio spectral index ($f_\nu \propto \nu^{\alpha_r}$) for epochs with multifrequency data taken within 30 min.
Column (9) the primary flux calibrator used for each observation. For blank entries, we manually set the flux scale using the expected flux density of the secondary calibrator, based on time-adjacent observations that used the same secondary calibrator at the same frequency.
Column (10) the secondary flux calibrator.
Column (11) principal investigator of observing program. }
\end{deluxetable*}
\subsection{Historical VLA}
\label{sec:histvla}
The vast majority of data from the historical VLA were obtained through campaigns led by either Robert Hjellming or Michael Rupen. All observations were taken in continuum mode, using two 50 MHz wide spectral windows. We reduced historical VLA observations following standard procedures within the Astronomical Image Processing System version 31DEC14 \citep[{\sc aips};][]{greisen03}. We set the flux scale (using either 3C 48 or 3C 286) with the task {\sc setjy} and (time-dependent) \citet{perley13} coefficients. We then solved for the complex gain solutions using scans of a secondary point-source calibrator (usually J2025+337, although see Table~\ref{tab:obslog} for exceptions), and the flux scale was bootstrapped from the primary calibrator using the task {\sc getjy}. Primary flux calibrator scans were not included for 24 observations. For those epochs, we manually set the flux scale to the expected value of the secondary phase calibrator by interpolating the flux densities reported by {\sc getjy} for time-adjacent observations of the same calibrator at the same frequency. In these cases, we add a systematic uncertainty based on the level of variability of the secondary calibrator in nearby epochs, typically $\approx$5\%. Finally, we add 5\% and 10\% systematic uncertainties to the flux scale for observations below and above 10 GHz, respectively.
The data were imaged using the task {\sc imagr}, using Briggs weighting with a robust value of zero to help minimise sidelobes from nearby sources in the field. Of particular note is that a bright Jansky-level source lies 16.6 arcmin southeast of V404 Cygni\ (J2025+337, which was often used as the phase calibrator). This source is difficult to deconvolve during the imaging process because of bandwidth smearing (given the 50 MHz spectral windows of the historical VLA).
We visually inspected a random subset of images of V404 Cygni\ over a range of frequencies and array configurations. We find that sidelobe artifacts from J2025+337\ do not reach V404 Cygni\ at observing frequencies $>$8 GHz. At frequencies $<$8 GHz, there are multiple instances where artifacts from J2025+337\ appear to increase the noise level near the location of V404 Cygni, but there are no cases where those artifacts obviously bias the measured flux densities.
Flux densities were measured using the task {\sc jmfit}, by fitting a two-dimensional Gaussian (fixed to the width of the synthesized beam) at the known location of V404 Cygni. Given potential systematics raised by J2025+337\ at lower frequencies, we require detections to display peak flux densities at $>$5$\sigma_{\rm rms}$, where $\sigma_{\rm rms}$ is the root-mean-square (rms) noise measured in a blank region of the sky (with upper limits for non-detections calculated as $5\sigma_{\rm rms}$).
\subsection{Karl G. Jansky Very Large Array}
\label{sec:obs:jvla}
Our sample also includes five epochs with the upgraded VLA. The first was taken in 2013 \citep{rana16}, and the other four were taken at the end of the 2015 outburst after the system re-entered quiescence \citep{plotkin17}.
\subsubsection{2013 Quiescence}
\label{sec:obs:rana}
The 2013 observation lasted $\approx$9 hours in C band (4-8 GHz) in B configuration (maximum baseline\,$\approx 11$\, km), under program code 13B-016 (PI Corbel). Two basebands of bandwidth 1024 MHz were placed at 5.25 and 7.45 GHz. The sources 3C 286 and J2025+337\ were used as the primary and secondary calibrators, respectively.
Similar data reduction steps were performed as described in Section \ref{sec:histvla}, except for the following: we used the Common Astronomy Software Applications package v5.1.1 \citep[{\sc casa};][]{mcmullin07}, to allow us to account for the larger fractional bandwidth; we Hanning smoothed the data to prevent radio frequency interference from bleeding into nearby frequency channels; we used the primary flux calibrator to solve for delay and complex bandpass solutions; and we imaged the field using the task {\sc clean}, using Briggs weighting (robust=1.0), and two Taylor terms to model the frequency dependence over the 2 GHz bandwidth. In these observations, the effects of bandwidth smearing on J2025+337\ were not significant, on account of the smaller frequency channels (2 MHz vs.\ 50 MHz), so we were able to adequately deconvolve J2025+337. Flux densities were measured by fitting a two-dimensional Gaussian with the {\sc casa} task {\sc imfit}, forcing a point source (fixed to the width of the synthesized beam) at the known location of V404 Cygni.
\subsubsection{2015 Post-outburst}
\label{sec:obs:outburst}
Four observations from the end of the 2015 decay are included in this study (2015 July 28 -- August 5), under program code SG0196 (PI Plotkin). As noted at the beginning of Section~\ref{sec:obs}, V404 Cygni\ re-entered quiescence by the time these observations were taken, according to its X-ray spectral behavior \citep{plotkin17}. All four observations were taken in the most extended A configuration (maximum baseline $\approx$30 km) using the VLA in subarray mode, where about half of the antennas observed from 4--8 GHz, and the other half observed from 8--12 GHz, yielding strictly simultaneous multi-frequency coverage over four frequencies centered at 5.2, 7.5, 8.6, and 11.0 GHz (with 1024 MHz bandwidth at each frequency). These data were reduced using the same procedures as described in Section~\ref{sec:obs:rana} (see \citealt{plotkin17} for details).
\subsection{VLBA}
We monitored V404 Cygni\ with the VLBA over 13 roughly fortnightly observations between 2014 February 3 and August 22, under program code BM399 (PI Miller-Jones). Each observation lasted 2\,hr, yielding $\approx$56\,min on source. We observed with all available antennas, using 256 MHz of bandwidth centered on a frequency of 4.98\,GHz. We used J2025+337 \citep[][16.6\,arcmin from V404 Cygni]{ma98} as both the fringe finder and phase reference calibrator, and the somewhat more distant J2023+3153 \citep[][1.99$^{\circ}$ from V404 Cygni]{ma98} as an astrometric check source. We also identified an additional VLBA observation at 8.4 GHz reported by \citet{miller-jones09}, taken on 2008 November 17 under program code BM290 in dual circular polarization with 64 MHz bandwidth per polarization (133 min on source), which we re-reduced.\footnote{Program BM290 contained four other observations at 8.4 GHz, which were already included in our sample because they used the phased VLA with the VLBA. We used the phased VLA observations in preference to the VLBA, because the phased VLA uses a standard flux calibrator, while the VLBA observations rely on system temperatures for amplitude calibration.}
We calibrated the data using {\tt AIPS} (version 31DEC15) following standard procedures. For the 8.4 GHz observation, we used geodetic blocks at the start and end of the observation to correct for unmodelled clock errors and tropospheric delays. For all observations, we applied updated Earth orientation parameters, and we corrected for ionospheric dispersive delays using total electron content maps. We used system temperature information for amplitude calibration, corrected the phases for parallactic angle effects as the antenna feeds rotated with respect to the sky, and iteratively imaged and self-calibrated the phase reference source J2025+337 to derive a model for calibrating the phase, delay and rate solutions, which were then applied to the target. V404 Cygni was too weak for self-calibration. After imaging with natural weighting (for maximum sensitivity), we determined the source flux density by fitting a point source in the image plane. V404 Cygni\ remains point-like at VLBA resolutions in quiescence \citep{miller-jones08}, such that we do not expect to resolve out any jetted radio emission, allowing a fair flux density comparison between VLA and VLBA epochs.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.48]{f1}
\caption{Radio light curves spanning 1991-2015 from the VLA and VLBA (the numbers of observations at each frequency are summarised in Table~\ref{tab:nobs}). We only show frequencies here with at least five data points. Filled symbols represent detections and open symbols represent $5\sigma_{\rm rms}$ upper limits. The red diamonds represent epochs taken with the VLBA, and the cyan stars represent four epochs taken between July and August of 2015 at the end of the outburst, after V404 Cygni\ had re-entered quiescence. Error bars are often smaller than the size of each symbol. The grey shaded regions mark time periods during the 1989 and 2015 X-ray outbursts, and the 2015-2016 mini-outburst. }
\label{fig:lc}
\end{center}
\end{figure*}
\section{Results}
\label{sec:res}
\subsection{Long-term Variability}
\label{sec:res:var:long}
Light curves are shown in Figure~\ref{fig:lc} at 4.9, 8.4, and 14.9 GHz (we omit our other two frequencies, 1.4 and 22.5 GHz, as they each contain only four observations). To quantify the distribution of flux densities at each frequency in the presence of non-detections, we perform a survival analysis. Using {\tt survfit} in {\tt R} (within the {\tt survival} package\footnote{\url{https://CRAN.R-project.org/package=survival}}) we calculate the survival function, $S(\log f_\nu)$, via the Kaplan-Meier estimator \citep[see, e.g.,][for a description of the Kaplan-Meier estimator and examples of astrophysical applications]{feigelson85}. We then estimate the cumulative distribution function as $P(\log f_{\nu}) = 1 - S(\log f_\nu)$, which we display in Figure~\ref{fig:fluxhist} at 4.9 and 8.4 GHz (omitting the four observations from 2015). In Section~\ref{sec:obs:spind} we image the 2013 observation with the upgraded VLA at four separate frequencies, centered at 5.0, 5.5, 7.2, and 7.7 GHz (each using 512 MHz bandwidth), to measure a spectral index. We therefore include the 5.0 GHz flux density from 2013 in the 4.9 GHz distribution here, and the 7.7 GHz flux density in the 8.4 GHz distribution. In total, our 4.9 and 8.4 GHz distributions include \nstatscband\ and \nstatsxband\ data points, respectively (of which \nstatscbandLim\ and \nstatsxbandLim\ are upper limits, respectively). Detections (at the $>$5$\sigma_{\rm rms}$ level) range from 0.14 to 1.35 mJy bm$^{-1}$.
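As an illustrative sketch only (the paper uses {\tt survfit} in {\tt R}, not this code), the Kaplan-Meier treatment of flux density upper limits can be reproduced by the standard trick of negating the data, so that left-censored limits become right-censored survival times, running the product-limit estimator, and mapping the survival function back to a cumulative distribution. All numerical values below are invented for demonstration.

```python
# Illustrative sketch (not the authors' R pipeline): a Kaplan-Meier
# estimate of P(log f_nu < x) in the presence of upper limits
# (left-censored data). All numerical values here are invented.

def km_cdf(values, is_limit):
    """Product-limit CDF for data containing upper limits.

    Standard trick: negate the data so an upper limit '< x' becomes a
    right-censored survival time '> -x', run the Kaplan-Meier
    estimator, then map the survival function back to a CDF.
    """
    data = sorted(zip([-v for v in values], is_limit))
    n = len(data)
    s = 1.0
    surv = []  # (negated value, survival probability) at detections
    for i, (t, censored) in enumerate(data):
        if not censored:
            s *= 1.0 - 1.0 / (n - i)  # n - i objects still at risk
            surv.append((t, s))
    # S(-x) for the negated data equals P(X < x) for the original data
    return [(-t, s) for t, s in reversed(surv)]

# With no upper limits the estimator reduces to the empirical CDF:
cdf = km_cdf([0.2, 0.5, 0.9], [False, False, False])
```

Censored points enter only through the at-risk counts, which is how each upper limit down-weights the survival steps at the detections.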
We compare the cumulative distribution functions in Figure~\ref{fig:fluxhist} to lognormal distributions using the Peto \& Peto modification of the Gehan-Wilcoxon test, as implemented by {\tt cendiff} in the {\tt R} package Nondetects and Data Analysis for Environmental Data ({\tt NADA}\footnote{\url{https://CRAN.R-project.org/package=NADA}}). The distributions of flux densities from V404 Cygni\ are not statistically different from lognormal distributions ($p=0.81$ and 0.93 at 4.9 and 8.4 GHz, respectively) with $\left<\log f_{\rm 4.9}/{\rm mJy}\right> = -0.53\pm 0.19$ and $\left< \log f_{\rm 8.4}/{\rm mJy}\right> = -0.53\pm 0.30$, where $f_{\rm 4.9}$ and $f_{\rm 8.4}$ are flux densities at 4.9 and 8.4 GHz, respectively, in units of mJy. The quoted errors represent standard deviations on the lognormal distribution (i.e., they are not errors on the mean).\footnote{The quoted numbers for the lognormal distributions are not biased by a small number of individual observations in the small- or large-flux-density tails of the distributions. We show this by bootstrapping the flux density distributions 100 times at each frequency, selecting subsamples of 30 and 50 at 4.9 and 8.4 GHz, respectively, with replacement. After running the same survival analysis on each bootstrapped distribution, we find average values from the 100 distributions to be $\left<\log f_{\rm 4.9}/{\rm mJy}\right> = -0.54\pm 0.19$ and $\left< \log f_{\rm 8.4}/{\rm mJy}\right> = -0.54\pm 0.28$, where errors represent standard deviations.}
\begin{figure}
\includegraphics[scale=0.4]{f2}
\caption{The cumulative distribution of the logarithm of flux densities (calculated as $1-S(\log f_\nu)$, where $S(\log f_\nu)$ is the survival function from the Kaplan-Meier estimator). The gray shaded area represents the 90\% confidence interval. The top panel is for 4.9 GHz (\nstatscband\ observations), and the bottom panel is 8.4 GHz (\nstatsxband\ observations).
Both flux density distributions are consistent with lognormal distributions, which are illustrated by red solid lines, where $\left<\log f_\nu\right> = -0.53 \pm 0.19$ and $-0.53 \pm 0.30$ at 4.9 and 8.4 GHz, respectively, where $f_\nu$ is the flux density in mJy, and errors represent standard deviations of the lognormal distributions.}
\label{fig:fluxhist}
\end{figure}
We find that V404 Cygni\ shows significant flux density variations that are in excess of statistical fluctuations from measurement errors (which are typically $\pm$0.04 mJy bm$^{-1}$ with the historical VLA). Taking 8.4 GHz as an example, among \nstatsxbandDet\ detections we find a reduced $\chi^2_r$ of 46 (for \nstatsxbandDof\ degrees of freedom, as compared to a model with constant flux density). We note that $\chi^2_r$ is slightly biased by one observation from 2013 with higher signal-to-noise using the upgraded VLA. Removing that observation still suggests significant intrinsic variability ($\chi_r^2 = 33$ for \nstatsxbandDofVlaonly\ degrees of freedom).
The fractional rms variability for all \nstatsxbandDet\ observations at 8.4 GHz is $F_{\rm var} = 54 \pm 6$ \% \citep{vaughan03}, which is not biased by the higher signal-to-noise observations (i.e., we also calculate $F_{\rm var} = 54 \pm 6\%$ if we exclude the 2013 observation). If we consider the statistical fluctuations induced by variability to have a standard deviation of $\sigma_{f_\nu, \rm var} = \pm f_\nu F_{\rm var}$, then propagating errors would yield $\sigma_{\log f_\nu, {\rm var}} = \pm F_{\rm var}/\ln 10$, such that a 54\% fractional rms variability translates to 0.23 dex in logarithmic space.
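The excess-variance calculation above can be sketched as follows, as a minimal illustration of the \citet{vaughan03} fractional rms estimator. The flux densities in the example are invented; only the $\pm$0.04 mJy error level is taken from the text.

```python
import math

# Minimal sketch of the fractional rms variability of Vaughan et al.
# (2003): the excess of the light-curve variance over the mean square
# measurement error, normalised by the mean flux. Data are invented.

def fractional_rms(fluxes, errors):
    n = len(fluxes)
    mean = sum(fluxes) / n
    s2 = sum((f - mean) ** 2 for f in fluxes) / (n - 1)  # sample variance
    mse = sum(e ** 2 for e in errors) / n                # mean square error
    return math.sqrt(max(s2 - mse, 0.0)) / mean

fluxes = [0.30, 0.60, 0.20, 0.90, 0.45]  # mJy (invented values)
errors = [0.04] * 5                      # typical historical-VLA error
fvar = fractional_rms(fluxes, errors)
sigma_log = fvar / math.log(10)          # spread in dex, as in the text
```

For the invented data above, $F_{\rm var}\approx0.55$, which maps to $\approx$0.24 dex via the $F_{\rm var}/\ln 10$ propagation used in the text.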
To further quantify the flux variability we produce the first-order structure function $V(\tau)$, which characterizes the amount of variability power as a function of time scale in the case of irregularly sampled data,
\begin{equation}
V(\tau) = \left< \left[f \left(t + \tau \right) - f \left(t \right)\right]^2\right>,
\end{equation}
where $f(t)$ is a flux density measurement at time $t$, and $\tau$ is a time delay. In the following analysis, we include only the \nstatsxbandDet\ detections at 8.4 GHz. For every pair of data points ($t_i, t_j$) in our light curve, we calculate $V_{ij}(\tau_{ij}) = \left[f \left(t_j\right)) - f \left(t_i\right)\right]^2$, where $\tau_{\rm ij} = t_j - t_i$. We then bin our set of $V_{\rm ij}$ measures by time difference ($\tau_{\rm ij}$) so that each bin contains 50 data points, and we take the average of those 50 measurements to calculate $V(\tau)$ (where we adopt the midpoint of all time differences within each bin as the value for the time delay $\tau$).\footnote{Our error bars represent the error on the mean value of $V(\tau)$ in each time bin. We define our errors in this manner for ease of comparison to \citet{bernardini14} who provide a structure function for the quiescent X-ray variability of V404 Cygni.} The structure function is shown in Figure~\ref{fig:sf}, where we probe long-term variability over timescales of $\sim$10 -- 4000 days.
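The pair-wise binning just described can be sketched as follows. This is an illustrative implementation, not the production code; the toy light curve and the three-pairs-per-bin choice are for demonstration only (the paper bins 50 pairs per time delay), and the sketch assumes the input times are sorted in increasing order.

```python
# Sketch of the first-order structure function V(tau) for an
# irregularly sampled light curve, binned to a fixed number of flux
# pairs per time-delay bin (50 in the text; 3 for this toy example).
# Assumes `times` is sorted ascending so every tau is positive.

def structure_function(times, fluxes, pairs_per_bin):
    pairs = []
    n = len(times)
    for i in range(n):
        for j in range(i + 1, n):
            tau = times[j] - times[i]
            pairs.append((tau, (fluxes[j] - fluxes[i]) ** 2))
    pairs.sort()  # order by time delay before binning
    out = []
    for k in range(0, len(pairs) - pairs_per_bin + 1, pairs_per_bin):
        chunk = pairs[k:k + pairs_per_bin]
        tau_mid = 0.5 * (chunk[0][0] + chunk[-1][0])  # bin midpoint
        v_mean = sum(v for _, v in chunk) / len(chunk)
        out.append((tau_mid, v_mean))
    return out

# Toy light curve: times in days, fluxes in mJy (invented values)
sf = structure_function([0.0, 1.0, 3.0], [1.0, 2.0, 4.0], 3)
```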
The slope of the structure function provides information on the power distribution of flux variations (see, e.g., \citealt{hughes92} for details, which we summarise below). For example, if variability is characterised as flicker noise (i.e., a power function $P(F) \propto F^{-1}$, where $F$ is the inverse timescale, i.e., frequency, of fluctuations), then the structure function $V(\tau)$\,=\,constant. The constant is expected to be 2$\sigma_{\rm var}^2$, with $\sigma_{\rm var}^2$ being the variance of the observed flux densities. One could also obtain $V(\tau)=2 \sigma_{\rm var}^2$ on long timescales if the structure function probes white noise (e.g., one interpretation would be that the probed timescales are longer than the characteristic timescale on which shot noise variations dampen). As another example, red noise variations ($P(F) \propto F^{-2}$) produce first-order structure functions of the form $V(\tau) \propto \tau$, which can often be interpreted as disturbances that follow a random-walk process \citep[e.g.,][]{kelly09}. Finally, flux variations smaller than the level of a typical error bar on flux density measurements ($\sigma_{\rm err}$) are not meaningful (assuming that $\sigma_{\rm err}$ is dominated by statistical noise). Thus, one expects the structure function to satisfy $V(\tau) \geq 2 \sigma_{\rm err}^2$ at all timescales.
The structure function in Figure~\ref{fig:sf} appears flat, with a best-fit slope and normalisation of $\beta=-0.11\pm0.07$ and $V_0 = 0.16\pm0.04$ mJy$^{2}$ (where $V(\tau) = V_0 \tau^\beta$). Measuring the variance directly from the \nstatsxbandDet\ data points yields 2$\sigma_{\rm var}^2 = 0.11$ mJy$^{2}$. Therefore, the structure function appears consistent with plateauing near 2$\sigma_{\rm var}^2$, which signifies either flicker or white noise. In the latter case, we would be probing jet disturbances on timescales ($\gtrsim$10 days) that are longer than characteristic damping timescales.
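The quoted slope and normalisation correspond to a power-law fit of the binned structure function; a minimal sketch is ordinary least squares in log-log space (unweighted here for brevity, whereas a full treatment would weight each bin by its uncertainty):

```python
import math

# Sketch of fitting V(tau) = V0 * tau**beta to a binned structure
# function via unweighted least squares in log-log space. A complete
# fit would weight each bin by its error bar; this is illustrative.

def fit_powerlaw(taus, vs):
    x = [math.log10(t) for t in taus]
    y = [math.log10(v) for v in vs]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    beta = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))
    v0 = 10.0 ** (ybar - beta * xbar)
    return v0, beta
```

A flat structure function returns $\beta \approx 0$ with $V_0$ equal to the plateau level, matching the flicker/white-noise interpretation discussed above.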
\begin{figure}
\includegraphics[scale=0.45]{f3}
\caption{The first-order structure function, including observations taken between 7.7 -- 8.4 GHz (\nstatsxbandDet\ measurements, omitting upper limits). We bin the structure function to contain 50 data points per time delay. Horizontal dashed and dotted lines illustrate, respectively, twice the average measurement error squared (2$\sigma_{\rm err}^2$) and twice the variance (2$\sigma_{\rm var}^2$) of the \nstatsxbandDet\ data points (note, the dotted line is not a fit). Flux density variations are well in excess of statistical fluctuations from measurement errors. The flat slope indicates either flicker noise variations (i.e., a power function $P(F) \propto F^{-1}$), or shot noise disturbances that resemble (uncorrelated) white noise when probed on timescales longer than a characteristic damping timescale of $\lesssim$10 days.}
\label{fig:sf}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.44]{f4}
\caption{Eleven observations that are long enough to produce light curves on sub-hour time resolution. Grey triangles represent 10 min time bins (3 min for the final two observations in 2015), and open triangles are 2$\sigma_{\rm rms}$ upper limits. For the historical VLA (observations from 1998-2009) we overplot 30 min time bins as red circles (2$\sigma_{\rm rms}$ limits as open circles), and for the VLBA observation (2008 November 17) we only include 30 min time bins. Note the slightly different central frequencies for each light curve (top right of each panel). Flares that increase the flux density by factors of 2-4 are common on minute to hour timescales, but there is not a single template that can describe all flares in terms of their amplitudes, rise times, and decay times. Note that the three epochs from 2015 were taken at the very end of the 2015 outburst decay, after the source had re-entered quiescence according to its X-ray signatures \citep[see][]{plotkin17}.}
\label{fig:shrtlc}
\end{figure*}
\subsection{Short-term Variability}
\label{sec:res:var:short}
We also examine variability on sub-hour timescales, focusing on observations long enough (usually $\gtrsim$90 min on source) to produce light curves over multiple time bins (we are able to achieve time resolutions ranging from 3-30 min, see Figure~\ref{fig:shrtlc}). We focus on observations near 8 GHz where we have 11 long observations total, including seven from the historical VLA, one from the VLBA, and three from the upgraded VLA.
\begin{deluxetable}{c c c C}
\tablecaption{Short-term Variability Statistics \label{tab:varstats}}
\decimals
\tablecolumns{4}
\tabletypesize{\footnotesize}
\tablewidth{8in}
\tablehead{
\colhead{Date} &
\colhead{$\left(\chi^2_r\right)_{\rm flux}$} &
\colhead{deg of freedom} &
\colhead{$\beta$}
}
\colnumbers
\startdata
1998 May 04 & 11.9 & 11 & \nodata \\
1998 Oct 25 & 9.0 & 14 & \nodata \\
2000 Jul 21 & 8.2 & 38 & \nodata \\
2003 Jul 29 & 2.1 & 26 & \nodata \\
2007 Dec 02 & 8.5 & 16 & \nodata \\
2008 Nov 17 & 35.4 & 6 & \nodata \\
2009 Feb 15 & 1.5 & 13 & \nodata \\
2009 Apr 26 & 3.7 & 23 & \nodata \\
2013 Dec 02 & 7.7 & 52 & 0.29 \pm 0.05 \\
2015 Aug 01 & 2.8 & 32 & 0.10 \pm 0.11 \\
2015 Aug 05 & 12.3 & 32 & 1.08 \pm 0.08 \\
\enddata
\tablecomments{Column (1) calendar date of each observation.
Columns (2)-(3) reduced $\chi^2_r$ for each light curve (including only detections) and the number of degrees of freedom.
Column (4) the best-fit powerlaw index to the structure function, when available ($V(\tau) \propto \tau^\beta$). }
\end{deluxetable}
\renewcommand\arraystretch{1}
\begin{figure}
\includegraphics[scale=0.5]{f5}
\caption{First-order structure functions for observations with the upgraded VLA, binned to 50 data points per time delay. Symbols and lines have the same meaning as in Figure~\ref{fig:sf}, except that the red lines represent fits to the data. Note that even when V404 Cygni\ does not show obvious flares (e.g., 2015 August 1), the observed flux density variations are still in excess of expectations from measurement errors. The structure function on 2015 August 5 is consistent with red noise ($V(\tau) \propto \tau$). }
\label{fig:sfshrt}
\end{figure}
We characterize the level of variability on each epoch by calculating the reduced $\chi^2_r$ of the flux densities in each light curve. For this calculation we ignore upper limits (which causes us to underestimate $\chi^2_r$ in some cases). Our measured $\chi^2_r$ values are listed in Table~\ref{tab:varstats}. Treating $\chi^2_r > 3$ as a crude diagnostic for significant intrinsic variations, we find obvious variability on 8/11 epochs. However, we cannot determine from $\chi^2_r$ alone if we should consider the other three epochs as non-variable. Rather, lower $\chi^2_r$ values imply that variability on those epochs is less extreme relative to fluctuations from statistical noise, and further tests are required (see next paragraph).
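A concrete sketch of this diagnostic is below. It is illustrative only: the flux values are invented, and it assumes the constant-flux model is the inverse-variance weighted mean, a detail the text does not specify.

```python
# Sketch: reduced chi^2 of a light curve against a constant-flux
# model. Assumption (not stated in the text): the constant is the
# inverse-variance weighted mean, and degrees of freedom are N - 1.

def reduced_chi2_constant(fluxes, errors):
    weights = [1.0 / e ** 2 for e in errors]
    mean = sum(w * f for w, f in zip(weights, fluxes)) / sum(weights)
    chi2 = sum(((f - mean) / e) ** 2 for f, e in zip(fluxes, errors))
    return chi2 / (len(fluxes) - 1)
```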
The three epochs with the upgraded VLA each display different short-term variability characteristics (i.e., the flux density slowly decreases with time on 2013 December 2, there are no obvious flares on $>$3 min timescales on 2015 August 1, and we observe the beginning of a flare on 2015 August 5). These three epochs therefore provide useful illustrative examples, and as such we display their structure functions in Figure~\ref{fig:sfshrt}. We fit a powerlaw to each structure function, and we find powerlaw indices of $\beta = 0.29 \pm 0.05$, $0.10 \pm 0.11$, and $1.08\pm0.08$ on 2013 December 2, 2015 August 1, and 2015 August 5, respectively ($V(\tau) \propto \tau^\beta$). Note that the structure function on 2015 August 1 is flat, plateauing at $2 \sigma_{\rm var}^2$, indicating that V404 Cygni\ indeed shows (uncorrelated) variability in excess of statistical measurement noise fluctuations on 2015 August 1, even though $\chi_r^2 = 2.8$.
\subsubsection{Quiescent Flares}
\label{sec:res:qflares}
In this subsection we summarise some of the flaring behaviour of V404 Cygni\ in quiescence. We note that factor of $>$2 changes in flux density appear to be common. However, we do not attempt to define a duty cycle, on account of the limited number of observations (11) with short-term light curves, for which we do not have uniform time resolution.
The 2007 December 2 observation \citep[first reported by][]{miller-jones08} provides an extreme example of rapid short-term variability, where the 8.4 GHz flux density increased by a factor of 3--4 in $<$10 min, reaching 1.4 mJy, followed by a slower decay that lasted at least 30 min. (The beginning of that light curve also shows a series of three alternating detections and non-detections, suggesting a factor of $>$2.5 flux variability over 10 min). However, not all short-term variability follows a pattern of a fast rise followed by a slower decay. For example, on 1998 May 4 a factor of 2-3 flare rose over $\approx$20 min, on 2009 April 26 a rise and a decay of a factor $\approx$3 in flux density occurred over $\sim$60-90 min in each direction, and on 2015 August 5 V404 Cygni\ displayed an increase in flux density by a factor $>$2 over $>$60 min.
We also see a potential variety in decay timescales. For example, the 2007 December 2 flare that quickly rose in $<$5-10 min took at least 30 min to decay. At the other extreme, the $\approx$9 hour 2013 December 2 observation appears to be decreasing in flux the entire time (with some smaller-scale variations superposed). If a flare preceded the beginning of that observation, then that implies some flares decay on timescales of at least several hours. From the above we conclude that tens of minutes to hours represent reasonable minimum characteristic timescales for the damping of radio flares. Although flares appear common, V404 Cygni\ also undergoes quieter periods of time, where either flares are absent or low-amplitude flares occur on short timescales $<$3 min (e.g., 2015 August 1).
\begin{figure*}
\includegraphics[scale=0.45]{f6}
\vspace{-1.7cm}
\caption{Epochs with multifrequency information taken within 30 min, with each panel showing detections as solid symbols (error bars are smaller than the size of each symbol) and 5$\sigma_{\rm rms}$ upper limits as open circles. The radio spectral index $\alpha_r$ (or limit) is provided in the top right corner of each panel, with the gray shaded regions representing 68\% confidence intervals on $\alpha_r$ (and dashed lines representing limits). Note that we observe a range of steep, flat, and inverted spectra, but non-simultaneity could be exaggerating the true spread in $\alpha_r$ (except for the final five epochs where the multifrequency information from the upgraded VLA is strictly simultaneous). The average spectral index is consistent with a flat radio spectrum, $\left<\alpha_r\right> = 0.02 \pm 0.65$. However, the 2013 December 2 observation highlights that the strictly simultaneous spectral index can be negative on some epochs ($\alpha_r = -0.26 \pm 0.05$). }
\label{fig:spinds}
\end{figure*}
\subsection{Radio Spectra}
\label{sec:obs:spind}
For 24 epochs there are observations at multiple frequencies within $\pm$30 min, from which we measure spectral indices ($f_{\nu} \propto \nu^{\alpha_r}$) as shown in Figure~\ref{fig:spinds}. For epochs with exactly two frequencies, we calculate $\alpha_r$ (or place a limit) analytically as $\alpha_r = \ln(f_{\nu_1}/f_{\nu_2}) / \ln(\nu_1/\nu_2)$, where $f_{\nu_1}$ and $f_{\nu_2}$ refer to flux densities at frequencies $\nu_1$ and $\nu_2$, respectively. We assign an error on $\alpha_r$ by propagating through statistical errors (we ignore errors on frequency for the historical VLA given the relatively small bandwidth at each frequency, and we also ignore systematic errors from short-term variability since we generally do not know if the spectra were taken during flaring activity). If there are more than two observing frequencies, we measure $\alpha_r$ through a least squares fit to the spectrum (in log space). We estimate error bars through Monte Carlo simulations where we randomly add noise to each flux density measurement (assuming Gaussian noise with a standard deviation equal to the measurement error on each data point) and then refit the spectral index. We repeat 1000 times and estimate $\sigma_{\alpha_r}$ as the standard deviation on the resulting $\alpha_r$ distribution.
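The two-point index and the Monte Carlo error estimate can be sketched as follows. Note the simplifications: the paper uses analytic propagation for two-point indices and reserves the Monte Carlo for least-squares fits over more than two frequencies, whereas this sketch applies a Monte Carlo to the two-point case; all flux values are invented.

```python
import math
import random

# Sketch: two-point radio spectral index (f_nu proportional to
# nu**alpha_r) and a simplified Monte Carlo error estimate. The paper
# propagates two-point errors analytically; this MC is illustrative.

def spectral_index(f1, f2, nu1, nu2):
    return math.log(f1 / f2) / math.log(nu1 / nu2)

def alpha_error_mc(f1, e1, f2, e2, nu1, nu2, ntrial=1000, seed=42):
    rng = random.Random(seed)
    alphas = []
    for _ in range(ntrial):
        # perturb each flux by its Gaussian error (clamped positive)
        g1 = max(rng.gauss(f1, e1), 1e-6)
        g2 = max(rng.gauss(f2, e2), 1e-6)
        alphas.append(spectral_index(g1, g2, nu1, nu2))
    mean = sum(alphas) / ntrial
    var = sum((a - mean) ** 2 for a in alphas) / (ntrial - 1)
    return math.sqrt(var)
```

Equal flux densities at the two frequencies give $\alpha_r = 0$ (a flat spectrum), and a factor-of-two rise over one octave gives $\alpha_r = 1$.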
For the 2013 epoch from \citet{rana16} with the upgraded VLA, we create four (strictly simultaneous) images by splitting the bandwidth into four sub-bands with 512 MHz bandwidth (centered at 5.0, 5.5, 7.2, and 7.7 GHz), and we allow the frequency to randomly vary across each 512 MHz sub-band when running our Monte Carlo simulations to estimate error bars. We obtain $\alpha_r=-0.26 \pm 0.05$, which is consistent with the value $\alpha_r = -0.27 \pm 0.03$ obtained by \citet[][]{rana16}. For the four VLA observations from 2015, we adopt the spectral indices reported by \citet{plotkin17}, which were calculated using the same least squares fitting method as described above (over 4--12 GHz).
\subsubsection{Long-term Spectral Variations}
\label{sec:obs:spind:long}
We measure a large range of spectral indices, spanning $-1.5 \lesssim \alpha_r \lesssim +1.6$, with the spread of $\alpha_r$ being larger at lower flux densities (Figure~\ref{fig:spindflux}). However, we argue in Section~\ref{sec:disc:spind} that this spread in $\alpha_r$ is likely largely attributable to statistical and systematic errors (i.e., large error bars on most $\alpha_r$ measurements, especially at lower flux densities, combined with only five epochs having strictly simultaneous multifrequency data).
To quantify the dispersion in radio spectral indices, we fit a Gaussian distribution to the $\alpha_r$ measurements (using a Bayesian framework that allows for both upper and lower limits). We find a mean $\left<\alpha_r\right> = 0.02 \pm 0.17$ and a standard deviation $\sigma_{\alpha_r} = 0.65 \pm 0.15$, where the errors represent 68\% confidence intervals of the posterior distributions. If we exclude the five spectra obtained from the upgraded VLA (which have significantly smaller error bars on $\alpha_r$), we find consistent results: $\left<\alpha_r\right> = -0.07 \pm 0.25$ and $\sigma_{\alpha_r} = 0.78 \pm 0.21$. Throughout we adopt $\left<\alpha_r \right>= 0.02 \pm 0.65$, which is consistent with a flat radio spectral index on average, as expected for a partially self-absorbed compact synchrotron jet.
\begin{figure}
\includegraphics[scale=0.45]{f7}
\caption{Radio spectral index $\alpha_r$ versus 8.4 GHz flux density. The cyan stars represent the four epochs from 2015. The five data points with the smallest error bars were taken with the upgraded VLA, and the other 19 data points were taken with the historical VLA. The dashed horizontal line marks $\alpha_r=0$ for reference. The spectral index tends to become negative only at low flux densities, which may be related to observational effects, such as non-simultaneity and/or most low flux density observations having larger measurement errors. However, some negative spectral indices at low flux densities are likely reflecting intrinsic changes within the jet (e.g., the 2013 December 2 epoch with a well-measured $\alpha_r = -0.26 \pm 0.05$ from strictly simultaneous multifrequency data).}
\label{fig:spindflux}
\end{figure}
\subsubsection{Short-term Spectral Variations}
\label{sec:obs:spind:short}
Among the 11 observations from which we produce sub-hour light curves in Section~\ref{sec:res:var:short}, four contain multi-frequency data allowing us to explore variations in $\alpha_r$ over sub-hour timescales. These epochs include 2013 December 2 with the upgraded VLA \citep[][4--8 GHz]{rana16}, two epochs at the end of the 2015 outburst on August 1 and 5 \citep[4--12 GHz]{plotkin17}, and a historical VLA observation on 2003 July 29 that interleaved observations at 4.9 and 8.4 GHz every $\approx$15 min \citep{hynes04, hynes09}.
\begin{figure*}
\begin{center}
\includegraphics[scale=0.45]{f8}
\caption{Four observations for which we can extract spectral information over sub-hour timescales, including 2003 July 29 (60 min time bins, also see \citealt{hynes09}), 2013 December 2 (10 min time bins, also see \citealt{rana16}), and 2015 August 1 and 2015 August 5 (3 min time bins, also see \citealt{plotkin17}). Note that the multi-frequency information is strictly simultaneous for the final three observations, but not for the 2003 observation. We do not observe meaningful fluctuations in $\alpha_r$ on these short time scales.}
\label{fig:shrtlcSpind}
\end{center}
\end{figure*}
The radio spectra from these four epochs are displayed in Figure~\ref{fig:shrtlcSpind}, where we also display the corresponding light curves for reference.\footnote{For these light curves, we require $S/N>3$ for the flux densities in each time bin to reduce uncertainties on $\alpha_r$. Since the on source integration time should be similar in each time bin, we also remove a small number of time bins ($<$5 across all four sources) where the $\sigma_{\rm rms}$ of the time-resolved images differ by at least a factor of two from the median $\sigma_{\rm rms}$ of all images on each date. Such large variations in $\sigma_{\rm rms}$ might indicate that artifacts remain after the cleaning process, or that there is an unusually large amount of flagged data during that time bin.}
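The quality cut described in the footnote (dropping time bins whose image rms differs from the median by at least a factor of two) can be expressed as a simple mask; the function name here is illustrative, not from the authors' pipeline.

```python
import numpy as np

def keep_good_bins(image_rms):
    """Boolean mask keeping time bins whose image rms is within a factor of
    two of the median rms across all time-resolved images on a given date."""
    rms = np.asarray(image_rms, dtype=float)
    med = np.median(rms)
    return (rms > med / 2.0) & (rms < med * 2.0)

# Bins with rms 5x and ~3x away from the median are discarded.
mask = keep_good_bins([1.0, 1.1, 0.9, 5.0, 0.3])
```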
The spectral constraints during the 2003 epoch are not very meaningful, but we include that epoch in the figure for completeness. For the other three epochs, we do not see any obvious changes of $\alpha_r$ with time; variations of $\alpha_r$ are consistent with statistical noise, with reduced $\chi^2_r$ of 1.5, 2.5, and 1.5 (for 52, 32, and 32 degrees of freedom) on 2013 December 2, 2015 August 1, and 2015 August 5, respectively. We note that \citet{rana16} reported that the radio spectrum of V404 Cygni\ switched from optically thick to optically thin over $\approx$10 min periods on 2013 December 2. While we also see some variations in $\alpha_r$ over 10--30 min timescales, they tend to be at the 1--2$\sigma$ level, such that we do not consider those variations to be highly significant relative to the measurement error. We see tentative indications of a long-term evolution of the spectrum over the $\approx$9 hour observation. Imaging the first 100 min of the observation yields $\alpha_r = -0.39 \pm 0.10$, while a less steep (and potentially flat) spectral index of $\alpha_r = -0.14 \pm 0.14$ is measured from images during the last 100 min. The steeper spectral index during the first 100 min appears to be driven by variations at 7.45 GHz that are not mimicked at 5.25 GHz.
\section{Discussion}
\label{sec:disc}
We have presented radio light curves of V404 Cygni\ in quiescence from 150\ VLA and VLBA observations spanning 1991--2015, and we find that factor of 2--4 variations are common on timescales ranging from minutes to decades. Eleven observations are long enough to produce light curves on sub-hour timescales, from which we conclude that radio flares that last from tens of minutes to hours are common. However, there is not a single template to describe quiescent radio variability, either in flare profile, amplitude, or timescale. The observed variety in flare properties could imply that multiple mechanisms control the radio variability, or that a single type of process yields a range of subtly different radiative signatures (e.g., a shock traveling through a steady jet, where the dissipation of energy is highly sensitive to the local conditions at the location of the shock).
Only for a single large flare (2015 August 5) do we have sufficient time resolution to attempt to statistically characterize its properties during the rise. Its structure function displays a slope $\beta \approx 1$, which means that at least some flares are consistent with red noise (i.e., $P(F) \propto F^{-2}$, although we note that we do not have coverage of the entire flare, which could bias our slope measurement). Considering that the structure function of long-term variations is flat (over $\approx$10-4000 day timescales; Figure~\ref{fig:sf}), we suggest that the quiescent jet of V404 Cygni\ (sometimes) displays flares with `random walk' noise characteristics that dampen to uncorrelated variations on longer timescales. We suspect that the damping time can be as short as tens of minutes to hours, as observed during some flares in Figure~\ref{fig:shrtlc}, but from Figure~\ref{fig:sf} we can only constrain the damping timescale to $\lesssim$10 days.
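The first-order structure function used above can be computed as a binned average of squared flux differences; this is a generic sketch under the usual definition, not the exact binning adopted in our analysis.

```python
import numpy as np

def structure_function(t, f, bins):
    """First-order structure function SF(tau) = <[f(t+tau) - f(t)]^2>,
    averaged in bins of time lag tau.  In log-log space a slope beta ~ 1
    corresponds to red noise (P(F) ~ F^-2), while a flat SF indicates
    variations that are uncorrelated on those lags."""
    t = np.asarray(t, dtype=float)
    f = np.asarray(f, dtype=float)
    i, j = np.triu_indices(len(t), k=1)        # all unique pairs of samples
    lag = np.abs(t[j] - t[i])
    d2 = (f[j] - f[i])**2
    idx = np.digitize(lag, bins)
    sf = np.array([d2[idx == k].mean() if np.any(idx == k) else np.nan
                   for k in range(1, len(bins))])
    return 0.5 * (bins[:-1] + bins[1:]), sf

# Sanity check on a linear ramp f(t) = t, for which SF(tau) = tau^2 exactly.
t = np.arange(200.0)
centers, sf = structure_function(t, t, np.array([0.5, 1.5, 9.5, 10.5]))
```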
We do not assert that all flares have red noise characteristics, since there is only one (large) flare for which we have sufficient time resolution to calculate a structure function, and that observation was taken very shortly after the system returned to quiescence following the 2015 outburst. Nevertheless, the behaviour of red noise flares that dampen to uncorrelated variations on long timescales is reminiscent of decade long radio light curves of BL Lac objects, i.e., low-luminosity active galactic nuclei (AGN) with a jet pointed toward Earth. \citet{hughes92} find that BL Lac objects typically show first-order structure functions with slopes $\beta \sim 1$ that become flat at long timescales (i.e., consistent with a damped random walk). They find a broad distribution of characteristic damping timescales from $\approx$1-10 yr for BL~Lac objects.\footnote{This phenomenology is also common for jet-dominated emission in the optical waveband, where BL~Lac object light curves can be well-characterised by a damped random walk \citep{ruan12}, and in some cases also in the gamma-ray (where sporadic flaring over a steady flicker/red noise power spectrum is often observed; e.g., \citealt{abdo10}).}
To first order, after correcting for beaming effects related to BL~Lac orientation, we expect BL Lac objects to be analogous to hard state BHXB s \citep[e.g.,][]{falcke04, kording06}, and, despite their higher Eddington ratios, we believe that BL~Lac objects also make reasonable analogs for comparison to a luminous quiescent BHXB\ like V404 Cygni\ (i.e, both types of systems launch compact radio jets from a black hole fed by inefficient accretion). However, to compare BL~Lac timescales to V404 Cygni\ requires correction for the effects of relativistic beaming from the BL~Lac jets, which is difficult given unknown Doppler factors and redshifts for most of the BL~Lac objects in \citet{hughes92}. As an extreme example, we consider a BL Lac object with a Doppler factor $\delta \approx 60$, which would correspond to a very fast jet with bulk Lorentz factor $\Gamma \approx 50$ \citep[e.g.,][]{lister09} aligned within only 1 degree to our line of sight. In that case, the intrinsic (i.e., rest-frame) characteristic timescales from \citet{hughes92} would be $\lesssim$600 yr (BL~Lac objects tend to have low redshifts, e.g., a median $z\sim0.33$ in \citealt{shaw13}, such that cosmological corrections will be smaller than beaming corrections).
If one were to associate the above timescale with a timescale that scales linearly with black hole mass (e.g., the light travel time across an emitting region that is comparable in size to the radio photosphere of a conical jet), then $\lesssim$600 yr would scale down to $\lesssim$10 min for V404 Cygni\ (comparing the 9 $M_{\rm Sun}$ mass of V404 Cygni\ to a typical $3\times10^8 M_{\rm Sun}$ mass for a BL Lac object, \citealt{plotkin11}). We suspect that most flares from V404 Cygni\ decay on timescales of at least several tens of minutes to hours, longer than expectations from the above mass scaling. Therefore, we are either probing physical variations in V404 Cygni\ that are inaccessible for individual supermassive black holes, or it is possible for some AGN phenomenology to occur on much faster timescales than expected from (simplistic) mass-scaling arguments. We view the latter as a plausible explanation, particularly in light of the existence of changing look AGN, which have accretion disks that appear to operate on much quicker timescales than expected from mass-scaling arguments \citep[e.g.,][]{noda18, dexter18}.
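The linear mass scaling quoted above is a one-line calculation, reproduced here with the numbers from the text:

```python
# De-beamed BL Lac damping timescale (<~600 yr) scaled linearly by black
# hole mass from a typical BL Lac (3e8 M_sun) down to V404 Cygni (9 M_sun).
t_agn_yr = 600.0             # upper limit on intrinsic BL Lac timescale
m_xrb, m_agn = 9.0, 3e8      # black hole masses in solar units
minutes_per_year = 365.25 * 24 * 60
t_xrb_min = t_agn_yr * (m_xrb / m_agn) * minutes_per_year   # ~9.5 min
```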
Finally, we note that the radio photosphere of the quiescent jet from V404 Cygni\ (at GHz frequencies) is empirically constrained to be located at a distance $\lesssim$3.4 AU from the black hole \citep{miller-jones08, plotkin17}. The size of the radio emitting region would be even smaller (e.g., by a factor of $\tan \phi$ for a conical jet, where $\phi$ is the jet opening angle; \citealt{miller-jones06}), such that one should not expect radio variations on timescales longer than $\approx$tens of minutes to be causally connected unless the jet is very slow. Thus, long-term variations likely reflect the jet responding to separate disturbances, like changes in the mass accretion rate through the inner flow, or like changing intrinsic properties of the jet (e.g., how internal energy in the jet is partitioned between particles and magnetic field). That both the radio and X-ray display similar long-term variability characteristics (i.e., flat structure functions, see \citealt{bernardini14}) might support that both wavebands release radiative power by tapping into the same energy reservoir, as suggested by, e.g., \citet{malzac04}.\footnote{We note that we are probing longer timescales than considered by \citet{malzac04}, who consider X-ray variations associated with dynamical timescales at 10--100 Schwarzschild radii ($\approx 0.1$ s), while in our case the X-ray and radio variations are over timescales longer than several days. However, the general idea that both the radio and X-ray emission regions are responding to a common physical driver seems a reasonable interpretation.}
\subsection{Spectral Variations}
\label{sec:disc:spind}
As noted in Section~\ref{sec:obs:spind:long}, we measure radio spectral indices ranging from $-1.5 < \alpha_r < +1.6$, with a mean $\alpha_r = 0.02$ and a standard deviation $\sigma_{\rm \alpha_r} = 0.65$. We suspect that the large range in $\alpha_r$ can be attributed to the poorer sensitivity of the historical VLA, and multifrequency data that can be offset by $\pm$30 min. For example, a factor of two variability within 30 min would adjust $\alpha_r$ by $\pm1.3$. We are therefore cautious not to over interpret the apparently large range of measured $\alpha_r$, which might not require a physical explanation.
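The $\pm1.3$ figure quoted above follows directly from the two-point spectral index between 4.9 and 8.4 GHz:

```python
import math

# Bias on a two-point spectral index alpha = ln(S_2/S_1) / ln(nu_2/nu_1)
# if the source varied by a factor of two between non-simultaneous
# observations at 4.9 and 8.4 GHz:
delta_alpha = math.log(2.0) / math.log(8.4 / 4.9)   # ~1.3
```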
The above variability-related concerns, however, do not preclude the possibility of less extreme spectral variations. The upgraded VLA provides strictly simultaneous multifrequency observations, and the 2013 December 2 epoch ($\alpha_r = -0.26 \pm 0.05$) provides evidence that the spectral index can indeed stray negative at times (as originally pointed out by \citealt{rana16}). We stress that $\alpha_r=-0.26 \pm 0.05$ is inconsistent with a purely optically thin spectrum, as expected from transient ejecta \citep[e.g.,][]{fender99, corbel04}. Possible explanations for a mildly negative spectral index, as observed on 2013 December 2, could include the combination of optically thick and optically thin emission (e.g., the fading optically-thin stage of a flare superposed over a steady, flat spectrum jet), a jet that expands more slowly than a conical jet, or a decelerating jet. Also contributing could be that lower-frequency variations are `smeared', since they are emitted over a larger volume (farther from the black hole) compared to higher-frequency emission. Indeed, the 2013 December 2 light curve (Figure~\ref{fig:shrtlcSpind}) displays dips at 7.45 GHz at approximately 50 and 150 min after the start of the observation that are not mimicked at 5.25 GHz, which likely contributes to the overall negative spectral index.
To our knowledge, the 2013 December 2 observation of V404 Cygni\ is the only observation of a quiescent BHXB\ to show a (well-measured) negative spectral index. However, comparably negative spectral indices have been measured from compact jets in the \textit{hard} state (see, e.g., \citealt{espinasse18} and references therein, although we note many of those observations also suffer from low sensitivity). Reasonable explanations for different spectral indices in the hard state include intrinsic differences in jet properties \citep{espinasse18} \textit{or} differences in inclination \citep{motta18} (or both). One difference in our work, however, is that we are seeing both positive and negative spectral indices from the same source. Therefore, we cannot explain a varying radio spectral index from V404 Cygni\ in quiescence as an inclination effect.
\section{Coordinating Multiwavelength Observations}
\label{sec:mwcoord}
Part of our motivation for this work is to quantify the level to which variability-related systematic uncertainties can influence the location of quiescent BHXBs in the radio/X-ray luminosity plane ($\lr-\lx$). Understanding these uncertainties is important for studies on disk/jet couplings (e.g., fitting $\lr-\lx$ correlation slopes for individual objects), and also for studies that appeal to $\lr-\lx$ to identify new BHXB\ candidates. Below we compare the radio variability of V404 Cygni\ to its X-ray variability (taken from the literature), and we recommend some guidelines for reducing variability-induced systematics when coordinating and interpreting radio/X-ray observations of quiescent BHXB\ (candidates) with luminosities comparable to V404 Cygni.
\subsection{X-ray Variability in Quiescence}
\citet{bernardini14} obtained dense X-ray monitoring of V404 Cygni\ with the \textit{Neil Gehrels Swift Observatory}, taking 33 observations over 75 days. Their study provides the most appropriate comparison to our long-term radio light curve(s) in Figure~\ref{fig:lc}. They find a fractional rms variability in the X-ray of $F_{\rm var, X-ray} = 57 \pm 3$\% ($\sim$0.25 dex), and they obtain a flat structure function over timescales of $\approx$5-80 days. These X-ray results are similar to our radio results, where we find $F_{\rm var, radio} = 54 \pm 6$\% ($\sim$0.23 dex) and a flat structure function (although our study extends to longer timescales of $\approx$10-4000 days).
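The fractional rms variability quoted here is the commonly used excess-variance estimator, i.e., the variance above that expected from measurement noise, normalised by the mean flux. A minimal sketch:

```python
import numpy as np

def fractional_rms(flux, flux_err):
    """F_var = sqrt((S^2 - <err^2>) / <flux>^2), where S^2 is the sample
    variance of the light curve and <err^2> the mean squared measurement
    error; negative excess variance is clipped to zero."""
    flux = np.asarray(flux, dtype=float)
    err = np.asarray(flux_err, dtype=float)
    excess = flux.var(ddof=1) - np.mean(err**2)
    return np.sqrt(max(excess, 0.0)) / flux.mean()

# Noise-free example: mean 2, sample variance 4/3, so F_var = sqrt(4/3)/2.
fvar = fractional_rms([1.0, 1.0, 3.0, 3.0], [0.0, 0.0, 0.0, 0.0])
```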
While the level of long-term X-ray variability is comparable to that in the radio, short-term X-ray variability can be larger in amplitude. For example, on hour timescales factor of 4--8 X-ray variations are typical \citep[e.g.,][]{bradley07, bernardini14, rana16}; at the extreme end, \citet{hynes04} observed a flare that increased the \textit{Chandra} count rate by a factor $>$20 (also see \citealt{wagner94} for a factor of 10 variation in $<$0.5 days). For comparison, in our work we commonly observe flares that change the flux density by factors of 2--4 in the radio.
Whether or not one should expect correlated variations between the radio and X-ray bands on short timescales is an open question. To our knowledge, there have not been multiwavelength campaigns that simultaneously take radio and X-ray observations \textit{in quiescence} over multiple epochs separated by only 1--2 days, thereby making it impossible to empirically test if radio/X-ray variations are correlated on sub-week timescales. On minute through hour timescales, only two attempts have been published so far that searched for coordinated radio/X-ray variability over observations lasting several hours (2003 July 29 and 2013 December 2), and neither attempt has shown obvious radio/X-ray correlations with light curves on 30--100 min time bins \citep{hynes09, rana16}. However, detecting correlated variations may require finer time resolution.
\subsection{Recommendations for Coordinating Radio and X-ray Observations}
Considering the above X-ray characteristics, and that correlated (short-term) radio and X-ray variability might only be detectable with minute time resolution, we suggest the following strategies when coordinating multiwavelength campaigns on quiescent X-ray binaries:
\begin{itemize}
\item \textit{radio and X-ray observations should be scheduled as simultaneously as possible.} If the source is bright enough to provide sufficient signal-to-noise on $<$10 min time bins in both the radio and X-ray, then (some) individual flares should be resolvable, and one can directly control for multiwavelength variability. Ideally the observations would be long enough to observe the entire flare rise and decay (which can last several hours).
\item \textit{If data are non-simultaneous, or if one does not have sufficient time resolution to resolve individual flares, then inflate the error bars by $\approx$0.25 dex in both the radio and the X-ray.} However, one should still bear in mind that variations as large as factors of 2--4 in the radio and 4--8 in the X-ray are common.
\item \textit{Radio observations should be taken as close as possible to the frequency required to achieve one's science goals.} If spectral constraints from strictly simultaneous multifrequency data are lacking, then we support the standard assumption of a flat radio spectrum when extrapolating radio observations to other frequencies. However, we find evidence that $\alpha_r$ is not always strictly zero, and we recommend propagating errors on flux densities by assuming that the radio spectrum could vary by $\sigma_{\alpha_r} \approx \pm 0.6$ (see Section~\ref{sec:obs:spind:long}).
\end{itemize}
\subsection{A Comparison to the Transitional Millisecond Pulsar PSR J1023+0038}
\label{sec:mwcoord:tmsp}
While quiescent BHXB s tend to have higher $\lr/\lx$ ratios than other classes of Galactic compact accreting objects \citep[e.g.,][]{migliari06, tudor17, gallo18}, the transitional millisecond pulsar (tMSP) PSR J1023+0038 (hereafter J1023) was recently shown to sometimes be an exception to that trend \citep{bogdanov18}. Considering periods when J1023\ does not show radio pulsations as accretion-powered states, J1023\ exhibits aperiodic and rapid switching between two distinct X-ray flux levels, a low mode ($\lx \approx 5 \times 10^{32}~{\rm erg~s}^{-1}$) and a high mode ($\lx \approx 3 \times 10^{33} ~{\rm erg~s}^{-1}$; e.g., \citealt{patruno14, stappers14, jaodand16, bogdanov15}). \citet{bogdanov18} discovered that the low modes are nearly always accompanied by radio flares that both rise and decay on minute timescales. The radio flux density appears anti-correlated with the X-ray flux, although some radio flaring is also observed when the X-ray flux remains in a high mode. During these low X-ray mode states, the increased radio flaring tends to reach $\lr \approx 2 \times 10^{27}\,{\rm erg~s}^{-1}$.
In Figure~\ref{fig:lrlx} we highlight the location of J1023\ in the $\lr-\lx$ plane\footnote{Data taken from \citet{arashlrlx}, see \url{https://github.com/bersavosh/XRB-LrLx_pub}.}
(large green triangles) compared to V404 Cygni\ in quiescence and in the hard state (large blue circles). We also shade in a region that represents the minimum and maximum luminosities displayed by V404 Cygni\ within our radio dataset ($\approx 5\times10^{27} - 5\times10^{28}~{\rm erg~s}^{-1}$, which corresponds to flux densities ranging from 0.14-1.35 mJy if one assumes a flat radio spectrum), and the minimum and maximum X-ray luminosities displayed during the \textit{Swift} campaign from \citet{bernardini14}. We estimate 1-10 keV X-ray luminosities assuming a photon index of 2.1 and a column density of $9\times10^{21}$ cm$^{-2}$ \citep{bernardini14}, using X-ray count rates extracted from the online \textit{Swift}-XRT product generator tool\footnote{\url{http://www.swift.ac.uk/user_objects}} \citep{evans07, evans09}. The shaded region in Figure~\ref{fig:lrlx} assumes that radio and X-ray luminosities are not correlated in quiescence, and therefore represents a conservative range for where one might find V404 Cygni\ in the $\lr-\lx$ plane if considering non-simultaneous data. The mean radio flux density from our study ($\log f_\nu/{\rm mJy} = -0.53$) corresponds to $\log \lr/{\rm erg~s}^{-1}\,\approx 28$, such that V404 Cygni\ likely spends most of its time toward the bottom left of the shaded region (with the upper right region representing periods of flaring activity).
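The conversion from the flux-density range above to the shaded luminosity range uses the standard monochromatic radio luminosity, $L_R = 4\pi d^2 \nu S_\nu$ (flat-spectrum assumption), with the 2.39 kpc distance to V404 Cygni\ quoted later in the text:

```python
import math

MJY_CGS = 1e-26      # erg s^-1 cm^-2 Hz^-1 per mJy
KPC_CM = 3.086e21    # cm per kpc

def radio_luminosity(flux_mjy, nu_ghz, dist_kpc):
    """Monochromatic radio luminosity L_R = 4*pi*d^2 * nu * S_nu, as used
    when placing sources on the L_R - L_X plane."""
    d_cm = dist_kpc * KPC_CM
    return 4.0 * math.pi * d_cm**2 * (nu_ghz * 1e9) * flux_mjy * MJY_CGS

# The 0.14-1.35 mJy range at 5 GHz and d = 2.39 kpc recovers the quoted
# ~5e27 to ~5e28 erg/s luminosity range.
lum_lo = radio_luminosity(0.14, 5.0, 2.39)
lum_hi = radio_luminosity(1.35, 5.0, 2.39)
```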
\begin{figure*}
\begin{center}
\includegraphics[scale=0.6]{f9}
\caption{Radio (5 GHz) vs. X-ray (1-10 keV) luminosities for quiescent and hard state BHXB s (blue circles), quiescent and hard state neutron star (NS) X-ray binaries (black squares), accreting millisecond X-ray pulsars (AMXPs, red stars), and transitional millisecond pulsars in accretion-powered states (tMSPs, green triangles). Highlighted with large symbols are V404 Cygni\ (from \citealt{hynes04, corbel08, rana16, plotkin17}) and the tMSP PSR J1023+0038 (from \citealt{deller15, bogdanov18}). All other data points are taken from \citet{arashlrlx}. The gray shaded box illustrates the minimum and maximum luminosities displayed by V404 Cygni\ in quiescence (radio luminosities taken from this work, X-ray luminosities taken from \citealt{bernardini14}), assuming that multiwavelength variations are uncorrelated. At low X-ray luminosities the tMSP J1023\ tends to remain at least a factor of 2.5 radio fainter than V404 Cygni, but it still starts entering parameter space that could be occupied by BHXB s.}
\end{center}
\label{fig:lrlx}
\end{figure*}
After accounting for the range of flux density variations exhibited by V404 Cygni, we expect its radio luminosity to always be $\gtrsim$2.5 times larger than the radio luminosity of J1023, even when J1023\ is in a low X-ray flux (i.e., radio flaring) state. Still, J1023\ ventures very close to the parameter space expected for quiescent BHXB s, and it is easy to envision that it could overlap with a BHXB\ that has similar variability characteristics as V404 Cygni\ but at a slightly lower Eddington ratio. Although J1023\ only ventures toward the BHXB\ parameter space on short timescales $\lesssim$500 s (i.e., over longer time-averaged observations, J1023\ will generally appear radio fainter), other tMSPs could venture toward the BHXB parameter space for longer periods of time (e.g., the tMSP IGR J18245$-$2452 has displayed at least one low mode that lasted for nearly 10 hours, e.g., \citealt{linares14}). We therefore reiterate the conclusion from \citet{bogdanov18} that radio and X-ray luminosities alone are not always sufficient to confidently assert that a quiescent X-ray binary contains a black hole instead of a neutron star.\footnote{Although in some cases, e.g., a system being very radio bright or radio faint, one could reasonably disfavor or favor a neutron star from radio and X-ray luminosities.}
Other multiwavelength data should naturally also be consulted to, e.g., search for an orbital period, and to search for emission lines or other properties to give clues on the nature of the companion star and accreting compact object \citep[e.g.,][]{bahramian17, shishkovsky18, tudor18}.
Lacking detailed multiwavelength data, we assert that radio light curves could have some diagnostic power to determine if one is observing a black hole or a neutron star:
\begin{itemize}
\item \textit{frequent and relatively short radio flares that both rise and decay on rapid (minute) timescales may favor a neutron star.} \citet{bogdanov18} find radio flares that last $\lesssim$500 s, and they rise and decay rapidly (see their Figures 1, 4, and 5). These flares can occur quite frequently at times (e.g., \citealt{bogdanov18} observe as many as 3-4 radio flares per hour during some portions of their observation). For V404 Cygni, we observe less frequent flaring, and we only observe rapid flare rises. The decays tend to proceed on longer timescales of tens of minutes to hours;
\item \textit{an order of magnitude change in the quiescent radio flux density over minute-to-hour timescales might argue for a neutron star.} \citet{deller15} show light curves of J1023\ from 13 different epochs (see their Figure 5), and they find a case where the radio flux density steadily increases by an order of magnitude over $\approx$30 min. While V404 Cygni\ has also been observed to show flares with comparably slow rise times, none of those $>$30 min flares have increased in flux density by an order of magnitude.
\end{itemize}
We stress that the above assertions are predicated on observing a system that is close enough to produce radio light curves on minute timescales (e.g., V404 Cygni\ is at $2.39\pm0.14$ kpc, \citealt{miller-jones09}, and J1023\ is at $1.368^{+0.042}_{-0.039}$ kpc, \citealt{deller12}). Also, in some cases radio light curves will not hold any diagnostic power, since, particularly with shorter observations, J1023\ and V404 Cygni\ can both show time periods of low activity. Furthermore, the variability observed so far from J1023\ and from V404 Cygni\ unlikely represents the full range of variability that can be displayed by either class of systems (e.g., as evidenced by the nearly 10 hour X-ray low mode displayed by the tMSP IGR J18245$-$2452).
If one does not have access to radio light curves with minute time resolution, then taking a radio flux density integrated over long observations could provide a less ambiguous interpretation: over long time intervals, non-flaring time periods will dilute the radio flux density, thereby moving the tMSP farther away from the quiescent BHXB\ space in $\lr-\lx$. Finally, in some cases the X-ray spectrum can provide additional diagnostic power: J1023\ has an X-ray photon index of $\Gamma \approx 1.7$ in both low and high X-ray modes \citep{bogdanov18}, while quiescent BHXB s are well established to have a softer $\Gamma \approx 2.1$ in quiescence \citep{plotkin13, reynolds14}. The radio spectral index, however, is a poor discriminant, given that both V404 Cygni\ and J1023\ exhibit a range of mildly positive and negative spectral indices (\citealt{bogdanov18} report a range extending from $-0.5 \lesssim \alpha_r \lesssim 0.4$), and that $\alpha_r$ is usually not well-measured in low-luminosity sources.
\section{Summary}
\label{sec:conc}
We present archival radio observations of V404 Cygni\ spanning 24 years (1991--2015), providing the most stringent long-term constraints to date on the radio variability of a synchrotron jet from a quiescent BHXB. We find flux densities that follow a lognormal distribution, with mean and standard deviation (in $\log f_\nu/{\rm mJy}$) of $-0.53 \pm 0.19$ and $-0.53 \pm 0.30$ at 4.9 and 8.4 GHz, respectively. Factor of $>$2-4 variations are common on every observable timescale from minutes to decades. As expected, the radio spectrum of V404 Cygni\ is flat on average ($\left<\alpha_r\right> = 0.02 \pm 0.65$, where the error represents the standard deviation), but we also find that $\alpha_r$ becomes significantly negative on at least one epoch.
Over two decades of observations, V404 Cygni\ displays a flat structure function, such that the long-term flux density variations (days -- years) are consistent with either flicker noise ($P(F)\propto F^{-1}$) or white noise. On epochs when we have sufficient quality data, we observe individual flares that appear to decay within minutes to hours. These results are consistent with an interpretation of shot noise probed over a timescale that is longer than the characteristic damping time of each disturbance. We suspect that typical flare decay timescales are on the order of tens of minutes to hours, but we are formally only capable of constraining that timescale to $<$10 days (with tens of minutes a reasonable lower bound). These properties may be expected from shock instabilities traveling through a steady, compact jet. Given similar characteristics observed in the X-ray band \citep{bernardini14}, the radio variability appears consistent with scenarios where the X-ray and radio radiative processes are powered by a common energy source \citep[e.g.,][]{malzac04}.
Finally, we provide recommendations for combatting variability-induced systematics when attempting to place accreting compact objects onto the radio/X-ray luminosity plane, as is commonly done in surveys for quiescent BHXB s. We recommend that radio and X-ray observations be taken as simultaneously as possible to allow direct detections of flares during each observation. If the data are not of sufficient quality to detect individual flares, or the data are not simultaneous, then we recommend inflating error bars on radio and X-ray luminosities by 0.25 dex. If one must extrapolate radio observations to different frequencies, then ideally one would be able to measure a radio spectrum from strictly simultaneous multi-frequency observations. Otherwise, a flat radio spectrum is a reasonable approximation, except that error bars should also be adjusted according to expectations that the radio spectrum can vary ($\sigma_{\rm \alpha_r} = \pm 0.6$ is a reasonable uncertainty). Finally, we repeat warnings by \citet{bogdanov18} that some accreting neutron stars (i.e., tMSPs) can obtain radio and X-ray luminosities comparable to those achieved by quiescent BHXB s, such that other types of multiwavelength data (e.g., optical, ultraviolet, and X-ray spectra and timing) should be considered when attempting to identify the nature of accreting compact objects when mass functions are not available.
\acknowledgements
We are grateful to Robert Hjellming, who obtained many of the observations presented in this paper and who laid the foundations for understanding stellar sources at radio wavelengths. We are also grateful to Michael Rupen for obtaining a large portion of the VLA observations used in this paper. We thank the anonymous referee for helpful comments. The National Radio Astronomy Observatory is a facility of the National Science Foundation (NSF) operated under cooperative agreement by Associated Universities, Inc. This work made use of data supplied by the UK \textit{Swift} Science Data Centre at the University of Leicester. We acknowledge support from NSF grant AST-1308124. R.M.P. acknowledges support from Curtin University through the Peter Curran Memorial Fellowship. J.C.A.M.J. is supported by an Australian Research Council Future Fellowship (FT140101082). L.C. acknowledges support from NSF AST-1412549. J.S. acknowledges support from the Packard Foundation.
\facilities{VLA, VLBA}
\software{{\tt AIPS} \citep[v31DEC2014, v31DEC2015][]{greisen03}, {\tt Astropy} \citep{astropy-collaboration13}, {\tt CASA} \citep[v5.1.1][]{mcmullin07}}
\section{Introduction}
The behaviour of magnetic impurities in metals is one of the best studied
problems in condensed matter theory \cite{Hew93}.
In most cases it is a very good approximation to replace the conduction
electron density of states by a constant, as small variations of the density of states
do not lead to a qualitative change of the physical properties
(like the complete screening
of the impurity spin by the conduction electrons).
The question of whether these physical properties are different when
the impurity is coupled to a Fermi system with a power-law
density of states $\rho(\omega) \propto |\omega|^r$
near the Fermi-level was first discussed by Withoff
and Fradkin \cite{Wit90}.
A number of systems are expected to show this pseudogap density of states.
Among these are certain heavy-fermion superconductors
\cite{Sig91} where the exponent $r$ can take the values $r=1$ or $r=2$
depending on the symmetry of the gap function.
Other candidates are semiconductors whose valence
and conduction bands touch at the Fermi level \cite{Vol85}.
In quasi one-dimensional metals,
which can be viewed as realizations of the Luttinger
model, the exponent $r$ is a function of the
Coulomb interaction\cite{Dar93a} and can take
values ranging from $r \ll 1$ to $r > 1$.
Recently, the numerical renormalization group method (NRG)
\cite{Wil75,Kri80}
has been applied by Chen and Jayaprakash \cite{Che95} (referred to as CY)
and Ingersent \cite{Ing96}
to the model of an impurity spin coupled to a conduction band
with a power-law density of states. In principle, this Kondo
model can be related to a corresponding Anderson model
in the limit of $J\to 0$ via a standard Schrieffer-Wolff transformation
\cite{Sch66}.
There is, however, a transition between a strong-coupling (SC)
fixed point and a local-moment (LM) fixed point for
the Kondo model at {\it finite}
$J$ so that it is a priori not clear whether the behaviour
at this transition will be the same in the Anderson
version of the model.
The results of CY and Ingersent can be summarized as follows.
For any $J>J_{\rm c}$ the system approaches some kind of SC fixed point
with the difference to the standard Kondo model ($r=0$) that
the impurity spin is not completely screened (a residual magnetic
moment of $r/8$ always remains in the zero-temperature limit).
This can be qualitatively understood from the gradually decreasing
density of states of the conduction electrons at the
Fermi level which are responsible for the screening.
The thermodynamic quantities show non-Fermi liquid behaviour in the
SC regime
\begin{eqnarray}
\gamma(T) &=& \frac{C(T)}{T} \propto T^{-r}, \\
\chi_S(T) &=& \frac{r}{8} T^{-1} + a T^{-r}+ b T^{-2r},
\end{eqnarray}
(with $a,b =$ const.).
The critical line $J_{\rm c}(r)$ starts linearly for small $r$ but diverges
at $r=\frac{1}{2}$. In addition, Ingersent has shown that this divergence
only holds
in the particle-hole symmetric case and that a finite $J_{\rm c}$
is restored away from this symmetry.
(This reduction of $J_{\rm c}$ has implications for the
observability of the crossover in experimental situations.)
For any $J<J_{\rm c}$, the system approaches the LM fixed point
where the impurity is effectively decoupled from the
conduction band and a residual magnetic moment of $1/4$ remains.
The thermodynamics in this regime have not yet been investigated.
In this paper, we want to study the behaviour of an Anderson impurity
in a pseudo-gap fermion system where we restrict ourselves to the
symmetric case.
In Sec.\ II, we want to describe our approach to the generalization of
the NRG with a non-constant density of states, and outline
the differences to that of CY and Ingersent.
The resulting formula for the hopping matrix elements
of the semi-infinite chain for {\it all $n$} is given in Sec.\ III.
The numerical results for static properties and the spectral function
are discussed in Sec.\ IV and V, respectively.
\section{Generalization of NRG to nonconstant density of states}
The Hamiltonian we want to study in this paper is the conventional
single-impurity Anderson model
\begin{eqnarray}
H &=& \sum_{\sigma} \varepsilon_{\rm f} f^\dagger_{-1 \sigma}
f_{-1 \sigma}
+ U f^\dagger_{-1 \uparrow} f_{-1 \uparrow}
f^\dagger_{-1\downarrow} f_{-1\downarrow}
\nonumber \\
&+& \sum_{k \sigma} \varepsilon_k c^\dagger_{k\sigma} c_{k\sigma}
+ \sum_{k \sigma} V(\varepsilon_k)
\Big( f^\dagger_{-1 \sigma} c_{k \sigma}
+ c^\dagger_{k\sigma} f_{-1\sigma} \Big).
\label{eq:siam}
\end{eqnarray}
In the model (\ref{eq:siam}), $c_{k\sigma}^{(\dagger)}$ denote standard
annihilation
(creation) operators for band states with
spin $\sigma$ and energy $\varepsilon_k$,
$f_{-1,\sigma}^{(\dagger)}$
those for impurity states with spin $\sigma$ and energy $\varepsilon_{\rm f}$. The
Coulomb interaction for two electrons at the impurity site is given by $U$ and
both subsystems are coupled via an energy-dependent hybridization
$V(\varepsilon_k)$ \cite{comment}.
In the following we show that the Hamiltonian (\ref{eq:siam}) is
equivalent to a form which is more convenient for the derivation of
the NRG equations
\begin{eqnarray}
H &=& \sum_{\sigma} \varepsilon_{\rm f} f^\dagger_{-1\sigma}
f_{-1\sigma}
+ U f^\dagger_{-1 \uparrow} f_{-1 \uparrow}
f^\dagger_{-1 \downarrow} f_{-1 \downarrow}
\nonumber \\
&+& \sum_{ \sigma}\int_{-1}^1 {\rm d} \varepsilon \, g(\varepsilon)
a^\dagger_{\varepsilon \sigma} a_{\varepsilon
\sigma}\nonumber \\
&+& \sum_{ \sigma} \int_{-1}^1 {\rm d} \varepsilon \,
h(\varepsilon) \Big( f^\dagger_{-1 \sigma}
a_{\varepsilon \sigma} +
a^\dagger_{\varepsilon \sigma} f_{-1\sigma} \Big),
\label{eq:siam_cont}
\end{eqnarray}
where we introduced a one-dimensional energy
representation for the conduction band with band cut-offs at $\pm 1$,
dispersion
$g(\varepsilon)$ and hybridization $h(\varepsilon)$. The band operators fulfil the
standard fermionic anticommutation rules $\left\{a_{\varepsilon\sigma}^{\dagger},
a_{\varepsilon'\sigma'}\right\}=\delta(\varepsilon-\varepsilon')
\delta_{\sigma\sigma'}$.
To establish the equivalence of the Hamiltonians (\ref{eq:siam}) and
(\ref{eq:siam_cont})
we prove that for a specific choice of $g(\varepsilon)$ and $h(\varepsilon)$ they lead
to the same effective action for the impurity degree of freedom.
This effective
action is obtained by integrating over the conduction electron
degrees of freedom. For the Hamiltonian (\ref{eq:siam}) one gets
\begin{eqnarray}
S_{\rm eff}(\psi,\psi^\dagger)
&=& S_{\rm f}(\psi,\psi^\dagger) \nonumber \\
& & \!\!\!\!\!\!\!\!\!\!\!\! + \left( {{\beta } \over N} \right)^2
\sum_{\sigma n m} \psi^\dagger_{\sigma n+1}
\psi_{\sigma m-1} \sum_k V(\varepsilon_k)^2
G^c_{n m}(k), \label{eq:Seffk}
\end{eqnarray}
(see for example \cite{Bul94}). $n$ and $m$ count the steps on the imaginary
time axis $[0,\beta]$ with $N$ the number of steps.
$\psi$ and $\psi^\dagger$ are Grassmann numbers corresponding to the
impurity operators. $S_{\rm f}$ describes the unhybridized impurity.
The $G^c_{n m}(k)$ are Green functions for the free conduction electron
system.
The action corresponding to the Hamiltonian (\ref{eq:siam_cont})
can be written as
\begin{eqnarray}
S (\psi,\psi^\dagger,\chi,\chi^\dagger)
&=& S_{\rm f}(\psi,\psi^\dagger) \nonumber \\ & &
\hspace{-2cm} +
\sum_{\sigma n} \int_{-1}^1 {\rm d} \varepsilon \,
\chi_{\varepsilon\sigma n}^\dagger \Big(
\big( 1 - \frac{\beta}{N} g(\varepsilon) \big)
\chi_{\varepsilon\sigma n-1} - \chi_{\varepsilon\sigma n} \Big)
\nonumber \\
& & \hspace{-2cm} -
\frac{\beta}{N} \sum_{\sigma n} \int_{-1}^1 {\rm d} \varepsilon \,
h(\varepsilon) \Big[
\chi_{\varepsilon\sigma n}^\dagger \psi_{\sigma n-1} +
\psi^\dagger_{\sigma n} \chi_{\varepsilon \sigma n-1} \Big].
\end{eqnarray}
$\chi_{\varepsilon\sigma n}^\dagger $ and
$\chi_{\varepsilon \sigma n}$ are Grassmann numbers corresponding to the
conduction electron operators
$a^\dagger_{\varepsilon \sigma}$ and $a_{\varepsilon \sigma}$ .
Integrating over the conduction electron degrees of freedom leads to
\begin{eqnarray}
S_{\rm eff}(\psi,\psi^\dagger)
&=& S_{\rm f}(\psi,\psi^\dagger) \nonumber \\
& & \hspace{-1.5cm}
+\left( {{\beta} \over N} \right)^2
\sum_{\sigma n m} \psi^\dagger_{\sigma n+1}
\psi_{\sigma m-1} \int_{-1}^1 {\rm d} \varepsilon \,
h(\varepsilon)^2 G^c_{n m}(g(\varepsilon)). \nonumber \\
\label{eq:Seffeps}
\end{eqnarray}
To compare the effective actions (\ref{eq:Seffk}) and (\ref{eq:Seffeps})
the sum over $k$ in (\ref{eq:Seffk}) has to be transformed
to the energy integral
\begin{equation}
\sum_k V(\varepsilon_k)^2
G^c_{n m}(k) = \int_{-1}^1 {\rm d} \varepsilon \,
V(\varepsilon)^2 \rho(\varepsilon) G^c_{n m}(\varepsilon).
\end{equation}
This also defines the density of states for the free conduction electrons
$\rho(\varepsilon)$.
The equivalence of the effective actions (\ref{eq:Seffk}) and
(\ref{eq:Seffeps})
leads to the condition
\begin{equation}
\int_{-1}^1 {\rm d} g \frac{\partial\varepsilon(g)}{\partial g}
h(\varepsilon(g))^2
G^c_{n m}(g) \equiv
\int_{-1}^1 {\rm d} \varepsilon \,
V(\varepsilon)^2 \rho(\varepsilon) G^c_{n m}(\varepsilon)
.
\end{equation}
This can only be fulfilled for
\begin{equation}
\frac{\partial \varepsilon (x)}{\partial x}
h(\varepsilon(x))^2 = V(x)^2 \rho(x), \label{eq:diffeq}
\end{equation}
(with $\varepsilon(x)$ the inverse of $g(\varepsilon)$).
For a given $\Delta(x) \equiv \pi V(x)^2 \rho(x)$ there are obviously many ways of
dividing the energy dependence between $\varepsilon(x)$ and the dispersion
$h(\varepsilon(x))$.
One possibility is to choose
\begin{equation}
g(\varepsilon) = \varepsilon \ \ \ \ {\rm and} \ \ \ \ \
h(\varepsilon)^2 = \frac{1}{\pi}\Delta(\varepsilon). \label{eq:pos1}
\end{equation}
For $\Delta(\varepsilon) = \Delta$ eq.\ (\ref{eq:pos1}) corresponds to the
standard case (see eq.\ (2.4) in \cite{Kri80}).
It might also be convenient to set $h(\varepsilon)=h$.
Together with the condition $\varepsilon(-1)=-1$ and $\varepsilon(1)=1$
this leads to
\begin{displaymath}
\varepsilon(g) = -1 + \frac{1}{\pi h^2}\int_{-1}^{g} {\rm d} x \Delta(x)
\ \ \ \ {\rm and}
\end{displaymath}
\begin{equation}
h^2=\frac{1}{2\pi}\int_{-1}^{1} {\rm d} \varepsilon \Delta(\varepsilon)
.
\label{eq:pos2}
\end{equation}
These equations also reduce to $\varepsilon(g) = g$ and $h^2 = \frac{1}{\pi}\Delta$
for a constant $\Delta(\varepsilon) = \Delta$.
Equations (\ref{eq:pos1}) and (\ref{eq:pos2}) have already been derived by
CY \cite{Che95a}. In a subsequent publication \cite{Che95}
these authors use eq.\ (\ref{eq:pos2}) for the mapping of the Kondo model
on a semi-infinite chain (see Appendix A for a discussion of the resulting
hopping matrix elements).
The first possibility eq.\ (\ref{eq:pos1}) has a conceptual disadvantage
arising from the logarithmic discretization of the conduction band.
Within each interval $[x_{n+1},x_n]$ and $[-x_{n},-x_{n+1}]$, with $x_n=\Lambda^{-n}$,
the conduction electron operators are expressed in terms of a Fourier
expansion. As long as $h(\varepsilon)^2$ is constant in each interval, the impurity
couples only to the average component ($p\!=\!0$) of the conduction electrons.
Therefore it is reasonable to neglect all the $p\!\ne\!0$-states (this becomes exact in
the limit $\Lambda\! \to\! 1$). This line of reasoning obviously does not
hold for eq.\ (\ref{eq:pos1}).
On the other hand, the energy dependence of $\Delta(\varepsilon)$ can be
taken into account in the hybridization by defining $h(\varepsilon)^2$
as the mean value
\begin{equation}
{h^{\pm}_n}^2 = \frac{1}{d_n} \int^{\pm} {\rm d} \varepsilon
\frac{1}{\pi}\Delta(\varepsilon), \label{eq:h_mean}
\end{equation}
\begin{equation}
\int^{+} {\rm d} \varepsilon \equiv
\int_{x_{n+1}}^{x_n} {\rm d} \varepsilon, \quad
\int^{-} {\rm d} \varepsilon \equiv
\int_{-x_n}^{-x_{n+1}} {\rm d} \varepsilon,
\end{equation}
(with $d_n = x_n-x_{n+1}$) in each interval of the logarithmic
discretization.
This is so far not an approximation as the remaining energy dependence
will be incorporated in the dispersion.
The advantage of an energy dependent hybridization as in eq.\ (\ref{eq:h_mean}) is that the
resulting dispersion has the form $g(\pm x_n) = \pm x_n$ for all $n$,
i.e.\ at all points $x_n$ of the logarithmic discretization.
This ``linear'' form
(for intermediate values $g(\varepsilon)\! =\! \varepsilon$ is not fulfilled) leads to
a scaling behaviour of the hopping matrix elements (see eq.\ (\ref{eq:H_with_t_n}))
of the form $t_n \propto \Lambda^{-n/2}$, slightly modified due to the structure
of $\Delta(\varepsilon)$. The representation eq.\ (\ref{eq:pos2}), however, leads to
a scaling with an effective $\Lambda_{\rm eff}$ not equal to $\Lambda$ which might
even depend on the number of iterations thus making the analyses (of the fixed points,
the relevant energy scale, etc.) more difficult.
For these reasons, we take the representation eq.\ (\ref{eq:h_mean}) in the following.
This gives for the hybridization part of the discretized Hamiltonian
\begin{equation}
H_{\rm hyb} = \sqrt{\frac{\xi_0}{\pi}} \left[
f^\dagger_{-1\sigma}f_{0\sigma} +
f^\dagger_{0\sigma}f_{-1\sigma} \right],
\end{equation}
with
\begin{equation}
f_{0\sigma} = \frac{1}{\sqrt{\xi_0}} \sum_n \left[
\gamma_n^+ a_{n\sigma} + \gamma_n^- b_{n\sigma} \right],
\end{equation}
\begin{equation}
\xi_0 = \sum_n \left( (\gamma_n^+)^2 +(\gamma_n^-)^2 \right)
= \int_{-1}^1 {\rm d} \varepsilon \Delta(\varepsilon),
\end{equation}
\begin{equation}
(\gamma_n^\pm)^2 = \int^{\pm} {\rm d} \varepsilon \,
\Delta(\varepsilon).
\end{equation}
The discrete conduction electron operators $a_{n\sigma}$ ($b_{n\sigma}$)
for positive (negative) $\varepsilon$ correspond to those introduced in
\cite{Wil75,Kri80}.
According to the differential equation (\ref{eq:diffeq}) we would now have to solve for
$\varepsilon(x)$ and invert $\varepsilon(x)$ to obtain the dispersion
$x(\varepsilon) \equiv g(\varepsilon)$.
This is actually not necessary because the single-particle energies
in the conduction electron part of the
discretized Hamiltonian
\begin{equation}
H_{\rm c} = \sum_{n\sigma} \left[ \xi_n^+ a^\dagger_{n\sigma} a_{n\sigma}
+ \xi_n^- b^\dagger_{n\sigma} b_{n\sigma} \right]
\end{equation}
only depend on the integral over $g(\varepsilon)$
\begin{equation}
\xi_n^\pm = \frac{1}{d_n} \int^{\pm} {\rm d} \varepsilon
\, g(\varepsilon).
\end{equation}
It can be shown that the discrete energies $\xi_n^\pm$ are
given by
\begin{equation}
\xi_n^\pm = \frac{\int^\pm {\rm d} \varepsilon \Delta(\varepsilon) \varepsilon}{
\int^\pm {\rm d} \varepsilon \Delta(\varepsilon) }
.
\end{equation}
This equation, together with the form of the hybridization part, has already
been used by Sakai et al.\ \cite{Sak94}, although no derivation was given in
their article.
The discretized Hamiltonian for the single-impurity Anderson model
now takes the form
\begin{eqnarray}
H &=& \sum_{\sigma} \varepsilon_{\rm f} f^\dagger_{-1\sigma}
f_{-1\sigma}
+ U f^\dagger_{-1 \uparrow} f_{-1 \uparrow}
f^\dagger_{-1 \downarrow} f_{-1 \downarrow}
\nonumber \\
&+& \sum_{n\sigma} \left[
\xi_n^+
a^\dagger_{n\sigma} a_{n\sigma} +
\xi_n^-
b^\dagger_{n\sigma} b_{n\sigma} \right] \nonumber \\
&+& \sqrt{\frac{\xi_0}{\pi}} \left[
f^\dagger_{-1\sigma}f_{0\sigma} +
f^\dagger_{0\sigma}f_{-1\sigma} \right].
\label{eq:Hdisc}
\end{eqnarray}
\section{Pseudogap density of states --- Mapping on semi-infinite chain}
We now consider a $\Delta(\omega)$ of the form
\begin{equation}
\Delta(\omega)=\Delta_0 |\omega|^r, \ \ \ \ -1\le \omega \le 1
.
\end{equation}
The discrete energies $\xi_n^\pm$ of the conduction electrons
and the hybridization matrix elements $\gamma_n^\pm$
between impurity and the conduction electrons
take the form
\begin{equation}
\xi_n^+ = - \xi_n^- = \frac{r+1}{r+2}
\frac{1 -\Lambda^{-(r+2)}}{1-\Lambda^{-(r+1)}}
\Lambda^{-n} \label{eq:xi}
\end{equation}
and
\begin{equation}
\left( \gamma_n^+ \right)^2 =
\left( \gamma_n^- \right)^2 = \frac{\Delta_0 }{r+1} \Lambda^{-n(r+1)}
\left( 1 - \Lambda^{-(r+1)} \right).\label{eq:gamma}
\end{equation}
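As a cross-check (not part of the original derivation), eq.\ (\ref{eq:xi}) can be verified numerically against the defining $\Delta$-weighted average of the energy over each discretization interval. The following Python sketch uses the antiderivatives of $\varepsilon^{r}$ and $\varepsilon^{r+1}$; the parameter values used below are arbitrary illustrations.

```python
def xi_plus(n, r, Lam):
    """Closed form of eq. (xi): xi_n^+ for Delta(eps) = Delta_0 |eps|^r."""
    return ((r + 1) / (r + 2)
            * (1 - Lam**(-(r + 2))) / (1 - Lam**(-(r + 1)))
            * Lam**(-n))

def xi_from_average(n, r, Lam):
    """xi_n^+ as the Delta-weighted mean energy over [Lam^-(n+1), Lam^-n],
    using the antiderivatives of eps^(r+1) and eps^r."""
    a, b = Lam**(-(n + 1)), Lam**(-n)
    num = (b**(r + 2) - a**(r + 2)) / (r + 2)   # integral of eps^(r+1)
    den = (b**(r + 1) - a**(r + 1)) / (r + 1)   # integral of eps^r
    return num / den
```

The prefactor $\Delta_0$ cancels in the ratio, so it does not appear in the sketch.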
The mapping of the discretized Hamiltonian (\ref{eq:Hdisc})
onto the semi-infinite chain form
\begin{eqnarray}
H &=& \sum_{\sigma} \varepsilon_{\rm f} f^\dagger_{-1\sigma}
f_{-1\sigma}
+ U f^\dagger_{-1 \uparrow} f_{-1 \uparrow}
f^\dagger_{-1 \downarrow} f_{-1 \downarrow}
\nonumber \\
&+& \sum_{\sigma n=0}^\infty t_n \left[
f^\dagger_{n\sigma}f_{n+1\sigma} +
f^\dagger_{n+1\sigma}f_{n\sigma}
\right] \label{eq:Hsemiinf} \\
&+& \sqrt{\frac{\xi_0}{\pi}} \left[
f^\dagger_{-1\sigma}f_{0\sigma} +
f^\dagger_{0\sigma}f_{-1\sigma} \right], \label{eq:H_with_t_n}
\end{eqnarray}
($\xi_0=\frac{2\Delta_0 }{r+1}$) is described in \cite{Wil75} and \cite{Kri80}.
The only difference appearing here is the $r$-dependence of the
$\xi_n^\pm$ and $\gamma_n^\pm$.
(Note that in the non-symmetric case additional terms of the form
$\varepsilon_n f^\dagger_{n\sigma}f_{n\sigma}$ are generated.)
For the hopping matrix elements $t_n$ we find the following expressions:
\begin{eqnarray}
t_n &=& \Lambda^{-n/2} \,\frac{r+1}{r+2} \,
\frac{1-\Lambda^{-(r+2)}}{1-\Lambda^{-(r+1)}}
\left[ 1 - \Lambda^{-(n+r+1)} \right]\nonumber \\
&\times&
\left[ 1 - \Lambda^{-(2n+r+1)} \right]^{-1/2}
\left[ 1 - \Lambda^{-(2n+r+3)} \right]^{-1/2} \label{eq:tneven}
\end{eqnarray}
for even $n$ and
\begin{eqnarray}
t_n &=&
\Lambda^{-(n+r)/2} \,\frac{r+1}{r+2} \,
\frac{1-\Lambda^{-(r+2)}}{1-\Lambda^{-(r+1)}}
\left[ 1 - \Lambda^{-(n+1)} \right]\nonumber \\
&\times&
\left[ 1 - \Lambda^{-(2n+r+1)} \right]^{-1/2}
\left[ 1 - \Lambda^{-(2n+r+3)} \right]^{-1/2} \label{eq:tnodd}
\end{eqnarray}
for odd $n$.
The equations (\ref{eq:tneven}) and (\ref{eq:tnodd}) have been verified
numerically and by analytical calculation of $t_0$ and $t_1$.
In the limit $n\to \infty$ (\ref{eq:tneven}) and (\ref{eq:tnodd})
reduce to
\begin{equation}
t_n \stackrel{n\to\infty}{\longrightarrow}
\frac{r+1}{r+2} \,
\frac{1-\Lambda^{-(r+2)}}{1-\Lambda^{-(r+1)}}
\Lambda^{-n/2} \left\{
\begin{array}{lcl}
1 &:& n\ {\rm even}\\
\Lambda^{-r/2} &:& n\ {\rm odd}
\end{array}
\right. . \label{eq:tnred}
\end{equation}
This limit of the hopping matrix elements has also been found
by Ingersent \cite{Ing96} although the formula for {\it all} $n$ is
not given in his paper.
The result obtained by CY is discussed in the appendix.
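For reference, eqs.\ (\ref{eq:tneven}) and (\ref{eq:tnodd}) are straightforward to evaluate numerically. The Python sketch below is our own illustration (the parameter values are arbitrary); it implements both branches and can be used to check the asymptotic form (\ref{eq:tnred}).

```python
def t_n(n, r, Lam):
    """Hopping matrix elements of eqs. (tneven) and (tnodd) for
    Delta(omega) = Delta_0 |omega|^r with discretization parameter Lam."""
    A = (r + 1) / (r + 2) * (1 - Lam**(-(r + 2))) / (1 - Lam**(-(r + 1)))
    tail = ((1 - Lam**(-(2*n + r + 1)))**-0.5
            * (1 - Lam**(-(2*n + r + 3)))**-0.5)
    if n % 2 == 0:                                   # even n, eq. (tneven)
        return Lam**(-n / 2) * A * (1 - Lam**(-(n + r + 1))) * tail
    return Lam**(-(n + r) / 2) * A * (1 - Lam**(-(n + 1))) * tail  # odd n
```

For $r=0$ both branches coincide with the standard Wilson-chain result, and for large $n$ the elements scale as $\Lambda^{-n/2}$ with the even/odd modulation of eq.\ (\ref{eq:tnred}).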
An analytical form of the $t_n$ for all $n\ge0$ can only be given when the power law
$\Delta(\omega)=\Delta_0 |\omega|^r$ extends to the band edges. In any
experimental realization, however, we expect this power law only to
be valid near the Fermi level. On the other hand, numerical studies show that
any deviation from this form close to the band edges merely affects the
first coefficients, while the asymptotic behaviour again depends on $r$ only
and is given by eq.\ (\ref{eq:tnred}). Thus the qualitative behaviour near the possible
low temperature fixed points is not affected by the exact
form of $\Delta(\omega)$ away from the Fermi level.
\section{Results for static properties}
The Hamiltonian (\ref{eq:Hsemiinf}) is solved with the NRG
for the parameters $\varepsilon_{\rm f} = -U/2 = 10^{-3}$,
$\Lambda = 2.5$ and different values for $r$ and $\Delta_0$.
At each iteration step we keep $\approx 500$ states which
is sufficient for the calculation of thermodynamic properties.
We first want to discuss the phase-diagram of Fig.\ 1
where we have plotted the critical value $\Delta_{\rm c}$ versus $r$.
For any $\Delta_0 > \Delta_{\rm c}$ the system flows to a strong-coupling
fixed point (SC) similar to the fixed point in the standard case
\cite{Kri80}. The energy spectrum at this fixed point
can be explained by removing the first conduction electron site
from the chain due to its strong coupling to the impurity.
The remaining chain, however, has a different structure as compared
to the $r\!=\!0$ case. Therefore this SC fixed point does not have the Fermi liquid
properties of the standard single-impurity Anderson model (see below).
For $\Delta_0 < \Delta_{\rm c}$ the system always flows to the local-moment
fixed point (LM) with the impurity effectively decoupled from the
conduction band. Again, the resulting energy levels are in agreement
with those of the free conduction electron chain.
For both the Kondo model and the Anderson model $\Delta_{\rm c} (r)$
diverges at $r\!=\!\frac{1}{2}$
and we find for the Anderson model a logarithmic divergence
\begin{equation}
\Delta_{\rm c,A} (r) \propto -\ln \left( \frac{1}{2} -r \right)
.
\end{equation}
However, the behaviour of $\Delta_{\rm c} (r)$
for $0\!<\!r\!<\!\frac{1}{2}$ is quite different for both models.
Ingersent finds an extended linear region $\Delta_{\rm c} (r)\propto r$
which is approximately valid up to values of $r=0.4$
(see inset of Fig.\ 1).
In our case, $\Delta_{\rm c} (r)$ also starts linearly and is
in agreement with the result for the Kondo model up to
$r\approx 0.02$, but increases far more rapidly for larger $r$.
The difference from \cite{Ing96} is mainly due to the fact that for the
parameters used here, the f-level lies within the pseudogap density of
states. Under the assumption that the relevant coupling $\Delta^\prime$ for
this problem is (approximately) the value $\Delta(\omega=\varepsilon_{\rm
f})$ we have the exponential dependence
\begin{equation}
\Delta^\prime(r) \approx \Delta_0 |\varepsilon_{\rm f}|^r =
\Delta_0 e^{r\ln |\varepsilon_{\rm f}|} \label{eq:Delta^prime}.
\end{equation}
As $\ln |\varepsilon_{\rm f}|$ has a large negative value, $\Delta^\prime(r)$
is strongly suppressed for increasing $r$ so that a much larger $\Delta_0$
is needed to reach the strong coupling fixed point. This increase of the
parameter regime in which local moment formation is observed has also been
found by Gonzalez-Buxton and Ingersent \cite{Gon96} who applied a poor man's
scaling approach to the Anderson version of the pseudogap problem.
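The size of this suppression is easily made explicit. The short Python sketch below (our own illustration; the chosen values of $r$ are arbitrary) evaluates eq.\ (\ref{eq:Delta^prime}) for the f-level $\varepsilon_{\rm f}=-10^{-3}$ used in this work.

```python
eps_f = -1e-3                       # f-level used in this work
for r in (0.1, 0.25, 0.4):          # illustrative exponents
    # Delta'/Delta_0 = |eps_f|^r = exp(r * ln|eps_f|), with ln|eps_f| ~ -6.9
    print(f"r = {r:4.2f}:  Delta'/Delta_0 = {abs(eps_f)**r:.3f}")
```

Already at $r=0.4$ the effective coupling is reduced by more than an order of magnitude ($10^{-1.2}\approx 0.06$), which makes the rapid growth of $\Delta_{\rm c,A}(r)$ plausible.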
To show that eq.\ (\ref{eq:Delta^prime}) basically explains the difference
between the Kondo model and the Anderson model, we have plotted in the
inset of Fig.\ 1 both $\Delta_{\rm c,K}(r)$ for the Kondo model and
$\Delta_{\rm c,A}^\prime (r) =\Delta_{\rm c,A}(r) \cdot \exp (-7.9 \cdot r)$
(the value 7.9 was chosen in order to fit $\Delta_{\rm c,A}^\prime (r)$
to $\Delta_{\rm c,K}(r)$).
The linear region of $\Delta_{\rm c,A}^\prime (r)$ now extends to
$r\approx 0.4$.
The remaining difference between $\Delta_{\rm c,K}(r)$ and
$\Delta_{\rm c,A}(r)$ is due to the fact that
the Kondo model and the Anderson model
are related via the Schrieffer-Wolff transformation \cite{Sch66}
only in the limit $J\to 0$ (corresponding to $V^2/U \to 0$).
Therefore, the agreement of the results for both models is
only guaranteed for $\Delta \to 0$. Away
from the line $\Delta = 0$, there is no exact mapping between the
Kondo version and the Anderson version.
The critical coupling $\Delta_{\rm c} (r)$ is determined as
follows. Fig.\ 2 shows the temperature dependence of the effective
magnetic moment for $U=0.001$, $\varepsilon_{\rm f}=-U/2$, $r=0.48$
and different values
of $\Delta$. In this graph, the LM fixed point (characterized by
$\mu_{\rm res} \equiv \mu_{\rm eff}(T\to 0) = 1/4$) is reached within the
given temperature range for
$\Delta = 0.01$ and $\Delta = 0.02$. The value $\mu_{\rm res} = r/8 = 0.06$
corresponding to the SC fixed point is clearly approached for
$\Delta = 0.16$ while for $\Delta = 0.04$ this value should be reached at a
much lower temperature.
From Fig.\ 2 we determine $\Delta_{\rm c} (r=0.48)$ as
$\approx 0.03$ and repeat this procedure for different values of $r$.
Similar results for $\mu_{\rm eff} (T)$ have been obtained by
Ingersent and CY.
The value $\mu_{\rm res} = r/8$ at the SC fixed point can also be
derived directly from the semi-infinite chain form of the
free conduction electron chain at this fixed point.
One simply has to compare the effective magnetic moment for the system
with and without the first conduction electron site.
The temperature dependence of the specific heat coefficient
$\gamma(T)=C(T)/T$ in the SC regime
is shown in Fig.~3. The low temperature behaviour of $\gamma(T)$
is described by a power law of the form
\begin{equation}
\gamma(T) = c_1 T^{-r} + c_2 T^{-2r} . \label{eq:gamma2}
\end{equation}
Although eq.\ (\ref{eq:gamma2}) resembles an expansion in
$T^{-r}$, there cannot be any terms like $T^{-3r}$, $T^{-4r}$ etc.\
as the corresponding entropy would then diverge for $T\to 0 $
(for e.g.\ $r\!=\!0.4$).
The exponent $\alpha$ defined by $\gamma(T) \propto T^\alpha$
is shown in the inset of Fig.\ 3. In an intermediate temperature regime,
$\alpha $ approaches the value $-r$ consistent with the result
of CY.
However, for lower temperatures another term with the exponent
$\alpha = -2r$ is dominating.
This term is strongly suppressed in the intermediate regime
due to $c_2 \ll c_1$.
This crossover from the $T^{-r}$ to the $T^{-2r}$
behaviour is {\it not} due to a crossover
to a new low temperature fixed point.
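This crossover can be illustrated by the logarithmic derivative $\alpha(T)={\rm d}\ln\gamma/{\rm d}\ln T$ of eq.\ (\ref{eq:gamma2}). In the Python sketch below the coefficients $c_1$ and $c_2$ are illustrative choices (only the hierarchy $c_2\ll c_1$ matters), not fitted values.

```python
def alpha_eff(T, r, c1, c2):
    """d ln(gamma)/d ln(T) for gamma(T) = c1*T**(-r) + c2*T**(-2r)."""
    g1, g2 = c1 * T**(-r), c2 * T**(-2 * r)
    return -(r * g1 + 2 * r * g2) / (g1 + g2)

r, c1, c2 = 0.4, 1.0, 1e-3       # illustrative values with c2 << c1
T_star = (c2 / c1)**(1 / r)      # scale where both terms are equal
# alpha ~ -r well above T_star and alpha ~ -2r well below it
```

Well above $T^{*}$ one finds $\alpha\approx -r$, well below it $\alpha\approx -2r$; the crossover is thus a property of eq.\ (\ref{eq:gamma2}) itself and not a signature of a new fixed point.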
For the spin susceptibility we confirm the result given by CY:
\begin{equation}
\chi(T)= \frac{r}{8}
T^{-1} + c_1^\prime T^{-r} + c_2^\prime T^{-2r}.
\end{equation}
In the LM regime we find
\begin{equation}
\gamma(T) = c_3 T^{r-1}
\end{equation}
and
\begin{equation}
\chi(T) = \frac{1}{4}T^{-1} + c_3^\prime T^{r-1} .
\end{equation}
\section{Results for the spectral function}
The impurity spectral function
\begin{eqnarray}
A(\omega) &=& \frac{1}{Z}\sum_{nm}
\bigg\vert \Big< n \Big\vert f^\dagger_{-1\sigma}
\Big\vert m \Big>
\bigg\vert^2
\delta \big( \omega -(E_{n} -E_{m}) \big)
\nonumber \\
& & \hspace{1cm} \times \left( e^{-\beta E_m} + e^{-\beta E_n} \right),
\label{eq:Ageneral}
\end{eqnarray}
(with the partition function $Z\!=\!\sum_m \exp(-\beta E_m)$),
has not yet been calculated in the previous papers on the pseudogap
problem.
We assume that the groundstate energy $E_g$ is set to zero and
concentrate on the zero-temperature limit, in which the
spectral function takes the form
\begin{eqnarray}
A(\omega) &=& \frac{1}{Z} \Big\{ 2
\sum_{n_g m_g}
\bigg\vert \Big< n_g \Big\vert f^\dagger_{-1\sigma}
\Big\vert m_g \Big> \bigg\vert^2 \delta(\omega)
\nonumber \\
& & \hspace{1cm} + \sum_{n_g m_e}
\bigg\vert \Big< n_g \Big\vert f^\dagger_{-1\sigma}
\Big\vert m_e \Big> \bigg\vert^2 \delta(\omega+E_{m_e})
\nonumber \\
& & \hspace{1cm} + \sum_{n_e m_g}
\bigg\vert \Big< n_e \Big\vert f^\dagger_{-1\sigma}
\Big\vert m_g \Big> \bigg\vert^2 \delta(\omega-E_{n_e}) \Big\}.
\label{eq:Azero}
\end{eqnarray}
Here, the partition function $Z$ equals
the total degeneracy of the groundstate. The $n_g,m_g$ label all states
with energy $E\!=\!E_g\!=\!0$ and the $n_e,m_e$ label the excited states.
The first term in eq.\ (\ref{eq:Azero}) would correspond to a transition
between different states with $E\!=\!0$,
but
such a term (resulting in a $\delta$-function at the Fermi level) is
not present in the NRG results.
There is one state with excitation energy $E_{\rm ex} \!\to\! 0$ for
$N\to\infty$ but its matrix element with the ground state
vanishes as $N\to\infty$.
In order to obtain the full frequency dependence of the spectral function
within the NRG, it is necessary to combine the information of all iteration
steps as in each iteration the results are only given for
a certain frequency range (see also \cite{Cos94,Sak89}).
In Fig.\ 4 we show results for the spectral function
for $r\!=\!0.25$, $\Delta > \Delta_{\rm c}$ (solid line, SC regime),
$r\!=\!0.25$, $\Delta < \Delta_{\rm c}$ (dotted line, LM regime)
and $r\!=\!0.75$ (dashed line, LM regime). For these calculations we used
$\Lambda = 2$ and kept $\approx800$ states at each iteration.
We find that $A(\omega)$ diverges as $|\omega|^{-r}$ for $\omega\to 0$
for any set of parameters which lies in the SC regime.
Note that this result suggests the conventional behaviour
$A(\omega)\sim 1/(\pi\Delta(\omega))$ as $\omega\to0$ for the SC case. Together
with the result $\gamma(T)\sim T^{-r}$ (neglecting the second term in (33) for
the moment) one could be tempted to interpret these results within a standard
Fermi liquid approach. Let us however emphasize that
in spite of these results the system is not a Fermi liquid for any $r>0$.
This observation becomes more evident in the LM regime, where the behaviour of
the spectral function is qualitatively different. Namely, in contrast to
the SC case we find that
the spectral function vanishes as $A(\omega)\propto |\omega|^{r}$ here.
In addition, no qualitative difference, apart from the exponent, can be observed between
the cases $r\!>\!0.5$ and $r\!<\!0.5$.
\section{Summary}
To summarize, we have studied the problem of an Anderson impurity in a
pseudo-gap Fermi system at particle-hole symmetry using a generalization of the
numerical renormalization group method \cite{Wil75,Kri80}.
We find a behaviour similar to that of the corresponding Kondo model
investigated by CY \cite{Che95} and Ingersent \cite{Ing96}.
However, the critical line $\Delta_{\rm c}$ separating the strong-coupling
and local-moment regimes of these two models shows a quite different form
between $r\!=\!0$ (where $\Delta_{\rm c}$ starts linearly) and
$r\!=\!1/2$ (where $\Delta_{\rm c}$ diverges). This difference
is mainly due to the fact that we have chosen the f-level
to lie within the pseudogap.
In both the strong-coupling and local-moment regimes the thermodynamic
quantities specific heat and spin susceptibility show power-law behaviour.
We also presented the first calculations of the impurity spectral function for
this model. We find $A(\omega)\propto |\omega|^{-r}$ in the strong-coupling
regime and $A(\omega)\propto |\omega|^r$ in the local-moment regime.
We do not find any indication that the local moment fixed points for
$r\!<\!1/2$ and $r\!>\!1/2$ are different.
As shown by Ingersent for the Kondo model, the critical values
$\Delta_{\rm c}$ take finite values as soon as particle-hole symmetry is
violated. It is of course interesting to see whether this reduction of
the critical coupling is the same for the Anderson model (work on this
problem is in progress).
Another interesting question is the relevance of the model studied here in the
context of the dynamical mean field theory (for recent reviews see
\cite{Pru95,Geo96}). The effective single impurity
Anderson model appearing in the dynamical mean field
theory is coupled to a (self-consistently
determined) effective medium. There is a possibility that the density of
states corresponding to this effective medium develops a pseudo-gap structure
under certain conditions (e.g.\ near the metal-insulator transition).
Also, the density of states of the infinite dimensional generalization
of the honeycomb ($d\!=\!2$) and diamond ($d\!=\!3$) lattices is
proportional to $|\omega|$ near the Fermi level.
We wish to thank J.\ Keller and G.\ M.\ Zhang
for a number of stimulating discussions. One of us (R.B.) was supported
by a grant from the Deutsche Forschungsgemeinschaft, grant No.\ Bu965-1/1.
\section{Introduction}
A nonempty finite set $C \subset \mathbb{S}^{n-1}$ is called a {\em spherical code}.
The geometry of spherical codes is related to the properties of the Gegenbauer polynomials \cite{Sze}; we consider their
normalized version $\{P_i^{(n)}(t)\}_{i=0}^\infty$ satisfying the following three-term recurrence relation
\[ (i+n-2)P_{i+1}^{(n)}(t)=(2i+n-2)tP_i^{(n)}(t)-iP_{i-1}^{(n)}(t), \]
$i=1,2,\ldots$, with initial conditions $P_0^{(n)}(t)=1$ and $P_1^{(n)}(t)=t$.
Given a code $C \subset \mathbb{S}^{n-1}$, the quantities
\begin{equation}\label{Mk0}
M_i(C):=\sum_{x,y\in C} P^{(n)}_i(\langle x , y \rangle )=|C|+\sum_{x,y\in C, x \neq y} P^{(n)}_i(\langle x , y \rangle ), \ i \geq 1 \end{equation}
are called {\it moments} of $C$. Here $\langle x,y \rangle $ is the usual inner product of $x,y \in \mathbb{S}^{n-1}$.
The well known positive definiteness of the Gegenbauer polynomials \cite{Sch1942} implies that $M_i(C) \geq 0$ for every $i \geq 1$.
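As a small illustration (the example is ours, not part of the development below), the moments (\ref{Mk0}) can be computed directly from the three-term recurrence. The Python sketch verifies that the six vertices of the cross-polytope (octahedron) in $\mathbb{S}^{2}$ satisfy $M_1=M_2=M_3=0$ and $M_4>0$.

```python
import itertools

def P(i, n, t):
    """Normalized Gegenbauer polynomial P_i^{(n)}(t), P_i^{(n)}(1) = 1,
    evaluated via the three-term recurrence."""
    p_prev, p = 1.0, t            # P_0 and P_1
    if i == 0:
        return p_prev
    for j in range(1, i):
        p_prev, p = p, ((2*j + n - 2) * t * p - j * p_prev) / (j + n - 2)
    return p

def moment(code, i, n):
    """M_i(C): sum of P_i^{(n)}(<x, y>) over all ordered pairs x, y in C."""
    return sum(P(i, n, sum(a * b for a, b in zip(x, y)))
               for x, y in itertools.product(code, repeat=2))

# Vertices of the octahedron (cross-polytope) on S^2
octa = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
```

One finds $M_1=M_2=M_3=0$ and $M_4=21$ for this code, so the octahedron is a spherical $3$-design and, in particular, a $(1,1)$-design, but not a $4$-design.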
The case of equality (for some indices $i$) is quite important. The concept of spherical $T$-designs was introduced by Delsarte and Seidel \cite{DS89} in 1989.
\begin{definition} \label{T-designs} \cite{BBTZ17} Let $T$ be a finite set of positive integers. A spherical code $C \subset \mathbb{S}^{n-1}$
is called a spherical $T$-design if $M_i(C)=0 \mbox{ for all } i \in T$.
\end{definition}
The classical case $T=\{1,2,\ldots,m\}$ leads to the spherical $m$-designs introduced by Delsarte, Goethals and Seidel \cite{DGS} in 1977
(see also \cite{Lev-chapter}). The case of $T$ consisting of even integers was considered by Bannai et al.\ in \cite[Section 6.1]{BBTZ17}
(see also \cite{BOT,DS89,ZBBKY17}). In this paper we consider $T$ consisting of several consecutive even integers $2,4,\ldots$
(see \cite{DS89,KP10,Wal-book}).
\begin{definition} Let $k$ be a positive integer. The set $C \subset \mathbb{S}^{n-1}$ is called a spherical $(k,k)$-design if
$M_{2i}(C)=0$ for every $i=1,2,\ldots,k$.
\end{definition}
It seems that spherical $(k,k)$-designs were first considered in \cite{KP10} (called semi-designs there).
Recently, a theory was developed (see \cite{Wal-book} and references therein) and relations to tight frames (i.e., $(1,1)$-designs)
were investigated. However, to the best of our knowledge, linear programming for spherical $(k,k)$-designs has not been developed yet.
In Section 2 we formulate three main problems that can be attacked by linear programming. General linear programming
bounds are derived in Section 3. Section 4 is devoted to a universal lower bound on the minimum possible cardinality of
$(k,k)$-designs for fixed $n$ and $k$, and to its optimality. In Section 5 we show some examples and classification results
for codes attaining the universal bound.
\section{Cardinality and energy problems for spherical $(k,k)$-designs}
Designs are, in a general sense, good approximations of the space they live in. Thus it is natural to look for designs with as few
points as possible, and we are interested in the quantity
\[ \mathcal{M}(n,k):=\min \{ |C|: C \subset \mathbb{S}^{n-1} \mbox{ is a $(k,k)$-design}\}, \]
the minimum possible cardinality of a $(k,k)$-design in $\mathbb{S}^{n-1}$.
Recently, the energy of spherical designs was recognized as an important and interesting characteristic (see \cite{BDHSS15,GS19} and references therein).
Spherical designs appear to be energy efficient; i.e., the upper and lower bounds for their energy are often close to each other. Thus it
is natural to consider energy problems for spherical $(k,k)$-designs.
\begin{definition}
Given a (potential) function $h(t):[-1,1] \to [0,+\infty]$ and a code $C \subset \mathbb{S}^{n-1}$, the {\em $h$-energy} of $C$ is
\[ E_h(C):=\sum_{x, y \in C, x \neq y} h(\langle x,y \rangle). \]
\end{definition}
Therefore, we are also interested in the minimum and maximum possible $h$-energy of a $(k,k)$-design in $\mathbb{S}^{n-1}$ with given cardinality; i.e., in the quantities
\[ \mathcal{L}_h(n,k,M):=\min \{E_h(C): C \subset \mathbb{S}^{n-1} \mbox{ is a $(k,k)$-design}, |C|=M \}, \]
and
\[ \mathcal{U}_h(n,k,M):=\max \{E_h(C): C \subset \mathbb{S}^{n-1} \mbox{ is a $(k,k)$-design}, |C|=M \}. \]
We introduce a general linear programming framework for bounding the quantities $\mathcal{M}(n,k)$, $\mathcal{L}_h(n,k,M)$,
and $\mathcal{U}_h(n,k,M)$. We then derive a universal (in the sense of Levenshtein) bound for $\mathcal{M}(n,k)$;
our derivation allows us to investigate the optimality of the bounds and the designs which (if they exist) would attain them.
Universal bounds for the energy quantities $\mathcal{L}_h(n,k,M)$ and $\mathcal{U}_h(n,k,M)$ will be considered elsewhere.
\section{General linear programming bounds}
For any real polynomial $f(t)$ we consider its Gegenbauer expansion
\[ f(t)=\sum_{i=0}^m f_i P_i^{(n)}(t), \]
where $m=\deg(f)$, and define the following sets of polynomials
\[ F_{n,k}:=\{ f(t) \, : \, f_0>0, f_i \leq 0, i=1,3,\ldots,2k-1 \mbox{ and } i \geq 2k+1\}, \]
\[ G_{n,k}:=\{ f(t) \, : \, f_0>0, f_i \geq 0, i=1,3,\ldots,2k-1 \mbox{ and } i \geq 2k+1\}. \]
Since any Gegenbauer polynomial $P_j^{(n)}(t)$ is an odd/even function for odd/even $j$,
any polynomial $f(t)$ which is an even function has $f_i=0$ for its Gegenbauer coefficients with odd $i$.
This yields that if $f$ is an even function with $f_0>0$ and $\deg(f) \leq 2k$, then $f$ belongs to both $F_{n,k}$ and $G_{n,k}$.
Further, we define
\[ M_{n,k}:=\{ f(t) \in F_{n,k}\, : f(t) \geq 0 \ \forall \, t \in [-1,1] \}, \]
\[ L_{n,k}^{(h)}:=\{ f(t) \in G_{n,k}\, : f(t) \leq h(t) \ \forall \, t \in [-1,1] \}, \]
\[ U_{n,k}^{(h)}:=\{ f(t) \in F_{n,k}\, : f(t) \geq h(t) \ \forall \, t \in [-1,1] \}. \]
Linear programming for spherical designs was introduced by Delsarte, Goethals and Seidel \cite{DGS} and
developed for energy bounds by Yudin \cite{Y}. All three bounds in Theorem \ref{thm_lp} below follow easily from the identity
\begin{equation}
\label{main}
|C|f(1)+\sum_{x,y\in C, x \neq y} f(\langle x,y\rangle) = |C|^2f_0 + \sum_{i=1}^m f_i M_i
\end{equation}
(see, for example, \cite[Equation (1.20)]{lev92}, \cite[Equation (3)]{ZBBKY17}), which
serves as a key source of estimations by linear programming.
It follows easily by computing the sum $\sum_{x,y\in C} f(\langle x,y\rangle)$ in two ways and using the definition of the moments.
We are now in a position to formulate the general linear programming theorems for the
quantities $\mathcal{M}(n,k)$, $\mathcal{L}_h(n,k,M)$, and $\mathcal{U}_h(n,k,M)$.
\begin{theorem} \label{thm_lp}
a) If $n \geq 2$ and $k$ are positive integers and $f \in M_{n,k}$, then $\mathcal{M}(n,k) \geq f(1)/f_0$.
b) If $n \geq 2$, $k$, and $M \geq 2$ are positive integers, $h$ is a potential function, and $f \in L_{n,k}^{(h)}$, then
$\mathcal{L}_h(n,k,M) \geq M(f_0M-f(1))$.
c) If $n \geq 2$, $k$, and $M \geq 2$ are positive integers, $h$ is a potential function, and $f \in U_{n,k}^{(h)}$, then
$\mathcal{U}_h(n,k,M) \leq M(f_0M-f(1))$.
\end{theorem}
\begin{proof}
a) Let $C \subset \mathbb{S}^{n-1}$ be a $(k,k)$-design and $f \in M_{n,k}$. We apply \eqref{main} for $C$ and $f$.
Since $M_i \geq 0$ for all $i$, $M_i(C)=0$ for $i=2,4,\ldots,2k$ (as $C$ is a $(k,k)$-design), and $f_i \leq 0$ for all odd $i$ and for all even
$i>2k$, the right hand side of \eqref{main} does not exceed $f_0|C|^2$.
The sum on the left hand side is nonnegative because $f(t) \geq 0$ for every $t \in [-1,1]$. Thus the left hand side
is at least $f(1)|C|$, whence $f(1)|C| \leq f_0|C|^2$ and $|C| \geq f(1)/f_0$. Since this inequality holds for every such $C$, we have
$\mathcal{M}(n,k) \geq f(1)/f_0.$
b) Now let $C \subset \mathbb{S}^{n-1}$ be a $(k,k)$-design of cardinality $M$ and $f \in L^{(h)}_{n,k}$. We rewrite the left hand side of \eqref{main} for $C$ and $f$
\begin{equation} \label{re-main-energy}
f(1)|C|+E_h(C)+\sum_{x,y\in C, x \neq y} \left( f(\langle x,y\rangle)-h(\langle x,y\rangle)\right)=|C|^2f_0 + \sum_{i=1}^m f_i M_i
\end{equation}
to involve the energy $E_h(C)$.
Similarly to a), we conclude that the right hand side of \eqref{re-main-energy} is at least $f_0|C|^2$ and the
left hand side does not exceed $f(1)|C|+E_h(C)$ (observe that the sum in the left hand side is nonpositive because of the
condition $f(t) \leq h(t)$ for every $t \in [-1,1]$).
Therefore $E_h(C) \geq |C|(f_0|C|-f(1))$. Since this holds for
every such $C$, we conclude that $\mathcal{L}_h(n,k,M) \geq M(f_0M-f(1))$.
c) If $C \subset \mathbb{S}^{n-1}$ is a $(k,k)$-design of cardinality $M$ and $f \in U^{(h)}_{n,k}$, then as in b)
we use \eqref{re-main-energy} to see that $E_h(C) \leq |C|(f_0|C|-f(1))$, whence
$\mathcal{U}_h(n,k,M) \leq M(f_0M-f(1))$.
\end{proof}
The conditions for attaining equality in all three bounds of Theorem \ref{thm_lp} are obviously the same -- all
inner products $\langle x,y \rangle$, $x,y \in C$, $x \neq y$, must be roots of $f(t)$, and $f_iM_i=0$ must hold for all odd $i$
and all $i \geq 2k+1$.
We conclude this section with an application of the addition formula (see \cite[Theorem 3.3]{DGS}, \cite[Section 3]{Lev-chapter} in the designs' context)
\[ P_i^{(n)} (\langle x,y \rangle) = \frac{1}{r_i} \sum_{j=1}^{r_i} v_{ij}(x) v_{ij}(y) \]
where $r_i=\dim \mbox{Harm}(i)$ and $\{v_{ij}(x): j=1,2,\ldots,r_i\}$ is an orthonormal basis of Harm$(i)$,
the space of homogeneous harmonic polynomials of degree $i$ on $\mathbb{S}^{n-1}$.
\begin{theorem} \label{moments-degs}
We have $M_i(C)=0$ if and only if $\sum_{x \in C} P_i^{(n)}(\langle x,y \rangle)=0$ for any fixed $y \in C$.
\end{theorem}
\begin{proof}
Computing $M_i(C)$ by the addition formula, we see that $M_i(C)=0$ if and only if $\sum_{x \in C} v(x)=0$ for
each $v \in \mbox{Harm}(i)$. Using this and the addition formula again, we obtain that the double sum in
\eqref{Mk0} splits into $|C|$ sums, each equal to $0$. Indeed, for fixed $y \in C$, we consecutively obtain
\[ \sum_{x \in C} P_i^{(n)}(\langle x,y \rangle)=\sum_{x \in C} \frac{1}{r_i} \sum_{j=1}^{r_i} v_{ij}(x)v_{ij}(y) =
\frac{1}{r_i} \sum_{j=1}^{r_i} v_{ij}(y) \sum_{x \in C}v_{ij}(x) = 0, \]
which completes the proof. \end{proof}
\section{A universal bound for $\mathcal{M}(n,k)$}
Suitable polynomials in Theorem \ref{thm_lp} may give universal (in the sense of Levenshtein \cite{Lev-chapter}) bounds. We
present here such a bound for $\mathcal{M}(n,k)$, using a polynomial suggested by the choice of
Delsarte, Goethals and Seidel in \cite{DGS}.
Denote
$B(n,m):=\min\{|C|: C \subset \mathbb{S}^{n-1} \mbox{ is a spherical $m$-design}\}$.
The Delsarte-Goethals-Seidel bound \cite{DGS}
\begin{equation}
\label{DGS-bound}
B(n,m) \geq D(n,m):= \left\{ \begin{array}{ll}
\ds 2\binom{n+k-2}{k-1}, & \mbox{ if $m=2k-1$,} \\[12pt]
\ds \binom{n+k-1}{k}+\binom{n+k-2}{k-1}, & \mbox{ if $m=2k$}.
\end{array}
\right.
\end{equation}
was obtained by linear programming via the polynomials
\begin{eqnarray}
\label{DGS-poly}
d_{m}(t) = \left\{
\begin{array}{ll}
(t+1)\left(P_{k-1}^{1,1}(t)\right)^2, & \mbox{if } m=2k-1 \\
\left(P_k^{1,0}(t)\right)^2, & \mbox{if }m=2k
\end{array} \right. .
\end{eqnarray}
Here $P_i^{1,1}(t)$ and $P_i^{1,0}(t)$ are polynomials called adjacent\footnote{In fact, they are (normalized) Jacobi polynomials
with correspondingly shifted parameters.}
by Levenshtein
(see \cite{lev92,Lev-chapter}). What is important for us is that $P_i^{1,1}(t)=P_i^{(n+2)}(t)$ is again a Gegenbauer polynomial,
in particular, it is an even or odd function.
\begin{theorem}
We have
\begin{equation} \label{lb-card}
\mathcal{M}(n,k) \geq {n+k-1 \choose k}.
\end{equation}
If a $(k,k)$-design $C \subset \mathbb{S}^{n-1}$ attains this bound, then all inner products $\langle x,y \rangle$ of distinct $x,y \in C$
are among the zeros of $P_k^{(n+2)}(t)$.
\end{theorem}
\begin{proof}
We are going to use the polynomial $f(t)=\left(P_k^{(n+2)}(t)\right)^2=d_{2k+1}(t)/(t+1)$
in Theorem \ref{thm_lp}a).
It is obvious that $f(t) \geq 0$ for every $t \in [-1,1]$. Moreover, since $P_k^{(n+2)}(t)$ is an odd or
even function, its square is an even function. Then $f_i=0$ for every odd $i$ in the Gegenbauer expansion of
our $f(t)$ and we conclude that $f \in M_{n,k}$.
The calculation of $f(1)/f_0$ follows from the classical one by noting that (obviously) $f(1)=d_{2k+1}(1)/2$
and that the Gegenbauer coefficients $f_0$ of the polynomials $f(t)$ and $d_{2k+1}(t)$ coincide, since
\[ \int_{-1}^1 f(t) (1-t^2)^{(n-3)/2} dt = \int_{-1}^1 d_{2k+1}(t) (1-t^2)^{(n-3)/2} dt. \]
Thus our bound $f(1)/f_0$ is equal to $D(n,2k+1)/2={n+k-1 \choose k}$, i.e., half of the value of the Delsarte-Goethals-Seidel bound for $(2k+1)$-designs.
If a $(k,k)$-design $C \subset \mathbb{S}^{n-1}$ attains the bound \eqref{lb-card}, then equality in \eqref{main}
follows (for $C$ and our $f(t)$). Since $f_iM_i(C)=0$ for every $i$, the equality $|C|=f(1)/f_0$ is equivalent to
\[ \sum_{x,y\in C, x \neq y} \left(P_k^{(n+2)} (\langle x,y\rangle)\right)^2=0, \]
whence $P_k^{(n+2)} (\langle x,y\rangle)=0$ whenever $x $ and $y$ are distinct points from $C$.
\end{proof}
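As a sanity check of the value $f(1)/f_0$, the following hedged Python sketch (restricted to $n=3$, where the weight $(1-t^2)^{(n-3)/2}$ is constant, so $f_0$ can be computed exactly in rational arithmetic; all names are ours) confirms $f(1)/f_0=\binom{k+2}{k}$ for $f=(P_k^{(5)})^2$:

```python
from fractions import Fraction

def gegenbauer_coeffs(k, n):
    """Coefficients (ascending powers) of the normalized Gegenbauer
    polynomial P_k^{(n)}, built from the three-term recurrence."""
    polys = [[Fraction(1)], [Fraction(0), Fraction(1)]]   # P_0, P_1
    for j in range(1, k):
        t_pj = [Fraction(0)] + polys[j]                   # t * P_j
        new = [(2 * j + n - 2) * c for c in t_pj]
        for m, c in enumerate(polys[j - 1]):
            new[m] -= j * c
        polys.append([c / (j + n - 2) for c in new])
    return polys[k]

def universal_bound(k):
    """f(1)/f_0 for f = (P_k^{(5)})^2, i.e. the case n = 3."""
    p = gegenbauer_coeffs(k, 5)
    f = [Fraction(0)] * (2 * len(p) - 1)                  # f = p^2
    for a, ca in enumerate(p):
        for b, cb in enumerate(p):
            f[a + b] += ca * cb
    # n = 3: the weight is 1, so f_0 = (1/2) * integral_{-1}^{1} f(t) dt
    f0 = sum(2 * c / (m + 1) for m, c in enumerate(f) if m % 2 == 0) / 2
    return sum(f) / f0                                    # f(1) / f_0
```

For instance, $k=2$ gives $f=((5t^2-1)/4)^2$, $f_0=1/6$, and the bound $6=\binom{4}{2}$.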
The bound \eqref{lb-card} was obtained by Waldron \cite[Exercise 6.23]{Wal-book} in a different way (see also (5.10) in \cite{DS89},
which concerns the case $T=\{2k\}$).
The linear programming interpretation is new and answers the optimality question for $T=\{2,4,\ldots,2k\}$ (see the
optimality discussion in Section 3 of \cite{ZBBKY17}).
\begin{theorem}\label{opt}
The bound \eqref{lb-card} is optimal in the sense that it cannot be improved by using in Theorem \ref{thm_lp}a)
a polynomial from $M_{n,k}$ of degree at most $2k$.
\end{theorem}
\begin{proof} We use a special case of the quadrature formula in Levenshtein's Theorem 5.39 from \cite{Lev-chapter}, namely
\begin{equation} \label{quad-2k}
f_0=\frac{f(1)+f(-1)}{D(n,2k+1)}+\sum_{i=1}^k \rho_i^{(k)} f(t_i^{1,1}),
\end{equation}
where the weights $\rho_i^{(k)}$ are positive and $t_1^{1,1}<t_2^{1,1}<\cdots<t_k^{1,1}$ are the
zeros of $P_k^{1,1}(t)$. The formula \eqref{quad-2k} holds true for every real polynomial of degree
at most $2k$. Defining, as in \cite{NN03}, test functions
\[ Q_j^{(n)}(k):=\frac{P_j^{(n)}(1)+P_j^{(n)}(-1)}{D(n,2k+1)}+\sum_{i=1}^k \rho_i^{(k)} P_j^{(n)}(t_i^{1,1}), \ j=1,2,\ldots, \]
one proves that the bound \eqref{lb-card} can be improved by Theorem \ref{thm_lp}a) if and only if $Q_j^{(n)}(k)<0$
for some $j$. It follows from \eqref{quad-2k} that $Q_j^{(n)}(k)$, $j \leq 2k$, is equal to the Gegenbauer coefficient $f_0$ of
$P_j^{(n)}(t)$, which is, of course, 0 for $1 \leq j \leq 2k$ (in fact, $Q_j^{(n)}(k)=0$ for every odd $j$).
Therefore $Q_j^{(n)}(k)<0$ is impossible for $1 \leq j \leq 2k$, which completes the proof.
\end{proof}
\begin{remark}
Optimality results using test functions as above originate from \cite{BDB96}, where necessary and
sufficient conditions for the existence of improvements of
the Levenshtein bounds were proved (see also Theorem 5.47 in \cite{Lev-chapter}). The corresponding result for the
Delsarte-Goethals-Seidel bound was proven in \cite{NN03}.
\end{remark}
\section{On codes attaining the bound \eqref{lb-card}}
The basic example of spherical $(k,k)$-designs comes naturally from antipodal spherical $(2k+1)$-designs.
A spherical code $C$ is called antipodal if $C=-C$.
\begin{example} \label{ex1} Let $C \subset \mathbb{S}^{n-1}$ be an antipodal spherical $(2k+1)$-design. Consider the
spherical code $C^\prime \subset \mathbb{S}^{n-1}$ formed by the following rule: from each pair $(x,-x)$ of
antipodal points of $C$ exactly one of the points $x$ and $-x$ belongs to $C^\prime$. Then $C^\prime$ is
a spherical $(k,k)$-design. Indeed, since $|C^\prime|=|C|/2$ and $P_{2i}^{(n)}(t)=P_{2i}^{(n)}(-t)$ for every $t$, it is easy to see
from \eqref{Mk0} that $M_{2i}(C^\prime)=M_{2i}(C)/4=0$ for $i=1,2,\ldots,k$.
So any orthonormal basis is a $(1,1)$-design and its ``doubling'' gives a (tight) spherical 3-design. Further, any six
points of the icosahedron, no two of which are antipodal, form a $(2,2)$-design, since the icosahedron is a (tight) 5-design.
There are many similar examples (see \cite{Wal-book}).
\end{example}
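The icosahedron part of the example can be verified numerically; a hedged Python sketch (the particular choice of one vertex per antipodal pair and all names are ours) checking that $M_2$ and $M_4$ vanish for the six chosen points:

```python
import math

def gegenbauer(i, n, t):
    # normalized Gegenbauer P_i^{(n)} via the three-term recurrence
    if i == 0:
        return 1.0
    p_prev, p = 1.0, t
    for j in range(1, i):
        p_prev, p = p, ((2*j + n - 2) * t * p - j * p_prev) / (j + n - 2)
    return p

def moment(code, n, i):
    # M_i(C): sum of P_i^{(n)}(<x,y>) over all ordered pairs, diagonal included
    return sum(gegenbauer(i, n, sum(a*b for a, b in zip(x, y)))
               for x in code for y in code)

phi = (1 + math.sqrt(5)) / 2
r = math.sqrt(1 + phi * phi)
# one vertex from each of the six antipodal pairs of the icosahedron
half_icosa = [tuple(c / r for c in v) for v in
              [(0, 1, phi), (0, 1, -phi), (1, phi, 0),
               (1, -phi, 0), (phi, 0, 1), (-phi, 0, 1)]]
```

All distinct inner products here equal $\pm 1/\sqrt{5}$, and $P_2^{(3)}(\pm 1/\sqrt5)=P_4^{(3)}(\pm 1/\sqrt5)=-1/5$, so $M_2=M_4=6+30\cdot(-1/5)=0$, while $M_6>0$ (the code is not a $(3,3)$-design).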
The other direction of Example \ref{ex1} works as follows: if $C \subset \mathbb{S}^{n-1}$ is a $(k,k)$-design
and $C \cap (-C) = \emptyset$, then $C \cup (-C)$ is an antipodal $(2k+1)$-design, as follows from \eqref{Mk0}.
It follows from Example \ref{ex1} and its reverse that the bound \eqref{lb-card} is attained exactly when there exists
an antipodal spherical $(2k+1)$-design with $ 2{n+k-1 \choose k}$ points. Such designs are called tight and were
classified by Bannai and Damerell \cite{BD1,BD2}. Their classification immediately implies the following.
\begin{theorem} \label{tight-bd}
If $C \subset \mathbb{S}^{n-1}$ is a $(k,k)$-design with $|C|=\mathcal{M}(n,k)={n+k-1 \choose k}$ points, then one of the following holds true:
(i) $k=1$ and $C$ defines an orthonormal basis of $\mathbb{R}^n$;
(ii) $k=2$, $n=3$ or $n=u^2-2$, where $u$ is an odd positive integer;
(iii) $k=3$, $n=3v^2-4$, where $v \geq 2$ is a positive integer;
(iv) $k=5$, $n=24$.
\end{theorem}
Examples for (ii) and (iii) are known only for $u=3$ and $5$, and for $v=2$ and $3$, respectively. The distance distributions of the
related tight spherical 5- and 7-designs for (ii) and (iii) were found by the author in \cite{Boy95}. The related tight 11-design
for (iv) is formed by the $2{28 \choose 5}$ vectors of minimum norm in the Leech lattice.
\begin{theorem}
There exist no $(2,2)$-designs on $\mathbb{S}^{n-1}$, $n \geq 3$, with ${n+1 \choose 2}+1$ points.
\end{theorem}
\begin{proof}
We first see that a spherical $(2,2)$-design of $1+n(n+1)/2$ points cannot possess a pair of antipodal points.
Assume that $C$ is such a design. Using the Gegenbauer expansion of $t^4$ and the conditions $M_2(C)=M_4(C)=0$,
we obtain by Theorem \ref{moments-degs} that
\[ 1+\sum_{x \in C \setminus \{y\}} \langle x,y \rangle^4 = \frac{3|C|}{n(n+2)} \]
for any fixed $y \in C$. Using this for $y$ such that $\langle x,y \rangle =-1$ for some $x \in C$, we obtain
$3|C|/(n(n+2))-2 \geq 0 \iff 3(n^2+n+2) \geq 4n(n+2)$,
which gives a contradiction.
Now let $C \subset \mathbb{S}^{n-1}$ be a $(2,2)$-design with ${n+1 \choose 2}+1$ points. Since $C \cap (-C) =\emptyset$, we conclude that
$C \cup -C$ is an antipodal 5-design with $n^2+n+2$ points. Now the proof is completed by noting that the
nonexistence of such designs for $n \geq 3$ was shown by Reznick \cite{Rez95}.
\end{proof}
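The arithmetic behind the first step of the proof can be checked mechanically; a hedged Python sketch (function names are ours), using that $4n(n+2)-3(n^2+n+2)=(n+6)(n-1)$:

```python
def antipodal_pair_condition(n):
    """Necessary condition, derived in the proof, for a (2,2)-design with
    1 + n(n+1)/2 points to contain a pair of antipodal points."""
    return 3 * (n * n + n + 2) >= 4 * n * (n + 2)

def gap(n):
    # 4n(n+2) - 3(n^2+n+2), which factors as (n+6)(n-1)
    return 4 * n * (n + 2) - 3 * (n * n + n + 2)
```

Since the gap is positive for every $n \geq 2$, the necessary condition fails for all $n \geq 3$, as used in the proof.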
\section{Introduction}
Many practical applications involve sequential decision-making problems, in which an agent must choose the best action among several alternatives. Examples of such applications include clinical trials~\cite{durand2018contextual}, recommender systems~\cite{bouneffouf2012hybrid,
bouneffouf2012following,allesiardo2014neural,
bouneffouf2013situation,bouneffouf2012considering,
bouneffouf2012exploration,bouneffouf2013applying,
bouneffouf2013risk,bouneffouf2013role,
bouneffouf2013drars,bouneffouf2013improving,
bouneffouf2014contextual,
bouneffouf2013towards,bouneffouf2008role,
bouneffouf2013contextual,bouneffouf2013impact,
bouneffouf2014recommandation,
bouneffouf2015sampling,
bouneffouf2013hybrid,
bouneffouf2013mobile,
bouneffouf2016exponentiated,
bouneffouf2013evolution,
bouneffouf2013apprentissage,
bouneffouf2014context,
bouneffouf2014etude,
bouneffouf2014freshness,
allesiardo2014prise,
bouneffouf2011temporal,
bouneffouf2014r,
bouneffouf2013proposition,
bouneffouflearning,
bouneffouf2013logique,
bouneffouf2014ant,
bouneffouf2016contextual,
bouneffouf2016multi,
bouneffouf2013exponentiated,
bouneffouf2016theoretical,
bouneffouf2013temporal,
bouneffouf2013optimizing,
bouneffouf2016ensemble,
bouneffouf2017context,
bouneffouf2017bandit,
bouneffoufdrars,
lin2018adaptive,
bouneffouf2018nystrom,
balakrishnan2018using,
riemer2019scalable,
balakrishnan2019incorporating,
bouneffouf2018eigenspectrum,
bouneffouffollowing,
riemer2017generative,
lin2018contextual,
choromanska2019beyond,
bouneffouf2020survey,
djallelrisk,
upadhyay2018bandit,
liu2019automated,
bouneffouf2019optimal,
noothigattu2018interpretable,
yurochkin2019online,
lin2019reinforcement,
noothigattu2019teaching,
aggarwal2019can,
balakrishnan2019using,
mehta2019ai,
liu2020admm,
sharma2020data,
balakrishnan2020constrained,
lin2020story,
varshneyteaching,
bouneffouf2020hyper,
lin2020unified,
bouneffouf2016learning,
lin2020online,
ram2020solving,
bouneffouf2020online,
bouneffouf2020contextual,
bouneffouf2013location,
toutanova2014proceedings,
leung2012neural,
jin2014neural,
bouneffouf2020computing,
bouneffouf2020spectral,
bouneffoufonline,
gupta20162016,
bouneffouf2012contextual,
bouneffoufbandit,
bouneffoufsurvey,
bouneffoufspectral,
bouneffouf2018online,
bouneffouftoward} and anomaly detection~\cite{Ding:2019}. In some cases, side information, or a context, is associated with each action (for example, a user's profile), and the feedback, or reward, is limited to the chosen option. For example, in clinical trials, the context is the patient's medical record (e.g., health status, family history, etc.), the actions correspond to the treatment options being compared, and the reward represents the outcome of the proposed treatment (e.g., success or failure). An important aspect affecting long-term success in such settings is finding a good trade-off between exploration (e.g., trying a new treatment) and exploitation (choosing the best-known treatment so far).
This inherent trade-off between exploration and exploitation exists in many sequential decision-making problems, and is traditionally formulated as the bandit problem, which is stated as follows: given $K$ possible actions, or ``arms'', each associated with a fixed but unknown reward probability distribution~\cite{LR85,UCB}, at each iteration an agent selects an arm to play and receives a reward, sampled from the respective arm's probability distribution independently of the previous actions. The task of the agent is to learn how to choose its actions so that the cumulative reward over time is maximized.
Note that the agent must try different arms to learn their rewards (i.e., explore the gains), and also use this learned information in order to receive the best payoff (exploit the learned gains). There is a natural trade-off between exploration and exploitation. For example, one could try each arm exactly once and then keep playing the best of them. This approach is often likely to lead to highly suboptimal solutions when the arm rewards are uncertain. Different solutions have been proposed for this problem, based on a stochastic formulation~\cite{LR85,UCB,BouneffoufF16} and a Bayesian formulation~\cite{AgrawalG12}; however, these approaches did not take into account the context, or side information, available to the agent.
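As an illustration of the stochastic formulation, here is a hedged Python sketch of an upper-confidence-bound strategy in the spirit of~\cite{UCB} (a minimal UCB1 variant; all names and the Bernoulli toy arms are our assumptions):

```python
import math, random

def ucb1(reward_fns, horizon, seed=0):
    """UCB1: play each arm once, then pick the arm maximizing the
    empirical mean plus the sqrt(2 ln t / n_i) confidence bonus."""
    rng = random.Random(seed)
    k = len(reward_fns)
    counts, means, total = [0] * k, [0.0] * k, 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                              # initialization round
        else:
            arm = max(range(k), key=lambda i: means[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = reward_fns[arm](rng)
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # running mean
        total += r
    return total, counts

# two hypothetical Bernoulli arms with success probabilities 0.2 and 0.8
arms = [lambda rng: float(rng.random() < 0.2),
        lambda rng: float(rng.random() < 0.8)]
```

Over a long horizon, the confidence bonus shrinks for frequently played arms, so the better arm is played the vast majority of the time.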
Note that the bandit problem can be seen as the simplest form of reinforcement learning, in which the agent is stateless. When the system has states, actions cause state transitions and the rewards also depend on the states. Consequently, in reinforcement learning, the rewards at different steps are not independent of each other. In fact, classical algorithms for reinforcement learning (with states) often use solutions to the multi-armed bandit problem as subroutines to define exploration-exploitation policies. For example, the well-known $\epsilon$-greedy multi-armed bandit algorithm is often combined with Bellman's dynamic programming algorithm for reinforcement learning in order to define the choice of actions. Moreover, many reinforcement learning algorithms, when applied to stateless systems, reduce to multi-armed bandit algorithms.
A particularly useful version of the MAB is the contextual multi-armed bandit (CMAB), or simply the contextual bandit, where at each iteration, before choosing an arm, the agent observes an $N$-dimensional context, or feature vector.
The agent uses this context, together with the rewards of the arms played in the past, to choose which arm to play in the current iteration. Over time, the agent's goal is to collect enough information about the relationship between the context vectors and the rewards, so that it can predict the next best arm to play by looking at the current context \cite{langford2008epoch,AgrawalG13}. Different algorithms have been proposed for the general case, including LINUCB~\cite{Li2010}, Neural Bandit \cite{AllesiardoFB14} and Contextual Thompson Sampling (CTS)~\cite{AgrawalG13}, where a linear dependence is usually assumed between the expected reward of an action and its context.
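To make the linear-payoff assumption concrete, here is a hedged pure-Python sketch of a disjoint LinUCB-style algorithm in the spirit of~\cite{Li2010} (parameter names, the Sherman-Morrison bookkeeping, and the noiseless toy environment are our assumptions, not the exact published algorithm):

```python
import random

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def linucb(context_fn, reward_fn, n_arms, d, horizon, alpha=1.0, seed=0):
    """Disjoint LinUCB-style sketch: per arm, keep ridge statistics
    A = I + sum x x^T (inverse maintained via Sherman-Morrison) and
    b = sum r x; play the arm maximizing theta.x + alpha*sqrt(x^T A^-1 x)."""
    rng = random.Random(seed)
    A_inv = [[[float(i == j) for j in range(d)] for i in range(d)]
             for _ in range(n_arms)]
    b = [[0.0] * d for _ in range(n_arms)]
    total = 0.0
    for _ in range(horizon):
        x = context_fn(rng)
        scores = []
        for a in range(n_arms):
            theta = mat_vec(A_inv[a], b[a])          # ridge estimate
            width = dot(mat_vec(A_inv[a], x), x) ** 0.5
            scores.append(dot(theta, x) + alpha * width)
        a = max(range(n_arms), key=scores.__getitem__)
        r = reward_fn(a, x, rng)
        Ax = mat_vec(A_inv[a], x)                    # Sherman-Morrison update
        denom = 1.0 + dot(Ax, x)
        A_inv[a] = [[A_inv[a][i][j] - Ax[i] * Ax[j] / denom
                     for j in range(d)] for i in range(d)]
        b[a] = [bi + r * xi for bi, xi in zip(b[a], x)]
        total += r
    return total

def make_env():
    # hypothetical environment: context x = (1, z), noiseless linear rewards
    theta = [(0.2, 0.0), (0.0, 1.0)]
    context = lambda rng: (1.0, rng.random())
    reward = lambda a, x, rng: dot(theta[a], x)
    return context, reward
```

In this toy environment the best arm depends on the context (arm 1 is better only when $z>0.2$), so the learned policy should outperform any fixed arm on average.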
We now provide an overview of various applications of bandits to real-life problems (healthcare, computer networks, finance, and beyond), as well as in machine learning; in particular, where bandit approaches can help improve the tuning of hyperparameters and other important algorithmic choices in supervised learning, active learning, and reinforcement learning.
\section{Applications of Bandits}
The stochastic bandit addresses the challenges associated with the presence of uncertainty in sequential decision making. This type of uncertainty has a complex interaction with the exploration-exploitation dilemma and thus provides a natural formalism for most decision-making problems.
\subsection{Healthcare}
\textbf{Clinical trials.} Collecting data to evaluate treatment effectiveness on animals during all stages of a disease can be difficult when using conventional randomized treatment allocation procedures, since poor treatment choices can cause the subject's health to deteriorate. The authors of \cite{durand2018contextual} aim to design an adaptive allocation strategy to improve the efficiency of data collection by allocating more samples to exploring promising treatments. They cast this application as a contextual bandit problem and introduce a practical exploration-exploitation algorithm in this setting. The work relies on subsampling to compare treatment options using an equivalent amount of information. They extend the subsampling strategy to the contextual bandit setting by applying subsampling in Gaussian process regression.
Warfarin is the most widely used oral anticoagulant in the world; however, administering a precise dosage remains a significant challenge, as the appropriate dose can vary considerably between individuals due to various clinical, demographic and genetic factors. Physicians currently follow a fixed-dose strategy: patients start with a dose of 5 mg/day (which is the appropriate dosage for the majority of patients) and slowly adjust the dose over a few weeks by tracking the patient's anticoagulation levels. However, an incorrect initial dosage can have very harmful consequences, such as a stroke (if the initial dose is too low) or internal bleeding (if the initial dose is too high). Thus, the authors of \cite{bastani2015online} address the problem of learning and assigning an appropriate initial dosage to patients by modeling the problem as a bandit with high-dimensional covariates, and propose a novel and efficient bandit algorithm based on the LASSO estimator.
\textbf{Brain and behavior modeling.} Inspired by behavioral studies of human decision making in patients with various mental disorders, the authors of \cite{bouneffouf2017bandit} propose a general parametric framework for the bandit problem that extends the standard Thompson sampling approach to incorporate the reward-processing biases associated with several neurological and psychiatric conditions, including Parkinson's and Alzheimer's diseases, attention-deficit/hyperactivity disorder (ADHD), addiction, and chronic pain. They demonstrate empirically, from a behavioral modeling perspective, that their model can be viewed as a first step towards a unifying computational model capturing reward-processing abnormalities across multiple mental conditions.
\subsection{Finance}
In recent years, sequential portfolio selection has attracted growing interest at the intersection of machine learning and quantitative finance. The trade-off between exploration and exploitation, with the goal of maximizing cumulative reward, is a natural formulation of portfolio choice problems. In \cite{shen2015portfolio}, the authors proposed a bandit algorithm for making online portfolio choices by exploiting the correlations between multiple arms. By constructing orthogonal portfolios from several assets and integrating their approach into the bandit framework, the authors derive the optimal portfolio strategy, representing a combination of passive and active investments according to a risk-adjusted reward function.
In \cite{huo2017risk}, the authors incorporate risk awareness into the classical bandit framework and introduce a novel algorithm for portfolio construction. By filtering assets based on the topological structure of the financial market and combining the optimal bandit policy with the minimization of a risk measure, they achieve a balance between risk and return.
\subsection{Dynamic pricing}
Online retail companies often face the problem of dynamic pricing: the company must decide on real-time prices for each of its multiple products. The company can run price experiments (make frequent price changes) to learn about demand and maximize long-term profits. The authors of \cite{misra2018dynamic} propose a dynamic price experimentation policy for the setting where the company has only incomplete information about demand. For this general setting, the authors derive a pricing algorithm that balances earning an immediate profit against learning for future profits. The approach combines a bandit with partial identification of consumer demand from economic theory. Similarly to \cite{misra2018dynamic}, the authors of \cite{mueller2018low} consider high-dimensional dynamic multi-product pricing with an evolving low-dimensional linear demand model. They show that the revenue maximization problem reduces to an online bandit convex optimization problem with side information given by the observed demands. The approach applies a bandit convex optimization algorithm in a low-dimensional projected space spanned by the latent product features, while simultaneously learning this span via online singular value decomposition of a matrix containing the observed demands.
\subsection{Systèmes de recommandation}
Les systèmes de recommandation sont fréquemment utilisés dans diverses applications pour prédire les préférences de l'utilisateur. Cependant, ils sont également confrontés au dilemme exploration-exploitation lorsqu'ils font une recommandation, car ils doivent exploiter leurs connaissances sur les éléments précédemment choisis qui intéressent l'utilisateur, tout en explorant de nouveaux éléments susceptibles de plaire à l'utilisateur. Les auteurs de \cite{zhou2017large} abordent ce défi en utilisant le paramètre bandit, en particulier pour les systèmes de recommandation à grande échelle qui ont un nombre vraiment grand ou infini d'éléments. Ils proposent deux approches de bandit à grande échelle dans des situations où aucune information préalable n'est disponible. Une exploration continue de leurs approches peut résoudre le problème du démarrage à froid dans les systèmes de recommandation. Dans les systèmes de recommandation contextuels, la plupart des approches existantes se concentrent sur la recommandation d'éléments pertinents aux utilisateurs, en tenant compte des informations contextuelles, telles que l'heure, le lieu ou les aspects sociaux. Cependant, aucune de ces approches n’a pris en compte le problème de l’évolution du contenu des utilisateurs. Dans \cite{bouneffouf2012contextual}, les auteurs introduisent un algorithme qui prend en compte cette dynamique. Il est basé sur une exploration / exploitation dynamique et peut équilibrer de manière adaptative les deux aspects, en décidant quelle situation est la plus pertinente pour l'exploration ou l'exploitation.
In the same spirit, \cite{bouneffouf2014freshness} proposes to study the ``freshness'' of the user's content through the bandit problem, introducing the Freshness-Aware Thompson Sampling algorithm for recommending fresh documents.
\subsection{Influence maximization}
The authors of \cite{vaswani2017model} consider influence maximization (IM) in social networks, which is the problem of maximizing the number of users who become aware of a product by selecting a set of users to expose the product to. They propose a new parametrization that not only makes the framework agnostic to the underlying diffusion model, but is also statistically efficient to learn from data.
They give a corresponding monotone, submodular surrogate function and show that it is a good approximation of the original IM objective. They also consider the case of a new marketer looking to exploit an existing social network while simultaneously learning the factors governing information propagation, and for this they develop a LinUCB-based bandit algorithm. The authors of \cite{wen2017online} also study online influence maximization in social networks, but under the independent cascade model. Specifically, they try to learn the set of ``best seeds'', or influencers, in an online social network while repeatedly interacting with it. They address the challenges of a combinatorial action space, since the number of feasible influencer sets grows exponentially with the maximum number of influencers, and of limited feedback, since only the influenced portion of the network is observed.
\subsection{Information retrieval}
The authors of \cite{losada2017multi} argue that the iterative document-selection process in information retrieval can be naturally modeled as a contextual bandit problem. The bandit model leads to highly efficient document-adjudication methods. Within this bandit-allocation framework, they propose seven new document-judging methods, five of which are stationary and two non-stationary. Their comparative study includes existing methods designed for pooling-based evaluation and existing methods designed for metasearch. In mobile information retrieval, the authors of \cite{bouneffouf2013contextual} introduce an algorithm that addresses this dilemma in context-based information retrieval (CBIR). It is based on dynamic exploration/exploitation and can adaptively balance the two aspects by deciding which user situation is most relevant for exploration or exploitation. They carry out evaluations with mobile users in a deliberately designed online setting.
\subsection{Dialogue systems}
\textbf{Dialogue response selection.} Dialogue response selection is an important step toward natural response generation in conversational agents. Existing work on conversational models focuses mainly on offline supervised learning from a large set of context-response pairs. In \cite{LiuYLM18}, the authors focus on online learning of response selection in dialogue systems. They propose a contextual bandit model with a nonlinear reward function that uses distributed text representations for online response selection. A bidirectional LSTM produces the distributed representations of the dialogue context and of the candidate responses, which serve as input to a contextual bandit. They propose a customized Thompson sampling method applied to a polynomial feature space to approximate the reward.
\textbf{Proactive dialogue systems.} The goal of proactivity in dialogue systems is to improve the usability of conversational agents by enabling them to initiate conversations. While dialogue systems have become increasingly popular, current task-oriented dialogue systems are mostly reactive, as human users tend to initiate the conversations. The authors of \cite{silander2018contextual} propose to introduce the contextual bandit paradigm as a framework for proactive dialogue systems. Contextual bandits have been the model of choice for reward maximization with partial feedback since they fit the task description well. The authors also explore the notion of memory in this paradigm, proposing two differentiable memory models that act as parts of the parametric reward-estimation function. The first, Convolutional Selective Memory Networks, uses a selection of past interactions as part of the decision support. The second model, called Contextual Attentive Memory Network, implements a differentiable attention mechanism over the agent's past interactions. The aim is to generalize the classical contextual bandit model to settings where temporal information must be incorporated and exploited in a learnable way.
\textbf{Multi-domain dialogue systems.} Building multi-domain dialogue agents is a challenging task and an open problem in modern AI. In the dialogue domain, the ability to orchestrate multiple independently trained dialogue agents, or skills, into a unified system is of particular importance. In \cite{upadhyaybandit}, the authors study the task of online dialogue orchestration, where they define posterior orchestration as the task of selecting a subset of skills that best responds to a user input, using features extracted both from the user input and from the individual skills. To account for the varying costs associated with extracting skill features, they consider online posterior orchestration under a skill-execution budget. This setting is formalized as context-attentive bandit with observations, a variant of context-attentive bandits, and is then evaluated on simulated conversational datasets.
\subsection{Anomaly detection}
The authors of \cite{Ding:2019} study the problem of anomaly detection in an interactive setting. Their goal is to maximize the number of true anomalies presented to the human expert before a given budget is exhausted. Along this line, they formulate the problem in the bandit framework and develop a novel collaborative contextual bandit algorithm that explicitly models node attributes and dependencies seamlessly in a joint framework, and handles the exploration-exploitation dilemma when querying.
Credit-card transactions flagged as potentially fraudulent by automated detection systems are typically forwarded to human experts for verification. To limit costs, it is common to select only the most suspicious transactions for investigation. The authors of \cite{soemers2018adapting} argue that a trade-off between exploration and exploitation is imperative to enable adaptation to changes in behavior. Exploration consists of selecting and investigating transactions in order to improve the predictive models, while exploitation consists of investigating the transactions detected as suspicious. Modeling the detection of a fraudulent transaction as a reward, they use an incremental regression-tree learner to build clusters of transactions with similar expected rewards. This enables the use of a contextual multi-armed bandit (CMAB) algorithm to provide the exploration/exploitation trade-off.
\subsection{Telecommunications}
In \cite{boldrini2018mumab}, a bandit model was used to describe the problem of selecting the best wireless network for a multi-Radio Access Technology (multi-RAT) device, with the goal of maximizing the quality perceived by the end user. The proposed model extends the classical MAB model in two ways. First, it provides two different actions, measure and use; second, it allows actions to span multiple time steps. Two new algorithms designed to exploit the greater flexibility offered by the muMAB model were also introduced. The first, called measure-use-UCB1, is derived from the UCB1 algorithm, while the second, called Measure with Logarithmic Interval, is designed specifically for the new model so as to take advantage of the new measure action while aggressively using the best arm.
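The classical UCB1 index that measure-use-UCB1 builds on can be sketched in a few lines. This omits the measure/use action structure and multi-step actions of muMAB; the arm means are invented for the demo.

```python
import math
import random

# Generic UCB1 sketch for stationary Bernoulli arms (not the muMAB variant).
def ucb1(means, rounds, rng):
    k = len(means)
    counts = [0] * k          # pulls per arm
    sums = [0.0] * k          # cumulative reward per arm
    total_reward = 0.0
    for t in range(rounds):
        if t < k:
            arm = t           # play each arm once first
        else:
            # UCB1 index: empirical mean plus exploration bonus
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t + 1) / counts[a]))
        r = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += r
        total_reward += r
    return total_reward / rounds, counts

avg, pulls = ucb1([0.2, 0.5, 0.8], 4000, random.Random(0))
```

After a few thousand rounds the best arm dominates the pull counts, and the average reward approaches the best arm's mean.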
The authors of \cite{KerkoucheAFVM18} demonstrate the possibility of optimizing the performance of the Long Range Wide Area Network technology. They suggest that nodes use multi-armed bandit algorithms to select their communication parameters (spreading factor and transmission power). Their evaluations show that such learning methods manage the trade-off between energy consumption and packet loss much better than an Adaptive Data Rate algorithm that adapts spreading factors and transmission powers based on signal-to-interference and noise-ratio values.
\subsection{Bandits in real-world applications: summary and future directions}
\begin{table}[h]
\scriptsize
\caption{Applications of bandits in real life}
\label{tab:Life}
\begin{tabular}{|l|r|l|l|l|}
\hline
 & MAB & Non-stat. & CMAB & Non-stat. \\
 &     & MAB       &      & CMAB \\
\hline
Healthcare & $\surd$ & & $\surd$ & \\ \hline
Finance & $\surd$ & & & \\ \hline
Dynamic pricing & & $\surd$ & & \\ \hline
Recommender systems & $\surd$ & $\surd$ & $\surd$ & $\surd$ \\ \hline
Influence maximization & $\surd$ & & & \\ \hline
Dialogue systems & & & $\surd$ & \\ \hline
Telecommunications & $\surd$ & & & \\ \hline
Anomaly detection & $\surd$ & & & \\ \hline
\end{tabular}
\end{table}
Table \ref{tab:Life} summarizes the bandit problem formulations used in various domain-specific applications. The choice of bandit model is often domain-specific. For instance, the non-stationary bandit has not been used in healthcare applications, since significant changes are not expected in the treatment decision-making process, i.e., there is no transition in the patient's state; such transitions, if they occurred, would be better modeled with reinforcement learning than with a non-stationary bandit. There are clearly other domains where the non-stationary bandit is a more suitable framework, though this setting does not yet seem to have been studied significantly there. For example, anomaly detection is a domain where a non-stationary contextual bandit could be used, since in this setting the anomaly could be adversarial, which means that any bandit applied here should have some kind of drift condition in order to adapt to new types of attacks. Another observation is that none of the existing works has attempted to develop an algorithm capable of solving these different tasks at the same time, or of applying the knowledge gained in one domain to another, which opens a research direction on multitask and transfer learning in the bandit setting. Moreover, given the online nature of the bandit problem, lifelong learning would be a natural next step.
\section{Bandits for better machine learning}
In this section, we describe how bandit algorithms can be used to improve other algorithms, e.g., various machine learning techniques.
\subsection{Algorithm selection}
Algorithm selection is typically based on models of algorithm performance, learned during a separate offline training sequence, which can be prohibitively expensive. In recent work, an online approach was adopted, in which a performance model is iteratively updated and used to guide selection. The resulting exploration-exploitation trade-off was represented as a bandit problem with expert advice, using an existing solver for this game; this, however, required an arbitrary bound on algorithm runtimes, invalidating the solver's optimal regret. In \cite{GaglioloS10}, a simpler framework was proposed for representing algorithm selection as a bandit problem, with partial information and an unknown bound on losses.
\subsection{Hyperparameter optimization}
\cite{li2016hyperband} formulated hyperparameter optimization as a pure-exploration non-stochastic bandit problem in which predefined resources, such as iterations, data samples, or features, are allocated to randomly sampled configurations. This work introduced a new algorithm, Hyperband, for this framework and analyzed its theoretical properties, providing several guarantees. Hyperband was also compared to popular Bayesian optimization methods; it was observed that Hyperband can provide greater speedups than its competitors on a variety of learning problems.
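The resource-allocation idea can be illustrated with a sketch of the successive-halving subroutine at the core of Hyperband (the full algorithm additionally loops over several halving aggressiveness levels). ``Training'' is faked here: each configuration has a hidden quality, and its observed score gets less noisy as the budget spent on it grows. All numbers are illustrative.

```python
import math
import random

# Successive halving: repeatedly evaluate surviving configurations with a
# growing budget and keep the top half, until one configuration remains.
def successive_halving(n_configs, rng):
    quality = [rng.random() for _ in range(n_configs)]      # hidden truth
    def score(cfg, budget):
        # noisy observation that sharpens with the budget spent
        return quality[cfg] + rng.gauss(0.0, 0.1 / math.sqrt(budget))
    survivors = list(range(n_configs))
    budget = 1
    while len(survivors) > 1:
        ranked = sorted(survivors, key=lambda c: score(c, budget), reverse=True)
        survivors = ranked[: max(1, len(survivors) // 2)]   # keep top half
        budget *= 2                                         # double the budget
    return survivors[0], quality

best, quality = successive_halving(16, random.Random(0))
```

Poor configurations are discarded on small budgets, so most of the total budget is concentrated on the most promising ones.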
\subsection{Feature selection}
In classical online supervised learning, the true label of a sample is always revealed to the classifier, unlike in the bandit setting, where a misclassification yields zero reward and only a correct classification yields reward 1. The authors of \cite{wang2014online} study the problem of online feature selection, where the goal is to make accurate predictions using only a small number of active features, using the epsilon-greedy algorithm. The authors of \cite{BouneffoufRCF17} address online feature selection by tackling the combinatorial optimization problem in the stochastic bandit setting with bandit feedback, using the Thompson sampling algorithm.
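A minimal epsilon-greedy sketch in the bandit-feedback spirit described above: each ``arm'' is a candidate feature, and the (synthetic) reward is whether a prediction using only that feature would be correct. The per-feature accuracies are invented; this is not the combinatorial algorithm of the cited papers.

```python
import random

# Epsilon-greedy over candidate features with bandit (0/1) feedback.
def eps_greedy_feature_pick(accuracies, rounds, eps, rng):
    k = len(accuracies)
    counts = [0] * k
    values = [0.0] * k                   # running mean reward per feature
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(k)       # explore a random feature
        else:
            arm = max(range(k), key=lambda a: values[a])   # exploit the best
        r = 1.0 if rng.random() < accuracies[arm] else 0.0
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]     # incremental mean
    return max(range(k), key=lambda a: values[a])

best = eps_greedy_feature_pick([0.55, 0.9, 0.6], 3000, 0.2, random.Random(0))
```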
\subsection{Bandits for active learning}
Labeling every example in a supervised classification setting can be costly. Active learning strategies address this problem by selecting the most useful unlabeled examples for which to obtain labels and train a predictive model. The choice of which examples to label can be seen as a dilemma between exploration and exploitation over the input space. In \cite{bouneffouf2014contextual}, a new active learning strategy handles this trade-off by modeling the active learning problem as a contextual bandit problem.
They propose a sequential algorithm called Active Thompson Sampling (ATS), which in each round assigns a sampling distribution over the clusters, samples a point from this distribution, and queries the oracle for this sample point's label. The authors of \cite{ganti2013building} also propose a multi-armed-bandit-based active learning algorithm for the binary classification problem. They use ideas such as lower confidence bounds and self-concordant regularization from the multi-armed bandit literature to design their algorithm.
\subsection{Clustering}
\cite{SublimeL18} considers collaborative clustering, a machine learning paradigm concerned with the unsupervised analysis of complex multi-view data using several algorithms working together. Well-known applications of collaborative clustering include multi-view clustering and distributed data clustering, where several algorithms exchange information in order to improve one another. One of the main problems in collaborative and multi-view clustering is to assess which collaborations will be beneficial or detrimental. Many solutions have been proposed for this problem, and all conclude that, unless two models are very close, it is difficult to predict the outcome of a collaboration in advance. To address this problem, the authors of \cite{SublimeL18} propose a collaborative peer-to-peer clustering algorithm based on the principle of non-stochastic multi-armed bandits to assess in real time which algorithms or views can provide useful information.
\subsection{Reinforcement learning}
Autonomous cyber-physical systems play an important role in our lives. To ensure that agents behave in ways aligned with the values of the societies in which they operate, we must develop techniques that allow these agents not only to maximize their reward in an environment, but also to learn and follow the implicit constraints assumed by society. In \cite{noothigattu2018interpretable}, the authors study a setting where an agent can observe behavior traces from members of society but has no access to the explicit set of constraints that give rise to the observed behavior. Instead, inverse reinforcement learning is used to learn such constraints. These constraints are then combined with an orthogonal value function through a contextual orchestrator that chooses between two policies. The contextual bandit orchestrator allows the agent to mix the policies in novel ways, taking the best actions from a reward-maximizing policy.
\subsection{Bandits for machine learning: \\ summary and future directions}
\begin{table}[]
\scriptsize
\caption{Bandits for machine learning}
\label{tab:ML}
\begin{tabular}{|l|r|l|l|l|}
\hline
 & MAB & Non-stat. & CMAB & Non-stat. \\
 &     & MAB       &      & CMAB \\
\hline
Algorithm selection & & $\surd$ & & \\ \hline
Hyperparameter optimization & $\surd$ & & & \\ \hline
Feature selection & $\surd$ & $\surd$ & & \\ \hline
Active learning & $\surd$ & & $\surd$ & \\ \hline
Clustering & $\surd$ & & & \\ \hline
RL & $\surd$ & $\surd$ & $\surd$ & \\ \hline
\end{tabular}
\end{table}
Table \ref{tab:ML} summarizes the types of bandit problems used to address the machine learning problems mentioned above. We see, for example, that the contextual bandit has not been used for hyperparameter selection. This observation could indicate a direction for future work, where side information could be exploited in hyperparameter selection. Furthermore, the non-stationary bandit has rarely been considered in these problem settings, which also suggests possible extensions of current work. For example, the non-stationary contextual bandit could be useful in a non-stationary feature selection setting, where finding the right features depends on time and context as the environment keeps changing. Our main observation is that each technique solves only one machine learning problem at a time; the question is thus whether a bandit setting and algorithms can be developed to solve several machine learning problems simultaneously, and whether transfer and continual learning can be achieved in this context. One solution could be to model all these problems in a combinatorial bandit framework, where the bandit algorithm would find the optimal solution for each problem at each iteration; in this way, the combinatorial bandit could further serve as a tool to advance machine learning.
\section{Conclusions}
\label{sec:Conclusion}
In this article, we have reviewed some of the most notable recent work on applications of bandits and contextual bandits, both in real-world domains and in machine learning. We summarized, in an organized way (Tables \ref{tab:Life} and \ref{tab:ML}), the various existing applications by the types of bandit settings used, and discussed the benefits of using bandit techniques in each domain. We also briefly outlined several important open problems and promising future extensions.
In summary, the bandit framework, including both the multi-armed bandit and the contextual bandit, is currently a very active and promising research area, with multiple new techniques and applications emerging every year. We hope that this survey can help the reader better understand some key aspects of this exciting field and gain a better perspective on its notable advances and future promise.
\bibliographystyle{ieeetr}
\section{Introduction}
Confinement in QCD and in Yang-Mills theories in general
is associated with long-range (infrared) effects. It is thus
necessary to study the infrared behavior of the theory's Green's
functions using nonperturbative methods. These studies include
numerical (lattice) as well as analytic methods and consider basic
(gauge-dependent) quantities --- such as gluon and ghost propagators
--- in order to test the predictions of the
so-called confinement scenarios (see e.g.\ \cite{Maas:2006qw}
and references therein). In the case of lattice simulations, one
has at one's disposal a true first-principles method, with no uncontrolled
approximations. On the other hand, extreme care must be taken to extract
the true infrared behavior of the propagators from lattice data, since
significant systematic errors may affect the extrapolations that are
needed in order to get physical results. The most important such errors
are Gribov-copy effects\footnote{The problem of Gribov-copy effects has
been extensively studied on the lattice
\cite{Cucchieri:1997dx,Cucchieri:1997ns,Silva:2004bv,Maas:2008ri}.
We comment briefly on this issue in our Conclusions.}
--- related to the fact that the relevant objects
are gauge-dependent quantities --- and finite-size effects. The latter
is especially important in the investigation of the infrared limit, since
the smallest nonzero momentum that can be represented on a lattice of
linear extension $L$ is proportional to $1/L$. Thus, a sensible range of
small momenta can only be properly simulated on a very large lattice.
Here, we consider carefully the elimination of finite-size effects
through a better control of the extrapolation of our data to the
infinite-volume limit, as described below.
The extrapolation of gluon- and ghost-propagator data to
infinite lattice volume is a delicate task, since the correct volume
dependence of the data may not be easily inferred from the behavior
on medium-size lattices and since some quantities, such as the zero-momentum
gluon propagator, are quite noisy. For these reasons it proves very helpful
to obtain constraints on the infrared behavior of the propagators,
as the upper and lower bounds presented here.
We remark that these bounds are valid at each lattice volume $V$ and
must be extrapolated to infinite volume, just as for the propagators.
The additional advantage, besides establishing a range of allowed values
for the propagators, is that the bounds are written in terms of ``friendly''
quantities --- i.e.\ easier to compute, better behaved or more intuitive than
the propagators themselves. It will therefore be more convenient
to study the volume dependence of the bounds first, in order to assess
the volume dependence of the propagators.
We describe and apply the gluon and ghost bounds --- in pure $SU(2)$
theory and Landau gauge --- respectively
in Sections \ref{gluon} and \ref{ghost} below. As can be seen
from our analysis, we obtain a finite nonzero gluon propagator
and a tree-level-like ghost propagator in the infrared limit. Possible
implications of these results for the currently accepted confinement
scenarios are discussed in the Conclusions.
\section{Gluon Bounds}
\label{gluon}
Rigorous upper and lower bounds have been introduced in Ref.\
\cite{Cucchieri:2007rg} for the gluon propagator at zero momentum,
defined as
\begin{equation}
D(0) \; = \; \frac{V}{d (N_c^2 - 1)} \sum_{\mu, b}
\langle | {\widetilde A}^b_{\mu}(0) |^2 \rangle \; ,
\end{equation}
where $ {\widetilde A}^b_{\mu}(p) $ is the Fourier transform of the
gluon field $A^b_{\mu}(x)$ in pure $SU(N_c)$ gauge theory,
$\langle \cdot \rangle$ stands for the
path integral (Monte Carlo) average, $V = N^d$ is the lattice volume
and we consider $d$ space-time dimensions. Let us define the quantity
\begin{equation}
{M}(0) \, = \, \frac{1}{d (N_c^2 - 1)}
\sum_{b,\mu} | {\widetilde A}^b_{\mu}(0) | \; .
\label{eq:mag}
\end{equation}
It is straightforward to show that this quantity is related
to $D(0)$ as
\begin{equation}
V \, {\langle {M}(0) \rangle}^2 \, \leq \; D(0)\;
\leq \; V d (N_c^2 - 1) \, \langle {{M}(0)}^2 \rangle \; ,
\label{eq:Dbounds}
\end{equation}
which provides us with rigorous upper and lower bounds for $D(0)$ that
must be satisfied at every volume $V$.
We can now try to interpret the quantities ${\langle {M}(0) \rangle}^2$ and
$\langle {{M}(0)}^2 \rangle$, to obtain perhaps an understanding of their
volume dependence. We start by noting that if we take the above
``magnetization'' without the absolute value, i.e.\ considering
\begin{equation}
{M}'(0) \, = \, \frac{1}{d (N_c^2 - 1)}
\sum_{b,\mu} {\widetilde A}^b_{\mu}(0) \; ,
\end{equation}
we get a null Monte Carlo average: ${\langle {M}'(0) \rangle} = 0$.
Because of the absolute value, the quantity defined in Eq.\ (\ref{eq:mag})
has a nonzero average at finite $V$, but it should go to zero
at least as fast as $V^{-1/d}$, as shown in \cite{Zwanziger:1990by}.
We now note that
$\, V \langle {{M}(0)}^2 \rangle $
is essentially the susceptibility associated with the magnetization ${M}'(0)$
(since the average of this magnetization is zero).
For a $d$-dimensional spin system one thus
expects to see $ \,V \langle {{M}(0)}^2 \rangle \sim const$, i.e.\ the
statistical variance of the magnetization is proportional to the inverse
of the volume, a behavior known as {\em self-averaging}.
At the same time, considering the statistical fluctuations in the Monte Carlo
sampling of ${M}(0)$, we would expect ${\langle {M}(0) \rangle}^2 $ to have
the same volume dependence
as $\langle {{M}(0)}^2 \rangle$ \cite{Cucchieri:2007rg}.
The simple statistical argument presented above suggests that both
${\langle {M}(0) \rangle}^2$ and $\langle {{M}(0)}^2 \rangle$ should show a
volume dependence as $1/V$, implying (for $d>2$) a much stronger approach
to zero than the limiting behavior for $M(0)$ obtained in \cite{Zwanziger:1990by}
and mentioned earlier. On the other hand, the suppression with $1/V$ is
compensated by the volume factor for both bounds in Eq.\ (\ref{eq:Dbounds}).
Consequently, if this suggested behavior for the susceptibilities is verified,
$D(0)$ converges to a nonzero constant in the infinite-volume limit.
Note that the bounds in Eq.\ (\ref{eq:Dbounds}) apply to any gauge and
that they can be immediately extended to the case $D(p)$ with $p \neq 0$.
We have investigated the volume dependence of the bounds for pure $SU(2)$
gauge theory in Landau gauge, considering physical lattice volumes of up to
$\,a^4 V \approx (27 \,\mbox{fm})^4$. We find remarkably good agreement with
the predicted $1/V$ behavior for
${\langle {M}(0) \rangle}^2$ and $\langle {{M}(0)}^2 \rangle$, as can be seen
in plots and tables in Ref.\ \cite{Cucchieri:2007rg}. More precisely, by
fitting the two quantities to $1/V^{\alpha}$ we get the
exponents $\alpha$ respectively 0.995(10) and 0.998(10).
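The kind of fit quoted above can be sketched as an ordinary least-squares line in log-log variables: the exponent $\alpha$ in $y \sim C/V^{\alpha}$ is minus the slope of $\log y$ versus $\log V$. The data below are synthetic ($\alpha = 1$ plus small noise), standing in for $\langle M(0)\rangle^2$ measured at several lattice volumes.

```python
import math
import random

# Extract the power-law exponent alpha in y ~ C / V^alpha
# from a least-squares straight line in log-log variables.
def fit_power_law(vols, ys):
    xs = [math.log(v) for v in vols]
    ls = [math.log(y) for y in ys]
    n = len(xs)
    mx = sum(xs) / n
    ml = sum(ls) / n
    slope = sum((x - mx) * (l - ml) for x, l in zip(xs, ls)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope            # y ~ V^slope, so alpha = -slope

rng = random.Random(1)
vols = [16 ** 4, 24 ** 4, 32 ** 4, 48 ** 4, 64 ** 4]   # illustrative volumes
ys = [3.0 / v * math.exp(rng.gauss(0.0, 0.02)) for v in vols]
alpha = fit_power_law(vols, ys)
```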
Analogously, an analysis for the $SU(3)$ case (considering somewhat smaller
volumes) yields the exponents 1.058(6) and 1.056(6) \cite{Oliveira:2008uf}.
A similar behavior is also obtained by a study introducing a modified
gauge-fixing procedure (in order to check for possible Gribov-copy effects)
\cite{flip}.
Finally, a finite nonzero gluon propagator has been recently
obtained using improved actions and anisotropic lattices \cite{Gong:2008td}.
We remark that this behavior has also been clearly observed on very large lattices
in $3d$ \cite{Cucchieri:2003di,Cucchieri:2007rg}
but not in $2d$ \cite{Maas:2007uv,Cucchieri:2007rg}.
\section{Ghost Bounds}
\label{ghost}
Rigorous lower and upper bounds for the ghost propagator $G(p)$
were proposed in \cite{Cucchieri:2008fc}. We recall that $G(p)$ is
given by the inverse of the Faddeev-Popov (FP) matrix ${\cal M}$
and that an infrared enhancement of $G(p)$ with respect to the tree-level
ghost propagator $G(p)\sim p^{-2}$ is generally expected as a sign of
confinement. By straightforward calculations --- independent of the ones
performed in the gluon case --- we can establish bounds for the ghost
propagator. In Landau gauge, for any nonzero momentum $p$, one finds
\begin{equation}
\frac{1}{N_c^2 - 1} \, \frac{1}{\lambda_{min}} \, \sum_a \,
| {\widetilde \psi_{min}(a,p)} |^2 \,
\leq \, G(p) \, \leq \, \frac{1}{\lambda_{min}} \; ,
\label{eq:Gineq}
\end{equation}
where $\lambda_{min}$ is the smallest nonzero eigenvalue of the FP
operator ${\cal M}$ and ${\widetilde \psi_{min}(a,p)}$ is the corresponding
eigenvector. Note that the upper bound is independent of the momentum $p$.
If we now assume $\lambda_{min}\sim L^{-\nu}$ and
$\,G(p) \sim p^{-2-2\kappa}$ at small $p$, we have that
$2+2\kappa \leq \nu$, i.e.\ $\nu > 2$, is a necessary condition for
the infrared enhancement of $G(p)$. A similar analysis can be carried
out \cite{Cucchieri:2006hi} for a generic gauge condition.
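The structure of Eq.\ (\ref{eq:Gineq}) can be illustrated with a two-dimensional toy matrix (not the lattice FP operator): for a symmetric positive matrix $M = \sum_i \lambda_i v_i v_i^T$ and a unit vector $e$, the ``propagator'' $G = e^T M^{-1} e$ satisfies $|\langle v_{min}, e\rangle|^2/\lambda_{min} \le G \le 1/\lambda_{min}$. The eigenvalues and rotation angle below are arbitrary.

```python
import math

# Toy 2x2 spectral-decomposition check of the ghost-bound structure.
def toy_ghost_bounds(lam, theta):
    # eigenvalues lam = (lam_min, lam_max), eigenvectors rotated by theta
    c, s = math.cos(theta), math.sin(theta)
    v_min = (c, s)
    v_max = (-s, c)
    e = (1.0, 0.0)                        # unit "plane wave" direction
    proj_min = v_min[0] * e[0] + v_min[1] * e[1]
    proj_max = v_max[0] * e[0] + v_max[1] * e[1]
    g = proj_min ** 2 / lam[0] + proj_max ** 2 / lam[1]   # e^T M^{-1} e
    lower = proj_min ** 2 / lam[0]
    upper = 1.0 / lam[0]                  # independent of e, as in the text
    return lower, g, upper

lo, g, up = toy_ghost_bounds((0.1, 3.0), 0.7)
```

The upper bound depends only on $\lambda_{min}$, mirroring the momentum-independent upper bound in Eq.\ (\ref{eq:Gineq}).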
Consider the Gribov region $\Omega$, where all eigenvalues
of ${\cal M}$ are positive. In the infinite-volume limit,
entropy favors configurations near the Gribov horizon $\partial \Omega$
(where $\lambda_{min}$ goes to zero). Thus, inequalities such as (\ref{eq:Gineq})
can tell us if one should expect an enhancement of $G(p)$
when the Boltzmann weight gets concentrated\footnote{For example,
in $4d$ Maximally Abelian gauge one sees
that $\lambda_{min}$ goes to zero at large volume but the ghost propagator
stays finite at zero momentum \cite{MAG}.} on $\partial \Omega \,$.
Our study in the $SU(2)$ Landau case \cite{Cucchieri:2008fc}, using the very large
lattices mentioned in the previous section, suggests that $\nu<2$ (for $d=4$).
This tree-level-like behavior is confirmed if one considers the dressing function
$p^2 G(p)$. Indeed, the data for the dressing function
can be well fitted by $a - b \log(1 + c p^2)$ \cite{Cucchieri:2008fc},
supporting $\kappa = 0$.
This is also observed in $d=3$. For $d=2$ enhancement is observed, with
a behavior $\sim p^{-2 \kappa}$ and $\kappa$ between 0.1 and 0.2
\cite{Maas:2007uv,Cucchieri:2008fc}.
\section{Conclusions}
By using rigorous upper and lower bounds to
constrain the infrared behavior of gluon and ghost propagators,
we obtain a finite nonzero gluon propagator at zero momentum and
an essentially constant ghost dressing function in the infrared
limit. These results seem to contradict the commonly
accepted confinement scenarios of Gribov-Zwanziger and Kugo-Ojima
\cite{Cucchieri:2006xi}. However, as pointed out in \cite{Cucchieri:2008yp},
the above results are not completely in disagreement with the
Gribov-Zwanziger approach.
In particular, it has been recently shown \cite{3d4d} that using
the Gribov-Zwanziger approach, i.e.\ by restricting the functional integration
to the Gribov region $\Omega$, one can also obtain in $3d$ and
$4d$ a finite nonzero gluon propagator and a tree-level-like ghost propagator
in the infrared limit. It is interesting that the same approach cannot be
applied to the $2d$ case \cite{Dudal:2008xd}.
Let us also note \cite{Cucchieri:2008yp} that even though the
Gribov-Zwanziger and the Kugo-Ojima confinement scenarios seem to predict
similar infrared behavior for the propagators, it is not clear how to relate the
(Euclidean) cutoff at the Gribov horizon to the (Minkowskian) approach of
Kugo-Ojima \cite{Zwanziger:2003cf}.
Similar results for the gluon and ghost propagators are
obtained by various groups using very large lattice
volumes \cite{largevolume}, both in the $SU(2)$ and in the $SU(3)$ cases.
[The equivalence between the infrared propagators in $SU(2)$ and $SU(3)$
gauge theories can be seen e.g.\ in \cite{Cucchieri:2007zm}.]
Of course, one should also recall that the region $\Omega$ is actually not
free of Gribov copies and that the configuration space should be identified
with the so-called fundamental modular region $\Lambda$. On the other hand,
the restriction to $\Lambda$ and the numerical
verification of the Gribov-Zwanziger scenario are separate issues
\cite{Cucchieri:2008yp}.
Indeed, this scenario is based on the restriction of
the configuration space to the region $\Omega$, which includes $\Lambda$.
Finally, as explained in \cite{Cucchieri:1997dx}, the restriction to
$\Lambda$ can only make the ghost propagator less singular, as confirmed by
recent lattice data \cite{Maas:2008ri}.
\noindent{\bf Acknowledgements}
The authors acknowledge partial support from the Brazilian Funding
Agencies FAPESP and CNPq. T.M. also thanks the Theory Group at DESY-Zeuthen
for hospitality and the Alexander von Humboldt Foundation for financial
support.
\section{INTRODUCTION}
Traditionally, the model of a two-level system (TLS) is used to
describe the interaction of electromagnetic radiation with atoms
\cite{mand}. This model is quite reasonable if the radiation frequency
and the transition frequency of the corresponding two levels are very
close to each other. It was suggested to utilize Rydberg atoms
controlled by electromagnetic fields as qubits in quantum
information technologies \cite{s3}. There are other implementations
of qubits as well. Trapped ions \cite{s1},\cite{s2}, semiconductor
quantum dots \cite{s4}, and superconducting Josephson junctions can be
used for this purpose. Qubits based on Josephson junctions are
now recognized as the most promising for the realization of quantum
information processing devices (see, for example, Refs.
\cite{urb}-\cite{wallr}). There is a technological opportunity to
couple qubits via transmission lines. Individual photons can act as
transmitters of quantum states between remote qubits.
The aforesaid illustrates the motivation to study a TLS coupled to
transmission lines. Atoms with a large dipole moment (Rydberg
atoms), as well as transmission lines (including optical waveguides)
that concentrate the radiation energy in small volumes, are used to
increase the coupling. Strong interaction is desirable for many
applications whose aim is to achieve an effective influence of one
subsystem on the other. At the same time, an increase of the interaction
results in a more pronounced nonlinearity of the system, which
complicates theoretical analysis. Therefore many theoretical results
have been obtained only numerically (see, for example,
\cite{dro}-\cite{buz}). Fortunately, the analysis can be simplified
considerably for some particular states of the system. First of all,
a single-photon Fock state of the incident radiation should be
mentioned (see, for example, the recent papers \cite{she}-\cite{fan}).
Matters also simplify if the incident light is in a coherent
state. For example, Ref. \cite{fan} deals with radiation which is
initially in a single-mode coherent state. The much earlier paper
\cite{dom} considers multimode coherent-state pulses, which are more
general and more important for applications. The results of Ref.
\cite{dom} give the possibility not only to obtain the reflectance
and transmittance of a wave packet but also to study the spatial
structure of the outgoing radiation and its dependence on the incident
pulse shape. Moreover, paper \cite{dom} describes an effective
photon-photon "interaction" induced by the coupling of the radiation
with atoms.
Different formalisms are used in the cited papers. The
scattering-matrix method \cite{shi}, which is equivalent to the
input-output formalism \cite{wal}, is applied to photon scattering by a
TLS in \cite{fan}. The authors of \cite{dom} use an alternative approach,
based on the calculation of Poynting vectors, to study similar physical
systems.
Recently we have applied the method of the photon phase-space
distribution function to the problem of light propagation in the Earth
atmosphere \cite{ber}, \cite{ber1}. In the present paper we use this
method for the description of light propagation in waveguides. We obtain
the spatial structure and the spectrum of the transmitted and reflected
radiation, which are useful for the design of radiation with desired
properties. Besides, we analyze the physical nature of the phase-space
distribution functions. It is shown that they, like the $Q$- or
$W$-distributions, can be negative for some specific parameters of the
incident radiation.
Equations describing fluctuations of the outgoing photons are also
derived and solved. We show that the variance of the reflected
radiation may be essentially lower than that of a coherent-state
pulse. Thus, few-photon pulses with favorable statistical properties
can be generated in the course of the radiation-TLS interaction.
In the next Section, one-dimensional distribution functions are
defined in terms of the creation and annihilation operators of the
waveguide modes. The standard Hamiltonian describing light propagation
and interaction with the TLS is used to derive the evolution equations.
\section{HAMILTONIAN AND PHOTON DISTRIBUTION FUNCTIONS}
We consider a model Hamiltonian describing a two-level atom coupled
to a single-polarization waveguide. The waveguide modes are assumed
to form a one-dimensional continuum. Then the Hamiltonian is given
by ($\hbar=1$)
\begin{equation}\label{one}
H = \int dk(\omega_k^ll_k^\dag l_k + \omega_k^rr_k^\dag r_k)+
\frac {\omega_a}2\sigma_z+g\int dk\big [\sigma_+(l_k+ r_k)+
(l_k^\dag +r_k^\dag)\sigma_-\big],
\end{equation}
where $l_k$ and $r_k$ are the annihilation operators of photons
propagating from the left side to the right side and vice versa,
respectively. Photon frequencies are denoted correspondingly by
$\omega_k^{l,r}$. Notations $l_k^\dag$ and $r_k^\dag$ stand for the
creation operators. For a symmetric waveguide, the dispersions
linearized in $k$ in the vicinity of $\omega^{l,r}=\omega_0$ are given by
$\omega_k^{l,r}=\omega_0\pm vk$, where $v$ and $-v$ ($v>0$) are the
velocities of waves propagating from the left and from the right,
respectively. The atomic operators $\sigma _z$ and $\sigma _\pm$ are
defined by Pauli matrices: $\sigma_{\pm}=\frac 12(\sigma_x\pm
i\sigma_y)$, $\sigma_+\sigma_-=(\sigma_z+1)/2$.
Field variables follow the usual bosonic commutation rules
\[ [l_k,l_{k^\prime} ^\dag]=[r_k,r_{k^\prime}^\dag]=\delta (k-k^\prime),\]
while the rest of the commutators vanish. The field variables also
commute with the atomic variables.
The first term in the right side of Eq. (\ref{one}) describes the
electromagnetic field in the waveguide. The second term is the
Hamiltonian of a two-level atom with transition frequency
$\omega_a$. The third term describes the radiation-atom interaction
whose strength is determined by the parameter $g$. It is assumed that
the atom is positioned at the origin of the coordinate system, which
makes the Hamiltonian explicitly independent of the atom
coordinate. The interaction is written in the rotating-wave
approximation. Recently an approach free of this widely
used constraint has been developed in Ref. \cite{bez}.
Photons moving from the left can be described by their density in
the phase space ($x,q$-space). The corresponding function is defined
as
\begin{equation}\label{two}
f^l(x,q,t)=\frac 1{2\pi}\int dke^{-ikx}l^\dag _{q+k/2}l_{q-k/2},
\end{equation}
where all operators are given in the Heisenberg picture. The
distribution function (\ref{two}) is defined by analogy with the 3D
case (see more details in Ref. \cite{sus}). By integrating
(\ref{two}) over $x$ we obtain the density of $l$-photons in the
momentum space:
\begin{equation}\label{thr}
\hat{n}^l(q,t)\equiv \int dxf^l(x,q,t)=l^\dag _{q}(t)l_{q}(t).
\end{equation}
Taking into account the linear dependence of $\omega^l$ on $k$,
we can conclude that the average value
$\langle \hat{n}^l(q,t)\rangle$ determines the spectral distribution
of photons moving from the left. The spectrum can be obtained from
Eq. (\ref{thr}) by changing $q\rightarrow(\omega-\omega_0)/v$.
Similarly we can express the photon density in the coordinate
space, $\hat{\rho}_l(x,t)$, in terms of the distribution function,
$f^l(x,q,t)$, as:
\begin{equation}\label{fou}
\hat{\rho}_l(x,t)\equiv\int dqf^l(x,q,t)=\frac 1{2\pi}\int
dqdke^{-ikx} l^\dag _{q+k/2}l_{q-k/2}.
\end{equation}
Furthermore, by integrating $\hat{\rho}_l(x,t)$ over $x$ in the
range of localization of the transmitted pulse, we obtain the
operator of the total number of transmitted photons, $\hat{N}_l$, as
\begin{equation}\label{fiv}
\hat{N}_l(t)=\int dx\hat{\rho}_l(x,t)=\int dql^\dag _q(t)l_q(t).
\end{equation}
Expression (\ref{fiv}) can be used for obtaining both transmittance
and fluctuations of the transmitted photons.
Similar relationships for the $r$-photons follow from Eqs.
(\ref{two})-(\ref{fiv}) by replacing $l\rightarrow r$.
Characteristics of the outgoing radiation depend on the initial
state of the system and on the evolution of the above-mentioned
operators.
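A minimal numerical sketch of these definitions (all parameter values illustrative): for a single photon with the Gaussian mode amplitude $\phi(k)\propto e^{-ikx_0-k^2w^2/2}$ used below, the average of the distribution (\ref{two}) is Gaussian in both $x$ and $q$, and integrating $|\phi(k)|^2$ over $k$ returns the total photon number, here one.

```python
import numpy as np

w, x0 = 1.0, -10.0                 # illustrative pulse width and center
kk = np.linspace(-12.0, 12.0, 6001)
dk = kk[1] - kk[0]

def phi(k):
    # Gaussian single-photon amplitude, normalized to ∫|phi|^2 dk = 1
    return (np.sqrt(w) / np.pi**0.25) * np.exp(-1j * k * x0 - k**2 * w**2 / 2)

def f_avg(x, q):
    # <f^l(x,q)> = (1/2pi) ∫ dk e^{-ikx} phi*(q+k/2) phi(q-k/2)
    integrand = np.exp(-1j * kk * x) * np.conj(phi(q + kk / 2)) * phi(q - kk / 2)
    return (integrand.sum() * dk / (2 * np.pi)).real

x, q = -10.3, 0.2
expected = np.exp(-(x - x0)**2 / w**2 - q**2 * w**2) / np.pi
assert abs(f_avg(x, q) - expected) < 1e-6

# total photon number: ∫ dk |phi(k)|^2 = 1
n_tot = (np.abs(phi(kk))**2).sum() * dk
assert abs(n_tot - 1.0) < 1e-6
```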
\section{EVOLUTION EQUATIONS}
Evolution of the system variables is governed by a set of coupled
Heisenberg equations
\begin{equation}\label{six}
(\partial_t+i\omega_q^l)l_q=-ig\sigma_-,
\end{equation}
\begin{equation}\label{sev}
(\partial_t+i\omega_q^r)r_q=-ig\sigma_-,
\end{equation}
\begin{equation}\label{eig}
(\partial_t+i\omega_a)\sigma_-=ig\sigma_z\int dq(l_q+r_q),
\end{equation}
\begin{equation}\label{nin}
\partial_t\sigma_z=-i2g\int
dq[\sigma_+(l_q+r_q)-(l_q^++r_q^+)\sigma_-].
\end{equation}
Equations for variables $l_q^\dag ,r_q^\dag , \sigma_+$ can be
obtained by Hermitian conjugation of Eqs. (\ref{six})-(\ref{eig}).
Following Ref. \cite{wal} we represent a formal solution of Eq.
(\ref{six}) as
\begin{equation}\label{ten}
l_q(t)=\tilde{l}_q(t)-ig\int_{t_0}^tdt^\prime
e^{-i\omega_q^l(t-t^\prime)}\sigma_-(t^\prime),
\end{equation}
where $\tilde{l}_q(t)=l_q(t_0)e^{-i\omega_q^l(t-t_0)}$ and $t>t_0$.
It is assumed that the pulse, localized at $t=t_0$ to the left of
the atom, moves to the right. The first term in the right side of
Eq. (\ref{ten}) describes the free-field propagation, while the
second one represents the atom radiation. By integrating Eq.
(\ref{ten}) over $q$, we obtain a useful relationship
\begin{equation}\label{ele}
\int dql_q(t)=\int dq\tilde{l}_q(t)-\frac {i\pi g}v\sigma_-(t),
\end{equation}
which is widely used in the literature. Then evolution of the atomic
operators is governed by the equations
\begin{equation}\label{tve}
(\partial_t+i\omega_a+\Gamma/2)\sigma_-=ig\sigma_z\int
dq(\tilde{l}_q+\tilde{r}_q),
\end{equation}
\begin{equation}\label{thir}
(\partial_t+\Gamma)(\sigma_z+1)=-i2g\int
dq[\sigma_+(\tilde{l}_q+\tilde{r}_q)-(\tilde{l}_q^++\tilde{r}_q^+)\sigma_-],
\end{equation}
where $\Gamma=4\pi g^2/v$ and the tilde denotes the same free time
dependence as in $\tilde{l}_q(t)$. If we consider the
tilded variables as given functions, then we have a closed set of
linear equations for the atomic variables
$\sigma_\pm(t),\sigma_z(t)$. Using Eq. (\ref{tve}) we can exclude the
variables $\sigma_\pm$ from Eq. (\ref{thir}). When
$\Gamma(t-t_0)\gg1$ we get
\begin{equation}\label{four}
(\partial_t+\Gamma)(\sigma_z+1)
\end{equation}
\[=-2g^2\int dq
dk\int_{t_0}^tdt^\prime[e^{(i\omega_a-\Gamma/2)(t-t^\prime)}(\tilde{l}^\dag_k+
\tilde{r}^\dag_k)_{t^\prime}\sigma_z(t^\prime)(\tilde{l}_q+
\tilde{r}_q)_t+H.c.].\]
The distribution function and the atom
variables $\sigma_\pm(t)$ are related by \[f^l(x,q,t)=\frac
1{2\pi}\int dke^{-ikx}[\tilde{l}^\dag_{q+k/2}(t)+
ig\int_{t_0}^tdt^\prime
e^{i\omega_{q+k/2}^l(t-t^\prime)}\sigma_+(t^\prime)]\]
\begin{equation}\label{fift}
\times[\tilde{l}
_{q-k/2}(t)-
ig\int_{t_0}^tdt^\prime
e^{-i\omega_{q-k/2}^l(t-t^\prime)}\sigma_-(t^\prime)].
\end{equation}
Eq. (\ref{fift}) follows directly from Eqs. (\ref{two}) and
(\ref{ten}). By integrating over $q$, we obtain the expression for
the photon density
\begin{equation}\label{fift1}
\hat{\rho}_l(x,t)=\tilde{\rho}_l(x,t)+\frac
\Gamma{4v}\Sigma_{t-x/v}+i\frac gv\int
dq(\sigma_+\tilde{l}_q-\tilde{l}_q^\dagger\sigma_-)_{t-x/v},
\end{equation}
where $\Sigma\equiv\sigma_z+1$, $x>0$ and $\tilde{\rho}_l(x,t)$ is
presented in terms of the "free" operators $\tilde{l}(t)^\dagger$
and $\tilde{l}(t)$. The reflected photons can be described by the
operator
\begin{equation}\label{fift2}
\hat{\rho}_r(x,t)=\tilde{\rho}_r(x,t)+\frac
\Gamma{4v}\Sigma_{t+x/v}+i\frac gv\int
dq(\sigma_+\tilde{r}_q-\tilde{r}_q^\dagger\sigma_-)_{t+x/v},
\end{equation}
where $x<0$ and $\tilde{\rho}_r(x,t)$ is defined via
$\tilde{r}(t)^\dagger $ and $\tilde{r}(t)$. In what follows we will
omit the term $\tilde{\rho}_r(x,t)$ because of the absence of
photons propagating from the right at $t=t_0$.
\section{SINGLE-PHOTON FOCK STATE}
We consider the simplest situation when the incident Gaussian wave
packet contains only one photon distributed among the waveguide modes.
For this photon propagating from the left, the single-photon Fock
state can be defined as \cite{mand}
\begin{equation}\label{sixt}
|1_l\rangle=\frac {w^{1/2}}{\pi^{1/4}}\int
dke^{-ikx_0}e^{-k^2w^2/2}l^\dag_k(t_0)|0\rangle ,
\end{equation}
where $|0\rangle$ is the vacuum state of the system. The coefficient before the
integral is the normalization constant.
The average value of the initial distribution function is given by
\begin{equation}\label{seve}
\langle 1_l|f^l(x,q,t_0)|1_l\rangle=\frac 1\pi
e^{-(x-x_0)^2/w^2}e^{-q^2w^2} .
\end{equation}
It follows from Eq. (\ref{seve}) that $w$ can be interpreted as the
width of a pulse centered at $x=x_0$. Moreover, the pulse spread in
the momentum space, $\Delta q$, is of the order of $1/w$. Before
photons reach the ground-state atom their distribution function
evolves as
\begin{equation}\label{eite}
\langle 1_l|f^l(x,q,t)|1_l\rangle=\frac 1\pi
e^{-X^2(t)/w^2}e^{-q^2w^2} ,
\end{equation}
where $X(t)=x-x_0-v(t-t_0)$. To obtain the average $\langle
f^l(x,q,t)\rangle$ in the domain $v(t-t_0)>-x_0$, we should use its
general form (\ref{fift}). Simple calculations result in the
following average distribution function of the transmitted signal:
\begin{equation}\label{nine}
\langle 1_l|f^l(x,q,t)|1_l\rangle=\frac w{2\pi^{3/2}}\int dk
e^{-ikX(t)}e^{-(q^2+k^2/4)w^2}
\end{equation}
\[\times\bigg[1-\frac \Gamma 2\frac
{\Gamma/2+ikv}{(\omega_{a0}-qv)^2-(kv-i\Gamma)^2/4} \bigg ],\] where
$\omega_{a0}=\omega_a-\omega_0$.
For obtaining Eq. (\ref{nine}) the relations
\begin{equation}\label{tvan}
\sigma_-(t)|1_l\rangle =\big[\langle 1_l|\sigma_+(t)\big]^\dag
\end{equation} \[
=-ig\bigg(\frac {4\pi}{w^2}\bigg)^{1/4}
e^{-i\omega_0(t-t_0)}\int_{t_0}^tdt^\prime
e^{-(i\omega_{a0}+\Gamma/2)(t-t^\prime)}
e^{-[x_0+v(t^\prime-t_0)]^2/2w^2}|0\rangle\] are used. It is also
assumed that the atom has had sufficient time to relax to the ground
state. This is the case if
\begin{equation}\label{a}
v(t-t_0)+x_0\gg w+v/\Gamma.
\end{equation}
By integrating Eq. (\ref{nine}) over $x$ we get
\begin{equation}\label{tvon}
\langle \hat{n}^l(q,t)\rangle =\frac w{\pi^{1/2}}
e^{-q^2w^2}\bigg[1- \frac
{\Gamma^2/4}{(\omega_{a0}-qv)^2+\Gamma^2/4} \bigg ].
\end{equation}
\begin{figure}[!ht]
\centering
\includegraphics{figure_1.eps}
\caption{(Color online) Photon distributions $\langle
f^{l}(x,q,t)\rangle$ ($x>0$) and $\langle f^{r}(x,q,t)\rangle$
($x<0$) as functions of the phase-space variables, $x$ and $q$.
Calculations are performed for $t=20,\,t_0=0,\,\Gamma=1$,$\,
\omega_{a0}=0$, and $x_0=-10$. Quantities $x,q,t$, and $\Gamma$ are
given in units of $w$ (pulse width), $w^{-1}$ (inverse pulse
width), $w/v$ (pulse duration), and $v/w$ (inverse pulse
duration), respectively.}
\end{figure}
The second term in the square brackets describes the
resonant reflection of the waves. This process is efficient when
$|\omega_{a0}-qv|\equiv|\omega_a-\omega^l_q|\leq\Gamma/2.$ If
$\omega_{a0}\neq 0$, the spectrum of the transmitted field is
asymmetric, in contrast to the spectrum of the incident
radiation. In fact, Eq. (\ref{tvon}) describes the filtering properties
of the atom. It is straightforward to generalize Eq. (\ref{tvon})
to an arbitrary shape of the incident pulse.
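Eq. (\ref{tvon}), together with the reflected spectrum of Eq. (\ref{tvfi}) below, implies mode-by-mode photon-number conservation: the Lorentzian weight removed from the transmitted spectrum at momentum $q$ reappears in the reflected spectrum at $-q$. A minimal numerical sketch (illustrative parameter values, units as in Fig. 1):

```python
import numpy as np

w, v, Gamma, wa0 = 1.0, 1.0, 1.0, 0.5       # illustrative values
q = np.linspace(-4.0, 4.0, 801)

incident = w / np.sqrt(np.pi) * np.exp(-q**2 * w**2)
lorentz = (Gamma**2 / 4) / ((wa0 - q * v)**2 + Gamma**2 / 4)

# transmitted spectrum, Eq. (tvon): dip around the resonance
n_l = incident * (1 - lorentz)

# reflected spectrum, Eq. (tvfi): Lorentzian peak at the flipped momentum
n_r = (Gamma**2 * w / (4 * np.sqrt(np.pi))) * np.exp(-q**2 * w**2) \
      / ((wa0 + q * v)**2 + Gamma**2 / 4)

# conservation mode by mode: n_l(q) + n_r(-q) equals the incident spectrum
assert np.allclose(n_l + n_r[::-1], incident)
```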
The average photon density is
\begin{equation}\label{tvtw}
\langle \hat{\rho}_l(x,t)\rangle =\frac 1{\pi^{1/2}w} \bigg
|e^{-X^2(t)/2w^2}-\frac {i\Gamma w}{2^{3/2}\pi^{1/2}}\Phi
[X(t)]\bigg|^2 ,
\end{equation}
where \[\Phi(x)=\int dq\frac{e^{-iqx-q^2w^2/2}}{\omega_{a0}-qv+i
\Gamma/2}.\] For a very short incident pulse, $\Gamma w/2v\ll1$, the
contribution of the term with $\Phi$ is negligible regardless of the
value of $\omega_{a0}$. This means that the reflection is small in this
case.
In the opposite case, $(\Gamma w/2v)\gg1$, we obtain the photon
density
\begin{equation}\label{tvth}
\langle \hat{\rho}_l(x,t)\rangle =\frac 1{\pi^{1/2}w}
e^{-X^2(t)/w^2}\bigg|1-(1-i2\omega_{a0}/\Gamma)^{-1}\bigg|^2,
\end{equation}
which is equal to zero when $\omega_{a0}=0$. Hence, this is the case
of full reflection. Nevertheless, for large detuning,
$2\omega_{a0}/\Gamma\gg1$, the radiation-atom interaction vanishes
resulting in almost full transmission.
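The full-reflection limit can be checked by direct numerical evaluation of Eq. (\ref{tvtw}) (a sketch with illustrative parameter values): on resonance, $\omega_{a0}=0$, and for $\Gamma w/2v\gg1$, the transmitted density near the pulse center is suppressed by orders of magnitude relative to the incident one.

```python
import numpy as np

w, v, wa0 = 1.0, 1.0, 0.0
Gamma = 50.0           # long-pulse regime: Gamma*w/(2v) >> 1 (illustrative)
X = 0.3                # point near the pulse center

q = np.linspace(-40.0, 40.0, 200_001)
dq = q[1] - q[0]

def Phi(x):
    # Phi(x) = ∫ dq e^{-iqx - q^2 w^2/2} / (wa0 - q v + i Gamma/2)
    return (np.exp(-1j * q * x - q**2 * w**2 / 2)
            / (wa0 - q * v + 1j * Gamma / 2)).sum() * dq

amp = np.exp(-X**2 / (2 * w**2)) \
      - 1j * Gamma * w / (2**1.5 * np.sqrt(np.pi)) * Phi(X)
rho_l = abs(amp)**2 / (np.sqrt(np.pi) * w)

# the incident density at the same point would be ~ exp(-X^2/w^2)/(sqrt(pi) w)
assert rho_l < 1e-2
```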
The distribution function of the reflected radiation is given by
\begin{equation}\label{tvfo}
\langle 1_l|f^r(x,q,t)|1_l\rangle=\frac {\Gamma^2w}{8\pi^{3/2}}\int
dk e^{-ikX^r(t)}\frac {e^{-(q^2+k^2/4)w^2}}
{(\omega_{a0}+qv)^2-(kv+i\Gamma)^2/4},
\end{equation}
where
$X^r(t)=x+x_0+v(t-t_0)$.
Typical distribution functions, $\langle f^{l,r}(x,q,t)\rangle$, are
shown in Fig. 1. In contrast to the initial positive distribution
(\ref{seve}), a region with negative values of $\langle
f^{r}(x,q,t)\rangle$ can be seen here, which indicates the nonclassical
nature of the reflected radiation. The physical quantity $\langle
f^{l,r}(x,q,t)\rangle$ can be interpreted as a quasiprobability
rather than a probability of the photon distribution in the phase
space.
\begin{figure}[!ht]
\centering
\includegraphics{figure_2.eps}
\caption{(Color online) Photon density in the configuration-space.
Gray, red dashed, and blue dot-dashed curves are shown for $\Gamma$
equal to $0.1, 0.5$ , and $1$, respectively. Other parameters are as
in Fig. 1.}
\end{figure}
Using Eq. (\ref{tvfo}) we obtain the coordinate and momentum
distributions of the reflected photons as
\begin{equation}\label{tvfi}
\langle \hat{\rho}_r(x,t)\rangle =\frac {\Gamma^2w}{8\pi^{3/2}}
\bigg|\Phi[-X^r(t)]\bigg|^2 ,\quad\langle \hat{n}^r(q,t)\rangle
=\frac {\Gamma^2w}{4\pi^{1/2}} \frac {e^{-q^2w^2}}
{(\omega_{a0}+qv)^2+\Gamma^2/4} .
\end{equation}
The second expression in (\ref{tvfi}) shows the resonant character
of the reflection at $\omega_a-\omega_q^r\sim \Gamma/2$.
As we see from Eqs. (\ref{tvon}), (\ref{tvtw}) and (\ref{tvfi}),
there are no regions with negative values of the photon distributions
$\langle \hat{\rho}_{r,l}(x,t)\rangle$ and $\langle
\hat{n}^{r,l}(q,t)\rangle$. The sets of curves in Figs. 2 and 3 show the
expected tendency: the reflection is larger for stronger interaction.
\begin{figure}[!ht] \centering
\includegraphics{figure_3.eps}
\caption{(Color online) Photon distribution in the momentum space.
$\omega_{a0}=0$ (a,b); $\omega_{a0}=0.5$ (c,d). $\omega_{a0}$ is
given in units of $v/w $. Other notations are as in Figs. 1,2.}
\end{figure}
Asymmetry of the curves with respect to the central point, $q=0$, is
seen in Figs. 3c and 3d.
This is because only the incident photons with $q>0$ can be in
resonance with the atom. Hence, they have the highest probability of
being reflected, thus forming pronounced minima in the $\langle
\hat{n}^{l}(q,t)\rangle$ curves and the corresponding maxima in the
$\langle \hat{n}^{r}(q,t)\rangle$ curves. In the case of negative
values of $\omega_{a0}$, similar plots can be obtained by the formal
replacement $q\rightarrow-q$ in Figs. 3c and 3d.
\section{COHERENT STATE OF THE INCIDENT RADIATION}
An incident Gaussian pulse can be represented by a coherent-state wave
packet. Following the paper \cite{blo}, we define the corresponding
wave function as
\begin{equation}\label{tvsi}
|\Psi\{\alpha\}\rangle=\exp\bigg \{\int dk[\alpha_kl^\dag_k-
\alpha^*_kl_k]\bigg
\}|0\rangle,
\end{equation}
where \[\alpha_k=\pi^{-1/4}(N_0w)^{1/2}e^{-ikx_0-k^2w^2/2}.\] It can
be easily verified that function (\ref{tvsi}) is the eigenfunction
of all annihilation operators:
$l_k|\Psi\{\alpha\}\rangle=\alpha_k|\Psi\{\alpha\}\rangle$. We use
this property in further analysis.
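As a quick numerical sketch (illustrative values of $N_0$, $w$, $x_0$), one can verify that the amplitude $\alpha_k$ is normalized so that the mean photon number of the state (\ref{tvsi}) is $\int dk\,|\alpha_k|^2=N_0$:

```python
import numpy as np

N0, w, x0 = 5.0, 1.0, -10.0          # illustrative values
k = np.linspace(-10.0, 10.0, 4001)
dk = k[1] - k[0]

# coherent-state amplitude alpha_k of Eq. (tvsi)
alpha = np.pi**-0.25 * np.sqrt(N0 * w) * np.exp(-1j * k * x0 - k**2 * w**2 / 2)

# mean photon number of the multimode coherent state: ∫ dk |alpha_k|^2
n_mean = (np.abs(alpha)**2).sum() * dk
assert abs(n_mean - N0) < 1e-6
```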
By averaging the initial distribution function over the state
(\ref{tvsi}) we obtain
\begin{equation}\label{tvse}
\langle\Psi\{\alpha\}|f^l(x,q,t_0)|\Psi\{\alpha\}\rangle =\frac
{N_0}\pi e^{-(x-x_0)^2/w^2}e^{-q^2w^2},
\end{equation}
which is very similar to Eq. (\ref{seve}). The only free parameter,
$N_0$, equal to the average number of photons per pulse, distinguishes
Eq. (\ref{tvse}) from Eq. (\ref{seve}).
We use Eq. (\ref{fift}) to study the photon density of the reflected
and transmitted radiation. Integrating Eq. (\ref{fift}) over $q$ and
using Eq. (\ref{four}), the average density of photons,
$\langle\hat{\rho}_l(x,t)\rangle$, is obtained as
\begin{equation}\label{tvei}
\langle \hat{\rho}_l(x,t)\rangle =\langle \tilde{\rho}_l(x,t)\rangle
-\frac{\Gamma}{4v}\langle \Sigma\rangle_{t-x/v}-\frac
1{2v}\partial_t \langle \Sigma\rangle_{t-x/v}.
\end{equation}
The last two terms in Eq. (\ref{tvei}) describe the atom response and
the interference of this response with the incoming field, respectively.
A similar term for the backward-propagating pulse is given by
\begin{equation}\label{tvni}
\langle \hat{\rho}_r(x<0,t)\rangle = \frac{\Gamma}{4v}\langle
\Sigma\rangle_{t+x/v}.
\end{equation}
Eq. (\ref{tvni}) describes the radiation back-scattered by the atom.
As we see the field distribution in the waveguide is expressed in
terms of the average
$\langle \Sigma\rangle$ which describes an atomic state. After
averaging (\ref{four}) over the
initial wave function, $|\Psi\{\alpha\}\rangle$, we get
\begin{equation}\label{thh}
(\partial_t+\Gamma )\langle \Sigma\rangle_t
=-4g^2p(t)\int_{t_0}^tdt^\prime e^{-\Gamma
(t-t^\prime)/2}p(t^\prime)\langle \sigma_z\rangle_{t^\prime}
\cos[\omega_{a0}(t-t^\prime )],
\end{equation}
where $p(t)=\pi^{1/4}\bigg(\frac{2N_0}w \bigg
)^{1/2}\exp\{-[x_0+v(t-t_0)]^2/2w^2\}$. Eq. (\ref{thh}) should be
supplemented with the initial condition $\langle
\Sigma\rangle_{t_0}=0$.
The integro-differential equation (\ref{thh}) can be transformed
into a differential equation. We consider the simplest case of
$\omega_{a0}=0$. Applying operator $\partial_t$ to both parts of Eq.
(\ref{thh}) we obtain
\begin{equation}\label{toh}
\hat{L}_t\langle \Sigma\rangle =4g^2p^2(t),
\end{equation}
where \[\hat{L}_t=\partial^2_t +\bigg[\frac
32\Gamma+(t-t_e)\frac{v^2}{w^2}\bigg]\partial_t+\bigg[
\frac{\Gamma^2}{2}+(t-t_e)\frac{v^2}{w^2}\Gamma+4g^2p^2(t)\bigg]\]
and $t_e=t_0+|x_0|/v$.
\begin{figure}[!ht]
\centering
\includegraphics{figure_4.eps}
\caption{(Color online) Atom excitation dynamics vs initial photon
number $N_0$ for $\Gamma=1$.}
\end{figure}
In the case of a long pulse, $\Gamma w/2v\gg 1$, the
quasistationary state of $\langle\Sigma\rangle$, given by
\[\langle \Sigma\rangle_{qs}\approx\bigg(
1+\Gamma^2/8g^2p^2(t)\bigg)^{-1},\] can be realized. It follows from
the above expression that for large (small) driving fields,
$p^2(t)\rightarrow\infty$ ($p^2(t)\rightarrow 0$), the value of
$\langle \Sigma\rangle_{qs}$ is equal to $1$ ($0$), in agreement
with the simplest qualitative reasoning. There are damped
oscillations of $\Sigma $ around this state. Their evolution is
governed by Eq. (\ref{toh}) which in the long-pulse limit reduces to
\begin{equation}\label{tth}
\delta \ddot{\Sigma}+\delta\dot{\Sigma}\frac 32\Gamma+ \delta \Sigma
\bigg[\frac {\Gamma^2}2+4g^2p^2(t)\bigg]=0,
\end{equation}
where $\delta\Sigma=\langle \Sigma\rangle -\langle
\Sigma\rangle_{qs}$. Ignoring the dependence of $p$ on $t$ we seek a
solution in the form $\delta\Sigma\sim e^{\lambda t}$. Then the
equation for $\lambda$ is given by
\begin{equation}\label{tthh}
\lambda^2+\lambda\frac32\Gamma+\frac {\Gamma^2}2+4g^2p^2(t)=0.
\end{equation}
It follows from Eq. (\ref{tthh}) that oscillations of $\langle
\Sigma(t)\rangle$ arise only if $(2vp/\pi g)>1$, with an oscillation
decay rate of the order of $3\Gamma/4$.
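A short numerical sketch of Eq. (\ref{tthh}) (illustrative parameter values, with $\Gamma=4\pi g^2/v$ as above) confirms both the oscillation threshold and the decay rate:

```python
import numpy as np

def rabi_roots(Gamma, g, p):
    # roots of lambda^2 + (3/2) Gamma lambda + Gamma^2/2 + 4 g^2 p^2 = 0
    return np.roots([1.0, 1.5 * Gamma, Gamma**2 / 2 + 4 * g**2 * p**2])

v = 1.0
g = 0.2                              # illustrative coupling
Gamma = 4 * np.pi * g**2 / v

for p in (0.05, 2.0):                # below and above the threshold
    lam = rabi_roots(Gamma, g, p)
    oscillating = np.iscomplex(lam).any()
    # oscillations appear exactly when (2 v p / (pi g)) > 1
    assert oscillating == (2 * v * p / (np.pi * g) > 1)
    if oscillating:
        # decay rate of the oscillations is 3*Gamma/4
        assert np.allclose(-lam.real, 0.75 * Gamma)
```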
\begin{figure}[!ht]
\centering
\includegraphics[width=0.7\textwidth]{figure_5.eps}
\caption{(Color online) Photon configuration-space densities: (a)
$N_0 = 0.1$ - gray solid line, $N_0 = 0.5$ - red dashed line, $N_0 =
1$ - blue dash-dot line; (b) $N_0 = 10$ - gray solid line, $N_0 =
25$ - red dashed line, $N_0 = 50$ - blue dash-dot line (reflected
pulses are shown in the inset). Input radiation is in coherent
state, $\Gamma=1$ for all curves.}
\end{figure}
The oscillating behavior of the atom excitations (Rabi oscillations) can
also be seen in Fig. 4, which illustrates typical solutions of Eq.
(\ref{toh}). The most pronounced oscillations occur for larger values
of $\Gamma$ and $N_0$.
The solution of Eq. (\ref{toh}) is also used to obtain the
configuration-space densities of the transmitted and reflected photons.
The calculated data are shown in Fig. 5. Qualitatively, the curves
are similar to those in Fig. 2 if $N_0$ is small (see Fig. 5a).
Pronounced oscillations are present only for the reflected photons
when $N_0$ is sufficiently large (see the inset in Fig. 5b).
The numbers of reflected and transmitted photons are obtained after
integration of the photon densities, $\hat{\rho}_r$ and
$\hat{\rho}_l$,
\begin{equation}\label{tfh}
N_r=\langle \hat{N}_r\rangle =\int_{v(t_0-t)}^0dx\langle
\hat{\rho}_r(x,t)\rangle=\frac\Gamma 4\int_{t_0}^td\tau\langle
\Sigma_\tau \rangle,
\end{equation}
\begin{equation}\label{tfih}
N_l=\langle \hat{N}_l\rangle =\int_0^{v(t-t_0)}dx\langle
\hat{\rho}_l(x,t)\rangle= \int_{t_0}^td\tau v\langle
\tilde{\rho}_l[v(t-\tau),t]\rangle-N_r,
\end{equation}
where the conditions
$\tilde{r}_q|\Psi\rangle=\langle\Psi|\tilde{r}_q^\dagger=0$ are
used. The intervals for integration over $x$ are chosen to be
sufficiently large to cover the regions where the particle densities
differ from zero. The corresponding time interval, $t-t_0$, satisfies
condition (\ref{a}).
The term $-\partial_t \langle \sigma_z\rangle/2v$, which is
important for determining the pattern of the transmitted pulse, does
not contribute to the total number of the transmitted photons
because $\langle \Sigma\rangle$ vanishes at the boundary
points $t_0$ and $t$. The calculated values of $N_r$ and $N_l$ are
shown in Fig. 6 by dashed lines.
To estimate the upper limit of the reflected photon number, $N_r$, the
inequality $\langle \Sigma\rangle<2$ and Eq. (\ref{tfh}) are used.
Thus we have $N_r\leq \Gamma w/(2v)$. The limiting value of $N_r$
does not depend on $N_0$ and can be small even if $N_0\gg1$.
Therefore the reflected radiation can be used as a controllable
source of few-photon pulses. The reflected photons can also be useful,
for example, to probe the atom state or to determine the interaction
parameter $g$.
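As a cross-check (a sketch with illustrative parameter values, not the exact setup of Fig. 6), Eq. (\ref{toh}) can be integrated numerically for $\omega_{a0}=0$ and $N_r$ evaluated from Eq. (\ref{tfh}); the result indeed stays below $N_0$, and the excitation decays after the pulse has passed.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hbar = 1; lengths in units of the pulse width w)
w, v, N0 = 1.0, 1.0, 1.0
Gamma = 1.0
g = np.sqrt(Gamma * v / (4 * np.pi))       # from Gamma = 4*pi*g^2/v
t0, x0 = 0.0, -4.0
t_e = t0 + abs(x0) / v

def p(t):
    return np.pi**0.25 * np.sqrt(2 * N0 / w) \
           * np.exp(-(x0 + v * (t - t0))**2 / (2 * w**2))

def rhs(t, y):
    # second-order ODE L_t <Sigma> = 4 g^2 p^2(t) as a first-order system
    S, dS = y
    B = 1.5 * Gamma + (t - t_e) * v**2 / w**2
    C = Gamma**2 / 2 + (t - t_e) * v**2 / w**2 * Gamma + 4 * g**2 * p(t)**2
    return [dS, 4 * g**2 * p(t)**2 - B * dS - C * S]

t_grid = np.linspace(t0, 20.0, 2001)
sol = solve_ivp(rhs, (t0, 20.0), [0.0, 0.0], t_eval=t_grid,
                rtol=1e-8, atol=1e-10)
Sigma = sol.y[0]

# N_r = (Gamma/4) ∫ <Sigma> dt, cf. Eq. (tfh)
N_r = Gamma / 4 * np.sum(0.5 * (Sigma[1:] + Sigma[:-1]) * np.diff(t_grid))
assert 0.0 < N_r < N0
assert Sigma[-1] < 1e-3
```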
\section{FLUCTUATIONS OF OUTGOING PHOTONS}
Since, according to Eqs. (\ref{thir}), (\ref{fift1}), and
(\ref{fift2}), $\hat{N}_l+\hat{N}_r\equiv \tilde{N}_l$, the noise
properties of the reflected and transmitted photons can be described
by the following variances
\begin{equation}\label{tsih}
\langle \Delta\hat{N}_r^2\rangle \equiv\langle
(\hat{N}_r-N_r)^2\rangle=\langle \hat{N}_r^2\rangle -N_r^2,
\end{equation}
\begin{equation}\label{tseh}
\langle \Delta\hat{N}_l^2\rangle \equiv\langle
(\hat{N}_l-N_l)^2\rangle=\langle
(\tilde{N}_l-\hat{N}_r)^2\rangle-N_l^2,
\end{equation}
where
\begin{equation}\label{tseh1}
\hat{N}_r=\int_{t_0}^t d\tau\bigg[\frac\Gamma4\Sigma +ig\int dq
(\sigma_+\tilde{r}_q-\tilde{r}_q^\dagger\sigma_-)\bigg]_\tau.
\end{equation}
Eqs. (\ref{tsih}) and (\ref{tseh}) represent the mean square deviations of
the photon numbers from their average values $N_r$ and $N_l$. Since
there are no reflected photons when $g=0$, the fluctuations of the
transmitted photons are identical to those of the incident
radiation:
\begin{equation}\label{teih}
\langle \Delta\hat{N}_l^2\rangle=\langle
\Delta\tilde{N}_l^2\rangle=N_l=N_0.
\end{equation}
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{figure_6.eps}
\caption{(Color online) Photon number variances. Solid lines -
obtained using the definitions (\ref{tsih}), (\ref{tseh}) and
numerical solutions of Eqs. (\ref{tenii}) and (\ref{tenj}). In the
case of Poissonian statistics of outgoing radiation, the
corresponding variances would be given by $N_{r,l}$ - shown by
dashed lines. $\Gamma = 0.5$ for (a,d); $\Gamma = 1$ for (b,e);
$\Gamma =10$ for (c,f).}
\end{figure}
Eq. (\ref{teih}) can be easily verified using the explicit term
(\ref{fiv}) for $\hat{N}_l$ and Eq. (\ref{tvsi}) for the multimode
coherent-state, $|\Psi \{\alpha\}\rangle$.
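Eq. (\ref{teih}) reflects the Poissonian photon-number statistics of a coherent state (variance equal to the mean), which can be seen in a trivial Monte-Carlo sketch (illustrative $N_0$ and sample size):

```python
import numpy as np

rng = np.random.default_rng(2)
N0 = 5.0

# a coherent state has Poissonian photon-number statistics: variance = mean
samples = rng.poisson(N0, size=200_000)
assert abs(samples.mean() - N0) < 0.05
assert abs(samples.var() - N0) < 0.1
```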
In what follows we study the modification of the photon statistics
caused by the radiation-atom interaction. To simplify the further
analysis we again consider only the resonant case
$\omega_0=\omega_a$. As is shown in the Appendix,
\begin{equation}\label{tenh}
\langle\hat{N}_r^2\rangle=\frac{\Gamma^2}{8}\int_{t_0}^ t d\tau
\int_{t_0}^\tau d\tau^\prime\langle S(\tau,\tau^\prime)\rangle+N_r,
\end{equation}
where
$S(\tau,\tau^\prime)=2\sigma_+(\tau^\prime)\Sigma(\tau)
\sigma_-(\tau^\prime).$ It can be easily seen that the reflected
photons do not obey Poissonian statistics if the first term in
the right side of Eq. (\ref{tenh}) is larger or smaller than $N^2_r$
(super- or sub-Poissonian statistics, respectively).
The average value $\langle S(\tau,\tau^\prime)\rangle$ is governed
by the equation
\begin{equation}\label{tenii}
\hat{L}_\tau\langle S(\tau,\tau^\prime)\rangle=4g^2p^2(\tau)\langle
\Sigma(\tau^\prime)\rangle,
\end{equation}
which can be derived similarly to Eq. (\ref{toh}). The definition of
$\langle S(\tau,\tau^\prime)\rangle$ and the properties of the Pauli
matrices, namely
$\sigma_+(\tau)\Sigma(\tau)=\Sigma(\tau)\sigma_-(\tau)=(\sigma_+)^2
=(\sigma_-)^2=0$, allow us to obtain the initial conditions for $\langle
S(\tau,\tau^\prime)\rangle$ as $\langle
S(\tau,\tau^\prime)\rangle=\partial_\tau\langle S(\tau,
\tau^\prime)\rangle=0$ at $\tau=\tau^\prime$.
After solving Eqs. (\ref{toh}) and (\ref{tenii}), it becomes
possible to calculate the mean square value $\langle
\hat{N}_r^2\rangle$ of the reflected photons. In Fig. 6a, crossovers
from sub-Poissonian to super-Poissonian statistics are seen for some
specific values of $N_0$. For larger $\Gamma$ (see Figs. 6b,c) the
variances $\langle \Delta\hat{N}_r^2\rangle$ are characterized by
sub-Poissonian statistics. Similarly to the results of Sect. 4, we see
the nonclassical nature of the outgoing radiation, which can be used in
applications.
To obtain $\langle \Delta\hat{N}_l^2\rangle$ we need to know not
only $\langle \hat{N}_r^2\rangle$ but also
$\langle\tilde{N}_l\hat{N}_r\rangle=\frac\Gamma4\int_{t_0}^t d\tau
\langle \tilde{N}_l\Sigma(\tau) \rangle $. The quantity $\langle
\tilde{N}_l\Sigma(\tau) \rangle$ entering the integrand obeys the
equation
\begin{equation}\label{tenj}
\hat{L}_\tau\langle \tilde{N}_l\Sigma(\tau)
\rangle=4g^2p^2(\tau)\langle N_0-\sigma_z(\tau)\rangle.
\end{equation}
Initial conditions are the same as for $\langle\Sigma(\tau)\rangle$,
i.e.
\[\langle \tilde{N}_l\Sigma(\tau=t_0)\rangle=\partial_\tau \langle
\tilde{N}_l\Sigma(\tau=t_0)\rangle=0.\]
Fluctuations of the transmitted radiation are very similar to the
fluctuations of the incident light (see Figs. 6d-f). This is due to
saturation of the TLS response: only an insignificant number of
photons is involved in the atom excitation. Most photons are not
affected by the atom and preserve the statistical properties of the
coherent-state input.
\section{Discussion and Conclusion}
The purpose of this paper is to analyze distinct features of the
outgoing radiation. These features describe not only the spatial but
also the frequency distribution (i.e. the spectrum) of the radiation.
Therefore, it is appropriate to use the method of photon distribution
functions. Restricting our analysis to an incident pulse formed as a
Gaussian packet of a single-photon Fock state, we obtain analytical
expressions for the distribution functions of outgoing photons. In
Fig. 1 one can see the change of sign of the distribution function of
the transmitted photons. This means that $\langle
f^{r,l}(x,q,t)\rangle$ describes a quasiprobability rather than a
probability distribution of photons.
Integrating $\langle f^{r,l}(x,q,t)\rangle$ over variables $q$ or
$x$ we obtain spatial or frequency distributions, respectively.
Spectra of transmitted and reflected radiation have very different
structures strongly dependent on both the detuning, $\omega_{a0}$,
and the dimensionless parameter $\Gamma w/v$ (see Fig. 3).
The criteria of negligible reflection and of negligible transmission
are derived using the explicit expression for the spatial
distribution of photons, Eq. (\ref{tvtw}). These criteria and the
data in Fig. 2 agree well with earlier studies in this field, which
show a higher probability for short pulses to be transmitted.
The case of coherent state of the incident radiation is also
considered. The excitation-relaxation rates of the atom depend on
the number of photons, $N_0$. For multi-photon pulses, $N_0\gg 1$, the
Rabi frequency is proportional to $N_0^{1/2}$. The tendency for
oscillation frequency to grow with $N_0$ is seen in Fig. 4.
Numerical data in Figs. 4, 5 are obtained from the solution of Eq.
(\ref{toh}), which governs the evolution of $\langle
\Sigma(t)\rangle\equiv\langle\sigma_z(t)\rangle+1$. The quantity
$\langle\Sigma(t)\rangle$ describes the probability for the TLS to be
excited. At the same time, $\langle \Sigma(t)\rangle$ and its
derivative $\partial_t\langle \Sigma(t)\rangle$ determine the photon
densities $\langle \rho_{l,r}(x,t)\rangle$ [see Eqs.
(\ref{tvei},\ref{tvni})]. The interconnection of these physical
quantities is explained by energy conservation: each atom
excitation is accompanied by annihilation of one photon in the
waveguide and vice versa [see the interaction term in Eq.
(\ref{one})].
In view of possible applications of outgoing radiation its noise
characteristics are also important. Eqs. (\ref{tseh}) and
(\ref{tenh}) and solutions of Eqs. (\ref{tenii},\ref{tenj}) make it
possible to calculate variances of photon numbers. It can be seen
from Fig. 6 that in most cases the transmitted photons obey
super-Poissonian statistics, while the reflected photons obey
sub-Poissonian statistics, which is in qualitative agreement with the
results of Ref. \cite{koca}, where bunching of transmitted photons
and antibunching of reflected photons were obtained. In our
formalism, the case considered in \cite{koca} corresponds to
infinitely long pulses. Hence, a comparison of \cite{koca} with our
data is meaningful only for large values of the parameter $\Gamma w/v$
used in Fig. 6.
The approach used in the present paper can be easily modified to study more
complex phenomena.
Among these, the propagation of electromagnetic pulses in a
waveguide coupled to a pair of TLSs is of interest.
\section{Acknowledgment}
We thank V. Bondarenko and A. Sokolov for their interest in this
research and for stimulating discussions.
\section{Appendix: derivation of Eq. (\ref{tenh})}
It follows from Eq. (\ref{tseh1}) that the average $\langle
\hat{N}_r^2\rangle$ is given by
\begin{equation}\label{A1}
\langle \hat{N}_r^2\rangle =\frac {\Gamma^2}
{16}\int_{t_0}^td\tau\int_{t_0}^td\tau^\prime\Bigg [\langle \Sigma
(\tau)\Sigma (\tau^\prime )\rangle+\frac{iv}{\pi g}\int
dq\langle\sigma_+(\tau^\prime)\tilde{r}_q(\tau^\prime)\Sigma(\tau)-h.c.\rangle
\end{equation}
\[+\frac{4v}{\pi \Gamma}\int dq\int dq^\prime\langle
\sigma_+(\tau)\tilde{r}_q(\tau ){\tilde
r}^\dag_{q^{\prime}}(\tau^\prime)\sigma_-(\tau^\prime)
\rangle\Bigg].\] It is useful to represent the product
$\tilde{r}_q(\tau ){\tilde r}^\dag_{q^{\prime}}(\tau^\prime)$ in the
ordered form as $\tilde{r}_q(\tau ){\tilde
r}^\dag_{q^{\prime}}(\tau^\prime)={\tilde
r}^\dag_{q^{\prime}}(\tau^\prime)\tilde{r}_q(\tau
)+\delta(q-q^\prime)\exp[i\omega_q^r(\tau^\prime-\tau)]$. The part
of the last term in Eq. (\ref{A1}) that is proportional to
$\delta(q-q^\prime)$ gives a contribution to $\langle
\hat{N}_r^2\rangle$ equal to
\begin{equation}\label{A2}
\frac {\Gamma} {4}\int_{t_0}^td\tau\langle \Sigma(\tau)\rangle =N_r.
\end{equation}
The remaining part of the third term in Eq. (\ref{A1}),
\begin{equation}\label{A3}
\int dq\int dq^\prime\langle \sigma_+(\tau){\tilde
r}^\dag_{q^{\prime}}(\tau^\prime)\tilde{r}_q(\tau
)\sigma_-(\tau^\prime) \rangle ,
\end{equation}
is equal to zero. To prove this let us consider the term $\int
dq^\prime\langle\Psi| \sigma_+(\tau){\tilde
r}^\dag_{q^{\prime}}(\tau^\prime)$ where $|\Psi\rangle$ is given by
Eq. (\ref{tvsi}). Representing ${\tilde
r}^\dag_{q^{\prime}}(\tau^\prime)$ as
\begin{equation}\label{A4}
{\tilde
r}^\dag_{q^{\prime}}(\tau)e^{i\omega_{q\prime}^r(\tau^\prime-\tau
)}= \bigg [r^\dag_{q^{\prime}}(\tau)-ig\int_{t_0} ^\tau
d\tau^{\prime\prime}e^{i\omega_{q\prime}^r(\tau-\tau^{\prime\prime})}
\sigma_+(\tau^{\prime\prime})\bigg
]e^{i\omega_{q\prime}^r(\tau^\prime-\tau)}
\end{equation}
and taking into account that $r^\dag_{q^{\prime}}(\tau)$ commutes
with $\sigma_+(\tau)$, we have
\begin{equation}\label{A5}
\int dq^\prime\langle\Psi| \sigma_+(\tau){\tilde
r}^\dag_{q^{\prime}}(\tau^\prime)=\int
dq^\prime\langle\Psi|\bigg[r^\dag_{q^{\prime}}(\tau)\sigma_+(\tau)
\end{equation}
\[-ig\int_{t_0}
^\tau
d\tau^{\prime\prime}e^{i\omega_{q\prime}^r(\tau-\tau^{\prime\prime})}
\sigma_+(\tau)\sigma_+(\tau^{\prime\prime})\bigg
]e^{i\omega_{q^\prime}^r(\tau^\prime-\tau)}\] \[=ig\int
dq^\prime\int_{t_0}^\tau
d\tau^{\prime\prime}e^{i\omega_{q\prime}^r(\tau^\prime-
\tau^{\prime\prime})}\langle\Psi|[\sigma_+(\tau^{\prime\prime}),
\sigma_+(\tau)]\]\[=i\frac{2\pi
g}v\langle\Psi|[\sigma_+(\tau^\prime),
\sigma_+(\tau)]\theta(\tau-\tau^\prime).\] The last expression in
Eq. (\ref{A5}) is obtained after integration over $q^\prime$ and
$\tau^{\prime\prime}$. Also, we use here the condition
$\langle\Psi|\tilde{r}_{q^\prime}^\dag=0$.
The value of $ \int dq\tilde{r}_q(\tau
)\sigma_-(\tau^\prime)|\Psi\rangle$ can be obtained from Eq.
(\ref{A5}) by means of Hermitian conjugation and replacement $\tau
\longleftrightarrow \tau^\prime $:
\begin{equation}\label{A6}
\int dq\tilde{r}_q(\tau )\sigma_-(\tau^\prime)|\Psi
\rangle=-i\frac{2\pi g}v[\sigma_-(\tau^\prime),
\sigma_-(\tau)]|\Psi\rangle\theta(\tau^\prime-\tau).
\end{equation}
Inserting the last terms of Eqs. (\ref{A5}) and (\ref{A6}) into Eq.
(\ref{A3}) we get zero.
A similar procedure is used to simplify the second term in the
brackets of Eq. (\ref{A1}). Repeating the previous reasoning, we
obtain the following relations (see also the Appendix in Ref.
\cite{dom}):
\begin{equation}\label{A7}
\int dq\tilde{r}_q(\tau^\prime )\Sigma(\tau)|\Psi
\rangle=i\frac{2\pi g}v[\sigma_-(\tau^\prime),
\Sigma(\tau)]|\Psi\rangle\theta(\tau-\tau^\prime),
\end{equation}
\begin{equation}\label{A8}
\int dq\langle\Psi|\Sigma(\tau)\tilde{r}^\dag_q(\tau^\prime
)=i\frac{2\pi g}v\langle\Psi|[\sigma_+(\tau^\prime),
\Sigma(\tau)]\theta(\tau-\tau^\prime).
\end{equation}
Using Eqs. (\ref{A7}),(\ref{A8}) we have
\begin{equation}\label{A9}
\frac {\Gamma^2}{16}\int_{t_0}^td\tau\int_{t_0}^t
d\tau^\prime\frac{iv}{\pi g}\int
dq\langle\sigma_+(\tau^\prime)\tilde{r}_q(\tau^\prime)\Sigma(\tau)-h.c.\rangle
\end{equation}
\[=-\frac{\Gamma^2}{16}\int_{t_0}^td\tau\int_{t_0}^\tau d\tau^\prime\langle
\Sigma(\tau)\Sigma(\tau^\prime)+\Sigma(\tau^\prime)\Sigma(\tau)-
4\sigma_+(\tau^\prime)\Sigma(\tau)\sigma_-(\tau^\prime)\rangle.\]
Finally, the overall contribution of the three terms in the brackets
of Eq. (\ref{A1}) results in Eq. (\ref{tenh}).
\section{Introduction}
\label{sec:intro}
The theory of Compressed Sensing (CS)
\cite{candes2006qru,donoho2006cs} aims at reconstructing sparse or
compressible signals from a small number of linear measurements
compared to the dimensionality of the signal space. In short, the
signal reconstruction is possible if the underlying sensing matrix is
well behaved, \mbox{i.e.~} if it respects a Restricted Isometry Property (RIP)
saying roughly that any small subset of its columns is ``close'' to an
orthogonal basis. The signal recovery is then obtained using
non-linear techniques based on convex optimization promoting signal
sparsity, such as the Basis Pursuit DeNoise (BPDN) program
\cite{donoho2006cs,Chen98atomic}. What makes CS more than merely an
interesting theoretical concept is that some classes of randomly
generated matrices (e.g. Gaussian, Bernoulli, partial Fourier
ensemble, etc) satisfy the RIP with overwhelming probability. This
happens as soon as their number of rows, \mbox{i.e.~} the number of CS
measurements, is higher than a few multiples of the assumed signal
sparsity.
In this paper we are interested in a variation of the CS paradigm. We
assume indeed that the support of the signal to recover is partially
known, possibly with a certain error. As explained in
\cite{vaswani5066modified,Vaswani2009}, this context is indeed well
suited to the recovery of (time) sequences of sparse signals when
their supports evolves slowly over time. In that case, the support of
the recovered signal in a previous (discretized) time can be used to
improve the reconstruction of the signal at the next time instance,
either by decreasing the required number of measurements for a given
quality, or by improving the reconstruction quality for a fixed number
of measurements. Recovering a signal with partially known support is
also of interest for certain kinds of 1-D signals or images. For
instance, photographic images, \mbox{i.e.~} with positive intensities, often
have many non-zero approximation coefficients in their wavelet
decomposition \cite{mall99}, a prior knowledge that can be favorably
used in their reconstruction from CS measurements.
By adapting the proof of \cite{candes2008rip}, we show in this short
note that the recovery algorithm minimizing the $\ell_1$-norm of the
signal candidate over the complement of the known support part, \mbox{i.e.~}
what we coin \emph{innovative} Basis Pursuit DeNoising (\textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}), has a
stability behavior similar to that of the common Basis Pursuit DeNoise
program. In particular, this extends the result of
\cite{vaswani5066modified,Vaswani2009} to the cases of noisy
measurements and of compressible signals, \mbox{i.e.~} with non-zero but fast
decaying coefficients in a given sparsity basis. We show also that our
method shares somehow the conclusion of the \emph{cancel-then-recover}
strategy designed in \cite{Davenport_M_2010_article_sigproccompress}
where the authors propose a recovery algorithm that applies an orthogonal
projection to separate the measurements into two components, and then
recovers the known support part of the signal separately from the
unknown support component.
\section{Framework and Notations}
\label{sec:fmwk}
Let $x=\Psi\alpha\in\mathbb{R}^n$ be a sparse or a compressible discrete
signal in the sparsity basis $\Psi\in\mathbb{R}^{n\times n}$ of $\mathbb{R}^n$,
\mbox{i.e.~} the vector $\alpha\in\mathbb{R}^n$ has few non-zero or fast decaying
components respectively. For the sake of simplicity, we work hereafter
with the canonical basis, \mbox{i.e.~} $\Psi=\Id$, identifying $\alpha$ with
$x$. The present work is however valid for any orthonormal $\Psi$,
e.g. the DCT or the Wavelet basis, by integrating $\Psi$ in the
sensing model described in Section \ref{sec:sensing-model}.
We now establish some important notations. We write
$\mathcal{N}=\{1,\,\cdots,n\}$ the index set of the vector components in
$\mathbb{R}^n$. For any vector $u\in\mathbb{R}^n$, $u_i$ is the $i^{\rm th}$
component of $u$ with $i\in\mathcal{N}$, $u_S$ is the vector equal to
the components of $u$ on the set $S\subset\mathcal{N}$ and to 0 elsewhere,
while $u^l$, with superscript index $l\in\mathbb{N}$ to avoid confusion, is
the vector obtained by zeroing all but the $l$ largest components of
u$ (in amplitude). For a non-trivial basis $\Psi$, $u^l$ would be the
best $l$-term approximation of $u$ in the $\ell_2$-norm sense. The
complement of any set $S\subset\mathcal{N}$ is denoted by $S^c =
\mathcal{N}\setminus S$, and the size of $S$ by $\#S$. The $\ell_p$ norm
(for $p\geq 1$) of $u\in\mathbb{R}^n$ is $\|u\|_p^p=\sum_i |u_i|^p$, while
its support is written ${\rm supp}\, u \triangleq \{i\in\mathcal{N}:u_i\neq
0\}$. By extension, the $\ell_0$ ``norm''\footnote{It is not actually
a true norm since for instance it is not positive homogeneous.} is defined as
$\|u\|_0=\#\,{\rm supp}\, u$.
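For concreteness, the restriction $u_S$ and the best $l$-term approximation $u^l$ introduced above can be sketched numerically as follows (a hedged numpy illustration; the helper names are ours, not standard):

```python
import numpy as np

def restrict(u, S):
    """u_S: keep the components of u indexed by the set S, zero elsewhere."""
    out = np.zeros_like(u)
    out[list(S)] = u[list(S)]
    return out

def best_l_term(u, l):
    """u^l: zero all but the l largest-magnitude components of u."""
    out = np.zeros_like(u)
    if l > 0:
        idx = np.argsort(np.abs(u))[-l:]   # indices of the l largest |u_i|
        out[idx] = u[idx]
    return out

u = np.array([0.1, -3.0, 0.5, 2.0, -0.2])
print(best_l_term(u, 2))    # keeps -3.0 and 2.0
print(restrict(u, {1, 3}))  # same support here, hence the same vector
```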
Let us speak now of the prior knowledge that we have on the signal. In
addition to the assumption of sparsity or compressibility, we presume
that the support of the signal $x$ is partially known. In the sequel,
we denote the known support part by $T\subset\mathcal{N}$, while we always
refer to its size by the letter $s=\#T$. Notice that in our study
nothing prevents $T$ from being corrupted by some ``noise'', \mbox{i.e.~} a
priori $T$ is not fully included in ${\rm supp}\, x$. Moreover, the size of
$({\rm supp}\, x) \setminus T$ is not constrained; what matters are the
values of the components of $x$ on $({\rm supp}\, x) \setminus T$, \mbox{i.e.~} the
compressibility of $x$ outside of $T$.
\section{Sensing Model}
\label{sec:sensing-model}
Following the common Compressed Sensing model, our vector $x$ is
acquired by a sensing matrix $\Phi\in\mathbb{R}^{m\times n}$ subject to an
additional white noise $n\in\mathbb{R}^m$, \mbox{i.e.~}
$$
y\ =\ \Phi x + n,
$$
where $y\in\mathbb{R}^m$ is the measurement vector. In this model the noise
power is assumed bounded\footnote{Possibly with high probability.} by
$\epsilon$, $\|n\|_2\leq \epsilon$.
As shown below, even if a part of the signal support is known, the
stability of this sensing model, \mbox{i.e.~} our ability to recover or
approximate $x$ from $y$, is also linked to the \emph{Restricted Isometry
Property} (RIP) of the sensing matrix
\cite{candes2005dlp,candes2006qru,candes2006ssr}.
Explicitly, the matrix $\Phi\in\mathbb{R}^{m\times n}$ satisfies the RIP of order
$q\in\mathbb{N}$ ($q\leq n$) and radius $0 \leq \delta_q < 1$, if
$$
(1-\delta_q)\|u\|_2^2\ \leq\ \|\Phi u\|_2^2\ \leq\ (1+\delta_q)\|u\|^2_2,
$$
for all $q$-sparse vectors $u\in\mathbb{R}^n$, \mbox{i.e.~} with $\|u\|_0 \leq q$.
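Certifying the RIP exactly is combinatorial, but a crude Monte Carlo probe over random $q$-sparse unit vectors already lower-bounds the radius $\delta_q$ of a given matrix. A hedged numpy sketch (the dimensions are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, q, trials = 64, 256, 4, 200

# Gaussian sensing matrix with entry variance 1/m, so E||Phi u||^2 = ||u||^2.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

emp_delta = 0.0
for _ in range(trials):
    S = rng.choice(n, size=q, replace=False)   # random q-sparse support
    u = np.zeros(n)
    u[S] = rng.standard_normal(q)
    u /= np.linalg.norm(u)
    emp_delta = max(emp_delta, abs(np.linalg.norm(Phi @ u) ** 2 - 1.0))

# emp_delta only lower-bounds the true RIP radius delta_q.
print(f"empirical delta_{q} >= {emp_delta:.3f}")
```

Such a probe can only certify that $\delta_q$ is at least the empirical value; genuine upper bounds require union-bound arguments over all supports.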
\section{Reconstructing on Innovation}
\label{sec:rec-meth}
Intuitively, if a part $T\subset\mathcal{N}$ of the signal support is known,
a possible (non-linear) reconstruction technique of $x$ would simply
consist in minimizing the sparsity of a signal candidate $u\in\mathbb{R}^n$
over $T^c$, \mbox{i.e.~} the $\ell_0$-norm of $u_{T^c}$, subject to the common
$\ell_2$ fidelity constraint $\|\Phi u - y\|_2\leq \epsilon$ as
prescribed by the noise power bound. As underlined many times in the
community, such a procedure would result in a combinatorial (NP-hard)
problem \cite{natarajan1995sas}. Here again an $\ell_1$
\emph{relaxation} must be used, with possibly additional requirements
on the RIP-``conditioning'' of $\Phi$ \cite{Tropp2006,candes2006ssr}.
The proposed method is a simple extension of the Modified-CS scheme
defined in \cite{vaswani5066modified,Vaswani2009}. Indeed, we
incorporate the case of noisy measurements by defining the following
optimization program, coined \emph{innovative} Basis Pursuit DeNoising
(\textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}),
\begin{equation*}
\label{eq:IBPDN}
\argmin_u \norm{u_{T^c}}_1\ {\rm s.t.}\ \norm{y - \Phi u}_2\leq
\epsilon. \eqno{({\bf \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}})}
\end{equation*}
The term ``innovative'' recalls that this program tries to minimize
the sparsity of the signal to be reconstructed in the unknown (or
innovation) set $({\rm supp}\, x)\setminus T$ included in $T^c$.
\section{{\em i\hspace{1pt}}BPDN\ and $\ell_2-\ell_1$ Instance Optimality }
\label{sec:ibpdn-ell_2-ell_1}
The main result of this note provides the conditions under which the
solution of \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ is close or equal to the initial signal $x$,
\mbox{i.e.~} the so-called $\ell_2-\ell_1$ instance optimality
\cite{Cohen-bestkterm}. It extends at the same time the conclusion of
\cite{vaswani5066modified,Vaswani2009} to the cases of noisy measurements and
compressible signals.
\begin{theorem}
\label{thm:l2l1-inst}
Under the condition of the sensing model described above, writing $\#T
= s$ and given $k\in\mathbb{N}$, let us assume that the matrix $\Phi$
respects the RIP of order $s+2k$ with radius $\delta_{s+2k}\in (0,1)$, and that
its radius for the smaller order $2k$ is $\delta_{2k}\in (0,1)$. Then, if
$\delta_{2k}^2 + 2 \delta_{s+2k} < 1$, \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ has the
$\ell_2-\ell_1$ instance optimality meaning that its solution $x^*$
respects
$$
\|x-x^*\|_2\ \leq\ C_{s,k}\,\epsilon\ +\ D_{s,k}\,e_{0}(r;k),
$$
where $r$ is the residual $r=x - x_T$, and $e_0(r;k)=k^{-1/2}\|r -
r^k\|_1$ is the compressibility error\footnote{It could be called also
\emph{scaled} $\ell_1$-\emph{approximation error}.} at $k$-term of
$r$. The two constants $C_{s,k}$ and $D_{s,k}$, given in the proof,
depend on $\Phi$ only. For instance, for small innovation, \mbox{i.e.~} when
$k\ll s$, if $\delta_{2k}=0.02$ and if $\delta_{s+2k}=0.2$,
$C_{s,k}<7.32$ and $D_{s,k}<3.35$.
\end{theorem}
\begin{proof}
We basically adapt the proof of \cite{candes2008rip} to signals
with partially known support.
We define the residual $r = x - x_T$, with ${\rm supp}\, r = ({\rm supp}\, x)
\setminus T$. Let us write $x^*=x+h$ with $h\in\mathbb{R}^n$ so that the
proof amounts to bound $\|h\|_2$. Let $T_0$ be the support of the
$k$ largest coefficients of the residual $r = x - x_T$,
\mbox{i.e.~} $T_0={\rm supp}\, r^k$ with $T_0\,\cap\, T = \emptyset$.
We define next the sets $T_j$ for $j\geq 1$ as the support of the
$k$ largest coefficients of $h_{S_j^c}=h-h_{S_j}$ with $S_j=
T\,\cup\,\bigcup_{l=0}^{j-1}T_l$. By construction, we may observe
that the sets $T_l$ ($l\geq 0$) partition $T^c$, with $\#T_j=k$
(except possibly for the last one) and $T_j\cap T =T_j\cap T_{j'}
=\emptyset$, for $j,j'\geq 0$ and $j\neq j'$.
Let us write $T_{|0}=T\cup T_0$ and $T_{|01}=T\cup T_0\cup T_1$,
with $\#T_{|0}=s+k$ and $\#T_{|01}=s+2k$. The plan of the proof is
to first bound $\|h_{T_{|01}^c}\|_2$ and then $\|h_{T_{|01}}\|_2$.
Using the triangle inequality, we have $\|h_{T_{|01}^c}\|_2\leq
\sum_{j\geq 2} \|h_{T_j}\|_2$. For $j\geq 1$, $\|h_{T_j}\|_1\geq
k\|h_{T_{j+1}}\|_\infty$ by the ordering of the $T_j$'s, and
therefore $\|h_{T_{j+1}}\|^2_2\leq k\|h_{T_{j+1}}\|_\infty^2 \leq
\inv{k}\|h_{T_j}\|^2_1$. This leads to
\begin{equation}
\label{eq:first-h_Tb01-bound}
\|h_{T_{|01}^c}\|_2\ \leq\ \tinv{\sqrt{k}}\,\sum_{j\geq
1}\|h_{T_j}\|_1\ =\ \tinv{\sqrt{k}}\|h_{T_{|0}^c}\|_1.
\end{equation}
Since $T^c=T_0\cup T_{|0}^c$ and $\|n\|_2=\|y-\Phi
x\|_2\leq\epsilon$, and because $x^*$ solves \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}, we have
$$
\|x_{T^c}\|_1\geq \|x_{T^c} + h_{T^c}\|_1 = \|x_{T_0} + h_{T_0}\|_1\ +\ \|x_{T_{|0}^c} + h_{T_{|0}^c}\|_1
\geq \|x_{T_0}\|_1 - \|h_{T_0}\|_1 - \|x_{T_{|0}^c}\|_1 +
\|h_{T_{|0}^c}\|_1,
$$
and therefore,
$$
\|h_{T_{|0}^c}\|_1 \leq \|x_{T^c}\|_1 +
\|x_{T_{|0}^c}\|_1 + \|h_{T_0}\|_1 - \|x_{T_0}\|_1\\
= 2\|x_{T_{|0}^c}\|_1 + \|h_{T_0}\|_1 = 2\|r - r_{T_0}\|_1 + \|h_{T_0}\|_1.
$$
Consequently, using (\ref{eq:first-h_Tb01-bound}) and the equivalence
of the norms $\ell_2$ and $\ell_1$, we get
\begin{equation}
\label{eq:bound-on-sum-j-gt-2}
\|h_{T_{|01}^c}\|_2\leq
\sum_{j\geq 2} \|h_{T_j}\|_2 \leq\ 2e_0(r;k) + \|h_{T_0}\|_2.
\end{equation}
Let us now bound $\|h_{T_{|01}}\|_2$. Notice that $h_{T_{|01}} = h -
\sum_{j\geq 2} h_{T_j}$, so that, using Cauchy-Schwarz,
\begin{align*}
\|\Phi h_{T_{|01}}\|_2^2&=\ \scp{\Phi h_{T_{|01}}}{\Phi h_{T_{|01}}}\\
&=\ \scp{\Phi h_{T_{|01}}}{\Phi h} - \scp{\Phi h_{T_{|01}}}{\textstyle\sum_{j\geq 2} \Phi h_{T_j}}\\
&\leq\ \|\Phi h_{T_{|01}}\|_2\|\Phi h\|_2 + \textstyle\sum_{j\geq 2}
|\scp{\Phi h_{T_{|01}}}{\Phi h_{T_j}}|.
\end{align*}
By hypothesis, $\Phi$ is RIP of order $q$ and radius $\delta_q$ with
$q\in\{2k, s+2k\}$. It is proved in \cite{candes2008rip} as a result
of the polarization identity, that, for two vectors $u$ and $v$ of
disjoint supports and of sparsity $l$ and $l'$ respectively, if
$\Phi$ is RIP of order $l+l'$, then $|\scp{\Phi u}{\Phi v}|\leq
\delta_{l+l'}\|u\|_2\|v\|_2$. In addition, since $x^*$ is solution of
\textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ and $x$ is a feasible point of its fidelity constraint,
$\|\Phi h\|_2\leq \|\Phi x^* - y\|_2 + \|y - \Phi x\|_2 \leq
2\epsilon$. Therefore, combining all these considerations,
\begin{multline*}
\|\Phi h_{T_{|01}}\|_2^2\ \leq\
2\sqrt{1+\delta_{s+2k}}\,\epsilon\,\|h_{T_{|01}}\|_2 + \sum_{j\geq 2}
|\scp{\Phi h_{T_{|0}} + \Phi h_{T_1}}{\Phi h_{T_j}}|\\
\leq\ 2\sqrt{1+\delta_{s+2k}}\,\epsilon\,\|h_{T_{|01}}\|_2 + \big(\delta_{s+2k}\|h_{T_{|0}}\|_2 +
\delta_{2k}\|h_{T_1}\|_2\big)\,{\sum_{j\geq
2}}\|h_{T_j}\|_2\\
\leq 2\sqrt{1+\delta_{s+2k}}\,\epsilon\,\|h_{T_{|01}}\|_2 +\ \mu_{s,k}\,\|h_{T_{|01}}\|_2\,{\sum_{j\geq
2}}\|h_{T_j}\|_2,\\[-7mm]
\end{multline*}
with $\mu_{s,k}=\sqrt{\delta_{s+2k}^2 + \delta_{2k}^2}$.
Since $(1-\delta_{s+2k})\|h_{T_{|01}}\|_2^2\leq \|\Phi
h_{T_{|01}}\|_2^2$, simplifying the last expression and using
(\ref{eq:bound-on-sum-j-gt-2}) lead to
$$
(1-\delta_{s+2k})\,\|h_{T_{|01}}\|_2 \leq\ 2\sqrt{1+\delta_{s+2k}}\,\epsilon\ +\
\mu_{s,k}\,\big(2e_0(r;k) + \|h_{T_0}\|_2\big),
$$
or, since $\|h_{T_0}\|_2\leq\|h_{T_{|01}}\|_2$,
$$
\|h_{T_{|01}}\|_2\ \leq\ \alpha\epsilon\ +\
\beta e_0(r;k),
$$
with
$\alpha= 2\sqrt{1+\delta_{s+2k}}\,/\,(1-\delta_{s+2k}-\mu_{s,k})$ and
$\beta={2\mu_{s,k}}\,/\,(1-\delta_{s+2k}-\mu_{s,k})$.
Finally, using again (\ref{eq:bound-on-sum-j-gt-2}),
$$
\|h\|_2\ \leq\ \|h_{T_{|01}}\|_2 + \|h_{T_{|01}^c}\|_2\ \leq\
\alpha\epsilon\ +\ (\beta + 2) e_0(r;k)\ +\ \|h_{T_{0}}\|_2\ \leq \
C_{s,k}\,\epsilon\ +\ D_{s,k}\, e_0(r;k),
$$
with
$$
C_{s,k} = \frac{4\sqrt{1+\delta_{s+2k}}}{1-\delta_{s+2k}-\mu_{s,k}},
$$
and
$$
D_{s,k} = 2\,\frac{1 + \mu_{s,k} - \delta_{s+2k}}{1-\delta_{s+2k}-\mu_{s,k}}.
$$
The denominator of these two constants makes sense only if $1 -
\delta_{s+2k} - \mu_{s,k} > 0$, \mbox{i.e.~} if $\delta_{2k}^2 + 2
\delta_{s+2k}~<~1$, which provides the announced reconstruction
condition.
\end{proof}
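The numerical constants quoted in Theorem \ref{thm:l2l1-inst} follow directly from the closed-form expressions for $C_{s,k}$ and $D_{s,k}$ obtained at the end of the proof; a short Python check (pure arithmetic, no assumptions beyond those formulas):

```python
from math import sqrt

def ibpdn_constants(delta_2k, delta_s2k):
    """C_{s,k} and D_{s,k} from the proof of the l2-l1 instance optimality."""
    mu = sqrt(delta_s2k ** 2 + delta_2k ** 2)
    denom = 1.0 - delta_s2k - mu
    if denom <= 0.0:
        raise ValueError("condition delta_2k^2 + 2*delta_{s+2k} < 1 violated")
    C = 4.0 * sqrt(1.0 + delta_s2k) / denom
    D = 2.0 * (1.0 + mu - delta_s2k) / denom
    return C, D

# Values used in the statement of the theorem.
C, D = ibpdn_constants(0.02, 0.2)
print(f"C = {C:.3f}, D = {D:.3f}")   # consistent with C < 7.32, D < 3.35
```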
\section{Observations}
\label{sec:obs}
Some observations can be made from Theorem
\ref{thm:l2l1-inst}. First, in the case where there is no knowledge
about the signal support, \mbox{i.e.~} $T=\emptyset$ and $s=0$, we recover the
previous sufficient condition of \cite{candes2008rip} characterizing
when BPDN satisfies the $\ell_2-\ell_1$ instance optimality, namely
$\delta_{2k}<\sqrt{2}-1$ as involved by $\delta_{2k}^2 + 2\delta_{2k}
< 1$.
Second, the condition $\delta_{2k}^2 + 2\delta_{s+2k} < 1$ is
satisfied if $\delta_{s+2k}<\sqrt{2}-1$ since we always have
$\delta_{2k}\leq\delta_{s+2k}$. This is again a simple generalization
of the previous result in \cite{candes2008rip}, \mbox{i.e.~} \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ is stable
if the RIP of $\Phi$ is guaranteed over the sparsity order $s+2k$ with
a radius $\delta_{s+2k}<\sqrt{2}-1$. Intuitively, the matrix must be
sufficiently ``well conditioned'' to estimate both the unknown values
of $x$ on the known set $T$ and the $k$ other significant values of
$x$ somewhere outside of $T$. This induces somehow the required $s+2k$
RIP sparsity order, where $s$ and $2k$ stand for the degrees of
freedom of $x$ on $T$ and on $T^c$ respectively.
Third, if the signal $x$ is exactly sparse, there is a $k<n-s$ such
that $k=\#\big(({\rm supp}\, x)\setminus T\big)$ and $e_0(r;k)=0$. Without
noise on the measurements, the previous theorem guarantees therefore
the perfect reconstruction of the signal, \mbox{i.e.~} $x^*=x$, as obtained in
\cite{vaswani5066modified}.
Finally, the compressibility of the signal $x$ is quantified by the
compressibility error $e_0(r,k)$. In other words, the compressibility
is measured from $r=x-x_T$ outside of the known support part $T$ of
$x$. This new measure is of course the simple generalization of the
previous term $e_0(k)=k^{-1/2}\|x-x^k\|_1=e_0(x;k)$ introduced for
instance in \cite{candes2008rip}.
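Numerically, the compressibility error $e_0(u;k)=k^{-1/2}\|u-u^k\|_1$ is immediate to evaluate; a hedged numpy sketch (our own helper, mirroring the notation of the theorem):

```python
import numpy as np

def e0(u, k):
    """Compressibility error e_0(u; k) = k^{-1/2} * ||u - u^k||_1,
    where u^k keeps only the k largest-magnitude entries of u."""
    uk = np.zeros_like(u)
    idx = np.argsort(np.abs(u))[-k:]
    uk[idx] = u[idx]
    return np.sum(np.abs(u - uk)) / np.sqrt(k)

r_sparse = np.array([0.0, 5.0, 0.0, -2.0, 0.0])
print(e0(r_sparse, 2))       # exactly 2-sparse residual, so the error is 0

r_compress = np.array([4.0, -2.0, 1.0, 0.5, 0.25])
print(e0(r_compress, 2))     # tail (1 + 0.5 + 0.25) divided by sqrt(2)
```

As stated in the third observation above, a vanishing $e_0(r;k)$ (exactly sparse residual) is what yields perfect reconstruction in the noiseless case.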
\section{Connection to $\delta$-stable Embeddings and
the Cancel-then-Recover strategy}
\label{sec:related-work}
Theorem \ref{thm:l2l1-inst} has an interesting connection with the
recent work of Davenport et
al.~\cite{Davenport_M_2010_article_sigproccompress} showing that
several signal processing tasks, \mbox{i.e.~} signal detection, classification,
estimation and filtering, can be realized efficiently on the
compressive measurements of a signal without reconstructing it. In
their work, the authors study in particular the possibility of
subtracting from these measurements the influence of the known part of
the signal support. Let us briefly explain that work before comparing
ours with that of \cite{Davenport_M_2010_article_sigproccompress}.
For this explanation, we use the framework of Section \ref{sec:fmwk}
with the simplifying canonical basis $\Psi = \Id$ and the pure sensing
model $y=\Phi x$. We define also the subspace
$\Sigma_T=\{u\in\mathbb{R}^n:{\rm supp}\, u\subset T\}$ and the matrix $\Omega =
\Phi_T\in \mathbb{R}^{m\times s}$, \mbox{i.e.~} the restriction of $\Phi$ to the
columns indexed in $T\subset \mathcal N$. Two operators can be built from
$\Omega$ and its Moore-Penrose pseudoinverse
$\Omega^\dagger=(\Omega^T\Omega)^{-1}\Omega^T$, \mbox{i.e.~} $P_{\Omega}\ =\
\Omega\Omega^{\dagger}$ and $P_{\Omega^\perp} = \Id\ -\
\Omega\Omega^{\dagger}$, the orthogonal projectors on the range of
$\Omega$ and on the nullspace of $\Omega^T$ respectively.
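These projectors are straightforward to form numerically from the pseudoinverse; the following hedged numpy sketch (toy dimensions of our own choosing) also checks the interference-cancellation identity $P_{\Omega^\perp}\Phi x=P_{\Omega^\perp}\Phi x_{T^c}$ exploited below:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 12, 8
T = np.array([0, 3, 7])                  # known support part

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Omega = Phi[:, T]                        # columns of Phi indexed by T
Omega_pinv = np.linalg.pinv(Omega)       # Moore-Penrose pseudoinverse

P_Omega = Omega @ Omega_pinv             # projector onto range(Omega)
P_perp = np.eye(m) - P_Omega             # projector onto null(Omega^T)

# Interference cancellation: P_perp annihilates the measurements of x_T.
x = np.zeros(n)
x[T] = rng.standard_normal(T.size)       # component supported on T
x[[2, 5]] = [1.0, -2.0]                  # innovation outside T
x_Tc = x.copy(); x_Tc[T] = 0.0

lhs = P_perp @ (Phi @ x)
rhs = P_perp @ (Phi @ x_Tc)
print(np.allclose(lhs, rhs))             # the contribution of x_T is cancelled
```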
Writing $x=x_T+x_{T^c}$, we can notice that $P_{\Omega^\perp}\Phi
x=P_{\Omega^\perp}\Phi x_{T^c}$. In short, the influence (or
interference) of $x_T$ on $y=\Phi x$ may be canceled without
reconstructing $x$. The idea of the \emph{cancel-then-recover}
strategy promoted in \cite{Davenport_M_2010_article_sigproccompress}
is therefore to actually reconstruct $x_{T^c}$ from $\widetilde y = \widetilde
\Phi x = \widetilde \Phi x_{T^c}$, with $\widetilde \Phi =
P_{\Omega^\perp}\Phi $. This can be done for instance by solving
either the Basis Pursuit program
$$
\widetilde x\ =\ \argmin_u \norm{u}_1\ {\rm s.t.}\ \widetilde y = \widetilde \Phi u,
$$
or an equivalent greedy method such as CoSaMP
\cite{cosamp,Davenport_M_2010_article_sigproccompress}. Of course,
$\widetilde x_{T}=0$ since this part of $\widetilde x$ does not contribute to the
fidelity constraint. It is equivalent to say that the reconstruction
runs over the space $P_{\Id_{T}^\perp}\mathbb{R}^n$, where
$P_{\Id_{T}^\perp} u = u_{T^c}$ for any $u\in\mathbb{R}^n$. Therefore, the
estimation error between $\widetilde x$ and $x$ can be bounded over $T^c$.
For this purpose $\widetilde \Phi$ must be characterized as a function of
$\Phi$. This can be done by considering a generalization of the
Restricted Isometry Property: Given $\delta\in (0,1)$ and two spaces
$\mathcal U,\mathcal V\subset\mathbb{R}^n$, a matrix $\Phi$ realizes a
$\delta$-\emph{stable embedding} of $(\mathcal U,\mathcal V)$ if
$$
(1-\delta)\,\|u-v\|^2_2\ \leq\ \|\Phi u - \Phi v\|_2^2\ \leq\ (1+\delta)\,\|u-v\|^2_2,
$$
for all $u\in\mathcal U$ and $v\in \mathcal V$. In particular the RIP of order
$q$ and radius $\delta_q$ is equivalent to a $\delta_q$-stable
embedding of $(\Sigma_q,\{0\})$, with
$\Sigma_q=\{u\in\mathbb{R}^n:\,\|u\|_0\leq q\}$ the set of $q$-sparse
signals. The following result then provides the desired characterization.
\begin{lemma}[Corollary 4 in \cite{Davenport_M_2010_article_sigproccompress}]
\label{lem:conn-delta-stable}
Suppose that $\Phi\in\mathbb{R}^{m\times n}$ is a $\delta$-stable
embedding of $(\Sigma_{2k},\Sigma_T)$. Then $\widetilde \Phi$ is a
$\delta/(1-\delta)$-stable embedding of
$(P_{\Id_{T}^\perp}\Sigma_{2k},\{0\})$.
\end{lemma}
In particular, this Lemma implies that if $\Phi$ is RIP of order
$s+2k$ with radius $\delta_{s+2k}$, it is then a
$\delta_{s+2k}$-stable embedding of $(\Sigma_{2k},\Sigma_T)$, and
therefore, $\widetilde \Phi$ is RIP of order $2k$ and radius
$\delta_{s+2k}/(1-\delta_{s+2k})$ over the space
$P_{\Id_{T}^\perp}\mathbb{R}^n\simeq \mathbb{R}^{n-s}$. The $\ell_2-\ell_1$
instance optimality of the BP program \cite{candes2008rip} above holds
if $\delta'=\delta_{s+2k}/(1-\delta_{s+2k})<\sqrt{2}-1$, \mbox{i.e.~} if
$\delta_{s+2k}<(\sqrt{2}-1)/\sqrt{2}$. In that case,
\begin{equation}
\label{eq:ctr-bound}
\|x_{T^c}-\widetilde x_{T^c}\|_2\ \leq\ \widetilde D_{\delta'}\, e_0(x_{T^c},k)\ =\
\widetilde D_{\delta'}\, e_0(r,k),
\end{equation}
with $\widetilde D_{\delta'} = 2\,\frac{1 + (\sqrt 2 - 1)\delta'}{1-(\sqrt
2 + 1)\delta'} = 2\,\frac{1 + (\sqrt 2 - 2)\delta_{s+2k}}{1-(\sqrt
2 + 2)\delta_{s+2k}}$.
In this paper, we show that \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ is optimal when $\delta^2_{2k} +
2\delta_{s+2k} < 1$. This condition is weaker than the one proposed in
\cite{Davenport_M_2010_article_sigproccompress},
i.e. $\delta_{s+2k}<(\sqrt{2}-1)/\sqrt{2}$; it is however interesting
to notice that both consider the RIP of order $s+2k$ and both
are stable for compressible signals. Moreover, \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ gives
guarantees for the estimation of the whole signal and not only for its
behavior over $T^c$. Of course, if $x^*$ is the solution of \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\
(with $\epsilon=0$), we get similarly
$$
\|x_{T^c}- x^*_{T^c}\|_2\ \leq\ \|x - x^*\|_2\leq\ D_{s,k}\, e_0(r,k),
$$
with $D_{s,k} < 2\,\frac{1 + (\sqrt 2 - 1)\delta_{s+2k}}{1-(\sqrt 2 +
1)\delta_{s+2k}} < \widetilde D_{\delta'}$.
We can also remark that, in contrast to the current cancel-then-recover
strategy\footnote{Robustness of this strategy against an additional
noise $n$ could be obtained by bounding the power of
$P_{\Omega^\perp}n$ when $y=\Phi x + n$.}, \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ provides
stability against noisy measurements. An open question, however, comes
from the fact that $\Phi$ in
\cite{Davenport_M_2010_article_sigproccompress} need not really
satisfy the RIP of order $s+2k$ for (\ref{eq:ctr-bound}) to hold. As
reported in Lemma \ref{lem:conn-delta-stable}, $\Phi$ simply needs to
provide a $\delta$-stable embedding over $(\Sigma_{2k},\Sigma_T)$,
which is weaker than asking for the RIP of order $s+2k$. Given $k$ and
$m$, this second requirement possibly holds for a smaller radius
$\delta$ than the RIP radius $\delta_{s+2k}$.
\section{Numerical Method}
\label{sec:algo}
In this section, we sketch a simple algorithm for the reader
interested in a numerical implementation of \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}. It relies on
monotone operator splitting and proximal methods
\cite{combettes2007drs,Fadili2009}. At the heart of this procedure is
the definition of the \emph{proximity operator} of any convex function
$\varphi:\mathbb{R}^n\to\mathbb{R}$, \mbox{i.e.~} the unique solution $\prox_\varphi(z)
= \arg\min_{u}\inv{2}\|u - z\|_2^2 + \varphi(u)$.
Both BPDN and \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ are special cases of the general minimization
problem
\begin{equation*}
\label{eq:convex-prob}
\arg\min_{x \in \mathcal{H}}\ f(x) + g(x).\eqno{({\rm \bf P})}
\end{equation*}
For \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}, $f(u)=\|u_{T^c}\|_1$ and $g(u) = \imath_{C(\epsilon)}(u)=
0$ if $u\in C(\epsilon)$ and $\infty$ otherwise, \mbox{i.e.~} the
\emph{indicator function} of the closed convex set
$C(\epsilon)=\{v\in\mathbb{R}^n:\|y - \Phi v\|_2\leq \epsilon\}$.
Of course, $f$ and $g$ are both non-differentiable; however, since (i)
their domain is non-empty, (ii) they are convex, and (iii) they are lower
semi-continuous (lsc), \mbox{i.e.~} $\liminf_{u \to u_0} f(u) \geq f(u_0)$ for all
$u_0 \in \dom f$, \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ can be solved by the following
Douglas-Rachford iterative method \cite{Fadili2009}:
\begin{equation}
\label{eq:DR-iter}
u^{(t+1)} = (1-\tfrac{\alpha_t}{2})\,u^{(t)} +
\tfrac{\alpha_t}{2}\,S^\odot_{\gamma}\circ\mathcal{P}^\odot_{C(\epsilon)}(u^{(t)}),
\end{equation}
where $A^\odot \triangleq 2A - \Id$ for any operator $A$, $\alpha_t
\in (0,2)$ for all $t \in \mathbb{N}$, $S_{\gamma}=\prox_{\gamma f}$
for some $\gamma>0$ and $\mathcal{P}_{C(\epsilon)} = \prox_{g}$ is the
orthogonal projection onto the tube $C(\epsilon)$. From
\cite{combettes2004smi}, one can show that the sequence $(u^{(t)})_{t
\in \mathbb{N}}$ converges to some point $u^*$ and $x^* =
\mathcal{P}_{C(\epsilon)}(u^*)$ is the solution of \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}.
One can show that $S_{\gamma} z=\prox_{\gamma f} z$ is the
component-wise soft-thresholding operator of $z$ on $T^c$,
\mbox{i.e.~} $(S_{\gamma} z)_i = {\rm sign}(z_i)\, (|z_i| - \gamma)_+$ if $i\in T^c$
and $(S_{\gamma} z)_i = z_i$ if $i\in T$, where, for $\lambda\in\mathbb{R}$, $(\lambda)_+ =
\lambda$ if $\lambda\geq 0$ and $0$ otherwise. Efficient ways to compute
$\mathcal{P}_{C(\epsilon)}$ are also given in \cite{Fadili2009}.
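For concreteness, the iteration \eqref{eq:DR-iter} can be condensed into a few lines. The following is a minimal NumPy illustration (not the authors' code); for simplicity we assume that $\Phi$ has orthonormal rows, so that the projection onto the tube $C(\epsilon)$ admits the closed form used below, whereas in general a few inner iterations are required.

```python
import numpy as np

def soft_threshold_on_Tc(z, gamma, T_mask):
    """prox of gamma*||u_{T^c}||_1: soft-threshold the entries outside the
    known support T, leave the entries on T untouched."""
    out = np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)
    out[T_mask] = z[T_mask]
    return out

def project_tube(u, Phi, y, eps):
    """Orthogonal projection onto C(eps) = {v : ||y - Phi v||_2 <= eps},
    assuming Phi has orthonormal rows (Phi @ Phi.T = I)."""
    r = y - Phi @ u
    nr = np.linalg.norm(r)
    if nr <= eps:
        return u
    return u + Phi.T @ ((1.0 - eps / nr) * r)

def ibpdn_dr(Phi, y, T_mask, eps=0.0, gamma=1.0, alpha=1.0, n_iter=500):
    """Douglas-Rachford iteration for iBPDN: u <- (1-a/2)u + (a/2) S⊙(P⊙(u))."""
    u = Phi.T @ y
    for _ in range(n_iter):
        p = project_tube(u, Phi, y, eps)
        v = 2.0 * p - u                              # P⊙(u) = 2P(u) - u
        s = soft_threshold_on_Tc(v, gamma, T_mask)   # S_gamma
        u = (1.0 - alpha / 2.0) * u + (alpha / 2.0) * (2.0 * s - v)
    return project_tube(u, Phi, y, eps)              # x* = P_C(u*)
```

With $\alpha_t \equiv 1$ the update reduces to the classical $u \leftarrow u + \prox_{\gamma f}(2\mathcal{P}_{C}(u)-u) - \mathcal{P}_{C}(u)$.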
\section{Conclusion}
\label{sec:conclusion}
This short note has studied the modification of Compressed Sensing
introduced in \cite{vaswani5066modified,Vaswani2009}, \mbox{i.e.~} when the
signal sparsity assumption is increased by the knowledge of a part of
its support. We showed theoretically that a simple generalization of
the common Basis Pursuit DeNoise program, \mbox{i.e.~} the \emph{innovative}
BPDN, has stability guarantees similar to those of BPDN with respect to both
signal compressibility and noisy measurements. Interestingly, the
obtained requirements are related to the conclusion of
\cite{Davenport_M_2010_article_sigproccompress} when the
cancel-then-recover strategy is applied to the context of this paper.
In the future, we plan to investigate possible numerical applications
of this formalism. In particular, when \textrm{\mbox{\emph{i}\hspace{.5pt}BPDN}}\ is integrated to the
reconstruction of sequences of sparse or compressible signals, we
would like to assess the quality of the reconstruction as a function of
the number of measurements when the amount of innovation, \mbox{i.e.~} the
ratio between the unknown and the known signal support parts, can be
quantified over time.
\section{Acknowledgements}
We are very grateful to Prof. Pierre Vandergheynst (Signal Processing
Laboratory, LTS2/EPFL, Switzerland) for his useful advice and his
hospitality during our postdoctoral stay at EPFL.
\section{Introduction}
\emph{Coherent Lagrangian vortices (CLVs)} play an important role in the transport and mixing of
passive, possibly diffusive, scalar quantities in fluid flows. Such structures can be found in flows
living on a wide variety of scales \cite{Huhn2015a,Hadjighasem2017,Abernathey2018}.
Over the past decade, a variety of modeling approaches have been developed to characterize CLVs.
Intuitively, CLVs are viewed as material structures that sustain a non-filamenting boundary under advection by the flow
\cite{Haller2012,Haller2013a,Froyland2015a}; or material structures that resist
leakage of a diffusive passive scalar in an advection--diffusion process
\cite{Karrasch2016b,Haller2018,Haller2019}. Data-driven approaches view CLVs as collections of trajectories that stay together under the
motion of the flow \cite{Allshouse2012,Froyland2015,Hadjighasem2016,Banisch2017,Padberg-Gehle2017}.
Related Eulerian approaches view coherent sets as space-time structures that do
not mix much with their spatial neighborhood \cite{Froyland2010,Froyland2013}.
While these approaches intuitively target the same observed phenomenon, they
often yield different structures when applied to the same flow \cite{Hadjighasem2017}.
To date, only few methods have been successfully applied to realistic flow problems. See, for instance,
\cite{Froyland2010a,Haller2013a,Karrasch2015,Hadjighasem2016b,Hadjighasem2017,Serra2017b,Froyland2019}
for studies of ``medium'' complexity (in that either the domain is not too large or the number of
known/expected structures is low), and \cite{Abernathey2018} for the only---to the best of our knowledge---large-scale study.
The reasons for this lack of realistic applications are manifold: (i) most methods are not fully automated or even automatable,
cf.~\cite{Hadjighasem2017}, (ii) some methods intrinsically do not scale well with the
size of the domain, the number of expected coherent structures, or the number of
tracked trajectories; and/or (iii) there are no performant and robust implementations available.
Our aim in this paper is to report on our progress towards bridging the gap
between large-scale applications and those methods that subsume
the ``geodesic vortices'' class: \emph{black-hole vortices} \cite{Haller2013a}, \emph{objective Eulerian Coherent Structures (OECSs)}
\cite{Serra2016}, and \emph{material barriers to diffusive transport} \cite{Haller2018,Haller2019}. The algorithms are
developed as part of the open-source \texttt{CoherentStructures.jl} project. These
methods in principle scale well with the size of the domain, but existing
implementations related to the publications \cite{Onu2015,Karrasch2015,Hadjighasem2016b,Serra2017}
failed to fully leverage this; we therefore had to make significant
conceptual and implementation modifications.
Conceptually, our work is based on the index-theory-based methodology developed
in \cite{Karrasch2015}, whose implementation was a mixture of methods
implemented in \cite{Farazmand2014a,Onu2015,Hadjighasem2016b}. In
\cite{Serra2017}, Serra \& Haller identified (i) the detection of tensor field
singularities (points of repeated eigenvalues) and (ii) the
identification of their topological type as major computational bottlenecks in the
implementation of \cite{Karrasch2015}. Moreover, these steps required a number of
parameters whose choice had---at times---unpredictable impact on the
computational outcome. As an alternative, they derived an automated method for
computing geodesic vortices based on the geometry of the underlying geodesic
flow. On the upside, their approach (i) does not require singularity detection and
type identification at all, and (ii) is designed to, in principle, not miss any coherent vortices at a
given computational accuracy and spatial resolution. On the downside,
however, (i) the gradient of the underlying computed tensor field is required, (ii) the
computation is performed on the whole domain at once without a localization option;
and (iii) the currently available implementation is not performant.
In our implementation, we have carefully addressed the issues raised by
\cite{Serra2017} regarding the implementation of \cite{Karrasch2015}. Specifically,
we have improved a number of aspects related to the (inherently robust) topological
index-theory-based methods. In the spirit of discrete differential geometry, we now discretize the
tensor index computation in a manner such that important properties are preserved, and these
properties are exploited efficiently.
The main reason why we argue it is worth improving on the index-based approach is
that it allows for the identification of a comparatively small number of candidate regions that each potentially contain
a CLV, around which one may then restrict subsequent computations. This in particular allows for
straightforward parallelization. Ultimately, flow problems of high complexity become manageable.
Implemented in the modern and performant programming language \texttt{Julia}
\cite{Bezanson2017}, our package is able to find geodesic vortices on domains of
unprecedented size and orders of magnitude faster than what is the current state of
the art. Specifically, we demonstrate our code (i) on a parameter-dependent
turbulent flow and (ii) in a global ocean surface
simulation using a computational grid of tens of millions of points. The required
computational power does not exceed what is available on an ordinary work station
or a modern desktop machine. While there remains room for further improvement,
this shows that it is possible to effectively compute CLVs in very large-scale 2D
flows and/or to perform extensive parameter studies on medium-sized domains.
This paper is organized as follows. In \cref{sec:advectiondiffusion}, we recall the
mathematical framework of \cite{Haller2018} that introduced the concept of
``material barriers (to diffusive transport)'' as an instance of the methods falling into
the category of geodesic vortices. These are then summarized together with a
generic computational approach in \cref{sec:geodesic_vortices}.
\Cref{compapproach} is devoted to the description of our computational approach
based on index theory for planar line fields. For convenience, we have collected
related facts in \cref{app:index_theory}. Finally, we demonstrate the outstanding
capabilities of our implementation on two non-trivial applications in
\cref{sec:applications}: a parameter study based on a minimal two-dimensional
turbulence simulation on the torus, and a global ocean surface velocity simulation.
\section{Background}
\subsection{Mathematical setting}
\label{sec:advectiondiffusion}
We now recall the theory related to material barriers to diffusive transport~\cite{Haller2018},
as the most recent instance of a method that fits into the geodesic vortex framework. This is also the
method we use in the examples in \cref{sec:applications}.
Here the setting is a time-dependent
incompressible fluid velocity field $\mathbf{v}\colon U \times \mathcal{T} \to \mathbb{R}^2$,
where $U$ is an open, simply connected subset of $\mathbb{R}^2$ and $\mathcal{T}$ is a finite time interval.
A passive scalar $c$, i.e., a scalar quantity that does not affect the velocity field,
undergoes \emph{advection--diffusion} if it satisfies the partial differential equation (PDE)
\begin{equation}\label{eq:ade}
\partial_t c+ \divergence (c \cdot \mathbf{v}) = \nu\divergence\nabla c = \nu\Delta c\,.
\end{equation}
In words, the density $c$ is carried by the fluid and diffuses isotropically. The
inclusion of anisotropic and/or spatially inhomogeneous and time-dependent
diffusion is straightforward, but omitted here for ease of presentation.
\Cref{eq:ade} models the evolution of a range of physically relevant quantities,
including concentrations of dissolved substances (like salinity and temperature) and
vorticity in the 2D Navier--Stokes equations. Strictly speaking, none of these three examples is
passive, but certainly temperature and salinity can be regarded as such over time
scales of a few weeks or even months.
Furthermore, \cref{eq:ade} on $\mathbb R^2$ can be interpreted
as the \emph{Fokker--Planck/Kolmogorov forward equation} of the stochastic differential equation
\[
dX_t = \mathbf{v}(X_t,t)dt + \sqrt{2\nu}dW_t\,,
\]
provided that $\mathbf{v}$ satisfies certain regularity assumptions; cf.~\cite{Karatzas1991}.
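This correspondence can be made concrete by sampling paths of the SDE with a basic Euler--Maruyama scheme; the empirical distribution of many such paths approximates the solution $c$ of the advection--diffusion equation. The following Python sketch is purely illustrative (function and variable names are ours):

```python
import numpy as np

def euler_maruyama(v, x0, t0, t1, nu, n_steps, rng):
    """Simulate dX_t = v(X_t, t) dt + sqrt(2*nu) dW_t from x0 over [t0, t1]."""
    dt = (t1 - t0) / n_steps
    x = np.array(x0, dtype=float)
    t = t0
    for _ in range(n_steps):
        dw = rng.normal(scale=np.sqrt(dt), size=x.shape)  # Brownian increment
        x = x + v(x, t) * dt + np.sqrt(2.0 * nu) * dw
        t += dt
    return x
```

For $\nu = 0$ the scheme reduces to the explicit Euler method for the characteristic ODE, i.e., it approximates the flow map $\mathbf{F}_{t_0}^{t_1}$.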
The initial value $c(\cdot,0) = c_0$ uniquely defines the solution of \cref{eq:ade} given
appropriate boundary conditions on $\partial U \times \mathcal{T}$.
The value $\nu > 0$ is the \emph{diffusivity} (or the inverse Péclet number in
the non-dimensionalized form) and is very small in many applications. In the
absence of diffusion ($\nu = 0$), $c$ is conserved and transported
along the characteristics of the velocity field $\mathbf{v}$. Characteristics $x_t$ satisfy the
ordinary differential equation (ODE) $\frac{\mathrm{d}}{\mathrm{d} t}x_t = \mathbf{v}(x_t,t)$.
We denote by $\mathbf{F}_{t_0}^t$ the flow-map for this ordinary differential equation,
i.e., $\mathbf{F}_{t_0}^t(p)$ corresponds to the time-$t$ solution of the ODE with initial value $x_{t_0} = p$.
By definition, \emph{Lagrangian} (or \emph{material} structures) are
\emph{invariant} under the flow $t \mapsto \mathbf{F}_{t_0}^t$. Hence, the flow map can
be used to \emph{define} Lagrangian coordinates in space by labelling a
spatiotemporal point $(x,t)$ using the fluid ``particle'' $p$ that occupies $x$ at time $t$.
Clearly, in Lagrangian coordinates there is no advective transport; moreover, with this change of coordinates,
the advection--diffusion equation \eqref{eq:ade} takes the form of a pure diffusion equation \cite{Press1981,Thiffeault2003,Karrasch2016b,Haller2018}
\begin{equation}\label{eq:lade}
\partial_t \tilde{c} = \nu \divergence\left(\mathbf{D}_{t_0}^t \nabla\tilde{c}\right),
\end{equation}
where $\mathbf{D}_{t_0}^t(p) = (D\mathbf{F}_{t_0}^t(p))^{-1} (D\mathbf{F}_{t_0}^t(p))^{-\top}$ and $\tilde{c}(p,t) = c(\mathbf{F}_{t_0}^t(p),t)$
are, respectively, the diffusion tensor and the scalar density in Lagrangian coordinates. For notational simplicity, we omit the tilde in the notation of the scalar
density function in Lagrangian coordinates.
This coordinate change allows for the separation of the reversible effects of advection from the
irreversible effects of the combined advection and diffusion. Note that the Lagrangian diffusion tensor field $\mathbf{D}$ is both $t$- and $p$-dependent.
In this framework, material barriers to diffusive and stochastic transport have been
defined in \cite{Haller2018} as material surfaces which extremize diffusive transport
over the finite observation time interval. There, it is shown that in two-dimensional
flows, the diffusive transport through a one-dimensional material manifold
$\Gamma$ is given in leading order (with respect to diffusivity $\nu$) by
\begin{align}
\int_\Gamma \left\langle\nabla c_0, \mathbf{T}_{t_0}^t\mathbf{n}\right\rangle\,\mathrm{d}A\,,
\end{align}
where $\mathbf{T}_{t_0}^t$ is the \emph{transport tensor field}, defined as the
time-average of the Lagrangian diffusion tensor fields,
i.e.~$\mathbf{T}_{t_0}^t = \frac{1}{t-t_0}\int_{t_0}^t \mathbf{D}_{t_0}^\tau\,\mathrm{d}\tau$,
$\mathbf{n}$ is the outward-pointing normal, and $\mathrm{d}A$ is the canonical (Euclidean) surface measure.
After normalizing by the length
\footnote{The material barrier theory applies to higher dimensions, but the implementation for 2 dimensions does not generalize easily to 3 or more spatial dimensions.}
of $\Gamma$ and choosing a most ``diffusion-prone'' distribution of $c_0$, a functional
on closed curves is found whose stationary points are null-geodesics
of an indefinite metric tensor field. This fits nicely into the ``geodesic vortex''
framework described in the next section. As a by-product, the trace of the transport
tensor, $\trace(\mathbf{T})$, coined \emph{diffusion barrier strength (DBS)}, is a
diagnostic field whose logarithm we will use for visualization purposes as a scalar
background field in this work.
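As a sketch of how these quantities may be assembled numerically, consider the following Python/NumPy fragment (illustrative only, not the \texttt{CoherentStructures.jl} implementation); the flow-map gradients $D\mathbf{F}_{t_0}^{\tau_j}$ at a grid point are assumed to be given, e.g., from finite differencing of neighboring trajectories:

```python
import numpy as np

def lagrangian_diffusion_tensor(DF):
    """D = (DF)^{-1} (DF)^{-T} for a 2x2 flow-map gradient DF."""
    DFinv = np.linalg.inv(DF)
    return DFinv @ DFinv.T

def transport_tensor(DF_list):
    """T = time average of the Lagrangian diffusion tensors over the
    sampled intermediate times (simple left-endpoint quadrature of the
    defining integral)."""
    return sum(lagrangian_diffusion_tensor(DF) for DF in DF_list) / len(DF_list)

def dbs(T):
    """Diffusion barrier strength: trace of the transport tensor."""
    return np.trace(T)
```

For an incompressible flow, $\det D\mathbf{F}_{t_0}^t = 1$ and hence $\det \mathbf{D}_{t_0}^t = 1$, which can serve as a sanity check on the computed flow-map gradients.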
\subsection{Coherent Lagrangian vortices as null-geodesics}
\label{sec:geodesic_vortices}
Mathematically speaking, the ``geodesic vortex'' approach consists of the
computation of closed null-geodesics of a (possibly
indefinite) metric tensor field that has undergone a parameter-dependent shift.
Three different vortex approaches can be formulated
in this setting:
\begin{enumerate}
\item the ``black hole vortex'' approach \cite{Haller2013a}, which seeks stationary curves of
a functional related to stretching;
\item the ``objective Eulerian coherent structures'' (OECS) approach \cite{Serra2016}, which seeks stationary
curves of a functional related to instantaneous stretching (i.e., strain);
\item the ``material barriers to diffusive transport'' \cite{Haller2018,Haller2019} approach described in the previous section.
\end{enumerate}
In these three cases, the tensor field comes from, respectively, (a) the Cauchy--Green strain tensor
$\mathbf{C}_{t_0}^t = (D\mathbf{F}_{t_0}^t)^\top D\mathbf{F}_{t_0}^t$, (b) the rate of strain tensor $\mathbf{S}_{t_0}$ (i.e., the
symmetric part of the velocity gradient $D\mathbf{v}(\cdot,t_0)$), and (c) the transport tensor $\mathbf{T}_{t_0}^t$ defined in the previous section.
In the following, we will use the generic $\mathbf{T}$ to denote any of these tensor fields.
The geodesic vortex approach seeks to find closed null-geodesic curves of
$\mathbf{T}-\lambda\mathbf{I}$; here the (real) parameter $\lambda$ is taken from a physically motivated range.
Recall that null-geodesics are smooth curves $\gamma$ that have ``zero length''
when measured in the indefinite metric $\mathbf{T} - \lambda \mathbf{I}$,
i.e. $\gamma' \cdot (\mathbf{T}-\lambda\mathbf{I})\gamma' = 0$. It is readily verified
that they can be computed as integral curves of
\begin{equation}\label{eq:eta}
\eta_\lambda^{\pm} = \sqrt{\frac{\lambda_2 - \lambda}{\lambda_2 - \lambda_1}}\xi_1 \pm \sqrt{\frac{\lambda-\lambda_1}{\lambda_2-\lambda_1}}\xi_2\,,
\end{equation}
where $\lambda_1 \leq \lambda_2$ are eigenvalues of $\mathbf{T}$ and $\xi_1,\xi_2$ are
corresponding normalized eigenvectors. This is derived from the fact \cite{Haller2013a} that
null-geodesics $\gamma$ have uniform $\mathbf{T}$-strain along themselves, i.e.,
along $\gamma$ one has
\begin{equation}\label{eq:lamcons}
\sqrt{\frac{\gamma'\cdot \mathbf{T} \gamma'}{\gamma' \cdot \gamma'}} = \lambda\,.
\end{equation}
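At a single point, the computation of $\eta_\lambda^{\pm}$ from \cref{eq:eta} reduces to an eigendecomposition of the symmetric tensor; a minimal Python sketch (not the package implementation) is:

```python
import numpy as np

def eta_field(T, lam, sign=+1):
    """Direction eta_lambda^{+/-} from a symmetric 2x2 tensor T.
    Returns None where lam lies outside [lambda_1, lambda_2] or the
    eigenvalues coincide (tensor singularity)."""
    evals, evecs = np.linalg.eigh(T)  # eigenvalues in ascending order
    l1, l2 = evals
    if not (l1 <= lam <= l2) or np.isclose(l1, l2):
        return None
    a = np.sqrt((l2 - lam) / (l2 - l1))
    b = np.sqrt((lam - l1) / (l2 - l1))
    return a * evecs[:, 0] + sign * b * evecs[:, 1]
```

One readily checks that the returned unit vector satisfies the uniform-strain property \cref{eq:lamcons}, i.e., $\eta \cdot \mathbf{T}\eta = \lambda$ and $\eta \cdot (\mathbf{T}-\lambda\mathbf{I})\eta = 0$.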
As such, null-geodesics are closed integral curves of a planar line field, to which a corresponding
index theory applies; cf.~\cref{app:index_theory,app:applications}. Analogously to index theory for
planar vector fields, closed null-geodesics have index 1 relative to their inducing line field.
As defined formally in \cref{app:index_theory}, we will (i) assign an index (relative to
the line-field) to regions that are sets whose boundary forms a closed Jordan curve, or that are finite disjoint unions of such sets and (ii) refer
to all such regions with index 1 as \emph{elliptic regions}.
While the elliptic regions we initially identify will not have geodesic vortices as their boundaries per se,
we aim to find geodesic vortices that are homotopic to the elliptic regions found. The identification
of elliptic regions is the first step we use for the computation of geodesic vortices (this builds on \cite{Haller2013a,Karrasch2015}).
\section{The computational approach}\label{compapproach}
In this section, we give details of our implementation approach.
The structure of this section closely resembles the high-level structure
of our implementation provided in \texttt{CoherentStructures.jl}.
Our computational approach, at the highest level, consists of three steps:
\begin{enumerate}
\item Identify certain elliptic regions as candidate regions near geodesic vortices, based on index-theoretical methods applied to the first eigenvector field of $\mathbf{T}$;
cf.~\cref{app:index_theory,app:applications} for the foundational theoretical aspects.
\item For each identified elliptic region, localize tensor field data to a neighborhood of the region.
\item Compute closed orbits (i.e., geodesic vortices) by a shooting method in this neighborhood.
\end{enumerate}
We describe each step in more detail in the following sections.
\subsection{Identification of elliptic regions}\label{sec:elliptic}
In a computational setting, we know the values of the tensor field $\mathbf{T}$ and,
hence, its subdominant eigenvector field $\ell = \xi_1$, only at a finite number of
points, say, the nodes of a polygonal triangulation/mesh $\mathcal{P}$. A candidate
elliptic region $R$ will be the union of a finite set of polygonal faces $P_1,\dots,P_k$
from $\mathcal{P}$. Such regions are identified in three steps:
\begin{enumerate}
\item Compute indices of every mesh face in a fast and robust manner solely from
the given tensor data, without interpolation; cf.~\cref{ssec:polygon}.
\item Suitably merge mesh faces into regions with stable index, and extract those that are elliptic; cf.~\cref{ssec:combination}.
\item Optionally, do further merging to obtain larger elliptic regions.
\end{enumerate}
The second and third steps are necessary because generically, the only structurally stable
singularities occurring in regular tensor fields have index $\pm\frac12$ \cite{Delmarcelle1994}.
Therefore, unless treating a degenerate or artificial tensor field, sufficiently small single polygonal
regions are not elliptic. To identify elliptic regions, we merge nearby polygonal
faces with non-vanishing indices in step (ii). Consistently with the additivity of the
index under curve composition/region merging (see \cref{app:index_theory}), we
add indices of polygonal cells when they are merged.
So far we have not yet specified criteria for deciding which polygonal regions should be merged.
We argue that a reasonable, robust criterion is to require that a candidate region's
index shall not change when (further) enlarged by a specified radius $r>0$.
This is captured by the following definition.
\begin{definition}
We say that a region $R$ is \emph{$r$-stable} (relative to $\ell$), if the set $R_s \coloneqq \lbrace x \in \Omega;~d(x,R) \leq s\rbrace$
has the same index (relative to $\ell$) as $R$ for any $0 \leq s \leq r$.
\end{definition}
Clearly, if a region $R$ is $r$-stable, then within the $r$-vicinity of its boundary all polygonal faces have index 0.
Candidate regions for being homotopic to geodesic vortices are taken to be those that are minimal (with respect to inclusion)
unions of polygons that are $r$-stable and elliptic.
We note that in some cases the assumption of minimality is too strong;
as observed in \cite{Karrasch2015}, it is common for large geodesic vortices to bound
exactly two $r$-stable regions, each of wedge-type (i.e., index $\frac{1}{2}$).
In order to include also these elliptic regions, we additionally introduce
a number of ways to merge multiple $r$-stable regions that have indices summing to $1$.
Before giving a description of the details of steps (i)--(iii),
we summarize that our identification method
is fast, works directly on the line field data at mesh points, and can be used on
unstructured, irregular meshes/grids.
Moreover, it is unnecessary to choose pointwise
orientations for the line field, or to use ad-hoc heuristics for singularity type
classification. Robustness against local computational errors is achieved by
automatically choosing contours large enough so that any enlargement of the
contour (up to a specified size given by $r$) yields the same result. This is the only
parameter required by the (indirect) singularity detection method.
\subsubsection{Step (i): Calculating indices}
\label{ssec:polygon}
Assume we have a polygonal mesh $\mathcal{P}$ on $\Omega$ consisting of
vertices $\mathcal{V}$, edges $\mathcal{E}$, and polygonal faces $\mathcal{F}$. The vertices are points at which
the value of the line field $\ell$ is known. Since we are working in a discrete setting,
the natural curves to consider for the computation of indices are concatenations of
edges in $\mathcal{E}$. To this end, let $\gamma$ be a simple closed Jordan curve along $n$ edges of the mesh, i.e., passing through the vertices $v_1,\ldots, v_n,v_{n+1} = v_1$ along the edges $e_i=(v_i,v_{i+1})$ and enclosing a union of polygons;
cf.~\cref{fig:combining}.
Since we know the value of the line field $\ell$ only at the vertices, we need to
approximate the curve $\ell \circ \gamma\colon [1,n+1] \rightarrow \mathbb P^1$ based on those
values in order to approximate the angle function $\theta$ used in \cref{def:lf_index}; cf.~\cref{app:index_theory}.
We cannot apply \cref{def:lf_index} to the discrete case directly as the angles
$\theta_i$ between $\ell(v_i)$ and the $x$-axis are determined only up to a
multiple of $\pi$. We follow \cite{Tricoche2004} and choose
$\theta_i$ such that the angle difference $\Delta_i\coloneqq\theta_{i+1}-\theta_i$
between subsequent angle representations is minimal modulo $\pi$ for $i=1,\dots,n$.
This is achieved by setting
\begin{equation}\label{eq:angleupdate}
\Delta_i \coloneqq \rem(\alpha_{i+1}-\alpha_i, \pi) = (\alpha_{i+1}-\alpha_i) - \pi\round\left(\frac{\alpha_{i+1}-\alpha_i}{\pi}\right)\,,
\end{equation}
where $\alpha_i$ is \emph{any} angle representation of $\ell(v_i)$. The index is then approximated by
\[
\ind_\ell(\gamma) \coloneqq \frac{1}{2\pi}\left( \theta_{n+1} - \theta_1 \right) = \frac{1}{2\pi} \sum_{i=1}^{n} \left(\theta_{i+1} - \theta_i\right) = \frac{1}{2\pi} \sum_{i=1}^{n} \Delta_i\,,
\]
where the right hand side can be viewed as a discretization of the integral representation of
the index in \cref{eq:lfindex}. We will refer to $\ind_\ell$ as the ``computed index'' whenever
we wish to explicitly distinguish this from the true index, though we will not always make the distinction.
We never have to pick an orientation
$\theta_i$ for the line field at the vertices, but only compute the angle updates
$\Delta_i$ via \cref{eq:angleupdate} from any angle representation
$\alpha_i$; the latter is usually obtained by calling the \texttt{arctan} function on the
line field components. Moreover, the value $\Delta_i$ only depends on the (directed)
edge $e_i$ and not on the rest of $\gamma$. Hence, $\Delta$ can be established
as a function on the set of edges $\mathcal{E}$. This method is used in \cite{Tricoche2004} for line field simplification by merging of
singularities, where it is shown that for linear line fields this
approach yields the correct index of an interpolated line-field on triangular meshes -- even though
these resolve the angle function $\theta$ by as few as three values. If $\gamma$ encloses
a region $R$ (and is positively oriented), define $\ind_\ell(R) = \ind_\ell(\gamma)$.
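The angle-update rule \eqref{eq:angleupdate} and the resulting index approximation fit in a few lines of code. The following Python sketch is illustrative (the actual implementation is part of \texttt{CoherentStructures.jl}); it samples an unoriented line field along a closed vertex loop and returns the computed index:

```python
import numpy as np

def angle_update(a0, a1):
    """Delta_i = rem(alpha_{i+1} - alpha_i, pi): the representative of the
    angle difference that is minimal modulo pi."""
    d = a1 - a0
    return d - np.pi * np.round(d / np.pi)

def line_field_index(lines):
    """Computed index of a closed vertex loop from (unoriented) line-field
    samples; `lines` holds 2-vectors at v_1, ..., v_n (loop closes to v_1)."""
    alphas = [np.arctan2(l[1], l[0]) for l in lines]  # any angle representative
    alphas.append(alphas[0])                          # close the loop
    total = sum(angle_update(alphas[i], alphas[i + 1]) for i in range(len(lines)))
    return total / (2.0 * np.pi)
```

Sampling, e.g., the line field of a wedge-type singularity (line angle $\theta/2$ at polar angle $\theta$) along a surrounding circle yields the computed index $\frac12$, independently of the orientation chosen by \texttt{arctan2} at each vertex.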
\begin{figure}
\centering
\subfloat[A triangle in a mesh, with vertices $v_1$, $v_2$, $v_3$ and edges $e_1$, $e_2$, $e_3$ labelled.]{\input{mesh.tex}}\quad
\subfloat[Same triangle, with a line field superimposed and values at vertices in red]{
\input{onelement.tex}}\\
\subfloat[\label{fig:rp1}Values of the line field on $\mathbb P^1$, along with straight-line curve.]{
\input{rp1.tex}
}
\caption{(a) Visualization of an irregular computational domain, a triangular mesh. (b)
The line field $\ell$ and its values $\ell(v_1)$, $\ell(v_2)$, $\ell(v_3)$ around a triangle of
the mesh. (c) The angle (mod $\pi$) representations $\alpha_i$ of $\ell(v_i)$ and a
connecting curve of straight line segments. The values $\Delta_i$ are (directed) arcs
from $\alpha_i$ to $\alpha_{i+1}$. As the curve goes around the center
halfway, the line-field index is $\frac{1}{2}$, correctly indicating the enclosed wedge-type
singularity.}
\label{fig:triangle}
\end{figure}
From the definition of $\Delta_i$, we know that it changes sign if the direction of
$e_i$ is reversed. This gives an additive property that is consistent with the additive
property of the index.
\begin{lemma}\label{lem:union_index}
Let $P_1,\dots, P_k\in\mathcal{F}$ be $k$ distinct faces so that $R = \bigcup_{i=1}^k P_i$.
Then $\ind_\ell(R) = \sum_{i=1}^k \ind_\ell(P_i)$.
\end{lemma}
\Cref{lem:union_index} allows us to ignore faces with vanishing index from all considerations in the following.
The method just described can also be interpreted as follows.
Define the values of $\theta_i$ by taking the canonical metric on $\mathbb P^1$ as
given by the angle between subspaces, and obtain a curve by connecting individual points by the
shortest path in this metric (or equivalently by straight lines in the canonical embedding into $\mathbb R^2$). To compute the index, we then count the number of times
this curve winds around the center of the circle representing $\mathbb{P}^1$, and divide by 2; cf.~the (equivalent) definition of the line field index in \cite[p.~218]{Spivak1999}.
\subsubsection{Step (ii): Combining polygons}
\label{ssec:combination}
Let $\mathcal{F} = \lbrace P_i; ~i \in I\rbrace$ be the set of polygonal faces/grid cells of the mesh enumerated by an index set $I$.
We identify each polygon $P_i \in\mathcal{F}$ with its center of mass $p_i$.
In the following, the distance between faces $P_i$ and $P_j$ is taken as the
distance between the centers of mass $p_i$ and $p_j$, for simplicity.
We wish to detect regions $R$ that are elliptic and $r$-stable unions of polygons. We do so by
finding connected components of an undirected graph $\mathcal{G}$ whose nodes are the
center points $p_i$ of those faces $P_i$ that have non-vanishing index. In
this graph, two nodes $p_i \neq p_j$ are connected if and only if $\lvert p_i - p_j\rvert < r$.
By \cref{lem:union_index}, the index of such a connected component
$\widetilde{\mathcal{G}}$ is given by the sum of the non-vanishing indices
$\sum_{i:~p_i\in\widetilde{\mathcal{G}}}\ind_{\ell}(P_i)$\,.
Let $\mathcal{K}$ denote the set of such connected components. Any
$K\in\mathcal{K}$ represents a set of faces $R_K \coloneqq \cup_{k \in K} P_k$ whose
index is, according to \cref{lem:union_index}, given by the sum of the indices of
the $P_k$, $k\in K$. If $\ind_\ell(R_K) = 1$, then the corresponding
region $R_K$ is an elliptic region that is (approximately\footnote{As we are working with center points and not exact distances, it may not be fully $r$-stable.})
$r$-stable (cf.~\cref{fig:combining}), provided that the computed index approximates the true index well enough at the chosen coarseness of the polygonal mesh.
For a region $K$, we will call the average over $(p_i)_{i \in K}$ its \emph{center}.
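In pseudocode terms, this step amounts to a union-find pass over the singular cells. The following simplified Python sketch (function and variable names are ours, not the package's) groups cells of non-vanishing index whose centers lie closer than $r$ and reports the summed index per component:

```python
import numpy as np

def singular_components(centers, indices, r):
    """Union-find on the proximity graph of cells with non-vanishing index.
    Returns a list of (member cell ids, summed index); components with
    summed index 1 are the candidate elliptic regions."""
    ids = [i for i, k in enumerate(indices) if not np.isclose(k, 0.0)]
    parent = {i: i for i in ids}

    def find(i):  # find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for a in ids:  # union all pairs closer than r
        for b in ids:
            if a < b and np.linalg.norm(np.asarray(centers[a]) - np.asarray(centers[b])) < r:
                parent[find(a)] = find(b)

    comps = {}
    for i in ids:
        comps.setdefault(find(i), []).append(i)
    return [(members, sum(indices[i] for i in members)) for members in comps.values()]
```

Applied to a configuration as in \cref{fig:combining} (two nearby wedges; a cluster of four cells with cancelling indices; one isolated cell of index $-\frac12$), only the wedge pair is reported as elliptic.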
\begin{figure}
\centering
\input{combining.tex}
\caption{A quadrilateral mesh, with several $r$-stable regions
($r$ equals, e.g., two cell diameters)
containing singular cells. The upper left contour encloses an elliptic region,
whereas the right contour contains 4 cells with non-vanishing indices which sum up to zero.
Below, there is an isolated cell of index $-\frac12$.}
\label{fig:combining}
\end{figure}
\subsubsection{Step (iii): Additional merge heuristics}\label{sec:mergeheuristics}
As mentioned above, observations in \cite{Karrasch2015} showed that the
procedure in step (ii) may miss large elliptic regions, in which two wedge-type
singularities (with index $\frac12$) are further apart than $r$. Thus, we account for
special elliptic configurations of $r$-stable regions by a range of merge heuristics.
The simplest of these, which we call \texttt{combine\_20}, adds to the list of elliptic regions all pairs of wedge-type
$r$-stable regions that are mutually nearest neighbors (measured by the distance between center points) among the
$r$-stable regions (of nonvanishing index). We have a similar heuristic for
combining 3 wedge-type $r$-stable regions with a trisector (index $-\frac{1}{2}$)
called \texttt{combine\_31}. This seems useful in the OECS case but less so for
other types of vortices. Additionally, we have also implemented a
\texttt{combine\_20\_aggressive} heuristic that combines wedge-pairs under strictly
weaker requirements. More specifically, it does so if (i) one of them is the nearest
neighbor of the other; and (ii) if the rectangle with vertices given by the $r$-stable
region centers does not contain any further $r$-stable region of nonzero index.
This heuristic is based on examination of the singularity configurations occurring in
our turbulence simulation described in \cref{sec:turbulence}. There, we also
compare results from the \texttt{combine\_20} heuristic with those obtained from the
\texttt{combine\_20\_aggressive} heuristic. Further heuristics can be developed and
neatly included in our implementation. Since the resulting regions only serve as
\emph{candidate} regions for the closed orbit computation described in
\cref{sec:closed_orbits}, false-positives at worst add some computational effort.
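As a minimal illustration of the pairing rule underlying \texttt{combine\_20}, consider the following sketch (hypothetical helper with brute-force distance computation for clarity; the actual implementation queries a tree structure from \texttt{NearestNeighbors.jl}):
\begin{lstlisting}
# Pair centers of r-stable regions (with nonvanishing index) that are
# mutually nearest neighbors; brute-force O(n^2) version for clarity.
function mutual_nearest_pairs(centers::Vector{NTuple{2,Float64}})
    n = length(centers)
    nn = zeros(Int, n)                  # index of nearest neighbor
    for i in 1:n
        dbest = Inf
        for j in 1:n
            i == j && continue
            d = hypot(centers[i][1] - centers[j][1],
                      centers[i][2] - centers[j][2])
            d < dbest && ((dbest, nn[i]) = (d, j))
        end
    end
    # keep each mutual pair once, with the smaller index first
    return [(i, nn[i]) for i in 1:n if nn[i] > i && nn[nn[i]] == i]
end
\end{lstlisting}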
\subsubsection{Discussion}
Since the above procedure forms one centerpiece of our implementation and
earlier implementations of the same index-theory-based considerations
have deservedly earned some criticism, we would like to discuss some of its
features from a theoretical viewpoint here.
\newpage
First, the procedure described above is a rigorously justified singularity
simplification procedure; cf.~also \cite{Tricoche2001}.
If a singularity contained in a single mesh cell with non-vanishing
index is reasonably isolated, merging its enclosing cell with neighboring cells
results in a curve homotopy of the boundary which does not change the value of the index, but
instead increases the number of ``quadrature points'' in the discretization of its
integral representation; recall \cref{eq:lfindex} in \cref{def:lf_index}. Therefore, it
allows us to compute the index more accurately.
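For concreteness, this discrete index computation can be sketched as follows for a line field sampled along a closed boundary curve (the orientation angles are hypothetical inputs; in the implementation they come from the eigenvector field of $\mathbf{T}$ at the quadrature points):
\begin{lstlisting}
# Discrete index of a line field along a closed curve: sum the angle
# increments between consecutive quadrature points. Since a line field
# is defined only modulo pi, increments are wrapped to [-pi/2, pi/2).
function linefield_index(angles::AbstractVector{<:Real})
    n = length(angles)
    total = 0.0
    for i in 1:n
        d = angles[mod1(i + 1, n)] - angles[i]
        d = mod(d + pi/2, pi) - pi/2    # wrap to [-pi/2, pi/2)
        total += d
    end
    return total / (2pi)  # half-integers: wedge 1/2, trisector -1/2
end
\end{lstlisting}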
If a singularity is accompanied by a very close second singularity, then their
combined index is computed. This yields either (a) the cancellation of poorly
computed/fake non-zero indices, or (b) the computation of the index of a larger
region enclosing two singularities.
Case (b) sometimes occurs in the center of closed orbits, where two wedges (each
with index $1/2$) or three wedges and a trisector (index $-\frac12$) are nearby and
get combined to a joint singularity with index 1, correctly indicating an elliptic region.
Case (a) often occurs along the observed boundary of vortices, where many
singularities cluster along a line. From a macroscopic perspective, however, the
indices of these singularities turn out to cancel each other out, indicating that
the net topological effect of this cluster is equivalent to the complete absence of
singularities, or the presence of a single singularity.
These effects are shown in \cref{fig:combination}. In \cref{fig:combi1},
cells with nonzero index on a quadrilateral mesh (only wedges (index $\frac12$, orange) and trisectors (index $-\frac12$,
blue)) without any combination steps taken are shown. One can clearly see the isolated
wedge pair in the center of the figure, and dense singularity clusters aligned along
quasi-one-dimensional strips. In \cref{fig:combi2}, singularities are post-processed
according to the above procedure. Here, (i) the isolated wedge pair is combined to
an elliptic region (index 1, white), (ii) the dense singularity clusters have been
annihilated under combination/index summation, and (iii) a densely clustered 3-wedge-1-trisector
configuration in the upper right corner (caused in this particular case by the use of a
low order interpolation scheme for the velocity field) has been combined to
another elliptic region.
\begin{figure}
\centering
\subfloat[Rectangles in a regular grid with non-vanishing indices.\label{fig:combi1}]{
\input{sings_before.tex}
}\quad
\subfloat[Centers of regions with nonzero index after the combination steps (with $r=0.2\degree{}$).\label{fig:combi2}]{
\input{sings_after.tex}
}
\caption{Distribution of transport tensor $\mathbf{T}$ singularities and their
types for a $30$-day ocean surface simulation off the coast of Mexico starting on January 7, 2017: orange points
correspond to centers of regions with index $\frac{1}{2}$, blue to index $-\frac{1}{2}$,
and white to index $1$. Background coloring is the DBS field.}
\label{fig:combination}
\end{figure}
Finally, compared to the preceding (the computation of the tensor field $\mathbf{T}$
via advection of potentially dense grids of particles) and the subsequent
computational steps (the computation of closed null-geodesics), the index
computations here are negligible in terms of computational effort. This effort consists
of:
\begin{enumerate}
\item computing pairwise distances between points in the plane, though only distances between a \emph{small} number (depending on the heuristic) of \emph{nearby}
points are required, which massively reduces the number of possible singularity
combinations and allows the use of a tree structure via \texttt{NearestNeighbors.jl}
\cite{Carlsson2018} instead of a distance matrix; and
\item applying some simple logic/filtering on the resulting distance graph.
\end{enumerate}
Knowing in advance where closed null-geodesics might be located allows us to restrict
the subsequent computations to a small number of local domains (small in comparison to the total number of mesh vertices),
which facilitates good scaling behavior on challenging flow problems over large domains.
\subsection{Restriction to local domains}
\label{sec:localization}
One crucial feature of the index-based computation of geodesic vortices is the
possibility to localize the subsequent closed-orbit computation to regions of a
physically reasonable size. This distinguishes our implementation from that
developed in \cite{Serra2017}. One immediate and significant effect of data
localization is that interpolants do not have to carry global information, allowing
for a better utilization of computer memory (and the memory hierarchy) in the subsequent steps.
This is especially true when working with several processes, as the amount of data
needing to be sent to individual processes is significantly reduced.
Another positive effect of localization is that integral curves which leave
the localization box trigger an error in the ODE integrator and are hence no longer
followed. Finding a reasonable criterion for automatically deciding when to give
up on an integral curve was a rather challenging issue in previous implementations.
The side length $2R$ of the localization box is a parameter that must be supplied and
that bounds the size of geodesic vortices that can be found. This parameter is used
by our implementation to determine default values for other parameters as well.
These include the required return distance for an integral curve to be considered as
``closed'' and error tolerances of the ODE solver. Each candidate region identified in the
previous step can be processed with only local information about the eigenvector
fields $\xi_1,\xi_2$ and eigenvalue fields $\lambda_1,\lambda_2$ of the tensor field.
\subsection{Computation of closed orbits}\label{sec:closed_orbits}
Let $K$ be an elliptic region identified by the procedure described in \cref{sec:elliptic}.
That procedure returns a point $p_K$ (typically lying in the convex hull of $K$) which we view
as the potential vortex center. To find closed orbits of $\eta^{\pm}_\lambda$ near $K$,
we employ a shooting method and place a Poincaré section of length $R$ from $p_K$ eastwards to $p_K + (R,0)$.
In order to apply the shooting method, i.e., to numerically\footnote{We use the
\texttt{DifferentialEquations.jl} package \cite{Rackauckas2017}.} compute integral curves,
we need to turn the line fields into local vector fields.
\subsubsection{Orientation of the line field}
At each point $x$ in the domain, the line field $\eta^\pm_\lambda(x)$ is identified with a vector
$v^\pm_\lambda(x)$ so that $\spn\lbrace v_\lambda^\pm(x)\rbrace = \eta^\pm_\lambda(x)$
and $|v_\lambda^\pm(x)| = 1$. Clearly, integral curves of $v^\pm_\lambda$ are integral curves
of $\eta^\pm_\lambda$. As for the vector field representation, we choose an orientation such
that $\xi_1$ is spiraling anti-clockwise around $p_K$, and $\xi_2$ is pointing
away from $p_K$. This is achieved by setting
\begin{align*}
\tilde \xi_1(x) &\coloneqq \sign\left(\xi_1(x) \cdot \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}(x - p_K)\right)\xi_1(x)\,, &
\tilde \xi_2(x) &\coloneqq \sign\left(\xi_2(x) \cdot (x - p_K)\right)\xi_2(x)\,,
\end{align*}
and then
\begin{align}\label{vformula}
v_\lambda^\pm(x) = \sqrt{\frac{\lambda_2 - \lambda }{\lambda_2 - \lambda_1}}\,\tilde{\xi}_1(x) \pm \sqrt{\frac{\lambda - \lambda_1}{\lambda_2 - \lambda_1}}\,\tilde{\xi}_2(x)\,.
\end{align}
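Since $\tilde\xi_1,\tilde\xi_2$ are orthonormal eigenvectors of $\mathbf{T}$ with eigenvalues $\lambda_1 \leq \lambda \leq \lambda_2$, a direct computation confirms that these are unit vectors spanning null directions of the tensor $\mathbf{T} - \lambda\,\mathrm{Id}$:
\begin{align*}
|v_\lambda^\pm|^2 &= \frac{\lambda_2-\lambda}{\lambda_2-\lambda_1} + \frac{\lambda-\lambda_1}{\lambda_2-\lambda_1} = 1\,, &
v_\lambda^\pm \cdot \mathbf{T} v_\lambda^\pm &= \frac{(\lambda_2-\lambda)\,\lambda_1 + (\lambda-\lambda_1)\,\lambda_2}{\lambda_2-\lambda_1} = \lambda\,.
\end{align*}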
The local vector field can be interpreted as a \emph{rotated vector field}; cf.~\cite{Duff1953}.
With the previous orientation, increasing $\lambda$ turns $v_{\lambda}$ to the right (for $v^+_\lambda$)
and to the left (for $v^-_\lambda$).
We calculate $v_\lambda^\pm$ at grid points and then interpolate. On a quadrilateral grid,
this is done by bilinear interpolation followed by a slight rescaling to ensure that the interpolated
value $v(x)$ has unit length. In cases where $v(x) \cdot v(y) < 0$ for adjacent grid points $x,y$
the interpolated vector field will have sharp kinks, suggesting that the orientation for the vector
field may have been chosen incorrectly. As a post-processing step, we check that all
obtained closed orbits do not lie in such mesh cells. Moreover, \cref{vformula} is undefined when
$\lambda \notin [\lambda_1,\lambda_2]$. In our implementation, we nevertheless generate a
vector field at such points, and reject closed orbits a posteriori if $v_\lambda^\pm$
(and likewise also $\eta_\lambda^\pm$) has points in a cell in which
$v_\lambda^\pm$ is undefined at some vertex. We also reject closed orbits in the
unlikely case that they do not enclose the center point $p_K$.
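The normalized bilinear interpolation on a single quadrilateral cell can be sketched as follows (hypothetical helper; the corner values $v_{00},v_{10},v_{01},v_{11}$ are the oriented unit eigenvector samples at the cell's vertices, and $(s,t)\in[0,1]^2$ are local cell coordinates):
\begin{lstlisting}
# Bilinear interpolation of a unit vector field on one quad cell,
# followed by rescaling so that the interpolated value has unit length.
function interp_unit(v00, v10, v01, v11, s, t)
    v = (1 - s) * (1 - t) * v00 + s * (1 - t) * v10 +
        (1 - s) * t * v01 + s * t * v11
    n = hypot(v[1], v[2])
    return n > 0 ? v ./ n : v           # avoid division by zero
end
\end{lstlisting}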
\subsubsection{The shooting method}
After these preparations, the general approach is the following:
Let $\Ret^\pm\colon [\lambda_{\min},\lambda_{\max}] \times [0,R] \rightarrow \mathbb{R}$
denote the Poincaré return distance function for $v^\pm_\lambda$. This is the
(signed) distance from an orbit of $v^\pm_\lambda$ starting at $p_K + (s,0)$ to its
first return point on the Poincaré section. The function $\Ret$ is undefined at points $s_0$
whose orbit does not return, and in such cases, we assign $\Ret^\pm(s_0)\coloneqq\infty$.
Geodesic vortices are given by the zeroes of $\Ret^\pm$.
Abstractly speaking, the problem is to find roots of $\Ret$ over the rectangle
$[\lambda_{\min},\lambda_{\max}] \times [0,R]$. Previous implementations for computing
geodesic vortices \cite{Onu2015,Karrasch2015,Hadjighasem2016,Serra2017} iterate over a fixed range of values
$\lambda \in [\lambda_{\min},\lambda_{\max}]$ and aim to find zeros of $s\mapsto\Ret^\pm(\lambda,s)$.
This has the disadvantage of often dealing with poorly conditioned problems.
\Cref{fig:zerocrossings} shows an example of the behavior of $\Ret^\pm(\lambda,\cdot)$
for various values of $\lambda$. In \cref{fig:x_vs_lambda}, we plot the parameter value
$\lambda^*(s)$, for which integral curves close, over the initial condition $p_K + (s,0)$ along the Poincaré
section. At local extrema, we observe a jump in the number of closed orbits for fixed
$\lambda$. In \cref{fig:varying_seeds}, we look at the same situation from a different
angle. Here, we plot the return distance function $\Ret$ again over the initial conditions.
The bifurcations in \cref{fig:x_vs_lambda} correspond to tangencies of the curve with the
zero level set $\Ret=0$ (black) in \cref{fig:varying_seeds}.
\begin{figure}
\centering
\subfloat[The set of points on the Poincaré section for which a given closing parameter $\lambda$ (red lines) corresponds to a closed orbit is difficult to determine for some values of $\lambda$.\label{fig:x_vs_lambda}]{\input{x_vs_lambda_paper.tex}}\\
\subfloat[The return distance function (with fixed parameter value $\lambda$),
whose zeroes correspond to seeding points for closed orbits, may have tangencies with the zero-level set, and, as a consequence, the root finding problem is ill-posed.\label{fig:varying_seeds}]{\input{varyingseedpoints_paper.tex}}
\caption{Problematic behavior of $\Ret^\pm(\lambda,\cdot)$ with fixed $\lambda$.}
\label{fig:zerocrossings}
\end{figure}
We therefore employ a dual approach instead and look for roots of $\Ret^\pm(\cdot,s)$
for a fixed range of values $s \in [0,R]$, corresponding to a range of fixed initial
conditions along the Poincaré section. This has the following advantages:
\begin{enumerate}
\item The function $\Ret^\pm(\cdot,s)$ is---as long as the corresponding integral curves go through an annular region around the center singularities in which $v_\lambda^\pm$ is a continuous vector field (recall that the underlying vector field is rotating
under parameter variation)---monotone (see \cref{fig:prd_vs_lambda}), greatly improving the condition of the problem.
\item If we only want to find the outermost geodesic vortex, we can start with a large $s$, i.e., at the very right of
the Poincaré section, and decrease $s$ until the first closed orbit is found.
\item Closed orbits that are found tend to be uniformly spatially distributed; see \cref{fig:closed_orbits}.
\end{enumerate}
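Owing to this monotonicity, the one-dimensional root search in $\lambda$ can be performed by simple bracketing. A minimal sketch (with a hypothetical callback \texttt{ret} that returns the signed return distance for the fixed $s$, or \texttt{Inf} if the orbit does not return):
\begin{lstlisting}
# Bisection on lambda -> Ret(lambda, s) for fixed s. Returns the root,
# or `nothing` if no sign change is bracketed or an orbit escapes.
function find_lambda(ret, lo, hi; tol = 1e-6)
    flo, fhi = ret(lo), ret(hi)
    (isfinite(flo) && isfinite(fhi) && flo * fhi <= 0) || return nothing
    while hi - lo > tol
        mid = (lo + hi) / 2
        fmid = ret(mid)
        isfinite(fmid) || return nothing  # orbit left the localization box
        if flo * fmid <= 0
            hi, fhi = mid, fmid
        else
            lo, flo = mid, fmid
        end
    end
    return (lo + hi) / 2
end
\end{lstlisting}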
\begin{figure}
\centering
\subfloat[Return distance function $\Ret$ for some fixed initial condition and varying parameter $\lambda$.]{\input{lamreturn_paper.tex}}\quad
\subfloat[Yellow lines show integral curves for $\lambda=0,1,2,3,4$. Pink lines are with $\lambda=1$ and do not return.\label{fig:lambda}]{\input{lambda.tex}}
\caption{Behavior of the function $\Ret^\pm(\cdot, s)$.}
\label{fig:prd_vs_lambda}
\end{figure}
\begin{figure}
\centering
\input{sings_after_vortex.tex}
\caption{\Cref{fig:combi2} overlaid with the computed closed orbits.}
\label{fig:closed_orbits}
\end{figure}
\section{Applications with \texttt{CoherentStructures.jl}}
\label{sec:applications}
Both test cases described below were run on a workstation\footnote{Intel(R) Xeon(R) CPU E3-1235 @ 3.20GHz, 16GB RAM} with 4 cores with a Linux (Fedora 27) operating system, except for the computations used to produce \cref{fig:vortex_density}, which were run on a 32-core compute server.
\subsection{2D turbulence with varying initial time as parameter}
\label{sec:turbulence}
We calculate material barriers of an incompressible turbulent velocity
field over a series of time windows $[t, t+5]$ for $700$ equally
spaced values of $t$ in $[0,70]$. This is done in order to test
our implementation on a large number of velocity fields without the option of
manually adjusting parameters from one time window to the next.
For the convenience of the reader, we provide complete code---here and as a
Jupyter notebook in the supplementary material---to reproduce our simulation.
Unfortunately, the exact velocity field obtained changes from run to run, but
the results should remain qualitatively the same.
The velocity field is generated with the help of the packages \texttt{FourierFlows.jl} \cite{Wagner2019}
and \texttt{GeophysicalFlows.jl} \cite{Constantinou2019}.
\newpage
\subsubsection{Generating a turbulent velocity field}
We begin by importing the relevant packages and by setting up the computational domain.
\begin{minipage}{\textwidth}
\lstinputlisting{turb1.jl}
\end{minipage}
To avoid decay of the flow we employ stochastic forcing.
The code below is modified from the example given in the \texttt{GeophysicalFlows.jl} documentation.
\begin{minipage}{\textwidth}
\lstinputlisting{turb2.jl}
\end{minipage}
We now set up the remaining parameters used in the simulation. We numerically solve the vorticity (transport) equation
\[
\partial_t \zeta = - u\cdot \nabla \zeta -\nu\zeta + f.
\]
Here $u(x,y) = (u_1(x,y),u_2(x,y))^T$ is the (incompressible) velocity field, and $\zeta = \partial_x u_2 - \partial_y u_1$ is its vorticity.
The parameter $\nu$ is set to $10^{-2}$ and is the coefficient of the drag term, $f$ represents the forcing (see also the \texttt{FourierFlows.jl} package and its documentation \cite{Wagner2019}, and \cite{Constantinou2015}).
\begin{minipage}{\textwidth}
\lstinputlisting{turb3.jl}
\end{minipage}
We run this simulation until $t=500.0$ to work in a statistically equilibrated state,
and then save the result at time steps of size $0.2$.
\begin{minipage}{\textwidth}
\lstinputlisting{turb4.jl}
\end{minipage}
The generation of the velocity field by the above code takes just a few minutes. \Cref{fig:turb_vorticity} shows the vorticity field for a specific run at $t = 500$.
\begin{figure}
\centering
\subfloat[\label{fig:turb_vorticity}]{\input{vorticity.tex}}\quad
\subfloat[\label{fig:turb_barriers}]{\input{turb_mat_bar.tex}}
\caption{(a) Vorticity at $t=500$ in a turbulent two-dimensional velocity field.
(b) Centers of regions with index $\frac{1}{2}$ in orange, those with index $-\frac{1}{2}$ in blue, centers of elliptic regions in white.
Material barriers in red, background coloring is DBS field.}
\end{figure}
\subsubsection{Computing material barriers}
We first set up a spatially periodic interpolation of the velocity field, which is performed by the package \texttt{OceanTools.jl} \cite{OceanTools2020}.
\begin{minipage}{\textwidth}
\lstinputlisting{turb5.jl}
\end{minipage}
We are now ready to compute material barriers.
\begin{minipage}{\textwidth}
\lstinputlisting{turb6.jl}
\end{minipage}
The \texttt{materialbarriers} function calculates the transport tensor field $\mathbf{T}$
used in the material-barriers approach (using finite differences for the linearized flow
map $D\mathbf{F}$) and calculates material barriers. The result is shown in \cref{fig:turb_barriers}.
Running with $700$ different values of $t$ took 5h 16min 26s for the less aggressive heuristic,
and 8h 20min 9s for the more aggressive heuristic. \Cref{fig:histogram} shows a
histogram of the number of vortices that have been found in the 700 simulations by
the two merge heuristics; cf.~\cref{sec:mergeheuristics}. Clearly, the more
aggressive merge heuristic detects more candidate regions and, as a consequence,
more vortices. An animation containing the detected vortices over each time window
is available in the supplementary material. The animation shows some flutter in the
continuation of some vortices, especially close to their ``generation'' or
``death'', indicating room for further improvement in the robustness of the method.
\begin{figure}
\centering
\input{histogram.tex}
\caption{Comparison of combination heuristics for $700$ different starting time parameters.}
\label{fig:histogram}
\end{figure}
\subsection{Global ocean surface flow}
\label{sec:ocean}
As a very large-scale example, we compute material barriers to diffusive transport in
a global ocean surface flow. The velocity field is obtained by tricubic interpolation
(using the algorithm from \cite{Lekien2005}, cf.~also \cite[Appendix A]{Chilenski2017}, as implemented in the \texttt{OceanTools.jl} package) of
geostrophic ocean surface velocities from a dataset\footnote{ More specifically, from the \texttt{SEALEVEL\_GLO\_PHY\_CLIMATE\_L4\_REP\_OBSERVATIONS\_008\_057} one.} distributed by the Copernicus Marine Environment Monitoring Service.
These velocities are derived from satellite altimetry data. We choose a time window
(starting November 26, 2016) for which a small spatial piece has been used
repeatedly as a data set ``benchmark'' case in the literature; cf., for instance,
\cite{Haller2018,Froyland2018}, and earlier studies on slightly larger domains
\cite{Haller2013a,Karrasch2015}. The domain studied in the first references is
highlighted by a small white rectangle in \cref{fig:ocean30,fig:ocean90}.
\newpage
We use a regular $10000\times5000$ quadrilateral grid, i.e., with 50 million grid points.
Velocities in land-areas, or where ocean surface velocities are not available, are set
to zero. Trajectories are calculated with \texttt{CoherentStructures.jl}, which
internally uses the \texttt{DifferentialEquations.jl} package \cite{Rackauckas2017},
and relative and absolute tolerances for the ODE solver are set to $10^{-6}$. The
averaged Cauchy-Green tensor is calculated by approximating the linearization of
the flow map using finite differences, with the finite difference stencil reinitialized every 10 days. The resulting tensors are finally combined according to the product rule.
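In formulas, with reinitialization times $t_0 < t_1 < \dots < t_N$ spaced 10 days apart, the chain rule yields
\[
D\mathbf{F}_{t_0}^{t_N}(x_0) = D\mathbf{F}_{t_{N-1}}^{t_N}(x_{N-1}) \cdots D\mathbf{F}_{t_0}^{t_1}(x_0)\,, \qquad x_k = \mathbf{F}_{t_0}^{t_k}(x_0)\,,
\]
where each factor is approximated by finite differences on the freshly initialized stencil.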
For the first test-case, we approximated the averaged, diffusion-weighted Cauchy--Green tensor for a 90-day period by averaging the Cauchy--Green tensor every 10
days from 0 to 90 days after start. The calculation of the tensors took approx.~21 hours.
We find $40,609$ $r$-stable regions ($r=0.25\degree$) of nonzero index and
obtain $6,506$ elliptic regions with the \texttt{combine\_20\_aggressive} heuristic.
The localization size $R$ is set to $2.5\degree$; cf.~\cref{sec:localization}.
This yields $357$ vortices for $\lambda$-values in $[0.7,1.4]$; the results are shown
in \cref{fig:ocean90}. The geodesic vortex computation took approx.~2.5 hours.
The second test case has an identical setup to the one described above, but with an
observation time window of only $30$ days. The computation of the tensors required approx.~6.5 hours.
We find $44,160$ $r$-stable regions of nonzero index; from the $6,399$ candidate
elliptic regions we obtain $2,255$ vortices. The results are shown in \cref{fig:ocean30}; the geodesic vortex computation again took roughly 2.5 hours.
While we are not qualified to interpret \cref{fig:ocean30,fig:ocean90} from an
oceanographic point of view, we do note that the region of known active vortex
generation to the west of the southern tip of Africa due to Agulhas leakage \cite{Ruijter1999} can be nicely recognized.
To further demonstrate the capabilities of our method, we have run 36 30-day
simulations with a 30-day time lag, i.e., spanning a period of approximately 3 years.
We then plot the initial coherent vortex locations superimposed onto each other with
a (uniform) transparency in \cref{fig:vortex_density}. This visualization highlights regions where
vortices tend to occur more often. \Cref{fig:vortex_density} is part of ongoing work on the statistics of
coherent transport in the global ocean.
\begin{figure}
\centering
\subfloat[\label{fig:ocean90global}]{\includegraphics[width=.9\textwidth]{worldwide_vortices_90days_large}}
\subfloat[Close-up of \cref{fig:ocean90global}.]{\includegraphics[width=.9\textwidth]{worldwide_vortices_90days_small}}
\caption{90-day DBS field with filled-in material diffusive transport barriers (red) and their advection after 30, 60, and 90 days (orange, yellow, and green, respectively).}
\label{fig:ocean90}
\end{figure}
\begin{figure}
\centering
\subfloat[\label{fig:ocean30global}]{\includegraphics[width=.9\textwidth]{worldwide_vortices_30days_large_new}}
\subfloat[Close-up of \cref{fig:ocean30global}.]{\includegraphics[width=.9\textwidth]{worldwide_vortices_30days_small_new}}
\caption{30-day DBS field with filled-in material diffusive transport barriers (red) and their advection after 30 days (orange).}
\label{fig:ocean30}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.9\textwidth]{vortex_density.pdf}
\caption{Coherent Lagrangian vortices from 36 30-day simulations, covering
roughly 3 years. Every coherent vortex is shown in transparent red, with several
spatially overlapping vortices increasing the color intensity.}
\label{fig:vortex_density}
\end{figure}
\section{Conclusion}
In this work, we have described in detail our implementation of the index-theory-based approach \cite{Karrasch2015}
to computing coherent Lagrangian vortices including material barriers to diffusive transport
as closed null-geodesics of some Lorentzian metric tensor field. Despite the fact that
there is no theoretical guarantee that all closed null-geodesics are found with the currently implemented merge heuristics,
the approach is generally well-suited for genuinely challenging flow problems. This is due to the fact
that while the effort for the computations related to the index-theory-based
determination of candidate regions is basically negligible, its outcome
dramatically reduces computational effort in the closed orbit detection via localization.
In fact, there are cases (especially when reducing the number of grid points) where we do find \emph{more}
vortices with our implementation than with the implementation described in
\cite{Serra2017}. We suspect that this is because of the high degree of accuracy and resolution required in \cite{Serra2017}.
We have demonstrated our method on two test cases of considerable size, yet with
affordable computational resources.
Comparable simulation studies have, to the best of our knowledge and
with the possible exception of \cite{Abernathey2018}, not been published before.
In this paper, we have restricted ourselves to presenting the theory and the
implementation for computing closed null-geodesics of tensor fields. The exact
same considerations with minor modifications carry over to the case of
\emph{constrained diffusion barriers}, as introduced in \cite{Haller2019} as closed
orbits of the \emph{transport vector field}. The corresponding implementation
building on the same machinery described here is available in \texttt{CoherentStructures.jl}.
When applied to such challenging problems, remaining issues become apparent,
indicating room for further improvement. Interesting new features that are work in
progress or under consideration include the approximation of the tensor field
$\mathbf{T}$ from scattered trajectories in order to have a purely data-driven
approach for calculating coherent Lagrangian vortices, the use of boundary value
problem solvers or topological methods à la \cite{Wischgoll2001} for the actual
closed orbit detection, and other miscellaneous optimizations. In any case, our
software, even in its current form, paves the way to global transport studies on the ocean surface from a
Lagrangian viewpoint, which we hope may be of interest to the oceanographical community.
\section{Introduction}
As America's national IP office, the United States Patent and Trademark Office (USPTO) is charged with the mission of granting patents and registering trademarks. The USPTO is required by law to disseminate most nonprovisional patent applications and granted patents to the public.~\footnote{35 U.S.C. § 122(b); 37 C.F.R. § 1.11(a).} U.S. patent data has long been used in many domains of application, including in patent analytics~\citep{oldham2016}, economics~\citep{bell2018, akcigit2018}, and commercial tools for patent prosecutors and litigators,~\footnote{Because agency practice is to refrain from public endorsements of any particular commercial IP products, we omit references to specific examples.} thus serving as a versatile substrate for illustrating the dynamics of national and global innovation.
Concurrently, the fields of artificial intelligence (AI) and natural language processing (NLP) have witnessed a remarkable cadence of scientific breakthroughs. Fueled by model architecture innovations such as self-attention~\citep{vaswani2017} and by the ever-increasing computational horsepower of the leading AI hardware accelerators~\citep{svedin2021,wang2019}, these novel techniques have found versatile areas of application, from machine translation~\citep{vaswani2017} to structural biology~\citep{rives2021}.
In this article, we survey recent developments at the intersection of AI and USPTO data. These developments fall in two broad categories:
\begin{enumerate}
\item Promising AI and NLP techniques can be brought to bear on USPTO data in existing or novel fields of application.
\item USPTO data can contribute to work that advances the frontiers of AI and NLP research.
\end{enumerate}
Both bodies of work hold great promise for advancing scientific and technical progress. We encourage those in both the AI and the IP communities to explore how USPTO data can unlock numerous exciting opportunities---both in their respective disciplines and at the intersection of AI \& IP.
\section{USPTO data for patent-focused AI \& NLP usecases}
One distinct body of work uses AI \& NLP techniques on USPTO patent data toward enhancing the value of such data toward longstanding areas of application. Here, we present case studies in the context of IP administration, practice, and empirical research.
\subsection{AI \& NLP tools for IP administration} \label{sec:ai-nlp-ip-administration}
IP offices worldwide seek to apply AI \& NLP in the administration of their respective IP systems, and the USPTO is no exception. The USPTO recognizes the advent of AI as among the most consequential technologies---both for global society as a whole and for the agency's mission of delivering reliable, timely, and quality IP rights~\citep{hirshfeld2021}.
Operationally, the USPTO focuses on two critical areas of AI application: prior art search and patent classification. AI is a natural tool with which to augment prior art search systems. Representation learning and related techniques can produce semantically meaningful embeddings of language, graphs, images, and even proteins~\citep{devlin2019, dwivedi2021, dosovitskiy2021, rives2021}. The USPTO applies such techniques on the agency's patent archives and uses the results toward improving examiner-facing search systems to surface more relevant prior art documents~\citep{uspto2022}.
Turning to patent classification, the USPTO currently classifies patents using a two-stage process. First, the agency assigns a set of Cooperative Patent Classification (CPC) symbols to each patent to characterize the relevant technologies contained therein. Second, the agency determines the subset of CPC symbols (``claim indicators'') associated with claim scope. The USPTO has recently deployed an AI system, trained on annotated USPTO patent data, for assigning claim indicators, and the agency is currently augmenting the system to assign the full set of CPC symbols~\citep{hirshfeld2021}.
\subsection{AI \& NLP tools for IP practitioners \& inventors}
Since the dawn of computer-based information retrieval, software developers have built tools for assisting IP practitioners and inventors. Some tools are similar to those needed by IP offices (\emph{e.g.}, prior art search), while others are specific to the needs of the private IP bar and inventors (\emph{e.g.}, IP portfolio intelligence). Recent work has used publicly-available USPTO data to train AI models that provide new or enhanced capabilities to IP software products.
The USPTO has recently released AI-empowered search capabilities to the public through the Inventor Search Assistant~\citep{inventorsdigest2022}. This tool surfaces not only published applications from the USPTO patent archives, but also non-patent literature (NPL) and foreign patent documents. Traditional prior art search systems have a steep learning curve (\emph{e.g.}, basic query syntax, proximity operators) that may pose a hurdle to early-stage and independent inventors. Such inventors can especially benefit from the Inventor Search Assistant, which uses machine learning techniques to offer an initial overview of the state-of-the-art from natural language queries alone.
\subsection{AI-powered empirical research \& analytics}
Finally, USPTO data can be elucidated via existing AI \& NLP techniques to produce boundary-pushing empirical research \& analytics. A common patent analysis task is to sort patent documents into specific fields of technology or business applications---commonly known as ``patent landscaping''~\citep{trippe2015}. Recent work has applied deep learning to the task of patent landscaping~\citep{abood2018}, with USPTO data frequently used both as training data and as the source of documents to be landscaped. The USPTO has recently leveraged such techniques in its own empirical studies on U.S. patent archives.
Released in 2021, the USPTO's AI Patent Dataset identifies the presence of AI in over 13 million U.S. patent documents and further sorts them into one of eight component technologies~\citep{giczy2022}. This dataset was created by training a recurrent neural network in a semi-supervised manner to distinguish between positive and negative examples~\citep{abood2018}. The USPTO leveraged the AI Patent Dataset to trace the diffusion of AI and its component technologies within post-1976 U.S. patents~\citep{toole2020}, with such findings informing agency stakeholder engagements and other policy-relevant activities~\citep{uspto2022}.
Much patent analysis focuses on specifications, claims, and metadata, but an often-overlooked data source for patent analytics lies in prosecution history. The USPTO has applied AI techniques to make Office actions more accessible to the patent analysis community. Released in 2017, the USPTO's Office Action Research Dataset comprises a relational database of key elements from 4.4 million Office actions mailed during the 2008 to mid-2017 period~\citep{lu2017}. This dataset was created using machine learning and NLP techniques to systematically extract information from Office actions, thus marking the first time that comprehensive data on examiner-issued rejections was made readily available to the research community.
\section{Advancing the AI \& NLP research frontiers via USPTO data}
The foregoing body of work centers around the application of existing AI \& NLP techniques in IP-relevant areas. But another emerging body of work flips this paradigm by using USPTO data as an accelerant for scientific research in AI \& NLP. We highlight examples in both the training and evaluation of AI models.
\subsection{USPTO data in large language modeling}
Large language models have demonstrated a surprisingly diverse portfolio of natural language capabilities~\citep{devlin2019,brown2020}. Yet early iterations of billion-parameter language models employed lightly curated datasets constructed with few quality or diversity filters. Observing this, \citet{gao2020} compiled a dataset prioritizing both data quality and diversity, combining the background sections of millions of U.S. patents with 21 other data sources to form ``The Pile''.
This 825 GiB language modeling dataset, and subsets thereof, were subsequently used in training or assessing some of the largest and most advanced language models to appear in published research, including GPT-NeoX-20B, Gopher, and RETRO~\citep{black2022,rae2021,borgeaud2021}. OPT-175B~\citep{zhang2022}, currently the largest publicly-available language model, was trained on public USPTO data sourced from The Pile.
We observe that the background sections of patents, while informative, only scratch the surface of available content within the U.S. patent archives. Future language modeling datasets could include full patent specifications or prosecution history documents (\emph{e.g.}, Office actions). The latter holds particular promise as a source of scientific and legal reasoning examples not easily found elsewhere.
\subsection{Patent-sourced datasets for common tasks}
The quantity and detail of patent documents also readily enable their use as datasets for common AI \& NLP benchmark tasks. Patent classification (discussed in Section~\ref{sec:ai-nlp-ip-administration}) is, at its core, a quintessential multiclass classification challenge. AI researchers have already used public USPTO data and CPC annotations to create text classification benchmarks encompassing millions of patent documents~\citep{li2018, lee2020}. These benchmarks have been used to evaluate the capabilities of new self-attention neural network models~\citep{zaheer2020}.
Recent work has also augmented public USPTO data with automated data generation and manual annotations to form specialized benchmark datasets that can test the ability of novel AI and NLP models to penetrate complex technical concepts. For instance, \citet{aslanyan2022} construct a novel semantic similarity benchmark dataset by extracting phrases from patent documents, generating facially similar phrases, and manually rating the semantic similarity of each phrase pair on a five-point scale. A Kaggle competition featuring this benchmark resulted in nearly 43,000 submissions, achieving a top Pearson correlation of 87.8\%~\citep{kaggle2022}. The USPTO is interested in building upon these early successes by fostering future efforts to refashion patent data into valuable AI research benchmarks.
\section{Conclusion}
We have described two technical bodies of work that rest upon USPTO data. The first integrates USPTO data with AI \& NLP techniques to benefit IP administration, practice, and empirical analysis. The second leverages USPTO data in service of state-of-the-art AI \& NLP research.
We envision these two spheres forming a virtuous cycle wherein successes in one area further progress in the other. From search engines to benchmarks, and from landscapes to large language models and beyond, we hope that researchers and practitioners will find novel means of harnessing the richness of USPTO data to serve both the IP community and future AI researchers.
\section{Introduction}
An important aspect of event-driven trials is the operational design at the initial and
interim stages, i.e. predicting the event counts over time and the time to reach specific milestones,
accounting for events that may occur not only in patients already recruited and followed up but
also in patients yet to be recruited.
Therefore, in event-driven trials we need to model patient recruitment and event counts together.
There are different techniques for recruitment modelling described in the literature and one of the main
directions is using mixed Poisson models. This direction has a long history, with several papers devoted
to the use of Poisson processes with fixed recruitment rates to describe the recruitment process
(Carter et al., \cite{carter05};
Senn \cite{senn97,senn98}).
However, in real clinical trials the recruitment rates in different centres vary.
Therefore, to model this variation, Anisimov and Fedorov\cite{anfed07}
introduced a Poisson-gamma model, where the variation in
rates in different centres is modelled using a gamma distribution
(see also Anisimov\cite{an08}).
Some applications to real trials are considered in Anisimov et al.\cite{an-dow-fed07}.
This technique was developed further in several publications
for predicting and re-forecasting the recruitment process at interim stages
under various conditions (Anisimov\cite{an11a,an20}).
Other approaches to recruitment modelling primarily deal with global recruitment.
These approaches use different techniques, and we refer interested readers to survey papers by
Barnard et al.\cite{barnard10}, Heitjan et al.\cite{heitjan15} and
Gkioni et al.\cite{gkioni19}, and also to a discussion paper by Anisimov\cite{an16b}
on using Poisson models with random parameters, with further references therein.
A large number of clinical trials are event-driven, where the number of clinical events
is required to be large enough to allow for reliable statistical
conclusions about the parameters of patient responses.
For such trials, one of the main tasks
is predicting not only the required number of recruited patients,
but also the number of events that may occur
and the time to reach particular milestones.
A useful review of different approaches for event-driven trials is
provided in Heitjan et al.\cite{heitjan15}.
However, for predicting the number of events
over time and the time to stop the trial, the authors of the cited papers primarily
use Monte Carlo simulation techniques, e.g.
Bagiella and Heitjan\cite{bagiel2001}.
Therefore, Anisimov\cite{an11b} developed an analytic methodology
for predictive modelling of the event counts together
with patient recruitment in ongoing event-driven trials,
accounting also for patient dropout.
This methodology was developed further for forecasting
multiple events at start-up and interim stages
under exponential assumptions (Anisimov\cite{an20})
and for predicting some operational characteristics
during follow-up times (Anisimov\cite{an16a}).
It is also notable that, as therapies for different diseases improve,
a number of patients under treatment in event-driven trials
will not experience the event within
their exposure time (i.e. the time from randomisation to a particular milestone).
Therefore, an interesting direction is to account for the possibility of cure.
Here we note the paper by Chen\cite{chen16};
however, it also uses a simulation technique for predicting event timing
and does not consider patient dropout.
Therefore, in this paper we develop a new analytic approach to this problem
which accounts for patient dropout and also for the possibility that patients
are cured with some probability.
We assume that the patient recruitment is modelled using a Poisson-gamma model
developed in Anisimov and Fedorov\cite{anfed07}, Anisimov\cite{an11a}.
We consider non-repeated events and assume that the times to the main event and
to dropout are independent random variables, and that there is a possibility
of cure.
Several new models have been developed using exponential, Weibull and log-normal
distributions.
The focus is on the interim stage, where the parameters of the different models
are estimated using the maximum likelihood technique.
The predictive distributions of the number of future events for all considered models
are derived in closed form; thus, Monte Carlo simulation is not required.
The developed technique and R-tools allow for forecasting the event counts over time
and also the time to stop the trial with mean and predictive bounds.
The results are illustrated in the paper using Monte Carlo simulation and a real dataset.
The paper is organised as follows:
Section 2, the basic models for the process of event occurrence;
Section 3, predicting event counts for patients at risk; Section 4, predicting event counts accounting
for ongoing recruitment; Section 5, testing of the Weibull model with cure using Monte Carlo simulation;
Section 6, software development and an R-package;
Section 7, implementation in a real clinical trial; and
Section 8, fitting models to real data.
\section{Modelling the process of event occurrence}
\label{sec2}
Consider a trial at some interim time $t_1$ and assume that there is one type of non-repeated event:
the main event of interest $A$; patients can also be lost to follow-up (we call this dropout).
Then all patients recruited in the trial up to a given interim time can be divided into three groups:
1) group $A$: patients who experienced event $A$.
Denote by $n_A$ the total number of patients in this group
and by $\{x_k \}$ the lengths of the follow-up periods
from the randomisation date until the event;
2) group $O$: patients censored at the interim time, i.e. who
have neither experienced the event nor been lost to follow-up.
Denote by $n_O$ the total number of patients, and by $\{z_i \}$ the lengths of the follow-up periods from
the randomisation date until the interim time;
3) group $L$: patients lost to follow-up.
Denote by $n_L$ the total number of patients, and by $\{y_j \}$ the lengths of the follow-up periods
until censoring by dropout.
Consider now the following cure model describing the process of event occurrence
for every patient.
Assume that after randomisation, the patient can be either cured with some probability $r$,
or with probability $1-r$ can experience event $A$ after some random time $\tau_A$.
If a patient is cured, then event $A$ cannot occur.
Denote the time to dropout by $\tau_L$; if event $A$ does not occur before $\tau_L$,
the patient drops out regardless of whether this patient is cured or not
(and event $A$ can no longer occur).
Assume that the events for different patients occur independently and
the times $\tau_A$ and $\tau_L$ are also independent random variables with cumulative distribution
functions (CDF) $F_A(x)$ and $F_L(x)$, respectively.
Suppose that
$F_A(x)$, $F_L(x)$ and probability of cure $r$
are the same for all patients, though potentially we can consider
different treatment groups with different parameters.
Assume also that these functions
are continuously differentiable
and denote by $f_A(x)$ and $f_L(x)$ the corresponding probability density functions (pdf).
\subsection{Estimating parameters of the model}\label{MLestimation}
Consider the maximum likelihood method.
Denote for convenience, $S_A(x)=1-F_A(x)$ and $S_L(x) = 1- F_L(x)$.
For a patient in group $O$ with exposure time $z_i$,
the probability that neither event $A$ nor dropout occurs is
$ S_L(z_i) \Big( r+ (1-r)S_A(z_i) \Big) $.
For a patient in group $A$ with exposure time $x_k$,
the probability that event $A$ occurs in a small interval $(x_k,x_k+{\rm d} x)$ before dropout is
$(1-r) f_A(x_k) \, S_L(x_k) {\rm d} x$.
For a patient in group $L$ with exposure time $y_j$,
the probability that dropout occurs
in a small interval $(y_j,y_j+{\rm d} y)$ before event $A$ is
$ f_L(y_j) \Big( r+ (1-r)S_A(y_j) \Big) {\rm d} y$.
Given the data, the likelihood function has the form
\begin{eqnarray*}
P(F_A,F_L,r) &=&
\prod_{i=1}^{n_O} S_L(z_i) \Big( r+ (1-r)S_A(z_i) \Big) \\
&\times& \prod_{k=1}^{n_A} (1-r) f_A(x_k) S_L(x_k) \\
&\times& \prod_{j=1}^{n_L} f_L(y_j) \Big( r+ (1-r)S_A(y_j) \Big)
\end{eqnarray*}
Correspondingly, the log-likelihood function is
\begin{eqnarray}\label{LogLikW}
{\mathcal L}(F_A,F_L,r) &=& \sum_{i=1}^{n_O} \log(S_L(z_i))
+ \sum_{i=1}^{n_O} \log\Big( r+ (1-r)S_A(z_i) \Big) \nonumber \\
&+& n_A \log(1-r) + \sum_{k=1}^{n_A} \log(f_A(x_k)) + \sum_{k=1}^{n_A} \log(S_L(x_k)) \nonumber
\\
&+& \sum_{j=1}^{n_L} \log(f_L(y_j)) + \sum_{j=1}^{n_L} \log\Big( r+ (1-r)S_A(y_j) \Big)
\end{eqnarray}
For different types of distributions this expression will have a different form.
\subsubsection{Exponential with cure model}
This model assumes that the variables $\tau_A$ and $\tau_L$ are exponentially
distributed with rates $\mu_A$ and $\mu_L$ respectively.
This is a three parameter model: $(\mu_A, \mu_L, r)$.
The log-likelihood function:
\begin{eqnarray}\label{LogLik}
{\mathcal L}(\mu_A,\mu_L,r) &=& - \mu_A \Sigma_A - \mu_L \Sigma_{1} + n_A \log(1-r) \nonumber \\
&+& n_A \log(\mu_A) + n_L \log(\mu_L) \\
&+& \sum_{i=1}^{n_O} \log\Big( r+ (1-r)\exp(-\mu_A z_i )\Big) \nonumber \\
&+&
\sum_{j=1}^{n_L} \log\Big( r+ (1-r)\exp(-\mu_A y_j )\Big) \nonumber
\end{eqnarray}
where $\Sigma_A = \sum_{k=1}^{n_A} x_k $ and $\Sigma_1 = \sum_{k=1}^{n_A} x_k + \sum_{j=1}^{n_L} y_j + \sum_{i=1}^{n_O} z_i$.
Equating the partial derivatives of the log-likelihood function to zero
gives relationships between the parameters.
The partial derivatives are:
\begin{eqnarray*}
\frac {\partial {{\mathcal L}(\mu_A,\mu_L,r )}} { \partial r } &=&
- \frac{n_A}{1-r} + \sum_{i=1}^{n_O} \frac{ 1- \exp(-\mu_A z_i )} { r+(1-r)\exp(-\mu_A z_i ) } \\
&+& \sum_{j=1}^{n_L} \frac{ 1- \exp(-\mu_A y_j )} { r+(1-r)\exp(-\mu_A y_j ) } \\
\frac {\partial {{\mathcal L}(\mu_A,\mu_L,r )}} { \partial \mu_A } &=&
- \Sigma_A + n_A/\mu_A \\
&-& (1-r) \sum_{i=1}^{n_O} \frac{z_i\exp(-\mu_A z_i )} { r+(1-r)\exp(-\mu_A z_i )} \\
&-& (1-r) \sum_{j=1}^{n_L} \frac{y_j \exp(-\mu_A y_j )} { r+(1-r)\exp(-\mu_A y_j )} \\
\frac {\partial {{\mathcal L}(\mu_A,\mu_L,r )}} { \partial \mu_L } &=&
- \Sigma_{1} + n_L/\mu_L
\end{eqnarray*}
Equating the last derivative to zero, we get that
$$
\mu_L = n_L/\Sigma_1
$$
Substituting this into relation (\ref{LogLik}), we get a simpler expression depending only on the two parameters
$(\mu_A,r)$:
\begin{eqnarray}\label{LogLik-2}
{\mathcal L}(\mu_A,r) &=& - \mu_A \Sigma_A + n_A \log(1-r) + n_A \log(\mu_A) \nonumber \\
&+& \sum_{i=1}^{n_O} \log\Big( r+ (1-r)\exp(-\mu_A z_i )\Big) \nonumber \\
&+&
\sum_{j=1}^{n_L} \log\Big( r+ (1-r)\exp(-\mu_A y_j )\Big) \nonumber \\
&+& n_L (\log(n_L) - \log(\Sigma_1) - 1)
\end{eqnarray}
To find the estimators, optimisation is carried out by maximising the log-likelihood function. Initial values are set as
$\mu_A (0) = \frac{n_A}{\Sigma_1}$, with $r(0)$ taken from a range of values in $(0, 1)$.
In optimisation, new variables $(\theta_1, \theta_2)$ are considered:
$$
\mu_A = \exp(\theta_1); \qquad r = \frac{\exp(\theta_2)}{1 + \exp(\theta_2)}
$$
After optimisation, the variables are transformed back to the original parameters.
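As an illustration of this optimisation scheme, the two-parameter maximisation of (\ref{LogLik-2}) can be sketched in Python with \texttt{scipy} (a minimal sketch; the function names and the choice of the Nelder--Mead method are our own, not part of the method description):

```python
import numpy as np
from scipy.optimize import minimize

def neg_loglik_exp_cure(theta, x, y, z):
    """Negative profile log-likelihood of relation (LogLik-2) in the
    unconstrained variables theta = (theta1, theta2), where
    mu_A = exp(theta1), r = logistic(theta2); mu_L is profiled out
    analytically as n_L / Sigma_1."""
    mu_A = np.exp(theta[0])
    r = 1.0 / (1.0 + np.exp(-theta[1]))
    n_A, n_L = len(x), len(y)
    Sigma_A = x.sum()
    Sigma_1 = x.sum() + y.sum() + z.sum()
    ll = (-mu_A * Sigma_A + n_A * np.log(1.0 - r) + n_A * np.log(mu_A)
          + np.log(r + (1.0 - r) * np.exp(-mu_A * z)).sum()
          + np.log(r + (1.0 - r) * np.exp(-mu_A * y)).sum()
          + n_L * (np.log(n_L) - np.log(Sigma_1) - 1.0))
    return -ll

def fit_exp_cure(x, y, z):
    """Maximum likelihood estimates (mu_A, mu_L, r) of the exponential
    with cure model; started from mu_A(0) = n_A / Sigma_1, r(0) = 1/2."""
    Sigma_1 = x.sum() + y.sum() + z.sum()
    theta0 = np.array([np.log(len(x) / Sigma_1), 0.0])
    res = minimize(neg_loglik_exp_cure, theta0, args=(x, y, z),
                   method="Nelder-Mead")
    mu_A = np.exp(res.x[0])
    r = 1.0 / (1.0 + np.exp(-res.x[1]))
    mu_L = len(y) / Sigma_1          # closed-form estimator n_L / Sigma_1
    return mu_A, mu_L, r
```

Here \texttt{x}, \texttt{y}, \texttt{z} are the exposure times in groups $A$, $L$ and $O$, respectively.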
\subsubsection{Weibull with cure model}
By definition, the pdf and CDF
of a Weibull distribution are
$$
f_W(x,\alpha,b) = \frac \alpha {b^\alpha} x^{\alpha-1} e^{-(x/b)^\alpha}, \, F_W(x,\alpha,b) = 1- e^{-(x/b)^\alpha}, \, x > 0
$$
where $(\alpha,b)$ are shape and scale parameters.
For ease of notation, we use the parametrisation $g=1/b^\alpha$. Then pdf and CDF have the form
$$
\widetilde f_W(x,\alpha,g) = \alpha g x^{\alpha-1} e^{-g x^\alpha}, \
\widetilde F_W(x,\alpha, g) = 1- e^{-g x^\alpha},\, x > 0
$$
The Weibull with cure model assumes that the variables $\tau_A$ and $\tau_L$ have Weibull distributions
with parameters ($\alpha_A, g_A)$ and $(\alpha_L, g_L)$ respectively.
This is a five parameter model: $(\alpha_A, g_A, \alpha_L, g_L, r)$.
The log-likelihood function:
\begin{eqnarray*}
{\mathcal L}(\alpha_A, g_A, \alpha_L, g_L, r) &=& -g_L \sum_{i=1}^{n_O} z_{i}^{\alpha_L} +
\sum_{i=1}^{n_O} \log\Big(r + (1 - r)\exp(-g_A z_{i}^{\alpha_A})\Big) \\
&+& n_A \Big(\log(1 - r) + \log(\alpha_A) + \log(g_A)\Big) \\
&+& (\alpha_A - 1) \sum_{k=1}^{n_A} \log(x_k) - g_A \sum_{k=1}^{n_A} x_{k}^{\alpha_A} - g_L \sum_{k=1}^{n_A} x_{k}^{\alpha_L} \\
&+& n_L \Big(\log(\alpha_L) + \log(g_L)\Big) + (\alpha_L - 1) \sum_{j=1}^{n_L} \log(y_j) \\
&-& g_L \sum_{j=1}^{n_L} y_{j}^{\alpha_L} + \sum_{j=1}^{n_L}\log\Big(r + (1 - r)\exp(-g_A y_{j}^{\alpha_A})\Big)
\end{eqnarray*}
Optimisation is carried out in the same way as for the exponential model, with initial values:
$
\alpha_A(0) = 1; g_A(0) = n_A/\Sigma_1;
\alpha_L(0) = 1; g_L(0) = n_L/\Sigma_1;
$
and $r(0)$ is taken from a range of values in $(0, 1)$.
The new variables $(\theta_1,\theta_2,\theta_3,\theta_4,\theta_5)$ are:
$$
\alpha_A = e^{\theta_1}; g_A = e^{\theta_2};
\alpha_L = e^{\theta_3}; g_L = e^{\theta_4};
\ r = \frac {e^{\theta_5}} {1+e^{\theta_5}}
$$
Similar relations can be written for the combination of the distributions,
e.g. Weibull distribution for time to event $\tau_A$ and exponential distribution for time
to dropout $\tau_L$, and vice versa.
Note that the Weibull model is in some sense a generalisation of the exponential model.
Indeed, if in the relations above we fix the value $\alpha_A = 1$, then
we get the combined exponential-Weibull with cure model
(time to event $\tau_A$ has an exponential distribution).
By setting both values, $\alpha_A = 1$ and $\alpha_L = 1$, we get the exponential with cure model.
In a similar way the log-likelihood function can be derived also for
a log-normal with cure model.
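As a numerical illustration of this reduction (our own sketch; the function names are illustrative), the Weibull-with-cure log-likelihood evaluated at $\alpha_A=\alpha_L=1$, $g_A=\mu_A$, $g_L=\mu_L$ coincides with the exponential-with-cure log-likelihood (\ref{LogLik}):

```python
import numpy as np

def loglik_weibull_cure(alpha_A, g_A, alpha_L, g_L, r, x, y, z):
    """Log-likelihood of the Weibull-with-cure model, written term by term
    as in the displayed expression in the text."""
    n_A, n_L = len(x), len(y)
    return (-g_L * np.sum(z**alpha_L)
            + np.sum(np.log(r + (1 - r) * np.exp(-g_A * z**alpha_A)))
            + n_A * (np.log(1 - r) + np.log(alpha_A) + np.log(g_A))
            + (alpha_A - 1) * np.sum(np.log(x)) - g_A * np.sum(x**alpha_A)
            - g_L * np.sum(x**alpha_L)
            + n_L * (np.log(alpha_L) + np.log(g_L))
            + (alpha_L - 1) * np.sum(np.log(y)) - g_L * np.sum(y**alpha_L)
            + np.sum(np.log(r + (1 - r) * np.exp(-g_A * y**alpha_A))))

def loglik_exp_cure(mu_A, mu_L, r, x, y, z):
    """Log-likelihood of the exponential-with-cure model (relation LogLik)."""
    n_A, n_L = len(x), len(y)
    Sigma_A = x.sum()
    Sigma_1 = x.sum() + y.sum() + z.sum()
    return (-mu_A * Sigma_A - mu_L * Sigma_1 + n_A * np.log(1 - r)
            + n_A * np.log(mu_A) + n_L * np.log(mu_L)
            + np.log(r + (1 - r) * np.exp(-mu_A * z)).sum()
            + np.log(r + (1 - r) * np.exp(-mu_A * y)).sum())
```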
\section{Predicting event counts for patients at risk }
\label{sec31}
Let us introduce for convenience the time of the occurrence of event $A$, $\nu_A$, so
${\mathbf P}(\nu_A \le z) = (1-r)F_A(z)$.
Note that if $r > 0$, then $\nu_A$ is an improper random variable as ${\mathbf P}(\nu_A < +\infty) = 1-r < 1$.
Consider a conditional probability for a patient in group $O$ to experience an event
in the future time interval $[t_1, t_1 + x]$
given that the follow-up period until the interim time $t_1$ is $z$:
\begin{eqnarray}\label{pAxz}
p_{A}(x,z) &=& {\mathbf P}( \nu_A \le z+x, \tau_L > \nu_A \mid \nu_A > z, \tau_L > z ) \nonumber
\\
&=& \frac {{\mathbf P}( z < \nu_A \le z+x, \tau_L > \nu_A ) }
{{\mathbf P} (\nu_A > z, \tau_L >z )} \\
&=& \frac {(1-r)\int_{z}^{z+x} f_A(u) S_L(u) {\rm d} u }{ S_L(z) \Big( r+ (1-r)S_A(z) \Big)} \nonumber
\end{eqnarray}
For the exponential model $p_{A}(x, z)$ can be calculated in a closed form:
\begin{equation}\label{pAxzExp}
p_{A}(x, z) = \frac{\mu_A}{\mu}\frac{(1 - r) e^{-\mu_A z} (1 - e^{-\mu x})}{r + (1 - r) e^{-\mu_A z}}
\end{equation}
where $\mu = \mu_A + \mu_L$.
Note that for the exponential model, if $r=0$, then $p_{A}(x, z) = \frac{\mu_A}{\mu} (1 - e^{-\mu x})$,
so this expression does not depend on $z$ and we have a memoryless property. However, for $r >0$,
the memoryless property is lost.
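As a quick numerical check of formula (\ref{pAxzExp}) against the general integral (\ref{pAxz}), and of the loss of the memoryless property for $r>0$ (an illustrative sketch; the function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def p_A_exp(x, z, mu_A, mu_L, r):
    """Closed-form p_A(x, z) of eq. (pAxzExp) for the exponential-with-cure model."""
    mu = mu_A + mu_L
    return ((mu_A / mu) * (1 - r) * np.exp(-mu_A * z) * (1 - np.exp(-mu * x))
            / (r + (1 - r) * np.exp(-mu_A * z)))

def p_A_numeric(x, z, mu_A, mu_L, r):
    """The same probability via direct numerical integration of eq. (pAxz)
    with exponential densities f_A, f_L and survival functions S_A, S_L."""
    num, _ = quad(lambda u: mu_A * np.exp(-mu_A * u) * np.exp(-mu_L * u), z, z + x)
    den = np.exp(-mu_L * z) * (r + (1 - r) * np.exp(-mu_A * z))
    return (1 - r) * num / den
```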
For the Weibull with cure model $p_{A}(x, z)$ has the following form:
\begin{equation}\label{Acurexz}
p_{A}(x,z) = \frac { (1-r)W_2(x,z,\alpha_A, g_A, \alpha_L, g_L) }
{ \exp(-g_L z^{\alpha_L}) \Big( r+ (1-r)\exp(-g_A z^{\alpha_A}) \Big)}
\end{equation}
where
\begin{equation}\label{pr10}
W_2(x,z,\alpha_A, g_A, \alpha_L, g_L) = \alpha_A g_A \int_z^{z+x} u^{\alpha_A-1} \exp(-g_A u^{\alpha_A} -g_L u^{\alpha_L}) {\rm d} u
\end{equation}
To compute this function in applications
we can use numerical integration.
Similar relations can be written for the combination of the distributions,
and also for a log-normal with cure model.
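A sketch of this numerical integration (illustrative names; for $\alpha_A=\alpha_L=1$ the result reproduces the exponential closed form (\ref{pAxzExp})):

```python
import numpy as np
from scipy.integrate import quad

def W2(x, z, alpha_A, g_A, alpha_L, g_L):
    """Numerical evaluation of W_2 in eq. (pr10)."""
    integrand = lambda u: (alpha_A * g_A * u**(alpha_A - 1)
                           * np.exp(-g_A * u**alpha_A - g_L * u**alpha_L))
    val, _ = quad(integrand, z, z + x)
    return val

def p_A_weibull(x, z, alpha_A, g_A, alpha_L, g_L, r):
    """p_A(x, z) of eq. (Acurexz) for the Weibull-with-cure model."""
    den = np.exp(-g_L * z**alpha_L) * (r + (1 - r) * np.exp(-g_A * z**alpha_A))
    return (1 - r) * W2(x, z, alpha_A, g_A, alpha_L, g_L) / den
```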
\subsection{Global prediction}\label{Glob-risk}
Assume now that the recruitment of new patients is already completed,
thus, the events in the future may occur only in patients at risk in group $O$.
Denote by $R_O( t_1, t, \{z_{k}\})$ the total predictive number of events $A$
that may occur in future time interval \ $[t_1, t_1+t]$ for patients in group $O$
where $ \{z_{k}\}$ are the times of exposure.
Let ${\rm Br}(p)$ be a Bernoulli random variable, ${\mathbf P}({\rm Br}(p)=1) = 1 - {\mathbf P}({\rm Br}(p)=0) = p$.
\begin{lemma}\label{Lem1}
The process $R_O( t_1, t, \{z_{k}\})$ can be represented in the form:
\begin{equation}\label{riskO}
R_O( t_1, t, (z_{k})) = \sum_{k \in O} {\rm Br}(p_A(t,z_k))
\end{equation}
\end{lemma}
Here the variables ${\rm Br}(p_A(t,z_k))$ are independent, and
the probability $p_A(t,z)$ is defined above in Section \ref{sec31};
it depends on the type of the distributions used in the event model.
For a rather large number of patients in group $O$ (say, $> 20$), we can apply a normal approximation
for the process $R_O( t_1, t, \{z_{k}\})$
using simple formulae for the mean and the variance:
\begin{eqnarray}\label{MeanVar}
M(t_1,t) = \sum_{k \in O} p_A(t,z_k),\
V^2(t_1,t) = \sum_{k \in O} p_A(t,z_k)(1-p_A(t,z_k))
\end{eqnarray}
Then ${\mathbf E}[R_O( t_1, t, \{z_{k}\})] = M(t_1,t)$
and the
$(1-\delta)$-predictive interval at time $t_1+t$ is
$
\Big( M(t_1,t) - z_{1-\delta/2}V(t_1,t), \ M(t_1,t) + z_{1-\delta/2}V(t_1,t) \Big)
$,
where $z_a$ is the $a$-quantile of a standard normal distribution.
For a smaller number of patients, the distribution of $R_O( t_1, t, \{z_{k}\})$
can be calculated numerically as a convolution of independent Bernoulli variables.
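Both routes can be sketched as follows (our illustrative helpers; the exact distribution of the Bernoulli sum is a Poisson-binomial distribution):

```python
import numpy as np
from scipy.stats import norm

def predictive_interval_RO(p, delta=0.05):
    """Mean and (1-delta)-predictive interval for R_O via the normal
    approximation, with M, V^2 computed as in eq. (MeanVar); p is the
    vector of probabilities p_A(t, z_k) over the patients at risk."""
    M = p.sum()
    V = np.sqrt((p * (1.0 - p)).sum())
    q = norm.ppf(1.0 - delta / 2.0)
    return M, (M - q * V, M + q * V)

def poisson_binomial_pmf(p):
    """Exact distribution of the sum of independent Bernoulli(p_k)
    variables, computed by sequential convolution (for small groups)."""
    pmf = np.array([1.0])
    for pk in p:
        pmf = np.convolve(pmf, [1.0 - pk, pk])
    return pmf
```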
Let us evaluate the predictive distribution for the time to reach a given target $K$
for the total planned number of events in the study.
Recall that in previous notation $n_A$ denotes the total number of events that occurred prior
to interim time $t_1$ (size of group $A$).
The remaining number of events that are left to achieve
is
$K_R = K - n_A$.
Let $\tau(t_1,K_R)$ be the remaining time to reach $K_R$ events after the interim time $t_1$.
Then the following relation holds: for any $t > 0$,
\begin{equation}\label{time}
{\mathbf P}(\tau(t_1,K_R) \le t) = {\mathbf P}( R_O( t_1, t, \{z_{k}\}) \ge K_R )
\end{equation}
As the distribution of $R_O( t_1, t, \{z_{k}\})$ can be evaluated for any time $t$,
this relation allows us to calculate also the distribution of $\tau(t_1,K_R) $.
Consider the calculation of the probability of success,
PoS (the probability of completing the study before a planned time $t_1+T$).
Denote it
by $Q( t_1, T, \{z_{k}\})$.
From (\ref{time}) we get
\begin{equation}\label{PoS}
Q( t_1, T, \{z_{k}\}) = {\mathbf P}( R_O( t_1, T, \{z_{k}\}) \ge K_R )
\end{equation}
If we use a normal approximation for the process $R_O( t_1, T, \{z_{k}\})$, then
\begin{eqnarray}\label{PoS2}
Q( t_1, T, \{z_{k}\}) &\approx& \Phi\Big( \frac { M(t_1,T) - K_R}{V(t_1,T)} \Big)
\end{eqnarray}
where $\Phi(x)$ is the CDF of a standard normal distribution.
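Formula (\ref{PoS2}) is straightforward to implement (a minimal sketch; the function name is ours):

```python
import numpy as np
from scipy.stats import norm

def prob_of_success(p, K_R):
    """Normal-approximation PoS of eq. (PoS2): the probability that at
    least K_R further events occur among the patients at risk, where p is
    the vector of probabilities p_A(T, z_k)."""
    M = p.sum()
    V = np.sqrt((p * (1.0 - p)).sum())
    return norm.cdf((M - K_R) / V)
```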
\section{Predicting event counts accounting for ongoing recruitment }
Consider now the situation when at the interim time the planned number of patients
to be recruited has not been reached yet, i.e. the recruitment is still ongoing.
In this case we also need to predict the future recruitment
and how many events may occur in patients to be recruited in the future.
\subsection{Modelling and predicting patient recruitment }\label{recruit}
Assume that patients arrive at clinical centres according to Poisson processes
with some rates $\lambda_i$. To model the variation in the rates among different
centres we assume that $\lambda_i$ are jointly independent gamma distributed random variables
with parameters $(\alpha,\beta)$ (shape and rate) and pdf
\begin{equation}\label{e00}
f(x,\alpha,\beta) =
\frac {e^{- \beta x} \beta^{\alpha} x ^{\alpha-1} }{ \Gamma(\alpha)},\ x >0,
\end{equation}
where $\Gamma(\alpha)$ is the gamma function.
This model is called a Poisson-gamma (PG) recruitment model and was developed in
Anisimov \& Fedorov\cite{anfed07} and further extended in
Anisimov\cite{an08,an11a,an20}.
Denote by $\Pi_a(t)$ a standard Poisson process with rate $a$
and by $\Pi(a)$ a random variable which has a Poisson distribution with parameter $a$.
Then a mixed Poisson process $\Pi_{\lambda}(t)$ where the
rate $\lambda$ is gamma distributed with parameters $(\alpha,\beta)$ is a
PG process (Bernardo and Smith\cite{bernardo04})
with parameters
$(t,\alpha,\beta)$:
\begin{equation}\label{PG1}
{\mathbf P}(\Pi_{\lambda}(t) = k)
= \frac{\Gamma(\alpha + k)}{k!\ \Gamma(\alpha)}\ \frac{t^{k}\beta^{\alpha}}
{\ {(\beta + t)}^{\alpha + k}}\ ,\ k = 0,1,2,\ldots
\end{equation}
Note that for a mixed Poisson process with random rate $\lambda$,
\begin{equation}\label{MeanVar-2}
{\mathbf E} [\Pi_{\lambda}(t)] = {\mathbf E}[\lambda] t; \,
{\mathbf {Var}} [\Pi_{\lambda}(t)] = {\mathbf E}[\lambda] t + {\mathbf {Var}}[\lambda] t^2
\end{equation}
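Note that the probabilities (\ref{PG1}) are those of a negative binomial distribution with size $\alpha$ and success probability $\beta/(\beta+t)$; a log-scale implementation (our sketch) is:

```python
import numpy as np
from scipy.special import gammaln

def pg_pmf(k, t, alpha, beta):
    """pmf of a Poisson-gamma process at time t (eq. PG1), computed on the
    log scale for numerical stability."""
    k = np.asarray(k)
    logp = (gammaln(alpha + k) - gammaln(k + 1) - gammaln(alpha)
            + k * np.log(t) + alpha * np.log(beta)
            - (alpha + k) * np.log(beta + t))
    return np.exp(logp)
```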
Assume now that some centre is active only in time interval $[u,b]$.
Denote by $d(t, u, b)$ the duration of recruitment window
(duration of active recruitment)
in a centre up to time $t$:
\begin{equation}\label{dtab}
d(t, u, b) =
\begin{cases}
0 & t \leq u \\
t - u & u < t \leq b \\
b - u & t > b
\end{cases}
\end{equation}
Assume that the recruitment rate in this centre is $\lambda$ which is gamma distributed with some parameters.
Then the recruitment process in this centre for any $t > 0$ can be represented as a PG process
with a cumulative rate $\lambda d(t,u,b)$. That means, the number of patients recruited in interval
$[0,t]$ has a mixed Poisson distribution with the rate $\lambda d(t,u,b)$.
Consider now predicting the remaining recruitment at some interim time $t_1$.
Assume for simplicity that all centres are active and that in every centre $i$
the following data are available:
$(v_i,k_i)$ -- the duration of active recruitment (recruitment window) and the number of patients recruited.
In
Anisimov and Fedorov\cite{anfed07} (see also Anisimov\cite{an11a}), a
maximum likelihood technique was developed for estimating the parameters $(\alpha,\beta)$ of a PG model,
assuming that in all active centres the rates have a gamma distribution with the same parameters.
In \cite{an11a}, a Bayesian technique was also developed
for predicting future recruitment using the property
that the posterior rate in a centre $i$, $\widetilde{\lambda_{i}}$, which is adjusted to
the data in this centre, also has a gamma distribution
with parameters $(\alpha+k_i,\beta+v_i)$.
Consider now a given interim time $t_1$. Let $i$ be some active centre.
Denote by $(\alpha,\beta)$ the parameters of a PG model estimated
using data in all active centres as noted above.
Then the future recruitment process in centre $i$ can be modelled as a PG process
with posterior recruitment rate $\widetilde{\lambda_{i}}$.
Assume that the recruitment in this centre can be closed due to some operational reasons
at some time $t_1 + b_i$.
Then for any $t>0$ the recruitment process in centre $i$ in time interval $[t_1,t_1+t]$
can be represented as a PG process with a cumulative rate
$\widetilde \lambda_i d(t,0,b_i)$.
Assume now that $j$ is some new centre that is planned to be initiated at time $t_1+u_j$
and let $b_j$ be the closing date of recruitment in this centre.
Denote by $\lambda_j$ the recruitment rate in this centre.
Note that the rates in the new centres can be provided by clinical teams
using expert estimates or evaluated using historical data from similar trials.
Then centre $j$ will be active only in time interval $[t_1+u_j,t_1+b_j]$.
Thus, for any $t>0$, the recruitment process
in time interval $[t_1,t_1+t]$
can be represented as a PG process with a cumulative rate
$\lambda_j d(t,u_j,b_j)$.
Consider the prediction of the remaining global recruitment.
Denote by $I_{Active}$ a set of active centres with posterior rates $\widetilde{\lambda_{i}}$.
Assume also that it can be some set $I_{New}$ of new centres that are planned to be
initiated after interim time $t_1$ at times $t_1+u_j,\, j \in I_{New}$.
Denote by $\lambda_j$ the rates in the new centres.
Then the predictive total number of patients $n(t_1,t_1+t)$ to be recruited in the time interval
$[t_1,t_1+t]$ can be represented as
\begin{equation}\label{PredRecr}
n(t_1,t_1+t) = \sum_{i \in I_{Active}} \Pi(\widetilde \lambda_i d(t,0,b_i))
+ \sum_{j \in I_{New}} \Pi(\lambda_j d(t,u_j,b_j))
\end{equation}
This means, $n(t_1,t_1+t)$ has a mixed Poisson distribution with a cumulative rate
\begin{equation}\label{PredRecr-2}
\Omega(t_1,t_1+t) = \sum_{i \in I_{Active}} \widetilde \lambda_i d(t,0,b_i)
+ \sum_{j \in I_{New}} \lambda_j d(t,u_j,b_j)
\end{equation}
For a rather large number of centres, the predictive bounds for $n(t_1,t_1+t)$
can be evaluated using a normal approximation, as the mean and the variance
of $n(t_1,t_1+t)$ can be easily calculated using the property (\ref{MeanVar-2})
and the relations
$
{\mathbf E} [\widetilde{\lambda_{i}} ] = (\alpha + k_{i})/(\beta + v_{i}); \quad
{\mathbf {Var}} [\widetilde{\lambda_{i}} ] = (\alpha + k_{i})/(\beta + v_{i})^2
$.
In particular, the mean predicted time to reach a required remaining number of patients
$n_R$ can be numerically calculated as the point when the line ${\mathbf E}[n(t_1,t_1+t)]$
hits level $n_R$.
Note that for a smaller number of centres, one can predict $n(t_1,t_1+t)$ using
a PG approximation developed in Anisimov\cite{an20} and Anisimov and Austin\cite{an-austin20}.
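A minimal sketch of this computation (our helper names; the new-centre rates are treated here as fixed planned values, which is an assumption of the sketch):

```python
import numpy as np
from scipy.stats import norm

def d_window(t, u, b):
    """Duration of active recruitment d(t, u, b) of eq. (dtab):
    0 for t <= u, t - u for u < t <= b, b - u for t > b."""
    return np.clip(t, u, b) - u

def predict_recruitment(t, k, v, b_active, lam_new, u_new, b_new,
                        alpha, beta, delta=0.05):
    """Mean and (1-delta)-predictive interval for the number of patients
    recruited in [t_1, t_1 + t]: active centres contribute PG processes
    with posterior rates Gamma(alpha + k_i, beta + v_i); new centres
    contribute with fixed planned rates lam_j (eqs. PredRecr, PredRecr-2,
    MeanVar-2)."""
    mean_post = (alpha + k) / (beta + v)
    var_post = (alpha + k) / (beta + v)**2
    d_act = d_window(t, 0.0, b_active)
    d_new = d_window(t, u_new, b_new)
    mean = (mean_post * d_act).sum() + (lam_new * d_new).sum()
    # Var[Pi_lambda(t)] = E[lambda] t + Var[lambda] t^2 per centre;
    # new-centre rates are treated as fixed, so only the Poisson term remains.
    var = (mean_post * d_act + var_post * d_act**2).sum() + (lam_new * d_new).sum()
    q = norm.ppf(1.0 - delta / 2.0)
    sd = np.sqrt(var)
    return mean, (mean - q * sd, mean + q * sd)
```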
\subsection{Predicting event counts}
Consider now predicting event counts accounting for ongoing recruitment.
Denote by
\(\kappa_{A}\) the time until event \(A\) occurs (before dropout), and let
$p_{A}( x ) = {\mathbf P}(\kappa_{A} \leq x)$, $x > 0$, be its CDF.
For the cure model with dropout defined in Section \ref{sec2}, in the previous notation,
\begin{equation}\label{pAt}
p_{A}( x ) = {\mathbf P}( \nu_A \le x, \nu_A < \tau_L ) = (1-r)\int_{0}^{x} f_A(u) S_L(u) {\rm d} u
\end{equation}
In particular, for the exponential with cure model, using notation $\mu = \mu_A + \mu_L$,
\begin{equation}\label{ExpMod}
p_{A,E}( x ) = (1-r) \frac{\mu_A}{\mu} (1-e^{ -\mu x})
\end{equation}
For the Weibull model, using the parametrisation $\bar \theta = (\alpha_A, g_A, \alpha_L, g_L)$,
\begin{equation}\label{pr20}
p_{A,W}(x) = (1-r) W_1(x,\bar \theta)
\end{equation}
where
\begin{equation}\label{pr21}
W_1(x,\bar \theta) = \alpha_A g_A \int_0^x y^{\alpha_A-1} \exp(-g_A y^{\alpha_A} -g_L y^{\alpha_L}) {\rm d} y
\end{equation}
Consider
now one clinical centre. Assume that patients
arrive according to a mixed Poisson process
with possibly random rate $\lambda$.
Assume also that the centre is active only in a fixed time
interval $[a,b]$.
In Anisimov\cite{an11b,an20}
the following result is proved.
\begin{lemma}\label{Lem2}
The predicted number of events \(A\) in interval
$[0,t]$ that occur in
the newly recruited patients in this centre
has a mixed Poisson distribution with rate $\lambda q_A(t,a,b)$, where
\begin{equation}\label{8.29}
q_A(t,a,b) = \int_{a}^{\min(t,b)} p_{A}( t - u)\,{\rm d} u
\end{equation}
\end{lemma}
For the exponential model, the function $q_A(t,a,b)$ can be easily calculated.
Consider the duration of recruitment window $d(t, a, b)$ in a centre at time $t$
defined in (\ref{dtab}).
Then, using parameters $(r,\mu_A,\mu_L)$,
\begin{equation}\label{ExpMod-2}
q_{A,E}(t,a,b)
= (1-r) \frac{\mu_A}{\mu} \Big( d( t,a,b) - \frac{1}{\mu}e^{- \mu ( t - a)}
( e^{\mu d( t,a,b)} - 1) \Big)
\end{equation}
For the Weibull model
with parameters $(\alpha_A, g_A, \alpha_L, g_L, r)$,
\begin{equation}\label{WeiMod}
q_{A,W}(t,a,b)
= (1-r) \int_{a}^{\min(t,b)} W_1(t-u,\bar \theta) {\rm d} u
\end{equation}
where $W_1(t,\bar \theta)$ is defined in (\ref{pr21}).
This function can be numerically calculated.
Similar relations in the integral form
can be written for the combination of the distributions,
and also for a log-normal with cure model.
These results
form the basis for creating predictions of the
event counts in any active centre and globally.
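As an illustration, the closed form (\ref{ExpMod-2}) can be checked against direct numerical integration of (\ref{8.29}) (a sketch; the function names are ours):

```python
import numpy as np
from scipy.integrate import quad

def q_A_exp(t, a, b, mu_A, mu_L, r):
    """Closed-form q_{A,E}(t, a, b) of eq. (ExpMod-2)."""
    mu = mu_A + mu_L
    d = min(max(t, a), b) - a                  # d(t, a, b) of eq. (dtab)
    return (1 - r) * (mu_A / mu) * (d - np.exp(-mu * (t - a))
                                    * (np.exp(mu * d) - 1) / mu)

def q_A_numeric(t, a, b, mu_A, mu_L, r):
    """The same quantity by integrating p_{A,E}(t - u) over the recruitment
    window, per eq. (8.29)."""
    mu = mu_A + mu_L
    p = lambda x: (1 - r) * (mu_A / mu) * (1 - np.exp(-mu * x))
    if t <= a:
        return 0.0
    val, _ = quad(lambda u: p(t - u), a, min(t, b))
    return val
```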
\subsection{Global forecasting event counts at interim stage }
Consider now forecasting the total number of events at some interim time $t_1$.
Denote the times of initiation of new centres (if any) by $\{ u_i \}$ and the times of closure
for all centres by $\{ b_i \}$.
In general it is assumed that centres will be closed
for recruitment at the time when the recruitment hits its target.
Thus,
in applications, we usually assume that $b_i \equiv \widehat T_{Pred}$ where
$ \widehat T_{Pred}$ is the predicted mean remaining time to reach the recruitment target.
\begin{theorem}\label{Th1}
The predictive total number of new events $A$, $k(t_1,t,A )$, that may
occur in the future time interval $[t_{1},t_{1} + t]$
can be represented as the sum of two independent
random variables:
\begin{equation}\label{eqn1}
k(t_1,t,A ) = \Pi( \Sigma( t,A) )+ R_O( t_1, t, \{z_{k}\})
\end{equation}
where according to (\ref{PredRecr-2}),
\begin{equation}\label{eqn2}
\Sigma\left( t,A \right) =
\sum_{i \in I_{active}} \tilde{\lambda}_{i}\, q_{A}( t,0,b_i ) +
\sum_{i \in I_{new}}
\lambda_{i}\, q_{A}( t,u_{i},b_i ),
and $R_O( t_1, t, \{z_{k}\})$ is the predictive number of events $A$
in group $O$ defined in (\ref{riskO}), Section \ref{Glob-risk}.
Here
the function $q_{A}(t,a,b) $ is defined by (\ref{8.29}) (for exponential and Weibull
models we have the expressions (\ref{ExpMod-2}) and (\ref{WeiMod}), respectively).
The first sum in (\ref{eqn2}) is taken over all active centres, where
$\tilde{\lambda}_{i}$ are the posterior rates defined in Section \ref{recruit},
and the second sum is taken over the new centres.
Correspondingly, the probability of completing the trial on time is
\begin{equation}\label{eqn3}
{\mathbf P} \Big(k( t_1, T_{R},A) \geq \nu_{R}( A ) \Big)
\end{equation}
where $T_{R}$ is the planned remaining time to complete the trial and
$\nu_{R}( A )$ is the remaining number of events left to achieve.
\end{theorem}
The proof follows from the results of Lemmas \ref{Lem1}, \ref{Lem2}
and Section \ref{recruit}.
Note that the mean and the variance of the process $\Pi( \Sigma( t,A) )$
can be calculated explicitly in terms of functions $q_A(\cdot)$ and parameters of the rates.
As the number of centres in real trials is typically rather large, a normal
approximation can be used to create predictive bounds for $k(t_1,t,A )$.
This technique is implemented in the R package \textit{EventPrediction}, see Section \ref{package}.
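The normal approximation can be sketched as follows (illustrative Python, not the package's internals; the gamma shape/rate parametrisation of the posterior rates is an assumption). For a mixed Poisson count with $\Lambda=\sum_i \lambda_i q_i$ and independent $\lambda_i \sim \mathrm{Gamma}(\alpha_i,\beta_i)$, the mean is $E[\Lambda]$ and the variance is $E[\Lambda]+\mathrm{Var}(\Lambda)$.

```python
from statistics import NormalDist

# Normal-approximation predictive bounds for a mixed Poisson count
# N = Pi(Lambda), Lambda = sum_i lambda_i * q_i, with independent posteriors
# lambda_i ~ Gamma(alpha_i, beta_i) in the shape/rate parametrisation
# (an assumption for this sketch, not taken from the paper).
def predictive_bounds(alphas, betas, q, level=0.90):
    # E[N] = E[Lambda]
    mean = sum(qi * a / b for qi, a, b in zip(q, alphas, betas))
    # Var[N] = E[Lambda] + Var(Lambda)
    var = mean + sum(qi ** 2 * a / b ** 2
                     for qi, a, b in zip(q, alphas, betas))
    z = NormalDist().inv_cdf(0.5 + level / 2.0)
    sd = var ** 0.5
    return mean, mean - z * sd, mean + z * sd
```

The returned triple (mean, lower, upper) mirrors the (mean, lower bound, upper bound) outputs quoted later in the case studies.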
\section{Monte Carlo simulation}
Monte Carlo simulation was used to test each model's performance.
We considered 1000 patients, assuming a uniform distribution of centre initiation
over 6 months, and set the target number of events to 550.
At a specified cut-off date the model parameters were estimated
using maximum likelihood estimation, see Section \ref{MLestimation}.
Using these estimators, predictions of the future occurrence of events were created.
For the Weibull model, two different cases of the initial parameters were considered,
$a_A < 1$ and $a_A > 1$, see Fig~\ref{fig:aA<1} and Fig~\ref{fig:aA>1}, respectively.
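A minimal sketch of such a simulation (Python for illustration; the study's own tooling is in R) under the cure-model mechanism described earlier: a patient is cured with probability $r$, otherwise the event time is Weibull$(a_A,b_A)$, competing with a Weibull$(a_L,b_L)$ dropout time, and both are censored at the interim cut-off. All function names here are hypothetical.

```python
import math
import random

def weibull_sample(a, b, rng):
    # Inverse-CDF sampling from the survival function S(x) = exp(-(x/b)^a)
    u = 1.0 - rng.random()  # uniform in (0, 1]
    return b * (-math.log(u)) ** (1.0 / a)

def simulate_patient(rng, r, a_A, b_A, a_L, b_L, cutoff):
    # Cured patients never experience event A (infinite event time).
    t_A = math.inf if rng.random() < r else weibull_sample(a_A, b_A, rng)
    t_L = weibull_sample(a_L, b_L, rng)
    t = min(t_A, t_L, cutoff)
    if t == t_A:
        return t, "A"      # event observed
    if t == t_L:
        return t, "L"      # dropout
    return cutoff, "O"     # still at risk at the cut-off

def simulate_trial(n, r, a_A, b_A, a_L, b_L, cutoff, seed=1):
    rng = random.Random(seed)
    return [simulate_patient(rng, r, a_A, b_A, a_L, b_L, cutoff)
            for _ in range(n)]
```

With the first set of initial parameters ($a_A = 0.8$, $b_A = 182$, $a_L = 0.6$, $b_L = 2611$, $r = 0.2$) and a 7-month cut-off, roughly half of the simulated patients experience event A before censoring.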
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=6cm]{Fig-1-lwd3.pdf}
\caption{Plot of number of events against time (in days), following the timeline of a simulated trial.
The simulated trajectory of events A is marked by the black solid line, the initial parameters: $a_A = 0.8$,
$b_A = 182$, $a_L = 0.6$, $b_L = 2611$ and $r = 0.2$. An interim analysis was taken at 7 months,
the estimated parameters: $a_A = 0.842$, $b_A = 145$, $a_L = 0.641$, $b_L = 2697$ and $r = 0.276$.
Predictions on future event counts were created using the estimated parameters; the mean trajectory
is shown by the blue dashed line, the 90\% confidence bounds by the red dotted lines.
}
\label{fig:aA<1}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=5cm]{Fig-2-lwd3.pdf}
\caption{Plot of number of events against time (in days), following the timeline of a simulated trial.
The simulated trajectory of events A is marked by the black solid line, the initial parameters: $a_A = 1.2$,
$b_A = 213$, $a_L = 1.4$, $b_L = 3701$ and $r = 0.2$. An interim analysis was taken at 7 months,
the estimated parameters: $a_A = 1.265$, $b_A = 175$, $a_L = 1.406$, $b_L = 3916$ and $r = 0.305$.
Predictions on future event counts were created using the estimated parameters; the mean trajectory
is shown by the blue dashed line, the 90\% confidence bounds by the red dotted lines.}
\label{fig:aA>1}
\end{center}
\end{figure}
In both cases, the model successfully predicts the trajectory of the number
of events $A$,
with the real trajectory falling within the 90\% predicted bounds.
Furthermore, the parameters estimated at the cut-off time using the
maximum likelihood technique are close to the initial parameters, indicating appropriate estimation.
\section{Software development}\label{package}
In order to expose the event and recruitment prediction models (as detailed in the previous sections)
to a large number of key stakeholders, an R package (\textit{EventPrediction}) has been developed,
tested and deployed to a centralised R server. The \textit{EventPrediction} package allows
a user to easily pass in the required data (i.e. subject event data, centre level data and configuration)
and returns key parameter estimates and predictions with bounds, for both events and recruitment.
\subsection{R package design}
R was chosen over other programming languages, and an R Package was developed over
standalone R scripts,
for a number of reasons, including: R has an easy-to-use and easy-to-set-up testing framework;
R ships with easy-to-use code coverage tools; R has the Comprehensive R Archive Network ("CRAN")
set of packages that are easily accessible; R seamlessly integrates with GitLab
(and other source control software); R allows for an Object Oriented ("OO") approach
(i.e. S3, S4 and R6); and also, primarily, it is simply straightforward to develop, test,
document and centrally deploy an R Package for key stakeholders
to use.
R's S3 lightweight OO solution was a key design feature of the \textit{EventPrediction}
package, as using such an OO approach yields four main benefits: first, S3 provides the simple-to-use
OO benefit of polymorphism (known in R as method dispatch); secondly, S3 gives the
OO benefit of inheritance; thirdly, S3 is ubiquitously used by R contributors and is easy for
others to use and review; and fourthly, S3 is in accordance with the functional programming paradigm:
when an object is passed into an S3 function it is not modified (unlike full OO approaches such as R6).
\subsubsection{Good software engineering principles}
Another major benefit of developing an R Package (and utilising R's OO approach) is
to ensure adherence to good Software Engineering principles, with code that is at a minimum: reliable;
easy to use; efficient; well tested, with tests traceable to requirements and/or design;
well documented; and (importantly) easy to maintain. The \textit{EventPrediction} package conforms
to each of these key programming elements, not only because these are simply good
Software Engineering practices, but also as the biotechnology sector is highly regulated
and there is a requirement to document a number of Software Development Life Cycle ("SDLC") tasks in accordance
with departmental, company and regulatory policies.
\subsubsection{R package SDLC and platform architecture}
Before the design, development and/or testing of any code was initiated, two further key platform
architectural design decisions were made: first, GitLab was used for source control, continuous
integration, documentation, vignettes, readme files and also as part of the full deployment process;
and secondly, R Studio Server Pro was used for the development and testing of code,
running as a Docker image on a physical R server on AWS.
\subsubsection{Further R package design: function layers}
With a large number of complex R scripts and source papers, another design choice (made primarily
to make the code easier to use and maintain) was to group the code into four layers
using R's S3 OO approach.
The four programming layers are as follows:
{\em Layer One: Highest Level: Main Exposed Application Programming Interface (API).}
This level is exposed to the user and contains: S3 Classes (functions) that allow
instantiation of the objects that contain the input data required and configuration;
functions to predict events and recruitment; plotting and printing functionality;
and key getter functions.
{\em Layer Two: Second Level Functions.}
This level is not exposed to the user and is simply used to dispatch to the third level
functions based on the S3 configuration objects instantiated in Layer One.
{\em Layer Three: Third Level Functions.}
This level is not exposed to the user and contains the main set of controller code
and does all of the hard work of the package.
{\em Layer Four: Lowest Level Functions.}
This level is not exposed to the user and contains a large number of complex
R scripts/algorithms that have been developed and tested using
Monte Carlo Simulation, as detailed in the previous section.
\subsection{R package input data required}
The following set of input data is required by the \textit{EventPrediction} package
to predict events and recruitment (if recruitment is ongoing), with each set of data instantiated
using R's S3 approach (as detailed in the previous sections).
\subsubsection{Event data}
\begin{center}
\begin{tabular}{c c c c}
analysis\_time\_days & \quad censor\_flag & \quad drop\_out\_flag & \quad randomisation\_date \\
28 & 0 & 0 & {\small YYYY-MM-DD} \\
33 & 0 & 0 & {\small YYYY-MM-DD} \\
87 & 1 & 0 & {\small YYYY-MM-DD} \\
42 & 0 & 0 & {\small YYYY-MM-DD} \\
77 & 1 & 1 & {\small YYYY-MM-DD}
\end{tabular}
\end{center}
This data is in accordance with how the key stakeholders produce their data;
it is transformed into the values described in the previous sections, such that:
\begin{itemize}
\item analysis\_time\_days is the number of days from randomisation to either
the event date ${T_A}$ or censoring date (i.e. the dropout date ${T_L}$ for
subjects that have dropped out or the date used to censor at the cut off
if a subject has not dropped out).
\item censor\_flag == 0 represents group A, a subject experienced event A.
\item censor\_flag == 1 \& drop\_out\_flag == 0 represents group O, a subject
did not experience an event nor dropout.
\item drop\_out\_flag == 1 represents group L, a subject dropped out
before the interim time.
\end{itemize}
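The mapping above can be illustrated with a short sketch (Python with hypothetical names, purely for exposition; the package itself is written in R):

```python
# Map the censor/drop-out flags from the event-data table to groups A, O, L.
def classify(censor_flag, drop_out_flag):
    if censor_flag == 0:
        return "A"   # subject experienced event A
    if drop_out_flag == 1:
        return "L"   # subject dropped out before the interim time
    return "O"       # censored at the cut-off, still at risk

# Rows from the example table: (analysis_time_days, censor_flag, drop_out_flag)
rows = [(28, 0, 0), (33, 0, 0), (87, 1, 0), (42, 0, 0), (77, 1, 1)]
groups = [classify(c, d) for _, c, d in rows]
```

Applied to the five example rows, this yields groups A, A, O, A, L.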
\subsubsection{Site recruitment data}
\begin{center}
\begin{tabular}{c c c}
study\_centre\_id & \quad centre\_actual\_enrol &\quad centre\_recruitment\_window\_days \\
xx001&0&140 \\
xx002&1&224 \\
xx003&2&238 \\
xx004&1&221 \\
xx005&0&201
\end{tabular}
\end{center}
\begin{itemize}
\item centre\_actual\_enrol represents the number of subjects
recruited at the unique centre ID.
\item centre\_recruitment\_window\_days represents the actual duration of recruitment
at the unique centre ID (not including any screening period).
The centre is active only during this interval $[a,b]$.
\end{itemize}
\subsubsection{New sites}
A vector of days for new centres to be initiated \{$u_i$\}, e.g. $c(3, 5, 5, 10, 10, 11, 12, 20)$.
\subsubsection{Configuration}
The following key pieces of information are accepted by the \textit{EventPrediction}
package
(with appropriate defaults) which are used to select the appropriate algorithms
and to provide key modelling values:
\begin{itemize}
\item distributions\_to\_use: A list detailing the distributions to model the dropouts and events:
e.g. list(events = "Exponential", drop\_outs = "Exponential")
\item target\_number\_of\_events: The target number of events for the analysis to be predicted
\item sample\_size: The number of patients planned to recruit
\item confidence\_level: The confidence probability for the upper and lower bounds
\end{itemize}
\section{R package and implementation in a clinical trial}
\subsection{Introduction}
In order to help the key stakeholders with the operational planning of a clinical trial
and to test the quality of the prediction, the \textit{EventPrediction} package was used
on several historic studies.
The following is one such case study in a historical oncology clinical trial,
using the data at a given interim time when recruitment had not completed.
The task was to predict the future recruitment and event counts with bounds and to compare
the results with the real trajectories of the recruitment and the events that actually occurred.
The event and centre data were provided in accordance with the package API
(as detailed in the previous section), along with a target number of events of 250
and a patient sample size of 405.
At the interim cut-off time the data
for the study had the following recruitment and event status:
\begin{itemize}
\item 152 Events (i.e. censor\_flag == 0)
\item 155 At Risk (i.e. censor\_flag == 1 and drop\_out\_flag == 0)
\item 13 Drop Outs (i.e. drop\_out\_flag == 1)
\item
85 patients left to recruit.
\end{itemize}
\subsubsection{Key predictions}
Given the above input and implementing the developed model,
the \textit{EventPrediction} package predicted:
\textit{Recruitment:} Predicted number of days until target number of patients is reached
with 90\% bounds, (mean, lower bound, upper bound): 151, 120, 191.
The estimated parameters of the PG model are
$\alpha = 4.8577$, $\beta = 516.13$, and the prediction is constructed according to
(\ref{PredRecr}), where a particular schedule of centre closures was used.
\textit{Events:} Predicted number of days until target number of events
reached, with 90\% bounds, (mean, lower bound, upper bound):\\
exponential model: 227, 181, 322 \\
Weibull model: 241, 188, 423
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=5cm]{Fig-4-lwd3.pdf}
\caption{Prediction of the remaining recruitment against time from cut-off (in days).
Real trajectory of patient recruitment is shown by black line.
Mean prediction and 90\% bounds are shown by the blue dashed and red dotted lines.}
\label{Fig-4}
\end{center}
\end{figure}
\subsubsection{Plots and parameter estimates}
Further, the \textit{EventPrediction} package produced the following three key plots,
along with key parameter estimates, for the key stakeholders to consume: \\
1) prediction of the remaining recruitment \\
2) prediction of the remaining number of events using exponential model \\
3) prediction of the remaining number of events using Weibull model
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=6cm]{Fig-5-lwd3.pdf}
\caption{Prediction of the remaining number of events against time from cut-off (in days).
Real trajectory of events is shown by black line.
Exponential model, mean prediction and bounds depicted by the blue dashed and red dotted lines.}
\label{Fig-5}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=6cm]{Fig-6-lwd3.pdf}
\caption{Prediction of the remaining number of events against time from cut-off (in days).
Real trajectory of events is shown by black line. Weibull model,
mean prediction and bounds depicted by the blue dashed and red dotted lines.}
\label{Fig-6}
\end{center}
\end{figure}
Fig~\ref{Fig-4} shows a very good fit of the predictive area for recruitment:
the real trajectory of the historical recruitment falls within the predictive area.
As one can see from Fig~\ref{Fig-5} and Fig~\ref{Fig-6}, the predictions
for both types of models, exponential and Weibull,
are rather close, with the following estimated parameters for each: \\
Exponential model: $\mu_A = 0.0069$, $\mu_L = 0.00034$
and $r = 0.2566$. \\
Weibull model: $a_A =1.1636$, $b_A = 126.9995$,
$a_L = 0.3177$, $b_L = 2053795.7$ and $r = 0.2996$.
The actual number of days until the target number of patients was reached in this trial was 152, and the actual number of days
until the planned number of events occurred was 244. As seen in the figures, the predictions
were indeed very close to the actuals.
\subsection{Predicting events when recruitment complete}
We also considered the same case study as above, but at a later interim time when recruitment had completed.
Therefore the task here was to predict the future event counts only.
As detailed in the previous section, this study has a target number of events of 250
and patients sample size of 405.
At the interim cut-off time the data
for the study had the following recruitment and event status:
\begin{itemize}
\item 220 Events (i.e. censor\_flag == 0)
\item 163 At Risk (i.e. censor\_flag == 1 and drop\_out\_flag == 0)
\item 22 Drop Outs (i.e. drop\_out\_flag == 1)
\end{itemize}
\subsubsection{Key predictions}
Given the above input and implementing the developed model,
the \textit{EventPrediction} package predicted:
\textit{Events:} Predicted number of days until target number of events
reached, with 90\% bounds, (mean, lower bound, upper bound):\\
exponential model: 84, 60, 118 \\
Weibull model: 83, 59, 116
\subsubsection{Plots and parameter estimates}
Further, the \textit{EventPrediction} package produced the following two key plots,
along with key parameter estimates, for the key stakeholders to consume: \\
1) prediction of the remaining number of events using exponential model \\
2) prediction of the remaining number of events using Weibull model
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=6cm]{Fig-8.pdf}
\caption{Prediction of the remaining number of events against time from cut-off (in days).
Real trajectory of events is shown by black line.
Exponential model, mean prediction and bounds depicted by the blue dashed and red dotted lines.}
\label{Fig-8}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=6cm]{Fig-7.pdf}
\caption{Prediction of the remaining number of events against time from cut-off (in days).
Real trajectory of events is shown by black line. Weibull model,
mean prediction and bounds depicted by the blue dashed and red dotted lines.}
\label{Fig-7}
\end{center}
\end{figure}
As one can see from Fig~\ref{Fig-8} and Fig~\ref{Fig-7}, the predictions
for both types of models, exponential and Weibull,
are rather close, with the following estimated parameters for each: \\
Exponential model: $\mu_A = 0.00553$, $\mu_L = 0.00034$
and $r = 0.2128$. \\
Weibull model: $a_A =0.9834$, $b_A = 183.8473$,
$a_L = 0.36351$, $b_L = 349763.5$ and $r = 0.2067$.
The actual number of days until the planned number of events occurred was 91. As seen in the figures, the predictions
were indeed very close to the actuals.
\section{Fitting models to real data}
\subsection{Kaplan-Meier plots}
To assess the model fit, we looked at the Kaplan-Meier (KM) curve for each interim dataset and compared it to
the predicted survival functions for the exponential and Weibull distributions. This provides a visualisation
of model fit:
the best-fitting model for the interim data is the model whose distribution best matches the KM curve.
This is done for the occurrence of events $A$ and so only informs the choice of
distribution for modelling events $A$. However, similar curves can be created to test the fit
of the dropout distribution.
The survival functions for exponential and Weibull models with cure are calculated
using their respective estimated parameters,
with formulae:
Exponential: $S(x, r, \mu_A) = r + (1 - r) \exp(-\mu_A x)$ \\
Weibull: $S(x, r, \alpha_A, b_A) = r + (1 - r) \exp(- (\frac{x}{b_A})^{\alpha_A})$
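These two survival functions are straightforward to evaluate; the sketch below (illustrative Python, not package code) also makes the two defining properties explicit: $S(0)=1$ and $S(x)\to r$ as $x\to\infty$, i.e. the cured fraction never experiences the event.

```python
import math

def S_exp(x, r, mu_A):
    # Exponential cure-model survival function: r + (1-r) exp(-mu_A x)
    return r + (1.0 - r) * math.exp(-mu_A * x)

def S_weib(x, r, a_A, b_A):
    # Weibull cure-model survival function: r + (1-r) exp(-(x/b_A)^a_A)
    return r + (1.0 - r) * math.exp(-((x / b_A) ** a_A))
```

For example, with the stopped-recruitment estimates quoted above, $S_{\rm exp}$ starts at 1 and levels off near $r = 0.2128$ for large $x$.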
In Fig~\ref{Fig-9}, one can see that the predicted survival functions for exponential and Weibull models
are very close and both map the KM curve well.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=11.0cm,height=6cm]{Fig-9.pdf}
\caption{Kaplan-Meier plot showing survival function using interim data
with associated confidence intervals (blue lines), alongside survival functions
for exponential (violet solid line) and Weibull (green dashed line) cure models.}
\label{Fig-9}
\end{center}
\end{figure}
\subsection{AIC and BIC criteria}
We also used the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) to test which model best fits the real dataset. These are calculated as
$$ {\rm AIC} = 2k - 2\,{\rm LogLik}, \qquad {\rm BIC} = k \log(n) - 2\,{\rm LogLik}, $$
where $k$ is the number of parameters in the model, $n$ is the total number of patients at the interim time,
and ${\rm LogLik}$ is the log-likelihood from the parameter estimation.
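As a direct transcription of these two formulae (a Python sketch; the log-likelihood values in the tables come from the actual model fits, not from this snippet):

```python
import math

def aic(k, loglik):
    # Akaike information criterion: 2k - 2 LogLik
    return 2.0 * k - 2.0 * loglik

def bic(k, n, loglik):
    # Bayesian information criterion: k log(n) - 2 LogLik
    return k * math.log(n) - 2.0 * loglik
```

Note that for $n \ge 8$ we have $\log(n) > 2$, so BIC penalises extra parameters more heavily than AIC.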
For the stopped recruitment case:
\begin{center}
\begin{tabular}{l c c}
\textbf{Model} & \quad \textbf{AIC} & \quad \textbf{BIC} \\
Exponential model, no cure & 3333.6&3341.6 \\
Exponential cure model &3319.0&3327.1 \\
Weibull (A) and exponential (L) cure model & 3321.0 & 3333.0 \\
Exponential (A) and Weibull (L) cure model & 3283.1 & 3299.1 \\
Weibull cure model&3285.0&3305.1
\end{tabular}
\end{center}
For the ongoing recruitment case:
\begin{center}
\begin{tabular}{l c c}
\textbf{Model} & \quad \textbf{AIC} & \quad \textbf{BIC} \\
Exponential model, no cure &2222.9 &2230.4 \\
Exponential cure model &2210.9&2218.4 \\
Weibull (A) and exponential (L) cure model & 2209.3 & 2220.6 \\
Exponential (A) and Weibull (L) cure model & 2183.7 & 2198.7 \\
Weibull cure model&2182.1&2200.9
\end{tabular}
\end{center}
The lower the AIC or BIC, the better the model fit. From the two tables above,
one can see that the cure models show an improvement on the ``no cure'' model.
The exponential (A) and Weibull (L) cure model is the best fit for the stopped recruitment dataset.
By small margins, the values of the criteria for ongoing recruitment
suggest that either the Weibull cure model
or the exponential (A) and Weibull (L) cure model is best suited to the data.
\section*{Conclusions}
We have developed a new analytic approach for the prediction of event counts
in event-driven trials
when recruitment is complete or ongoing.
We use the exponential and Weibull models and account for patient dropout and the possibility of cure.
Not only can we predict the future occurrence of events, but we can also predict
any remaining recruitment, with mean and bounds, using the Poisson-gamma recruitment model.
The developed results can easily be extended to the combined cure models,
using combinations of exponential and Weibull distributions for the time to event
and the time to dropout, and also to the log-normal cure model.
Using these novel advanced models
and with access to real subject level and centre level data
we are now able to address
key business use cases for a number of key stakeholders
in order to better forecast the operational design of event-driven clinical trials.
Furthermore, by centralising the exposed R package \textit{EventPrediction},
built utilising good software engineering principles, each of our key
stakeholders has access to the package and can obtain plots,
parameter estimates and predictions with bounds,
without contacting the mathematical modellers or the R package developer.
We have many opportunities for future improvements
to the mathematical modelling and \textit{EventPrediction} package.
One of the major priorities is to
evaluate the predictions against real clinical
trial operational data to ensure the existing and
any future models are as accurate as we found in testing on historical data.
We are also looking to incorporate other statistical distributions
for modelling time to both the main event and dropout.
\bibliographystyle{unsrtnat}
\small
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\def\mathrm{d}{\mathrm{d}}
\def\langle{\langle}
\def\rangle{\rangle}
\def^{\dag}{^{\dag}}
\def{\cal K}{{\cal K}}
\def{\cal N}{{\cal N}}
\font\maj=cmcsc10 \font\itdix=cmti10
\def\mbox{I\hspace{-.15em}N}{\mbox{I\hspace{-.15em}N}}
\def\mbox{I\hspace{-.15em}H\hspace{-.15em}I}{\mbox{I\hspace{-.15em}H\hspace{-.15em}I}}
\def\mbox{I\hspace{-.15em}1}{\mbox{I\hspace{-.15em}1}}
\def\rm{I\hspace{-.1em}N}{\rm{I\hspace{-.1em}N}}
\def\mbox{Z\hspace{-.3em}Z}{\mbox{Z\hspace{-.3em}Z}}
\def{\rm Z\hspace{-.2em}Z}{{\rm Z\hspace{-.2em}Z}}
\def{\rm I\hspace{-.15em}R}{{\rm I\hspace{-.15em}R}}
\def{\rm I\hspace{-.10em}R}{{\rm I\hspace{-.10em}R}}
\def\hspace{3pt}{\rm l\hspace{-.47em}C}{\hspace{3pt}{\rm l\hspace{-.47em}C}}
\def\mbox{l\hspace{-.47em}Q}{\mbox{l\hspace{-.47em}Q}}
\def\begin{equation}{\begin{equation}}
\def\end{equation}{\end{equation}}
\def\begin{enumerate}{\begin{enumerate}}
\def\end{enumerate}{\end{enumerate}}
\renewcommand{\theequation}{\thesection.\arabic{equation}}
\newcommand{{\bf 1}}{{\bf 1}}
\newtheorem{lemma}{Lemma}
\makeatletter
\usepackage{latexsym}\usepackage{bm}\def1.0{1.0}
\makeatother
\begin{document}
\title{De Sitter scalar-spinor interaction in Minkowski limit}
\author{Y. Ahmadi}
\email{ahmadi.pave@gmail.com} \affiliation{Department of Physics, Razi University, Kermanshah, Iran}
\begin{abstract}
\noindent \hspace{0.35cm}
The scalar-spinor interaction Lagrangian is usually presented by the Yukawa potential. In the de Sitter (dS) ambient space formalism, the interaction Lagrangian of the scalar and spinor fields was obtained from a new transformation, which is very similar to gauge theory. The interaction of the massless minimally coupled scalar field and the spinor field was investigated previously. Here, the Minkowski limit of the interaction between the massless minimally coupled scalar field and the massive spinor field in the ambient space formalism of de Sitter space-time is calculated. In the null curvature limit, the interaction Lagrangian and the massless minimally coupled scalar field vanish, the local transformation reduces to a constant phase transformation, and the covariant derivative reduces to the ordinary derivative; consequently, the interaction disappears in this limit. We therefore conclude that this interaction is due to the curvature of space-time, and that the massless minimally coupled scalar field may be part of the gravitational field.
\end{abstract}
\maketitle
\section{Introduction}
Our universe is expanding with positive acceleration, and experimental data such as high-redshift observations of type Ia supernovae \cite{Riess, Perl}, galaxy clusters \cite{Henry 1,Henry 2} and the cosmic microwave background radiation \cite{Nature} confirm it. Therefore, on large scales our universe may be described by the de Sitter (dS) metric. Furthermore, recent observational data \cite{BICEP2} confirm that the early universe, to a good approximation, is a dS universe (the inflationary epoch). Thus the construction of quantum field theory (QFT) in dS space-time is essential. In the dS ambient space formalism, efforts have been made in recent years to construct QFT \cite{ta97,77,ta96,taazba,taro12,berotata,derotata}. Fortunately, because of the linearity of the action of the dS group in the ambient space formalism, the construction of QFT is very similar to that in Minkowski space-time. Another advantage of the ambient space formalism is the analyticity properties of the two-point functions, which were proved by Bros et al. \cite{brgamo,brmo,brmo03}. These properties are the fundamental basis for interaction calculations. The vector-spinor field interaction in the dS ambient space formalism was investigated in a previous paper, where the $\cal S$ matrix of Compton scattering was obtained in this formalism \cite{jaahta2017}; it was shown that the null curvature limit of this $\cal S$ matrix matches its Minkowski counterpart.
In the standard model of QFT, the interaction Lagrangian is usually constructed from gauge theory, except for the scalar-spinor field interaction, which cannot be obtained from a group theoretical point of view. The scalar-spinor interaction Lagrangian is presented by the Yukawa potential. In the dS ambient space formalism, the interaction Lagrangian of the scalar and spinor fields was obtained from a new transformation \cite{higgs}. This approach is very similar to gauge theory. Also, in \cite{ahjata2019} the interaction of the massless minimally coupled ({\it mmc}) scalar field and the spinor field was investigated. Here, we obtain the Minkowski limit of this interaction. The null curvature limits of the {\it mmc} scalar-spinor interaction Lagrangian and of the {\it mmc} scalar two-point function are discussed in this paper.
The organization of this article is as follows. We recall the spinor and {\it mmc} scalar interacting fields in the dS ambient space formalism in section \ref{interaction}. In section \ref{flat limit}, the scalar-spinor interaction is investigated in the null curvature limit. Finally, the conclusion is presented in section \ref{Conclusion}.
\setcounter{equation}{0}
\section{The interaction in dS ambient space formalism}\label{interaction}
The dS space-time can be considered as a 4-dimensional hyperboloid embedded in 5-dimensional Minkowski space with the following relation:
$$
M_H=\{x \in {\rm I\hspace{-.15em}R}^5| \; \; x \cdot x=\eta_{\alpha\beta} x^\alpha
x^\beta =-H^{-2}\};\;\; \alpha,\beta=0,1,2,3,4 ,$$
where
$\eta_{\alpha\beta}=\mbox{diag}(1,-1,-1,-1,-1)$ and $H$ is the Hubble constant. The dS metric is:
$$
ds^2=\eta_{\alpha\beta}dx^{\alpha}dx^{\beta}|_{x^2=-H^{-2}}=g_{\mu\nu}^{dS}dX^{\mu}dX^{\nu};\;\; \mu,\nu=0,1,2,3 ,$$
where $X^\mu$ are the dS intrinsic coordinates and $x^\alpha$ are the coordinates of the 5-dimensional ambient space formalism. There are many coordinate systems that express the ambient space coordinates $x^\alpha$ in terms of the intrinsic coordinates $X^\mu$. For investigating the flat space limit, the following coordinate system is suitable:
\begin{equation} \label{flat cordinates}
x^\alpha= \left(
H^{-1}\sinh(HX^0),\;
H^{-1}\dfrac{\overrightarrow{X}}{\lVert\overrightarrow{X}\rVert}\cosh(HX^0)\sin(H\lVert\overrightarrow{X}\rVert),\;
H^{-1}\cosh(HX^0)\cos(H\lVert\overrightarrow{X}\rVert)\right)\end{equation}
where $\lVert\overrightarrow{X}\rVert=(X_1^2+X_2^2+X_3^2)^\frac{1}{2}$ is the norm of the three-vector $\overrightarrow{X}$ \cite{brgamo}.
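As a quick consistency check (not part of the original derivation), these coordinates indeed satisfy the hyperboloid condition:
$$ x \cdot x = H^{-2}\Big[\sinh^2(HX^0) - \cosh^2(HX^0)\sin^2(H\lVert\overrightarrow{X}\rVert) - \cosh^2(HX^0)\cos^2(H\lVert\overrightarrow{X}\rVert)\Big] $$
$$ = H^{-2}\left[\sinh^2(HX^0) - \cosh^2(HX^0)\right] = -H^{-2}. $$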
The Fourier transformation cannot be defined in a general curved space-time, but in dS space-time a Fourier-Helgason-type transformation can be defined because of the maximal symmetry of the dS hyperboloid \cite{hel62,hel94,brmo03,brmo}. Corresponding to any space-time variable $x^\alpha$, one can define a $\xi^\alpha=(\xi^0, \vec \xi, \xi^4)$ on the positive null cone $C^+=\left\lbrace \xi \in {\rm I\hspace{-.15em}R}^5|\;\; \xi\cdot \xi=0,\;\; \xi^{0}>0 \right\rbrace$, which plays the role of the energy-momentum parameter \cite{brmo03,higgs,brmo}.
In the ambient space formalism of the dS universe, the action of the free \textit{mmc} scalar and spinor fields is \cite{77,bagamota,higgs}:
\begin{equation} \label{conf spinor massless action} S(\Psi,\Phi)=\int \mathrm{d}\mu(x){\cal L}_{free}(\Psi , \Phi)=\int \mathrm{d}\mu(x)\left[H \bar{\Psi} \gamma^4\left( -i\barra{x} \barra{\partial}^\top+2i\pm\nu\right) \Psi+\Phi_m \;\partial^\top\cdot\partial^\top\;\Phi_m \right],\end{equation}
where $\mathrm{d}\mu(x)$ is the dS-invariant volume element, $\Psi$ is the spinor field and $\bar{\Psi}=\Psi^{\dag}\gamma^0\gamma^4$ is the adjoint spinor field in the dS ambient space formalism. The $\gamma^\alpha$ are five matrices with the properties:
\begin{equation} \gamma^{\alpha}\gamma^{\beta}+\gamma^{\beta}\gamma^{\alpha}
=2\eta^{\alpha\beta},\;\;\;\;\gamma^{\alpha\dagger}=\gamma^{0}\gamma^{\alpha}\gamma^{0}.\end{equation}
The ambient space gamma matrices $\gamma^\alpha$ are different from the Minkowski gamma matrices $\gamma'^{\mu}$; they are related as \cite{bagamota}:
\begin{equation} \label{gamma relation}\gamma'^{\mu}=\gamma^{\mu}\gamma^4.\end{equation}
In \eqref{conf spinor massless action}, $\nu$ is related to the dS mass parameter, $m^2_{f,\nu}=H^2(2+\nu^2\pm i\nu)$, which in the flat space limit reduces to the Minkowski mass parameter $m$ \cite{bagamota}. Also, $\barra x=\eta_{\alpha\beta}\gamma^{\alpha} x^{\beta}$ and $\cancel{\partial}^\top=\gamma^\alpha\partial_\alpha^\top=\gamma^\alpha\left(\partial_\alpha+H^2x_\alpha x\cdot\partial\right)$.
In the second term of \eqref{conf spinor massless action}, $\Phi_m$ is the {\it mmc} scalar field \cite{77}.
The action \eqref{conf spinor massless action} is invariant under the global U(1) symmetry.
By replacing the transverse derivative $\partial_\alpha^\top$ with the new derivative $D_\alpha\Psi=(\partial_\alpha^\top+{\cal G}B^\top_\alpha\Phi_m)\Psi$, the action \eqref{conf spinor massless action} becomes invariant under the following local transformation \cite{higgs}:
\begin{equation}\label{transformation}
\Psi\longrightarrow\Psi'=e^{-\frac{1}{2}(Hx\cdot B)^{-2}}\Psi
\;\;\;,\;\;\;
\Phi_m\longrightarrow\Phi'_m=\Phi_m+(Hx\cdot B)^{-3}\,.
\end{equation}
Then the interaction Lagrangian is obtained as \cite{higgs}:
\begin{equation} \label{higgs spinor lagrangian} {\cal L}_{int}=-i{\cal G}\;H\bar{\Psi} \gamma^4 \barra{x} \barra{B^\top}\Phi_m\Psi. \end{equation}
Here $B_\alpha$ is an arbitrary constant 5-vector satisfying $B^\alpha B_\alpha=0$, and ${\cal G}$ is the coupling constant that determines the interaction strength. The scalar field $\Phi_m$ can be written in terms of the massless conformally coupled ({\it mcc}) scalar field as \cite{higgs,77,khrota,gareta}:
\begin{equation}\label{magic}
\Phi_m(x)= \left[HA\cdot\partial^\top + 2 H^3 A\cdot x\right]\Phi_c(x),\end{equation}
where $A_\alpha$ is an arbitrary constant 5-vector. For dimensional consistency, the dimension of $A$ can be fixed as $H^{-2}$. The \textit{mmc} scalar two-point function can be obtained as \cite{higgs}:
\begin{equation} \label{mmc 2point function}
W_m(x,x')=\frac{iH^4}{8\pi^2}
\frac{({\cal Z}-3)H^4\left[(A\cdot x)^2 +(A\cdot x')^2+A\cdot x\, A\cdot x' \,{\cal Z}\right]+6H^4A\cdot x A\cdot x'-(1-{\cal Z})H^2A\cdot A }{(1-{\cal Z}+i\epsilon)^3},\end{equation}
where $\cal Z$ is ${\cal Z}(x,x')=-H^2\left(x\cdot x'\right)=1+\dfrac{H^2}{2}(x-x')^2$ \cite{brmo,chta}.
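The last identity for ${\cal Z}$ follows from $x\cdot x=x'\cdot x'=-H^{-2}$ on the dS hyperboloid. A quick numerical sanity check (a Python sketch; the ambient metric is taken as $\eta=\mathrm{diag}(1,-1,-1,-1,-1)$, and the value of $H$, the parametrization of the hyperboloid, and the sample points are arbitrary choices):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0, -1.0])  # ambient metric, signature (+----)
H = 0.7

def dot(a, b):
    return a @ eta @ b

def point(t, n):
    """A point on the dS hyperboloid x.x = -1/H^2 (global-type parametrization)."""
    n = np.asarray(n, float)
    n /= np.linalg.norm(n)          # Euclidean unit spatial direction
    return np.concatenate(([np.sinh(t)], np.cosh(t) * n)) / H

x  = point(0.3,  [1, 0, 2, 0])
xp = point(-1.1, [0, 1, 0, 3])

assert np.isclose(dot(x, x),   -1/H**2)
assert np.isclose(dot(xp, xp), -1/H**2)

Z1 = -H**2 * dot(x, xp)                       # Z = -H^2 (x . x')
Z2 = 1 + 0.5 * H**2 * dot(x - xp, x - xp)     # Z = 1 + (H^2/2)(x - x')^2
assert np.isclose(Z1, Z2)
```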
\section{the null curvature limit}\label{flat limit}
In the null curvature limit, the {\it mcc} scalar field and spinor field become their counterpart in the Minkowski space \cite{bagamota,brgamo,brmo}:
\begin{equation}\label{psi flat limit}
\lim_{H\rightarrow 0}\Psi(x)=\psi(X)
\;,\;\;\;
\lim_{H\rightarrow 0}\bar{\Psi}(x)\gamma^4=\bar{\psi}(X), \;\;\; \lim_{H\rightarrow 0} \Phi_c(x)=\phi(X).
\end{equation}
By using \eqref{flat cordinates} it is straightforward to show that:
$$
\lim_{H\rightarrow 0}H\barra{x}=-\gamma^4.
$$
Since the five-vector $B^\alpha$ is constant, $B^4$ can be chosen to vanish. Therefore, in the null curvature limit one obtains:
\begin{equation} \label{B flat limit}
\lim_{H\rightarrow 0}\barra B^{\top}=\gamma^4 B_4+\gamma^\mu B_\mu= B_{\mu}\gamma^{\mu} =-B_{\mu}\gamma^{'\mu}\gamma^4\;;\;\;\;\;\;\mu =0,1,2,3.\end{equation}
From \eqref{magic}, one obtains $\Phi_m$ in the limit $H\rightarrow 0$ as:
\begin{equation} \label{phi mmc flat limit}
\lim_{H\rightarrow 0}\Phi_m(x) =0\;.
\end{equation}
By using the following null curvature limits:
$$
\lim_{H\rightarrow 0}(x-x')=(X-X'),\;\;\;\lim_{H\rightarrow 0}{\cal Z}=1, \;\; \lim_{H\rightarrow 0}\left(H\;A\cdot x\right)=-A^4 \equiv 0,
$$
one can obtain:
\begin{equation} \label{mmc 2point flat limit}
\lim_{H\rightarrow 0}W_{m}=\frac{H^2}{2\pi^2}\;
\frac{A\cdot A }{(X-X')^4}=0.\end{equation}
The transformations \eqref{transformation} and the covariant derivative $D_\alpha$ become, in the null curvature limit:
\begin{equation}\label{flat limit of transformation}
\psi\longrightarrow\psi'=e^{-\frac{i}{2}(B_4)^{-2}}\psi\;\;,\;\;D_\mu=\partial_\mu.
\end{equation}
Therefore, by using relations \eqref{higgs spinor lagrangian} and \eqref{phi mmc flat limit}, the null curvature limit of ${\cal L}_{int}$ is:
\begin{equation} \label{newint}
\lim_{H\rightarrow 0} {\cal L}_{int}=0.
\end{equation}
\section{conclusion}\label{Conclusion}
The interaction of a spinor field with the {\it mmc} scalar field has been investigated in the ambient space formalism of the dS universe. Since the local phase transformation of the dS spinor field becomes a constant phase transformation in the null curvature limit, the interaction disappears: both the interaction Lagrangian and the two-point function vanish in this limit. The \textit{mmc} scalar field in dS space-time also disappears in the null curvature limit, and it is therefore expected that this field may be a part of a gravitational field.
\\
{\bf{Acknowledgments}}: The author wishes to express his particular thanks to Prof. M. V. Takook for useful guidance and consultation, and also to M. Dehghani, M. Rastiveis, R. Raziani and S. Tehrani-Nasab for useful discussions.
\section{Introduction}
In general relativity (GR), spacetime structure is
determined by a dynamical metric tensor field
$g_{ab}$ and nothing else, and the theory is
both diffeomorphism invariant and locally
Lorentz invariant. Einstein-\ae ther theory is the
extension of GR that incorporates
a dynamical unit timelike vector field
$u^a$---the ``\ae ther"---which breaks the local
Lorentz symmetry down to a 3d rotation subgroup.
Direct coupling of matter to the \ae ther would
violate local Lorentz symmetry yet preserve
diffeomorphism invariance.
This paper presents a brief overview of the
current theoretical and observational status of this theory,
assuming that matter does not couple directly to the
\ae ther.
The action involving metric and \ae ther is
highly constrained. Besides the cosmological constant
term, the only independent diffeomorphism invariant
local terms containing no more than two derivatives are
\begin{equation} S = -\frac{1}{16\pi G}\int \sqrt{-g}~ (R+K^{ab}_{mn}
\nabla_a u^m \nabla_b u^n)~d^{4}x, \label{action} \end{equation}
where
$R$ is
the Ricci scalar, $K^{ab}_{mn}$ is defined as
\begin{equation} K^{ab}_{mn} = c_1 g^{ab}g_{mn}+c_2\delta_m^a\delta_n^b
+c_3\delta_n^a\delta_m^b +c_4u^au^bg_{mn} \end{equation}
with dimensionless coupling constants $c_i$,
and the unit timelike constraint on the \ae ther is
implicit.
(The metric signature is $({+}{-}{-}{-})$
and the speed of light defined by the metric $g_{ab}$
is unity.)
Higher derivatives would be suppressed by powers of
a (presumably) small length, e.g. the
Planck length.
It is assumed here that the
\ae ther is aligned at large scales with the rest frame of
the microwave background radiation.
Einstein-\ae ther theory---``\ae -theory" for short---is
similar to the vector-tensor
gravity theories studied by Will and
Nordtvedt,\cite{willnord} but with the crucial
difference that the vector field is constrained to
have unit norm. This constraint eliminates a
wrong-sign kinetic term for the length-stretching
mode,\cite{Elliott:2005va} hence gives the theory a
chance to be viable. An equivalent theory using the
tetrad formalism was first studied by
Gasperini,\cite{Gasperini} and in the above form it was
introduced by Jacobson and Mattingly.\cite{Jacobson:2000xp}
\section{Newtonian and post-Newtonian limits}
\label{NPN}
In the weak-field, slow-motion limit \ae -theory reduces to
Newtonian gravity,\cite{Carroll:2004ai} with a value of Newton's
constant $G_{\rm N}$ related to the parameter $G$ in the action
(\ref{action}) by
\begin{equation} G_{\rm N}=\frac{G}{1-c_{14}/2}, \label{GN}\end{equation}
where $c_{14}\equiv c_1+c_4$. (Similar notation
is used below for other additive combinations
of the $c_i$.)
For any choice of
the $c_i$, all parameterized post-Newtonian
(PPN) parameters\cite{willLR} of \ae -theory
agree with those of
GR\cite{Eling:2003rd,Foster:2005dk}
except the preferred frame
parameters $\alpha_{1,2}$
which are given by\cite{Foster:2005dk}
\begin{eqnarray}
\alpha_1&=& \frac{-8(c_3^2 + c_1c_4)}{2c_1 - c_1^2+c_3^2}\label{alpha1}\\
\alpha_2&=&\frac{\alpha_1}{2}
-\frac{(c_1+2c_3-c_4)(2c_1+3c_2+c_3+c_4)}{c_{123}(2-c_{14})}\label{alpha2} \end{eqnarray}
(This particular way of expressing $\alpha_2$ was given in
Ref.\ \refcite{Foster:2006az}. The small $c_i$ form of $\alpha_2$ was first
computed in Ref.\ \refcite{Graesser:2005bg}.)
Observations currently impose the strong constraints
$\alpha_1 \lesssim 10^{-4}$ and $\alpha_2\lesssim 4\times
10^{-7}$.\cite{willLR} Since \ae -theory has four free
parameters $c_i$, we may set
$\alpha_{1,2}$ exactly
zero by imposing the conditions\cite{Foster:2005dk}
\begin{eqnarray}
c_2&=&(-2c_1^2-c_1c_3 + c_3^2)/3c_1 \label{zeroalphac2}\\
c_4&=&-c_3^2/c_1 \label{zeroalphac4}.
\end{eqnarray}
With
(\ref{zeroalphac2},\ref{zeroalphac4}) satisfied, {\it
all} the PPN parameters of \ae -theory are equivalent to
those of GR. (The parameters $\alpha_{1,2}$ can also be
set to zero by imposing $c_{13}=c_{14}=0$, but this
case is pathological, as discussed in section
\ref{special}.)
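The cancellations behind (\ref{zeroalphac2},\ref{zeroalphac4}) are easy to check symbolically; a sympy sketch substituting the two conditions into (\ref{alpha1},\ref{alpha2}):

```python
from sympy import simplify, symbols

c1, c3 = symbols('c1 c3', positive=True)

# conditions (zeroalphac2) and (zeroalphac4)
c2 = (-2*c1**2 - c1*c3 + c3**2)/(3*c1)
c4 = -c3**2/c1
c123, c14 = c1 + c2 + c3, c1 + c4

alpha1 = -8*(c3**2 + c1*c4)/(2*c1 - c1**2 + c3**2)
alpha2 = alpha1/2 - (c1 + 2*c3 - c4)*(2*c1 + 3*c2 + c3 + c4)/(c123*(2 - c14))

assert simplify(alpha1) == 0   # numerator c3^2 + c1*c4 vanishes identically
assert simplify(alpha2) == 0   # factor 2c1 + 3c2 + c3 + c4 vanishes identically
```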
\section{Homogeneous isotropic cosmology}
Assuming spatial homogeneity and isotropy, $u^a$
necessarily coincides with the 4-velocity of the
isotropic observers,
and the \ae ther stress tensor
is just a certain combination of
the Einstein tensor and the stress tensor of a
perfect fluid with energy density proportional to the
inverse square of the scale factor, like the curvature
term in the Friedman equation.\cite{Mattingly:2001yd,Carroll:2004ai}
The latter contribution plays no important cosmological
role since the spatial curvature is small, while the former
renormalizes the gravitational constant appearing in
the Friedman equation, yielding\cite{Carroll:2004ai}
\begin{equation}
G_{\rm cosmo}=\frac{G}{1+(c_{13}+3c_2)/2}.
\end{equation}
Since $G_{\rm cosmo}$
is not the same as $G_{\rm N}$ the expansion rate
of the universe differs from what would have been expected
in GR with the same matter content. The ratio is constrained
by the observed primordial ${}^4$He abundance to satisfy
$|G_{\rm cosmo}/G_{\rm N} - 1|<1/8$.\cite{Carroll:2004ai}
When the PPN parameters
$\alpha_{1,2}$ are set to zero by (\ref{zeroalphac2},\ref{zeroalphac4}),
it turns out that $G_{\rm cosmo}=G_{\rm N}$,
so this nucleosynthesis constraint is automatically satisfied.\cite{Foster:2005dk}
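The equality $G_{\rm cosmo}=G_{\rm N}$ is again a one-line symbolic check, using (\ref{GN}) and the $G_{\rm cosmo}$ expression above with the conditions (\ref{zeroalphac2},\ref{zeroalphac4}) substituted (a sympy sketch):

```python
from sympy import simplify, symbols

c1, c3, G = symbols('c1 c3 G', positive=True)

c2 = (-2*c1**2 - c1*c3 + c3**2)/(3*c1)   # alpha_1 = alpha_2 = 0 conditions
c4 = -c3**2/c1
c13, c14 = c1 + c3, c1 + c4

G_N     = G/(1 - c14/2)
G_cosmo = G/(1 + (c13 + 3*c2)/2)

# under the conditions, c13 + 3 c2 = -c14, so the two couplings coincide
assert simplify(G_cosmo - G_N) == 0
```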
\section{Linearized wave modes}
When linearized about a flat metric and constant
\ae ther, \ae -theory possesses five massless modes
for each wave vector: two spin-2,
two spin-1, and one spin-0 mode.
The squared speeds of these modes
relative to the \ae ther rest frame
are
given by\cite{Jacobson:2004ts}
\begin{eqnarray}\label{speeds}
\mbox{spin-2}\qquad&&1/(1-c_{13})\label{s2}\\
\mbox{spin-1}\qquad&&(c_1-{\textstyle{\frac{1}{2}}} c_1^2+{\textstyle{\frac{1}{2}}}
c_3^2)/c_{14}(1-c_{13})\label{s1}\\
\mbox{spin-0}\qquad&&c_{123}(2-c_{14})/c_{14}(1-c_{13})(2+c_{13}+3c_2)\label{s0}
\end{eqnarray}
The corresponding polarization tensors
were found in one gauge in Ref.\ \refcite{Jacobson:2004ts} and
in another gauge in Ref.\ \refcite{Foster:2006az}.
The energy density of
the spin-2 modes is always positive, while for the
spin-1 modes it has the sign of $(2c_1 -c_1^2 +c_3^2)
/(1-c_{13})$, and for the spin-0 modes it has the sign
of $c_{14}(2-c_{14})$.\cite{Eling:2005zq,Foster:2006az}
(These reduce to the results of Ref.\ \refcite{Lim:2004js}
in the decoupling limit where
gravity is turned off.)
These squared speeds correspond to
(frequency/wavenumber)${}^2$,
so must be non-negative to avoid
imaginary frequency instabilities.
They must moreover be greater than unity
(super-luminal),
to avoid the existence of vacuum \v{C}erenkov
radiation by matter.\cite{Elliott:2005va}
(The strongest constraints arise from
the existence of ultra high energy
cosmic rays.) And the mode energy
densities should be positive, to avoid dynamical
instabilities.
With the $\alpha_{1,2}=0$ conditions
(\ref{zeroalphac2},\ref{zeroalphac4}) imposed,
all of these conditions are met for all of the modes
if and only if
$c_\pm=c_1\pm c_3$ are restricted
by the inequalities\cite{Foster:2005dk}
\begin{eqnarray}\label{superluminal}
0&\le& c_+\le1\label{sl1}\\
0&\le& c_-\le c_+/3(1-c_+).\label{sl2}
\end{eqnarray}
Interestingly, if the mode speeds are
instead required to be {\it less}
than unity (sub-luminal), then
the spin-1 and spin-0 energy
densities are negative. Hence not only the
\v{C}erenkov constraint, but
also energy positivity (together with
$\alpha_{1,2}=0$)
requires mode speeds greater than unity.
Note that when (\ref{zeroalphac4}) holds, we have
$c_{14}=2c_+c_-/(c_++c_-)$, which satisfies $0\le
c_{14}<2$ when the constraints (\ref{sl1},\ref{sl2})
hold. Thus in particular the condition for attractive
gravity mentioned in section \ref{NPN} need not be
separately imposed, and $c_{14}$ is non-negative.
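A numerical scan makes these statements concrete. The following Python sketch (the grid over the allowed region is an arbitrary choice) checks that, with (\ref{zeroalphac2},\ref{zeroalphac4}) imposed and $(c_+,c_-)$ inside the region (\ref{sl1},\ref{sl2}), the squared speeds (\ref{s2})--(\ref{s0}) are all super-luminal, the energy-sign combinations quoted above are positive, and $c_{14}=2c_+c_-/(c_++c_-)$ with $0< c_{14}<2$:

```python
import numpy as np

def couplings(cp, cm):
    """c_i with alpha_1 = alpha_2 = 0 imposed, parametrized by c_pm = c1 +- c3."""
    c1, c3 = (cp + cm)/2, (cp - cm)/2
    c2 = (-2*c1**2 - c1*c3 + c3**2)/(3*c1)
    c4 = -c3**2/c1
    return c1, c2, c3, c4

def squared_speeds(c1, c2, c3, c4):
    c13, c14, c123 = c1 + c3, c1 + c4, c1 + c2 + c3
    s2 = 1/(1 - c13)
    s1 = (c1 - c1**2/2 + c3**2/2)/(c14*(1 - c13))
    s0 = c123*(2 - c14)/(c14*(1 - c13)*(2 + c13 + 3*c2))
    return s2, s1, s0

# scan the interior of the allowed region 0 < c+ < 1, 0 < c- < c+/(3(1-c+))
for cp in np.linspace(0.05, 0.95, 19):
    for cm in np.linspace(1e-3, cp/(3*(1 - cp)), 20, endpoint=False):
        c1, c2, c3, c4 = couplings(cp, cm)
        c14 = c1 + c4
        assert np.isclose(c14, 2*cp*cm/(cp + cm))       # c14 = 2 c+ c-/(c+ + c-)
        assert 0 < c14 < 2
        assert all(s >= 1 for s in squared_speeds(c1, c2, c3, c4))
        assert (2*c1 - c1**2 + c3**2)/(1 - c1 - c3) > 0  # spin-1 energy sign
        assert c14*(2 - c14) > 0                         # spin-0 energy sign
```

(The spin-0 squared speed approaches unity on the boundary $c_-=c_+/3(1-c_+)$, which is why the scan stays strictly inside the region.)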
\section{Primordial perturbations}
Given the same
$G_{\rm N}$, and assuming the PPN parameters
$\alpha_{1,2}$ vanish, the primordial power in cosmological
spin-0 and spin-1 perturbations
is unchanged relative to GR, while
the power in spin-2 perturbations
differs from that in GR by the factor
$(1-c_{14}/2)(1-c_{13})^{1/2}$.\cite{Lim:2004js,Li:2007vz}
When the
constraints (\ref{sl1},\ref{sl2}) are satisfied this
factor is smaller than unity,
hence these spin-2 perturbations are
even more difficult to detect than in
GR.
As for the late time evolution
of these perturbations,
neutrino stresses in the radiation dominated epoch
source the spin-1 mode, which leads to modified
matter and CMB spectra. The effect is rather small however,
and is degenerate with matter-galaxy bias and with
neutrino masses.\cite{Li:2007vz}
\section{Radiation damping and strong self-field effects}
If the fields are weak everywhere
(including inside the radiating bodies),
and
the PPN parameters $\alpha_{1,2}$ vanish,
radiation is sourced only by the quadrupole.
Waves of spins 0, 1 and 2 are radiated,
and the
net power
is given by
$(G_N {\mathcal A}/5)\dddot{Q}_{ij}^2$, where
$Q_{ij}$ is the quadrupole moment and
${\mathcal A}={\mathcal A}[c_i]$ is a function
of the coupling parameters $c_i$
that reduces to unity in the case of GR.\cite{Foster:2006az}
Agreement with
the damping rate of
GR (confirmed to $\sim 0.1\%$ in
binary pulsar systems\cite{willLR})
can be achieved by imposing the condition
${\mathcal A}[c_i]=1$,
which is consistent with the
constraints (\ref{sl1},\ref{sl2}).
Compact sources with strong internal fields such
as neutron stars or black holes
can be handled\cite{Foster:2007gr} using an
``effective source" dynamics specified by a
worldline action
integral
\begin{equation}
S=-m_0\int d\tau\; [1 +\sigma(v^au_a-1)+\sigma'(v^au_a-1)^2+\dots],
\label{effectiveaction}
\end{equation}
where $v^a$ is the 4-velocity of the body, $u_a$ is
the local background value of the \ae ther, and $\sigma$
and $\sigma'$ are
constants characterizing the body, called
``sensitivity parameters" or just ``sensitivities".
The sensitivities scale as $c_i$ for small $c_i$.
The effects of nonzero sensitivities on two-body
dynamics and radiation rates lead to a number of
phenomena that are constrained by observations,
including violations of the strong equivalence
principle, modifications of the post-Newtonian
dynamics, modifications of quadrupole sourced
radiation, and both monopole and dipole sourced
radiation. When $\alpha_{1,2}=0$, all of these constraints
are met provided the sensitivities are less than
$\sim 0.001$, which will certainly be the case if
$c_i\lesssim 0.01$.\cite{Foster:2007gr}\footnote{This
corrects an error in
version 1 of Ref.\ \refcite{Foster:2007gr}, where
$\sigma$ is said to scale as $c_i^2$. (Also
a prefactor $c_{14}$ in Eqn. (70) should be deleted.)
As a result of this correction,
the likely constraints on $c_i$ are an order
of magnitude stronger, as stated here.\cite{bzf-pc}}
To be more
precise would require knowing the actual dependence of the
sensitivities on the $c_i$, which has so far only been
determined for $\sigma$ and only at leading order (where
$\sigma$ vanishes
when $\alpha_{1,2}=0$).
(The speed $V$ of the
observed binaries with respect to the background
\ae ther frame can be neglected in formulating these
constraints provided
$V\lesssim 10^{-2}$, which is easily satisfied for
any known proper motion relative to the rest frame
of the microwave background radiation.\cite{Foster:2007gr})
\section{Spherically symmetric stars and black holes}
Unlike GR, \ae -theory has a spherically symmetric mode,
corresponding to radial tilting of the \ae ther. For each mass,
there is a two parameter family of spherically
symmetric static vacuum solutions, rather than
a unique solution as in GR.\cite{Eling:2006df}
Asymptotic flatness reduces this to a one parameter
family.\cite{Eling:2003rd,Eling:2006df}
The solution outside a static
star is the unique solution
for a given mass in which
the \ae ther is aligned with the Killing vector.\cite{Eling:2006df}
This ``static \ae ther"
vacuum solution
depends
on the $c_i$ only through the combination
$c_{14}$, and
was found analytically (up to
inversion of a transcendental equation).\cite{Eling:2006df}
It is stable to linear
perturbations under the same conditions as for
stability of flat
spacetime, with the exception of the case
$c_{123}=0$.\cite{Seifert:2007fr}
The solution inside a fluid star has been found by
numerical integration, both for constant
density\cite{Eling:2006df} and for realistic neutron
star equations of state.\cite{Eling:2007xh}
The maximum masses
for neutron stars range from about 6 to 15\% smaller
than in GR when $c_{14}=1$,
depending on the equation of state.
The corresponding surface redshifts can be
as much as 10\% larger than in GR for the same mass.
Measurements of high
gravitational masses or precise surface redshifts thus
have the potential to yield strong joint constraints
on $c_{14}$ and the equation of state.
The radius of the innermost stable circular orbit (ISCO)
differs from the GR value $6G_{\rm N}M$
by a small term of relative order about $0.03c_{14}$.
For black holes,
the condition of regularity at the spin-0 horizon
selects a unique solution from the one-parameter
family for a given mass.\cite{Eling:2006ec}
When a black hole forms from collapse of matter, the
spin-0 horizon develops in a nonsingular region of
spacetime, where the evolution should be regular. This
motivated the conjecture that collapse
produces a black hole
with nonsingular spin-0 horizon, which has been
confirmed for some particular examples
in numerical simulations of collapse of
a scalar field.\cite{Garfinkle:2007bk}
The black holes with nonsingular spin-0 horizons are
rather close to Schwarzschild outside the horizon for
a wide range of couplings; for instance, the ISCO
radius differs by a factor $(1 + 0.043 c_1 +
0.061 c_1^2)$, in
the case with $c_3=c_4=0$ and $c_2$ fixed so that the
spin-0 speed is unity.\cite{cte-pc} (This
expansion is accurate at least when $c_1\le0.5$.
No solution with
regular spin-0 horizon exists in this case when $c_1
\gtrsim 0.8$.) Inside the horizon the solutions differ
more, but like Schwarzschild they contain a spacelike
singularity. Black hole solutions with singular spin-0
horizons have been studied in Ref.\
\refcite{Tamaki:2007kz}. These solutions can differ
much more outside the horizon. Quasi-normal modes of
black holes in \ae -theory have been investigated in
Refs.\ \refcite{Konoplya:2006rv}.
\section{Special values of $c_i$?}
\label{special}
The first case to be examined in
detail\cite{Kostelecky:1989jw,Jacobson:2000xp} was
$c_{13}=c_2=c_4=0$, i.e.\ the ``Maxwell action"
together with the unit constraint on the vector. The
PPN result for $\alpha_2$ (\ref{alpha2}) is infinite in
this case, and the spin-0 mode speed is zero. The
perturbation series used in the PPN analysis is thus
evidently not applicable. Independently of that
however, other problems with this case have been
identified, such as the formation of shock
discontinuities\cite{Jacobson:2000xp,Clayton:2001vy}
and a possibly related
instability.\cite{Seifert:2007fr}
Assuming now that
$\alpha_{1,2}=0$ and the
constraints (\ref{sl1},\ref{sl2}) are satisfied,
and putting aside the case $c_1=c_3=0$ which
is not covered by existing PPN analyses, all
but one of the
cases in which one of the $c_i$ vanishes, or
in which one of $c_{13}$, $c_{14}$, or $c_{123}$
vanishes, have the property that the spin-1 mode speed
(\ref{s1}) diverges while the energy of that mode is
nonzero. It seems very unlikely that such cases are
observationally viable, although they have not been examined
carefully. The exception is the special case
$c_3=c_4=2c_1+3c_2=0$, with $2/3<c_1<1$. This large
value of $c_1$ is probably inconsistent with the
strong field constraints from orbital binaries, but as
mentioned above those are not yet precisely known
because the sensitivity parameters have not yet been
computed for neutron stars, so this case is not yet
ruled out.
\section{Conclusion}
Einstein-\ae ther theory is an intriguing theoretical laboratory
in which gravitational effects of possible
Lorentz violation can be meaningfully studied.
There is a large (order unity) two-parameter space of Einstein-\ae ther
theories for which (i) the PPN parameters are identical to those of GR,
(ii) the linear perturbations are stable and carry positive energy,
(iii) there is no vacuum \v{C}erenkov radiation,
(iv) the dynamics of the cosmological scale factor and
perturbations differ little from GR,
(v) non-rotating neutron star and black hole solutions are
close to those of GR, but might be distinguishable
with future observations. Radiation damping from binaries
imposes an order $0.001$ constraint on one combination
of the parameters.
Strong self-field effects in neutron stars and black holes
produce violations of the strong equivalence principle and
higher order post-Newtonian effects which
will constrain all the
parameters $c_i$ to be less than
around $0.01$, presuming that
the sensitivity parameters for neutron stars (which have not
yet been computed with the required precision) turn out to
have the expected magnitude.
\section*{Acknowledgments}
I am grateful to C.T.\ Eling,
B.Z.\ Foster, B.\ Li, and E.\ Lim for helpful correspondence.
This work was
supported by NSF grant PHY-0601800.
\section{Introduction}
In Classical Mechanics one of the first paradigmatic problems that students
encounter is the dynamics of a falling body,
\textit{i.e.} an object pulled down to the ground (\textit{e.g.} from the Tower of Pisa)
by the constant force of Earth's gravity.
However, amazingly enough, the same problem is rarely discussed in a Quantum
Mechanics course. The reason lies in the sharp contrast between
the physical simplicity of the problem and the difficulty of its mathematical description
within the usual structure of basic Quantum Mechanics courses, which are largely
based on the solution of the time-dependent Schr\"odinger equation
$i \hbar \frac{\partial \psi}{\partial t}=H\psi$ for the wavefunction
$\psi$ in terms of the eigenfunctions and eigenvalues of the time-independent equation $H\psi=E\psi$.
Indeed, in the traditional quantum mechanics approach to the problem of determining the
wavefunction at time $t$, it is necessary to involve the Airy functions
and to project the wavefunction of the falling body onto this set of eigenfunctions.
Here we
show that an alternative and easier way to deal with the quantum treatment of
the falling body is both pedagogically very simple to introduce and
at the same time general enough to be applicable to the single-particle case and to general
quantum many--body systems. This
approach exploits the possibility of performing a gauge transformation of the wavefunction in
correspondence with a change of reference frame, from the inertial frame of the laboratory
to the accelerated frame of the falling body. In the new frame there is
of course no longer any gravitational effect and therefore the system appears to be ``free'',
\textit{i.e.} not subject to gravity. It is worth underlining that the same method
can be applied to study the effect of the gravitational force on a quantum many--body
system where the particles interact via a generic two--body potential of the form
$V(\left| \vec{r}_j- \vec{r}_k \right|)$.
approach permits to easily reach some interesting results. For instance, as we discuss in the
following, the time evolution of interesting observables, such as the variance of the
position of a generic falling wavepacket, is the same as the variance of a free wavepacket:
the solely effect of gravity shows up in the behaviour of the expectation values of position
(and their powers) which, on the other hand, can be obtained instead the classical
Newton's second law of motion. This follows from the Ehrenfest theorem, see
\textit{e.g.} \cite{Griffiths},
covered in all Quantum Mechanics courses, from which we can infer that
the momentum of the wavepacket grows
(for positive gravitational force, with $g<0$) linearly with time, while its position
varies quadratically in time. This last fact will be valid for a generic interacting
potential in any dimensions and, in this paper, we will focus on the three--
and one--dimensional cases as explanatory examples. We will also show how to get
easily the expression of the energy and the total momentum of the falling many--body system
using the basic commutation rules taught in a standard course of Quantum Mechanics. Finally,
as a last non-trivial example, we show how to put in relation the one--body density matrix
of the falling body with the corresponding one of the "free" (although possibly interacting) system,
and give a simple relation between the eigenvalues of the two density matrices.
\section{The quantum Einstein's rocket}
Let us begin by considering the famous Einstein \textit{gedankenexperiment}
of a rocket of length $L$ in {\em empty space} (\textit{i.e.}
very far from any other celestial body),
subject to an acceleration equal to $g \simeq 9.81 \frac{m}{s^2}$.
Suppose that inside the rocket there is a single quantum object, \textit{e.g.} an Einsteinium atom,
for which the relevant Schr\"odinger equation is simply
\begin{equation}
i \hbar \frac{\p}{\p t}\chi(x,t)\,=\,\left[-\frac{\hbar^2}{2m} \frac{\p^2}{\p x^2} +V_r(x,t)\right]\chi(x,t)\,,
\end{equation}
where we have chosen $x$ as the vertical direction along which the rocket is moving, while
$V_r(x,t)$ is a potential representing the confining action of the rocket walls during the motion.
If we now pass to the reference frame comoving with the rocket, by changing the spatial
variable from $x$ to $\widetilde{x} = x - g\,\frac{t^2}{2}$, the Schr\"odinger equation reads
\begin{equation}
\label{comoving_1body_HWB}
i\hbar\frac{\p}{\p t} \chi(\widetilde{x},t)=\left[-\frac{\hbar^2}{2m} \frac{\p^2}{\p \widetilde{x}^2} +V_r(\widetilde{x}) +i\,\hbar\,g\,t\,\frac{\p}{\p \widetilde{x}} \right]\chi(\widetilde{x},t)\,,
\end{equation}
where the rocket
potential in the new reference frame
is denoted by $V_r(\widetilde{x})$.
\begin{figure}[t]
\includegraphics[width=0.5\columnwidth]{fig1.pdf}
\caption{Pictorial visualization of the Einstein's \textit{gedankenexperiment}. The effect of an
inertial force $F=-m\,g$ in the Einsteinium atom due to the acceleration of the rocket
(right side picture) is the same as if the rocket would be at rest on Earth (left side picture).}
\label{fig1}
\end{figure}
If we now transform the wavefunction as
\begin{equation}
\label{transform_WF}
\chi(\widetilde{x},t)\,\equiv\,\exp\left[{\frac{i \,m\, g t}{\hbar} \left(\widetilde{x}+\frac{g\, t^2}{6}\right)}\right]\, \overline{\chi}(\widetilde{x},t)\,,
\end{equation}
we can get rid of
the term linear in $\widehat{p}$, \textit{i.e.} in $\frac{\partial}{\partial \widetilde{x}}$,
in Eq. (\ref{comoving_1body_HWB}) and
the Schr\"odinger equation becomes\footnote{For a discussion of the boundary conditions
of the wavefunction $\overline{\chi}$ see \cite{ourpaper}.}
\begin{equation}
\label{final_comoving_1body_HWB}
i\hbar\frac{\p}{\p t} \overline{\chi}(\widetilde{x},t)\,=\,\left[-\frac{\hbar^2}{2m}
\frac{\p^2}{\p \widetilde{x}^2} +V_r(\widetilde{x}) +m\,g\,\widetilde{x} \right]\overline{\chi}(\widetilde{x},t)\,.
\end{equation}
Therefore in the comoving frame of reference, the
Einsteinium atom feels the presence of a gravitational force
$$
F\,=\,-\frac{\p V}{\p x} \,=\,-m\,g\,\,\,
$$
pulling it to the ground of the rocket, as sketched in Fig.\ref{fig1}. This is essentially
Einstein's equivalence principle, which states that there is no difference between the
gravitational force and the fictitious force experienced by an observer in a non-inertial frame of
reference. For a more detailed discussion of Einstein's equivalence principle in a
Quantum Mechanics context see \cite{Nauenberg}.
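The frame transformation of this section can also be verified symbolically. The following sympy sketch drops the confining potential $V_r$ (which is simply multiplied through by the phase) and writes the phase as $\exp[\frac{im g}{\hbar}(t\widetilde{x}+\frac{g t^3}{6})]$, whose cubic-in-$t$ term matches the structure of the gauge phase $\theta$ found in Eq.~(\ref{theta_g}) below; it then checks that Eq.~(\ref{comoving_1body_HWB}) is mapped onto Eq.~(\ref{final_comoving_1body_HWB}):

```python
from sympy import Function, I, diff, exp, simplify, symbols

x, t = symbols('x t', real=True)            # x stands for x-tilde
m, g, hbar = symbols('m g hbar', positive=True)
f = Function('f')(x, t)                      # f stands for chi-bar

phase = exp(I*m*g/hbar*(t*x + g*t**3/6))
chi = phase*f

# comoving-frame equation (confining potential V_r omitted)
lhs = I*hbar*diff(chi, t)
rhs = -hbar**2/(2*m)*diff(chi, x, 2) + I*hbar*g*t*diff(chi, x)

# target equation for chi-bar: free evolution plus the linear potential m g x
target = I*hbar*diff(f, t) + hbar**2/(2*m)*diff(f, x, 2) - m*g*x*f

assert simplify((lhs - rhs)/phase - target) == 0
```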
In the following we are going to discuss how to deal with a Schr\"odinger equation
of the form (\ref{final_comoving_1body_HWB}), performing a gauge transformation
on the wavefunction to wash away the gravitational potential. Notice that the above
derivation can and will be repeated as well for a many--body quantum system made of interacting
particles.
\section{Free fall of a quantum particle}
We are interested in studying the Schr\"odinger equation for a particle of mass $m$
subject to a gravitational potential in one dimension (see \textit{e.g.} \cite{LandauLifshitz}):
\begin{equation}
\label{schro_onebody}
i \hbar \frac{\p}{\p t}\chi(x,t)\,=\,\left(-\frac{\hbar^2}{2m} \frac{\p^2}{\p x^2} +m\, g\,x\right)\chi(x,t)\,,
\eeq
where $\chi(x,t)$ is the wavefunction describing the motion of the particle under the force
$F=-m\,g$, with $g$ the gravitational acceleration. This problem can be solved by Fourier
transform going to momentum space, as discussed in \cite{Wadati1999}, while
other methods of solution, related to the Airy functions, were proposed and discussed in
\cite{Berry1978,Rau1996,Guedes2001,Feng2001}. For the solution of the problem with
a time-dependent gravitational force see our recent paper \cite{ourpaper}.
\vspace{3mm}
\noindent
{\bf Gauge Transformation}. Let us now discuss the method of solving Eq. (\ref{schro_onebody}) by
means of a gauge transformation. The starting point is to perform the following gauge transformation on the wavefunction:
\begin{equation}
\label{wavefunction_gaugetrasf}
\chi(x,t) \,\equiv\, e^{i \theta(x,t)} \,\eta(\rho(t), t)\,,
\end{equation}
where
$$
\rho(t)\,=\,x - \xi(t)
$$
with $\xi(t)$ and $\theta(x,t)$ to be determined.
Substituting (\ref{wavefunction_gaugetrasf}) into (\ref{schro_onebody}),
we see that, in order to eliminate the external potential term, we need to impose
\begin{equation}
\label{conditions_integrab}
\frac{d\xi}{dt} \,=\, \frac{\hbar}{m} \frac{\p \theta}{\p x} \,,
\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\, -\hbar \frac{\p \theta}{\p t} \,=\, \frac{\hbar^2}{2m} \left(\frac{\p \theta}{\p x}\right)^2 + m\,g\,x\,.
\end{equation}
Assuming the validity of these equations, it is easy to see that $\eta(\rho,t)$ satisfies
the Schr\"odinger equation with no external potential but with the new spatial variables,
\textit{i.e.}
\begin{equation}
\label{onebody_schro_eta}
i \hbar \frac{\p}{\p t}\eta(\rho,t)\,=\, -\frac{\hbar^2}{2\,m} \frac{\p^2 }{\p \rho^2}\eta(\rho,t)\,.
\end{equation}
If we now make the ansatz
\begin{equation}
\label{ansatz_theta}
\theta(x,t)\, =\, \frac{m}{\hbar} \frac{d\xi}{dt} \,x+ \Gamma(t)\,,
\end{equation}
and use it in Eq. (\ref{conditions_integrab}), we arrive at the conditions
\begin{equation}
\label{vANDdelta_onebody}
m\frac{d^2 \xi}{dt^2} \,=\, -m\,g\,, \,\,\,\,\,\,\,\,\,\, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \hbar \frac{d \Gamma}{dt}\,=\,-\frac{m}{2} \left(\frac{d\xi}{dt} \right)^2 \, ,
\end{equation}
which determine the functions $\xi(t)$ and $\Gamma(t)$
in terms of the gravitational acceleration $g$.
Once we solve the differential equations (\ref{vANDdelta_onebody}) with the trivial
initial conditions
$\xi(0)= d\xi(0)/dt= 0$ and $\Gamma(0)=0$, we get the following expression for the gauge phase
\begin{equation}
\label{theta_g}
\theta(x,t)\,=\,-\frac{m\,g\,t}{\hbar} \,x -\frac{m\,g^2 \,t^3}{6\,\hbar} \,,
\end{equation}
while the "translational" parameter $\xi$ reads
\begin{equation}
\label{xi_g}
\xi(t)\,=\, -\frac{g \,t^2}{2}\,\,\,.
\end{equation}
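It is straightforward to verify symbolically that (\ref{theta_g}) and (\ref{xi_g}) satisfy both (\ref{conditions_integrab}) and (\ref{vANDdelta_onebody}) (a sympy sketch):

```python
from sympy import diff, simplify, symbols

x, t = symbols('x t', real=True)
m, g, hbar = symbols('m g hbar', positive=True)

theta = -m*g*t/hbar*x - m*g**2*t**3/(6*hbar)
xi    = -g*t**2/2
Gamma = -m*g**2*t**3/(6*hbar)     # the x-independent part of theta

# the two conditions in (conditions_integrab)
assert simplify(diff(xi, t) - hbar/m*diff(theta, x)) == 0
assert simplify(-hbar*diff(theta, t)
                - hbar**2/(2*m)*diff(theta, x)**2 - m*g*x) == 0
# the two conditions in (vANDdelta_onebody)
assert simplify(m*diff(xi, t, 2) + m*g) == 0
assert simplify(hbar*diff(Gamma, t) + m/2*diff(xi, t)**2) == 0
```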
Eqs. (\ref{theta_g}) and (\ref{xi_g}), together with Eqs. (\ref{wavefunction_gaugetrasf})
and (\ref{onebody_schro_eta}), completely solve the Schr\"{o}dinger equation (\ref{schro_onebody}),
since $\eta(\rho,t)$ is simply the time--dependent solution of the free
Schr\"odinger equation, which is studied in any Quantum Mechanics course, a major example
being the spreading of a Gaussian wavepacket. Notice that, with our choices $\theta(x,0)=0$ and
$\rho(0)=x$, from (\ref{wavefunction_gaugetrasf}) we have $\chi(x,0)\,=\,\eta(x,0)$.
Therefore we can write the complete solution of the Schr\"odinger equation (\ref{schro_onebody}) as
\begin{equation}
\label{complete_sol_onebody}
\chi(x,t)\,=\,\exp\left[ i\theta(x,t) -i \frac{t}{\hbar}\frac{\widehat{p}^2}{2m}\right]\,\eta(\rho,0)\,=\,\exp\left[i\theta(x,t) -i\frac{t}{\hbar}\frac{\widehat{p}^2}{2m} -i \frac{\xi(t)}{\hbar} \widehat{p}\right] \,\chi(x,0)\,,
\end{equation}
where we use the definition of the translation operator
\begin{equation}
\label{def_transl_op}
\psi(x-\xi(t),t)\,=\,\exp\left[-i \frac{\xi(t)}{\hbar} \widehat{p}\right]\,\psi(x,t)\,,
\end{equation}
and the free time evolution operator. In Eqs. (\ref{complete_sol_onebody}) and
(\ref{def_transl_op}) $\widehat{p}$ refers to the momentum operator: $\widehat{p}\rightarrow-i\hbar\frac{\p}{\p x}$.
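As a quick sanity check of Eqs. (\ref{theta_g}) and (\ref{xi_g}), one can verify symbolically that the gauge-transformed wavefunction solves the falling-particle equation (\ref{schro_onebody}) when $\eta$ is a plane-wave solution of the free equation. The following sketch (Python with SymPy; the symbols and the plane-wave choice are our own illustration) performs this check:

```python
import sympy as sp

# symbols; k labels a plane-wave solution of the free Schrodinger equation
x, t, k = sp.symbols('x t k', real=True)
m, g, hbar = sp.symbols('m g hbar', positive=True)

theta = -m*g*t/hbar*x - m*g**2*t**3/(6*hbar)   # gauge phase, Eq. (theta_g)
xi = -g*t**2/2                                 # translational parameter, Eq. (xi_g)
rho = x - xi

# eta(rho, t): plane-wave solution of the free equation in the new variable
eta = sp.exp(sp.I*(k*rho - hbar*k**2*t/(2*m)))
chi = sp.exp(sp.I*theta)*eta                   # gauge-transformed wavefunction

# residual of  i hbar d_t chi = -hbar^2/(2m) d_x^2 chi + m g x chi
residual = sp.I*hbar*sp.diff(chi, t) \
           + hbar**2/(2*m)*sp.diff(chi, x, 2) - m*g*x*chi
assert sp.simplify(residual) == 0              # chi solves the equation with gravity
```

Since a generic free solution is a superposition of such plane waves, the check extends by linearity.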
\vspace{3mm}
\noindent
{\bf Expectation values}. Using the results just discussed we can study
how the expectation values of different operators, such as position and momentum,
evolve during the motion for a generic initial wavepacket $\chi(x,0)$. The expectation
values of powers of $\widehat{x}$ are defined as
\begin{equation}
\left\langle x^{\mathcal{N}}\right\rangle(t)\,\equiv\,\left \langle \chi(x,t)|\,\widehat{x}^{\mathcal{N}}\,| \chi(x,t)\right \rangle \,=\,\int_{-\infty}^{\infty} \left|\chi(x,t)\right|^2 \,x^{\mathcal{N}}\,dx\,,
\end{equation}
while the expectation values of powers of the momentum $\widehat{p}$ are
\begin{equation}
\left\langle p^{\mathcal{N}}\right\rangle(t)\,\equiv\,\left \langle \chi(x,t)| \,\widehat{p}^{\mathcal{N}}\, | \chi(x,t)\right \rangle \,=\,(-i\hbar)^{\mathcal{N}} \int_{-\infty}^{\infty} \chi^*(x,t) \,\frac{\p^{\mathcal{N}}}{\p x^{\mathcal{N}}} \chi(x,t)\,dx\,
\end{equation}
(where the wavefunction $\chi$ is normalized).
Assuming as initial values
\begin{equation}
\label{initial_conditions_xp}
\left \langle x\right \rangle (0)\,=\,x_0\,, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \left \langle p\right \rangle (0)\,=\,p_0\,,
\end{equation}
we can employ the solution (\ref{complete_sol_onebody}) to obtain the time evolved expectation
values of these quantities. In the following we focus, for simplicity, on the
cases ${\mathcal{N}}=1,2$. Since we think these calculations can be instructive in an
introductory Quantum Mechanics course, we perform them in detail so that the various steps
can be more easily followed.
\vspace{3mm}
\noindent
{\bf Commutation relations}.
Before proceeding in that direction, it is useful to study commutation relations
among different operators, such as the position operator with the translation and
the time evolution operators. Let us start with
\begin{equation}
\label{comm_rel_1}
\left[\widehat{x},e^{-i\,a\,\widehat{p}} \right]\,=\,\sum_{j=0}^\infty \frac{(-i\,a)^j}{j!}\,\left[\widehat{x},\widehat{p}^j\right]\,=\,\hbar\,a\sum_{j=0}^\infty \frac{(-i\,a)^{j-1}}{(j-1)!}\,\widehat{p}^{j-1}\,=\,\hbar\,a\,e^{-i\,a\,\widehat{p}} \,,
\end{equation}
where $a$ is a generic real parameter and we used
\begin{equation}
\label{known_commutator}
\left[ \widehat{x},\widehat{p}^j\right]=i\,\hbar\,j\,\widehat{p}^{j-1}\,\,\,.
\end{equation}
Then we have also
\begin{equation}
\nonumber
\left[\widehat{x}^2,e^{-i\,a\,\widehat{p}} \right]\,=\,\sum_{j=0}^\infty \frac{(-i\,a)^j}{j!}\,\left[\widehat{x}^2,\widehat{p}^j\right]\,:
\eeq
using the commutation rules, we get
\begin{equation}
\left[\widehat{x}^2,\widehat{p}^j\right]\,=\,\widehat{x}\left[\widehat{x},\widehat{p}^j\right]+\left[\widehat{x},\widehat{p}^j\right]\widehat{x}\,=\,i\,\hbar\,j\,\widehat{x}\,\widehat{p}^{j-1}+i\,\hbar\,j\,\widehat{p}^{j-1}\widehat{x}\,=\,-\hbar^2 \,(j-1)\,j\,\widehat{p}^{j-2}+2\,i\,\hbar\,j\,\widehat{p}^{j-1}\widehat{x}\,,
\eeq
where we have used (\ref{known_commutator}) in the second and last equality
(with $j-1$ instead of $j$). Therefore
\begin{equation}
\label{comm_rel_2}
\left[\widehat{x}^2,e^{-i\,a\,\widehat{p}} \right]\,=\,\hbar^2\,a^2\sum_{j=0}^\infty \frac{(-i\,a)^{j-2}}{(j-2)!}\,\widehat{p}^{j-2} +2\,\hbar\,a \sum_{j=0}^\infty \frac{(-i\,a)^{j-1}}{(j-1)!}\,\widehat{p}^{j-1}\widehat{x} \,=\,e^{-i\,a\,\widehat{p}}\,\left(\hbar^2\,a^2+2\,\hbar\,a\,\widehat{x}\right)\,.
\eeq
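These two translation-operator commutators can also be checked numerically, using the fact that $e^{-i\,a\,\widehat{p}}$ acts on wavefunctions as the finite translation $x \to x-\hbar\,a$. A small Python/NumPy sketch (the test function and the parameter values are arbitrary choices, with $\hbar=1$):

```python
import numpy as np

hbar, a = 1.0, 0.7                    # hbar = 1 units; a is an arbitrary real parameter
s = hbar*a                            # exp(-i a p) translates x by hbar*a

x = np.linspace(-10.0, 10.0, 2001)
psi = lambda y: np.exp(-(y - 0.3)**2)             # arbitrary localized test function
T = lambda f: (lambda y: f(y - s))                # action of exp(-i a p) on functions

# [x, e^{-i a p}] psi = hbar a e^{-i a p} psi
lhs1 = x*T(psi)(x) - T(lambda y: y*psi(y))(x)
rhs1 = hbar*a*T(psi)(x)
assert np.allclose(lhs1, rhs1)

# [x^2, e^{-i a p}] psi = e^{-i a p} (hbar^2 a^2 + 2 hbar a x) psi
lhs2 = x**2*T(psi)(x) - T(lambda y: y**2*psi(y))(x)
rhs2 = T(lambda y: (hbar**2*a**2 + 2*hbar*a*y)*psi(y))(x)
assert np.allclose(lhs2, rhs2)
```

Both identities hold pointwise, so the comparison is exact up to floating-point rounding.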
Now we turn our attention to the commutation relations between the
evolution operator and the position operator. We start again from
\begin{equation}
\nonumber
\left[\widehat{x},e^{-i\,b\,\widehat{p}^2} \right]\,=\,\sum_{j=0}^\infty \frac{(-i\,b)^j}{j!} \left[\widehat{x},\widehat{p}^{2j}\right]\,,
\eeq
where $b$ is some real parameter that will be fixed in the calculations.
Using (\ref{known_commutator}) with $2j$ instead of $j$, we find
\begin{equation}
\label{comm_rel_3}
\left[\widehat{x},e^{-i\,b\,\widehat{p}^2} \right]\,=\,2\,\hbar\, b\sum_{j=0}^\infty \frac{(-i\,b)^{j-1}}{(j-1)!}\widehat{p}^{2j-1}\,=\,2\,\hbar\,b\,e^{-i\,b\,\widehat{p}^2}\widehat{p}\,.
\eeq
We then evaluate
\begin{eqnarray}
\nonumber
\left[\widehat{x}^2,e^{-i\,b\,\widehat{p}^2} \right]&\,=\,&\widehat{x}\left[\widehat{x},e^{-i\,b\,\widehat{p}^2} \right]+\left[\widehat{x},e^{-i\,b\,\widehat{p}^2} \right]\widehat{x}\,=\,2\,\hbar\,b\,\widehat{x}\,e^{-i\,b\,\widehat{p}^2}\widehat{p}+2\,\hbar\,b\,e^{-i\,b\,\widehat{p}^2}\widehat{p}\,\widehat{x}\,=\,
\\ \nonumber
&\,=\,&2\,\hbar\,b\,e^{-i\,b\,\widehat{p}^2}\widehat{x}\,\widehat{p}+(2\,\hbar\,b)^2\,e^{-i\,b\,\widehat{p}^2}\widehat{p}^2+2\,\hbar\,b\,e^{-i\,b\,\widehat{p}^2}\widehat{p}\,\widehat{x}\,=\,\\ \label{comm_rel_4}
&\,=\,&4\,\hbar\,b\,e^{-i\,b\,\widehat{p}^2}\widehat{p}\,\widehat{x}+(2\,\hbar\,b)^2\,e^{-i\,b\,\widehat{p}^2}\widehat{p}^2+2\,i\,\hbar^2\,b\,e^{-i\,b\,\widehat{p}^2}\,,
\end{eqnarray}
where we used (\ref{comm_rel_3}) in the second and third equality, and we used the canonical
commutation relation $\left[\widehat{x},\widehat{p}\right]=i\,\hbar$ in the last equality.
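The two commutators with the free evolution operator can likewise be checked numerically by representing $e^{-i\,b\,\widehat{p}^2}$ as a Fourier multiplier. A Python/NumPy sketch (grid size and parameter values are arbitrary choices, with $\hbar=1$):

```python
import numpy as np

hbar, b = 1.0, 0.35                     # hbar = 1 units; b an arbitrary real parameter
N, L = 1024, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
k = 2*np.pi*np.fft.fftfreq(N, x[1] - x[0])

U = lambda f: np.fft.ifft(np.exp(-1j*b*(hbar*k)**2)*np.fft.fft(f))  # exp(-i b p^2)
P = lambda f: np.fft.ifft(hbar*k*np.fft.fft(f))                     # p = -i hbar d/dx

psi = np.exp(-(x - 1.0)**2 + 0.5j*x)    # localized test function

# [x, e^{-i b p^2}] = 2 hbar b e^{-i b p^2} p
assert np.allclose(x*U(psi) - U(x*psi), 2*hbar*b*U(P(psi)), atol=1e-8)

# [x^2, e^{-i b p^2}] = 4 hbar b U p x + (2 hbar b)^2 U p^2 + 2 i hbar^2 b U
lhs = x**2*U(psi) - U(x**2*psi)
rhs = 4*hbar*b*U(P(x*psi)) + (2*hbar*b)**2*U(P(P(psi))) + 2j*hbar**2*b*U(psi)
assert np.allclose(lhs, rhs, atol=1e-8)
```

The spectral representation makes the check accurate to machine precision as long as the packet stays well inside the grid.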
\vspace{1mm}
Let us now consider the momentum operator. It obviously commutes with the translation and
time evolution operators, but it will be useful to evaluate its commutation relation
with the gauge phase term: $\left[\widehat{p},e^{i\,\theta(\widehat{x},t)}\right]$.
Writing
$$\theta(\widehat{x},t)\equiv \widehat{x} A+B\,,$$
where in our case
$$A \,=\,-\frac{m\,g\,t}{\hbar}
\,, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
B\,=\,-\frac{m\,g^2 \,t^3}{6\,\hbar}\,\,\,,
$$
we have
\begin{equation}
\label{comm_rel_5}
\left[\widehat{p},e^{i\,\theta(\widehat{x},t)}\right]\,=\,e^{i\,B}\sum_{j=0}^\infty \frac{(i\,A)^j}{j!}\,\left[\widehat{p},\widehat{x}^j\right]\,=\,\hbar\,A\,e^{i\,B}\sum_{j=0}^\infty \frac{(i\,A)^{j-1}}{(j-1)!}\,\widehat{x}^{j-1}\,=\,\hbar\,A\,e^{i\left(\widehat{x} A+B\right)}\,\equiv\,\hbar\,A\,e^{i\,\theta(\widehat{x},t)}\,,
\eeq
where we used $\left[\widehat{p},\widehat{x}^j\right]=-i\,\hbar\,j\,\widehat{x}^{j-1}$ in the second equality.
Finally we need to calculate
\begin{eqnarray}
\nonumber
\left[\widehat{p}^2,e^{i\,\theta(\widehat{x},t)}\right]&\,=\,&\widehat{p}\,\left[\widehat{p},e^{i\,\theta(\widehat{x},t)}\right]+\left[\widehat{p},e^{i\,\theta(\widehat{x},t)}\right]\,\widehat{p}\,=\,\hbar\,A\,\widehat{p}\,e^{i\,\theta(\widehat{x},t)}+\hbar\,A\,e^{i\,\theta(\widehat{x},t)}\widehat{p}\,=\,
\\ \label{comm_rel_6}
&\,=\,& \left(\hbar\,A\right)^2\,e^{i\theta(\widehat{x},t)}+2\,\hbar\,A\,e^{i\theta(\widehat{x},t)}\widehat{p}\,,
\end{eqnarray}
where we used the usual commutation relation in the first equality, and the result
(\ref{comm_rel_5}) in the second and last equality.
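The gauge-phase commutators (\ref{comm_rel_5}) and (\ref{comm_rel_6}) can be verified symbolically by letting both sides act on an arbitrary test function (a SymPy sketch; the symbols are our own):

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
m, g, hbar = sp.symbols('m g hbar', positive=True)
f = sp.Function('f')                        # arbitrary test function

A = -m*g*t/hbar
B = -m*g**2*t**3/(6*hbar)
phase = sp.exp(sp.I*(A*x + B))              # exp(i theta(x,t))
p = lambda h: -sp.I*hbar*sp.diff(h, x)      # momentum operator

# [p, e^{i theta}] f = hbar A e^{i theta} f
comm1 = p(phase*f(x)) - phase*p(f(x)) - hbar*A*phase*f(x)
assert sp.simplify(comm1) == 0

# [p^2, e^{i theta}] f = (hbar A)^2 e^{i theta} f + 2 hbar A e^{i theta} p f
comm2 = p(p(phase*f(x))) - phase*p(p(f(x))) \
        - (hbar*A)**2*phase*f(x) - 2*hbar*A*phase*p(f(x))
assert sp.simplify(comm2) == 0
```

Since $f$ is arbitrary, this establishes the operator identities themselves.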
\vspace{3mm}
\noindent
{\bf Time evolution of ${\bf \widehat x}$'s operators}.
Now we have all quantities we need to evaluate expectation values of the state $\chi(x,t)$
in (\ref{complete_sol_onebody}). Let us start with
\begin{equation}
\left \langle \chi(x,t)|\,\widehat{x}\,| \chi(x,t)\right \rangle \,=\,\left \langle \eta(\rho,0\,)\Big|\,
\exp\left[i\frac{t}{2\,m\,\hbar}\,\widehat{p}^2 \right] \,\widehat{x}\, \exp\left[- i\frac{t}{2\,m\,\hbar}\widehat{p}^2\right]\,\Big| \,\eta(\rho,0)\right \rangle\,,
\eeq
where we used the fact that $\widehat{x}$ commutes with $e^{i\,\theta(\widehat{x},t)}$.
Using (\ref{comm_rel_3}) with $b=\frac{t}{2\,m\,\hbar}$, we may rewrite the expectation value as
\begin{eqnarray}
\nonumber
\left \langle \chi(x,t)|\,\widehat{x}\,| \chi(x,t)\right \rangle &\,=\,&\frac{t}{m}\left \langle \eta(\rho,0)|\,\widehat{p}\,| \eta(\rho,0)\right \rangle +\left \langle \eta(\rho,0)|\,\widehat{x}\,| \eta(\rho,0)\right \rangle\,=\,\\
&\,=\,&\frac{t}{m}\left \langle \eta(x,0)|\,\widehat{p}\,| \eta(x,0)\right \rangle +\left \langle \eta(x,0)\Big|\,\exp\left[i\frac{\xi(t)}{\hbar}\widehat{p}\right]\,\widehat{x}\,
\exp\left[-i\frac{\xi(t)}{\hbar}\widehat{p}\right]\,\Big| \eta(x,0)\right \rangle\,,
\end{eqnarray}
where we used the definition of the translation operator (\ref{def_transl_op}) in
the last equality, since $\rho(t)=x-\xi(t)$. Next using the commutation relation
(\ref{comm_rel_1}) with $a=\frac{\xi(t)}{\hbar}$, we get
\begin{equation}
\label{x_result}
\left \langle \chi(x,t)|\,\widehat{x}\,| \chi(x,t)\right \rangle \,=\, \frac{t}{m}\,p_0+\xi(t)\left \langle \eta(x,0)| \eta(x,0)\right \rangle+\left \langle \eta(x,0)|\,\widehat{x}\,| \eta(x,0)\right \rangle\,=\,\frac{t}{m}\,p_0+\xi(t)+x_0\,,
\eeq
where we employed the normalization condition
\begin{equation}
\label{norm_condition}
\left \langle \eta(x,0)| \eta(x,0)\right \rangle\,=\,\left \langle \chi(x,0)| \chi(x,0)\right \rangle\,=\,1\,,
\eeq
and the definitions for the initial expectation values of momentum and position
in Eq. (\ref{initial_conditions_xp}). We can also evaluate the expectation value of
$\widehat{x}^2$ using the same procedure but employing Eqs. (\ref{comm_rel_2}) and (\ref{comm_rel_4}),
this time with $a=\xi(t)/\hbar$ and $b=t/(2\,m\,\hbar)$ respectively, and we find
\begin{equation}
\label{x^2_result}
\left \langle \chi(x,t)\big|\,\widehat{x}^2\,\big| \chi(x,t)\right \rangle \,=\,\xi^2(t)+2\,\xi(t)\frac{t}{m}\,p_0+2\,\xi(t)\,x_0+\left\langle x^2\right\rangle_{\rm free}(t)\,,
\eeq
where we have defined
\begin{equation}
\left\langle x^2\right\rangle_{\rm free}(t)\,\equiv\,\left \langle \eta(x,t)\big|\,\widehat{x}^2\,\big| \eta(x,t)\right \rangle\,,
\eeq
which is the expectation value of $x^2$ evaluated on the free Schr\"odinger
equation solution $\eta(x,t)$, prepared in the initial state $\eta(x,0)=\chi(x,0)$.
Finally, we can write an expression for the variance of $x$
\begin{equation}
\Delta x(t)\,=\,\sqrt{\left\langle x^2\right\rangle(t) - \left\langle x \right \rangle^2(t)}\,,
\eeq
using the results (\ref{x_result}) and (\ref{x^2_result}), from which we get
\begin{equation}
\label{variancex_result}
\Delta x(t)\,=\,\sqrt{\left\langle x^2\right\rangle_{\rm free}(t)-\left(\frac{t}{m}\,p_0+x_0\right)^2}\,=\,\sqrt{\left\langle x^2\right\rangle_{\rm free}(t)-\left\langle x\right\rangle_{\rm free}^2(t)}\,\equiv\,\Delta x_{\rm free}(t)\,,
\eeq
where we have rewritten $\left\langle x \right\rangle_{\rm free}(t) \equiv \frac{t}{m}\,p_0+x_0$,
which is indeed $\left \langle \eta(x,t)\big|\,\widehat{x}\,\big| \eta(x,t)\right \rangle$ with
$\eta(x,t)$ the solution of the free Schr\"odinger equation.
\vspace{3mm}
\noindent
{\bf Time evolution of ${\bf \widehat p}$'s operators}. The same computations
can be done for the expectation values involving the momentum. Let's start with
\begin{equation}
\nonumber
\left \langle \chi(x,t)|\,\widehat{p}\,| \chi(x,t)\right \rangle \,=\,\left \langle \eta(\rho,t)|\,e^{-i\,\theta(\widehat{x},t)}\widehat{p}\,e^{i\,\theta(\widehat{x},t)}\,| \eta(\rho,t)\right \rangle\,=\,-m\,g\,t\left \langle \eta(\rho,t)| \eta(\rho,t)\right \rangle+\left \langle \eta(\rho,t)|\,\widehat{p}\,| \eta(\rho,t)\right \rangle\,,
\eeq
where we used Eq. (\ref{comm_rel_5}), with $A=-(m\,g\,t)/\hbar$, in the second equality. Now the
computation is very simple, since the momentum operator commutes with the translation and
time evolution operators: therefore, using the normalization condition (\ref{norm_condition})
and the initial conditions (\ref{initial_conditions_xp}), we get
\begin{equation}
\label{p_result}
\left \langle \chi(x,t)|\,\widehat{p}\,| \chi(x,t)\right \rangle\,=\,-m\,g\,t+p_0\,=\,-m\,g\,t+\left\langle p\right\rangle_{\rm free}(0)\,,
\eeq
where obviously $\left\langle p\right\rangle_{\rm free}(t)\equiv\left \langle \eta(x,t)|\,\widehat{p}\,| \eta(x,t)\right \rangle=\left \langle \eta(x,0)|\,\widehat{p}\,| \eta(x,0)\right \rangle=p_0\equiv\left\langle p\right\rangle_{\rm free}(0)$. For the time evolution of the expectation value of
$\widehat{p}^2$ we have
\begin{equation}
\label{p^2_result}
\left \langle \chi(x,t)|\,\widehat{p}^2\,| \chi(x,t)\right \rangle\,=\,m^2\,g^2\,t^2-2\,m\,g\,t\,p_0+\left\langle p^2\right\rangle_{\rm free}(0)\,,
\eeq
where we use the commutation relation (\ref{comm_rel_6}) with $A=-(m\,g\,t)/\hbar$,
and we defined
$$
\left\langle p^2\right\rangle_{\rm free}(t)\equiv\left \langle \eta(x,t)|\,\widehat{p}^2\,| \eta(x,t)\right \rangle=\left \langle \eta(x,0)|\,\widehat{p}^2\,| \eta(x,0)\right \rangle\equiv\left\langle p^2\right\rangle_{\rm free}(0)\,\,\,.
$$
Finally, let's compute the variance of $\widehat{p}$ using Eqs. (\ref{p_result}) and (\ref{p^2_result}), so that
\begin{eqnarray}
\label{variancep_result1}
\Delta p(t)\,=\,\sqrt{\left\langle p^2\right\rangle(t) - \left\langle p \right \rangle^2(t)}&\,=\,&\sqrt{\left\langle p^2\right\rangle(0) - (p_0)^2}\,\equiv\,\Delta p(0)\\ \label{variancep_result2}
&\,=\,&\sqrt{\left\langle p^2\right\rangle_{\rm free}(t)-\left\langle p\right\rangle_{\rm free}^2(t)}\,\equiv\,\Delta p_{\rm free}(t)\,.
\end{eqnarray}
According to this result, the variance of the momentum remains equal to its initial
value at $t=0$, while Eq. (\ref{variancep_result2}) shows that the evolution of the variance of
$\widehat{p}$ for a falling wavepacket is exactly the same as that of a freely expanding wavepacket.
As is evident from Eq. (\ref{variancex_result}), the variance of the position of the
falling wavepacket likewise behaves as in the freely expanding (\textit{i.e.} no gravity) case.
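The expectation-value formulas above can be tested against a direct numerical integration of Eq. (\ref{schro_onebody}). The sketch below (Python/NumPy, $\hbar=m=1$ units, arbitrary parameter values, a standard split-operator scheme) evolves a Gaussian wavepacket in the linear potential and checks Eqs. (\ref{x_result}), (\ref{variancex_result}), (\ref{p_result}) and the constancy of $\Delta p$:

```python
import numpy as np

# split-operator (Strang) integration of i d_t psi = -(1/2) psi'' + g x psi,
# in hbar = m = 1 units; g, sigma, x0, k0 and the grid are arbitrary choices
hbar = m = 1.0
g, sigma, x0, k0 = 2.0, 1.0, 0.0, 1.0
tmax, nsteps = 1.0, 1000
dt = tmax/nsteps

N, L = 2048, 40.0
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, dx)

psi = (2*np.pi*sigma**2)**(-0.25)*np.exp(1j*k0*x - (x - x0)**2/(4*sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)      # enforce normalization on the grid

halfV = np.exp(-0.5j*m*g*x*dt/hbar)            # half-step of the linear potential
kin = np.exp(-0.5j*hbar*k**2*dt/m)             # full kinetic step in Fourier space
for _ in range(nsteps):
    psi = halfV*np.fft.ifft(kin*np.fft.fft(halfV*psi))

rho = np.abs(psi)**2
mean_x = np.sum(x*rho)*dx
sig_x = np.sqrt(np.sum(x**2*rho)*dx - mean_x**2)

phi2 = np.abs(np.fft.fft(psi))**2              # momentum-space density (unnormalized)
mean_p = np.sum(hbar*k*phi2)/np.sum(phi2)
sig_p = np.sqrt(np.sum((hbar*k)**2*phi2)/np.sum(phi2) - mean_p**2)

xi = -g*tmax**2/2
assert abs(mean_x - (x0 + hbar*k0*tmax/m + xi)) < 1e-3                  # <x>(t)
assert abs(sig_x - np.sqrt(sigma**2 + (hbar*tmax/(2*m*sigma))**2)) < 1e-3  # free spreading
assert abs(mean_p - (hbar*k0 - m*g*tmax)) < 1e-3                        # <p>(t)
assert abs(sig_p - hbar/(2*sigma)) < 1e-3                               # Delta p constant
```

The splitting error is $O(dt^2)$, so the tolerances above are comfortably met.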
\vspace{3mm}
\noindent
{\bf A simple check}. We can check the results obtained above on the time
evolution of an initial Gaussian wavepacket subject to a gravitational force.
Since the spreading of a wavepacket in the absence of external potentials
is treated in practically any introductory course of Quantum Mechanics,
we think it is instructive to explicitly work out the same problem in the presence of gravity,
\textit{i.e.} of a linear potential.
We prepare a Gaussian wavepacket centered in $x_0$ with variance $\sigma$, and
with initial momentum $k_0$ as initial state:
\begin{equation}
\label{initial_GWP}
\chi(x,0)\,=\,\frac{1}{\sqrt[4]{2\pi\sigma^2}}\,\exp\left[i\,k_0\,x -\frac{(x-x_0)^2}{4\,\sigma^2}\right]\,.
\eeq
In order to find the evolved state $\chi(x,t)$ we could expand the initial wavepacket
with respect to the basis of eigenfunctions of the stationary Schr\"odinger equation
\begin{equation}
\label{linear_potential_evolution}
-\frac{\hbar^2}{2m} \frac{d^2 \chi}{d x^2} + m\,g\,x\,\chi(x)\,=\,E\,\chi(x)\,.
\eeq
From (\ref{complete_sol_onebody}) it is clear that the right basis to use is
\begin{equation}
\nonumber
\chi_{\rm basis}(x,t)\,=\,\frac{1}{\sqrt{2\,\pi}}\,\exp\left[i\,\theta(x,t) - i\,t\,\frac{\hbar\,k^2}{2m}-i\,\xi(t)\,k+i\,k\,x\right]\,,
\eeq
with $k=\sqrt{2\,m\,E}/\hbar$, while $\theta(x,t)$ and $\xi(t)$ are given by
Eqs. (\ref{theta_g}) and (\ref{xi_g}) respectively. Making this expansion
according to the standard methods
of Quantum Mechanics textbooks (see for instance \cite{Griffiths}), we find
\begin{equation}
\label{GWP_timeevolved}
\chi(x,t)\,=\,\frac{1}{\sqrt[4]{2\,\pi\,\sigma^2}}\,\frac{e^{i\,\theta(x,t)}}{\sqrt{1+i\,\frac{\hbar\,t}{2\,m\,\sigma^2}}}\exp{\left\{-\frac{\left[x-x_0-\xi(t)\right]^2-4\,i\,k_0\,\sigma^2\left[x-x_0-\xi(t)\right]+2\,i\,\hbar\,t\,(k_0\,\sigma)^2/m}{4\left(\sigma^2+i\,\frac{\hbar\,t}{2\,m}\right)}\right\}}\,,
\eeq
which has to be compared with the one obtained from the free evolution expansion
of the initial wavepacket (\ref{initial_GWP}), under the same Schr\"odinger equation
but with $g=0$, \textit{i.e.} with
\begin{equation}
\chi(x,t)_{\rm free}\,=\,\frac{1}{\sqrt[4]{2\,\pi\,\sigma^2}}\,\frac{1}{\sqrt{1+i\,\frac{\hbar\,t}{2\,m\,\sigma^2}}}\exp{\left\{-\frac{\left(x-x_0\right)^2-4\,i\,k_0\,\sigma^2\left(x-x_0\right)+2\,i\,\hbar\,t\,(k_0\,\sigma)^2/m}{4\left(\sigma^2+i\,\frac{\hbar\,t}{2\,m}\right)}\right\}}\,.
\eeq
It is evident that the spreading of these wavefunctions as a function of time is the same and
coincides with the value expected from Eq. (\ref{variancex_result})
\begin{equation}
\label{variance_gaussian}
\Delta x(t)\,=\,\sqrt{\sigma^2+\frac{\hbar^2\,t^2}{4\,m^2\,\sigma^2}}\,,
\eeq
while the motion of their centers of mass coincides with the expectation value given in
Eq. (\ref{x_result}). Notice that the motion of the center of the wavepacket in
(\ref{GWP_timeevolved}) is the same as the motion of a one--dimensional
classical particle subject to the gravitational force, \textit{i.e.} it is
a uniformly accelerated motion, in agreement with the Ehrenfest theorem.
The analysis can be performed also for the momentum variables, and one easily finds the
results reported in equations (\ref{p_result}), (\ref{variancep_result1})
and (\ref{variancep_result2}).
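One can also evaluate the density $|\chi(x,t)|^2$ obtained from the closed-form wavepacket (\ref{GWP_timeevolved}) directly on a grid and compare its norm, center and width with Eqs. (\ref{x_result}) and (\ref{variance_gaussian}). A Python/NumPy sketch ($\hbar=m=1$ units, arbitrary parameter values; note the $-4\,i\,k_0\,\sigma^2$ sign convention, which yields forward motion for positive $k_0$):

```python
import numpy as np

# evaluate the time-evolved Gaussian density on a grid (hbar = m = 1 units;
# g, sigma, x0, k0 and t are arbitrary choices)
hbar = m = 1.0
g, sigma, x0, k0, t = 2.0, 1.0, 0.0, 1.5, 0.8

x = np.linspace(-30.0, 30.0, 20001)
dx = x[1] - x[0]
xi = -g*t**2/2
X = x - x0 - xi
alpha = sigma**2 + 0.5j*hbar*t/m

theta = -m*g*t/hbar*x - m*g**2*t**3/(6*hbar)    # gauge phase (does not affect |chi|^2)
chi = (2*np.pi*sigma**2)**(-0.25)/np.sqrt(1 + 0.5j*hbar*t/(m*sigma**2)) \
    * np.exp(1j*theta - (X**2 - 4j*k0*sigma**2*X + 2j*hbar*t*(k0*sigma)**2/m)/(4*alpha))

rho = np.abs(chi)**2
norm = rho.sum()*dx
mean_x = (x*rho).sum()*dx/norm
width = np.sqrt((x**2*rho).sum()*dx/norm - mean_x**2)

assert abs(norm - 1) < 1e-6                                   # unitarity preserved
assert abs(mean_x - (x0 + hbar*k0*t/m + xi)) < 1e-6           # accelerated center
assert abs(width - np.sqrt(sigma**2 + (hbar*t/(2*m*sigma))**2)) < 1e-6  # free spreading
```

The center falls like a classical particle while the width is exactly the free-expansion one, as found above.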
\vspace{3mm}
\noindent
{\bf Three--dimensional case}.
It is simple to extend the analysis presented above to the case of a single
particle falling along the $x$--direction (once the axes are suitably chosen)
in a three--dimensional space. In this case the Schr\"odinger equation reads
\begin{equation}
\label{3d_schro_1body}
i \hbar \frac{\p}{\p t}\chi(\vec{r},t)\,=\,i \hbar \frac{\p}{\p t}\chi(x,y,z,t)\,=\,\left(-\frac{\hbar^2}{2m} \vec{\nabla}_x^2 +m\, g\,x\right)\chi(\vec{r},t)\,,
\eeq
where the vector position $\vec{r}$ is expressed in Cartesian coordinates in the second equality,
and we have denoted with
\begin{equation}
\nonumber
\vec{\nabla}^2_x\,\equiv\,\frac{\p^2}{\p x^2}+\frac{\p^2}{\p y^2}+\frac{\p^2}{\p z^2}
\eeq
the Laplacian. Proceeding in the same way as for the $1D$ case,
we perform a gauge transformation on the wavefunction
\begin{equation}
\label{gauge_trasf_3d_onebody}
\chi(\vec{r},t)\,=\,e^{i\,\theta(x,t)}\,\eta(\rho(t),y,z,t)\,,
\eeq
where $\rho(t)=x-\xi(t)$, and the gauge phase $\theta(x,t)$ and the translational parameter
$\xi(t)$ satisfy Eqs. (\ref{conditions_integrab}). Within these conditions,
the Schr\"odinger equation (\ref{3d_schro_1body}) is reduced to the free Schr\"odinger
equation for $\eta(\rho,y,z,t)$:
\begin{equation}
i \hbar \frac{\p}{\p t}\eta(\rho,y,z,t)\,=\,-\frac{\hbar^2}{2m} \vec{\nabla}_\rho^2\,\eta(\rho,y,z,t)\,,
\eeq
where we define
\begin{equation}
\nonumber
\vec{\nabla}^2_\rho\,\equiv\,\frac{\p^2}{\p \rho^2}+\frac{\p^2}{\p y^2}+\frac{\p^2}{\p z^2}\,.
\eeq
Analogously to (\ref{complete_sol_onebody}), choosing $\theta(x,t)$ to be (\ref{theta_g})
and $\xi(t)$ given by Eq. (\ref{xi_g}), we can rewrite (\ref{gauge_trasf_3d_onebody})
with respect to the evolution and translation operators as
\begin{equation}
\label{complete_sol_3d_onebody}
\chi(\vec{r},t)\,=\,\exp\left\{i\theta(x,t) -i\frac{t}{\hbar}\frac{\widehat{\vec{p}}\,^2}{2m} -i \frac{\xi(t)}{\hbar} \widehat{p}_x\right\}\,\,\chi(\vec{r},0)\,,
\eeq
where we have defined
\begin{equation}
\nonumber
\widehat{\vec{p}}\,^2\,=\,\widehat{p}_x^2+\widehat{p}_y^2+\widehat{p}_z^2\,,
\eeq
with the momentum operators acting on different Cartesian coordinates as
$\widehat{p}_\alpha\rightarrow -i\,\hbar\,\frac{\p}{\p \alpha}$. We are now able to study how the
expectation values of different physical quantities evolve. First we redefine the expectation
values of the position operator and its powers as
\begin{equation}
\left\langle \alpha^{\mathcal{N}}\right\rangle(t)\,\equiv\,\left \langle \chi(\vec{r},t)|\,\widehat{\alpha}^{\mathcal{N}}\,| \chi(\vec{r},t)\right \rangle \,=\,\int_{-\infty}^{\infty}\,dx\,\int_{-\infty}^{\infty}\,dy\,\int_{-\infty}^{\infty}\,dz\, \left|\chi(\vec{r},t)\right|^2 \,\alpha^{\mathcal{N}}\,,
\eeq
where $\alpha$ can be the $x$, $y$ or $z$ coordinate, while for the powers of the momentum
\begin{equation}
\left\langle p_\alpha^{\mathcal{N}}\right\rangle(t)\,\equiv\,\left \langle \chi(\vec{r},t)| \,\widehat{p}_\alpha^{\mathcal{N}}\, | \chi(\vec{r},t)\right \rangle \,=\,(-i\hbar)^{\mathcal{N}} \int_{-\infty}^{\infty}\,dx\,\int_{-\infty}^{\infty}\,dy\,\int_{-\infty}^{\infty}\,dz\,\chi^*(\vec{r},t) \,\frac{\p^{\mathcal{N}}}{\p \alpha^{\mathcal{N}}} \chi(\vec{r},t)\,,
\eeq
where $\alpha$ labels the $x$, $y$ or $z$ component.
From Eq. (\ref{complete_sol_3d_onebody}) it is straightforward to
compute the expectation values of the different coordinates. Since operators
acting on different spaces commute (like $\widehat{x}$ and $\widehat{p}_y$, or $\widehat{p}_y$ and
$\widehat{p}_z$, and so on), the motion along the $y$ and $z$ directions is trivially found
to be the free one ($g=0$), while for the $x$ component one relies on the
results presented previously for the one--dimensional case.
Notice that by writing the vector position in three dimensions as
\begin{equation}
\vec{r}\,=\,x\cdot\vec{e}_x+y\cdot\vec{e}_y+z\cdot\vec{e}_z\,,
\eeq
where $\vec{e}_x$, $\vec{e}_y$ and $\vec{e}_z$ are the usual unit vectors in Cartesian
coordinate system, \textit{i.e.}
\begin{equation}
\label{unit_vectors}
\vec{e}_x\,=\,\begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix}\,,\,\,\,\,\,\vec{e}_y\,=\,\begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}\,,\,\,\,\,\,\vec{e}_z\,=\,\begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}\,,
\eeq
then the expectation value of the vector position $\vec{r}$ can be written in terms of
the expectation values of its single components
\begin{equation}
\label{r_mean}
\left \langle \vec{r}\right \rangle(t)\,=\, \vec{e}_x\cdot\left \langle x\right \rangle(t)+\vec{e}_y \cdot\left \langle y\right \rangle(t)+\vec{e}_z \cdot\left \langle z\right \rangle(t)\,,
\eeq
and a similar decomposition holds for the squared vector position $\vec{r}\,^2 =\vec{r}\cdot\vec{r}$, which reads
\begin{equation}
\label{r^2_mean}
\left \langle \vec{r}\,^2\right \rangle(t)\,=\, \left \langle x^2\right \rangle(t)+\left \langle y^2\right \rangle(t)+\left \langle z^2\right \rangle(t)\,.
\eeq
The same decomposition in terms of single--component
expectation values obviously holds also for the momentum operators.
So, in summary, the motion of a wavepacket in three dimensions under
the action of gravity is described by a spreading which is completely analogous
to that of a free (no gravity) expansion in all directions, while its center of
mass moves along the direction of the gravitational force as a classical particle would do.
\section{Free fall of two interacting particles}
We now study a three--dimensional system made of two interacting particles subject to gravity.
The Schr\"odinger equation reads
\begin{equation}
\label{schro_twobodies}
i\hbar \frac{\p}{\p t} \chi(\vec{r}_1,\vec{r}_2,t)\,=\,\left[-\frac{\hbar^2}{2m}\left(\vec{\nabla}_{r_1}^2+\vec{\nabla}_{r_2}^2\right) +V(\left|\vec{r}_2- \vec{r}_1\right|)+m\,g \left(x_1+x_2\right)
\right]\chi(\vec{r}_1,\vec{r}_2,t)\,,
\eeq
where
\begin{equation}
\label{nabla_definition}
\vec{\nabla}_{r_j}^2\,\equiv\,\frac{\p^2}{\p x_j^2}+\frac{\p^2}{\p y_j^2}+\frac{\p^2}{\p z_j^2}\,,
\eeq
for $j=1,2$, and $V(\left|\vec{r}_2-\vec{r}_1\right|)$ describes the interaction between the particles and depends only on the distance between them
\begin{equation}
\label{distance_definition}
\left| \vec{r}_2- \vec{r}_1\right|\,=\,\sqrt{\left(x_2-x_1\right)^2+\left(y_2-y_1\right)^2+\left(z_2-z_1\right)^2}\,.
\eeq
In order to solve the Schr\"odinger equation, we employ the same method outlined in
the previous Section: We perform a gauge transformation on the wavefunction
\begin{equation}
\label{sol_twobodies}
\chi(\vec{r}_1,\vec{r}_2,t)\,=\,e^{i\left[\theta(x_1,t)+\theta(x_2,t)\right]}\,
\eta(\bm{\varrho}_1(t),\bm{\varrho}_2(t),t)\,,
\eeq
where $\bm{\varrho}_j(t)=\left(\rho_j\,,\,y_j\,,\,z_j\right)$,
with $\rho_j (t)= x_j-\xi(t)$, while $\theta(x_j,t)$ and $\xi(t)$ obey
Eqs. (\ref{conditions_integrab}) for $x=x_j$ and with $j=1,2$. Notice that, because
the interaction potential depends only on the distance between the particles and both
$x_1$ and $x_2$ are shifted by the same $\xi(t)$, it retains the same form
in the new spatial variables $\rho_j(t)$ and $\bm{\varrho}_j(t)$.
Using the ansatz (\ref{ansatz_theta}) and by choosing $\xi(0)=d\xi(0)/dt=0$ and $\Gamma(0)=0$
we have that $\theta(x_j,t)$ is given by (\ref{theta_g}),
while $\xi(t)$ is given by (\ref{xi_g}). Under these conditions,
$\eta(\bm{\varrho}_1,\bm{\varrho}_2,t)$ will satisfy the free Schr\"odinger equation for two
interacting particles
\begin{equation}
\label{twobodies_schro_eta}
i\hbar \frac{\p}{\p t} \eta(\bm{\varrho}_1,\bm{\varrho}_2,t)\,=\,\left[-\frac{\hbar^2}{2m}\left(\vec{\nabla}_{\varrho_1}^2+\vec{\nabla}_{\varrho_2}^2\right) +V(\left|\bm{\varrho}_2-\bm{\varrho}_1\right|) \right]\eta(\bm{\varrho}_1,\bm{\varrho}_2,t)\,,
\eeq
with
\begin{equation}
\label{nabla_many_def}
\vec{\nabla}_{\varrho_j}^2\,\equiv\,\frac{\p^2}{\p \rho_j^2}+\frac{\p^2}{\p y_j^2}+\frac{\p^2}{\p z_j^2}\,,
\eeq
for $j=1,2$.
Therefore if one knows how to solve Eq. (\ref{twobodies_schro_eta}),
then the complete solution of (\ref{schro_twobodies}) reads
\begin{equation}
\label{complete_sol_twobodies}
\chi(\vec{r}_1,\vec{r}_2,t)\,=\,\exp\left[-i\frac{m\,g\,t}{\hbar}\left(\frac{g\,t^2}{3}+x_1+x_2\right)\right]\,\eta\left(x_1+\frac{g\,t^2}{2},y_1,z_1 ; x_2+\frac{g\,t^2}{2}, y_2,z_2;t\right)\,,
\eeq
which expresses the solution of the original Schr\"odinger equation with the gravitational
force term in terms of the free ($g=0$) solution of (\ref{twobodies_schro_eta}).
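As an explicit example, one can take a harmonic pair potential, for which the free interacting equation (\ref{twobodies_schro_eta}) has a simple exact solution (center-of-mass plane wave times the relative-coordinate Gaussian ground state), and verify symbolically, in one dimension for brevity, that the gauge-transformed wavefunction solves the two-body equation with gravity. A SymPy sketch (the harmonic choice and all symbols are our own illustration, not the general case):

```python
import sympy as sp

x1, x2, t, K = sp.symbols('x1 x2 t K', real=True)
m, g, hbar, w = sp.symbols('m g hbar omega', positive=True)

theta = lambda x: -m*g*t/hbar*x - m*g**2*t**3/(6*hbar)   # gauge phase, Eq. (theta_g)
xi = -g*t**2/2                                           # Eq. (xi_g)
r1, r2 = x1 - xi, x2 - xi                                # shifted coordinates rho_j

# exact solution of the free interacting equation for V = (1/2) m w^2 (x2-x1)^2:
# center-of-mass plane wave times relative-coordinate Gaussian ground state
mu, Om = m/2, sp.sqrt(2)*w                   # reduced mass, relative frequency
E = hbar**2*K**2/m + hbar*Om/2
eta = sp.exp(sp.I*K*(r1 + r2) - mu*Om*(r2 - r1)**2/(2*hbar) - sp.I*E*t/hbar)

chi = sp.exp(sp.I*(theta(x1) + theta(x2)))*eta
V = m*w**2*(x2 - x1)**2/2                    # harmonic pair potential (example)

# residual of the two-body Schrodinger equation with gravity along x
res = sp.I*hbar*sp.diff(chi, t) \
      + hbar**2/(2*m)*(sp.diff(chi, x1, 2) + sp.diff(chi, x2, 2)) \
      - (V + m*g*(x1 + x2))*chi
assert sp.simplify(sp.expand(res)) == 0
```

Note that $r_2-r_1=x_2-x_1$, so the relative coordinate (and hence the interaction) is untouched by the shift, which is exactly why the reduction works.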
We can now ask the same questions as before: If we start from a generic wavepacket
$\chi(\vec{r}_1,\vec{r}_2,0)$ and we let it evolve under the action of gravity, how do its variances and expectation values of powers of position behave? Let's define as usual
\begin{equation}
\label{expect_value_pos_def}
\left\langle \alpha_j^{\mathcal{N}}\right\rangle(t)\,\equiv\,\left \langle \chi(\vec{r}_1,\vec{r}_2,t)|\,\widehat{\alpha}_j^{\mathcal{N}}\,| \chi(\vec{r}_1,\vec{r}_2,t)\right \rangle \,=\,\int
dr_1\,\int dr_2 \,\left|\chi(\vec{r}_1,\vec{r}_2,t)\right|^2 \,\alpha_j^{\mathcal{N}}
\eeq
where $\alpha$ can be either $x$, $y$ or $z$, while $j=1,2$ labels the particles.
For the expectation value of powers of the momenta $\widehat{p}_{\alpha_j}$ for $j=1,2$
we have
\begin{equation}
\label{expect_value_mom_def}
\left\langle p_{\alpha_j}^{\mathcal{N}}\right\rangle(t)\,\equiv\,\left \langle \chi(\vec{r}_1,\vec{r}_2,t)|\,\widehat{p}_{\alpha_j}^{\mathcal{N}}\,| \chi(\vec{r}_1,\vec{r}_2,t)\right \rangle
\,=\,(-i\hbar)^{\mathcal{N}} \int\,dr_1\,\int\,dr_2\, \chi^*(\vec{r}_1,\vec{r}_2,t) \,\frac{\p^{\mathcal{N}}}{\p \alpha_j^{\mathcal{N}}} \chi(\vec{r}_1,\vec{r}_2,t)
\eeq
with $\int dr_j =\int_{-\infty}^\infty dx_j\,\int_{-\infty}^\infty dy_j\,\int_{-\infty}^\infty dz_j$. For
the initial conditions we take ($j=1,2$)
\begin{equation}
\left \langle \alpha_j\right \rangle (0)\,=\,\alpha_0^{(j)}\,, \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\left \langle p_{\alpha_j}\right \rangle (0) \,=\,p_{\alpha 0}^{(j)}\,.
\eeq
It is actually very simple to prove that the results obtained for the position variables in
the one--particle case still hold: the variances of the positions of the particles
behave as in the freely expanding case, while the expectation values of powers of the
$x$ components of the positions have the same expressions as in the one--body case, see
Eqs. (\ref{x_result}) and (\ref{x^2_result}), with an additional index $j=1,2$ to
label the particles. For the $y$ and $z$ components one instead
obtains the corresponding formulas with $g=0$, since
the gravitational potential acts only along the $x$ direction.
The simplicity of this result comes from the fact that the commutators
among operators acting on different particles vanish,
therefore\footnote{We report for convenience only the commutators on the $x$ components,
but the same commutator rules will be valid also for $y$ and $z$.}
\begin{gather}
\nonumber
\left[\widehat{x}_j,e^{-i\,a\,\widehat{p}_{x_k}}\right]\,=\,\hbar\,a\,e^{-i\,a\,\widehat{p}_{x_k}}\,\delta_{j,k}\,,\\ \nonumber
\left[\widehat{x}_j^2,e^{-i\,a\,\widehat{p}_{x_k}}\right]\,=\,e^{-i\,a\,\widehat{p}_{x_k}}\,\left(\hbar^2\,a^2+2\,\hbar\,a\,\widehat{x}_j\right)\,\delta_{j,k}\,,\\ \nonumber
\left[\widehat{x}_j,e^{-i\,b\,\widehat{p}_{x_k}^2}\right]\,=\,2\,\hbar\,b\,e^{-i\,b\,\widehat{p}_{x_k}^2}\,\widehat{p}_{x_k}\,\delta_{j,k}\,,\\ \nonumber
\left[\widehat{x}_j^2,e^{-i\,b\,\widehat{p}_{x_k}^2}\right]\,=\,\left[4\,\hbar\,b\,e^{-i\,b\,\widehat{p}_{x_k}^2}\widehat{p}_{x_k}\,\widehat{x}_j+(2\,\hbar\,b)^2\,e^{-i\,b\,\widehat{p}_{x_k}^2}\widehat{p}_{x_k}^2+2\,i\,\hbar^2\,b\,e^{-i\,b\,\widehat{p}_{x_k}^2}\right]\delta_{j,k}\,,
\end{gather}
where $\delta_{j,k}$ is the Kronecker delta and we have rewritten
$\theta(\widehat{x}_j,t) = \widehat{x}_j A+B$.
We can rewrite (\ref{sol_twobodies}) as
\begin{eqnarray}
\chi(\vec{r}_1,\vec{r}_2,t)&\,=\,&
\exp\left\{ i[\theta(\widehat{x}_1,t)+\theta(\widehat{x}_2,t)] - i\frac{t}{\hbar}\left[\frac{\widehat{p}_1^2+\widehat{p}_2^2}{2m} +
V\left(\left|\widehat{\varrho}_2-\widehat{\varrho}_1\right|\right)\right]\right\}\,\eta(\varrho_1,\varrho_2,0)\,=\, \\
&\,=\,&\exp\left\{ i [\theta(\widehat{x}_1,t)+\theta(\widehat{x}_2,t)] - i\frac{t}{\hbar}\widehat{H}_0 - i \frac{\xi(t)}{\hbar} (\widehat{p}_{x_1}+\widehat{p}_{x_2})\right\}\,\chi(\vec{r}_1,\vec{r}_2,0)\,,\nonumber
\end{eqnarray}
where we have defined
$$
\widehat{H}_0\equiv \frac{\widehat{p}_1^2 +\widehat{p}_2^2}{2m} + V\left(\left|\widehat{\varrho}_2-\widehat{\varrho}_1\right|\right)\,\,\,.
$$
One can repeat the exact same steps performed in the previous Section to obtain
the expressions for the expectation values of different physical quantities. We summarize
the results below
\begin{gather}
\label{1_result}
\left \langle x_j\right\rangle (t)\,=\,\frac{t}{m}\,p_{x0}^{(j)}+\xi(t)+x_0^{(j)}\,,\\
\label{2_result}
\left \langle x_j^2\right\rangle (t)\,=\,\xi^2(t)+2\,\xi(t)\,\frac{t}{m}\,p_{x0}^{(j)}+2\,\xi(t)\,x_0^{(j)}+\left\langle x_j^2\right\rangle_{\rm free}(t)\,,\\
\label{5_result}
\Delta x_j(t)\,=\,\sqrt{\left\langle x_j^2\right\rangle(t)-\left\langle x_j\right\rangle^2(t)}\,=\,\left(\Delta x_j\right)_{\rm free}(t)\,,
\end{gather}
where as usual we label with the subscript ``free'' the expectation values
evaluated on the wavefunction $\eta(\vec{r}_1,\vec{r}_2,t)$ of the freely expanding
problem\footnote{Notice however that in order to evaluate them, one needs to know how to solve the Schr\"odinger equation (\ref{twobodies_schro_eta}) for that specific interacting potential.}.
The same expressions, but with $g=0$, are valid for the expectation values on the $y$ and $z$ components.
We therefore conclude that our gauge transformation can also be used to reduce the initial
Schr\"odinger equation describing the dynamics of two falling interacting particles to the
simpler Schr\"odinger equation where no gravitational potential is present, with the
same interaction potential between the particles. The fundamental requirement is that the
two--body potential depends only on the relative distance between the particles, which is
after all a typical feature of many--body systems. When this condition
holds, it is straightforward to generalize
the results presented so far to a quantum many--body system subject to a gravitational force.
\section{Free fall of a quantum many--body system}
In the literature the problem of describing the motion of a ``structured'', many--body
quantum system under
the action of a gravitational potential has been addressed for various situations ranging
from Bose--Einstein condensates \cite{ChenLiu1976, Wadati2001, Ablowitz2004}
to one--dimensional integrable systems \cite{Sen1988, JukicGalic2010}. Nevertheless,
the case of a general three--dimensional many--body system subject to gravity
can be explicitly addressed using the method described in the previous Sections,
as by now it should be clear how to approach it.
Let's then focus our attention on the Schr\"odinger equation of $N$ (for simplicity
spinless) interacting particles subject to gravity along the $x$ direction
\begin{equation}
\label{schro_Nbodies}
i\hbar \frac{\p}{\p t} \chi(\vec{r}_1,\dots,\vec{r}_N,t)\,=\,\left[-\frac{\hbar^2}{2m}\sum_{j=1}^N \vec{\nabla}_{r_j}^2 +\sum_{j<k} V(\left|r_k - r_j\right|)+m\,g \sum_{j=1}^N x_j \right]\,
\chi(\vec{r}_1,\dots,\vec{r}_N,t)\,,
\eeq
where the interacting potential depends on the relative distances among particles
(\ref{distance_definition}), and the kinetic part is written in terms of (\ref{nabla_definition}).
To solve the Schr\"odinger equation we perform, as usual, a gauge transformation on the wavefunction
\begin{equation}
\label{sol_Nbodies}
\chi(\vec{r}_1,\dots,\vec{r}_N,t)\,=\,\prod_{j=1}^N e^{i \theta(x_j,t)}\,\eta(\bm{\varrho}_1(t),\dots,\bm{\varrho}_N(t),t)\,,
\eeq
which is a trivial generalization to the $N$ particle case of Eq. (\ref{sol_twobodies}).
If the gauge phase $\theta(x_j,t)$ and the translational parameter $\xi(t)$ satisfy
(\ref{conditions_integrab}) with $x=x_j$, then $\eta(\bm{\varrho}_1(t),\dots,\bm{\varrho}_N(t),t)$
is the solution of the free Schr\"odinger equation
\begin{equation}
\label{Nbodies_schro_eta}
i\hbar \frac{\p}{\p t} \eta(\bm{\varrho}_1,\dots,\bm{\varrho}_N,t)\,=\,\left[-\frac{\hbar^2}{2m} \sum_{j=1}^N \vec{\nabla}_{\varrho_j}^2 +\sum_{j<k} V(\left|\varrho_k-\varrho_j\right|) \right]\eta(\bm{\varrho}_1,\dots,\bm{\varrho}_N,t)\,,
\eeq
where the kinetic part is expressed in terms of (\ref{nabla_many_def}) for every $j$.
Using Eq. (\ref{sol_Nbodies}) one can easily prove that all results presented
in the previous Sections,
in particular those reported in Eqs. (\ref{1_result}) -- (\ref{5_result}) and the
same expressions for $y$ and $z$ coordinates but with $g=0$, also hold for the many--body system.
Finally, using Eqs. (\ref{r_mean}) and (\ref{r^2_mean}) one can derive the laws describing
the time evolution of expectation values of the position of the wavepacket.
In particular, given that the system conserves the $x$, $y$ and $z$ components of the
total momentum, \textit{i.e.} $\left[\widehat{H}_0, \sum_{j=1}^N \widehat{p}_{\alpha_j} \right]=0$
for $\alpha=x$, $y$ and $z$, one can explicitly work out the total momentum and
the energy expectation values of the system in terms of the free case ($g=0$).
Using the commutation relations found previously, one gets that
\begin{equation}
\label{total_momentum}
\left\langle \vec{P}\right\rangle(t)\,=\,\left\langle \chi(\vec{r}_1,\dots,\vec{r}_N,t) \Big| \widehat{\vec{P}} \Big|\chi(\vec{r}_1,\dots,\vec{r}_N,t) \right\rangle\,=\,\vec{P}_0 -\vec{e}_x\,N\, m\,g\,t\,,
\eeq
where
\begin{equation}
\widehat{\vec{P}}\,=\,\sum_{\alpha=x,y,z}\sum_{j=1}^N \vec{e}_\alpha \, \widehat{p}_{\alpha_j}
\eeq
represents the total momentum of the system, written in terms of the unit
vectors defined in Eq. (\ref{unit_vectors}), while $\vec{P}_0$ is the initial ($t=0$) total momentum:
\begin{equation}
\vec{P}_0\,=\,\sum_{\alpha=x,y,z}\sum_{j=1}^N \vec{e}_\alpha \, p_{\alpha 0}^{(j)}\,.
\eeq
For the total energy of the system instead, one can compute
\begin{equation}
E(t)\,=\,\left\langle \widehat{H}\right\rangle(t)\,=\,\left\langle \chi(\vec{r}_1,\dots,\vec{r}_N,t) \Bigg| \frac{1}{2m}\sum_{j=1}^N \widehat{\vec{p}}_j^{\,2} +\sum_{j<k}\widehat{V}(|r_k - r_j|) +m\,g\sum_{j=1}^N \widehat{x}_j \Bigg|\chi(\vec{r}_1,\dots,\vec{r}_N,t)\right\rangle\,,
\eeq
and using the above results, after an elementary but lengthy calculation, one obtains that the energy is conserved during the motion: $E(t)=E(0)$, $\forall t>0$, as one would expect.
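Both results, the linear drift of the total momentum and the conservation of the energy, can be checked numerically for a single particle in one dimension (the $N$-body expectation values factorize along $x$). The following sketch uses a standard split-step Fourier integrator; the units $\hbar=m=g=1$ and all grid parameters are illustrative choices for the example, not prescribed by the text:

```python
import numpy as np

# split-step Fourier evolution of a Gaussian packet in V(x) = m g x  (hbar = m = 1)
g, dt, steps = 1.0, 1e-3, 1000                   # illustrative parameters, t_final = 1
x = np.linspace(-20, 20, 1024, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
psi = (np.exp(-x**2 / 2) / np.pi**0.25).astype(complex)   # p0 = 0, <x>_0 = 0

def p_mean(psi):
    pk = np.abs(np.fft.fft(psi))**2
    return float((k * pk).sum() / pk.sum())

def energy(psi):
    pk = np.abs(np.fft.fft(psi))**2
    kin = float((k**2 / 2 * pk).sum() / pk.sum())
    pot = float((g * x * np.abs(psi)**2).sum() / (np.abs(psi)**2).sum())
    return kin + pot

E0 = energy(psi)
for _ in range(steps):                           # Strang (kick-drift-kick) splitting
    psi *= np.exp(-1j * g * x * dt / 2)
    psi = np.fft.ifft(np.exp(-1j * k**2 / 2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-1j * g * x * dt / 2)

p_final, E_final = p_mean(psi), energy(psi)      # expect <p> = -g t = -1, E conserved
```

At $t=1$ the sketch reproduces $\langle p\rangle = p_0 - g\,t = -1$ and a constant energy, consistent with the expressions above.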
Thanks to the simple rewriting of the many--body wavefunction in Eq. (\ref{sol_Nbodies}), we are also able to write the one--body density matrix of the falling system in terms of
that of the \textit{free}, non-falling system. The one--body density matrix is defined as \cite{Pitaevskii16}:
\begin{equation}
\label{obdm_time_def}
\rho(\vec{r},\vec{r}',t)\,=\,N\,\int d\vec{r}_2\dots d\vec{r}_N\,\chi^*(\vec{r},\vec{r}_2,\dots,\vec{r}_N,t) \,\chi(\vec{r}',\vec{r}_2,\dots,\vec{r}_N,t)\,.
\eeq
Therefore using Eq. (\ref{sol_Nbodies}) we can rewrite the density matrix as:
\begin{equation}
\rho(\vec{r},\vec{r'},t)\,=\,N\,e^{i\left[\theta(x',t)-\theta(x,t)\right]}\,\int d\bm{\varrho}_2\dots d\bm{\varrho}_N\,\eta^*(\bm{\varrho},\bm{\varrho}_2,\dots,\bm{\varrho}_N,t) \,\eta(\bm{\varrho}',\bm{\varrho}_2,\dots,\bm{\varrho}_N,t)\,,
\eeq
since $d\vec{r}_j = d\bm{\varrho}_j$ for every $j$, while
$\bm{\varrho}(t) = \vec{r} - \xi(t)$, $\bm{\varrho}'(t) = \vec{r}' - \xi(t)$,
and with $x$ and $x'$ being the $x$-components of $\vec{r}$ and $\vec{r}'$ respectively. So finally:
\begin{equation}
\label{obdm_time_eta}
\rho(\vec{r},\vec{r'},t)\,=\,e^{i\left[\theta(x',t)-\theta(x,t)\right]}\,\rho_{\rm free}(\bm{\varrho},\bm{\varrho}',t)\,,
\eeq
where $\rho_{\rm free}(\bm{\varrho},\bm{\varrho}',t)$ is defined in terms of the wavefunction
$\eta$ solution of the Schr\"odinger equation without gravitational field.
For a translationally invariant system, the above equation may be further simplified
by writing everything in terms of the relative coordinate $\vec{R} \equiv \vec{r}-\vec{r}'$.
In this case, since it is also true that $\vec{R}= \bm{\varrho}-\bm{\varrho}'$,
then Eq. (\ref{obdm_time_eta}) may be rewritten as:
\begin{equation}
\label{obdm_transl_inv}
\rho(\vec{R},t)\,=\,e^{i \,m\,g\,t \,X/\hbar}\,\rho_{\rm free}(\vec{R},t)\,,
\eeq
where Eq. (\ref{theta_g}) has been used and $X$ is the $x$-component of the
$\vec{R}$ vector position.
We may further analyse the eigenvalues of the one--body density matrix for a
translationally invariant system. In the static case, the one--body density
matrix satisfies the eigenvalue equation \cite{Pitaevskii16}
\begin{equation}
\label{eigeneq_obdm}
\int \rho(\vec{r},\vec{r}')\,\phi_i(\vec{r}')\,d\vec{r}'\,=\,\lambda_i \,\phi_i(\vec{r})\,,
\eeq
where $\lambda_i$ is the occupation number of the $i$--th natural
orbital eigenvector $\phi_i(\vec{r})$. The $\lambda_i$ are such that
$\sum_i \lambda_i = N$. When Galilean invariance is not broken,
the quantum number labeling the occupation of the natural orbitals is the
wavevector $\vec{k}$, and for a homogeneous system the effective single particle states
are simply plane waves, \textit{i.e.}
$\varphi_{\vec{k}}(\vec{r})=\frac{1}{\sqrt{L^3}}\, e^{i \,\vec{k}\cdot\vec{r}-i\,t\,\hbar k^2/(2m)}$,
with $L$ denoting the size of the system, where we have included the
free time evolution of the state. Therefore we may write Eq. (\ref{eigeneq_obdm}) for
a falling translationally invariant many--body system as:
\begin{equation}
\lambda_\vec{k}(t) \,=\,\int \rho(\vec{R},t)\,e^{i \vec{k} \cdot\vec{R}}\,d\vec{R}\,.
\eeq
Now, thanks to Eq. (\ref{obdm_transl_inv}), we can write the following relation
between the natural orbitals occupation numbers of the falling system with those of the
free non-falling one:
\begin{equation}
\lambda_\vec{k}(t)\,=\,\lambda_{\vec{\widetilde{k}}}^{\rm free}(t)\,,
\eeq
where
$\vec{\widetilde{k}}=\left(k_x+m\,g\,t/\hbar\right) \vec{e}_x+k_y\, \vec{e}_y+k_z\, \vec{e}_z$, and we have defined the occupation numbers of the system without gravity ($g=0$) as:
\begin{equation}
\lambda_\vec{k}^{\rm free}(t) \,=\,\int \rho_{\rm free}(\vec{R},t)\,e^{i \vec{k} \cdot\vec{R}}\,d\vec{R}\,.
\eeq
From the above relations, one may observe that the occupation numbers of the falling
system differ from those of the "free" case, with no gravitational potential, only by a
time--dependent shift of the $x$-component of the momentum wavevector.
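The relation between the two sets of occupation numbers is just the Fourier shift theorem applied to Eq. (\ref{obdm_transl_inv}). A quick one-dimensional numerical check, with an illustrative value $m\,g\,t/\hbar = 2$ and a hypothetical Gaussian $\rho_{\rm free}$:

```python
import numpy as np

# lambda_k(t) is the Fourier transform of rho(R,t); multiplying rho_free by
# exp(i a X) shifts the transform: lambda_k = lambda^free_{k+a}, with a = m g t / hbar
a = 2.0                                          # illustrative m*g*t/hbar
X = np.linspace(-30, 30, 4096, endpoint=False)
dX = X[1] - X[0]
rho_free = np.exp(-X**2 / 2)                     # hypothetical free density matrix

def occupation(rho, kgrid):
    # lambda_k = int rho(X) e^{i k X} dX, evaluated on a grid of k values
    return np.array([np.sum(rho * np.exp(1j * kk * X)) * dX for kk in kgrid])

kgrid = np.linspace(-3, 3, 13)
lam_fall = occupation(np.exp(1j * a * X) * rho_free, kgrid)   # falling system
lam_free_shifted = occupation(rho_free, kgrid + a)            # free system, shifted k
```

The two arrays coincide, and the peak of $|\lambda_k|$ sits at $k_x=-a$, i.e. the whole momentum distribution is rigidly translated, exactly as stated above.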
\section{Conclusions}
In this paper we have shown that the quantum description of the free fall motion
in a gravitational field can be greatly simplified by making use of a gauge transformation
of the wavefunction, which corresponds to a change of reference frame for the
system, from the laboratory frame to the one that moves with the falling body.
We have also discussed the time evolution
relative to a generic three--dimensional quantum many--body system
subject to a gravitational potential and we have shown that it can be described
in terms of the free time evolution.
The method of gauge transformation appears to be highly versatile and easily applicable,
since the expectation values of relevant physical quantities can be computed in terms of the
free expansion results. In particular, we have shown that the variances of the
initial wavepacket are exactly the same as if the system did not feel any gravitational
force at all. Regarding the application of the presented method to other systems, we mention
that it could be pedagogically interesting to apply it to the Dirac equation in a linear potential.
A comment on an aspect of the presentation may be useful: we referred to the case
in which the quantum system evolves without gravity ($g=0$, \textit{i.e.} in the absence of the
linear potential) as the \textit{free} case, with "free" denoting the non-falling case.
Since the motion of an object in a linear potential such as gravity
($g \neq 0$) is traditionally called "free fall", one may ask whether the choice
of referring to the non-falling case as "free" is convenient. However, the derivations
explicitly presented clearly show that the "free fall" really is "free": all the physical
observables and quantities (such as the one--body density matrix)
during the falling dynamics
are related to the corresponding ones of the non-falling system, and are actually the same
if measured in the comoving frame.
Finally, it is worth stressing that all the calculations presented here require only
a basic knowledge of Quantum Mechanics, accessible to students who have attended
introductory undergraduate courses on the topic.
{\it Acknowledgements: } A.T. acknowledges discussions
with A. P. Polychronakos during the conference "Mathematical physics of
anyons and topological states of matter" in
Nordita, Stockholm (March 2019).
Both numerical and analytical exercises on the topics presented here
were assigned and discussed in courses the authors have taught over the years,
and we acknowledge feedback and suggestions from the students of these courses.
\vspace{-5mm}
\section{Introduction}
Thanks to their powerful expressive abilities, deep neural networks (DNNs) have achieved great success in many areas
\cite{kim-2014-convolutional, devlin-etal-2019-bert}.
However, recent work has revealed the fragility of DNNs. They are vulnerable to adversarial examples
\cite{szegedy-etal-2014-intriguing,goodfellow-etal-explaining}.
By adding imperceptible perturbations to the original examples, attackers can create adversarial examples that cause the input to be misclassified. An increasing number of studies have shown that adversarial examples exist widely in computer vision (CV), natural language processing (NLP) and other domains. The existence of adversarial examples threatens security-critical models and has garnered widespread attention \cite{Evtimov-etal-2017-robust,zhang-etal-2020-adversarial-attacks}.
In NLP, most work focuses on the generation of natural language adversarial examples to understand their nature
\cite{ebrahimi-etal-2018-hotflip,ren-etal-2019-generating,zang-etal-2020-word,jin-etal-2020-is,li-etal-2020-bert-attack}.
On the other hand, how to defend against textual adversarial attacks is a more important and challenging problem.
Existing defensive methods can be divided into two categories, \textit{adversarial training} and \textit{adversarial detection}.
Adversarial training refers to mixing adversarial examples with clean examples and retraining the model to improve its robustness \cite{miyato-etal-2017-adversarial-training,zhu-etal-2020-freelb}.
Adversarial training is widely used, but retraining the model is resource-consuming. Moreover, such defense methods make strong assumptions about attack methods (e.g., the word substitution strategy, the perturbation level, and so on), which limit their defensive capabilities and scope of application \cite{sato-etal-2018-interpretable,dong-etal-2021-towards-robustness}.
Adversarial detection typically uses auxiliary components to recover or reject adversarial examples in the input \cite{zhou-etal-2019-learning-discriminate,mozes-etal-2021-frequency}. Adversarial detection requires no retraining, but the defender needs to make assumptions about the characteristics of adversarial examples (e.g., word frequency) in order to distinguish adversarial from clean examples. Since adversarial examples have not been thoroughly studied, these assumptions make adversarial detection largely empirical.
In this paper, we do not directly make assumptions about attack methods or adversarial examples, but instead use a carefully constructed set of reference models to reflect the nature of adversarial examples indirectly. Specifically, we use the different predictions of the reference models on clean and adversarial examples to distinguish them accurately and block adversarial attacks.
Through empirical analysis, we argue that these reference models need to meet two criteria: (1) their predictions are consistent on clean examples, and (2) their predictions are inconsistent on adversarial examples against the victim model.
Through theoretical analysis, we prove the upper and lower bounds of the detection accuracy that our method can achieve.
In practice, by decomposing the embeddings of the victim model, we obtain reference models that meet the above criteria and block adversarial attacks.
Our contributions are summarized as follows:
\begin{itemize}
\item[1)] We propose TREATED, a universal method for blocking textual adversarial attacks without any assumption about attack methods or adversarial examples. Thus it can be applied in more realistic and complicated environments.
\item[2)] We empirically and theoretically illustrate the superiority of our method. Comprehensive experiments show that TREATED significantly outperforms two advanced defense methods. We will release our implementations for the convenience of future benchmarking.
\end{itemize}
\begin{figure*}
\begin{center}
\subfigure[]{
\includegraphics[width=0.7\columnwidth]{figure/Padv_C.pdf}
}
\subfigure[]{
\includegraphics[width=0.75\columnwidth]{figure/Padv_D.pdf}
}
\end{center}
\caption{(a)~Under the prior of $C$, the probability that $X$ is an adversarial example at different $p$ and $q$. (b)~Under the prior of $D$, the probability that $X$ is an adversarial example at different $p$ and $q$.}
\label{fig:p&q}
\end{figure*}
\section{Background and Related Work}
\subsection{Adversarial Attacks}
Given a classifier $F$ and an input-label pair $(x,y)$, the attacker aims to craft an adversarial example $x_{adv}$ that is misclassified by $F$ while remaining in the $\epsilon$-neighborhood of $x$:
\begin{equation}
\label{e1:adv.def}
F(x_{adv})\neq y, s.t.~\left\|x-x_{adv}\right\| \leq \epsilon.
\end{equation}
Many works have leveraged the loss function of the classifier and the gradient ascent method to find qualified adversarial examples \cite{goodfellow-etal-explaining,Kurakin-etal-2017-adversarial,Madry-etal-2018-towards-deep}. However, this is not directly compatible with NLP tasks, because a word embedding obtained from gradient information need not correspond to a valid word. Valid words close to it can only be found by projection \cite{papernot-etal-2016-crafting,samanta-mehta-2017-towards}, which reduces the effectiveness of the attack.
Considering that the inputs of NLP models are usually discrete tokens, the addition, deletion and replacement of tokens have naturally become common methods of textual adversarial attack \cite{alzantot-etal-2018-generating,ren-etal-2019-generating}. For example, one can first rank the tokens by importance to determine the replacement order, then use different strategies to find the best replacement for each token, thereby significantly reducing the time complexity of the search and forming an adversarial example. In addition, some works use paraphrasing \cite{iyyer-etal-2018-adversarial,zhang-etal-2019-paws}, text generation \cite{wang-etal-2020-cat,wang-etal-2020-t3}, generative adversarial networks (GANs) \cite{zhao-etal-2018-generating-natural}, reinforcement learning \cite{vijayaraghavan-etal-2019-generating} and other methods \cite{li-etal-2019-textbugger} to generate adversarial examples.
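To make the importance-ranking-plus-substitution scheme concrete, here is a toy sketch in the spirit of these attacks; the bag-of-words classifier, the vocabulary and the synonym table are all invented for the illustration and are not taken from any of the cited methods:

```python
POS = {"good", "great"}

def prob(words, label=1):
    """Toy classifier: P(positive) = fraction of positive words in the sentence."""
    p_pos = sum(w in POS for w in words) / len(words)
    return p_pos if label == 1 else 1.0 - p_pos

def greedy_attack(words, label, synonyms):
    # step 1: rank tokens by importance (how much deleting them lowers P(label))
    order = sorted(range(len(words)),
                   key=lambda i: prob(words[:i] + words[i + 1:], label))
    adv = list(words)
    # step 2: substitute in importance order until the predicted label flips
    for i in order:
        for s in synonyms.get(adv[i], []):
            cand = adv[:i] + [s] + adv[i + 1:]
            if prob(cand, label) < prob(adv, label):
                adv = cand
            if prob(adv, label) < 0.5:      # label flipped: attack succeeded
                return adv
    return adv

adv = greedy_attack(["good", "great", "movie"], 1,
                    {"good": ["decent"], "great": ["okay"]})
# -> ["decent", "great", "movie"], now classified as negative by the toy model
```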
\subsection{Adversarial Defense}
\noindent\textbf{Adversarial Training}. Adversarial training is widely used to improve the robustness of DNNs\cite{goodfellow-etal-explaining,shafahi-etal-2019-adversarial,zhang-etal-2019-you-only}. By adding perturbations in the gradient direction to the training samples, adversarial training has achieved great success in many tasks. Recent studies have shown that adversarial training also helps to improve the generalization ability of DNNs\cite{zhu-etal-2020-freelb,jiang-etal-2020-smart}.
However, compared to standard training, adversarial training usually takes several times longer because the gradient of the training samples is repeatedly calculated, which is unacceptable in resource-constrained scenarios.
\noindent\textbf{Adversarial Detection}. The goal of adversarial detection is to identify the existence of adversarial examples, thereby rejecting them.
Most work aims to learn a representation to discriminate adversarial and benign inputs from the victim model\cite{li-li-2017-adversarial,grosse-etal-2017-on-the,ma-etal-2018-characterizing,feinman-etal-2017-detecting,metzen-etal-2017-detecting}.
However, in NLP, whether the examples are adversarial is challenging to judge, so some methods try to recover all inputs\cite{zhou-etal-2019-learning-discriminate,li-etal-2020-textshield}.
Nevertheless, this will inevitably destroy the original inputs and may cause the model's performance to decrease.
\noindent\textbf{Spelling Correction and Grammar Error Correction}. Spelling correction\cite{mays-etal-1991-context,zhang-etal-2020-spelling} and grammar error correction\cite{wang-etal-2020-comprehensive,sakaguchi-etal-2017-grammatical} are also used for blocking textual adversarial attacks.
However, these methods can only deal with attacks that bring grammatical and spelling errors and cannot identify adversarial examples crafted by word substitution.
\section{TREATED}
\label{sec:3}
\subsection{Empirical Analysis}
Given a perfect classifier $F_p$, which always classifies correctly, and the victim model $F_v$, we can always identify whether $F_v$ classifies an input $X$ (whether adversarial or not) correctly, thus blocking adversarial attacks, because $F_p$ always gives the ground-truth label $y_{true}$ of $X$.
Note that $y_{true}$ is not necessary here. We only need to know whether the predictions of $F_p$ and $F_v$ are consistent; then we can identify whether the prediction of $F_v$ is correct without knowing the ground-truth label. It means that as long as there exists a set of reference models $\{F_r\}$ such that the $F_{r_i}$ have consistent predictions on clean examples and inconsistent predictions on adversarial examples against $F_v$, we can identify whether $X$ is adversarial.
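In code terms, the resulting detection rule is nothing more than a consistency check over the reference predictions (a minimal sketch; the prediction vectors are made up):

```python
def is_adversarial(reference_preds):
    """Flag the input as adversarial when the reference models disagree."""
    return len(set(reference_preds)) > 1

# consistent predictions -> treated as clean; inconsistent -> blocked as adversarial
clean_flag = is_adversarial([1, 1, 1])   # False
adv_flag = is_adversarial([1, 0, 1])     # True
```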
\subsection{Theoretical Analysis}
Denote by $A$ the event that $\{F_r\}$ have consistent predictions on clean examples, and by $B$ the event that $\{F_r\}$ have consistent predictions on adversarial examples against $F_v$. We assume that $P(A)=p$ and $P(B)=q$. For an input $X$, denote by $C$ ($D$) the event that $\{F_r\}$ have consistent (inconsistent) predictions on $X$, and by $E$ the event that $X$ is adversarial. We have
\begin{equation}
\label{e2:p&q}
\begin{aligned}
&P(E|C)=\frac{P(B)}{P(A)+P(B)}=\frac{q}{p+q},\\ &P(E|D)=\frac{1-P(B)}{(1-P(A))+(1-P(B))}=\frac{1-q}{2-(p+q)}.
\end{aligned}
\end{equation}
In our case, we expect $P(E|C)$ to be as small as possible and $P(E|D)$ to be as large as possible, so that we can identify whether $X$ is adversarial. For example, if $p=0.95$ and $q=0.1$, then $P(E|C)\approx0.095$ and $P(E|D)\approx0.947$. It means that we have a very high probability of making a correct judgment on whether $X$ is an adversarial example. Figure~\ref{fig:p&q} shows the probability that $X$ is adversarial under priors $C$ and $D$ at different $p$ and $q$.
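The worked example above can be reproduced directly from the two posterior probabilities (a small sketch; the function name is ours):

```python
def detection_posteriors(p, q):
    """P(adversarial | consistent preds) and P(adversarial | inconsistent preds),
    as given by the expressions derived above."""
    p_e_given_c = q / (p + q)
    p_e_given_d = (1 - q) / (2 - (p + q))
    return p_e_given_c, p_e_given_d

pec, ped = detection_posteriors(0.95, 0.1)
# -> pec ~ 0.095 (input very likely clean), ped ~ 0.947 (input very likely adversarial)
```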
\subsection{Design of Reference Models}
We expect the reference models $\{F_r\}$ to have a high $p$ and a low $q$, so that we can identify whether $X$ is adversarial with high accuracy.
However, these two criteria seem contradictory. A high $p$ means that the models have similar parameter distributions and share a lot of knowledge; hence they also share vulnerabilities, which significantly enhances the transferability of adversarial examples. In other words, adversarial examples can reduce the accuracy of $\{F_r\}$ to a low level, so that the outputs of $\{F_r\}$ on adversarial examples are also consistent, which hinders detection. Conversely, a low $q$ means that there are significant differences among $\{F_r\}$, and it is then not easy to ensure consistent predictions on clean examples. So the key lies in how to construct $\{F_r\}$.
Recall the goal of textual adversarial attacks: to find suitable perturbations and apply them to the original input. Essentially, this changes the embeddings that are finally fed to the model. This inspires us to decompose the embedding layer of $F_v$ into $N$ parts and share the subsequent layers, thereby constructing $N$ reference models (i.e., $\{F_r\}$). In this way, we argue that $\{F_r\}$ have a very high $p$, since they are jointly trained. Moreover, adversarial examples act on the embedding layer of $F_v$, which is decomposed to construct $\{F_r\}$; therefore $\{F_r\}$ share the increased loss caused by adversarial examples, which reduces their transferability, and thus $\{F_r\}$ have a low $q$.
\textit{\textbf{Note that the same accuracy does not mean consistent predictions. Models may make different predictions on different samples but have the same accuracy.}}
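A minimal sketch of this construction, with NumPy standing in for a real NLP stack: the victim's embedding matrix is split along the embedding dimension into $N$ slices, each feeding a shared head. The sizes, the random initialization and the mean-pooling classifier are illustrative assumptions, not the exact architecture used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim, n_ref, n_cls = 100, 12, 3, 2     # illustrative sizes
E = rng.normal(size=(vocab, dim))            # stands in for the victim's embedding matrix
W = rng.normal(size=(dim // n_ref, n_cls))   # toy shared head (jointly trained in practice)

def reference_predictions(token_ids):
    # decompose the embeddings into n_ref slices along the embedding dimension;
    # each slice plays the role of one reference model's embedding layer
    slices = np.split(E[token_ids], n_ref, axis=-1)
    return [int(np.argmax(s.mean(axis=0) @ W)) for s in slices]

preds = reference_predictions(np.array([3, 17, 42]))
consistent = len(set(preds)) == 1            # the consistency check described above
```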
\section{Experiments}
We attack three text classification models on two popular datasets using TextAttack \cite{morris-etal-2020-textattack-2}, and evaluate the results with automatic metrics.
\subsection{Experimental Setup}
\subsubsection{Baselines}
We choose FGWS \cite{mozes-etal-2021-frequency} as the defense baseline, since it achieves state-of-the-art performance in textual adversarial detection.
Following \cite{zhou-etal-2019-learning-discriminate,mozes-etal-2021-frequency}, we use two word-level attack methods, PWWS \cite{ren-etal-2019-generating} and Genetic \cite{alzantot-etal-2018-generating}, to make a fair comparison.
\noindent \textbf{PWWS} generates adversarial texts against classification models using two strategies: 1) a word substitution strategy and 2) a replacement order strategy. The word substitution strategy relies on synonym substitution; through named entity analysis techniques, it improves semantic similarity and reduces grammatical errors.
\noindent \textbf{Genetic} utilizes a population-based optimization algorithm to reorganize the original sentence, yielding sentences with strong attack effects.
Since TREATED is a universal defense method that can defend against attacks at multiple perturbation levels, we further utilize TextBugger \cite{li-etal-2019-textbugger}, DeepWordBug \cite{gao-etal-2018-black} and TextFooler \cite{jin-etal-2020-is} to evaluate it against attacks at different perturbation levels.
\subsubsection{Victim Models and Datasets}
Following \cite{zhou-etal-2019-learning-discriminate,mozes-etal-2021-frequency}, we conduct experiments on two classification datasets.
\begin{table}[ht]
\centering
\normalsize
\caption{Summary of the datasets}
\begin{tabular}{@{}ccccc@{}}
\toprule
\textbf{Dataset} & \textbf{Classes} & \textbf{Train} & \textbf{Test} & \textbf{Avg Len} \\ \midrule
SST-2 & 2 & 67,349 & 872 & 17 \\
IMDb & 2 & 25,000 & 25,000 & 201 \\ \bottomrule
\end{tabular}
\label{t1_dataset}
\end{table}
\begin{table}[ht]
\centering
\normalsize
\caption{Parameters of the models.}
\begin{tabular}{@{}cccc@{}}
\toprule
& \textbf{CNN} & \textbf{LSTM} & \textbf{RoBERTa} \\ \midrule
Embedding dim. & 300 & 300 & 768 \\
Filters & 128 & - & - \\
Kernel size & 3 & - & - \\
LSTM units & - & 128 & - \\
Hidden dim. & 100 & 100 & 100 \\ \bottomrule
\end{tabular}
\label{t:model_para}
\end{table}
\noindent\textbf{The Stanford Sentiment Treebank(SST-2)}\cite{socher-etal-2013-recursive} is a sentence-level sentiment analysis dataset. The movie reviews are
given by professionals.
\noindent \textbf{The IMDb reviews dataset}\cite{maas-etal-2011-learning} is a document-level sentiment analysis dataset containing 50,000
non-professional movie reviews, of which 25,000 are positive, and 25,000
are negative reviews.
We perform adversarial attacks on three neural networks, WordCNN\cite{kim-2014-convolutional}, LSTM\cite{hochreiter-schmidhuber-1997-long} and RoBERTa\cite{liu-etal-2019-roberta}.
\begin{table*}[ht]
\centering
\normalsize
\caption{Adversarial detection performance of FGWS and TREATED, including the accuracy increase after adversarial detection (Increased acc.), the true positive rate and false positive rate (TPR, FPR), and F1.}
\begin{tabular}{@{}llcccccc@{}}
\toprule
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Dataset/Model}}} &
\multicolumn{1}{c}{\multirow{2}{*}{\textbf{Attack}}} &
\multicolumn{2}{c}{\textbf{Increased acc.}} &
\multicolumn{2}{c}{\textbf{TPR(FPR)}} &
\multicolumn{2}{c}{\textbf{F1}} \\ \cmidrule(l){3-4} \cmidrule(l){5-6} \cmidrule(l){7-8}
\multicolumn{1}{c}{} &
\multicolumn{1}{c}{} &
FGWS &
TREATED &
FGWS &
TREATED &
FGWS &
TREATED \\ \midrule
\multirow{2}{*}{SST-2/LSTM} & PWWS & +30.0 & \textbf{+35.7} & - & 62.2(20.49) & 63.9 & \textbf{68.1} \\
& Genetic & +29.2 & \textbf{+33.5} & - & 61.0(21.47) & 60.3 & \textbf{65.0} \\ \cmidrule(l){3-4} \cmidrule(l){5-6} \cmidrule(l){7-8}
\multirow{2}{*}{IMDb/CNN} & PWWS & +60.0 & \textbf{+84.8} & - & 91.9(0.05) & 83.9 & \textbf{93.2} \\
& Genetic & +57.8 & \textbf{+66.3} & - & 90.5(0.14) & 83.5 & \textbf{88.4} \\ \bottomrule
\end{tabular}%
\label{t2:benchmark_cnn_lstm}
\end{table*}
Both the datasets and networks are widely used in adversarial learning in NLP. Details can be found in Table.~\ref{t1_dataset} and Table.~\ref{t:model_para}.
\subsection{Experimental Results}
We generate 1,000 adversarial examples for each model, dataset, and attack method as one part of the detection set; the corresponding clean examples form the other part. We report the increased accuracy of the model after adversarial detection, as well as the true positive rate (TPR), false positive rate (FPR) and F1 of the detection method, to make a comprehensive evaluation.
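For reference, the reported detection metrics follow the standard definitions, with TP counting adversarial examples correctly flagged and FP counting clean examples wrongly flagged (the counts below are made up for the example):

```python
def detection_metrics(tp, fp, tn, fn):
    """TPR, FPR and F1 for adversarial-example detection."""
    tpr = tp / (tp + fn)                  # recall on adversarial inputs
    fpr = fp / (fp + tn)                  # false-alarm rate on clean inputs
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    return tpr, fpr, f1

# illustrative counts for 1,000 adversarial and 1,000 clean detection examples
tpr, fpr, f1 = detection_metrics(tp=900, fp=50, tn=950, fn=100)
# -> tpr = 0.9, fpr = 0.05, f1 ~ 0.923
```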
We make a detailed comparison with FGWS, the state-of-the-art detection method. Table~\ref{t2:benchmark_cnn_lstm} shows the overall defense performance on the CNN and LSTM models. As can be seen, TREATED restores the models to higher accuracy. The defense performance declines on the SST-2 dataset, but still outperforms FGWS. Our method also performs better in detection accuracy, improving the F1 score by up to 9.3\% on the CNN model and the IMDb dataset.
We report the results of defending against more attacks on the RoBERTa model in Table~\ref{t3:benchmark_r}.
TREATED performs well against attacks at different perturbation levels (word-level, char-level and multi-level), and always increases the accuracy of the models by more than 40\% in the face of various attacks.
\begin{table}[ht]
\centering
\normalsize
\caption{Defense performance on the IMDb dataset and RoBERTa against more attacks.}
\begin{tabular}{@{}lccc@{}}
\toprule
\textbf{Attack} & \textbf{Increased acc.} & \textbf{TPR(FPR)} & \textbf{F1} \\ \midrule
PWWS & +64.7 & 69.4(14.47) & 75.5 \\
Genetic & +40.0 & 71.4(35.71) & 69.0 \\
DeepWordBug & +40.4 & 58.6(16.11) & 67.1 \\
TextBugger & +45.0 & 49.9(12.65) & 61.4 \\ \bottomrule
\end{tabular}%
\label{t3:benchmark_r}
\end{table}
\section{Discussion}
\subsection{Influence on Unperturbed Data}
We further study the effect of TREATED on the original accuracy of the models. We use the original test set as the detection set and observe whether the accuracy decreases significantly.
As can be seen in Table~\ref{t6:unpert}, the accuracy of WordCNN and RoBERTa drops by 3.5\% and 2.6\%, respectively, which is almost negligible. It shows that TREATED can block adversarial attacks without affecting the original performance of the model.
\begin{table}[ht]
\centering
\normalsize
\caption{Influence of TREATED on unperturbed data.}
\begin{tabular}{@{}ccc@{}}
\toprule
Dataset/Model & Original acc. & Detecting acc. \\ \midrule
IMDb/CNN & 89.3 & 85.8 \\
IMDb/RoBERTa & 90.9 & 88.3 \\ \bottomrule
\end{tabular}%
\label{t6:unpert}
\end{table}
\subsection{Ablation Study}
We construct $\{F_r\}$ by decomposing the embedding layer of the victim model. To illustrate its effectiveness, we conduct thorough ablation studies, using standardly trained models (STM) as reference models to detect adversarial examples.
Table~\ref{t4:ablation_levels} shows the detection performance of STM and TREATED. Under word- and char-level attacks, TREATED increases the accuracy of the victim models by more than 88\%, which means that TREATED almost completely blocks these adversarial attacks. Under multi-level attacks, the most difficult to defend against, TREATED increases the accuracy of the victim model by 77.4\%. TREATED also maintains a TPR above 96\%, an FPR below 6\% and an F1 above 95\%, indicating very few misjudgments.
When we instead use STM as the reference models, the defense performance decreases significantly, which fully illustrates the superiority of decomposing the embedding layer.
To further verify the advantages of decomposing the embedding layer, we report $p$ and $q$ under different conditions. As shown in Table~\ref{t5:ablation_pq}, our method always has a higher $p$ and a lower $q$, confirming that it achieves higher detection accuracy.
\begin{table*}[ht]
\centering
\normalsize
\caption{Ablation study on adversarial detection performances for TREATED and STM against attacks of different perturbation levels. The victim model is WordCNN trained on IMDb.}
\begin{tabular}{@{}lllclc@{}}
\toprule
\textbf{Attack} & \textbf{Level} & \textbf{Defense} & \textbf{Increased acc.} & \textbf{TPR(FPR)} & \textbf{F1} \\ \midrule
\multirow{2}{*}{TextFooler} & \multirow{2}{*}{word} & STM & +58.7 & 63.6(15.82) & 70.9 \\
& & TREATED & \textbf{+88.6} & \textbf{96.0(5.20)} & \textbf{95.4} \\ \cmidrule(l){3-6}
\multirow{2}{*}{DeepWordBug} & \multirow{2}{*}{char} & STM & +68.6 & 70.5(18.16) & 74.8 \\
& & TREATED & \textbf{+88.5} & \textbf{96.2(5.22)} & \textbf{95.5} \\ \cmidrule(l){3-6}
\multirow{2}{*}{TextBugger} & \multirow{2}{*}{multi} & STM & +61.2 & 66.2(15.87) & 72.7 \\
& & TREATED & \textbf{+77.4} & \textbf{96.3(5.97)} & \textbf{95.2} \\
\bottomrule
\end{tabular}%
\label{t4:ablation_levels}
\end{table*}
\begin{table*}[ht]
\centering
\normalsize
\caption{Ablation study on $p$ and $q$ on different victim models and test sets between TREATED and STM. The attack method is PWWS.}
\begin{tabular}{@{}ccccccc@{}}
\toprule
& \multicolumn{2}{c}{\textbf{SST-2/LSTM}} & \multicolumn{2}{c}{\textbf{IMDb/CNN}} & \multicolumn{2}{c}{\textbf{IMDb/RoBERTa}} \\ \cmidrule(l){2-3} \cmidrule(l){4-5} \cmidrule(l){6-7}
& STM & TREATED & STM & TREATED & STM & TREATED \\ \cmidrule(l){1-3} \cmidrule(l){4-5} \cmidrule(l){6-7}
$p \uparrow$ & 0.7052 & \textbf{0.7951} & 0.7842 & \textbf{0.8723} & 0.8422 & \textbf{0.8840} \\
$q \downarrow$ & 0.6471 & \textbf{0.3777} & 0.3142 & \textbf{0.0813} & 0.5483 & \textbf{0.5006} \\ \bottomrule
\end{tabular}%
\label{t5:ablation_pq}
\end{table*}
\section{Conclusion and Future Work}
We propose TREATED, a universal adversarial detection method that defends against textual adversarial attacks at multiple perturbation levels, supported by empirical and theoretical analysis.
By decomposing the embedding layer, we construct a set of reference models that can make consistent predictions on clean examples and inconsistent predictions on adversarial examples. Thus we can identify adversarial examples and block attacks via these reference models.
Extensive experiments prove the effectiveness of TREATED. Compared with the state-of-the-art defense method, our method has higher detection accuracy without harming the original performance. Ablation studies illustrate the superiority of our embedding decomposition method.
Further improving the defense performance on large-scale pre-trained models (e.g., RoBERTa) is left for future work. It would also be promising to explore other ways to construct $\{F_r\}$.
\normalem
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
Radio and X--ray observations of galaxy clusters show
that thermal and non--thermal plasma components coexist in the
intracluster medium (ICM).
While X--ray observations reveal the presence of diffuse
hot gas, the existence of extended cluster--scale radio
sources in a number of galaxy clusters, well known as
{\it radio halos} and {\it relics}, proves the presence of
relativistic electrons and magnetic fields.
\\
Both radio halos and relics are low surface
brightness sources with steep radio spectra, whose linear size
can reach and exceed the Mpc scale.
Radio halos are usually located at the centre of galaxy
clusters, show a fairly regular radio morphology, and
lack an obvious optical counterpart.
A total of about 20 radio halos have been detected up to now
(Giovannini, Tordi \& Feretti \cite{giovannini99}; Giovannini \& Feretti
\cite{gf02}; Kempner \& Sarazin \cite{kempner01};
Bacchi et al. \cite{bacchi03}). Relics are usually found at
the cluster periphery, their radio emission is highly polarized
(up to $\sim$ 30\%), and shows a variety of radio morphologies,
such as sheets, arcs and toroids. At present
a total of $\sim$ 20 relics (including candidates) are known
(Kempner \& Sarazin \cite{kempner01}; Giovannini \& Feretti
\cite{gf04}).
\\
\\
Evidence in the optical and X--ray bands has been
accumulated in favour of the hierarchical formation of galaxy clusters
through merging processes (for a collection of reviews on this subject
see Feretti, Gioia \& Giovannini 2002), and this
has provided insightful pieces of information in our understanding of
radio halos. It is not clear whether all clusters with signatures
of merging processes also possess a radio halo; on the other hand,
all clusters hosting a radio halo show sub--structures in the X--ray emission,
and the most powerful radio halos are hosted in clusters
which most strongly depart from virialization (Buote \cite{buote01}).
Giovannini et al. (1999) showed that in the redshift interval 0 -- 0.2
the detection rate of cluster radio halos increases with increasing X--ray
luminosity, which suggests a connection with the gas temperature and cluster mass.
\\
\\
The very large extent of radio halos poses the question of their origin,
since the diffusion time the relativistic electrons need to cover the
observed Mpc size is 30 -- 100 times longer than their radiative lifetime.
Two main possibilities have been investigated so far: ``primary models'', in which
particles are in--situ re--accelerated in the ICM, and ``secondary models''
in which the emitting electrons are secondary products of hadronic collisions
in the ICM (for reviews on these models see Blasi \cite{blasi04};
Brunetti \cite{brunetti03} and \cite{brunetti04}; Ensslin \cite{ensslin04};
Feretti \cite{feretti03}; Hwang \cite{hwang04}; Sarazin \cite{sarazin02}).
Cluster mergers are among the most energetic events in the Universe, with
an energy release up to 10$^{64}$ erg, and a challenging question is whether at
least a fraction of such energy may be channelled into particle re--acceleration
(e.g. Tribble \cite{tribble93}).
Observational support (for a review see Feretti \cite{feretti03}) is now
accumulating for the particle re--acceleration model, which assumes that the
radiating electrons are stochastically re--accelerated by turbulence in the
ICM and that the bulk of this turbulence is injected during cluster mergers
(Brunetti et al. \cite{brunetti01}; Petrosian \cite{petrosian01};
Fujita, Takizawa \& Sarazin \cite{fujita03};
Brunetti et al. \cite{brunetti04b}).
\\
\\
Although the physics of particle re--acceleration by turbulence has been
investigated in some detail and the model expectations seem to reproduce
the observed radio features, only recently statistical calculations
in the framework of the re--acceleration model have been carried out by
Cassano \& Brunetti (\cite{cassano05}, hereinafter CB05).
Making use of semi--analytical calculations they estimated the energy of turbulence
injected in galaxy clusters through cluster mergers, and derived the
expected occurrence of {\it giant}\footnote{Linear size $\ge$ 1 Mpc as defined in CB05, with
H$_0$=50 ~km~s$^{-1}$~Mpc$^{-1}$.
This size corresponds to \raise 2pt \hbox {$>$} \kern-1.1em \lower 4pt \hbox {$\sim$} 700 kpc with the cosmology assumed in
this paper, i.e. $H_0 =70$~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m$=0.3 and
$\Omega_{\Lambda}$=0.7. } radio halos as a function of the mass
and dynamical status of the clusters in the framework of the merger--induced
particle re--acceleration scenario.
\\
The most relevant result of those calculations is that the occurrence of giant
radio halos increases with the cluster mass. Furthermore, the expected fraction
of clusters with giant radio halos at z$\le$ 0.2 can be reconciled with the
observed one (Giovannini et al. \cite{giovannini99}) for viable values of the
model parameters.
\\
Cassano, Brunetti \& Setti (\cite{cassano04}, hereinafter CBS04) and
Cassano, Brunetti \& Setti (\cite{cassano06}, hereinafter CBS06) showed
that the bulk of giant radio halos
are expected in the redshift range $z\sim 0.2$--$0.4$
as a result of two competing effects, i.e. the decrease of the fraction of clusters
with halos in a given mass range and the increase of the volume of the Universe with
increasing redshift. Given that inverse Compton losses increase with the redshift,
it is expected that powerful giant radio halos at $z>0.2$ are preferentially
found in massive clusters ($M \sim 2-3 \times 10^{15}M_{\odot}$)
undergoing merging events. In particular, it is expected that
a fraction of 10 -- 35 \% of clusters in this redshift interval and mass range
may host a giant radio halo.
\\
\\
With the aim of investigating the connection between cluster mergers and the
presence of cluster--type radio sources, in particular to derive
the fraction of massive galaxy clusters in the range 0.2 $<$ z $<$ 0.4
hosting a radio halo and constrain the predictions of the re--acceleration
model in the same redshift interval, we undertook an observational
study using the Giant Metrewave Radio Telescope (GMRT, Pune,
India) at 610 MHz. Our project will be presented here and in future papers,
and will be referred to as the GMRT Radio Halos Survey.
\\
Here we report the results on 11 galaxy clusters observed with the GMRT in
January 2005. The paper is organised as follows:
in Section \ref{sec:sample} we present the sample of galaxy
clusters; the radio observations are described in Section \ref{sec:obs};
the analysis of our results and a brief discussion are given in
Section \ref{sec:results} and \ref{sec:discussion} respectively.
\\
\\
\section{The cluster sample}\label{sec:sample}
\begin{table*}[t]
\caption[]{Cluster sample from the REFLEX catalogue.}
\label{tab:sample1}
\begin{center}
\begin{tabular}{rrccccrc}
\hline\noalign{\smallskip}
REFLEX Name & Alt. name & RA$_{J2000}$ & DEC$_{J2000}$ & z & L$_{\rm X}$(0.1--2.4 keV)
& M$_{\rm V}$ & R$_{\rm V}$ \\
& & & & & $10^{44}$ erg s$^{-1}$
& 10$^{15}$M$_{\odot}$ & Mpc \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
$^{\surd}$ RXCJ\,0003.1$-$0605 & A\,2697 & 00 03 11.8 & $-$06 05 10 & 0.2320 & 6.876
& 1.68 & 2.70 \\
$^{\star}$ RXCJ\,0014.3$-$3023 & A\,2744 & 00 14 18.8 & $-$30 23 00 & 0.3066 & 12.916
& 2.58 & 2.99 \\
$^{\surd}$ RXCJ\,0043.4$-$2037 & A\,2813 & 00 43 24.4 & $-$20 37 17 & 0.2924 & 7.615
& 1.80 & 2.67 \\
$^{\surd}$ RXCJ\,0105.5$-$2439 & A\,141 & 01 05 34.8 & $-$24 39 17 & 0.2300 & 5.762
& 1.50 & 2.60 \\
$^{\surd}$ RXCJ\,0118.1$-$2658 & A\,2895 & 01 18 11.1 & $-$26 58 23 & 0.2275 & 5.559
& 1.45 & 2.58 \\
$^{\surd}$ RXCJ\,0131.8$-$1336 & A\,209 & 01 31 53.0 & $-$13 36 34 & 0.2060 & 6.289
& 1.58 & 2.69 \\
$^{\surd}$ RXCJ\,0307.0$-$2840 & A\,3088 & 03 07 04.1 & $-$28 40 14 & 0.2537 & 6.953
& 1.69 & 2.67 \\
RXCJ\,0437.1$+$0043 & $-$ & 04 37 10.1 & $+$00 43 38 & 0.2842 & 8.989
& 2.02 & 2.79 \\
$^{\surd}$ RXCJ\,0454.1$-$1014 & A\,521 & 04 54 09.1 & $-$10 14 19 & 0.2475 & 8.178
& 1.89 & 2.78 \\
RXCJ\,0510.7$-$0801 & $-$ & 05 10 44.7 & $-$08 01 06 & 0.2195 & 8.551
& 1.95 & 2.86 \\
$^{\surd}$ RXCJ\,1023.8$-$2715 & A\,3444 & 10 23 50.8 & $-$27 15 31 & 0.2542 & 13.760
& 2.69 & 3.12 \\
$^{\surd}$ RXCJ\,1115.8$+$0129 & $-$ & 11 15 54.0 & $+$01 29 44 & 0.3499 & 13.579
& 2.67 & 2.95 \\
$^{\star}$ RXCJ\,1131.9$-$1955 & A\,1300 & 11 31 56.3 & $-$19 55 37 & 0.3075 & 13.968
& 2.72 & 3.04 \\
RXCJ\,1212.3$-$1816 & $-$ & 12 12 18.9 & $-$18 16 43 & 0.2690 & 6.197
& 1.56 & 2.58 \\
$^{\surd}$ RXCJ\,1314.4$-$2515 & $-$ & 13 14 28.0 & $-$25 15 41 & 0.2439 & 10.943
& 2.30 & 2.98 \\
$^{\surd}$ RXCJ\,1459.4$-$1811 & S\,780 & 14 59 29.3 & $-$18 11 13 & 0.2357 & 15.531
& 2.92 & 3.24 \\
RXCJ\,1504.1$-$0248 & $-$ & 15 04 07.7 & $-$02 48 18 & 0.2153 & 28.073
& 4.37 & 3.75 \\
$^{\surd}$ RXCJ\,1512.2$-$2254 & $-$ & 15 12 12.6 & $-$22 54 59 & 0.3152 & 10.186
& 2.19 & 2.81 \\
RXCJ\,1514.9$-$1523 & $-$ & 15 14 58.0 & $-$15 23 10 & 0.2226 & 7.160
& 1.73 & 2.74 \\
$^{\star}$ RXCJ\,1615.7$-$0608 & A\,2163 & 16 15 46.9 & $-$06 08 45 & 0.2030 & 23.170
& 3.84 & 3.62 \\
$^{\surd}$ RXCJ\,2003.5$-$2323 & $-$ & 20 03 30.4 & $-$23 23 05 & 0.3171 & 9.248
& 2.05 & 2.75 \\
RXCJ\,2211.7$-$0350 & $-$ & 22 11 43.4 & $-$03 50 07 & 0.2700 & 7.418
& 1.77 & 2.69 \\
$^{\surd}$ RXCJ\,2248.5$-$1606 & A\,2485 & 22 48 32.9 & $-$16 06 23 & 0.2472 & 5.100
& 1.37 & 2.50 \\
$^{\surd}$ RXCJ\,2308.3$-$0211 & A\,2537 & 23 08 23.2 & $-$02 11 31 & 0.2966 & 10.174
& 2.19 & 2.85 \\
$^{\surd}$ RXCJ\,2337.6+0016 & A\,2631 & 23 37 40.6 & $+$00 16 36 & 0.2779 & 7.571
& 1.79 & 2.69 \\
$^{\surd}$ RXCJ\,2341.2$-$0901 & A\,2645 & 23 41 16.8 & $-$09 01 39 & 0.2510 & 5.789
& 1.49 & 2.57 \\
$^{\surd}$ RXCJ\,2351.6$-$2605 & A\,2667 & 23 51 40.7 & $-$26 05 01 & 0.2264 & 13.651
& 2.68 & 3.16 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
Symbols are as follows: $^{\surd}$ marks the clusters observed by us with the
GMRT as part of our radio halo survey;
$^{\star}$ marks the clusters with radio halo known from the literature
(A\,2744 Govoni et al. \cite{govoni01}; A\,1300 Reid et al. \cite{reid99};
A\,2163 Herbig \& Birkinshaw \cite{herbig94} and Feretti et al.
\cite{feretti01}). All the remaining clusters are part of the GMRT Cluster
Key Project (P.I. Kulkarni).
\end{table*}
In order to obtain a statistically significant sample of clusters
suitable for our aims, we based our selection on the ROSAT--ESO
Flux Limited X--ray (REFLEX) galaxy cluster catalogue
(B{\"o}hringer et al. \cite{boeringer04}) and on the extended
ROSAT Brightest Cluster Sample (BCS)
catalogue (Ebeling et al. \cite{ebeling98} \& \cite{ebeling00}).
Here we will concentrate on the REFLEX sample, which was
observed with the GMRT in January 2005 (present paper, see next Section),
in October 2005 and August 2006 (Venturi et al., in preparation).
\\
From the REFLEX catalogue we selected all clusters satisfying the following
criteria:
\begin{itemize}
\item[1)] L$_{\rm X}$(0.1--2.4 keV) $>$ 5 $\times$ 10$^{44}$ erg s$^{-1}$;
\item[2)] 0.2 $<$ z $<$ 0.4;
\item[3)] $-$30$^{\circ}$ $<$ $\delta$ $<$ +2.5$^{\circ}$.
\end{itemize}
The lower limit of $\delta=-30^{\circ}$ was chosen
in order to ensure a good u--v coverage with the GMRT, while the value of
$\delta=+2.5^{\circ}$ is the REFLEX
upper limit.
\\
The limit in X--ray luminosity is aimed at selecting
massive clusters, which are expected to host giant radio halos
(CBS04, CB05 and references therein). It corresponds to a lower limit
in the virial mass of M$_{\rm V}~>~ 1.4\times10^{15} M_{\odot}$
if the L$_{\rm X}$ -- M$_{\rm V}$ correlation derived in CBS06 is assumed.
We point out that the L$_{\rm X}$ -- M$_{\rm V}$ correlation in CBS06 has
a statistical dispersion
of $\sim$ 30\%. This error dominates over the systematic additional
uncertainty introduced by the fact that the correlation was obtained
using the z $<$ 0.2 cluster sample in Reiprich \& B{\"o}hringer
(\cite{reiprich02}).
\\
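As a concrete sketch of how the tabulated masses relate to the selection threshold, the snippet below reproduces the M$_{\rm V}$ column of Table 1 with a simple power law; note that the pivot values (5.10, 1.37) and the exponent 0.68 are our own fit to the tabulated entries, not the published CBS06 coefficients.

```python
def virial_mass(lx_1e44):
    """Virial mass in units of 1e15 Msun from the 0.1--2.4 keV X-ray
    luminosity in units of 1e44 erg/s.

    Illustrative power law fitted to the (L_X, M_V) pairs of Table 1;
    the published CBS06 L_X--M_V relation may use different coefficients.
    """
    return 1.37 * (lx_1e44 / 5.10) ** 0.68

# The selection cut L_X > 5e44 erg/s then maps onto a virial mass of
# roughly 1.4e15 Msun, the threshold quoted in the text.
mass_at_cut = virial_mass(5.0)
```

With this scaling, e.g., A\,2163 (L$_{\rm X}$ = 23.170) recovers the tabulated M$_{\rm V}$ = 3.84.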
We obtained a total of 27 clusters. The source list is
reported in Table 1,
where we give (1) the REFLEX name, (2) alternative name from other catalogues,
(3) and (4) J2000 coordinates, (5) redshift, (6) the X--ray luminosity in the
0.1--2.4 keV band, (7) and (8) estimates for the virial mass M$_{\rm V}$
and virial radius R$_{\rm V}$ respectively (from the
L$_{\rm X}$ -- M$_{\rm V}$ correlation derived in CBS06).
\\
The location of the 27 clusters of the sample in the L$_{\rm X}$--z
plane for the whole REFLEX catalogue is reported in Fig. \ref{fig:fzx}.
\begin{figure}
\centering
\includegraphics[angle=0,width=8.5cm]{fig1.ps}
\caption{L$_{\rm X}$--z plot (0.1--2.4 keV) for the REFLEX clusters. Open red
circles show the clusters selected for the GMRT observations of the present
project (marked with the symbol $\surd$ in Table 1; see Section \ref{sec:obs}
for details); filled blue triangles indicate those clusters
(marked with $^{\star}$
in Table 1) which are known to host a radio halo from the literature,
i.e. A\,2744 (Govoni et al. \cite{govoni01}), A\,1300 (Reid et al.
\cite{reid99}) and A\,2163 (Herbig \& Birkinshaw \cite{herbig94}; Feretti
et al. \cite{feretti01}); filled green squares indicate the clusters of the
sample belonging to the GMRT Cluster Key Project (P.I. Kulkarni).
The light blue dashed region is the one surveyed in
our project.}
\label{fig:fzx}
\end{figure}
\section{Radio observations}\label{sec:obs}
From the sample given in Table 1 we selected all clusters
with no radio information available in the literature at the time
our GMRT proposal was written. We also excluded all clusters belonging
to the GMRT Cluster Key Project (P.I. Kulkarni), leaving us with
a total of 18 clusters, marked with the symbol $\surd$ in
Table 1.
\\
From the list of marked clusters, 11 were given higher priority and
were observed with the GMRT during a 27--hour run allocated in January 2005.
Table \ref{tab:obs} reports the following information: cluster name,
half--power beamwidth (HPBW) of the full array of the observations (arcsec),
total time on source (minutes) and rms (1$\sigma$ in $\mu$Jy b$^{-1}$)
in the full resolution image.
\\
Five clusters listed in Table 1 were observed with the GMRT in
a second observing run carried out in October 2005, i.e. A\,2813, A\,2485,
A\,2895, RXCJ\,1115.8+0129 and RXCJ\,1512.2--2254; finally the two remaining
clusters A\,2645 and A\,2667 will be observed in August 2006.
They will all be presented in a future paper (Venturi et al.
in preparation).
\\
\begin{table*}
\caption[]{GMRT observations.}
\begin{center}
\begin{tabular}{lccc}
\hline\noalign{\smallskip}
Cluster & Beam, PA & Obs. time & rms \\
& (full array) $^{\prime \prime} \times^{\prime \prime}$, $^{\circ}$&
min & $\mu$Jy b$^{-1}$\\
\noalign{\smallskip}
\hline\noalign{\smallskip}
A\,2697 & 8.5$\times$5.0, --83 & 90 & 80 \\
A\,141 & 7.7$\times$7.4, 75 & 150 & 100 \\
A\,209 & 8.0$\times$5.0, 64 & 240 & 60 \\
A\,3088 & 8.0$\times$7.0, 40 & 190 & 65 \\
A\,521 & 8.6$\times$4.0, 57 & 210 & 35 \\
A\,3444 & 7.6$\times$4.9, 19 & 120 & 67 \\
RXCJ\,1314.4--2515 & 8.0$\times$5.0, 15 & 150 & 65 \\
S\,780 & 7.5$\times$5.0, 25 & 80 & 70 \\
RXCJ\,2003.5--2323 & 6.9$\times$5.0, --3 & 240 & 40 \\
A\,2537 & 10.3$\times$6.0, 67 & 150 & 60 \\
A\,2631 & 9.2$\times$6.3, --77 & 240 & 50 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\label{tab:obs}
\end{table*}
The observations were carried out at 610 MHz, using simultaneously
two 16 MHz bands (upper side band, USB, and lower side band, LSB),
for a total of 32 MHz. Left and
right polarization were recorded for each band.
The observations
were carried out in spectral line mode, with 128 channels per
band, and a spectral resolution of 125 kHz/channel.
The data reduction and analysis were carried out with the NRAO
Astronomical Image Processing System (AIPS) package.
In order to reduce the size of the dataset, after bandpass calibration
the central 94 channels were averaged to 6 channels of $\sim$ 2 MHz each.
For each source the USB and LSB datasets, as well as the datasets
taken in different days, were calibrated and reduced separately,
then the final images from each individual dataset were combined in the
image plane to obtain the final image. Wide--field imaging
was adopted in each step of the data reduction.
\\
For each cluster we produced images over a wide range of resolutions,
in order to fully exploit the information GMRT can provide.
We point out that the nominal largest detectable structure provided
by the GMRT at 610 MHz is 17$^{\prime}$. This value ensures the possible
detection of the extended radio sources we are searching for,
since the angular scale covered by a 1 Mpc--size structure is
$\sim~5^{\prime}$ at z=0.2 and $\sim~3^{\prime}$ at z=0.4.
\\
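The angular scales quoted above follow directly from the angular diameter distance of the assumed cosmology ($H_0$=70~km~s$^{-1}$~Mpc$^{-1}$, $\Omega_m$=0.3, $\Omega_{\Lambda}$=0.7). A minimal, self-contained sketch (our own numerical integration, not code used for the survey) is:

```python
import math

H0, OM, OL = 70.0, 0.3, 0.7          # cosmology assumed in this paper
DH = 299792.458 / H0                 # Hubble distance [Mpc]

def angular_diameter_distance(z, steps=10000):
    """Angular diameter distance [Mpc] in flat LCDM, from trapezoidal
    integration of dz / E(z) for the comoving distance."""
    dz = z / steps
    total = 0.0
    for i in range(steps + 1):
        e = math.sqrt(OM * (1.0 + i * dz) ** 3 + OL)
        total += (0.5 if i in (0, steps) else 1.0) / e
    return DH * total * dz / (1.0 + z)

def arcmin_per_mpc(z):
    """Angle [arcmin] subtended by 1 Mpc (proper) at redshift z."""
    return math.degrees(1.0 / angular_diameter_distance(z)) * 60.0

# 1 Mpc subtends ~5' at z = 0.2 and ~3' at z = 0.4, both comfortably
# below the ~17' largest detectable structure of the GMRT at 610 MHz.
# (Linear sizes quoted for H0 = 50 shrink by a factor 50/70 when
# rescaled to H0 = 70, hence the ~700 kpc of the footnote in Sect. 1.)
```

The same routine reproduces the plate scales used later in the paper, e.g. 1$^{\prime\prime}$ = 3.377 kpc at the redshift of A\,209.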
The sensitivity of our
observations (1$\sigma$ level) is in the range 35 -- 100 $\mu$Jy for
the full resolution images (see Table \ref{tab:obs}), which were
obtained by means of uniform weighting.
The spread in the noise level depends most critically
on the total time on source, on the total bandwidth available (in a few
cases only one portion of the band provided useful data,
see individual clusters in \textsection \ref{sec:results}),
and on the presence of strong sources in the imaged field.
Slightly lower values for the noise level are obtained for the low resolution
images (see Section \ref{sec:results} and figure captions), which were made
using natural weighting.
\\
The average residual amplitude errors in our data are of the order of
\raise 2pt \hbox {$<$} \kern-1.1em \lower 4pt \hbox {$\sim$}$~$5\%.
\\
\section{Results}\label{sec:results}
\medskip\noindent
\begin{table*}[t]
\caption[]{Parameters of the extended cluster radio sources.}
\begin{center}
\begin{tabular}{lcrclrl}
\hline\noalign{\smallskip}
Cluster & Source Type & S$_{\rm 610~MHz}$ & logP$_{\rm 610~MHz}$ & LAS & LLS & L$_1$/L$_2$ \\
& & mJy & W Hz$^{-1}$ & arcmin & kpc & \\
\noalign{\smallskip}
\hline\noalign{\smallskip}
A\,209 & Giant Halo & 24.0 $\pm$ 3.6 & 24.46 & $\sim$ 4 & $\sim$ 810 & $\sim$ 2 \\
A\,521 & Relic & 41.9 $\pm$ 2.1 & 24.91 & $\sim$ 4 & $\sim$ 930 & $\sim$ 4.5 \\
RXCJ\,1314.4--2515 & Western Relic & 64.8 $\pm$ 3.2 & 25.03 & $\sim$ 4 & $\sim$ 910 & $\sim$ 3 \\
& Eastern Relic & 28.0 $\pm$ 1.4 & 24.67 & $\sim$ 4 & $\sim$ 910 & $\sim$ 4.3 \\
& Halo & 10.3 $\pm$ 0.3 & 24.22 & $\sim$ 2 & $\sim$ 460 & $\sim$ 1.5 \\
RXCJ\,2003.5--2323 & Giant Halo & 96.9 $\pm$ 5.0 & 25.49 & $\sim$ 5 & $\sim$ 1400 & $\sim$ 1.3 \\
A\,3444 & Central Galaxy & 16.5 $\pm$ 0.8 & 24.51 & $\sim$0.7 & $\sim$ 165 & $\sim$ 1.4 \\
& surrounding Halo & 10.0 $\pm$ 0.8 & 24.29 & $\sim$1.5 & $\sim$ 350 & $\sim$ 1.4 \\
\hline\noalign{\smallskip}
\end{tabular}
\end{center}
\label{tab:param}
\end{table*}
\\
Cluster--scale radio emission, either in the form of a radio halo or of a relic,
was detected in 4 clusters of the sample (\textsection \ref{sec:halos});
in one cluster extended emission was found around the dominant
galaxy (\textsection \ref{sec:minihalo});
for the remaining six clusters no hint of extended emission is present at
the sensitivity level of the observations (\textsection \ref{sec:noext}).
Details on each cluster are given in this Section. In Appendix A we report the
610 MHz radio contours within the virial radius for all the observed clusters.
All the images were convolved with a HPBW of 15.0$^{\prime \prime} \times
12.0^{\prime \prime}$, except for A\,209, RXCJ\,1314.4--2515 and
RXCJ\,2003.5--2323, where a different resolution was chosen in
order to complement the information provided in Figs. 2, 3, 4, 5 and 6.
Table \ref{tab:param} reports the
observational information for the detected cluster--scale radio
sources. The last column in the table, L$_1$/L$_2$, provides the ratio
between the major (LAS) and minor axis of the extended emission.
The linear size and flux densities were derived from the
3$\sigma$ contour level.
\begin{figure*}
\hspace{0.5truecm}\includegraphics[angle=0,width=7.6cm, height=6.5cm]{fig2a.ps}
\hspace{1.5truecm}\includegraphics[angle=0,width=7.2cm, height=6.5cm]{fig2b.ps}
\caption{Left -- GMRT 610 MHz radio contours for the A\,209 cluster
superposed on the POSS--2 optical plate.
The 1$\sigma$ level in the image is 60 $\mu$Jy b$^{-1}$. Contours are
0.3$\times(\pm$ 1,2,4,8,16...) mJy b$^{-1}$. The
HPBW is $8.0^{\prime\prime} \times 5.0^{\prime\prime}$, p.a.
$64^{\circ}$. Right -- Naturally weighted image of the same sky region at
the resolution of $18.0^{\prime\prime} \times 17.0^{\prime\prime}$, p.a.
$0^{\circ}$. The rms (1$\sigma$) in the image is 60 $\mu$Jy b$^{-1}$,
contours are 0.18$\times(\pm$ 1,2,4,8,16...) mJy b$^{-1}$.}
\label{fig:a209_opt}
\end{figure*}
\subsection{Clusters with halos, giant halos and relics}\label{sec:halos}
\subsubsection{Abell 209}
Abell 209 (RXCJ\,0131.8--1336) is a richness R=3 cluster at
z=0.2060 (1$^{\prime\prime}$=3.377 kpc). A high
X--ray temperature is reported in the literature.
Rizza et al. (\cite{rizza98}) estimated a mean gas
temperature of kT$\sim$10 keV from the ROSAT
X--ray luminosity; this
value was confirmed by Mercurio et al. (\cite{mercurio04a})
from the analysis of {\it Chandra} archive data.
\\
The cluster has been extensively studied at optical (Mercurio et al.
\cite{mercurio03}, Mercurio et al. \cite{mercurio04a} and \cite{mercurio04b},
Haines et al. \cite{haines04}) and X--ray wavelengths (ROSAT--HRI,
Rizza et al. \cite{rizza98}, Mercurio et al. \cite{mercurio04a}).
These studies show that A\,209 is far from a relaxed dynamical stage, and
it is undergoing a strong dynamical evolution.
In particular the X--ray and the optical data suggest that A\,209 is
experiencing a merging event between two or more components.
\\
Mercurio et al. (\cite{mercurio03}) provided an estimate of the virial
mass of the cluster, $M_{\rm V} = 2.25^{+0.63}_{-0.65}\times 10^{15}M_{\odot}$,
consistent with our estimate given in Table 1 if we account for the
uncertainty of our value (see Sect. \ref{sec:sample}).
\\
The cluster merging scenario is confirmed by the weak lensing analysis
carried out by Dahle et al. (\cite{dahle02}), who found two significant peaks
in the mass distribution of the cluster: the largest one is close to the central
cD galaxy, and the secondary mass peak is located at about 5 arcmin
north of the cluster centre and associated to a peak in the optical galaxy
distribution.
\\
\\
610 MHz contours of the A\,209 emission within the virial radius
are given in Fig. \ref{fig:a209_lr}, while Fig. \ref{fig:a209_opt}
shows the central part of the field at two different resolutions
superposed on the POSS--2 image. Inspection of Fig. \ref{fig:a209_lr}
and of the right panel in Fig. \ref{fig:a209_opt}
coupled with flux density measurements, suggests the presence
of extended emission around the individual central cluster
radio galaxies.
\\
In order to highlight such emission we subtracted all the individual
sources visible in the full resolution image from the u--v data, and
convolved the residuals with a
HPBW with size $32.0^{\prime\prime} \times 30.0^{\prime\prime}$.
The image is reported in Fig. \ref{fig:a209_halo}.
\begin{figure}
\hspace{0.8truecm}\includegraphics[angle=0,width=7.2cm, height=6.7cm]{fig3.ps}
\caption{Radio contours over grey scale of the A\,209 cluster
radio halo after subtraction of the individual radio sources (see left panel
of Fig. \ref{fig:a209_opt} and Sect. 4.1.1 in the text). The resolution of this
image is $32.0^{\prime\prime} \times 30.0^{\prime\prime}$,
p.a. $30^{\circ}$. The rms (1$\sigma$) in the image is 0.15 mJy b$^{-1}$,
contours are 0.35$\times(\pm$ 1,2,4,8,16...) mJy b$^{-1}$.}
\label{fig:a209_halo}
\end{figure}
The adopted procedure indeed confirms the existence of cluster
scale extended emission. The possible presence of a radio halo in
A\,209 was suggested by Giovannini et al. (\cite{giovannini99})
from inspection of the NRAO VLA Sky Survey (NVSS), and confirmed
in Giovannini et al. (\cite{gg06}) on the basis of 1.4 GHz VLA observations.
Our GMRT image in Fig. \ref{fig:a209_halo}
is in partial agreement with the size and morphology of the VLA 1.4 GHz
image shown by those authors. The largest angular size (LAS)
is $\sim 4^{\prime}$, i.e. $\sim$ 810 kpc, therefore
we classify the source as a {\it giant} radio halo. Its total
flux density, measured after subtraction of the individual radio sources
(see left panel of Fig. \ref{fig:a209_opt}) is
S$_{\rm 610~MHz} = 24.0 \pm 3.6$ mJy, which implies a total radio power of
logP$_{\rm 610~MHz}$ (W/Hz)= 24.46.
The difficulty in subtracting the extended individual sources (in particular
the head--tail radio galaxy located just South of the cluster centre) is reflected
both in the large error associated with the flux density measurement,
and in the unusual brightness distribution of the radio halo, characterised
by two peaks of emission.
\\
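The quoted power follows from the standard flux-to-power conversion $P = 4\pi D_L^2 S$. A hedged sketch (our own, neglecting the k--correction factor $(1+z)^{\alpha-1}$, i.e. effectively assuming $\alpha \simeq 1$) recovers logP$_{\rm 610~MHz}$ = 24.46 from S = 24.0 mJy at z = 0.206:

```python
import math

H0, OM, OL = 70.0, 0.3, 0.7
DH = 299792.458 / H0                 # Hubble distance [Mpc]
MPC_M = 3.0857e22                    # metres per megaparsec

def luminosity_distance(z, steps=10000):
    """Luminosity distance [Mpc] in flat LCDM (trapezoidal integration)."""
    dz = z / steps
    total = sum((0.5 if i in (0, steps) else 1.0) /
                math.sqrt(OM * (1.0 + i * dz) ** 3 + OL)
                for i in range(steps + 1))
    return DH * total * dz * (1.0 + z)

def log_radio_power(flux_mjy, z):
    """log10 of the monochromatic radio power [W/Hz] for a flux density
    in mJy (1 mJy = 1e-29 W m^-2 Hz^-1); no k-correction applied."""
    dl_m = luminosity_distance(z) * MPC_M
    return math.log10(4.0 * math.pi * dl_m ** 2 * flux_mjy * 1e-29)

# S = 24.0 mJy at z = 0.206 (A209) gives logP ~ 24.46, as quoted.
```

The same conversion reproduces the powers listed in Table \ref{tab:param} for the other detections to within a few hundredths of a dex.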
Further observations are already in progress with the GMRT, in
order to better image and study this source.
\subsubsection{Abell 521}
A detailed study of A\,521 (RXCJ\,0454.1--1014, z=0.2475,
1$^{\prime\prime}$=3.875 kpc) has already been published by Giacintucci et
al. (\cite{giacintucci06}). This merging cluster hosts a
radio relic located at the border of the X--ray emission.
We discussed the origin of this source in the light of current scenarios
for the formation of radio relics, i.e.
acceleration of electrons from the thermal pool or compression
of fossil radio plasma, both through merger shock waves.
We refer to that paper for the images and radio information
and will include A\,521 in the discussion in Section \ref{sec:discussion}.
All values and observational parameters reported in
Tables \ref{tab:obs} and \ref{tab:param} are taken from Giacintucci
et al. (\cite{giacintucci06}).
\subsubsection{RXCJ\,1314.4$-$2515}
\begin{figure*}
\centering
\hspace{-0.5truecm}\includegraphics[angle=0,width=7.4cm, height=6.4cm]{fig4a_babb.ps}
\hspace{1truecm}\includegraphics[angle=0,width=8cm, height=6.5cm]{fig4b.ps}
\caption{Left --
GMRT 610 MHz radio contours for the cluster RXCJ\,1314.4--2515
superposed on the POSS--2 optical plate.
The 1$\sigma$ level in the image is 60 $\mu$Jy b$^{-1}$.
Contours are 0.18$\times(\pm$1,2,4,8,16...) mJy b$^{-1}$. The
HPBW is $8.0^{\prime\prime} \times 5.0^{\prime\prime}$, p.a.
$15^{\circ}$. The eastern and the western relics are labelled
as E--R and W--R respectively, and the individual point sources
in the relics/halo region are indicated as A and B.
Right -- GMRT 610 MHz radio contours for the cluster RXCJ\,1314.4--2515
superposed on the POSS--2 optical plate.
The 1$\sigma$ level in the image is 60 $\mu$Jy b$^{-1}$.
Contours are 0.2$\times(\pm$1,2,4,8,16...) mJy b$^{-1}$. The
HPBW is $20.0^{\prime\prime} \times 15.0^{\prime\prime}$, p.a.
$39^{\circ}$. The eastern and the western relics are labelled
as E--R and W--R respectively, RH indicates the radio halo, and
the individual point sources in the relics/halo region are
indicated as A and B.}
\label{fig:rxcj1314_rel2}
\end{figure*}
Evidence of a disturbed dynamical status for the cluster
RXCJ\,1314.4$-$2515 (z=0.2439, 1$^{\prime\prime}$=3.806 kpc)
is reported in the literature. The redshift distribution of the
cluster galaxies clearly shows a bimodal structure,
with two peaks separated in velocity space by $\sim$ 1700 km s$^{-1}$
(Valtchanov et al. \cite{valtchanov02}). The X--ray morphology of the
cluster is also bimodal, and it is elongated along the
E--W direction, the western peak being the brightest
(Valtchanov et al. \cite{valtchanov02}).
\\
This cluster was observed with the VLA at 1.4 GHz by Feretti
et al. (\cite{feretti05}), who revealed the presence of a radio halo at the
cluster centre and two peripheral sources, which they classified as relics.
\\
Fig. \ref{fig:rxcj1314_lr} reports the contour image of our 610 MHz
observations at the resolution of $15.0^{\prime\prime}
\times 13.0^{\prime\prime}$ within the virial radius. The central part
of the cluster is given in the left and right panel of
Fig. \ref{fig:rxcj1314_rel2}, both superposed on the POSS--2 plate.
The left panel shows the full resolution image,
while in the right panel lower resolution contours are displayed.
Fig. \ref{fig:rxcj1314_halo_asca} shows the same region
overlaid on the X--ray ASCA image.
Our images confirm that RXCJ\,1314.4--2515 has a complex radio morphology,
with the presence of three different regions of extended emission on the cluster
scale.
\begin{figure}
\centering
\hspace{-0.5truecm}\includegraphics[angle=0,width=8.0cm]{fig5_babb.ps}
\caption{GMRT 610 MHz radio contours for the cluster
RXCJ\,1314.4--2515 superposed on the X--ray archive ASCA image (colour).
The 1$\sigma$ level in the image is 60 $\mu$Jy b$^{-1}$.
Contours are 0.18$\times(\pm$1,2,4,8,16...) mJy b$^{-1}$. The
HPBW is $25.0^{\prime\prime} \times 22.0^{\prime\prime}$, p.a.
$15^{\circ}$.}
\label{fig:rxcj1314_halo_asca}
\end{figure}
\\
Two parallel features are easily visible in Figs. \ref{fig:rxcj1314_lr},
\ref{fig:rxcj1314_rel2} and \ref{fig:rxcj1314_halo_asca}.
They are separated by $\sim 6^{\prime}$ and extend in the SE--NW direction
for approximately 4$^{\prime}$ (i.e. $\sim$ 910 kpc at the cluster redshift).
The remarkable superposition of the low resolution radio image with the
ASCA image in Fig. \ref{fig:rxcj1314_halo_asca} clearly
shows that these two sources are located at the border of the detected
X--ray emission.
\\
The overall morphology of these two features, coupled with their
location with respect to the intracluster gas, suggests that they
are radio relics, as also discussed in Feretti et al. (\cite{feretti05}),
who ruled out any association with individual galaxies.
In the following we will
refer to the eastern and the western relics as E--R and
W--R respectively, as also labelled in Fig.
\ref{fig:rxcj1314_rel2}.
The morphology and flux density ratio of the two relics are
consistent with the 1.4 GHz data in Feretti et al. (\cite{feretti05}).
Their flux densities at 610 MHz are S$_{\rm 610~MHz} = 64.8 \pm 3.2$ mJy and
S$_{\rm 610~MHz} = 28.0 \pm 1.4$ mJy for W--R and E--R respectively.
The value given for E--R does not include the southernmost pointlike source
A (Fig. \ref{fig:rxcj1314_rel2}).
In order to derive the total spectral index of W--R and E--R between
1.4 GHz and 610 MHz we included also the contribution of source A to the
flux density of
E--R, for a consistent comparison with Feretti et al. (\cite{feretti05}), and
obtained 32.8 mJy. Our flux density measurements lead to the same
value for the spectral index in both features. In particular,
$\alpha_{\rm 610~MHz}^{\rm 1.4~GHz}$(W--R) = 1.40 $\pm$ 0.09 and
$\alpha_{\rm 610~MHz}^{\rm 1.4~GHz}$(E--R) = 1.41 $\pm$ 0.09.
\\
\\
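The quoted indices follow from the two-point definition $\alpha = \log(S_{\nu_1}/S_{\nu_2})/\log(\nu_2/\nu_1)$ with $S \propto \nu^{-\alpha}$. A minimal sketch; the 1.4 GHz flux densities used below ($\sim$20.3 and $\sim$10.2 mJy) are back-computed from the quoted indices purely for illustration, and are not the published Feretti et al. (\cite{feretti05}) measurements:

```python
import math

def spectral_index(s1, nu1, s2, nu2):
    """Two-point spectral index alpha, defined via S ~ nu**(-alpha).
    Fluxes s1, s2 at frequencies nu1 < nu2 (any consistent units)."""
    return math.log(s1 / s2) / math.log(nu2 / nu1)

# W--R: measured 64.8 mJy at 610 MHz; ~20.3 mJy at 1.4 GHz is the value
# implied by the quoted alpha = 1.40 (illustrative, not a measurement).
alpha_wr = spectral_index(64.8, 610.0, 20.3, 1400.0)

# E--R (including source A, 32.8 mJy at 610 MHz); ~10.2 mJy is implied
# by the quoted alpha = 1.41 (again illustrative).
alpha_er = spectral_index(32.8, 610.0, 10.2, 1400.0)
```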
Figs. \ref{fig:rxcj1314_rel2} (right panel)
and \ref{fig:rxcj1314_halo_asca}
show that extended emission is present in the region between
W--R and E--R, consistent with the 1.4 GHz VLA images in Feretti
et al. (\cite{feretti05}), who classified
this feature as a radio halo. This source is referred to as RH in the right
panel of Fig. \ref{fig:rxcj1314_rel2}. It is spatially coincident with the
bulk of the optical galaxies (see Valtchanov et al. \cite{valtchanov02}) and
its largest angular size is $\sim~2^{\prime}$, corresponding to 460 kpc,
i.e. it is not a giant radio halo.
The radio halo seems to blend with the emission of the western relic
W--R, however it is difficult to say whether this is a true feature, since
projection effects are likely to play a role. Given the different
polarisation properties of radio halos and relics, polarisation information
would be necessary to investigate this issue.
We measured a flux density of S$_{\rm 610~MHz} = 10.3 \pm 0.3$ mJy for the
radio halo. No spectral index estimate between 610 MHz and 1.4 GHz can be
derived, due to the lack of a flux density value
at 1.4 GHz (Feretti et al. \cite{feretti05}).
\\
\\
\subsubsection{RXCJ\,2003.5--2323}\label{sec:rxcj2003}
\begin{figure*}
\centering
\hspace{-1truecm}\includegraphics[angle=0,width=8.1cm, height=6.6cm]{fig6a.ps}
\hspace{1truecm}\includegraphics[angle=0,width=8cm, height=6.7cm]{fig6b.ps}
\caption{Left -- Full resolution GMRT 610 MHz contours of the
central region of RXCJ\,2003.5--2323, superposed on the POSS--2
optical image. The resolution of the radio image is
$6.9^{\prime\prime} \times 5.0^{\prime\prime}$, p.a. $-0.3^{\circ}$,
the 1$\sigma$ level is 40 $\mu$Jy b$^{-1}$. Contours are
0.12$\times(\pm$1,2,4,8...) mJy b$^{-1}$.
Individual sources are labelled from A to H.
Right -- GMRT 610 MHz gray scale and radio contours of the
giant radio halo in RXCJ\,2003.5--2323 after subtraction of the
individual sources (from B to H in the left panel). The
HPBW is $32.0^{\prime\prime} \times 23.0^{\prime\prime}$, p.a.
$15^{\circ}$. Contours are 0.3$\times(\pm$1,2,4,8...) mJy b$^{-1}$.
The 1$\sigma$ level in the image is 100 $\mu$Jy b$^{-1}$.}
\label{fig:rxcj2003_halo}
\end{figure*}
RXCJ\,2003.5--2323 is the most distant cluster in our
sample, with z=0.3171 (1$^{\prime\prime}$=4.626 kpc).
Little information is available in the literature.
The ROSAT All Sky Survey (RASS) image shows that the X--ray emission
is elongated along the NW--SE direction, which
might suggest a disturbed dynamical status for RXCJ\,2003.5--2323.
\\
Our GMRT 610 MHz observations show that it is the most striking
cluster among those observed thus far.
It hosts a {\it giant} radio halo, one of the largest known to date.
Its largest angular size is $\sim 5^{\prime}$, corresponding to $\sim$
1.4 Mpc.
Hints of the presence of this very extended radio halo were clear
already from inspection of the NRAO VLA Sky Survey (NVSS).
\\
The cluster radio emission within the cluster virial radius
is given in Fig. \ref{fig:rxcj2003}. The central part of the cluster is
shown in Fig. \ref{fig:rxcj2003_halo}.
The left panel shows a full resolution image
superposed on the POSS--2 optical image, to highlight the individual sources
(labelled from A to H). The sources with a clear optical counterpart (B to H) were
subtracted from the u--v data when producing the image shown in the right panel
of Fig. \ref{fig:rxcj2003_halo}, which we convolved with a larger beam
in order to highlight the low surface brightness emission. We did not subtract
A, since no optical counterpart is visible on the POSS--2; we therefore
consider this feature a peak in the radio halo emission.
One of the most striking features of this giant radio halo
is its complex morphology:
clumps and filaments are visible on angular scales of the
order of $\sim 1^{\prime}$ (clumps) and $\sim 2-3^{\prime}$ (filaments),
as is clear from Figs. \ref{fig:rxcj2003} and
\ref{fig:rxcj2003_halo} (right panel).
Unfortunately no deep X--ray images are available for this cluster,
therefore a combined radio and X--ray analysis cannot be carried out.
The only information we can derive from the RASS image of the cluster is
that the whole radio emission from the halo is embedded within the X--ray
emission, as shown in Fig. \ref{fig:rxcj2003_rass}.
\\
The total flux density of the radio halo (after subtraction of the point
sources) is S$_{\rm 610~MHz} = 96.9 \pm 5.0$ mJy, corresponding to
logP$_{\rm 610~MHz}$ (W/Hz) = 25.49.
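The conversion from flux density to monochromatic power follows the standard
relation sketched below, with $D_L$ the luminosity distance at z=0.3171 (the
k-correction term, for a spectrum $S_\nu \propto \nu^{-\alpha}$, is close to
unity at this redshift):

```latex
% Standard monochromatic radio power with k-correction:
\begin{equation}
P_{\rm 610~MHz} \;=\; 4\pi D_L^2 \, S_{\rm 610~MHz}\,(1+z)^{\alpha-1},
\end{equation}
% for S = 96.9 mJy at z = 0.3171 this gives logP (W/Hz) ~ 25.5,
% consistent with the value quoted above.
```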
\begin{figure}
\centering
\hspace{0.5truecm}\includegraphics[angle=0,width=6.5cm]{fig7.ps}
\caption{ROSAT All Sky Survey contours (black) of RXCJ\,2003.5--2323 overlaid
on the 610 MHz gray scale and contours (gray) of the radio halo.
The X--ray contours levels are logarithmically spaced by a factor of
$\sqrt 2$. The radio image is the same as right
panel of Fig. \ref{fig:rxcj2003_halo}. }
\label{fig:rxcj2003_rass}
\end{figure}
\subsection{Candidate extended emission in Abell 3444}\label{sec:minihalo}
Abell 3444 (RXCJ\,1023.8$-$2715, z=0.2542, 1$^{\prime\prime}$=3.924 kpc) was
indicated as a possible cooling core cluster
by L\'emonon (\cite{lemonon99}) and Matsumoto (\cite{matsumoto01}) on the basis
of the analysis of ASCA data, though at limited significance.
The X--ray ASCA image shows that the inner part of the cluster is elongated
along the SE--NW direction.
\\
No radio information is reported in the literature. Unfortunately, due to
calibration problems, we could use only the USB of our dataset to image
this cluster. Our GMRT 610 MHz image
of the radio emission within the cluster virial radius is reported
in Fig. \ref{fig:a3444}, and shows that the radio emission is dominated by a
chain of individual sources, all with optical counterparts on the POSS--2.
The alignment of the chain of radio galaxies is in agreement with the inner
elongation of the archive ASCA X--ray image.
\\
A radio--optical overlay of the central part of the field is given
in Fig. \ref{fig:a3444_opt} (Left). The extended radio galaxy at
the north--western end of the chain is associated with the dominant
cluster galaxy (right panel in Fig. \ref{fig:a3444_opt}).
Its morphology is complex. Bent emission in the
shape of a wide angle tail is clear in the inner part of the source,
surrounded by extended emission. At least a couple of
very faint objects are visible in the same region of the extended
radio emission, so it is unclear if we are dealing with extended
emission associated with the dominant cluster galaxy, or if this
feature is the result of a blend of individual sources.
Under the assumption that all the emission detected within the
3$\sigma$ contour (left panel of Fig. \ref{fig:a3444_opt}) is associated
with the dominant cluster galaxy, we measured a flux density
S$_{\rm 610~MHz} = 16.5 \pm 0.8$ mJy,
which corresponds to logP$_{\rm 610~MHz}$(W/Hz) = 24.51. The largest
angular size of the radio source is $\sim 40^{\prime\prime}$, hence
the linear size is $\sim$ 165 kpc.
\\
Both panels of Fig. \ref{fig:a3444_opt} suggest that emission on
a larger scale may be present around the central radio source. Indeed
we measured a flux density of S$_{\rm 610~MHz} = 10.0\pm 0.8$ mJy on
an angular scale of $\sim 1.5^{\prime}$, i.e. $\sim$ 350 kpc.
\\
This situation is reminiscent of the class
of core--halo sources, where extended emission surrounds a radio
component obviously associated with a galaxy.
Core--halo sources are usually located in cooling core clusters.
Some well--known examples are 3C\,317 (Zhao, Sumi \& Burns \cite{zhao93}),
3C84 (B{\"o}hringer et al. \cite{boeringer93}), PKS 0745--191 (Baum \&
O'Dea \cite{baum91}).
\begin{figure*}
\centering
\hspace{-0.5truecm}\includegraphics[angle=0,width=9.8cm,height=7.5cm]{fig8a.ps}
\hspace{1truecm}\includegraphics[angle=0,width=7.6cm]{fig8b.ps}
\caption{Left -- GMRT 610 MHz radio contours for the cluster A\,3444
superposed on the POSS--2 optical plate.
The 1$\sigma$ level in the image is $\sim 50~\mu$Jy b$^{-1}$.
Contours are 0.15$\times(\pm$1,2,4,8,16...) mJy b$^{-1}$ (3$\sigma$).
The HPBW is $23.2^{\prime\prime} \times 16.1^{\prime\prime}$, p.a.
$37.6^{\circ}$. Right -- High resolution zoom on the central cluster galaxy.
The HPBW is $7.6^{\prime\prime} \times 4.9^{\prime\prime}$,
p.a. $19^{\circ}$.
Contours are given starting from the 3$\sigma$ level:
0.20$\times(\pm$1,2,4,8...).}
\label{fig:a3444_opt}
\end{figure*}
\subsection{Galaxy clusters without extended emission}\label{sec:noext}
For the remaining six clusters our 610 MHz GMRT observations did
not show any indication of possible extended emission at the noise
level of the final images.
\subsubsection{S\,780}
S\,780 (RXCJ\,1459.4--1811, z=0.2357, 1$^{\prime\prime}$=3.952 kpc) is
the most X--ray luminous and most massive cluster
among those presented in this paper.
No information is available in the literature.
Inspection of the ROSAT archive indicates that the X--ray emission
is elongated in the E--W direction.
\\
Fig. \ref{fig:s0780} reports the 610 MHz contours of the S\,780 field
within the virial radius.
The radio emission from S\,780 is typical of a very active cluster,
with a number of cluster--type radio galaxies.
Beyond the dominant central radio source, one head--tail radio
galaxy is clearly visible close to the cluster centre, one wide--angle tail
is located at $\sim~6^{\prime}$ from the cluster centre (well within
the virial radius) and one FRII radio galaxy (Fanaroff \& Riley \cite{fr74})
with distorted jets is located at $\sim~8^{\prime}$ from the cluster centre
(in the S--E direction). A few more radio
sources in the cluster field are optically identified.
A visual inspection of the optical counterparts of all these radio sources
suggests they have similar optical magnitudes.
\\
Radio--optical overlays are given in Fig. \ref{fig:s0780_opt}. The
left panel shows the central part of the cluster superposed on the
POSS--2 optical frame, and the right panel is a high resolution zoom of
the central cluster galaxy. The radio galaxy shows a compact component
coincident with the nucleus of the associated galaxy, extended emission
in the eastern direction and a filament aligned South--East.
The total angular size is $\sim 50^{\prime\prime}$, corresponding to
a largest linear size LLS $\sim$ 200 kpc. The flux density is
S$_{\rm 610~MHz} = 135.9 \pm 6.8$ mJy, i.e.
logP$_{\rm 610~MHz}$(W/Hz) = 25.32. Sources A and B highlighted in the right
panel of Fig. \ref{fig:s0780_opt} were not included in the flux density
measurement. The flux density of the filament just South of the central
radio source is S$_{\rm 610~MHz} = 3.1 \pm 0.2$ mJy.
No indication of residual flux density is present in the cluster centre.
\\
\\
\begin{figure*}
\centering
\hspace{-0.5truecm}\includegraphics[angle=0,width=9.0cm,height=7.8cm]{fig9a.ps}
\hspace{1truecm}\includegraphics[angle=0,width=8.3cm]{fig9b.ps}
\caption{Left -- GMRT 610 MHz radio contours for the cluster S\,0780
superposed on the POSS--2 optical plate.
The 1$\sigma$ level in the image is 65$~\mu$Jy b$^{-1}$.
Contours are 0.20$\times(\pm$1,2,4,8,16...) mJy b$^{-1}$.
The HPBW is $20.9^{\prime\prime} \times 15.9^{\prime\prime}$, p.a.
$45.9^{\circ}$. Right -- High resolution zoom on the central cluster
galaxy. The HPBW is $7.5^{\prime\prime} \times 5.0^{\prime\prime}$, p.a.
$25^{\circ}$. Contours are 0.19$\times(\pm$1,2,4,8,16...)
mJy b$^{-1}$ (first contour is 3$\sigma$).}
\label{fig:s0780_opt}
\end{figure*}
\subsubsection{Abell 2697}
Very little information is available in the literature for
A\,2697 (RXCJ\,0003.1--0605, z=0.2320, 1$^{\prime\prime}$=3.698 kpc).
Archive X--ray ROSAT and ASCA images show that the hot intracluster
gas has a fairly regular distribution.
\\
For this cluster only one portion of the band (USB) was available. We
imaged the cluster in a range of resolutions,
reaching a 1$\sigma$ noise level of
$\sim 80~\mu$Jy b$^{-1}$ in each image.
The radio field is dominated by a head--tail galaxy.
Radio contours are reported in Fig. \ref{fig:a2697} in the Appendix.
\\
No cluster--type extended feature is visible at the sensitivity
level of the images, and no significant flux density from positive
residuals was found by integrating over a wide region of the
cluster centre.
\\
\subsubsection{Abell 141}
The X--ray emission of A\,141 (RXCJ\,0105.5--2439, z=0.2300,
1$^{\prime\prime}$=3.674 kpc) is bimodal. The archive X--ray ROSAT
images show that the North--South elongation of the ASCA image is the
result of two components, the northern one being the brightest and
largest. The same orientation was also found in the
distribution of the cluster galaxies by Dahle et al. (\cite{dahle02}), who
concluded that the overall optical analysis is suggestive of recent
merger activity. Those authors also reported evidence of weak lensing.
\\
High resolution radio imaging aimed at the detection of cluster radio
galaxies was carried out with the VLA--A at 1.4 GHz by Rizza et al.
(\cite{rizza03}).
\\
For this cluster only one portion of the observing band (USB) was
available.
Our GMRT observations of A\,141 revealed neither the presence of diffuse
emission at the level of $\sim 100~\mu$Jy b$^{-1}$, nor unusually high residuals.
Radio contours are reported in Fig. \ref{fig:a141} in the Appendix.
\subsubsection{Abell 3088}
Very little information is available in the literature for
A3088 (RXCJ\,0307.0$-$2840, z=0.2537, 1$^{\prime\prime}$=3.952 kpc).
It is a richness 2 galaxy cluster with a
very regular and symmetric X--ray morphology. On the basis of XMM--Newton
observations, Zhang et al. (\cite{zhang06}) reported a gas temperature
kT=6.4$\pm$0.3 keV and classified it as a ``single dynamical state'' cluster
with a cooling core (Finoguenov, B{\"o}hringer \& Zhang \cite{finogue05}).
\\
Our GMRT 610 MHz observations show that the field has only a few radio sources,
with a lack of positive residual flux density in the central cluster region
and no hints of diffuse emission from the cluster at the detection level
of our images, i.e. 1$\sigma~\sim~65~\mu$Jy b$^{-1}$.
Contours of the radio emission within the cluster virial radius
are shown in Fig. \ref{fig:a3088}.
\subsubsection{Abell 2537}
Little information is available in the literature for
A\,2537 (RXCJ\,2308.2--0211, z=0.2966, 1$^{\prime\prime}$=4.419 kpc).
Archive HST observations show the presence of several red and blue arcs,
and Dahle et al. (\cite{dahle02}) report evidence of weak lensing.
The cluster was observed in the X--ray band by XMM--Newton
and was classified as ``single dynamical state'', with gas temperature
kT=7.9$\pm$0.7 keV (Zhang et al. \cite{zhang06}).
A secondary X--ray peak is present at $\sim~7^{\prime}$ from the
cluster gas concentration.
\\
For this cluster only the USB provided useful data.
The 610 MHz radio emission from the cluster, shown in Fig. \ref{fig:a2537},
is dominated by a tailed radio galaxy located at the cluster centre.
Very few other radio sources are detected above the 5$\sigma$ level of
the image. No hint of extended emission is present in the field
at the level of $\sim 60~ \mu$Jy b$^{-1}$ (1$\sigma$), and no
high positive flux density residuals were detected over the central
cluster region.
\\
\subsubsection{Abell 2631}
Little information is available in the literature for the rich cluster
A\,2631 (RXCJ\,2337.6+0016, R=3, z=0.2779, 1$^{\prime\prime}$=4.221 kpc).
Archive ROSAT X--ray images are available for A\,2631, which show a
complex morphology. Based on XMM--Newton observations, Zhang et al.
(\cite{zhang06}) classified it as ``offset centre'', with varying
isophote centroids on different angular scales, and reported
a gas temperature kT=9.6$\pm$0.3 keV. Finoguenov et al. (\cite{finogue05})
interpret the XMM properties of this cluster in terms of a late stage
of a core disruption.
The cluster was observed with the VLA--A at 1.4 GHz
(Rizza et al. \cite{rizza03}).
\\
Our GMRT observations of this cluster were spread over two days,
however on both days only one portion of the band (USB) was available.
The 610 MHz radio emission of A\,2631 within the virial radius is
shown in Fig. \ref{fig:a2631}. It is dominated by a central
tailed radio galaxy and all the remaining sources above the 5$\sigma$
level are located South of the cluster centre. No signs of extended emission
are present in the field at the rms level of $\sim 50~ \mu$Jy b$^{-1}$
(1$\sigma$), and no positive residuals were found by integrating over the
central region of the cluster.
\\
\section{Discussion and conclusions}\label{sec:discussion}
Our 610 MHz GMRT radio halo survey has been designed in
order to statistically investigate the connection
between cluster merger phenomena and the presence of cluster--scale
radio emission. In particular, our main goal is to derive the fraction
of massive clusters (i.e. $M \ge 10^{15} M_{\odot}$) with
giant radio halos in the redshift range 0.2$<$z$<$0.4, in order to constrain
the expectations made by CBS04 and CB05 in the framework of the
particle re--acceleration model.
The total cluster sample consists of two sub--samples of massive clusters
extracted from the REFLEX and extended BCS catalogues, and includes a total
of 50 clusters.
\\
The cluster sample presented here (see Table 1) includes 27 REFLEX
clusters, eleven of which were observed in a first run of GMRT observations
carried out in January 2005. If we consider the literature data, information
is now available for 15 of the 27 objects.
The most relevant results we obtained, as well as the status of the observations for
the remaining clusters in the sample are summarized below.
\begin{itemize}
\item[{\it (a)}] Two new giant radio halos were found,
i.e. A\,209 (also reported in Giovannini et al. \cite{gg06} while
this paper was in preparation), and RXCJ\,2003.5--2323, discovered with the
present 610 MHz GMRT observations.
\item[{\it (b)}] A radio halo (LLS$\sim$460 kpc) was found in
RXCJ\,1314.4--2515.
\item[{\it (c)}] Two relics were found in the cluster RXCJ\,1314.4--2515,
and one in A\,521 (Giacintucci et al. \cite{giacintucci06}).
These three relics are impressive structures.
Their largest linear sizes are of the order of one Mpc, which suggests
that particle acceleration, most likely related to the hierarchical
formation of clusters and accretion processes, might be required to
account for their formation (e.g. Ensslin \& Br\"uggen \cite{eb02}).
The relic in A\,521 has already been
studied in detail. Here we just wish to mention that RXCJ\,1314.4--2515
is the third galaxy cluster known to date hosting two relics,
after A\,3667 (Roettiger, Burns \& Stone \cite{roetti99};
Johnston--Hollitt et al. \cite{jh02}) and A\,3376 (Bagchi et al. \cite{bagchi06}).
Furthermore, it is unique in hosting two relic sources and one radio halo,
and hence a challenge for our understanding of the connection between radio
halos, relics and the physics of cluster mergers.
\item[{\it (d)}] Extended emission on smaller scale (of the order of $\sim 350$ kpc)
was detected around the dominant galaxy
in A\,3444, whose radio morphology and monochromatic power are similar to those
of core--halo radio galaxies found at the centre of cooling core clusters.
\item[{\it (e)}] Three clusters in the sample host well--known giant
radio halos, i.e. A2744 (Govoni et al. \cite{govoni01}), A1300
(Reid et al. \cite{reid99}) and A2163 (Herbig \& Birkinshaw \cite{herbig94},
Feretti et al. \cite{feretti01}).
\item[{\it (f)}] No extended emission of any kind was detected at the level
of 50 -- 100 $\mu$Jy b$^{-1}$ in six of the 11 clusters observed by us and
presented here.
\item[{\it (g)}] The cluster RXCJ\,0437.1+0043 is known not to host extended emission,
based on low resolution 1.4 GHz VLA observations (Feretti et al.
\cite{feretti05}).
\item[{\it (h)}] Five clusters were observed by us in October 2005, two more will be
observed in August 2006, and they will be presented in a future paper
(see Sect. \ref{sec:obs}).
\item[{\it (i)}] The remaining 5 clusters are being observed by other authors
(GMRT Cluster Key Project, P.I. Kulkarni) and no literature information
is available thus far.
\end{itemize}
\begin{figure}
\centering
\includegraphics[angle=0,width=8.5cm]{fig10.ps}
\caption{LogL$_{\rm X}$--LogP$_{\rm 1.4~GHz}$ plot for the clusters
with detected giant radio halos. Stars represent the literature clusters
at z$<$0.2 and filled circles the literature clusters at z$>$0.2.
Open circles show the location of A\,209 (lower left)
and RXCJ\,2003.5--2323 (upper right).}
\label{fig:LxLr}
\end{figure}
In Fig. \ref{fig:LxLr}
we show the location of the giant radio halos in A\,209 and
RXCJ\,2003.5--2323 in the log~L$_{\rm X}$ -- log(P$_{\rm 1.4~GHz}$) plane,
where all the previously known clusters with giant radio halos are also
reported (see CBS06 and references therein for the literature data).
The radio power at
1.4 GHz for these two clusters was obtained scaling the measured flux density
at 610 MHz with a spectral index $\alpha_{\rm 610~MHz}^{\rm 1.4~GHz} =
1.2 \pm 0.2$ (the uncertainty assumed here dominates over the 610 MHz
flux density error). Clusters at z$<$0.2 and those at z$>$0.2 are shown with
different symbols. Despite some overlap, the most powerful radio halos are
hosted in the most X--ray luminous clusters, which are also the most distant.
The location of A\,209 and RXCJ2003.5--2323 on the plot is in good agreement
with the distribution of all giant radio halos known in the literature.
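The scaling applied above can be written explicitly; a sketch of the
standard extrapolation, with the assumed $\alpha = 1.2 \pm 0.2$:

```latex
% Extrapolation of the 610 MHz power to 1.4 GHz
% for a spectrum S_\nu \propto \nu^{-\alpha}:
\begin{equation}
P_{\rm 1.4~GHz} \;=\; P_{\rm 610~MHz}
\left(\frac{1.4~{\rm GHz}}{610~{\rm MHz}}\right)^{-\alpha},
\qquad \alpha = 1.2 \pm 0.2 ,
\end{equation}
% i.e. a downward shift of alpha*log(1400/610) ~ 0.43 dex
% in logP for alpha = 1.2.
```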
\\
\\
An important piece of information would be the knowledge of the
merging stage of the clusters in the sample, since cluster merger is a major
ingredient in the re--acceleration model.
The literature information on the clusters presented here is not
homogeneous, and it is not possible to make conclusive statements
on the connection between merging/non--merging signatures and the
presence/absence of radio halos. A\,209 is known to be undergoing merging
events, but no information is available for RXCJ\,2003.5--2323, except
for the elongated X--ray emission imaged by ROSAT. The three
radio halo clusters known from the literature are all reported to
be dynamically active
(see for instance Zhang et al. \cite{zhang06} and Finoguenov et
al. \cite{finogue05}). Signatures
of cluster mergers are present in the optical and X--ray bands
for A\,521 (Giacintucci et al. \cite{giacintucci06} and references therein)
and RXCJ\,1314.4--2515, which host extended radio
emission in the form of radio halo and relics. Elongated or more
complex X--ray isophotes are visible in S\,780, A\,141, A\,2631
and in RXCJ\,0437.1+0043, which lack cluster scale
radio emission.
The remaining two clusters without extended emission are considered
``relaxed'' on the basis of the X--ray emission.
\\
To summarize, the optical and X--ray information for the sample of
clusters presented here is not inconsistent with the findings that clusters
with radio halos are characterised by signatures of merging processes.
On the other hand, clusters without extended radio emission may or may not
show dynamical activity at some level.
This crucial issue will be further investigated in future works.
\\
\\
In the framework of the canonical particle re--acceleration model
giant radio halos are believed to be essentially
an on/off phenomenon, triggered by dissipation via collisionless
damping of turbulence injected during cluster mergers.
\\
The physics of collisionless turbulence and of particle acceleration is
still poorly understood and many hidden ingredients could be of relevance
in computing the efficiency of the particle acceleration processes
in the ICM.
On the other hand, from simple energetic arguments,
it is clear that the possibility to develop a giant radio halo
is related to the efficiency of turbulence injection and to the
possibility to generate large enough ($\geq$Mpc sized) turbulent cluster
regions. In this respect, the calculations in CBS04, CB05 and more recently
in CBS06 show that major cluster mergers (i.e. with mass ratio of the
order $\leq$ 5:1) between massive clusters (M$\geq 10^{15}$M$_{\odot}$) may
provide the necessary ingredients to develop giant radio halos.
During these mergers a fraction of up to $\sim 10\%$ of the cluster
thermal energy is believed to be injected in a $\sim {\rm Mpc}^3$ region.
However, from a theoretical point of view, it is hard to predict
whether a particular merging cluster may host a giant radio halo,
since this depends
on a number of parameters which cannot be easily estimated.
For instance, the turbulence injected on large scales must have enough
time to cascade down to collisionless scales, seed relativistic
particles (to be reaccelerated) must be present in the turbulent ICM,
and the magnetic field in the ICM must be strong enough to allow
$\sim$GeV electrons to emit synchrotron radiation at the
observed frequency.
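The last condition can be made quantitative with the standard
order-of-magnitude synchrotron estimate (not given explicitly in the text):

```latex
% Characteristic synchrotron frequency of electrons with
% Lorentz factor \gamma in a magnetic field B:
\begin{equation}
\nu_{\rm c} \;\simeq\; 4.2 \left(\frac{\gamma}{10^4}\right)^{2}
\left(\frac{B}{\mu{\rm G}}\right) \times 10^{2}\;{\rm MHz},
\end{equation}
% so emission at 610 MHz in \mu G-level fields requires
% \gamma of order 10^4, i.e. electron energies of a few GeV.
```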
\\
The statistical approach developed in CBS04, CB05 and CBS06 allows
a more reliable estimate of the fraction of clusters hosting a giant
radio halo.
Without going into the details of the calculations, the result of
those papers most relevant to the present work is that the
fraction of galaxy clusters with mass
$\sim 2-3.5\times 10^{15} M_{\odot}$ and redshift $0.2<z<0.4$
expected to host a giant radio halo is in the range
$\sim 10-35 \%$.
In addition, CBS06 showed that the cluster magnetic field plays
an important role, and that this fraction depends on
the scaling law between the magnetic field and cluster mass.
\\
Radio information is now available for 15 of the 27
clusters considered in this paper, and 5 of them possess a giant radio
halo. However, at this stage of our work the statistics are still poor,
and no firm comparison with theoretical expectations can be reached.
For this reason our analysis, in the light of
the predictions made in CB05 and CBS06, will be carried out as soon
as the information on the whole selected sample (REFLEX and BCS) is
completed (Venturi et al. in prep.; Cassano et al. in prep.).
\\
\\
{\it Acknowledgements.}
We thank the staff of the GMRT for their help during the observations.
GMRT is run by the National Centre for
Radio Astrophysics of the Tata Institute of Fundamental Research.
T.V. and S.G. acknowledge partial support from the Italian Ministry
of Foreign Affairs. G.B., R.C. and G.S. acknowledge partial support
from MIUR grants PRIN2004 and PRIN2005.
\section{Introduction:}
Exotic hadrons in QCD remain poorly understood theoretically. The
recent discoveries of the $X$, $Y$, $Z$ states~\cite{Choi:2003ue}, for
instance, in the charmonium spectroscopy was rather unexpected. Many
of the expected states, on the other hand, which are associated with
gluonic excitations like hybrids or glueballs have not been
unambiguously identified. The $0^{++}$ glueball is the lightest
stable particle in the QCD spectrum in the limit where all quark
masses are sent to infinity. In this situation, its mass has been
computed rather accurately in quenched lattice QCD
simulations~\cite{Morningstar} to be slightly smaller than two GeV. In
the presence of finite quark masses, the properties of the glueball
should remain relatively undisturbed provided $m_q \gapprox 1$ GeV. In
the physical situation, however, three quarks are substantially
lighter than 1 GeV. Unquenched lattice simulations have been
performed but the results are somewhat contradictory.
The simulations of ref.~\cite{Hart:2006ps} obtain near-maximal
mixing between glueball and $\bar{q}q$ states and find that
unquenching leads to a strong lowering of the masses. A similar effect
of unquenching was observed for the $I=1$ scalar mesons by several
groups (e.g.~\cite{Frigori:2007wa,Hashimoto:2008xg}). This picture,
however, is not confirmed by the recent results from
ref.~\cite{Richards:2010ck} based on unquenched simulations with
$N_f=2+1$ and larger statistics, which find glueball states very similar
to the quenched ones.
A possible scenario, suggested from using Laplace sum
rules~\cite{novikov,narisonveneziano} \footnote{references to more
recent work which incorporate, in particular, more realistic
modelling of instanton effects can be traced e.g. from
ref.~\cite{Harnett:2008cw}} is that there could be two mesons below
2 GeV with large glueball overlap. One of these could be rather light
and possibly identified with the $\sigma(600)$. Phenomenological
implications of this scenario have been discussed in some detail in
ref.~\cite{Minkowski}.
The classification of the lowest lying experimentally observed scalar
mesons into a flavour nonet is also not a completely solved
problem~\cite{tornqvistrevue}. It has been proposed, for instance,
that the $a_0(980)$ and the $f_0(980)$ mesons could have a specific
status as weakly bound $K\bar{K}$
molecules~\cite{Weinstein:1990gu}. This model simply explains their
near degeneracy and their proximity to the $K\bar{K}$ threshold. It also
seems able to explain the values of the $2\gamma$ partial
widths~\cite{Hanhart:2007wa}. Alternatively, it was pointed out long
ago that the mass pattern of the nonet below 1 GeV can be
understood assuming a tetraquark flavour structure~\cite{Jaffe:1976ig}
(see also~\cite{Black:1998wt}).
The peculiarity of a nonet composed of the $\sigma$, $\kappa$, $a_0(980)$
and $f_0(980)$ is most clearly formulated in terms of 't Hooft's large
$N_c$ limit of QCD~\cite{'tHooft:1973jz}. The masses, for instance, strongly
deviate from the ideal mixing pattern predicted in this
limit\footnote{In principle, dual ideal mixing is
possible~\cite{Black:1998wt}. The scalars must then be either
tetraquarks, i.e. exotics, or else the mass squared of the
$\sigma$-meson must be a decreasing function of the strange quark
mass~\cite{Cirigliano:2003yq}, which is unphysical.}. This implies
that in discussing the light scalars, effects which are sub-leading in
$1/N_c$, such as meson loops, ought to be taken into
account. Modelling of meson loop effects can be found in the
classic papers~\cite{Tornqvist:1982yv,vanBeveren:1986ea}. More
recently, a model from which an explicit $1/N_c$ dependence can be
deduced has been proposed~\cite{Pelaez:2003dy}. Investigations in the
AdS/CFT modelling of large $N_c$ QCD have also been
performed~\cite{Colangelo:2008us}.
Experiments on radiative decays of the $\phi$ meson have been
proposed~\cite{Achasov:1987ts} in order to clarify the flavour
structure of the light scalars. Such experiments have been performed
and are planned to continue (see~\cite{kloe2revue} for a review). The
simplest way, however, to quantify the various aspects of the
structure of the scalar resonances would be via their couplings to a
set of simple operators. The glue content, for instance, is best
probed from the coupling to the gluonic operator $\alpha_s
G^2$. Similarly, the $\bar{q}q$ content is probed by the couplings of
the scalar mesons to quark-antiquark operators. Such couplings have
been considered for the $I=1$ and $I=1/2$ scalars by
Maltman~\cite{Maltman:1999jn} who suggested that their values can also
be used for properly identifying the nonet. A lattice QCD
result for the coupling of $I=1$ scalars to $\bar{u}d$ is presented
in~\cite{McNeile:2006nv}. Studies of couplings to tetraquark operators
have also been recently undertaken~\cite{Prelovsek:2010kg,Jansen:2009hr}.
The $\sigma(600)$ resonance is very unstable and does not give rise to
a usual Breit-Wigner behaviour in cross-sections. Its existence has been
demonstrated only recently~\cite{CCL} by making a combined use of
experimental data and theoretical properties of the $\pi\pi$
scattering amplitude, which can be encoded into the set of
Roy~\cite{Roy:1971tc} integral equations.
On the real axis, where the additional constraint of unitarity
applies, the Roy equations have long been known as a powerful
tool for analyzing experimental pion-pion scattering
data~\cite{pennington73,BFP74}. New high precision experimental data
on low energy pion-pion
scattering~\cite{E865,NA48/2cusp,DIRAC,NA48/2Kl4,lastNA48} have
spurred renewed interest in these
equations~\cite{anantbuttiker,ACGL,DFGS,GKPY2,GKPY3}.
In ref.~\cite{ACGL}, the
Roy equations are treated as a boundary value problem and exact
solutions have been searched for numerically below a matching point
$\sqrt{s_A}=0.8$ GeV.
When applied to resonances, the Roy equations are used for computing the
partial-wave amplitude for complex values of the energy. The masses
and widths of the resonances
may be identified from the poles of the amplitude on the second Riemann sheet.
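Schematically, near such a pole the second-sheet amplitude can be
parametrised as follows (a sketch of the standard pole--residue form;
normalisation conventions for the residue vary in the literature):

```latex
% Near a resonance pole s_R on the second Riemann sheet:
\begin{equation}
t_0^0(s)\Big|_{II} \;\simeq\; \frac{g^2}{s_R - s} \;+\; \text{regular terms},
\qquad \sqrt{s_R} \,=\, M_R - i\,\Gamma_R/2 ,
\end{equation}
% the residue g^2 being interpreted as the squared coupling
% of the resonance to the \pi\pi channel.
```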
The domain of validity of the Roy equations, as displayed in
ref.~\cite{CCL}, allows one to discuss both the $\sigma$ and the
$f_0(980)$. The same poles which appear in the
elastic scattering amplitude can be shown to also appear in two-point
correlation functions of scalar operators and also in $\pi\pi$ matrix
elements of these operators. The poles also appear in scattering
amplitudes with a pion pair in the final state like
$\gamma\gamma\to\pi\pi$. The residues of the poles are also determined
and can be interpreted in terms of couplings between scalar resonances and
operators. In the present work we consider, from this point of view,
the couplings of the scalar $I=0$ mesons $\sigma$ and $f_0(980)$ to the gluonic
operator $\alpha_s G^2$ and to the quark operators $\bar{u}u+\bar{d}d$
and $\bar{s}s$. We will update the results that can be obtained for
these couplings using the Roy equations combined with low-energy
constraints from chiral symmetry. We will also consider the couplings
to two photons, which were discussed in a similar framework in
ref.~\cite{Pennington:2006dg}. In that case, chiral constraints can be used
as well as recent experimental data from the Belle
collaboration~\cite{Belle1,Belle2}.
The plan of the paper is as follows. We begin in sec.~\sect{royeq} by
constructing solutions to the Roy equations in a domain which extends
up to the $K\bar{K}$ threshold. This domain covers most of the $f_0$
effect on the real axis. We find that a very simple generalisation of
the parametrisations used in ref.~\cite{ACGL} is adequate for
approximating the solutions. In sec.~\sect{poles} we use these
solutions inside the Roy integral representations to perform
extrapolations to the complex energy plane. We determine the resonance
poles and their associated residues. These results are
applied in sec.~\sect{2gammas} to the determination of the scalar
mesons couplings to two photons. For this purpose, we use the
coupled-channel dispersive Omn\`es representation for
$\gamma\gamma\to\pi\pi, K\bar{K}$ and the chirally constrained fits
performed in~\cite{garciamartinmou}. The couplings of the scalar
mesons to operators are finally considered in sec.~\sect{operators}. A
complex plane definition is proposed from which a simple relation is
obtained between the couplings and pion scalar form-factors computed
at the resonance pole positions. Evaluations are made possible in this
case by using chiral constraints for the form-factors in combination
with coupled-channel Omn\`es representations~\cite{DGL90}.
\section{Roy equation solution for $t_0^0(s)$ up to the $K\bar{K}$
threshold}\lblsec{royeq}
In order to improve the determination of the $f_0(980)$ properties, we begin
in this section by constraining the $I=0$ $S$-wave amplitude $t_0^0(s)$
to satisfy the Roy equation up to the $K\bar{K}$ threshold\footnote{We
neglect isospin breaking and take $m_K=(m_{K^+}+m_{K^0})/2$}. The Roy
equation reads
\begin{eqnarray}\lbl{singleroy}
&& {\rm Re\,} t_0^0(s)= a_0^0 + {s-4m_\pi^2\over 12m_\pi^2}(2a_0^0-5a_0^2)\\
&& + {1\over\pi}\Xint-_{4m_\pi^2}^{\infty} ds' \Big[ {\rm Im\,} t_0^0(s')
\left( {1\over s'-s} + K_0(s',s)\right)\nonumber\\
&& + {\rm Im\,} t_1^1(s') K_1(s',s) +{\rm Im\,} t_0^2(s') K_2(s',s) \Big]
+d_0^0(s)\nonumber
\end{eqnarray}
where $a_0^0$, $a_0^2$ are the $I=0,2$ $S$-wave scattering
lengths. Detailed expressions for the kernels $K_a(s',s)$ and the
driving term $d_0^0(s)$ can be found in ref.~\cite{ACGL}.
Eq.~\rf{singleroy} is supplemented with the non-linear,
unitarity relation involving the inelasticity
parameter $\eta_0^0(s)$
\begin{equation}\lbl{unitrel}
\vert 1+ 2i\sigma_\pi(s) t_0^0(s) \vert=\eta_0^0(s)
\end{equation}
with $\sigma_\pi(s)=\sqrt{1-4m_\pi^2/s}$.
The inelasticity parameter $\eta_0^0$ is rigorously equal to one in
the region $s\le 16m_\pi^2$. Based on experimental indications, we use
here the approximation $\eta_0^0(s)=1$ up to $s=4m_K^2$.
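In the elastic region ($\eta_0^0=1$), eq.~\rf{unitrel} is equivalent to the standard parametrisation $t_0^0=({\rm e}^{2i\delta_0^0}-1)/(2i\sigma_\pi)$. As a purely illustrative check (the mass value and the phase-shifts below are arbitrary, not taken from our fits), this equivalence is easily verified numerically:

```python
import cmath, math

MPI = 0.13957  # charged pion mass in GeV (illustrative value)

def sigma_pi(s):
    """Phase-space factor sqrt(1 - 4 m_pi^2 / s) for s above threshold."""
    return math.sqrt(1.0 - 4.0 * MPI**2 / s)

def t00_elastic(s, delta):
    """Elastic partial wave t = (e^{2 i delta} - 1) / (2 i sigma_pi)."""
    return (cmath.exp(2j * delta) - 1.0) / (2j * sigma_pi(s))

def eta00(s, t):
    """Inelasticity |1 + 2 i sigma_pi t| as in eq. (unitrel)."""
    return abs(1.0 + 2j * sigma_pi(s) * t)

# For any real phase-shift the elastic parametrisation gives eta = 1.
for s, delta in [(0.1, 0.3), (0.5, 1.2), (0.9, 2.6)]:
    assert abs(eta00(s, t00_elastic(s, delta)) - 1.0) < 1e-12
```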
Furthermore, in eq.~\rf{singleroy}, the inputs for the $P$-wave ${\rm Im\,}(t_1^1)$
as well as for the $I=2$ partial-wave ${\rm Im\,}(t_0^2)$ are taken from
ref.~\cite{ACGL}, i.e. they satisfy the coupled Roy equations below 0.8
GeV and are taken from experiment above. Imaginary parts of higher
partial-waves, which enter into the driving term $d_0^0$ are also
taken from experiment.
\subsection{Multiplicity of the solutions}
Taking the matching point as $s_m=s_K=4m_K^2$, the Roy
equation~\rf{singleroy} admits a
family of solutions~\cite{pomponiuwanders,atkinson,gasserwanders}
rather than a unique one.
We will assume that the phase-shift at the $K\bar{K}$ threshold satisfies
\begin{equation}
\pi < \delta_K < {3\pi\over2}\ , \quad \delta_K\equiv \delta_0^0(s_K),
\end{equation}
which implies~\cite{pomponiuwanders,atkinson,gasserwanders} a
two-parameter family of solutions\footnote{
One assumes that the following set of inputs are given: the two
scattering lengths $a_0^0$, $a_0^2$, the phase-shift $\delta_0^0(s)$
above the matching point, the inelasticity function $\eta_0^0(s)$ and,
finally, the imaginary parts of the partial-waves ${\rm Im\,} t_0^2(s)$ and
${\rm Im\,} t_{l\ge 1}^a(s)$.}.
In other words, we must impose two
conditions in order to select a unique solution. As one condition, we
can fix the value of the phase-shift at one energy, for instance
the value of
\begin{equation}\lbl{deltaAetK}
\delta_A\equiv \delta_0^0(s_A),\quad \sqrt{s_A}=0.8\ \hbox{GeV}.
\end{equation}
In order to define a second condition, we consider the singularity
of the derivative of the phase-shift at the matching point. For a generic
Roy solution, the divergence depends on the value of the
phase-shift at the matching point in the following
way~\cite{pomponiuwanders,gasserwanders}
\begin{equation}\lbl{divergalpha}
\left.{d\over ds}\delta_0^0(s)\right\vert_{s\to s_m^-}\sim (s_m-s)^{\alpha-1},\quad
\alpha={2\delta_0^0(s_m)\over\pi}-2\ .
\end{equation}
Since, in our case, the matching point coincides with a two-particle
threshold, we expect the derivative of the phase-shift to exhibit a
square-root singularity
\begin{equation}\lbl{divergundemi}
\left.{d\over ds}\delta_0^0(s)\right\vert_{s\to s_K^-}= A\,
(s_K-s)^{-{1\over2}}\ .
\end{equation}
This divergence is weaker than the generic matching point
divergence~\rf{divergalpha} provided the threshold phase-shift is not
too large,
\begin{equation}\lbl{maxdeltak}
\delta_0^0(s_K) < 225^\circ\ .
\end{equation}
We will assume here that this condition is fulfilled. In this case, we
can use as a second condition that the phase-shift behaves as in
eq.~\rf{divergundemi} close to the $K\bar{K}$ threshold. It is not
difficult to work out the explicit expression for the coefficient $A$
of the square-root singularity in eq.~\rf{divergundemi}.
For this purpose, let us consider the unitarity relation for ${\rm Im\,} t_0^0$ in
the region of the $K\bar{K}$ threshold
\begin{equation}\lbl{imt00}
{\rm Im\,} t_0^0(s)= \sigma_{\pi}(s) \vert t_0^0(s) \vert^2
+ \theta(s-s_K)\sigma_K(s) \vert g_0^0(s)\vert^2\
\end{equation}
where $g_0^0(s)$ is the partial-wave $\pi\pi\to K\bar{K}$ amplitude with
$I=0$, $J=0$.
The principal value integration in the Roy equation~\rf{singleroy} generates
singularities associated with discontinuities of the derivative of
${\rm Im\,} t_0^0(s')$. Finite discontinuities lead to logarithmic divergences
upon integration. The square-root divergence is generated from the
function $\theta(s'-s_K)\sigma_K(s')$. Performing the integration
analytically in the neighbourhood of the threshold, one easily finds that
\begin{equation}\lbl{A}
A=
{\sigma_\pi(s_K)\vert g_0^0(s_K)\vert^2\over
2\cos2\delta_K \sqrt{s_K} } \ .
\end{equation}
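The origin of the square-root behaviour in eqs.~\rf{divergundemi},\rf{A} can be illustrated on a toy version of the integral: keeping only the $\theta(s'-s_K)\sigma_K(s')$ piece of ${\rm Im\,} t_0^0$, with $\vert g_0^0\vert=1$ and a finite cutoff (both simplifying assumptions), the derivative of the dispersive integral below threshold indeed scales as $(s_K-s)^{-1/2}$:

```python
import math

S_K = 4.0 * 0.4957**2   # K Kbar threshold in GeV^2 (illustrative value)
LAM = 4.0               # integration cutoff in GeV^2 (illustrative value)

def f(s):
    """(1/pi) * integral_{S_K}^{LAM} sqrt(s'/S_K - 1)/(s' - s) ds' for
    s < S_K, evaluated in closed form via the substitution u^2 = s' - S_K."""
    x = S_K - s                       # distance below the K Kbar threshold
    u_max = math.sqrt(LAM - S_K)
    return (2.0 / (math.pi * math.sqrt(S_K))) * (
        u_max - math.sqrt(x) * math.atan(u_max / math.sqrt(x)))

def deriv(s, h=1e-9):
    return (f(s + h) - f(s - h)) / (2.0 * h)

# df/ds scales like (S_K - s)^(-1/2): dividing the distance to the
# threshold by four doubles the derivative.
x = 1e-4
ratio = deriv(S_K - x / 4.0) / deriv(S_K - x)
assert abs(ratio - 2.0) < 0.05
```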
Once a solution is found for a given value of $\delta_A$, we can
compute the $\chi^2$ over the experimental data in the range
$[s_A,s_K]$ and then search for the value of $\delta_A$ which
minimises this $\chi^2$. In practice, the value of the phase-shift at
the $K\bar{K}$ threshold, $\delta_K$, should be constrained by the data on
both sides of the matching point.
We can thus constrain both parameters $\delta_A$ and
$\delta_K$ by fitting the experimental data using Roy equation solutions.
\subsection{Numerical approximations to the solution}
Let us denote by ${\cal R}[t_0^0]$ the right-hand side of
eq.~\rf{singleroy} and by $\epsilon(s)$ the difference between the
left and right-hand sides
\begin{equation}\lbl{epsilon}
\epsilon(s)= {\cal R}[t_0^0](s)-{\rm Re\,} t_0^0(s)\ .
\end{equation}
We construct numerical approximations to the phase-shift
in the range $4m_\pi^2\le s\le 4m_K^2$ using a simple modification of the
Schenk parametrisation~\cite{schenk91} compatible with
eq.~\rf{divergalpha}
\begin{eqnarray}\lbl{paramsol}
&& \tan\delta_0^0(s)=\\
&& \quad \sigma_\pi(s)
\left[ a_0^0 +\sum_{i=1}^N \alpha_i \left( {s\over s_\pi}-1\right)^i \right]
{s_\pi-s_0\over s-s_0}\,
{\sigma^K(s_\pi)+\beta\over \sigma^K(s)+ \beta}\nonumber
\end{eqnarray}
with $s_\pi=4m_\pi^2$ and $\sigma^K(s)=\sqrt{s_K/s-1}$.
This representation involves $N$ polynomial parameters $\alpha_i$
plus 2 parameters $s_0$ and $\beta$. The last factor generates a
square-root divergence in the derivative of $\delta_0^0$ as expected from
eq.~\rf{divergundemi}. In principle, the parameter $\beta$ could be
determined from the known expression~\rf{A} for the coefficient $A$ of
the divergence. In practice, we have left it as a free parameter,
adjusted so as to approximate the solution for $s$ close to $s_K$
without necessarily reproducing the exact limiting behaviour at
$s=s_K$. We have checked that the
correct order of magnitude for $A$ is reproduced.
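For illustration, the parametrisation~\rf{paramsol} is straightforward to implement; the parameter values in the sketch below are invented for this purpose only (the fitted values are given in the appendix). One can check the threshold behaviour $\tan\delta_0^0/\sigma_\pi\to a_0^0$ and the growth of the derivative close to $s_K$:

```python
import math

M_PI, M_K = 0.13957, 0.4957          # meson masses in GeV (illustrative)
S_PI, S_K = 4 * M_PI**2, 4 * M_K**2  # pi pi and K Kbar thresholds

# Hypothetical parameter values, chosen only to illustrate the
# parametrisation; they are not the fitted ones.
A00   = 0.2196          # S-wave scattering length a_0^0
ALPHA = [0.24, -0.02]   # polynomial coefficients alpha_i
S0    = -1.0            # pole parameter s_0 (GeV^2)
BETA  = 0.3             # threshold parameter beta

def sigma_pi(s):
    return math.sqrt(1.0 - S_PI / s)

def sigma_k(s):
    return math.sqrt(S_K / s - 1.0)

def tan_delta00(s):
    """Modified Schenk parametrisation, eq. (paramsol)."""
    poly = A00 + sum(a * (s / S_PI - 1.0)**(i + 1)
                     for i, a in enumerate(ALPHA))
    return (sigma_pi(s) * poly * (S_PI - S0) / (s - S0)
            * (sigma_k(S_PI) + BETA) / (sigma_k(s) + BETA))

def d_tan(s, h=1e-10):
    return (tan_delta00(s + h) - tan_delta00(s - h)) / (2 * h)

# Threshold behaviour: tan(delta)/sigma_pi -> a_0^0 as s -> 4 m_pi^2,
# since the two rational factors both tend to one there.
s = S_PI * (1.0 + 1e-8)
assert abs(tan_delta00(s) / sigma_pi(s) - A00) < 1e-6

# Near s_K the factor 1/(sigma_k + beta) produces the square-root
# divergence of the derivative required by eq. (divergundemi).
assert d_tan(S_K - 1e-6) > 5.0 * d_tan(S_K - 1e-4)
```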
The $N+2$ parameters in eq.~\rf{paramsol} are determined from a
variational principle, by minimising the integral over the error
function squared
\begin{equation}
\chi^2_R\equiv \int_{4m_\pi^2}^{4m_K^2} ds' \left\vert
\epsilon(s')\right\vert^2
\end{equation}
while fixing the two values of $\delta_0^0(s_A)$ and $\delta_0^0(s_K)$.
An exact solution corresponds to $\epsilon(s)$ vanishing identically
in the whole range $[4m_\pi^2,4m_K^2]$ and therefore to $\chi^2_R=0$.
We used routines from the MINPACK library~\cite{minpack} to determine
the parameters in eq.~\rf{paramsol} which minimise $\chi^2_R$.
We increased the number of parameters up to ten. With ten parameters
one achieves an accuracy $\vert \epsilon(s)\vert\lapprox 5\times 10^{-4}$
below the matching point. The behaviour of the error function is illustrated in
fig.~\fig{royerror}.
The figure shows that $\epsilon(s)$ is an oscillating
function which has a number of
zeros approximately equal to the number of parameters in eq.~\rf{paramsol}.
The figure also illustrates how the error function evolves
upon increasing the number of parameters which is suggestive of a
convergence towards an exact solution.
The accuracy is comparable to that quoted in ref.~\cite{ACGL}
below their matching point $s_A$. Above the $K\bar{K}$ threshold,
$\epsilon(s)$ increases rapidly, becoming $\simeq10^{-1}$; in this region,
this is similar to the results quoted in refs.~\cite{ACGL,GKPY2}.
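The variational strategy can be illustrated in a simplified linear setting: discretising $\chi^2_R$ and expanding the trial function on a basis which is linear in its parameters, the minimum is found by solving the normal equations. The toy sketch below fits a known smooth function rather than the Roy system (the actual minimisation over the non-linear parametrisation~\rf{paramsol} used MINPACK):

```python
import math

def chi2_min_linear(basis, target, grid):
    """Minimise chi^2 = sum_s |sum_i c_i b_i(s) - target(s)|^2 over the
    coefficients c_i by solving the normal equations (B^T B) c = B^T y."""
    n = len(basis)
    A = [[sum(bi(s) * bj(s) for s in grid) for bj in basis] for bi in basis]
    y = [sum(bi(s) * target(s) for s in grid) for bi in basis]
    for k in range(n):                      # Gaussian elimination
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p], y[k], y[p] = A[p], A[k], y[p], y[k]
        for r in range(k + 1, n):
            fac = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= fac * A[k][c]
            y[r] -= fac * y[k]
    coef = [0.0] * n
    for k in reversed(range(n)):
        coef[k] = (y[k] - sum(A[k][j] * coef[j]
                              for j in range(k + 1, n))) / A[k][k]
    return coef

grid = [i / 200.0 for i in range(201)]
basis = [lambda s, i=i: s**i for i in range(6)]   # simple polynomial basis
coef = chi2_min_linear(basis, math.cos, grid)

# The pointwise error plays the role of epsilon(s) in eq. (epsilon).
eps = max(abs(sum(c * s**i for i, c in enumerate(coef)) - math.cos(s))
          for s in grid)
assert eps < 1e-5
```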
\vskip0.2cm
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.66666\textwidth]{royerror.ps}\\
\caption{\sl Error function (see eq.~\rf{epsilon}) corresponding to an
approximation of the solution (see eq.~\rf{paramsol}) with 8
parameters (dashed line) and 10 parameters (solid line).}
\label{fig:royerror}
\end{center}
\end{figure}
\subsection{Inputs above the matching point}
The behaviour of the inelasticity function $\eta_0^0(s)$ close to the
$K\bar{K}$ threshold is expected to have a strong influence on the
properties of the $f_0(980)$ resonance.
By definition, $\eta_0^0(s)$
is equal to the modulus of the $\pi\pi\to\pi\pi$ partial-wave
$S$-matrix element. Unitarity of the $S$-matrix,
\begin{equation}
\vert S_{11}\vert^2=1-\sum_{n\ne1} \vert S_{1n}\vert^2 \
\end{equation}
implies that $\eta_0^0\equiv \vert S_{11}\vert$ can be
determined experimentally either a)
by measuring the cross-sections of the various open inelastic channels
or b) by measuring the cross-section for elastic scattering.
The observation (by method (b)) that inelasticity sets in rather sharply at
the $K\bar{K}$ threshold suggests that the
$K\bar{K}$ channel should dominate the inelasticity below
the $\eta\eta$ threshold.
The $\pi\pi\to K\bar{K}$ amplitude with $I=J=0$ has
been measured in high-statistics
experiments~\cite{cohen,etkin,Lindenbaum:1991tq}.
We will use here the results of ref.~\cite{cohen} because the results
of~\cite{etkin,Lindenbaum:1991tq} have been argued to necessitate some
rescaling~\cite{morganpennington,buggzousarantsev}. The
$\pi\pi\to\eta\eta$ amplitude has been measured in ref.~\cite{Alde:1985kp}.
Some experimental information on the $\pi\pi\to 4\pi$ inelastic
amplitude is also available. We will rely on the discussion of
ref.~\cite{buggzousarantsev} who argue that the $\pi\pi\to 4\pi$
amplitude is small in magnitude below 1.4 GeV and can be modelled by
contributions from the $f_0(1370)$ and the $f_0(1500)$ resonances.
\begin{figure}
\begin{center}
\includegraphics[width=0.66666\textwidth]{inelastic.ps}
\caption{\sl Inelasticity function $\eta_0^0(s)$. The data shown are
determinations from the elastic amplitude
from refs.~\cite{hyams73} and \cite{kaminski96}. The dashed
line is the $K$-matrix fit from ref.~\cite{hyams73}. The solid line is a
determination of $\eta_0^0$ based on experimental information on
the inelastic channels $K\bar{K}$, $\eta\eta$ and $4\pi$. }
\label{fig:inelastic}
\end{center}
\end{figure}
Fig.~\fig{inelastic} shows
the experimental determinations of $\eta_0^0$ based on the elastic amplitude
from refs.~\cite{hyams73} and \cite{kaminski96}. The result of the $K$-matrix
fit performed in ref.~\cite{hyams73} is plotted (dashed curve), which is
characterised by a rather deep dip near 1 GeV.
We also show the central value of the fit based on the inelastic
channels (solid curve). The dip, in that case, is much less
pronounced. The inelastic determination of $\eta_0^0$ is actually not
inconsistent with the elastic determinations of
refs.~\cite{hyams73,kaminski96} within the errors.
It has a $\chi^2/N=1.6$ with the data of ref.~\cite{hyams73} and
a $\chi^2/N=0.4$ with the data of ref.~\cite{kaminski96} (which is smaller
than one because of the very large errors).
For the phase-shift $\delta_0^0$ above the $K\bar{K}$ threshold,
we use the determination of Hyams et al.~\cite{hyams73}. It is in good
agreement with other analyses
of the CERN-Munich experiment (e.g.~\cite{buggzousarantsev})
and with the analysis of the
CERN-Munich-Cracow experiment~\cite{kaminski96} below 1.5 GeV. The
energy region above 1.5 GeV is suppressed in the Roy equation because of the
two subtractions.
\subsection{Inputs below the matching point}
In the energy range $[s_A,4m_K^2]$, in which we fit the two parameters
$\delta_A$ and $\delta_K$, we combine the sets of data
from Hyams et al.~\cite{hyams73} and the data from Kaminski et
al.~\cite{kaminski96}. The former data have much smaller error
bars, but this is likely only because Kaminski et
al.~\cite{kaminski96} have estimated their errors in a more realistic
way. This is suggested by comparing the phase-shifts resulting from different
analyses of the CERN-Munich experiment
(e.g.~\cite{estabrooks74,buggzousarantsev}, see also the
review~\cite{ochs91} for detailed comparisons and further
experimental references). We have therefore
appended a weight factor of $1/4$ to the $\chi^2$ of the data of Hyams
et al. in the combined $\chi^2$.
For the $S$-wave scattering lengths, we take the numbers quoted in the
latest NA48/2 publication~\cite{lastNA48}
\begin{eqnarray}
&&a_0^0=0.2196\pm0.0028_{\hbox{stat}}\pm0.0020_{\hbox{syst}}\\
&&a_0^2=-0.0444\pm0.0007_{\hbox{stat}}\pm
0.0005_{\hbox{syst}}\pm0.0008_{\hbox{ChPT}}\nonumber
\end{eqnarray}
\begin{table}[htb]
\begin{center}
\begin{tabular}{c|cccc}\hline\hline
\TT\BB $\eta_0^0$ & $\delta_A$ & $\delta_K$ &
$\hat{\chi}^2_{\hbox{\small\cite{hyams73}}}$ &
$\hat{\chi}^2_{\hbox{\small\cite{kaminski96}}}$\\ \hline
(a) & $\left(80.9\pm 1.4\right)^\circ$ &
$\left(190^{+5}_{-10}\right)^\circ$ &2.7 & 1.9 \\
(b) & $\left(82.9\pm 1.7\right)^\circ$ &
$\left(200^{+5}_{-10}\right)^\circ$ &2.2 & 1.3\\ \hline\hline
\end{tabular}
\caption{\sl Results for the two phases $\delta_A$ and $\delta_K$ from
fitting the experimental phase-shifts in the range $0.8\ \hbox{GeV}
\le \sqrt{s}\le 2m_K$ with Roy solution functions corresponding to
two different central values of the inelasticity function (see
fig.~\fig{inelastic}). On the first line $\eta_0^0$ is determined from
a sum over inelastic channels (shallow-dip shape), on the second line
$\eta_0^0$ is determined from the elastic channel (deep-dip shape).
}
\lbltab{fitres}
\end{center}
\end{table}
The results of fitting the combined data sets as described above in the region
$[s_A,4m_K^2]$ varying the two parameters $\delta_A$ and
$\delta_K$ are presented in table~\Table{fitres}.
We show separately
the result corresponding to the two different determinations of the
inelasticity function. We also show
$\hat{\chi}^2=\chi^2/N$ (with $N=10$ data points) corresponding to
the data of Hyams\footnote{We remark that while the
$\chi^2$ seems large, half of its value comes from the single energy bin
with $E=0.99$ GeV. } et al.~\cite{hyams73}
and to the data of Kaminski et al.~\cite{kaminski96}. The table shows
that a better $\chi^2$ is obtained upon using the inelasticity
function from the elastic data (deep-dip shape).
This reproduces the observation first made in the recent
analysis of ref.~\cite{GKPY3}. In that work, a variant of the three
coupled Roy equations (derived from once-subtracted dispersion
relations) has been considered in its whole domain of validity,
i.e. up to $\sqrt{s}=1.1$ GeV, and required to be satisfied
within the errors of the data. Their analysis favours a value for the
threshold phase $\delta_K$ somewhat larger than the results of
table~\Table{fitres} while their result for $\delta_A$ is compatible
with ours.
Fig.~\fig{comproyhy} displays the curves for the phase-shift
$\delta_0^0$ corresponding to the fit results of
table~\Table{fitres}. The figure also shows the phase-shifts from the
Berkeley experiment~\cite{protopopescu73} which were not included in
the fit. Numerical values of the parameters describing the Roy
solution phase-shifts (see eq.~\rf{paramsol}) are given in the appendix.
\begin{figure}[hbt]
\begin{center}
\includegraphics[width=0.66666\textwidth]{comproy_hy.ps}
\caption{\sl $I=0$ $S$-wave $\pi\pi$ phase-shifts: for $\sqrt{s}\le
2m_K$ the two curves represent solutions of the Roy equation
corresponding to two different determinations of the inelasticity function
$\eta_0^0$ (see fig.~\fig{inelastic}).
}
\label{fig:comproyhy}
\end{center}
\end{figure}
\section{Poles and residues of the $\sigma(600)$ and
$f_0(980)$}\lblsec{poles}
Resonances correspond to poles of the $\pi\pi\to\pi\pi$
scattering amplitude $t_0^0(s)$ on unphysical Riemann sheets. These poles
are also present in form-factors and correlation functions which
involve currents which can couple to a pion pair in the $S$-wave.
We will consider only the second Riemann sheet here and recall a
few standard formulas which enable one to perform the
continuations~\footnote{More general formulas, for a four-sheet
situation, can be found e.g. in ref.~\cite{xiaozheng01}}.
These formulas can be expressed in terms of the amplitude $t_0^0(s)$.
Let us start with the continuation of the amplitude $t_0^0$ itself.
Its right-hand cut is associated with unitarity relations and has
successive thresholds in $s$: $4m_\pi^2$, $16m_\pi^2$, $36m_\pi^2$,
$K\bar{K}$, \ldots The second sheet is defined with respect to the
discontinuity relation which holds between the first two thresholds $4m_\pi^2\le
s\le 16m_\pi^2$. Using the property of real-analyticity $t_0^0(z^*)=
{t_0^0}^*(z)$ (which results from $T$-invariance), it can be written as
\begin{equation}\lbl{disct00}
t_0^0(s+i\epsilon)-t_0^0(s-i\epsilon)=2\sigma^\pi(s-i\epsilon)
t_0^0(s-i\epsilon)t_0^0(s+i\epsilon)
\end{equation}
where we have introduced
\begin{equation}
\sigma^\pi(z)\equiv \sqrt{4m_\pi^2/z-1}\
\end{equation}
(which satisfies $\sigma^\pi(s-i\epsilon)=i\sigma_\pi(s)$). From
relation~\rf{disct00}, one finds that the second sheet extension of
$t_0^0$ is
\begin{equation}\lbl{t00II}
t_0^{0,II}(z)= {t_0^0(z)\over 1- 2\sigma^\pi(z) t_0^0(z)}
\end{equation}
which is easily seen to verify the continuity relation
$t_0^{0,II}(s-i\epsilon)= t_0^0(s+i\epsilon)$.
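This continuity relation can be checked numerically with any model amplitude satisfying elastic unitarity. The sketch below uses a schematic resonance form (with arbitrary parameters, not our Roy solution) and the principal branch of the complex square root, for which $\sigma^\pi(s-i\epsilon)=i\sigma_\pi(s)$ as stated above:

```python
import cmath

M_PI = 0.13957          # pion mass in GeV
M0, G = 0.95, 0.10      # toy resonance parameters (GeV^2), illustrative

def sigma_c(z):
    """sigma^pi(z) = sqrt(4 m_pi^2 / z - 1) on the principal branch,
    so that sigma^pi(s - i eps) = i sigma_pi(s) above threshold."""
    return cmath.sqrt(4.0 * M_PI**2 / z - 1.0)

def t00(z):
    """Toy elastic amplitude: real-analytic and unitary on the cut."""
    return 1.0 / ((M0**2 - z) / G + sigma_c(z))

def t00_II(z):
    """Second-sheet continuation, eq. (t00II)."""
    return t00(z) / (1.0 - 2.0 * sigma_c(z) * t00(z))

# Continuity across the elastic cut: t^II(s - i eps) = t(s + i eps).
eps = 1e-9
for s in (0.10, 0.30, 0.60):
    assert abs(t00_II(s - 1j * eps) - t00(s + 1j * eps)) < 1e-6
```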
The poles of the $T$-matrix can now be determined by searching for the
zeros of the denominator in eq.~\rf{t00II}: $S_0^0(z)= 1-
2\sigma^\pi(z) t_0^0(z)$ (which is the partial-wave $S$-matrix). The
derivative of $S_0^0(z)$
\begin{equation}
\dot{S}_0^0(z_S)\equiv \left.{d\over dz} ( 1- 2\sigma^\pi(z)
t_0^0(z))\right\vert_{z=z_S}
\end{equation}
is needed in order to determine the residues.
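The numerical procedure can be sketched as follows: the zero of $S_0^0(z)$ is located by Newton iteration in the complex plane, and $\dot{S}_0^0$ is obtained by finite differences. The toy amplitude below (with arbitrary parameters, for illustration only) also checks that the residue of $t_0^{0,II}$ at the pole, $t_0^0(z_S)/\dot{S}_0^0(z_S)$, agrees with the direct limit of $(z-z_S)\,t_0^{0,II}(z)$:

```python
import cmath

M_PI = 0.13957
M0, G = 0.95, 0.10     # toy resonance parameters (GeV^2), illustrative

def sigma_c(z):
    """Principal branch of sigma^pi(z) = sqrt(4 m_pi^2 / z - 1)."""
    return cmath.sqrt(4.0 * M_PI**2 / z - 1.0)

def t00(z):
    """Toy elastic amplitude obeying elastic unitarity on the cut."""
    return 1.0 / ((M0**2 - z) / G + sigma_c(z))

def S00(z):
    """Partial-wave S-matrix, S = 1 - 2 sigma^pi t."""
    return 1.0 - 2.0 * sigma_c(z) * t00(z)

def S00_dot(z, h=1e-7):
    return (S00(z + h) - S00(z - h)) / (2.0 * h)

# Newton iteration for the zero of S00 in the complex z plane.
z = M0**2 - 0.05j          # starting guess off the real axis
for _ in range(50):
    z -= S00(z) / S00_dot(z)
assert abs(S00(z)) < 1e-12

# Residue consistency: t^II = t / S00 has a simple pole at the zero of
# S00, with residue t00(z_S) / S00_dot(z_S).
t00_II = lambda zp: t00(zp) / S00(zp)
res_a = t00(z) / S00_dot(z)
zp = z + 1e-6
res_b = (zp - z) * t00_II(zp)
assert abs(res_a - res_b) / abs(res_a) < 1e-3
```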
Numerical results for the pole positions and the $S$-matrix
derivatives are presented in table~\Table{zpoles}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{c||cc}\hline\hline
\TT\BB & $\sqrt{z_S}$ (MeV) & $\dot{S}_0^0(z_S)$ (GeV$^{-2}$) \\ \hline
\TT\BB $\sigma(600)$ & $\left(442^{+5}_{-8}\right) +i\left(274^{+6}_{-5}\right)$ &
$-\left(0.75^{+0.10}_{-0.15}\right)+i\left(2.20^{+0.14}_{-0.10}\right)$ \\
\TT\BB $f_0(980)$ & $\left(996^{+4}_{-14}\right) +i\left(24^{+11}_{-3}\right)$ &
$-\left(1.1^{+3.0}_{-0.4}\right)+i\left(6.6^{+0.8}_{-1.0}\right)$ \\ \hline\hline
\end{tabular}
\caption{\sl Positions of the complex poles and values of
the corresponding derivatives of the $S$-matrix $S_0^0$ from the Roy integral
representation of $t_0^0$ and the real-axis Roy solution discussed in
sec.~\sect{royeq}}
\lbltab{zpoles}
\end{center}
\end{table}
The central values in the table correspond to the Roy solution
associated with the deep-dip shaped $\eta_0^0$. This choice gives a
result for the $\sigma$ position very close to that of
ref.~\cite{CCL}. The errors were determined by varying the most
significant parameters in the Roy equation i.e. the two scattering
lengths $a_0^0$, $a_0^2$, the two phase-shifts $\delta_0^0(s_A)$,
$\delta_0^0(s_K)$ (see table~\Table{fitres}) and the parameters of the
$f_2$ meson which dominates the driving term. We have also included
the result of varying between the two different determinations of the
inelasticity in the form of asymmetric errors. For instance, using the
shallow-dip inelasticity, the value of the sigma pole position is
located at $\sqrt{s_\sigma}=436+i278$ MeV and that of the $f_0$
is located at $\sqrt{s_{f_0}}=983+i36$ MeV. The errors on the $\sigma$
pole parameters quoted in table~\Table{zpoles} are smaller than those
in~\cite{CCL}: this can be traced to the fact that the range of
variation for the phase $\delta_0^0(s_A)$ as determined from the fit
using Roy solutions is smaller than the one estimated in ref.~\cite{CCL}.
\section{Scalar meson couplings to two photons }\lblsec{2gammas}
\subsection{$\gamma\gamma\to\pi\pi$ on the real axis}
Information on the couplings of the light scalar mesons to two
photons can be extracted from the amplitudes $\gamma\gamma\to
\pi^0\pi^0, \pi^+\pi^-$. This may be performed in a model-independent
way by making use of the analyticity and unitarity properties of the
partial-wave amplitudes $h^I_{J,\lambda\lambda'}(s)$ which, as a consequence, satisfy
Omn\`es-type~\cite{omnes58} dispersive representations. A
representation of this kind for $h^0_{0,++}(s)$ was reconsidered
recently~\cite{garciamartinmou} which makes use of a two-channel
extension of the Omn\`es approach~\cite{babelon76,zheng09}.
It should be valid in a range of energies up to one GeV where it is a
reasonably good approximation to retain just two channels ($\pi\pi$,
$K\bar{K}$) in the unitarity relation. This representation involves also the
$\gamma\gamma\to K\bar{K}$ isoscalar partial-wave amplitude $k^0_{0,++}(s)$
and has the following form
\begin{eqnarray}\lbl{2chanrepres}
&& \left(
\begin{array}{l}
h^0_{0,++}(s)\\
k^0_{0,++}(s)
\end{array}\right) =\\
&& \qquad\left(
\begin{array}{l}
\bar{h}^{0,Born}_{0,++}(s)\\
\bar{k}^{0,Born}_{0,++}(s)
\end{array}\right) +
\bm{\Omega}(s)\times\Bigg[
\left(\begin{array}{l}
b^{(0)} s +b^{'(0)} s^2\\
b_K^{(0)} s +b_K^{'(0)} s^2
\end{array}\right)
\nonumber\\
&& \qquad
+{s^3\over\pi}\int_{-\infty}^{-s_0}{ds'\over(s')^3(s'-s)}
\bm{\Omega}^{-1}(s')\,{\rm Im\,}
\left(\begin{array}{l}
\bar{h}^{0,Res}_{0,++}(s')\\
\bar{k}^{0,Res}_{0,++}(s')
\end{array}\right)
\nonumber\\
&& \qquad
-{s^3\over\pi}
\int_{4m_\pi^2}^\infty
{ds'\over (s')^3(s'-s)} {\rm Im\,} \bm{\Omega}^{-1}(s')
\left(\begin{array}{l}
\bar{h}^{0,Born}_{0,++}(s')\\
\bar{k}^{0,Born}_{0,++}(s')
\end{array}\right)
\Bigg]\ . \nonumber
\end{eqnarray}
The right-hand side of this equation involves
the $2\times2$ Omn\`es matrix $\bm{\Omega}$,
which encodes the effects of the final-state interaction.
Its matrix elements $\bm{\Omega}_{ij}$ are
determined from the $T$-matrix by solving (numerically) the set of homogeneous
coupled integral equations which arise from combining
dispersion relations and two-channel unitarity
\begin{equation}\lbl{intomnes}
\Omega_{ij}(s)={1\over\pi}\int_{4m_\pi^2}^\infty
{ds'\over s'-s} \left(\bm{T}^*(s')\Sigma(s')
\bm{\Omega}(s')\right)_{ij}
\end{equation}
with $\Sigma(s)=\hbox{diag}(\sigma_\pi(s),\sigma_K(s))$. One assumes
asymptotic conditions on $T_{ij}(s)$ (i.e. that
$T_{12}(s)$ goes to zero and that the sum of the
eigen-phase shifts goes to $2\pi$~\cite{mushkebook}) which ensure that
eqs.~\rf{intomnes} have a unique solution once initial conditions are
specified
\begin{equation}
\Omega_{ij}(0)=\delta_{ij}\ .
\end{equation}
These asymptotic conditions are rather close to the experimental
values at $\sqrt{s}\simeq 2$ GeV.
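In the single-channel case, the analogous construction reduces to the classical Omn\`es function $\Omega(s)=\exp\big[{s\over\pi}\int ds'\,\delta(s')/(s'(s'-s))\big]$. The sketch below (one channel, a constant phase and a finite cutoff, all simplifying assumptions) compares a direct quadrature with the closed-form result:

```python
import math

A, LAM = 0.0779, 4.0       # threshold and cutoff in GeV^2, illustrative
DELTA0 = math.radians(45)  # constant phase, illustrative

def omnes_numeric(s, n=20000):
    """Once-subtracted Omnes integral by midpoint quadrature (s < A)."""
    h = (LAM - A) / n
    integ = sum(DELTA0 / (sp * (sp - s))
                for sp in (A + (k + 0.5) * h for k in range(n))) * h
    return math.exp(s / math.pi * integ)

def omnes_exact(s):
    """Closed form for a constant phase with a finite cutoff."""
    return (A * (LAM - s) / (LAM * (A - s))) ** (DELTA0 / math.pi)

# The normalisation Omega(0) = 1 and the quadrature are both checked.
assert abs(omnes_exact(0.0) - 1.0) < 1e-12
for s in (-1.0, -0.3, 0.02):
    assert abs(omnes_numeric(s) / omnes_exact(s) - 1.0) < 1e-4
```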
Eq.~\rf{2chanrepres} also involves contributions from the
left-hand cut of the partial-waves which are associated
with singularities of the cross-channel amplitude $\gamma\pi\to
\gamma\pi$. The leading singularity arises from
the charged pion pole which is exactly calculable and labelled
$\bar{h}^{0,Born}_{0,++}(s')$ in eq.~\rf{2chanrepres}
(this term also dominates the amplitude in the soft photon limit).
Singularities associated with multi-pion cuts are
described more phenomenologically (but with reasonable accuracy)
through the light resonance contributions, labelled
$\bar{h}^{0,Res}_{0,++}(s')$ in the above formula. Finally,
eq.~\rf{2chanrepres} involves four polynomial parameters. These have
been introduced by writing over-subtracted dispersion relations, so
as to cut off integral contributions from higher-energy regions.
The polynomial parameters have been determined in
ref.~\cite{garciamartinmou} from a chirally constrained fit\footnote{
The fit was performed in an energy range $\sqrt{s}\le 1.3$ GeV. For
$I=2$ amplitudes and for $J=2$ amplitudes, single channel Omn\`es
representations were used. Chiral constraints arise upon matching the
dispersive and the chiral two-loop
representations~\cite{gasserivan05,gasserivan06} from the fact that
the $p^4$ and certain $p^6$ chiral coupling-constants are known.}
of the experimental data from ref.~\cite{Belle1} (charged pions) and
ref.~\cite{Belle2} (neutral pions).
In the present work, we use the $\pi\pi$ phase-shifts obtained by
solving the Roy equation below the $K\bar{K}$ threshold in association
with the deep-dip inelasticity as discussed in sec.~\sect{royeq}.
As compared to ref.~\cite{garciamartinmou}, this leads to small
differences in the $\gamma\gamma\to \pi\pi$ amplitudes localised in
the region of the $f_0(980)$ peak. The values of the fitted parameters
and the polarisabilities are not modified.
\subsection{$\gamma\gamma\to\pi\pi$ in the complex plane}
Once the polynomial parameters are determined, the integral
representations~\rf{2chanrepres} and \rf{intomnes} allow one to compute the partial-wave
amplitude $h^0_{0,++}(s)$ for complex values of $s$. In order to
compute the second sheet extension one considers the discontinuity
between the first two thresholds which reads,
\begin{eqnarray}
&& h_{0,++}^0(s+i\epsilon)- h_{0,++}^0(s-i\epsilon)=\nonumber\\
&& \qquad 2\sigma^\pi(s-i\epsilon) t_0^0(s-i\epsilon) h_{0,++}^0(s+i\epsilon),
\end{eqnarray}
such that the second sheet extrapolation is
\begin{equation}\lbl{h00II}
h_{0,++}^{0,II}(z)= {h_{0,++}^{0}(z)\over 1- 2\sigma^\pi(z) t_0^0(z)}\ .
\end{equation}
The quantity of interest here is the decay width of the scalar mesons into
two photons. Following Pennington~\cite{pennington06}, it
can be defined by first identifying the residues of the amplitudes
$t_0^{0,II}(z)$ and $h_{0,++}^{0,II}(z)$ in terms of coupling constants
\begin{equation}\lbl{residt00}
\left.32\pi t_0^{0,II}(z) \right\vert_{pole}= {g^2_{S\pi\pi}\over
z_S-z},\quad
\left. h_{0,++}^{0,II}(z) \right\vert_{pole}= {g_{S\pi\pi}
g_{S\gamma\gamma} \over z_S-z}\ .
\end{equation}
The couplings $g_{S\gamma\gamma}$, $g_{S\pi\pi}$ are expected to be
complex numbers (see below).
One can formally define the decay width by taking the usual relation
between a coupling constant and the corresponding decay width
\begin{equation}
\Gamma_{S\to 2\gamma}\equiv {\vert g_{S\gamma\gamma}\vert^2 \over 16\pi
m_S}\ ,
\end{equation}
which yields the following numerical results for the two-photon
widths of the scalar mesons
\begin{equation}\lbl{w2gamma}
\begin{array}{lll}
\Gamma_{\sigma(600)\to 2\gamma}&=\left(2.08\pm 0.20\,
^{+0.07}_{-0.04}\right) \ &\hbox{(keV)}\\[1mm]
\Gamma_{f_0(980)\to 2\gamma} &= \left(0.29 \pm0.21 \,
^{+0.02}_{-0.07}\right) \ &\hbox{(keV)}\ .
\end{array}\end{equation}
The separation of the errors reflects the structure of the Omn\`es
representation~\rf{2chanrepres}.
The first error is associated with varying the subtraction parameters
in eq.~\rf{2chanrepres}, i.e. it essentially reflects
the experimental errors in the two-photon cross-sections. The second
error is associated with the uncertainties in the Omn\`es matrix elements
coming from the $\pi\pi$ phase-shifts and inelasticities.
Fig.~\fig{sigma2g} compares our value for the sigma width with
results quoted in the recent
literature~\cite{pennington06,oller07,pennington08,bernabeu08,
mennessier08,mao09,Mennessier10,Hoferichter:2011wk}
(see also ~\cite{achasov08}) which are all based on the complex pole
definition. Evaluations using a Breit-Wigner definition can yield a
somewhat different result (e.g.~\cite{Fil'kov:1998np}). In the case
of the $f_0(980)$, which is a rather narrow resonance, the two
definitions should give reasonably compatible results. The central value
which we find in~\rf{w2gamma} is practically identical to the one
quoted in the PDG~\cite{PDG}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.53333\textwidth]{sigma2g.ps}
\caption{\sl Recent determinations of the $\sigma\to 2\gamma$ width
from experimental measurements of $\gamma\gamma\to 2\pi$ cross-sections.}
\label{fig:sigma2g}
\end{center}
\end{figure}
Let us finally quote the corresponding central complex values of the coupling
constants $g_{S\gamma\gamma}$, $g_{S\pi\pi}$ (in GeV),
\begin{equation}\lbl{complexg}
\begin{array}{ll}
g_{\sigma\gamma\gamma}=(-0.31+i0.60)\,10^{-2},\ & g_{\sigma\pi\pi}=1.12+i4.63\\
g_{f_0\gamma\gamma}=(\,\,\,0.38-i0.02)\,10^{-2},\ & g_{f_0\pi\pi}=0.23+i2.79\ .
\end{array}
\end{equation}
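As a simple consistency check, using only the central values quoted above and $m_S={\rm Re\,}\sqrt{z_S}$ from table~\Table{zpoles}, the couplings~\rf{complexg} reproduce the two-photon widths~\rf{w2gamma} to within rounding:

```python
import math

# Central couplings from eq. (complexg), in GeV, and pole masses
# m_S = Re sqrt(z_S) from the table, in GeV.
g_sigma = complex(-0.31e-2, 0.60e-2)
g_f0    = complex(0.38e-2, -0.02e-2)
m_sigma, m_f0 = 0.442, 0.996

def width_kev(g, m):
    """Gamma = |g|^2 / (16 pi m_S), converted from GeV to keV."""
    return abs(g)**2 / (16.0 * math.pi * m) * 1e6

assert abs(width_kev(g_sigma, m_sigma) - 2.08) < 0.1   # cf. eq. (w2gamma)
assert abs(width_kev(g_f0, m_f0) - 0.29) < 0.02
```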
It is striking that these couplings can be far from being real. It is
difficult to find a general physical interpretation for the phases of
the couplings, but it is instructive to consider the case of
a narrow resonance, i.e. when ${\rm Im\,} z_S$ is small.
In this situation, the Breit-Wigner approximation describes the
amplitude in the region of the zero and the corresponding pole of the
resonance
\begin{equation}\lbl{breitwigner}
S_0^0(z)\simeq S_B(k){ k-k_S\over k-k^*_S}
\end{equation}
where $k$ is the $\pi\pi$ momentum, $k=\sqrt{z/4-m_\pi^2}$, and $S_B$
is a slowly varying function. Neglecting contributions which are
quadratic in ${\rm Im\,} z_S$, this representation gives the derivative at
$z=z_S$ as
\begin{equation}\lbl{dotSBW}
\dot S_0^0(z_S)\simeq {i\exp(2i\delta_0^0({\rm Re\,} z_S))\over 2{\rm Im\,} z_S}
\end{equation}
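This result follows from eq.~\rf{breitwigner} in a few lines. Since $S_0^0$ vanishes at $k=k_S$, and treating $S_B$ as constant,
\begin{equation}
\dot{S}_0^0(z_S)= S_B(k_S)\left.{dk\over dz}\right\vert_{z_S}
{1\over k_S-k_S^*}= {S_B(k_S)\over 8k_S}\,{1\over 2i\,{\rm Im\,} k_S}\ ,
\end{equation}
and, using $z=4(k^2+m_\pi^2)$ so that ${\rm Im\,} z_S=8\,{\rm Re\,} k_S\,{\rm Im\,} k_S$ at leading order,
\begin{equation}
\dot{S}_0^0(z_S)\simeq {S_B(k_S)\over 2i\,{\rm Im\,} z_S}\ ,
\end{equation}
which reproduces eq.~\rf{dotSBW} upon identifying $S_B(k_S)$ with ${\rm e}^{2i\delta_0^0({\rm Re\,} z_S)}$, the overall factor $\pm i$ being fixed by the branch convention adopted for $k$ on the second sheet.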
Using this in the expressions for the residues, the coupling
$g_{S\pi\pi}$ gets expressed as
\begin{equation}
{g_{S\pi\pi}^2\over 32\pi}\simeq {-\exp(-2i\delta_0^0({\rm Re\,} z_S)){\rm Im\,}
z_S\over\sigma_\pi({\rm Re\,} z_S)}\ ,
\end{equation}
i.e. the phase of $g_{S\pi\pi}$ is given in terms of the phase-shift
at the resonance mass
\begin{equation}\lbl{phasegpipi}
g_{S\pi\pi}=\vert g_{S\pi\pi}\vert \hbox{e}^{i\left({\pi\over2}
-\delta_0^0({\rm Re\,} z_S)\right)}
\end{equation}
and vanishes only in the absence of any non-resonant background
phase. In the case of the coupling $g_{S\gamma\gamma}$ one finds, at
leading order in ${\rm Im\,} z_S$,
\begin{equation}
g_{S\pi\pi} g_{S\gamma\gamma}\simeq 2i\exp(-2i\delta_0^0({\rm Re\,}
z_S))h_0^0({\rm Re\,} z_S) {\rm Im\,} z_S\
\end{equation}
i.e. (using~\rf{phasegpipi})
\begin{equation}\lbl{phasegpigaga}
\hbox{Phase}\,(g_{S\gamma\gamma})= \hbox{Phase}\,(h_0^0({\rm Re\,} z_S))-\delta_0^0({\rm Re\,} z_S)
\end{equation}
which vanishes modulo $\pi$ when ${\rm Re\,} z_S$ is in the region of
applicability of Watson's theorem. These narrow width estimates for
the phases provide a reasonably good approximation for the $f_0(980)$
when its mass is located below the $K\bar{K}$ threshold (which is not
the case for our central value).
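The chain of narrow-width formulas above can be checked numerically. The sketch below uses toy resonance parameters (not fit values), the phase-space factor $\sigma_\pi(s)=\sqrt{1-4m_\pi^2/s}$, and the convention ${\rm Im\,}z_S=M_S\Gamma_S$; it builds $g_{S\pi\pi}$ from the residue formula and verifies that its phase is $\pi/2-\delta_0^0({\rm Re\,}z_S)$:

```python
import cmath, math

def g_Spipi_narrow(M, Gamma, delta, m_pi=0.13957):
    """Narrow-width estimate of the complex coupling g_{S pi pi}.

    Implements g^2/(32 pi) = -exp(-2 i delta) Im z_S / sigma_pi(Re z_S),
    with Re z_S = M^2 and Im z_S = M * Gamma (toy convention)."""
    re_z = M * M
    im_z = M * Gamma
    sigma = math.sqrt(1.0 - 4.0 * m_pi**2 / re_z)  # pi-pi phase-space factor
    g2 = -32.0 * math.pi * cmath.exp(-2j * delta) * im_z / sigma
    return cmath.sqrt(g2)  # principal branch

# toy numbers: a narrow scalar at 980 MeV with a 40 MeV width and a
# phase shift of 60 degrees at the resonance mass
delta = math.radians(60.0)
g = g_Spipi_narrow(0.98, 0.04, delta)
phase = cmath.phase(g)
print(abs(g), phase, math.pi / 2 - delta)
```

The printed phase agrees with $\pi/2-\delta$, as the formula above requires; a non-zero background phase shift thus directly rotates the coupling into the complex plane.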
\section{$\sigma(600)$ and $f_0(980)$ couplings to gluon and quark
operators}\lblsec{operators}
\subsection{Definitions}
The $\sigma(600)$ and $f_0(980)$ mesons have the same quantum numbers
$J^{PC}=0^{++}$ as the vacuum, which are also those expected for the lightest
glueball. One can characterise the gluon content of a scalar meson from
its coupling to the gluonic operator $\alpha_s G^2$. One may also
consider the trace of the energy-momentum tensor operator,
$\theta_\mu^\mu$, which is proportional to $\alpha_s G^2$ in the chiral limit.
Correspondingly, two coupling constants $C_S^{GG}$, $C_S^{\theta}$
(with mass dimension) can be introduced
\begin{equation}\lbl{gcontentdef}
\begin{array}{ll}
\braque{0\vert {\alpha_s} G^{a\mu\nu} G^a_{\mu\nu}\vert S}& =m_S^2\, C_S^{GG} \\
\braque{0\vert \theta_\mu^\mu \vert S} & = m_S^2 \,C_S^{\theta}\\
\end{array}
\end{equation}
where $S$ is either the $\sigma$ or the $f_0(980)$ meson. We will also
consider matrix elements associated with scalar quark-antiquark
operators. It is convenient to use a normalisation which remains well
defined in the chiral limit
\begin{equation}\lbl{qcontentdef}
\braque{0\vert \bar{u}u+ \bar{d}d \vert S} =\sqrt2 B_0 \,C_S^{uu},\quad
\braque{0\vert \bar{s}s \vert S} = B_0 \,C_S^{ss} \ ,
\end{equation}
with $B_0=-\lim_{m_q\to0}\braque{0\vert\bar{q}q\vert0}/F_\pi^2$. With this
convention, the couplings are renormalisation group invariant in the
chiral limit.
At first, it is necessary to clarify the meaning of such matrix
elements since scalar mesons are resonances and not stable
one-particle states. One may use a complex plane definition,
which is rather natural here as it applies equally well to broad
resonances like the $\sigma$ or to ordinary narrow resonances. A
simple relation between the couplings $C^{j}_S$ and pion scalar
form-factors can be derived. For this purpose, let us consider
two-point correlation functions
\begin{equation}
\Pi_{jj}(s)=i\int d^4x e^{ipx} \braque{0\vert T j_S(x) j_S(0)\vert 0}
\end{equation}
where $j_S(x)$ is one of the scalar operators considered above.
The correlator $\Pi_{jj}$ satisfies a K\"all\'en-Lehmann representation
(see e.g.~\cite{IZ})
\begin{equation}
\Pi_{jj}(s)={s^3\over2\pi}\int_{4m_\pi^2}^\infty ds' {\rho_{jj}(s')\over
(s')^3( s'-s)}+ \alpha s^2 +\beta s+\gamma
\end{equation}
(written here with three subtractions) in which the spectral function is
given as a sum over a complete set of states
\begin{equation}\lbl{specsum}
(2\pi)^4\sum_n \delta^4(p_n-q)\vert \braque{0\vert j_S(0)\vert n}\vert^2
=\theta(q_0)\rho_{jj}(q^2)\ .
\end{equation}
The discontinuity of $\Pi_{jj}$ across the real axis in the range
$4m_\pi^2\le s\le 16m_\pi^2$ is generated by the two-pion states $n=\pi^a\pi^a$
in the sum~\rf{specsum} and it can be written as
\begin{equation}\lbl{discPijj}
\Pi_{jj}(s+i\epsilon)-\Pi_{jj}(s-i\epsilon)= {3\over 16\pi}
\sigma^\pi(s-i\epsilon) F_j(s-i\epsilon) F_j(s+i\epsilon)\ .
\end{equation}
Here, $F_j$ is the form-factor associated with the two-pion matrix
element of $j_S$
\begin{equation}
\braque{0\vert j_S(0)\vert \pi^i(p)\pi^j(p')}=\delta^{ij}
F_j((p+p')^2)\ .
\end{equation}
In deriving eq.~\rf{discPijj} one makes use of the fact that
$F_j(s)$ is itself a real-analytic function. It
has a cut along the positive real axis, and its discontinuity in the
range $[4m_\pi^2,16m_\pi^2]$ reads
\begin{equation}\lbl{discFj}
F_j(s+i\epsilon)-F_j(s-i\epsilon)=2\sigma^\pi(s-i\epsilon)
t_0^0(s-i\epsilon) F_j(s+i\epsilon).
\end{equation}
From the discontinuity relations~\rf{discPijj}~\rf{discFj}, it is
simple to deduce the second sheet extensions of the form-factor
\begin{equation}\lbl{FSII}
F_j^{II}(z) = {F_j(z)\over 1- 2\sigma^\pi(z) t_0^0(z)}
\end{equation}
and that of the correlator $\Pi_{jj}$
\begin{equation}\lbl{PiII}
\Pi^{II}_{jj}(z)= \Pi_{jj}(z)+ {3\over 16\pi} {\sigma^\pi(z) \left(
F_j(z)\right)^2\over 1- 2\sigma^\pi(z) t_0^0(z)}\ .
\end{equation}
These expressions show that the form factor and the correlation
function on the second Riemann sheet have exactly the same poles $z_S$ as
the $T$-matrix. Considering the residue of the pole provides a
natural identification for the resonance couplings $\braque{0\vert
j_S\vert S}$,
\begin{equation}
\left.\Pi_{jj}^{II}(z)\right\vert_{pole}\equiv { \left(\braque{0\vert j_S\vert
S}\right)^2\over z_S-z}\ ,
\end{equation}
which thus get expressed in terms of the $\pi\pi$ form-factor
evaluated at the position of the pole,
\begin{equation}\lbl{ffactorrel}
\braque{0\vert j_S\vert S}=
\sqrt{{-3\sigma^\pi(z_S)\over16\pi\,\dot{S}_0^0(z_S)}} F_j(z_S)\ .
\end{equation}
One can verify that the interpretation of residues in terms of
coupling constants satisfies consistency conditions. For instance, one
expects the residue of the form-factor $F_j^{II}(z)$ to involve the
product of the two couplings $\braque{0\vert j_S\vert S}$ and
$g_{S\pi\pi}$ in the following way
\begin{equation}\lbl{ffpole}
\left. F_j^{II}(z)\right\vert_{pole} = {
\braque{0\vert j_S\vert S}\times g_{S\pi\pi}\over\sqrt3(z_S-z)}\ .
\end{equation}
It is easy to verify that this expression can be exactly recovered
using formulas~\rf {t00II},\rf{FSII},~\rf{PiII} for the second-sheet
extensions together with the definition of $g_{S\pi\pi}$ from the
residue of $t_0^{0,II}(z)$ and the definition of $\braque{0\vert j_S\vert
S}$ from the residue of $\Pi_{jj}^{II}(z)$.
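Before turning to numerics, the second-sheet machinery can be illustrated in a single-channel toy model (an elastically unitary Breit-Wigner amplitude with illustrative mass and coupling, not the coupled-channel amplitude of the text). The sketch continues the amplitude to the second sheet, locates the pole by fixed-point iteration on the denominator condition, and checks that ${\rm Re\,}z_S\simeq M^2$ and $-{\rm Im\,}z_S\simeq M\Gamma$ with $\Gamma=g^2\sigma_\pi(M^2)/M$:

```python
import cmath

M, G2, MPI = 0.98, 0.02, 0.13957   # toy mass (GeV), coupling g^2, pion mass

def sigma(z):
    # two-pion phase-space factor, principal branch
    return cmath.sqrt(1.0 - 4.0 * MPI**2 / z)

def t_II(z):
    # toy unitary Breit-Wigner amplitude continued to the second sheet
    return G2 / (M * M - z - 1j * G2 * sigma(z))

# locate the second-sheet pole: fixed-point iteration of z = M^2 - i g^2 sigma(z)
z = complex(M * M, 0.0)
for _ in range(50):
    z = M * M - 1j * G2 * sigma(z)

width = -z.imag / M                     # Gamma read off from Im z_S = -M Gamma
print(z, width, G2 * sigma(M * M).real / M)
print(abs(t_II(z + 1e-4)), abs(t_II(0.5)))  # amplitude blows up near the pole
```

The pole sits just below the real axis of the second sheet, and the amplitude evaluated near $z_S$ is orders of magnitude larger than at a generic point, as the residue picture requires.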
In the limit of narrow resonances, one can express the couplings
$C_S^j$ in terms of the form-factor $F_j$ evaluated on the real
axis. For this purpose, one can write $F_j$ in the neighbourhood of
the resonance position as a function of the momentum $k$
\begin{equation}
F_j(z)={\phi_j(k)\over k-k^*_S}
\end{equation}
displaying explicitly the pole on the second sheet. If the pole is
close to the real axis we can expand the function $\phi_j(k)$,
\begin{equation}
\phi_j(k_S)=\phi_j({\rm Re\,} k_S)+i({\rm Im\,} k_S)\phi_j'({\rm Re\,} k_S)+\cdots
\end{equation}
which, to lowest order in ${\rm Im\,} k_S$, leads to the approximation
\begin{equation}\lbl{FjBW}
F_j(z_S)\simeq {1\over2} F_j({\rm Re\,} z_S)\ .
\end{equation}
Using also the expression for the derivative of the $S$-matrix in the
narrow width limit~\rf{dotSBW} one can express the couplings in terms
of quantities evaluated on the real axis
\begin{equation}\lbl{BWCj2}
\left(\braque{0\vert j_S\vert S}\right)^2\simeq {3\over 16\pi}
\sigma_\pi(M^2_S)M_S\Gamma_S \,
\left(\hbox{e}^{-i\delta_0^0(M^2_S)} F_j(M^2_S)\right)^2
\ ,
\end{equation}
using ${\rm Re\,} z_S\simeq M^2_S$, ${\rm Im\,} z_S= M_S\Gamma_S$. This expression
shows that the squares of the couplings $C_S^j$ must be real numbers
in the narrow width limit, provided $M_S$ is in the region of
applicability of Watson's theorem. The couplings themselves can be
either real or pure imaginary depending on whether the phase shift and
the phase of the form-factor are equal or differ by $\pi$.
\subsection{Numerical results}
Analyticity and unitarity allow one to derive
Omn\`es representations for the form-factors, analogous to those for the
$\gamma\gamma\to \pi\pi$ amplitude but much simpler because of the
absence of a left-hand cut. Let us briefly recall the derivation. Let
$\overline{F}(s)$ be a two-component vector formed from the pion and kaon
form-factors,
\begin{equation}
^t\,{\overline{F}(s)}=(F_j^\pi(s),{2\over\sqrt3}F_j^K(s))
\end{equation}
and multiply it with the inverse of the Omn\`es matrix\footnote{The
determinant of the Omn\`es matrix can be expressed in analytical
form: $\hbox{det}
\bm{\Omega}(s)=\exp\left({s\over\pi}\int_{4m_\pi^2}^\infty
ds' {\phi(s')\over s'(s'-s)}\right)$ with
$\phi(s')=\theta(4m_K^2-s')\delta_0^0(s')+\theta(s'-4m_K^2)\delta_{\pi\pi\to
K\bar{K}}(s')$ which shows that it does not vanish.}
\begin{equation}
\overline{G}(s)\equiv\ \bm{\Omega}^{-1}(s) \overline{F}(s)\ .
\end{equation}
This multiplication removes part of the right-hand cut, i.e. the
components of $\overline{G}(s)$ have a right-hand discontinuity which
vanishes in the range
\begin{equation}
{\rm Im\,} \overline{G}(s) \simeq 0,\quad 4m_\pi^2\le s \le s_2
\end{equation}
where $s_2$ is the point above which two-channel unitarity is no
longer a good approximation. By construction, the components of
$\bm{\Omega}(s)$ behave as $1/s$ when $s\to\infty$ and a similar behaviour
is expected from the form-factors, such that $\overline{G}(s)$ should
satisfy a once-subtracted dispersion relation. In terms of
$\overline{F}$, it reads
\begin{eqnarray}
&& \overline{F}(s)=\bm{\Omega}(s)\Big[
\left(\begin{array}{c}
\alpha\\
\beta\\
\end{array}\right) \nonumber\\
&&\quad + {s\over\pi}\int_{s_2}^\infty {ds' \over s' (s'-s) }\, {\rm Im\,}\left(
\bm{\Omega}^{-1}(s')\overline{F}(s') \right)\Big]\ .
\end{eqnarray}
In the range $s\ll s_2$, the energy dependence of the integral may be
neglected and one ends up with the following representation for the
form-factors
\begin{equation}\lbl{omffactor}
\left(\begin{array}{r}
F_j^\pi(s)\\[0.1cm]
{2\over\sqrt3} F_j^K(s)
\end{array}\right)=
\left(\begin{array}{cc}
\Omega_{11}(s) & \Omega_{12}(s)\\[0.1cm]
\Omega_{21}(s) & \Omega_{22}(s)
\end{array}\right)\left(\begin{array}{c}
\alpha +\alpha' s\\
\beta +\beta' s
\end{array}\right)\ .
\end{equation}
As the discussion above shows, it is valid for $s \ll s_2$.
Such representations were used and discussed in detail in ref.~\cite{DGL90}.
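As an illustration of the construction, a single-channel Omn\`es function (the scalar analogue of the matrix $\bm{\Omega}$ used in the text) can be computed directly from its dispersive definition. The toy phase shift and the integration cutoff below are assumptions made purely for the sketch:

```python
import cmath, math

MPI = 0.13957
S_TH = 4.0 * MPI**2           # two-pion threshold

def delta_toy(s):
    # toy S-wave phase shift, rising from 0 towards pi (radians)
    return math.pi * (1.0 - S_TH / s) if s > S_TH else 0.0

def omnes(z, n=4000, s_max=100.0):
    """Single-channel Omnes function
       Omega(z) = exp[ z/pi * int_{s_th}^{s_max} delta(s')/(s'(s'-z)) ds' ],
       evaluated off the cut with a plain midpoint rule (truncated integral)."""
    h = (s_max - S_TH) / n
    integral = 0.0 + 0.0j
    for k in range(n):
        sp = S_TH + (k + 0.5) * h
        integral += delta_toy(sp) / (sp * (sp - z)) * h
    return cmath.exp(z / math.pi * integral)

print(omnes(0.0))           # normalisation Omega(0) = 1
print(omnes(0.25 - 0.1j))   # value at a complex point, as needed for residues
```

The function is normalised to one at $s=0$, is real below threshold, and can be evaluated at complex $s$, which is what the residue extraction of the couplings requires.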
In order to determine the polynomial parameters, one
can rely on chiral symmetry~\cite{DGL90}. As a first approximation, one
can use the chiral expansions of the form factors at order $p^2$ and
determine the polynomial coefficients by matching the $O(p^2)$ values
of $F_j^P(0)$, $\dot F_j^P(0)$
\begin{table}[htb]
\begin{center}
\begin{tabular}{c||rc||rc}\hline\hline
\TT\BB $j_S$ & $F_j^\pi(0)$ & $\dot{F}_j^\pi(0)$ & $F_j^K(0)$ &
$\dot{F}_j^K(0)$ \\ \hline
\TT $m_u\bar{u}u+
m_d\bar{d}d$ & $m_\pi^2$ & $0$ & ${1\over2}m_\pi^2$ & $0$ \\
\TT $m_s\bar{s}s$ & $0$ & $0$ & $m_K^2-{1\over2}m_\pi^2$ & $0$ \\
\TT $\theta_\mu^\mu$ & $2m_\pi^2$ & $1$ & $2m_K^2$ & $1$ \\ \hline\hline
\end{tabular}
\caption{\sl Pion and kaon form-factors associated with various
operators $j_S$. The table shows their values at $s=0$ and the
values of their derivatives at leading chiral order.}
\lbltab{chiralFj}
\end{center}
\end{table}
with those of the Omn\`es representation. These $O(p^2)$ values are
recalled in table~\Table{chiralFj}.
The representation~\rf{omffactor} then allows one to compute the
form-factors for complex values of $s$ (with $\vert s\vert < s_2$) and
thus determine the values of the couplings between scalar operators and
scalar mesons from residue relations like~\rf{ffactorrel}.
The numerical values of the absolute values of couplings (the phases
will be shown later) of the $\sigma$ and $f_0(980)$ mesons to the
$\bar{q}q$ operators obtained in this manner are collected in
table~\Table{qqbarcoupl}.
\begin{table}[htb]
\begin{center}
\begin{tabular}{c||cc}\hline\hline
\TT & $\sigma(600)$ & $f_0(980)$ \\ \hline
\TT$\vert C^{uu}_S\vert$ (MeV)& $206\pm4 ^{+4}_{-6}$ & $82\pm31 ^{+12}_{-7}$ \\
\TT$\vert C^{ss}_S\vert$ (MeV)& $17\pm5 ^{+1}_{-7}$ & $146\pm44
^{+14}_{-7}$
\\ \hline\hline
\end{tabular}
\caption{\sl Absolute values (in MeV) of the couplings of the
$\sigma$ and $f_0(980)$ mesons to scalar $\bar{q}q$
operators as defined in eq.~\rf{qcontentdef}. }
\lbltab{qqbarcoupl}
\end{center}
\end{table}
In this table, the first error reflects the influence of higher order
chiral corrections in the polynomial parameters. We have estimated
that the order of magnitude, relative to the $O(p^2)$ values, should
be $\simeq30\%$ for the corrections proportional to $m_s$, and
neglected the corrections proportional to $m_{u,d}$. As expected, a
larger uncertainty is generated for the $f_0(980)$ than for the $\sigma$.
The second error is associated with the uncertainties in the $\pi\pi$
and $K\bar{K}$ $T$-matrix as reflected in the Omn\`es matrix elements.
A previous estimate of the $\bar{q}q$ coupling of the $\sigma$ meson,
using Breit-Wigner approximations, was given in
ref.~\cite{gardnermeissner} in the form
$\braque{0\vert\bar{d}d\vert\sigma}=\sqrt{2/3}B_0/\chi$, with
$\chi=20$ GeV$^{-1}$, which is significantly smaller than our
result. Some results for the couplings of the $I=1$ and $I=1/2$ mesons
to $\bar{q}q$ operators can be found in the literature. These
resonances are reasonably narrow, such that various definitions should
be equivalent and we can compare their values to those we found for
the $I=0$ mesons.
We normalise the couplings of these
mesons in accordance with eq.~\rf{qcontentdef}
\begin{equation}\lbl{coupl1}
\braque{0\vert\bar u s \vert K^*_0}= B_0 C^{us}_{K^*_0},\quad
\braque{0\vert\bar u d \vert a_0 }= B_0 C^{ud}_{a_0}\ .
\end{equation}
An evaluation of the $a_0(980)$ coupling was performed in
ref.~\cite{Maltman:1999jn} using finite-energy sum rules (see also
ref.~\cite{Narison:1984jr}). Converted to the normalisation of
eq.~\rf{coupl1}, the result of~\cite{Maltman:1999jn}
reads,
\begin{equation}\lbl{Ca0}
\vert C_{a_0(980)}^{ud}\vert =197\pm37\ \hbox{MeV}
\end{equation}
which is remarkably similar to the coupling of the $\sigma$ meson
$C_\sigma^{uu}$ in table~\Table{qqbarcoupl}.
The coupling $C_{a_0(980)}^{ud}$ is related to the coupling $c_m$
introduced in ref.~\cite{Ecker:1988te} by $C_{a_0(980)}^{ud}=4c_m$ and
can be estimated from its relation to the low-energy chiral coupling
constants~\cite{Ecker:1988te}, eventually supplemented with large
$N_c$ or chiral sum rule
constraints~\cite{JOP2,rosellsanzcillero}. These approaches yield
values in the range $C_{a_0(980)}^{ud}= [120,200]$ MeV. An unquenched
lattice QCD calculation has also been performed~\cite{McNeile:2006nv}
which gives: $C_{a_0(980)}^{ud}= [304,340]$ MeV. These values should
not be compared too literally to the preceding ones because they
correspond to unphysical pion masses $m_\pi/m_\pi^{phys} \gapprox 5$
and only two dynamical flavours.
An estimate for the $\kappa$ meson coupling $C^{us}_\kappa$ can
be made following a similar approach to that used here for the
$\sigma$ meson. One can compute the position of the complex pole and
the corresponding value of the $S$-matrix derivative from the
Roy-Steiner equations~\cite{descotesmou}. The central values which one
obtains in this way are
\begin{equation}
\sqrt{z_S}\simeq (658+i\,277)\ \hbox{MeV},\quad
\dot{S}_0^{1\over2}(z_S)\simeq (0.59+i\,2.03)\ \hbox{GeV}^{-2}
\end{equation}
The coupling can then be defined in terms of the $K\pi$ scalar
form-factor evaluated at $z_S$ (see ref.~\cite{ElBennich:2009da},
appendix C) and this gives
\begin{equation}
\vert C_{\kappa(800)}^{us}\vert \simeq 156 \ \hbox{MeV}\ .
\end{equation}
Comparing now the couplings of the $I=0$
mesons from table~\Table{qqbarcoupl} to those of the $I=1,\ 1/2$ mesons
one observes that the values of $C^{uu}_\sigma$,
$C^{us}_\kappa$, $C^{ss}_{f_0(980)}$, $C^{ud}_{a_0(980)}$ are rather
similar; the relative differences do not exceed $\simeq 20\%$. This is
compatible with an assignment of the mesons $\sigma$, $\kappa$,
$f_0(980)$, $a_0(980)$ into a nonet. Results on the couplings of the
heavier scalar mesons $a_0(1450)$ and $K^*_0(1430)$ are also
available. Ref.~\cite{Maltman:1999jn} gives
\begin{equation}
\vert C^{ud}_{a_0(1450)}\vert= 284\pm54\ \hbox{MeV},\quad
\vert C^{us}_{K^*_0(1430)}\vert= 370\pm20 \ \hbox{MeV} \ .
\end{equation}
The result for the $a_0(1450)$ was obtained from a finite-energy sum
rule and the one for the $K^*_0(1430)$ from a one-channel Omn\`es
representation. An evaluation using a two-channel representation and
complex pole definition was made in ref.~\cite{ElBennich:2009da} which
gives $\vert C^{us}_{K^*_0(1430)}\vert \simeq 282$ MeV. With the
normalisations used here, the couplings of the $a_0(1450)$ and
$K^*_0(1430)$ to quark-antiquark operators seem to be significantly
larger than those of the light scalars.
\begin{table}[htb]
\begin{center}
\begin{tabular}{c||cc}\hline\hline
\TT & $\sigma(600)$ & $f_0(980)$ \\ \hline
\TT$\vert C^{\theta}_S\vert$ (MeV)& $197\pm15^{+21}_{-6}$& $114\pm44 ^{+22}_{-7}$ \\
\TT\BB$\vert C^{GG}_S\vert$ (MeV)& $472\pm 15 ^{+26}_{-16}$ & $227\pm41
^{+51}_{-16}$ \\ \hline\hline
\end{tabular}
\caption{\sl Absolute values of the couplings of the $\sigma$ and
$f_0(980)$ to the gluonic operators $\theta_\mu^\mu$ and $\alpha_s G^2$.
}
\lbltab{gluoncoupl}
\end{center}
\end{table}
Finally, one can compute the couplings of the light $I=0$ scalars to the
energy-momentum trace operator $\theta_\mu^\mu$ using the chiral
results for the associated form-factor at $s=0$ from
table~\Table{chiralFj}. The results are shown in the first line of
table~\Table{gluoncoupl}.
One finds that both the $\sigma$ and the $f_0(980)$ display a
significant coupling to the $\theta_\mu^\mu$ operator. The trace of
the energy-momentum tensor has the following exact expression in
QCD~\cite{collins77} with three heavy flavours integrated out
\begin{equation}
\theta_\mu^\mu= {\beta(g)\over 2g} G^a_{\mu\nu} G^{a\mu\nu}
+(1+\gamma_m(g))\sum_{q=u,d,s} m_q \bar{q}q\ .
\end{equation}
This expression allows one to disentangle the $\alpha_s G^2$ part from the
$\bar{q}q$ one if one uses a perturbative approximation for the
$\beta$ function and for the anomalous dimension. The results shown in
table~\Table{gluoncoupl} for $C_S^{GG}$ correspond to a leading order
approximation.
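A back-of-the-envelope sketch of the leading-order conversion: with $b_0=11-2n_f/3=9$ one has $\beta(g)/(2g)=-b_0\,\alpha_s/(8\pi)$, so in the chiral limit $\vert C^{GG}\vert=(8\pi/9)\vert C^\theta\vert$, independently of $\alpha_s$. The quark-mass terms kept in the text are dropped here, so the resulting number is only indicative:

```python
import math

def beta_over_2g_LO(alpha_s, nf=3):
    """Leading-order beta(g)/(2g) = -b0 * alpha_s / (8 pi), b0 = 11 - 2 nf / 3."""
    b0 = 11.0 - 2.0 * nf / 3.0
    return -b0 * alpha_s / (8.0 * math.pi)

# chiral limit: theta = (beta/2g) G^2, hence m^2 C^theta = (beta/2g)/alpha_s * m^2 C^GG,
# so |C^GG| = (8 pi / b0) |C^theta| at leading order
factor = 8.0 * math.pi / 9.0
C_theta_sigma = 0.197                   # |C^theta_sigma| in GeV (central value)
print(factor, factor * C_theta_sigma)   # chiral-limit estimate of |C^GG_sigma|
```

The chiral-limit estimate differs from the full result precisely because the quark-mass contributions to $\theta_\mu^\mu$ are not negligible for these states.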
Our results for $C_S^\theta$ may be compared with the Laplace sum
rule evaluation~\cite{narisonveneziano}
\begin{equation}\lbl{narisonvenez}
C_\sigma^\theta=[272,329]\ \hbox{MeV}\ .
\end{equation}
However, one should keep in mind that in the
calculation of~\cite{narisonveneziano}, the spectral
function ${\rm Im\,}\Pi_{jj}(s)$ corresponding to the operator
$j_S=\theta_\mu^\mu$ is approximated by a simple delta
function. Fig.~\fig{spectralmm} shows our result for this spectral
function based on using two-channel unitarity and physical $\pi\pi$
scattering inputs. It displays a peak corresponding to the
$f_0(980)$ resonance, while the $\sigma$ resonance does not show up as
a clear enhancement, but generates a broadening of the
$f_0(980)$ peak at low energies. It is then plausible that the
value~\rf{narisonvenez} should be compared with the sum
$C_\sigma^\theta+C_{f_0}^\theta$ from table~\Table{gluoncoupl}: the
agreement is then rather reasonable.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.66666\textwidth]{spectralmm.ps}\\
\caption{\sl Spectral function of the $\Pi_{jj}$ correlator with
$j_S=\theta_\mu^\mu$. The long-dashed and short-dashed curves are
the contributions from the $\pi\pi$ and
$K\bar{K}$ intermediate states respectively. }
\label{fig:spectralmm}
\end{center}
\end{figure}
Finally, the central values of the phases of the couplings $C_S^j$ are
shown in table~\Table{phasesCj}. In the Breit-Wigner approximation,
one expects the phases to be either zero or $\pm90^\circ$
(see~\rf{BWCj2}). The actual values are often not too different from
this approximation.
\begin{table}[htb]
\begin{center}
\begin{tabular}{c|rr}\hline\hline
\TT & $\sigma\phantom{(6)}$ & $f_0(980)$ \\ \hline
\TT $\bar{u}u+\bar{d}d$ & $28.2^\circ$ & $ 89.1^\circ$ \\
\TT $\bar{s}s $ & $-80.2^\circ$ & $-14.2^\circ$ \\
\TT $\theta_\mu^\mu$ & $87.2^\circ$ & $-34.6^\circ$ \\ \hline\hline
\end{tabular}
\caption{\sl Central values of the phases of the couplings $C_S^j$.}
\lbltab{phasesCj}
\end{center}
\end{table}
\section{Conclusions}
We have considered several properties of the light scalar isoscalar
mesons $\sigma$ and $f_0(980)$ using definitions which rely on the
positions of the poles in the complex plane and their associated
residues. This approach allows one to deal with a broad resonance like
the $\sigma$ in a well defined way.
In order to compute the positions of the poles and the residues, the
Roy integral representation for the $\pi\pi$ scattering amplitude
$t_0^0$ was used. On the real axis, we have started from the Roy
equation solutions of ref.~\cite{ACGL}, which use a matching point
$\sqrt{s_m}=0.8$ GeV, and constructed an extended solution which, for the
$S$-wave $t_0^0$, has a higher matching point $\sqrt{s_m}=2m_K$, so
as to improve the theoretical constraints on the $f_0(980)$ meson
properties. In order to constrain the value of the $S$-wave scattering
phase-shift at the $K\bar{K}$ threshold and discriminate between
different shapes of the inelasticity, corresponding to different
experiments, we perform fits of the phase-shifts below the $K\bar{K}$
threshold based on the Roy solutions. We find that the solution
corresponding to a deep-dip shaped inelasticity has a better $\chi^2$
than that corresponding to a shallow-dip shape. This is in agreement with
the observations of ref.~\cite{GKPY3}. The properties of the $f_0(980)$
resonance, as expected, are particularly sensitive to the central
value of the inelasticity. The results based on this Roy representation
of the amplitude for the second-sheet pole positions are in
table~\Table{zpoles}.
As a first application, we have re-determined the scalar to two photons
couplings $g_{S\gamma\gamma}$, following the methodology first
advocated in ref.~\cite{Pennington:2006dg}, and based on the
determinations of the $\gamma\gamma\to\pi\pi$ amplitudes from the
recent experimental measurements~\cite{Belle1,Belle2}. The result
found for the $\sigma$ is somewhat smaller than that originally given in
ref.~\cite{Pennington:2006dg}.
As a second application, the couplings of the $\sigma$ and $f_0(980)$
mesons to scalar operators, which can be formally denoted as
$\braque{0\vert j_S(0)\vert \sigma}$, $\braque{0\vert j_S(0)\vert
f_0(980)}$ were defined and evaluated.
Choosing $j_S=(\bar{u}u+\bar{d}d)/\sqrt2$ or $j_S=\bar{s}s$, these
matrix elements provide a quantitative measure of the quark-antiquark
contents of the scalar mesons, while choosing $j_S=\theta_\mu^\mu$ provides a
measure of the glue content. A simple, general relation can be
established between such couplings and the value of the pion
form-factor associated with the operator $j_S$ computed at the
position of the resonance pole, $F_j^{\pi\pi}(z_S)$. This relation is
given in eq.~\rf{ffactorrel} in the general case and in eq.~\rf{BWCj2}
in the limiting case of a narrow resonance.
Such form-factors are known to be calculable from a coupled-channel
Omn\`es representation~\cite{DGL90} which should be valid in a
complex energy range which accommodates the $\sigma$ as well as the
$f_0(980)$ resonances. The polynomial parameters in such
representations are constrained by chiral symmetry, for both the
$\bar{q}q$ and $\theta_\mu^\mu$ operators, and can be estimated from
the leading order chiral Lagrangian~\cite{DGL90}. In principle,
matrix elements of other types of operators, for instance tetraquark
operators, could be addressed in the same way. The values of
$F_j(0)$ and $\dot{F}_j(0)$, in such cases, are not predicted from
chiral symmetry but could be obtained e.g. from lattice QCD.
The numerical results for the $\bar{q}q$ coupling constants of
$\sigma$ and the $f_0(980)$ mesons are shown in
table~\Table{qqbarcoupl}. The couplings are not particularly
suppressed but it would be interesting to compare them with couplings
to tetraquark operators. The couplings can also be compared to the
analogous couplings of the $I=1$ and $I=1/2$ mesons to the $\bar{u}d$
and $\bar{u}s$ operators respectively for which estimates can be found
in the literature, including one calculation in lattice
QCD~\cite{McNeile:2006nv}. This comparison supports a nonet assignment
of the $\sigma$, $\kappa$, $a_0(980)$, $f_0(980)$ mesons. Our results
for the couplings to the gluonic operators $\theta_\mu^\mu$ and
$\alpha_s G^2$ indicate that both the $\sigma$ and $f_0(980)$ couple
significantly to such operators as well.
\section*{Acknowledgements}
I would like to thank Prof. W. Ochs for sending me original data tables
and Martin Hoferichter for making several very useful comments on the
manuscript.
\section*{Appendix}
We show below central values of the parameters of Roy solutions for
the phase-shift $\delta_0^0(s)$, for a ten-parameter approximation
according to eq.~\rf{paramsol}, corresponding to two different central
values of the inelasticity function $\eta_0^0$, see sec.~\sect{royeq}.
\begin{center}
\begin{tabular}{lll}\hline\hline
\TT\BB & $\eta_0^0$: deep-dip & $\eta_0^0$: shallow-dip \\ \hline
\TT $s_0$ & $0.724237452 $ & $0.736126142 $ \\
$\beta$ & $0.104114178 $ & $0.360170063 $ \\
$\alpha_1$ & $0.140785825 $ & $ 0.146890648 $ \\
$\alpha_2$ & $-0.0408980664 $ & $-0.0391129286 $ \\
$\alpha_3$ & $0.00648917902 $ & $ 0.00545496306 $ \\
$\alpha_4$ & $-0.000845352717 $ & $-0.000636406542 $ \\
$\alpha_5$ & $7.20101833\times 10^{-5} $ & $4.62727765\times 10^{-5}$ \\
$\alpha_6$ & $-2.89568524\times 10^{-6} $ & $-8.84679012\times 10^{-7}$ \\
$\alpha_7$ & $-8.92462472\times 10^{-9} $ & $-9.93513196\times 10^{-8}$ \\
$\alpha_8$ & $3.07108997\times 10^{-9} $ & $4.83952846\times 10^{-9} $ \\ \hline\hline
\end{tabular}
\end{center}
\section{Introduction}
Most fundamental theories of physics have connections among their basic variables, like the standard model of particle physics and the Ashtekar formulation of general relativity. It is therefore important, especially with respect to a quantization of these theories, to consider functions of connections, i.e. observables, in these theories. Probably the best known examples of such functions are the Wilson loops, i.e. traces of holonomies along closed paths; but open paths have also been considered, in particular when the observables have to act on fermions.
One problem we have encountered in our attempt \cite{AastrupGrimstruprew} to merge quantum gravity with noncommutative geometry is that variables like the Wilson loops, and related variables, tend to discretize the underlying spaces. Therefore in this paper we will commence the study of an algebra of "functions" of smeared objects in order to avoid this discretization. More concretely we will study a $C^*$-algebra generated by flows of vector fields on a manifold $M$ and the smooth compactly supported functions on $M$. Flows of vector fields constitute a natural notion of families of paths and, when evaluated on a connection in the spin-bundle $S$, naturally give an operator on the spinors, i.e. on $L^2(M,S)$, rather than, like a path, acting on just one point in $M$. The holonomy-diffeomorphism algebra is defined as the $C^*$-algebra generated by the flows and the smooth functions, with norm given by the supremum over all the smooth connections. In this setup, smooth connections are viewed as representations of the holonomy-diffeomorphism algebra.
It is the holonomy-diffeomorphism algebra which is our candidate for an algebra of observables.
One test of whether a proposed algebra of observables is suitable is to look at the spectrum of the algebra, i.e. the space of irreducible representations modulo unitary equivalence.
The main result in this paper is that all non-degenerate separable representations of the holonomy-diffeomorphism algebra are given by so-called measurable connections. These are objects similar to the generalized connections encountered in Loop Quantum Gravity (LQG), see \cite{AshtekarLewandowski}, but which take into account the measure class of the Riemannian metrics instead of the measure class of the counting measure. The measurable connections of course contain the smooth connections.
This paper is the second of two papers. Where its prequel \cite{AastrupGrimstrup1} is concerned with an exposition of a mathematical framework of quantum gravity based on the holonomy-diffeomorphism algebra this paper is solely concerned with the mathematical analysis of this algebra. \\
The paper is organized as follows:
In section 2 we define the holonomy-diffeomorphism algebra.
In section 3 we define the flow algebra. This algebra is constructed as a quotient of the crossed product of the group generated by the flows of the vector fields and the compactly supported smooth functions on the manifold. The ideal in this crossed product which is divided out encodes the relation of local reparametrization. In particular the representations defining the holonomy-diffeomorphism algebra also give representations of the flow algebra.
We show that separable non-degenerate representations of this flow algebra are given by so-called measurable connections, and show that unitary equivalence between these measurable connections is given by measurable gauge equivalence.
In section 4 we compare our setup with the LQG setup. The generalized connections appearing in LQG also give rise to representations of the flow algebra, albeit non-separable ones. We show that the generalized connections can be obtained as the representations of a discretized version of the flow algebra.
In section 5 we study the properties of the representations of the holonomy-diffeomorphism algebra given by smooth connections. In particular we show that a connection is irreducible if and only if the corresponding representation of the holonomy-diffeomorphism algebra is irreducible, and describe some of the structure of the separable part of the spectrum.
In the second part of section 5 we show that if the dimension of the manifold is bigger than 1, the representations defined in section 4 coming from generalized connections are not contained in the spectrum of the holonomy-diffeomorphism algebra. We do not, however, know whether there are other non-separable representations contained in the spectrum. \\
\textbf{Acknowledgement:} We thank M. Bekka, U. Haagerup and R. Nest for help concerning the representation theory of $Gl_n({\Bbb R} )$.
We thank Adam Rennie for enlightening discussions and great hospitality during our stay at the ANU, Canberra.
The first author would like to thank A. Tak\'acs and Horst S. for constant support.
\section{The holonomy-diffeomorphism algebra}
Let $M$ be a connected manifold and $S$ a vector bundle over $M$. We assume that $S$ is equipped with a fibre-wise metric. This metric ensures that we have a Hilbert space $L^2 (M , \Omega^{\frac12} \otimes S)$, where $\Omega^{\frac12}$ denotes the bundle of half densities on $M$. A diffeomorphism $\phi: M\to M$ acts unitarily on $L^2 (M , \Omega^{\frac12} )$ through
$$ \phi (\xi)(m)= \phi^*(\xi (\phi (m))) , $$
where
$$\phi^* :\Omega^{\frac12} (\phi (m)) \to \Omega^{\frac12} (m) $$
denotes the pullback.
Let $X$ be a vectorfield on $M$, which can be exponentiated, and let $\nabla$ be a connection in $S$. Denote by $t\to \exp_t(X)$ the corresponding flow. Given $m\in M$ let $\gamma$ be the curve
$$\gamma (t)=\exp_{1-t} (X) (\exp_1 (X)(m) )$$
running from $\exp_1 (X)(m)$ to $m$. We define the operator
$$e^X_\nabla :L^2 (M , \Omega^{\frac12} \otimes S) \to L^2 (M , \Omega^{\frac12} \otimes S)$$
in the following way:
Let $\xi \in L^2 (M , \Omega^{\frac12} \otimes S)$ be locally over $\exp_1(X)(m)$ of the form $f \otimes \omega \otimes s $, where $f$ is a function, $\omega$ an element in $\Omega^{\frac12}$ and $s$ an element in $S$.
The value of $(e^X_\nabla)(\xi)$ in the point $m$ is given as
$$(f(\exp_1(X)(m)))\, (\exp_1(X)^*(\omega)) \otimes (\hbox{Hol}(\gamma , \nabla) s), $$
where $\hbox{Hol}(\gamma , \nabla)$ denotes the holonomy of $\nabla$ along $\gamma$.
If the connection $\nabla$ is unitary with respect to the metric on $S$, then $e^X_\nabla$ is a unitary operator.
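Concretely, $\hbox{Hol}(\gamma,\nabla)$ is the path-ordered exponential of the connection along $\gamma$. The sketch below uses a toy $\mathfrak{su}(2)$-valued connection on an interval (standing in for the bundle $S$; the profile functions $a(x)$, $b(x)$ are arbitrary choices), builds the holonomy as an ordered product of short-step exponentials, and checks that a unitary connection yields a unitary holonomy:

```python
import math

def mat_mul(A, B):
    # product of two 2x2 complex matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_su2(a, b, dt):
    """exp(i dt (a s1 + b s3)) for Pauli matrices s1, s3 (exactly unitary)."""
    th = math.hypot(a, b) * dt
    c, s = math.cos(th), math.sin(th)
    na, nb = a * dt / th, b * dt / th
    return [[c + 1j * s * nb, 1j * s * na],
            [1j * s * na, c - 1j * s * nb]]

def holonomy(n=2000):
    """Path-ordered exponential of a toy su(2) connection A(x) = a(x) s1 + b(x) s3
       along the interval [0, 1], as an ordered product of short steps."""
    U = [[1, 0], [0, 1]]
    dt = 1.0 / n
    for k in range(n):
        x = (k + 0.5) * dt
        U = mat_mul(exp_su2(math.sin(x), x * x, dt), U)
    return U

U = holonomy()
Udag = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
P = mat_mul(U, Udag)   # should be the identity: U U^dagger = 1
print(P)
```

Since each short-step factor is the exponential of an anti-Hermitian matrix, the ordered product, and hence the holonomy, is unitary, mirroring the statement above.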
If we are given a system of unitary connections ${\cal A}$ we define an operator valued function over ${\cal A}$ via
$${\cal A} \ni \nabla \to e^X_\nabla ,$$
and denote this by $e^X$. Denote by ${\cal F} ({\cal A} , {\cal B} (L^2(M,\Omega^{\frac12}\otimes S)) )$ the bounded operator valued functions over ${\cal A}$. This forms a $C^*$-algebra with the norm
$$\| \Psi \| = \sup_{\nabla \in {\cal A}} \{\| \Psi (\nabla )\| \}, \quad \Psi \in {\cal F} ({\cal A} , {\cal B} (L^2(M,\Omega^{\frac12}\otimes S)) ) . $$
For a function $f\in C^\infty_c (M)$ we get another operator valued function $fe^X$ on ${\cal A}$.
\begin{definition}
Let
$$C = \hbox{span} \{ fe^X |f\in C^\infty_c(M), \ X \hbox{ exponentiable vector field }\} . $$
The holonomy-diffeomorphism algebra $\mathbf{H D} (M,S,{\cal A}) $ is defined to be the $C^*$-subalgebra of ${\cal F} ({\cal A} , {\cal B} (L^2(M,\Omega^{\frac12}\otimes S)) )$ generated by $C$.
We denote by $\mathcal{HD} (M,S,{\cal A}) $ the $*$-algebra generated by $C$.
\end{definition}
It is this algebra that will be the object of study in this paper. We will in particular be interested in the spectrum and the representations of the holonomy-diffeomorphism algebra.
\subsection{Formulation with a metric}
The construction above of the holonomy-diffeomorphism algebra shows that it is background independent. In some situations it is however convenient to have a formulation with a metric, and we will therefore in this subsection explain the construction given a fixed background metric $g$.
Given an exponentiable vector field $X$ and a unitary connection $\nabla$, the first attempt to define an operator associated to $X$ would be to define
$$(e^X_\nabla ( \xi))( m)= \hbox{Hol} (\gamma , \nabla ) \xi ( \exp_1(X)(m)) ,\quad \xi \in L^2(M,S,dg) .$$
(We have kept the notation from the previous section.) The problem is that, since the flow of $X$ might not preserve the measure $dg$, this is not a unitary operator; even worse, it might not define an operator at all, since
$\| e^X_\nabla ( \xi )\|^2=\infty $ can occur.
To fix this we define the operator as
$$(e^X_\nabla ( \xi))( m)= \frac{\sqrt[4]{|g|} (\exp_1(X) (m))}{\sqrt[4]{|g|} (m)} \hbox{Hol} (\gamma , \nabla ) \xi ( \exp_1(X)(m)) ,\quad \xi \in L^2(M,S,dg) .$$
This renders $e^X_\nabla$ unitary. We then define $\mathbf{HD}(M,S,{\cal A})$ and $\mathcal{HD}(M,S,{\cal A})$ as we did in the previous section.
\section{An abstract algebra}
In order to study the spectrum and the representations of $\mathbf{H D} (M,S,{\cal A}) $ we will introduce an algebra which is more abstract than $\mathbf{H D} (M,S,{\cal A}) $ and which in the end carries the information of the representation theory of $\mathbf{H D} (M,S,{\cal A}) $ plus some additional information.
Let $X$ be a vector field on $M$ which can be exponentiated. We will denote the flow to time $t$ as $e^{tX}$. By $e^X$ we denote the flow from $0$ to $1$, i.e. the map $ M\times [0,1] \to M $ given by
$$ (m,t)\to e^{tX} (m) . $$
Two such flows for two vector fields $X_1,X_2$ can be composed via
$$t \to \left\{ \begin{array}{cl} e^{2t X_1} &, t\in [0,\frac12 ]\\
e^{ (2t -1)X_2} \circ e^{X_1}&,t \in [\frac12 , 1]
\end{array} \right.$$
This is of course usually not a flow.
If two flows are the same modulo reparametrization we will identify the flows.
Also we will identify $e^Xe^{-X}$ with the trivial flow $I$, and $e^X \circ I$, $I \circ e^X$ with $e^X$. We denote by ${\cal F}$ the group generated by these flows.
The flow group ${\cal F}$ acts on $M$ simply by considering for $e^X$ the diffeomorphism $e^{1X}$. Note that this action is not faithful, since we are considering the diffeomorphism part of the flow.
From the action on $M$ we induce a left action on $C^\infty_c(M)$ via
$$ e^X(f)(m)=f(e^{-1X}(m)) . $$
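As a quick check (an aside we add for the reader; it is not used later), this is indeed a left action:

```latex
% For F_1, F_2 \in {\cal F} and f \in C^\infty_c (M):
$$ (F_1 F_2)(f)(m) = f\big( (F_1F_2)^{-1}(m) \big)
= f\big( F_2^{-1}(F_1^{-1}(m)) \big)
= F_2(f)\big( F_1^{-1}(m) \big)
= F_1\big( F_2(f) \big)(m) . $$
```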
We form the cross product
$$ {\cal F} \ltimes C^\infty_c (M). $$
This algebra consists of the linear span of formal products
$$ f F , \quad f\in C^\infty_c (M), F\in {\cal F} , $$
with the multiplication relation
$$f_1F_1 f_2 F_2=f_1 F_1 (f_2) F_1 F_2 $$
and adjoint
$$ (f_1 F_1)^*= F_1^*\overline{f}_1=F_1^{-1}(\overline{f}_1) F_1^{-1} .$$
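As a consistency check (this verification is ours and is not part of the original argument), the adjoint is compatible with the multiplication relation, i.e. $((f_1F_1)(f_2F_2))^* = (f_2F_2)^*(f_1F_1)^*$:

```latex
% Both sides reduce to the same element of {\cal F} \ltimes C^\infty_c (M),
% using fF = F F^{-1}(f) and F_1^{-1}(F_1(f_2)) = f_2:
\begin{eqnarray*}
\big( (f_1F_1)(f_2F_2) \big)^* &=& \big( f_1 F_1(f_2)\, F_1F_2 \big)^*
= (F_1F_2)^{-1}\big( \overline{f}_1 \overline{F_1(f_2)} \big)\, (F_1F_2)^{-1} \\
&=& F_2^{-1}F_1^{-1}(\overline{f}_1)\, F_2^{-1}(\overline{f}_2)\, F_2^{-1}F_1^{-1} \\
&=& F_2^{-1}(\overline{f}_2)F_2^{-1}\, F_1^{-1}(\overline{f}_1)F_1^{-1}
= (f_2F_2)^*(f_1F_1)^* .
\end{eqnarray*}
```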
If we are given a vector bundle $S$ with a metric over $M$ and a unitary connection $\nabla$ we get a $*$-representation $\varphi_\nabla$ of $ {\cal F} \ltimes C^\infty_c (M) $ on $L^2(M,\Omega^{\frac12} \otimes S) $ via
$$\varphi_\nabla (fe^X )=fe^X_\nabla .$$
Therefore unitary connections give rise to representations of $ {\cal F} \ltimes C^\infty_c (M) $. On the other hand a representation of $ {\cal F} \ltimes C^\infty_c (M) $ will in general not have much to do with unitary connections; the reason being that if $e^{X_1}\not= e^{X_2}$, but the two flows coincide on $\hbox{supp}f$, then $e^{X_1} f \not= e^{X_2} f$ in ${\cal F} \ltimes C^\infty_c (M)$, but $e^{X_1}_\nabla f = e^{X_2}_\nabla f$ for all connections.
We thus need to add extra relations to the cross product.
Let $A$ be a subset of $M$. Let $F_1,F_2 \in {\cal F}$. We will consider the $F_1$ and $F_2$ restricted to $A$ as maps
$$F_1,F_2 : A\times [0,1] \to M . $$
We will say that $F_1$ is locally over $A$ a reparametrization of $F_2$ if for each $a\in A$ there exists a monotonically increasing piecewise smooth bijection
$$\varphi_a :[0,1] \to [0,1] $$
such that
$$ F_1 (a,t)=F_2(a,\varphi_a (t)) . $$
\begin{definition}
Let $I$ be the subset of ${\cal F} \ltimes C^\infty_c (M)$ given by
$$\{F_1f-F_2 f | F_1 \hbox{ is a local reparametrization of } F_2\hbox{ over \textnormal{supp}}f \} $$
\end{definition}
\begin{lemma}
$I$ is a $*$-ideal in $ {\cal F} \ltimes C^\infty_c (M) $.
\end{lemma}
\textit{Proof:} Multiplying $I$ from the right with a function $g\in C^\infty_c(M)$ preserves $I$. Multiplying from the left we get
$$g (F_1f-F_2 f) =F_1 F_1^{-1}(g)f -F_2 F_2^{-1}(g)f .$$
Since $F_1$ is a local reparametrization of $F_2$ we have $F_2^{-1}(g)f =F_1^{-1}(g)f$, and thus $g (F_1f-F_2 f) \in I$.
When $F_1$ is a local reparametrization of $F_2$, then so is $FF_1$ of $FF_2$. Hence multiplication from the left with ${\cal F}$ preserves $I$.
Right multiplication yields
$$(F_1f-F_2 f)F=(F_1F -F_2F)F^{-1}(f) , $$
and since $F_1F$ is a local reparametrization of $ F_2F$ over supp$F^{-1}(f)$ we have $IF \subset I$. \hfill $\Box$
\begin{definition}
The flow algebra ${\cal F} M$ is defined as
$$ {\cal F} \ltimes C^\infty_c (M) / I. $$
\end{definition}
Note that for a unitary connection $\nabla$ we have $\varphi_\nabla (I)=0$, and hence it descends to a $*$-representation, also denoted $\varphi_\nabla$, of ${\cal F} M$ on $L^2(M,\Omega^{\frac12}\otimes S)$.
It follows that $C^\infty_c (M)$ embeds into ${\cal F} M$ as $f\to f \cdot e^0$.
\subsection{Separable representations}
We will now study the separable $*$-representations of ${\cal F} M$, i.e. $*$-homomorphisms from ${\cal F} M$ to ${\cal B} ({\cal H} ) $, ${\cal H}$ separable.
Therefore let $\varphi : {\cal F} M \to {\cal B} ({\cal H} )$ be such a representation, and we will also assume that $\varphi$ is non-degenerate. In particular we get a $*$-representation, also denoted $\varphi$, of $C_c^\infty (M)$ on ${\cal H}$.
\begin{lemma}
Let $\varphi : C^\infty_c (M) \to {\cal B} ({\cal H} )$ be a $*$-representation. Then $\varphi$ has a unique extension to a $*$-representation
$$\tilde{\varphi} :C_0(M)\to {\cal B} ( {\cal H}).$$
\end{lemma}
\textit{Proof:} We only need to show that $\varphi$ is continuous. We first extend $\varphi$ to a unital $*$-homomorphism
$$\varphi^\sim : C^\infty_c (M)^\sim \to {\cal B} ({\cal H} ) , $$
where $C^\infty_c (M)^\sim$ denotes $C^\infty_c (M) $ with a unit added. The norm of $\varphi^\sim (f)$ is given via the spectral radius. If $\lambda \notin f(M)$ then $\lambda-f$ has an inverse in $C^\infty_c (M)^\sim$. This implies that $\lambda \notin \hbox{spec}(\varphi^\sim (f))$, and $\varphi$ is therefore continuous. \hfill $\Box$
\\
We will also denote the extension by $\varphi$. The commutant $\varphi (C(M))' $ of $\varphi (C(M))$ in ${\cal B} ({\cal H} )$ is a type $I$ von Neumann algebra. This means that there exists an orthogonal family of projections $\{ P_n \}_{n\in \{ 1, \ldots , \infty \} }$ in the center of $\varphi (C(M))' $ with $\sum P_n=\mathbf{1_{\cal H}}$ such that $P_n \varphi (C(M))' $ is of type $I_n$.
Let $F\in {\cal F}$ and $A\in \varphi (C(M))' $. Then $\varphi (F)A \varphi (F^{-1}) \in \varphi (C(M))' $ since
\begin{eqnarray*}
\lefteqn{\varphi (f) \varphi (F)A \varphi (F^{-1})} \\
& =& \varphi (F) \varphi (F^{-1}(f))A \varphi (F^{-1}) =\varphi (F) A \varphi (F^{-1}(f)) \varphi (F^{-1})= \varphi (F)A \varphi (F^{-1}) \varphi (f) .
\end{eqnarray*}
Conjugating with $\varphi (F)$ is therefore an automorphism of $\varphi (C(M))' $, and it follows that conjugating with $\varphi (F)$ preserves the type structure, and thus $\varphi (F)$ commutes with each $P_n$. This means that in studying the representations of ${\cal F} M$ we can restrict to the case of $\varphi (C(M))' $ being of type $I_n$ for a fixed $n\in \{ 1, \ldots , \infty \}$. In fact it follows from what will come that all representations are of type $I_n$ for a fixed $n$.
Since $\varphi (C(M))' $ is of type $I_n$, $\varphi (C(M))' $ is isomorphic to $L^\infty (X)\otimes {\cal B} ({\cal H} ) $, with dim${\cal H}=n$, and $X$ being some $\sigma$-finite measure space with measure $\mu$. This means that the representation $\varphi$ is given via a representation
$$\psi: C^\infty_c (M)\to {\cal B} (L^2 (X,\mu )) $$
with $\psi (C^\infty_c (M) )\subset L^\infty (X)$ and $\varphi =\psi \otimes \mathbf{1}$. Since $\varphi$ is non-degenerate, we can assume that $X=M$, $(\psi (f)\xi )(m)=f(m)\xi (m)$, and that $\mu$ is a regular Borel measure on $M$. Furthermore we can also assume that $(M,\mu)$ is finite.
All together we have
\begin{proposition}
Let $\varphi: {\cal F} M \to {\cal B} ({\cal H} )$ be a separable non-degenerate representation. There exists a finite Borel measure $\mu$ on $M$ such that $\varphi $ is unitarily equivalent to a representation $\varphi_1$ on $L^2 (M,\mu )\otimes {\cal H}_n$, dim${\cal H}_n=n$, $n=1,\ldots ,\infty$, with
$$(\varphi_1 (f)\xi\otimes \eta)(m)=f(m) \xi (m) \eta, \quad f\in C^\infty_c(M). $$
\end{proposition}
\vskip 1cm
Let $\varphi$ be a representation of the same form as $\varphi_1$ in the above proposition. We will identify $L^2 (M,\mu )\otimes {\cal H}_n$ with $L^2 (M,\mu , {\cal H}_n)$. We assume that $\mu (M)=1$. We define the measure $\mu_F$ as
$$\mu_F(A)=\mu (F (A)) , $$
for all measurable subsets $A$ of $M$.
Let $F$ be a flow and let $f\in C_0(M)$. We have
\begin{equation} \label{id}
(\varphi (F) \varphi (f) \xi )(m) = (F(f)) (m) (\varphi (F) \xi )(m) .
\end{equation}
Let $\xi$ be a vector with $\| \xi (m)\|_{2,n}=1$, where $\| \cdot \|_{2,n}$ denotes the norm on ${\cal H}_n$. We define
$$k_F(m)=\| \varphi (F) (\xi)(m) \|^2_{2,n} .$$
This is independent of $\xi$ $\mu$-almost everywhere because
\begin{eqnarray} \label{formel}
\| f\|^2_2 &=& \langle \varphi (f) \xi ,\varphi (f)\xi \rangle =\langle \varphi (F) \varphi (f) \xi , \varphi (F) \varphi (f)\xi \rangle \nonumber \\
&=& \langle \varphi (F(f)) \varphi (F) \xi , \varphi (F(f)) \varphi (F)\xi \rangle \nonumber \\
&= & \int_M |f(F^{-1}(m)) |^2\| \varphi (F) \xi (m)\|_{2,n}^2 d\mu (m)
\end{eqnarray}
for all $f\in C_0(M)$.
Since formula (\ref{formel}) holds for all bounded measurable functions we get
$$\mu_F (F^{-1}(A)) =\mu (A)=\int_M 1_A^2 d\mu = \int_M |F(1_A)|^2 k_F d\mu = \int_M 1_{F(A)} k_F d\mu , $$
thus, replacing $A$ by $F^{-1}(A)$, $\mu_{F^{-1}}=k_F \mu$, and therefore $\mu_{F^{-1}} \ll \mu $. The same holds with $F$ and $F^{-1}$ interchanged, and we get
\begin{lemma} \label{aekvi}
$\mu$ is equivalent to $\mu_F$.
\end{lemma}
With this lemma we can now prove
\begin{thm}
$\mu$ is equivalent to a measure induced by a Riemannian metric on $M$.
\end{thm}
\textit{Proof:} This follows from lemma \ref{aekvi} and the result that a quasi-invariant Borel measure on a locally compact group is equivalent to the Haar measure.
\hfill $\Box$ \\
Because of (\ref{id}) we can consider $\varphi (F)$ as a measurable family $m\to \varphi (F)(m)$ in $U(n)$, the group of unitary operators on ${\cal H}_n$, i.e.
$$(\varphi (F)\xi ) (m) =\sqrt{k_F (m)}\, \varphi (F)(m) \big( \xi (F^{-1}(m)) \big) .$$
This gives rise to the following
\begin{definition}
A measurable $U(n)$-connection, $n=1,\ldots , \infty$, is a map $\nabla$ from ${\cal F}$ to the group of measurable maps from $M$ to $U(n)$ satisfying
\begin{enumerate}
\item $\nabla (1)= 1$.
\item $\nabla (F_1 \circ F_2)(m)=\nabla (F_1) (m) \circ \nabla (F_2)(F_1^{-1}(m))$
\item If $F_1$ and $F_2$ are the same up to local reparametrization over some set $U\subset M$, then
$$ \nabla ( F_1)|_U= \nabla (F_2)|_U . $$
\end{enumerate}
\end{definition}
A measurable $U(n)$-connection $\nabla$ gives rise to a representation of ${\cal F} M$ on $L^2(M,\Omega^{\frac12}\otimes {\cal H}_n )$ via
$$ ( \varphi_\nabla (f)(\xi))(m)=f(m)\xi (m) $$
$$ \varphi_\nabla (F)(\xi)(m) = \big( (F^{-1})^* (F^{-1}(m)) \otimes \nabla (F)(m) \big)\xi (F^{-1}(m)) . $$
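To illustrate the role of condition (2) in the definition (a verification we add for the reader; it is not part of the original argument), one checks that $\varphi_\nabla$ is multiplicative on flows. Suppressing the half-density part:

```latex
% Multiplicativity of \varphi_\nabla via the cocycle condition (2):
\begin{eqnarray*}
\big( \varphi_\nabla (F_1) \varphi_\nabla (F_2) \xi \big)(m)
&=& \nabla (F_1)(m)\, \big( \varphi_\nabla (F_2)\xi \big)(F_1^{-1}(m)) \\
&=& \nabla (F_1)(m)\, \nabla (F_2)(F_1^{-1}(m))\, \xi \big( F_2^{-1}(F_1^{-1}(m)) \big) \\
&=& \nabla (F_1 \circ F_2)(m)\, \xi \big( (F_1 \circ F_2)^{-1}(m) \big)
= \big( \varphi_\nabla (F_1 \circ F_2)\xi \big)(m) .
\end{eqnarray*}
```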
With the work done so far in this section we have
\begin{thm} \label{flowrep}
Any non-degenerate separable representation of ${\cal F} M$ is unitarily equivalent to a representation of the form $\varphi_\nabla$, where $\nabla$ is a measurable $U(n)$-connection.
\end{thm}
\subsection{Equivalence of representations}
\begin{definition}
Two representations $\varphi_1, \varphi_2$ of ${\cal F} M$ on ${\cal H}_1,{\cal H}_2$ are called unitarily equivalent if there exists a unitary $U:{\cal H}_1 \to {\cal H}_2$ with
$$\varphi_1(a)=U^* \varphi_2 (a) U , \quad \hbox{ for all } a\in {\cal F} M. $$
\end{definition}
According to Theorem \ref{flowrep} any two non-degenerate separable representations are of the form $\varphi_{\nabla_1}, \varphi_{\nabla_2}$ with ${\cal H}_1=L^2(M,\Omega^{\frac12}\otimes {\cal H}_{n_1} )$, ${\cal H}_2=L^2(M,\Omega^{\frac12}\otimes {\cal H}_{n_2} )$.
If they are unitarily equivalent we have
$$(\varphi_{\nabla_1} (f)\xi )(m)=f(m)\xi (m)=(\varphi_{\nabla_2} (f)\xi )(m), $$
and therefore the unitary $U$ is given as a measurable map $m\to u(m)$, with $u(m):{\cal H}_{n_1} \to {\cal H}_{n_2}$ unitary. Consequently we have $n_1=n_2$.
This leads to the following
\begin{definition}
A measurable $U(n)$-gauge transform is a measurable map
$$ M\ni m \to u(m) \in {\cal U} ({\cal H}_n ) .$$
Two measurable $U(n)$-connections $\nabla_1$, $\nabla_2$ are called gauge equivalent if there exists a measurable $U(n)$-gauge transform $m\to u(m)$ with
$$ \nabla_1(F)(m)=u(m)\, \nabla_2(F)(m)\, u(F^{-1}(m))^*\hbox{ for all }F\in {\cal F} . $$
\end{definition}
\begin{prop}
Two representations of the form $\varphi_{\nabla_1}$, $\varphi_{\nabla_2}$, with $\nabla_1 , \nabla_2$ measurable $U(n)$-connections, are unitarily equivalent if and only if $\nabla_1 , \nabla_2$ are measurably gauge equivalent.
\end{prop}
\section{Comparison to the {{LQG}} spectrum}
In this section we will compare the setup we have so far to that of LQG, and also give some non-separable representations of ${\cal F} M$. For simplicity we will only work with piecewise analytic flows and paths. When we talk about paths we will identify $l^{-1} \circ l$ with the trivial path starting and ending at the start point of $l$. We will also identify two paths which are the same up to reparametrization.
\begin{definition}
Let $G$ be a connected Lie-group. A generalized connection is an assignment $\nabla (l)\in G$ to each piecewise analytic edge $l$, such that
$$\nabla (l_1 \circ l_2 )=\nabla (l_1) \nabla (l_2) .$$
\end{definition}
For details on generalized connections see \cite{AshtekarLewandowski1,MarolfMourao}, see also \cite{AastrupGrimstrup2}.
Let us now further assume that we have a representation of $G$ as a subgroup of $U(n)$. Note that we can in general not use a generalized connection to define a representation of ${\cal F} M$ on $L^2(M, \Omega^{\frac12} \otimes {\cal H}_n )$ as we did for a smooth or measurable connection. The problem is that $e^X_\nabla$ need not be measurable.
On the other hand, if we equip $M$ with the counting measure, we can use a generalized connection $\nabla$ to define a representation of ${\cal F} M$
on $L^2(M, {\cal H}_n)$. Here we see, however, that a measurable connection does not define a representation on $L^2(M,{\cal H}_n)$, since a measurable connection is only defined up to zero sets, and therefore not in single points.
\begin{definition}
A generalized unitary gauge transformation is a map
$$U:M\to U(n).$$
Two generalized connections $\nabla_1 $ and $\nabla_2$ are said to be unitarily gauge equivalent if for all paths $l$ we have
$$U(e(l)) \nabla_1(l)U^*(s(l)) =\nabla_2(l), $$
where $e(l)$ denotes the end point of $l$ and $s(l)$ the start point.
\end{definition}
In order to see the generalized connections as related to the spectrum of an algebra similar to the flow algebra, we will define a discrete version of it.
Let $C_d(M)$ be the algebra of functions on $M$ with finite support. We define ${\cal F}_d M$ like ${\cal F} M$ but with $C^\infty_c (M)$ replaced by $C_d (M)$. For a given point $m\in M$ we denote by $1_m$ the function with value $1$ in $m$ and zero elsewhere. This is a projection, and due to the relations defining ${\cal F}_dM$ we have $F 1_m F^{-1}=1_{F(m)} $.
Given a non-degenerate representation $\varphi :{\cal F}_dM \to {\cal B} ({\cal H} ) $ we define ${\cal H}_m=\varphi (1_m){\cal H}$. Since $\varphi (F)$ is a unitary operator, then, due to the conjugation relation with $1_m$ above, $\varphi (F) :{\cal H}_m \to {\cal H}_{F(m)}$ is a unitary operator. In particular we have
$${\cal H} =\bigoplus_{m\in M} {\cal H}_m ,$$
and all the ${\cal H}_m$'s have the same dimension, and we can therefore write the Hilbert space as
$$ {\cal H} =\bigoplus_{m\in M} {\cal H}_n =L^2(M,{\cal H}_n),$$
where $M$ is equipped with the counting measure and ${\cal H}_n$ is a Hilbert space of dimension $n$, $n$ being a cardinal number.
We thus have
\begin{proposition}
To every non-degenerate representation $\varphi$ of ${\cal F}_d M$, there exists a generalized connection $\nabla$, such that $\varphi$ is of the form
$$\varphi : {\cal F}_dM \to {\cal B} (L^2 (M,{\cal H}_n)) ,$$
with
$$(\varphi (f) \xi ) (m)=f(m)\xi (m) , $$
and
$$(\varphi (F) \xi ) ( F(m))=\nabla (F_m) \xi (m) , $$
where $F_m$ is the edge $F$ defines between $m$ and $F(m)$.
Two non-degenerate representations $\varphi_1, \varphi_2$, associated to two generalized connections $\nabla_1,\nabla_2$, are unitarily equivalent if and only if $\nabla_1 , \nabla_2$ are unitarily gauge equivalent.
\end{proposition}
\section{The spectrum of the holonomy-diffeomorphism algebra}
We will in this section restrict attention to $\mathbf{H D} (M,S,{\cal A})$, where $S$ is the trivial two-dimensional bundle, and ${\cal A}$ is the set of $SU(2)$-connections.
\subsection{Properties of the representations}
We remind the reader of the following two
\begin{definition}
A connection is called irreducible if in a given point $m$ the holonomy group in this point acts irreducibly on the fibre in $m$.
\end{definition}
\begin{definition}
A $*$-representation $\varphi :A \to {\cal B}({\cal H}) $ of a $C^*$-algebra $A$ on a Hilbert space ${\cal H}$ is called irreducible if $\varphi (A)$ acts irreducibly on ${\cal H}$.
\end{definition}
There is the following well known characterization of irreducible representations, see \cite{BratteliRobinson}:
\begin{thm}
A representation $\varphi: A\to {\cal B} ({\cal H})$ of a $C^*$-algebra on a Hilbert space is irreducible if and only if one of the following equivalent conditions is fulfilled:
\begin{enumerate}
\item $\varphi $ is irreducible.
\item The commutant $\varphi (A)'=\{ b\in {\cal B} ({\cal H})\,|\, b\varphi (a)=\varphi (a)b \hbox{ for all }a\in A\}$ is equal to ${\Bbb C} 1_{\cal H}$.
\item Every nonzero vector $\xi \in {\cal H}$ is cyclic for $\varphi (A)$, or $\varphi (A)=0$ and ${\cal H} ={\Bbb C}$.
\end{enumerate}
\end{thm}
We can now connect the two notions of irreducibility:
\begin{proposition}
A representation $\varphi_\nabla$ is irreducible if and only if $\nabla$ is irreducible.
When $\nabla$ is reducible, the representation $\varphi_\nabla$ splits into two irreducible representations, corresponding to $U(1)$ connections.
\end{proposition}
\textit{Proof:} Clearly the representations $\varphi_\nabla$ are not zero.
Let us assume that $\nabla$ is irreducible. Since the holonomy group acts irreducibly in one point it acts irreducibly in all points, and for every two points in $M$ the holonomies of paths between these two points act irreducibly between the fibres in the points. Since we have the flows as operators, and we can multiply these with compactly supported smooth functions, it follows that every nonzero vector is cyclic for $\varphi_\nabla (\mathbf{H D} (M,S,{\cal A}))$, i.e. $\varphi_\nabla $ acts irreducibly.
On the other hand, if $\nabla$ is not irreducible, we can split the bundle $S$ into two line bundles, each being invariant under the action of the holonomy groupoid. Consequently $\varphi_\nabla$ splits into two irreducible representations, each corresponding to a $U(1)$-connection.
\hfill $\Box$ \\
This motivates the following
\begin{definition}
A measurable $U(n)$-connection $\nabla$ is called irreducible if the corresponding representation $\varphi_\nabla$ is irreducible.
\end{definition}
We remind the reader of the following
\begin{definition}
Let $A$ be a $C^*$-algebra. The spectrum of $A$, $\hbox{spec}( A)$, is defined as
$$ \hbox{spec}(A) =\{ \hbox{Irreducible representations} \} / \hbox{Unitary equivalence } . $$
\end{definition}
We have
\begin{proposition} Put
$$ \mathcal{U}_1=\{ \hbox{Measurable }U(1)\hbox{-connections} \} $$
and
$$ \mathcal{U}_2=\{ \hbox{Irreducible measurable }U(2)\hbox{-connections} \} . $$
The separable part of $\hbox{spec} (\mathbf{H D} (M,S,{\cal A}))$ is contained in
$$ ( \mathcal{U}_1 \cup \mathcal{U}_2)/\{ \hbox{Measurable gauge equivalence } \} . $$
\end{proposition}
\textit{Proof:} The only statement that remains to prove is that irreducible measurable $U(n)$-connections, $n\geq 3$, do not appear in the spectrum. This follows, since representations of rank below $2$ form a closed subset in the spectrum. \hfill $\Box$ \\
\subsection{The non-separable part of the spectrum of the holonomy-diffeomorphism algebra}
In general we do not know what the non-separable part of the spectrum looks like. We have, however, the following
\begin{proposition}
Let \text{dim}$(M)>1$. The representations $\psi_\nabla $, where $\nabla$ is a generalized connection given by representing the flow-algebra on $L^2(M,{\cal H}_n)$ with the counting measure, are not contained in the spectrum of $\mathbf{H D} (M,S,{\cal A})$.
\end{proposition}
\textit{Proof:} We will show that $\psi_\nabla$ can not be bounded by the representations of the form $\varphi_{\nabla_1}: {\cal F} M \to {\cal B} (L^2 (M,\Omega^{\frac12}\otimes S)) $, where $\nabla_1$ is a smooth connection.
We choose an open subset $U$ of $M$ diffeomorphic to ${\Bbb R}^n$, $n\geq 2$.
We consider a subgroup of the flow group, which acts on $U$ like $Gl_n^+({\Bbb R})$ on ${\Bbb R}^n$.
The $Gl_n^+({\Bbb R})$-part of the representation $\psi_\nabla$ is given by representing $Gl_n^+({\Bbb R})$ on $L^2({\Bbb R}^n, c)$, $c$ being the counting measure. The subspace ${\Bbb C} 1_0$ of $L^2({\Bbb R}^n, c)$ is invariant under this representation, and therefore the trivial representation of $Gl_n^+({\Bbb R})$ is contained in this representation.
The representation $\varphi_{\nabla_1}$ is equivalent to two copies of a representation $\pi$ on $L^2({\Bbb R}^n)$, where the $Gl_n^+({\Bbb R})$-part of the representation is given by
$$ (\pi(g)(\xi))(x)=|\hbox{det}g|^{-\frac12} \xi ( xg^{-1} ), \quad g\in Gl_n^+({\Bbb R}) . $$
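As an aside we add for convenience, the normalization can be checked directly:

```latex
% Unitarity of \pi: substitute y = xg^{-1}, so that dx = |det g| dy:
$$ \| \pi (g)\xi \|^2 = |\hbox{det}g|^{-1} \int_{{\Bbb R}^n} |\xi (xg^{-1})|^2 \, dx
= \int_{{\Bbb R}^n} |\xi (y)|^2 \, dy = \| \xi \|^2 . $$
```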
The $n$-fold tensor product of $\pi$ is given by
$$ (\pi^{\otimes n}(g) (\xi))(x)= |\hbox{det}g|^{-\frac{n}{2}} \xi (xg^{-1}), \quad x\in M_n({\Bbb R}) $$
on $L^2(M_n({\Bbb R}), \lambda )$. Since the Haar measure on $Gl_n({\Bbb R}) $
is given by
$$\mu (S) =\int_S |\hbox{det}x|^{-n} dx, $$
where $S$ is a subset of $M_n({\Bbb R})$, the $n$-fold tensor product of $\pi$ is equivalent to the left regular representation of $Gl_n^+({\Bbb R})$ on $L^2(Gl_n({\Bbb R}), \mu )$. In particular $\pi$ is bounded by the left regular representation.
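That $\mu$ is indeed left invariant can be seen by a direct substitution (a standard computation which we include for the reader's convenience):

```latex
% Left invariance: substitute x = gy on M_n({\Bbb R}); left translation by g
% scales the Lebesgue measure by |det g|^n, i.e. dx = |det g|^n dy:
$$ \mu (gS) = \int_{gS} |\hbox{det} x|^{-n} dx
= \int_S |\hbox{det} (gy)|^{-n} \, |\hbox{det} g|^n \, dy
= \int_S |\hbox{det} y|^{-n} dy = \mu (S) . $$
```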
We will now show, that the trivial representation of $Gl_n^+({\Bbb R})$ is not weakly contained in the left regular representation, thereby concluding that $\psi_\nabla$ can not be bounded by the representations of the form $\varphi_{\nabla_1}$, where $\nabla_1$ is the trivial connection on ${\Bbb R}^n$.
Denote by $\lambda$ the left regular representation of $ Gl_n^+ ({\Bbb R} )$ on $L^2 (Gl_n({\Bbb R} ))$. We want to show that the trivial representation is not weakly contained in this representation when we consider $Gl_n^+({\Bbb R} )$ as a discrete group. Since $\lambda \otimes \overline{\lambda} $ is equivalent to a multiple of $\lambda$ it follows that the trivial representation is weakly contained in $\lambda$ if and only if the trivial representation is weakly contained in $\lambda \otimes \overline{\lambda} $. According to Theorem 5.1 in \cite{Bekka} this is equivalent to the trivial representation being weakly contained in $\lambda \otimes \overline{\lambda} $ when $Gl_n^+({\Bbb R} )$ is considered as a continuous group with the usual topology. This is then again equivalent to the trivial representation being weakly contained in $\lambda$ when $Gl_n^+({\Bbb R} )$ is considered as a continuous group with the usual topology. Restricting $\lambda$ to $Gl_n^+({\Bbb R} )$ we get that the trivial representation is weakly contained in the left regular representation of $Gl_n^+({\Bbb R} )$ as a continuous group. This is however in contradiction to $Gl_n^+({\Bbb R} )$ being non-amenable for $n\geq 2$, see \cite{Greenleaf}.
If $\nabla_2$ is an arbitrary smooth $SU(2)$-connection we can proceed as follows: Since the trivial representation of $Gl_n^+({\Bbb R})$ is not weakly contained in the left regular representation, there exist, according to proposition G.4.2 in \cite{Bekka2}, positive numbers $a_1,\ldots, a_k$ and elements $g_1,\ldots , g_k$ in the flow group with
$$ \| \varphi_{\nabla_1}(a_1g_1+\ldots +a_kg_k)\| <\|\psi_\nabla (a_1g_1+\ldots +a_kg_k)\| . $$
However since $a_1,\ldots ,a_k$ are positive, we have
$$ \| \varphi_{\nabla_2}(a_1g_1+\ldots +a_kg_k)\| \leq \| \varphi_{\nabla_1}(a_1g_1+\ldots +a_kg_k)\| .$$
Hence $\psi_\nabla $ can not be bounded by $\varphi_{\nabla_2}$.\hfill $\Box$
\begin{bibdiv}
\begin{biblist}
\bib{AastrupGrimstrup1}{article}{
author = {Aastrup, Johannes},
author = {Grimstrup, Jesper M{\o}ller},
title = {$C^*$-algebras of Holonomy-Diffeomorphisms \& Quantum Gravity I},
eprint = {1209.5060},
archivePrefix = {arXiv},
primaryClass = {math-ph},
}
\bib{AastrupGrimstruprew}{article}{
author = {Aastrup, Johannes},
author = {Grimstrup, Jesper M{\o}ller},
title = {Intersecting Quantum Gravity with Noncommutative
Geometry: A Review},
journal = {SIGMA},
volume = {8},
pages = {018},
doi = {10.3842/SIGMA.2012.018},
year = {2012},
eprint = {1203.6164},
archivePrefix = {arXiv},
primaryClass = {gr-qc},
}
\bib{AastrupGrimstrup2}{article}{
author={Aastrup, Johannes},
author={Grimstrup, Jesper M{\o}ller},
author={Nest, Ryszard},
title={On spectral triples in quantum gravity. II},
journal={J. Noncommut. Geom.},
volume={3},
date={2009},
number={1},
pages={47--81},
issn={1661-6952},
review={\MR{2457036 (2009h:58059)}},
doi={10.4171/JNCG/30},
}
\bib{AshtekarLewandowski1}{article}{
author={Ashtekar, Abhay},
author={Lewandowski, Jerzy},
title={Representation theory of analytic holonomy $C^*$-algebras},
conference={
title={Knots and quantum gravity},
address={Riverside, CA},
date={1993},
},
book={
series={Oxford Lecture Ser. Math. Appl.},
volume={1},
publisher={Oxford Univ. Press},
place={New York},
},
date={1994},
pages={21--61},
review={\MR{1309913 (95j:58021)}},
}
\bib{AshtekarLewandowski}{article}{
author={Ashtekar, Abhay},
author={Lewandowski, Jerzy},
title={Background independent quantum gravity: a status report},
journal={Classical Quantum Gravity},
volume={21},
date={2004},
number={15},
pages={R53--R152},
issn={0264-9381},
review={\MR{2079936 (2005g:83043)}},
doi={10.1088/0264-9381/21/15/R01},
}
\bib{Bekka}{article}{
author={Bekka, Mohammed E. B.},
title={Amenable unitary representations of locally compact groups},
journal={Invent. Math.},
volume={100},
date={1990},
number={2},
pages={383--401},
issn={0020-9910},
review={\MR{1047140 (91g:22007)}},
doi={10.1007/BF01231192},
}
\bib{Bekka2}{book}{
author={Bekka, Bachir},
author={de la Harpe, Pierre},
author={Valette, Alain},
title={Kazhdan's property (T)},
series={New Mathematical Monographs},
volume={11},
publisher={Cambridge University Press},
place={Cambridge},
date={2008},
pages={xiv+472},
isbn={978-0-521-88720-5},
review={\MR{2415834 (2009i:22001)}},
doi={10.1017/CBO9780511542749},
}
\bib{BratteliRobinson}{book}{
author={Bratteli, Ola},
author={Robinson, Derek W.},
title={Operator algebras and quantum statistical mechanics. 1},
series={Texts and Monographs in Physics},
edition={2},
note={$C^\ast$- and $W^\ast$-algebras, symmetry groups,
decomposition of states},
publisher={Springer-Verlag},
place={New York},
date={1987},
pages={xiv+505},
isbn={0-387-17093-6},
review={\MR{887100 (88d:46105)}},
}
\bib{Greenleaf}{book}{
author={Greenleaf, Frederick P.},
title={Invariant means on topological groups and their applications},
series={Van Nostrand Mathematical Studies, No. 16},
publisher={Van Nostrand Reinhold Co.},
place={New York},
date={1969},
pages={ix+113},
review={\MR{0251549 (40 \#4776)}},
}
\bib{MarolfMourao}{article}{
author={Marolf, Donald},
author={Mour{\~a}o, Jos{\'e} M.},
title={On the support of the Ashtekar-Lewandowski measure},
journal={Comm. Math. Phys.},
volume={170},
date={1995},
number={3},
pages={583--605},
issn={0010-3616},
review={\MR{1337134 (96h:58018)}},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
\label{s.intro}
According to the Belinski-Khalatnikov-Lifshitz (BKL) conjecture \cite{Belinski:1969, Belinski:1970ew}, as a space-like singularity is approached in general relativity, at a generic point spatial derivatives become negligible compared to time-like derivatives and the equations of motion at neighbouring points decouple. In this limit, the dynamics at each point become those of a Bianchi space-time, typically Bianchi VIII or IX, the Bianchi space-times with the richest dynamics.
Of course, close to a singularity quantum gravity effects are expected to become important and general relativity can no longer be trusted. But if the BKL conjecture is correct, it suggests that understanding the nature of quantum gravity effects in the Bianchi space-times may provide important insights also for generic regions of space-time close to surfaces where general relativity would predict the formation of space-like singularities. As such, a natural first step in studying quantum gravity effects is to start with the Bianchi models, and particularly the Bianchi type VIII and type IX space-times.
Loop quantum cosmology (LQC) is one approach to studying quantum gravity effects in cosmological space-times, based on a non-perturbative quantization of symmetry-reduced cosmological space-times following, as closely as possible, loop quantum gravity; for reviews see, e.g., \cite{Ashtekar:2011ni, Banerjee:2011qu}. In particular, Bianchi type I, II and IX space-times have all been studied in LQC \cite{Bojowald:2003md, Chiou:2007sp, MartinBenito:2008wx, Ashtekar:2009vc, Ashtekar:2009um, WilsonEwing:2010rh, Singh:2013ava}.
In LQC, semi-classical states (i.e., states that at late times are sharply peaked around a classical solution) are not only interesting from a physical point of view, but also have simple dynamics in the sense that quantum fluctuations do not grow significantly so long as the spatial volume of the space-time always remains much larger than the Planck volume (or in the anisotropic case, that all scale factors remain much larger than the Planck length $\ell_\pl$) \cite{Rovelli:2013zaa}. Since quantum fluctuations are small, it is reasonable to approximate $\langle \mathcal{O}^2 \rangle \approx \langle \mathcal{O} \rangle^2$ for any observable $\mathcal{O}$. The dynamics of the expectation values of observables are given by some `effective equations', that in the classical limit are identical to the equations of motion of general relativity, but also include quantum corrections that become important when the space-time curvature nears the Planck scale \cite{Ashtekar:2006wn, Taveras:2008ke}. In the cases where numerical solutions of the quantum theory have been derived, the effective equations closely track the dynamics of sharply peaked quantum states, see, e.g., \cite{Ashtekar:2006wn}.
While the effective LQC dynamics have not yet been compared with full quantum dynamics for the Bianchi space-times (although see \cite{MartinBenito:2009qu, Diener:2017lde, Pawlowski-talk} for numerical studies working, in part, towards this), so long as the three scale factors remain much larger than the Planck length, the observables in the quantum theory (i.e., the scale factors and their conjugate momenta) will be heavy degrees of freedom; in this case quantum fluctuations can safely be neglected for semi-classical states and the effective dynamics are expected to track the quantum dynamics of semi-classical states~\cite{Rovelli:2013zaa}. Numerical studies of the effective equations for Bianchi space-times have found that quantum gravity effects generate a non-singular bounce that replaces the big-bang singularity \cite{Gupt:2012vi, Corichi:2012hy, Corichi:2015ala}, and quantum gravity effects are important only for a few $t_{\rm Pl}$. Even a short time away from the bounce, the solution is extremely well approximated by classical general relativity, so the LQC bounce can be treated as an instantaneous transition between two classical solutions. As shall be shown, there are simple transformation rules describing how, in the effective LQC dynamics, the classical Bianchi solutions on either side of the bounce are related.
These transformation rules for the LQC bounce are analogous to the transition rules of the Mixmaster dynamics that describe the evolution of the vacuum Bianchi IX space-time in general relativity, and in fact provide a quantum gravity extension to them: the Mixmaster transition rules describe the classical evolution, while the LQC transition rules describe the dynamics, with quantum gravity corrections, of the Bianchi IX space-time near the Planck scale.
The vacuum Bianchi I and Bianchi IX solutions in general relativity are briefly reviewed in Sec.~\ref{s.gr}. The LQC bounce transition rules are derived for the vacuum Bianchi I space-time in Sec.~\ref{s.lqc}, and these results are extended to the vacuum Bianchi IX space-time in Sec.~\ref{s.mixmaster}.
\newpage
\section{Classical Solutions}
\label{s.gr}
The line element for the Bianchi I space-time is
\begin{equation}
{\rm d} s^2 = - N^2 {\rm d} t^2 + e^{2 \alpha_1} {\rm d} x_1^2 + e^{2 \alpha_2} {\rm d} x_2^2 + e^{2 \alpha_3} {\rm d} x_3^2,
\end{equation}
where the $\alpha_i$ are the logarithms of directional scale factors, $a_i = e^{\alpha_i}$, and are functions of $t$ only. The lapse $N$ is also a function of $t$ only.
The dynamics for the space-time can be determined from the Hamiltonian constraint of the Arnowitt-Deser-Misner formulation of general relativity \cite{Arnowitt:1962hi}. These dynamics have a particularly simple form when the lapse is chosen to be $N = V = \exp(\sum_i \alpha_i)$; in this case the Hamiltonian constraint for the vacuum Bianchi I space-time in general relativity is
\begin{equation} \label{ham-cl}
\mathcal{C}_I = \frac{\ell_o^{-3}}{32 \pi G} \Big[ \Pi_1^2 + \Pi_2^2 + \Pi_3^2 - 2 (\Pi_1 \Pi_2 + \Pi_1 \Pi_3 + \Pi_2 \Pi_3) \Big].
\end{equation}
Here the $\Pi_i(t)$ are canonically conjugate to $\alpha_i$,
\begin{equation}
\{\alpha_i, \Pi_j\} = -8 \pi G \, \delta_{ij},
\end{equation}
and classically, e.g., $\Pi_1 = a_1 a_2 \dot a_3 + a_1 \dot a_2 a_3$ with the dot denoting a derivative with respect to proper time (i.e., $\dot f = N^{-1} df/dt$). Here $\ell_o^3$ is the spatial volume with respect to the coordinates $x_i$; for non-compact space-times it is necessary to restrict integrals over the homogeneous spatial slice to a fiducial cell and then $\ell_o^3$ is the volume of the fiducial cell. For Bianchi IX, a convenient coordinate choice gives $\ell_o^3 = 16 \pi^2$.
The dynamics are given by ${\rm d}\mathcal{O}/{\rm d}\tau = \{\mathcal{O}, \mathcal{C}\}$, and for the Hamiltonian \eqref{ham-cl} all $\Pi_i$ are constants of the motion while
\begin{equation} \label{cl-alpha}
\alpha_i = \frac{\Pi_j + \Pi_k - \Pi_i}{2 \ell_o^3} \, \tau + \alpha_i^{(0)},
\end{equation}
with $i, j, k$ all different, $\alpha_i^{(0)}$ a constant of integration determined by the initial conditions, and $\tau$ the harmonic time coordinate for $N = V$.
It is convenient to express the logarithmic scale factors in terms of the mean logarithmic scale factor $\Omega = \sum_i \alpha_i / 3$ and the two shape parameters \cite{Misner:1969ae}
\begin{equation}
\beta_+ = \frac{1}{4} (\alpha_1 + \alpha_2 - 2 \alpha_3), \quad
\beta_- = \frac{1}{2\sqrt{3}} (\alpha_1 - \alpha_2).
\end{equation}
The dependence of $\Omega$ and $\beta_\pm$ on $\tau$ follows from \eqref{cl-alpha}. Clearly, the trajectory of the Bianchi I solution in the 3-dimensional $(\Omega, \beta_\pm)$ space is a straight line.
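For reference, this change of variables is easily inverted: $\alpha_{1,2} = \Omega + \tfrac{2}{3}\beta_+ \pm \sqrt{3}\,\beta_-$ and $\alpha_3 = \Omega - \tfrac{4}{3}\beta_+$. A minimal Python sketch (not part of the original analysis; the numerical values are arbitrary) verifying the round trip:

```python
import math

def to_misner(a1, a2, a3):
    """Map logarithmic scale factors alpha_i to (Omega, beta+, beta-)."""
    Omega = (a1 + a2 + a3) / 3
    bp = (a1 + a2 - 2 * a3) / 4
    bm = (a1 - a2) / (2 * math.sqrt(3))
    return Omega, bp, bm

def from_misner(Omega, bp, bm):
    """Inverse map (assumed form, easily checked by substitution)."""
    a1 = Omega + 2 * bp / 3 + math.sqrt(3) * bm
    a2 = Omega + 2 * bp / 3 - math.sqrt(3) * bm
    a3 = Omega - 4 * bp / 3
    return a1, a2, a3

# Round-trip check on arbitrary logarithmic scale factors
alphas = (0.7, -1.2, 2.5)
recovered = from_misner(*to_misner(*alphas))
assert all(abs(x - y) < 1e-12 for x, y in zip(alphas, recovered))
```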
\bigskip
For the Bianchi IX space-time, the metric (with $N=V$) is
\begin{equation}
{\rm d} s^2 = - V^2 {\rm d}\tau^2 + \sum_i e^{2 \alpha_i} (\mathring{\omega}^i)^2,
\end{equation}
with $\mathring{\omega}^i$ a 1-form satisfying ${\rm d} \mathring{\omega}^i = \tfrac{1}{2} \epsilon^i{}_{jk} \mathring{\omega}^j \wedge \mathring{\omega}^k$. The Hamiltonian constraint has an additional potential term compared to \eqref{ham-cl} due to the presence of spatial curvature: $\mathcal{C}_{IX} = \mathcal{C}_I + U(\alpha_i)$, with $\mathcal{C}_I$ the Bianchi I Hamiltonian constraint \eqref{ham-cl} and the dominant terms in $U$ are
\begin{equation}
U \sim \frac{\ell_o^3}{32 \pi G} e^{4 \Omega} \Big( e^{4 \beta_+ + 4 \sqrt{3} \beta_-} + e^{4 \beta_+ - 4 \sqrt{3} \beta_-} + e^{-8 \beta_+} \Big).
\end{equation}
There are also additional terms in the potential, but these terms, near the singularity, are negligible for generic Bianchi IX solutions \cite{Ringstrom:2000mk} (the same terms dominate the potential for generic vacuum Bianchi VIII solutions near the singularity \cite{Brehm:2016cck}). This potential has a triangular symmetry in the $\beta_\pm$ plane, and as the three walls of the potential are exponentially steep, the potential walls can be approximated by a hard wall (located where $U \sim 1/G$) that the system bounces off instantaneously in the $(\Omega, \beta_\pm)$ space. In this approximation, away from the potential walls the system follows a Bianchi I solution, and when the system bounces off one of the potential walls, it instantaneously transitions from one Bianchi I solution to another, with the new solution determined from the previous one by simple transition rules: two $\Pi_i$ remain unchanged while the third (depending on which of the three exponential walls in $U$ the system bounces off) transforms as
\begin{equation} \label{cl-trans}
\Pi_j \to \tilde \Pi_j = 2 \Pi_k + 2 \Pi_l - \Pi_j,
\end{equation}
with $j,k,l$ all different. $\Omega, \beta_\pm$ are continuous in $\tau$, though not differentiable at the transition times. For details see, e.g., \cite{Belinski:1969, Belinski:1970ew, Misner:1969ae, Montani:2007vu, Uggla:2013laa, Berger:2014tev, Wilson-Ewing:2017vju}.
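As a consistency check (a sketch, not taken from the source), the transition rule \eqref{cl-trans} leaves the quadratic form in the Bianchi I constraint \eqref{ham-cl} exactly invariant, so it maps solutions of $\mathcal{C}_I = 0$ to solutions. The momentum values below are hypothetical:

```python
def C_I(P):
    """Quadratic form in the Bianchi I Hamiltonian constraint,
    up to the overall prefactor 1/(32 pi G l_o^3)."""
    P1, P2, P3 = P
    return P1**2 + P2**2 + P3**2 - 2 * (P1*P2 + P1*P3 + P2*P3)

def kasner_transition(P, j):
    """Bounce off the potential wall that reflects Pi_j (j = 0, 1, 2):
    Pi_j -> 2 Pi_k + 2 Pi_l - Pi_j, with the other two momenta unchanged."""
    P = list(P)
    k, l = [i for i in range(3) if i != j]
    P[j] = 2 * P[k] + 2 * P[l] - P[j]
    return tuple(P)

# The quadratic form is preserved exactly, for any momenta and any wall
P = (1.3, -0.4, 2.1)
for j in range(3):
    assert abs(C_I(kasner_transition(P, j)) - C_I(P)) < 1e-12
```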
As the singularity is approached, $\Omega \to -\infty$ monotonically while $\beta_\pm$ repeatedly bounce off the walls of the triangular potential $U$. This triangle becomes larger as $\Omega$ decreases due to the $e^{4\Omega}$ prefactor in $U$, so the Mixmaster dynamics (i.e., the dynamics of the vacuum Bianchi IX space-time as it approaches the $V=0$ singularity) can be seen as a particle in the $\beta_\pm$ plane bouncing off the walls of an expanding triangular potential.
Alternately, the Mixmaster dynamics can also be viewed as a particle in the three-dimensional $(\Omega, \beta_\pm$) space with the potential walls forming a bottomless triangular pyramid. In the approach to the singularity, the system continually moves towards the singularity at $\Omega \to -\infty$, bouncing off the pyramidal walls an infinite number of times before reaching the singularity.
\section{The LQC Bounce: Bianchi I}
\label{s.lqc}
Numerical solutions of the LQC effective dynamics of the Bianchi I space-time show that a non-singular bounce occurs very rapidly, and that quantum gravity effects quickly become negligible on either side of the bounce---to an excellent approximation, the solution on either side of the bounce can be described by a classical Bianchi I solution \cite{Gupt:2012vi}. Therefore, the LQC bounce of the Bianchi I space-time can be approximated as an instantaneous transition between two classical Bianchi I solutions, much as the Mixmaster dynamics can be approximated by a sequence of Bianchi I solutions linked by instantaneous transitions.
The Hamiltonian constraint
\begin{align} \label{lqc-b1}
\mathcal{C}_I^{(LQC)} = & \, -\frac{V^2 \ell_o^{-3}}{8 \pi G \gamma^2 \Delta}
\Big[ \sin \mathcal{F}_1 \sin \mathcal{F}_2 + \sin \mathcal{F}_1 \sin \mathcal{F}_3 \nonumber \\ & \qquad \qquad \qquad \qquad
+ \sin \mathcal{F}_2 \sin \mathcal{F}_3 \Big]
\end{align}
generates the effective LQC dynamics for the vacuum Bianchi I space-time \cite{Chiou:2007sp, Ashtekar:2009vc}. Here
\begin{equation}
\mathcal{F}_i = \frac{\gamma \sqrt\Delta}{2 V} (\Pi_j + \Pi_k - \Pi_i),
\end{equation}
with $i,j,k$ all different, while $\Delta \sim \ell_\pl^2$ is the smallest non-zero eigenvalue of the area operator of loop quantum gravity, and $\gamma$ is the Barbero-Immirzi parameter. For details on the quantum theory, see \cite{Ashtekar:2009vc}.
One way to determine how the two classical solutions either side of the LQC bounce are related is to notice that the equations of motion for all of the $\Pi_i$ in the effective LQC dynamics are identical: ${\rm d} \Pi_1 / {\rm d}\tau = {\rm d} \Pi_2 / {\rm d}\tau = {\rm d} \Pi_3 / {\rm d}\tau$. This is because $\mathcal{C}_I^{(LQC)}$ depends on the $\alpha_i$ only through $V = \exp (\sum_i \alpha_i)$, and the Poisson bracket $\{\Pi_i, V\} = 8 \pi G V$ is the same for all $\Pi_i$, so
\begin{equation} \label{dpi}
\frac{{\rm d} \Pi_i}{{\rm d}\tau} = \{\Pi_i, \mathcal{C}_I^{(LQC)}\} = 8 \pi G V \, \frac{\delta \mathcal{C}_I^{(LQC)}}{\delta V}.
\end{equation}
Since the $\Pi_i$ are constant in the classical regime on either side away from the bounce, this implies the key result that during the bounce all of the $\Pi_i$ will be shifted by exactly the same amount: $\Pi_i \to \tilde\Pi_i = \Pi_i + \Delta\Pi$, with $\Delta\Pi$ given by the integral of \eqref{dpi} with respect to $\tau$ over the short period of time near the bounce that \eqref{dpi} is non-zero. A simple way to calculate the value of $\Delta\Pi$ is by noting that the three $\Pi_i$ before the bounce must satisfy the classical Hamiltonian constraint \eqref{ham-cl}, and so must the $\tilde\Pi_i = \Pi_i + \Delta\Pi$ after the bounce, away from the small bounce region where quantum gravity effects are important and cannot be neglected. Given the requirements that the $\Pi_i$ and $\Pi_i + \Delta\Pi$ both satisfy the classical constraint $\mathcal{C}_I = 0$, the only possible solutions for $\Delta\Pi$ are $\Delta\Pi=0$ (the pre-bounce solution) and
\begin{equation}
\Delta\Pi = - \frac{2}{3} (\Pi_1 + \Pi_2 + \Pi_3),
\end{equation}
for the post-bounce solution \cite{Wilson-Ewing:2017vju}. It then follows that the values of the $\Pi_i$ either side of the LQC bounce, in the regions well-approximated by a classical solution, transform as
\begin{equation} \label{pi}
\Pi_i \to \tilde\Pi_i = \Pi_i - \frac{2}{3} (\Pi_1 + \Pi_2 + \Pi_3).
\end{equation}
Note that $\sum_i \Pi_i \to \sum_i \tilde\Pi_i = -\sum_i \Pi_i$; this is a signature of the LQC bounce in the volume $V(\tau)$. The transformation rule \eqref{pi} does not depend on the bounce occurring rapidly, but since the LQC bounce is nearly instantaneous \cite{Gupt:2012vi}, it is in addition possible to approximate the exact solution for $\alpha_i(\tau)$ by a piecewise linear function.
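The rule \eqref{pi} can be checked directly: it preserves the quadratic constraint, flips the sign of $\sum_i \Pi_i$, and is an involution (two successive applications return the original momenta). A short sketch with hypothetical momenta:

```python
def C_I(P):
    """Quadratic form in the classical Bianchi I constraint."""
    P1, P2, P3 = P
    return P1**2 + P2**2 + P3**2 - 2 * (P1*P2 + P1*P3 + P2*P3)

def lqc_bounce(P):
    """LQC bounce rule: Pi_i -> Pi_i - (2/3) * sum_j Pi_j."""
    s = sum(P)
    return tuple(Pi - 2 * s / 3 for Pi in P)

P = (2.0, 3.0, 1.2)              # hypothetical pre-bounce momenta
Pt = lqc_bounce(P)
assert abs(sum(Pt) + sum(P)) < 1e-12                 # sum of momenta flips sign
assert abs(C_I(Pt) - C_I(P)) < 1e-12                 # constraint surface preserved
assert all(abs(a - b) < 1e-12
           for a, b in zip(lqc_bounce(Pt), P))       # the rule is an involution
```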
From the transition rule \eqref{pi}, it is possible to calculate how the logarithmic scale factors of the classical Bianchi I solutions approximating the LQC solution on either side of the bounce are related. Given the classical solution \eqref{cl-alpha} on one side of the bounce, on the other side the solution is
\begin{equation}
\tilde\alpha_i(\tau) = \frac{\tilde\Pi_j + \tilde\Pi_k - \tilde\Pi_i}{2 \ell_o^3} \, \tau + \tilde\alpha_i^{(0)},
\end{equation}
and from the transformation rule \eqref{pi}, it follows that
\begin{equation} \label{alpha}
\tilde \alpha_i(\tau) = \alpha_i(\tau) - \frac{\Pi_1 + \Pi_2 + \Pi_3}{3 \ell_o^3} (\tau - \tau_b),
\end{equation}
where $\tilde\alpha_i^{(0)}$ has been chosen to ensure that $\alpha_i(\tau)$ is continuous at the bounce time $\tau_b$.
From the transformation rule \eqref{alpha}, it follows that
\begin{gather} \label{lqc-tr}
\tilde\Omega(\tau) = \Omega(\tau) - \frac{\Pi_1 + \Pi_2 + \Pi_3}{3 \ell_o^3} \, (\tau - \tau_b), \\ \label{lqc-tr2}
\tilde \beta_\pm(\tau) = \beta_\pm(\tau),
\end{gather}
and their velocities also change in a simple manner:
\begin{equation} \label{vel}
\frac{{\rm d} \tilde \Omega}{{\rm d}\tau} = - \frac{{\rm d} \Omega}{{\rm d} \tau}, \qquad
\frac{{\rm d} \tilde \beta_\pm}{{\rm d}\tau} = \frac{{\rm d} \beta_\pm}{{\rm d} \tau}.
\end{equation}
The LQC bounce exactly reverses the evolution of the mean logarithmic scale factor $\Omega$, which changes from contraction to expansion with the amplitude $|{\rm d}\Omega/{\rm d}\tau|$ unchanged; on the other hand the dynamics of the shape parameters $\beta_\pm$ are entirely unaffected by the LQC bounce and continue evolving as before. Note that the momenta conjugate to $\Omega, \beta_\pm$ given by $p_\Omega = -12\pi G \, {\rm d}\Omega/{\rm d}\tau$ and $p_\pm = 12\pi G \, {\rm d}\beta_\pm/{\rm d}\tau$ (see, e.g., \cite{Montani:2007vu}) therefore transform as $p_\Omega \to \tilde p_\Omega = -p_\Omega$ and $p_\pm \to \tilde p_\pm = p_\pm$.
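The velocity transformation \eqref{vel} follows directly from \eqref{pi} and the definitions of $\Omega$ and $\beta_\pm$. A brief numerical check (with hypothetical momenta, and using the Bianchi IX value $\ell_o^3 = 16\pi^2$ for concreteness):

```python
import math

ELL_O3 = 16 * math.pi**2   # fiducial volume l_o^3 for the Bianchi IX coordinate choice

def alpha_velocities(P):
    """d(alpha_i)/dtau = (Pi_j + Pi_k - Pi_i) / (2 l_o^3) in harmonic time."""
    s = sum(P)
    return [(s - 2 * Pi) / (2 * ELL_O3) for Pi in P]

def misner_velocities(P):
    """Velocities of (Omega, beta+, beta-) for given momenta."""
    da = alpha_velocities(P)
    dOmega = sum(da) / 3
    dbp = (da[0] + da[1] - 2 * da[2]) / 4
    dbm = (da[0] - da[1]) / (2 * math.sqrt(3))
    return dOmega, dbp, dbm

P = (2.0, 3.0, 1.2)                              # hypothetical momenta
Pt = tuple(Pi - 2 * sum(P) / 3 for Pi in P)      # LQC bounce rule
dO, dbp, dbm = misner_velocities(P)
dOt, dbpt, dbmt = misner_velocities(Pt)
assert abs(dOt + dO) < 1e-12      # d(Omega)/dtau reverses sign
assert abs(dbpt - dbp) < 1e-12    # shape velocities are unchanged
assert abs(dbmt - dbm) < 1e-12
```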
Note that due to the symmetry of the Bianchi I space-time, where all directions are treated equally in both $\mathcal{C}_I$ and $\mathcal{C}_I^{(LQC)}$, the shape parameters must either (i) be unaffected by the bounce, or (ii) reverse direction after the bounce. Anything else would require the presence of a preferred direction \cite{Uggla}. It is the first possibility that occurs in LQC. Note that this simple argument implies that either possibility (i) or (ii) is realized also in any other theory that gives a bounce in a Bianchi I space-time (with a good classical limit either side of the bounce) without introducing a preferred direction.
Finally, note that in the effective LQC dynamics for the Bianchi I space-time (the absolute value of) the expansion
\begin{equation} \label{theta}
\theta = \frac{1}{NV} \frac{{\rm d} V}{{\rm d}\tau} = 3 e^{-3\Omega} \frac{{\rm d} \Omega}{{\rm d}\tau}
\end{equation}
is bounded above by the Planck scale \cite{Corichi:2009pp}, and the LQC bounce occurs when the expansion nears $\sim \ell_\pl^{-1}$ (the exact value of $\theta$ at the LQC bounce may depend on the solution), so the `potential wall' responsible for the LQC bounce is located at $\theta \sim \ell_\pl^{-1}$. Importantly, the bounce can easily happen when all scale factors satisfy $a_i \gg \ell_\pl$ and the effective dynamics remain valid. Since the expansion depends on ${\rm d}\Omega/{\rm d}\tau$ in addition to $\Omega$, for different values of ${\rm d}\Omega/{\rm d}\tau$ the LQC bounce will occur at different $\Omega$ and therefore the `potential wall' of the LQC bounce cannot be located at a universal value of $\Omega$ in the $(\Omega, \beta_\pm)$ space for all solutions. This is different to the Mixmaster dynamics of general relativity, where the potential walls form the same bottomless triangular pyramid in the $(\Omega,\beta_\pm)$ space no matter the Bianchi IX solution. Nonetheless, although the location of the LQC `potential wall' depends on both $\Omega$ and ${\rm d}\Omega/{\rm d}\tau$ (but not $\beta_\pm$), it always provides a `bottom wall' the LQC solution bounces off with simple transition rules relating the classical Bianchi I solutions before and after the bounce.
\section{Quantum Mixmaster Dynamics}
\label{s.mixmaster}
The LQC effective dynamics for the Bianchi IX space-time are generated by the Hamiltonian constraint \cite{Singh:2013ava}
\begin{equation}
\mathcal{C}_{IX}^{(LQC)} = \mathcal{C}_I^{(LQC)} + U(\alpha_i),
\end{equation}
with $\mathcal{C}_I^{(LQC)}$ the LQC Hamiltonian constraint for the Bianchi I space-time \eqref{lqc-b1}, and the potential $U(\alpha_i)$ unchanged from the classical theory. (There are some ambiguities in the quantization of Bianchi space-times with non-vanishing spatial curvature in LQC, see \cite{Singh:2013ava} for details. This effective Hamiltonian corresponds to the `K' loop quantization and neglects inverse triad effects.)
\begin{figure*}
\begin{subfigure}[t]{0.55\textwidth} \vskip -20pt
\includegraphics[width=\textwidth]{mixmaster-cl.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.33\textwidth} \vskip -5pt
\includegraphics[width=\textwidth]{mixmaster-lqc.pdf}
\end{subfigure}
\caption{ \small
A schematic depiction of the classical Mixmaster dynamics is given on the left, and of the effective LQC Mixmaster dynamics on the right. The vertical axis is $\Omega$ (with $\Omega$ increasing vertically) and the plane is spanned by $\beta_\pm$. In this example, the initial conditions are given at the blue dot, and the transitions due to one of the three $U(\beta_\pm)$ potential walls are indicated by red circles. In the classical Mixmaster dynamics, the triangular pyramid is bottomless so there are an infinite number of transitions as the singularity is approached. On the other hand, in the effective LQC Mixmaster dynamics there is now a bottom floor due to quantum gravity effects; the LQC bounce off this floor is indicated in the figure on the right by a green circle. Following the LQC bounce, with the volume $V$ now increasing, the system will continue to undergo the usual classical Kasner transitions whenever the system hits one of the spatial curvature walls.}
\label{fig}
\end{figure*}
Away from the LQC bounce, the classical dynamics are an excellent approximation to the LQC effective dynamics, and these dynamics, as reviewed in Sec.~\ref{s.gr}, are in turn well approximated by a sequence of Bianchi I solutions where the spatial curvature (and therefore the potential $U$) is negligible except during the transitions between Bianchi I solutions.
These transitions are very rapid and can be approximated as being instantaneous. Since the LQC bounce also occurs very rapidly in the effective theory (numerical simulations find $\sim t_{\rm Pl}$ for the Bianchi I space-time \cite{Gupt:2012vi}), it is also reasonable to approximate the LQC bounce as being instantaneous. In this case, absent fine-tuned initial conditions, it is reasonable to expect that: (i) LQC effects will be negligible during bounces off the potential $U$, and (ii) the potential $U$ will be negligible during the LQC bounce. Then, the rules \eqref{cl-trans} derived classically for the Mixmaster transitions off the potential walls of $U$, and those derived for the Bianchi I LQC bounce \eqref{pi}, would both remain the same for the effective LQC Mixmaster dynamics. It would be good to check the assumptions (i) and (ii) above by numerically solving, for a wide range of initial conditions, the effective LQC dynamics for the Bianchi IX space-time; this is left for future work.
Based on these two assumptions, the LQC Mixmaster dynamics can be described as a sequence of classical Bianchi I solutions bouncing off the potential walls with $\Omega$ decreasing until the LQC bounce occurs. At the LQC bounce, $\Omega$ is reflected following \eqref{lqc-tr}, while the shape parameters are entirely unaffected by the LQC bounce. The LQC bounce is expected to occur when (the absolute value of) the expansion reaches the Planck scale, $|\theta| \sim \ell_\pl^{-1}$; as can be seen in \eqref{theta}, this occurs at
\begin{equation}
e^{3\Omega} \sim 3 \, \ell_\pl \, \left| \frac{{\rm d} \Omega}{{\rm d}\tau} \right|.
\end{equation}
After the bounce, now with $\Omega$ increasing, the dynamics is approximated by another sequence of classical Bianchi I solutions, again bouncing off the potential walls of $U$ following the classical transition rules reviewed in Sec.~\ref{s.gr}. This picture holds if the spatial curvature is negligible during the LQC bounce; if $U$ cannot be neglected during the LQC bounce, more work is necessary to determine how $\Omega$ and $\beta_\pm$ transform in this case.
These LQC Mixmaster dynamics are piecewise linear in the $(\Omega,\beta_\pm)$ space, with bounces off a triangular pyramid with the three usual Mixmaster spatial curvature `upper walls' and a new quantum gravity `bottom wall'; this is depicted in Fig.~\ref{fig}.
Alternately, the LQC Mixmaster dynamics can be projected on the $\beta_\pm$ plane, where the trajectory is again piecewise linear with the system bouncing off the triangular potential walls of $U$. In a contracting space-time the potential walls will initially be moving away from the origin, but when the LQC bounce occurs the potential walls will reverse direction and move back towards the origin. During the LQC bounce, the trajectory of the system in the $\beta_\pm$ plane is unchanged as seen in \eqref{lqc-tr2}.
Finally, the transition rules for the LQC bounce in the Mixmaster model can also be expressed in terms of other variables, for example the Kasner exponents transform as $k_i \to \tilde k_i = 2/3 - k_i$, and the BKL $u$ parameter transforms as $u \to \tilde u = (u+2)/(u-1)$, see \cite{Wilson-Ewing:2017vju} for details.
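These rules can also be verified numerically: $k_i \to 2/3 - k_i$ preserves the Kasner constraints $\sum_i k_i = \sum_i k_i^2 = 1$, and applying it to the exponents in the standard BKL parameterization reproduces, up to reordering, the exponents obtained from $u \to (u+2)/(u-1)$. A sketch:

```python
def kasner_from_u(u):
    """Standard BKL parameterization of the Kasner exponents."""
    d = 1 + u + u**2
    return (-u / d, (1 + u) / d, u * (1 + u) / d)

def lqc_kasner(k):
    """LQC bounce rule for the Kasner exponents: k_i -> 2/3 - k_i."""
    return tuple(2 / 3 - ki for ki in k)

u = 2.0
kt = lqc_kasner(kasner_from_u(u))

# The Kasner constraints sum(k) = 1 and sum(k^2) = 1 are preserved
assert abs(sum(kt) - 1) < 1e-12
assert abs(sum(ki**2 for ki in kt) - 1) < 1e-12

# Up to reordering, the transformed exponents match u -> (u + 2)/(u - 1)
ut = (u + 2) / (u - 1)
assert all(abs(a - b) < 1e-12
           for a, b in zip(sorted(kt), sorted(kasner_from_u(ut))))
```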
\section{Discussion}
\label{s.disc}
For the vacuum Bianchi IX space-time, the LQC effective dynamics provides a quantum gravity extension to the Mixmaster dynamics by introducing a new type of transition that occurs when the expansion $\theta$ reaches the Planck scale. This effectively introduces a new `quantum gravity bottom' to the pyramid-shaped potential in the $(\Omega,\beta_\pm)$ space; the location of this LQC potential wall depends on $\Omega$ and ${\rm d}\Omega/{\rm d}\tau$, but not $\beta_\pm$. Alternately, the LQC bounce can be viewed in the $\beta_\pm$ plane as the reversal of the motion of the potential walls (from initially moving away from the origin to afterwards moving back towards the origin at the same speed) without having any impact on the dynamics of the shape parameters $\beta_\pm$.
The classical Mixmaster dynamics are known to be chaotic \cite{Barrow:1981sx, Chernoff:1983zz, Cornish:1996yg}, and the LQC Mixmaster dynamics are essentially the same, except with a new additional type of transition. It seems likely the LQC Bianchi IX dynamics will also be chaotic for the reason that there will be an infinite number of expansion-recollapse-contraction-bounce cycles, and each expansion-recollapse-contraction segment is identical to a portion of the classical Bianchi IX space-time dynamics. Since the LQC bounce does not reverse (or change in any way) the dynamics of $\beta_\pm$, it seems the sensitivity to initial conditions and the mixing of solutions generated during the expansion-recollapse-contraction segment will remain present and add up over consecutive cycles, in which case the dynamics can be expected to be chaotic. For more on this point, see \cite{Wilson-Ewing:2017vju}.
In earlier work, on the other hand, it was suggested that inverse volume effects could remove the chaotic behaviour from the Bianchi IX space-time \cite{Bojowald:2003xe}. However, this was based on the assumption that the Bianchi IX space-time would contract indefinitely; this assumption is violated by the occurrence of the bounce in LQC. In future work, it would be interesting to extend the results obtained here to include inverse triad effects (that were assumed to be negligible here); these are expected to become important only for a Bianchi IX space-time that has at least one scale factor reach (at some time) $\sim \ell_\pl$.
Note that the above argument indicating that the effective LQC dynamics for the Bianchi IX space-time may be chaotic is not relevant for the BKL conjecture: the BKL conjecture, if correct, will presumably only hold for a short time when the curvature is large, while the argument for chaos in the Bianchi IX space-time relies in part on an infinite sequence of transitions between Bianchi I solutions. If, as expected, the BKL behaviour only lasts a short time before the system bounces and leaves the BKL regime, the finite number of transitions between Bianchi I solutions may not be sufficient to cause chaos.
It is interesting to add a massless scalar field to the Bianchi space-times. This gives an extra contribution to the Hamiltonian constraint, and some other relations are modified (see \cite{Wilson-Ewing:2017vju} for details), but the transition rules \eqref{lqc-tr}--\eqref{vel} remain the same: they are not affected by the presence of the massless scalar field.
Finally, another possibility would be to add a cosmological constant and/or other matter fields like radiation or dust; these would be expected to affect the dynamics especially in the classical regime, away from the bounce that was the focus here. See \cite{Barrow:2017yqt} for work along these lines in a different realization of a cyclic Mixmaster space-time where the non-singular bounce is caused by a ghost field rather than quantum gravity effects; similar qualitative results (insofar as the classical regime is concerned) would likely hold in LQC.
\newpage
\noindent
{\it Acknowledgments:}
I thank Claes Uggla for very helpful discussions and comments on an earlier draft of the paper, and Marco de Cesare for his help in preparing the figures.
This work was supported in part by the Natural Science and Engineering Research Council of Canada.
\small
\raggedright
\section{Introduction}
Quantum entanglement provides a fundamental potential resource for
communication and information processing and is one of the key quantitative
notions of the intriguing field of quantum information theory and quantum
computation. A quantum superposition state decays into a classical,
statistical mixture of states through a decoherence process which is caused
by entangling interactions between the system and its environment \cite{c}.
Superposition of quantum states however, are very fragile and easily
destroyed by the decoherence processes. Such uncontrollable influences cause
noise in the communication or errors in the outcome of a computation, and
thus reduce the advantages of quantum information methods. However, in a
more realistic and practical situation, decoherence caused by an external
environment is inevitable. Therefore, influence of an external environmental
system on the entanglement cannot be ignored. Novel research has been
carried out to study quantum communication channels. Macchiavello and
Palma \cite{bb} have developed the theory of quantum channels to encompass
memory effects. In real-world applications the assumption of having
uncorrelated noise channels cannot be fully justified. However, quantum
computing in the presence of noise is possible with the use of decoherence
free subspaces \cite{k} and the quantum error correction \cite{Pres}.
Application of mathematical physics to economics has seen a recent
development in the form of quantum game theory. Two-player quantum games
have attracted a lot of interest in recent years [5-7]. A number of authors
have investigated the quantum prisoner's dilemma game [8-10]. A detailed
description on quantum game theory can be found in references [11-16]. There
have been remarkable advances in the experimental realization of quantum
games such as Prisoner's Dilemma \cite{zhu,a}. The Prisoner's Dilemma game
is a widely known example in classical game theory. The quantum version of
the Prisoner's Dilemma has been experimentally demonstrated using a nuclear
magnetic resonance (NMR) quantum computer \cite{a}. Recently, Prevedel et
al. have experimentally demonstrated the application of a measurement-based
protocol \cite{b}. They realized a quantum version of the Prisoner's Dilemma
game based on the entangled photonic cluster states. It was the first
realization of a quantum game in the context of one-way quantum computing.
Studies concerning the quantum games in the presence of decoherence and
correlated noise have produced interesting results. Chen et al. \cite{I}\
have shown that in the case of two-player Prisoner's Dilemma game, the Nash
equilibria are not changed by the effect of decoherence in a maximally
entangled case. Nawaz and Toor \cite{m} have studied quantum games under the
effect of correlated noise by taking a particular example of the
phase-damping channel. They have shown that the quantum player outperforms
the classical players for all values of the decoherence parameter $p$. They
have also shown that for maximum correlation the effects of decoherence
diminish and it behaves as a noiseless game. Recently, we have investigated
different quantum games under different noise models and found interesting
results \cite{n}. More recently, Gawron et al. \cite{QMSG} have studied the
noise effects in quantum magic squares game. They have shown that the
probability of success can be used to determine characteristics of quantum
channels. Investigation of multiplayer quantum games in a multi-qubit system
could be of much interest and significance. In the recent years, quantum
games with more than two players were investigated [24-27]. Such games can
exhibit certain forms of pure quantum equilibrium that have no analog in
classical games, or even in two-player quantum games. Recently, Cao et al.
\cite{rr} have investigated the effect of quantum noise on a multiplayer
Prisoner's Dilemma quantum game. They have shown that in a maximally
entangled case a special Nash equilibrium appears for a specific range of
the quantum noise parameter (the decoherence parameter). However, yet no
attention has been given to the multiplayer quantum games under the effect
of correlated noise, which is the main focus of this paper.
In this paper, we investigate three-player Prisoner's Dilemma quantum game
under the effect of decoherence and correlated noise in a three-qubit
system. We have considered a dephasing channel parameterized by the memory
factor $\mu $ which measures the degree of correlations. By exploiting the
initial state and measurement basis entanglement parameters, $\gamma \in
\lbrack 0,\pi /2]$ and $\delta \in \lbrack 0,\pi /2],$ we study the role of
decoherence parameter $p\in \lbrack 0,1]$ and memory parameter $\mu \in
\lbrack 0,1]$ on the three-player Prisoner's Dilemma quantum game. Here, $%
\delta =0$ means that the measurement basis is unentangled and $\delta =\pi
/2$ means that it is maximally entangled. Similarly, $\gamma =0$ means that
the game is initially unentangled and $\gamma =\pi /2$ means that it is
maximally entangled. Whereas the lower and upper limits of $p$ correspond to
a fully coherent and fully decohered system, respectively. Similarly, the
lower and upper limits of $\mu $ correspond to a memoryless and maximum
memory (degree of correlation) cases, respectively. It is seen that in
contradiction to the two-player Prisoner's Dilemma quantum game, in the
three-player game, the quantum player can outperform the classical players
for all values of the decoherence parameter $p$ for the maximum degree of
correlations (i.e. memory parameter $\mu $ $=1$). In comparison to the
two-player situation, the three-player game does not become noiseless and
quantum player still remains superior to the classical ones over the entire
range of the decoherence parameter, $p,$ in the memoryless case, i.e., $\mu =0$.
It is shown that the payoff reduction due to decoherence is controlled by
the memory parameter $\mu $ throughout the course of the game. It is also
shown that the Nash equilibrium of the game does not change under the
correlated noise in contradiction to the case of decoherence effects as
investigated by Cao et al. \cite{rr}.
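To illustrate the structure of such a channel, the sketch below writes down the Kraus operators of a dephasing channel with partial memory in the two-qubit case (the Macchiavello-Palma form; the three-qubit operators used here are constructed analogously) and checks trace preservation. The function name and parameter values are illustrative only:

```python
import numpy as np

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
sigma = {0: I2, 3: Z}   # identity and sigma_z

def kraus_correlated_dephasing(p, mu):
    """Two-qubit dephasing channel with memory mu:
    A_ij = sqrt(p_i * ((1-mu)*p_j + mu*delta_ij)) * sigma_i (x) sigma_j,
    with p_0 = 1 - p and p_3 = p. For mu = 0 the noise acting on the two
    qubits is uncorrelated; for mu = 1 it is perfectly correlated."""
    probs = {0: 1 - p, 3: p}
    ops = []
    for i in (0, 3):
        for j in (0, 3):
            w = probs[i] * ((1 - mu) * probs[j] + mu * (i == j))
            ops.append(np.sqrt(w) * np.kron(sigma[i], sigma[j]))
    return ops

# Trace preservation: sum_k A_k^dag A_k = identity, for any p and mu
for p in (0.0, 0.3, 1.0):
    for mu in (0.0, 0.5, 1.0):
        S = sum(A.conj().T @ A for A in kraus_correlated_dephasing(p, mu))
        assert np.allclose(S, np.eye(4))
```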
\section{Three-player Prisoner's Dilemma game}
Properties of the two-player quantum games have been discussed extensively [11-13, 29]; however, not much attention has been given to the multiplayer quantum games. Study of the multiplayer games may exhibit interesting
results in comparison to the two-player games. The three-player Prisoner's Dilemma is similar to the two-player situation except that Alice, Bob and a third player Charlie join the game. The three players are arrested under the suspicion of robbing a bank. Similar to the two-player case, they are
interrogated in separate cells without communicating with each other. The
two possible moves for each prisoner are, to cooperate $(C)$ or to defect $%
(D).$ The payoff table for the three-player Prisoner's Dilemma is shown in
table 1. The game is symmetric for the three players, and the strategy $D$
dominates the strategy $C$ for all of them. Since the selfish players prefer
to choose $D$ as the optimal strategy, the unique Nash equilibrium is ($%
D,D,D $) with payoffs ($1,1,1$). This is a Pareto inferior outcome, since ($%
C,C,C$) with payoffs ($3,3,3$) would be better for all the three players.
This situation is the very catch of the dilemma and is similar to the
two-player version of this game. The dilemma of this game can be resolved in
its quantum version. Du et al. [25] investigated the three-player quantum
Prisoner's Dilemma game with a certain strategic space. They found a Nash
equilibrium that can remove the dilemma in the classical game when the
game's state is maximally entangled. This particular Nash equilibrium
remains to be a Nash equilibrium even for the non-maximally entangled cases.
However, their calculation of the expected payoffs of the players employs a product measurement basis for the arbiter of the game. Here, in our model, the arbiter of the game performs the measurement in an entangled basis. In addition, we include the effect of decoherence and correlated noise in the three-player setting.
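The dominance argument above can be checked mechanically. The sketch below uses an illustrative payoff table: only the $(C,C,C)\rightarrow 3$ and $(D,D,D)\rightarrow 1$ entries are fixed by the text, and the remaining values are hypothetical but preserve the dilemma structure.

```python
# Illustrative three-player Prisoner's Dilemma payoffs (table 1 is not
# reproduced here; only (C,C,C) -> 3 and (D,D,D) -> 1 are stated in the
# text, the remaining entries are assumed for demonstration).
# Key: (own_move, number_of_cooperating_opponents) -> payoff
PAYOFF = {('C', 2): 3, ('C', 1): 2, ('C', 0): 0,
          ('D', 2): 5, ('D', 1): 4, ('D', 0): 1}

def payoffs(moves):
    """Return the payoff tuple for a strategy profile like ('D','C','D')."""
    out = []
    for i, m in enumerate(moves):
        n_coop = sum(1 for j, other in enumerate(moves)
                     if j != i and other == 'C')
        out.append(PAYOFF[(m, n_coop)])
    return tuple(out)

# D strictly dominates C: whatever the opponents do, defecting pays more.
for n in (0, 1, 2):
    assert PAYOFF[('D', n)] > PAYOFF[('C', n)]

# Hence (D,D,D) is the unique Nash equilibrium...
assert payoffs(('D', 'D', 'D')) == (1, 1, 1)
# ...although it is Pareto inferior to mutual cooperation.
assert payoffs(('C', 'C', 'C')) == (3, 3, 3)
```

Any table with the same ordering of entries reproduces the catch of the dilemma described above.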
\section{Time correlated dephasing channel}
Quantum information is encoded in qubits, and its transmission from one party to another requires a communication channel. In a realistic situation, the qubits undergo nontrivial dynamics during transmission because of their interaction with the environment. Therefore, the receiver may obtain a set of distorted qubits because of the disturbing action of the channel. Studies on quantum channels have attracted a lot of attention in recent years \cite{bb,oo}. Early work in this direction was devoted mainly to memoryless channels, for which consecutive signal transmissions through the channel are uncorrelated. In correlated channels (channels with memory), the noise is correlated between consecutive uses of the channel. We consider here the noise model
based on the time correlated dephasing channel. In the operator sum
representation, the dephasing process can be expressed as \cite{p}
\begin{equation}
\rho _{f}=\sum\limits_{i=0}^{1}A_{i}\rho _{in}A_{i}^{\dagger }
\end{equation}%
where
\begin{eqnarray}
A_{0} &=&\sqrt{1-\frac{p}{2}}I \notag \\
A_{1} &=&\sqrt{\frac{p}{2}}\sigma _{z}
\end{eqnarray}%
are the Kraus operators, $I$ is the identity operator, $\sigma _{z}$ is the Pauli matrix and $p$ is the decoherence parameter. If $N$ qubits are allowed to pass through such a channel, then equation (1) becomes \cite{Flitney2}%
\begin{equation}
\rho _{f}=\sum\limits_{k_{1},\ldots ,k_{n}=0}^{1}(A_{k_{n}}\otimes \cdots
\otimes A_{k_{1}})\rho _{in}(A_{k_{1}}^{\dagger }\otimes \cdots \otimes
A_{k_{n}}^{\dagger })
\end{equation}%
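As a numerical sanity check of equations (1) and (2), the following sketch (with an arbitrary illustrative value of $p$) verifies that the dephasing channel is trace preserving and damps the off-diagonal elements of the density matrix by a factor $(1-p)$:

```python
import numpy as np

p = 0.3  # decoherence parameter, an arbitrary illustrative value
I = np.eye(2)
sz = np.diag([1.0, -1.0])  # Pauli sigma_z

# Kraus operators of the dephasing channel, equation (2)
A0 = np.sqrt(1 - p / 2) * I
A1 = np.sqrt(p / 2) * sz

# trace preservation: A0^dag A0 + A1^dag A1 = I
assert np.allclose(A0.conj().T @ A0 + A1.conj().T @ A1, I)

# act on a generic pure-state density matrix, equation (1)
psi = np.array([np.cos(0.4), 1j * np.sin(0.4)])
rho_in = np.outer(psi, psi.conj())
rho_f = A0 @ rho_in @ A0.conj().T + A1 @ rho_in @ A1.conj().T

# populations are untouched, coherences are damped by (1 - p)
assert np.allclose(np.diag(rho_f), np.diag(rho_in))
assert np.allclose(rho_f[0, 1], (1 - p) * rho_in[0, 1])
```

The $(1-p)$ damping of the coherences is why $p=1$ corresponds to a fully decohered system.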
Now if the noise is correlated with the memory of degree $\mu ,$ then the
action of the channel on the two consecutive qubits is given by the Kraus
operators \cite{bb}%
\begin{equation}
A_{ij}=\sqrt{p_{i}[(1-\mu )p_{j}+\mu \delta _{ij}]}\sigma _{i}\otimes \sigma
_{j}
\end{equation}%
where $\sigma _{i}$ and $\sigma _{j}$ are the usual Pauli matrices, with the indices $i$ and $j$ running from $0$ to $3$, and $\mu $ is the memory parameter. The above expression means that with probability $(1-\mu )$ the noise is uncorrelated, whereas with probability $\mu $ it is correlated. Physically, the parameter $\mu $ is determined by the relaxation time of the channel as the qubits pass through it. In order to remove correlations, one can wait until the channel has relaxed to its original state before sending the next qubit. However, this may lower the rate of information transfer. The Kraus operators for the three-qubit system can be written as \cite{qqq}%
\begin{equation}
A_{ijk}=\sqrt{[(1-\mu )p_{i}+\mu \delta _{ij}][(1-\mu )p_{j}+\mu \delta
_{jk}]p_{k}}\sigma ^{i}\otimes \sigma ^{j}\otimes \sigma ^{k}
\end{equation}%
where $i,$ $j,$ $k$ are $0$ or $3.$ The memory parameter $\mu $ enters the prefactors of the Kraus operators $A_{ijk},$ which determine the probabilities of the errors $\sigma ^{i}\otimes \sigma ^{j}\otimes \sigma ^{k}.$ Recall that $(1-\mu )$ is the probability of independent errors on two consecutive qubits and $\mu $ is the probability of identical errors. The probabilities of all types of errors on the three qubits add up to unity, as expected,%
\begin{equation}
\sum\limits_{i,j,k}[(1-\mu )^{2}A_{i}A_{j}A_{k}+2\mu (1-\mu )A_{i}A_{j}+\mu
^{2}A_{i}]=1
\end{equation}%
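The normalization can also be verified numerically. The sketch below builds the operators of equation (5) for the dephasing case, assuming the single-qubit probabilities $p_{0}=1-p/2$ and $p_{3}=p/2$ from equation (2), and checks that $\sum_{ijk}A_{ijk}^{\dagger }A_{ijk}=I$ for arbitrary $p$ and $\mu $:

```python
import numpy as np
from itertools import product

def kraus_3qubit(p, mu):
    """Three-qubit correlated dephasing Kraus operators of equation (5),
    with the indices restricted to {0, 3} (identity and sigma_z)."""
    sigma = {0: np.eye(2), 3: np.diag([1.0, -1.0])}
    prob = {0: 1 - p / 2, 3: p / 2}  # single-qubit dephasing probabilities
    ops = []
    for i, j, k in product((0, 3), repeat=3):
        w = ((1 - mu) * prob[i] + mu * (i == j)) \
            * ((1 - mu) * prob[j] + mu * (j == k)) * prob[k]
        ops.append(np.sqrt(w) * np.kron(np.kron(sigma[i], sigma[j]),
                                        sigma[k]))
    return ops

# The channel is trace preserving for any p and mu:
# sum_ijk A_ijk^dag A_ijk = I (8x8)
for p, mu in [(0.3, 0.0), (0.7, 0.5), (1.0, 1.0)]:
    S = sum(A.conj().T @ A for A in kraus_3qubit(p, mu))
    assert np.allclose(S, np.eye(8))
```

Because each conditional probability factor sums to unity, the check passes for every combination of $p$ and $\mu $.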
It is necessary to consider the performance of the channel for arbitrary values of $\mu $ to reach a compromise between the various factors which determine the final rate of information transfer. Thus, in passing through the channel, any two consecutive qubits undergo random independent (uncorrelated) errors with probability $(1-\mu )$ and identical (correlated) errors with probability $\mu $. This is the case if the channel has a memory depending on its relaxation time and the qubits are streamed through it.
\section{The model}
In our model, Alice, Bob and Charlie each use individual channels to communicate with the arbiter of the game. The two uses of the channel, i.e., the first passage (from the arbiter) and the second passage (back to the arbiter), are correlated, as depicted in figure 1. We consider that the initial entangled state is prepared by the arbiter and passed on to the players through a quantum correlated dephasing channel (QCDC). On receiving the quantum state, the players apply their local operators (strategies) and return it back to the arbiter via the QCDC. Then, the arbiter performs the measurement and announces their payoffs. Let the three players Alice, Bob and Charlie be given the following initial quantum state:%
\begin{equation}
\left\vert \psi _{in}\right\rangle =\cos \frac{\gamma }{2}\left\vert
000\right\rangle +i\sin \frac{\gamma }{2}\left\vert 111\right\rangle
\end{equation}%
where $0\leq \gamma \leq \pi /2$ corresponds to the entanglement of the
initial state. The players can locally manipulate their individual qubits.
The strategies of the players can be represented by the unitary operator $%
U_{i}$ of the form \cite{n}.
\begin{equation}
U_{i}=\cos \frac{\theta _{i}}{2}R_{i}+\sin \frac{\theta _{i}}{2}P_{i}
\end{equation}%
where $i=1,$ $2$ or $3$\ and $R_{i}$, $P_{i}$ are the unitary operators
defined as
\begin{eqnarray}
R_{i}\left\vert 0\right\rangle &=&e^{i\alpha _{i}}\left\vert 0\right\rangle
,\qquad \qquad R_{i}\left\vert 1\right\rangle =e^{-i\alpha _{i}}\left\vert
1\right\rangle \notag \\
P_{i}\left\vert 0\right\rangle &=&e^{i(\frac{\pi }{2}-\beta _{i})}\left\vert
1\right\rangle ,\qquad P_{i}\left\vert 1\right\rangle =e^{i(\frac{\pi }{2}%
+\beta _{i})}\left\vert 0\right\rangle
\end{eqnarray}%
where $0\leq \theta _{i}\leq \pi ,$ and $-\pi \leq \{\alpha _{i},$ $\beta
_{i}\}\leq \pi .$ Application of the local operators of the players
transforms the initial state given in equation (7) to
\begin{equation}
\rho _{f}=(U_{1}\otimes U_{2}\otimes U_{3})\rho _{in}(U_{1}\otimes
U_{2}\otimes U_{3})^{\dagger }
\end{equation}%
where $\rho _{in}=\left\vert \psi _{in}\right\rangle \left\langle \psi
_{in}\right\vert $ is the density matrix for the quantum state. The
operators used by the arbiter to determine the payoffs for Alice, Bob and
Charlie are
\begin{eqnarray}
P^{k}
&=&\$_{000}^{k}P_{000}+\$_{001}^{k}P_{001}+\$_{110}^{k}P_{110}+%
\$_{010}^{k}P_{010} \notag \\
&&+\$_{101}^{k}P_{101}+\$_{011}^{k}P_{011}+\$_{100}^{k}P_{100}+%
\$_{111}^{k}P_{111}
\end{eqnarray}%
where $k=A$, $B$ or $C$ and
\begin{eqnarray}
P_{000} &=&\left\vert \psi _{000}\right\rangle \left\langle \psi
_{000}\right\vert ,\qquad \left\vert \psi _{000}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 000\right\rangle +i\sin \frac{\delta }{2}\left\vert
111\right\rangle \notag \\
P_{111} &=&\left\vert \psi _{111}\right\rangle \left\langle \psi
_{111}\right\vert ,\qquad \left\vert \psi _{111}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 111\right\rangle +i\sin \frac{\delta }{2}\left\vert
000\right\rangle \notag \\
P_{001} &=&\left\vert \psi _{001}\right\rangle \left\langle \psi
_{001}\right\vert ,\qquad \left\vert \psi _{001}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 001\right\rangle +i\sin \frac{\delta }{2}\left\vert
110\right\rangle \notag \\
P_{110} &=&\left\vert \psi _{110}\right\rangle \left\langle \psi
_{110}\right\vert ,\qquad \left\vert \psi _{110}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 110\right\rangle +i\sin \frac{\delta }{2}\left\vert
001\right\rangle \notag \\
P_{010} &=&\left\vert \psi _{010}\right\rangle \left\langle \psi
_{010}\right\vert ,\qquad \left\vert \psi _{010}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 010\right\rangle -i\sin \frac{\delta }{2}\left\vert
101\right\rangle \notag \\
P_{101} &=&\left\vert \psi _{101}\right\rangle \left\langle \psi
_{101}\right\vert ,\qquad \left\vert \psi _{101}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 101\right\rangle -i\sin \frac{\delta }{2}\left\vert
010\right\rangle \notag \\
P_{011} &=&\left\vert \psi _{011}\right\rangle \left\langle \psi
_{011}\right\vert ,\qquad \left\vert \psi _{011}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 011\right\rangle -i\sin \frac{\delta }{2}\left\vert
100\right\rangle \notag \\
P_{100} &=&\left\vert \psi _{100}\right\rangle \left\langle \psi
_{100}\right\vert ,\qquad \left\vert \psi _{100}\right\rangle =\cos \frac{%
\delta }{2}\left\vert 100\right\rangle -i\sin \frac{\delta }{2}\left\vert
011\right\rangle \label{mbasis}
\end{eqnarray}%
where $0\leq \delta \leq \pi /2$ and $\$_{lmn}^{k}$ are elements of the
payoff matrix as given in table 1. Since quantum mechanics is a fundamentally probabilistic theory, the strategic notion of the payoff is the expected payoff. The players, after their actions, forward their qubits to the arbiter of the game for the final projective measurement in the basis of equation (\ref{mbasis}). The arbiter of the game finally determines their payoffs (see figure 1). The payoffs for the players
can be obtained as the mean values of the payoff operators as
\begin{equation}
\$_{k}(\theta _{i},\alpha _{i},\beta _{i})=\text{Tr}(P^{k}\rho _{f})
\end{equation}%
where Tr represents the trace of the matrix. Using equations (5) to (13),
the payoffs for the three players can be obtained as
\begin{eqnarray}
&&\left. \$_{k}(\theta _{i},\alpha _{i},\beta _{i})=\right. \notag \\
&&c_{1}c_{2}c_{3}[\eta _{1}\$_{000}^{k}+\eta
_{2}\$_{111}^{k}+(\$_{000}^{k}-\$_{111}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\alpha _{1}+\alpha _{2}+\alpha _{3})] \notag \\
&&+s_{1}s_{2}s_{3}[\eta _{2}\$_{000}^{k}+\eta
_{1}\$_{111}^{k}-(\$_{000}^{k}-\$_{111}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\beta _{1}+\beta _{2}+\beta _{3})] \notag \\
&&+c_{1}c_{2}s_{3}[\eta _{1}\$_{001}^{k}+\eta
_{2}\$_{110}^{k}+(\$_{001}^{k}-\$_{110}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\alpha _{1}+\alpha _{2}-\beta _{3})] \notag \\
&&+s_{1}s_{2}c_{3}[\eta _{2}\$_{001}^{k}+\eta
_{1}\$_{110}^{k}-(\$_{001}^{k}-\$_{110}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\beta _{1}+\beta _{2}-\alpha _{3})] \notag \\
&&+s_{1}c_{2}c_{3}[\eta _{1}\$_{100}^{k}+\eta
_{2}\$_{011}^{k}+(\$_{100}^{k}-\$_{011}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\alpha _{2}+\alpha _{3}-\beta _{1})] \notag \\
&&+c_{1}s_{2}s_{3}[\eta _{2}\$_{100}^{k}+\eta
_{1}\$_{011}^{k}-(\$_{100}^{k}-\$_{011}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\beta _{2}+\beta _{3}-\alpha _{1})] \notag \\
&&+s_{1}c_{2}s_{3}[\eta _{1}\$_{101}^{k}+\eta
_{2}\$_{010}^{k}+(\$_{101}^{k}-\$_{010}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\beta _{1}+\beta _{3}-\alpha _{2})] \notag \\
&&+c_{1}s_{2}c_{3}[\eta _{2}\$_{101}^{k}+\eta
_{1}\$_{010}^{k}-(\$_{101}^{k}-\$_{010}^{k})\mu _{p}^{(1)}\mu _{p}^{(2)}\xi
\cos 2(\alpha _{1}+\alpha _{3}-\beta _{2})] \notag \\
&&+\frac{\mu _{p}^{(1)}}{8}(\cos ^{2}(\delta /2)-\sin ^{2}(\delta
/2))[\$_{000}^{k}-\$_{111}^{k}-\$_{001}^{k}+\$_{110}^{k}-\$_{010}^{k}+%
\$_{101}^{k}+\$_{011}^{k}-\$_{100}^{k}]\times \notag \\
&&\sin (\gamma )\sin (\theta _{1})\sin (\theta _{2})\sin (\theta _{3})\cos
(\alpha _{1}+\alpha _{2}+\alpha _{3}-\beta _{1}-\beta _{2}-\beta _{3})
\notag \\
&&+[[\$_{000}^{k}-\$_{111}^{k}]\sin (\delta )\sin (\theta _{1})\sin (\theta
_{2})\sin (\theta _{3})\cos (\alpha _{1}+\alpha _{2}+\alpha _{3}-\beta
_{1}-\beta _{2}-\beta _{3}) \notag \\
&&+[\$_{110}^{k}-\$_{001}^{k}]\sin (\delta )\sin (\theta _{1})\sin (\theta
_{2})\sin (\theta _{3})\cos (\alpha _{1}+\alpha _{2}-\alpha _{3}+\beta
_{1}+\beta _{2}-\beta _{3}) \notag \\
&&+[\$_{010}^{k}-\$_{101}^{k}]\sin (\delta )\sin (\theta _{1})\sin (\theta
_{2})\sin (\theta _{3})\cos (\alpha _{1}-\alpha _{2}+\alpha _{3}+\beta
_{1}-\beta _{2}+\beta _{3}) \notag \\
&&+[\$_{100}^{k}-\$_{011}^{k}]\sin (\delta )\sin (\theta _{1})\sin (\theta
_{2})\sin (\theta _{3})\cos (\alpha _{1}-\alpha _{2}-\alpha _{3}+\beta
_{1}-\beta _{2}-\beta _{3})]\times \notag \\
&&[\frac{\mu _{p}^{(2)}}{8}(\cos ^{2}(\gamma /2)-\sin ^{2}(\gamma /2))]
\end{eqnarray}%
where
\begin{eqnarray}
\mu _{p}^{(j)} &=&(1-p_{j})(1-2p_{j}+4\mu _{j}p_{j}-2\mu
_{j}^{2}p_{j}+p_{j}^{2}-2\mu _{j}p_{j}^{2}+\mu _{j}^{2}p_{j}^{2}) \notag \\
\eta _{1} &=&\cos ^{2}(\gamma /2)\cos ^{2}(\delta /2)+\sin ^{2}(\gamma
/2)\sin ^{2}(\delta /2) \notag \\
\eta _{2} &=&\sin ^{2}(\gamma /2)\cos ^{2}(\delta /2)+\sin ^{2}(\delta
/2)\cos ^{2}(\gamma /2) \notag \\
\xi &=&\frac{1}{2}\sin (\delta )\sin (\gamma ),\qquad c_{i}=\cos ^{2}\frac{%
\theta _{i}}{2},\quad s_{i}=\sin ^{2}\frac{\theta _{i}}{2}
\end{eqnarray}%
where $j=1$ or $2$. The payoffs for the three players can be found by
substituting the appropriate values for $\$_{lmn}^{k}$ into equation (14).
Elements of the classical payoff matrix for the Prisoner's Dilemma game are given in table 1. The payoff matrix under decoherence can be obtained by setting $\mu =0$, i.e., by setting $\mu _{p}^{(j)}=(1-p_{j})^{3}$ in equation (15). It is important to mention that by $p$ and $\mu $\ we mean $p_{1}=p_{2}=p$ and $\mu _{1}=\mu _{2}=\mu $\ unless otherwise specified. Our
results are consistent with Ref. [25, 27] and can be verified from equation
(14) when all the three players resort to their Nash equilibrium strategies.
It can be seen that the decoherence causes a reduction in the payoffs of the
players in the memoryless case (see equation (14)). We consider here that
Alice and Bob are restricted to play classical strategies, i.e., $\alpha
_{1}=\alpha _{2}=\beta _{1}=\beta _{2}=0$, whereas Charlie is allowed to
play the quantum strategies as well. It is shown that the quantum player outperforms the classical players for all values of the decoherence parameter $p$ over the entire range of the memory parameter $\mu $. Under these circumstances, it is seen that, in contrast to the two-player Prisoner's Dilemma quantum game, for the maximum degree of correlations the effect of decoherence survives and the game does not become noiseless. It can be seen that the memory compensates for the payoff reduction due to decoherence. Furthermore, it is shown that the memory has no effect on the
Nash equilibrium of the game. Alice's best strategy ($\alpha _{1}=\theta
_{1}=\pi /2,$ and $\beta _{1}=0)$ remains her best strategy throughout the
course of the game. This implies that the correlated noise has no effect on
the Nash equilibrium of the game.
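The building blocks of the model can be sketched numerically. The code below (a sketch, not the computation leading to equation (14)) writes the strategy operator of equations (8)--(9) as a matrix, checks its unitarity, and applies equation (10) to the initial state of equation (7); taking $\theta =\pi $ with $\alpha =\beta =0$ as the defection move is an assumption in the spirit of Eisert-type quantization schemes, not something fixed by the text above.

```python
import numpy as np

def strategy(theta, alpha, beta):
    """Strategy operator U_i of equations (8)-(9), written as a 2x2 matrix
    in the {|0>, |1>} basis (the columns are U|0> and U|1>)."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([
        [c * np.exp(1j * alpha), s * np.exp(1j * (np.pi / 2 + beta))],
        [s * np.exp(1j * (np.pi / 2 - beta)), c * np.exp(-1j * alpha)],
    ])

# every strategy is unitary, as required
for theta, alpha, beta in [(0.7, 0.2, -0.5), (np.pi / 2, np.pi / 2, 0.0)]:
    U = strategy(theta, alpha, beta)
    assert np.allclose(U.conj().T @ U, np.eye(2))

# initial GHZ-type state of equation (7)
gamma = np.pi / 2  # maximal initial entanglement
psi_in = np.zeros(8, dtype=complex)
psi_in[0] = np.cos(gamma / 2)       # |000>
psi_in[7] = 1j * np.sin(gamma / 2)  # i|111>
rho_in = np.outer(psi_in, psi_in.conj())

# equation (10): the players' local moves applied to the initial state
# (theta = pi, alpha = beta = 0 is assumed here to represent defection)
U1 = U2 = U3 = strategy(np.pi, 0.0, 0.0)
U = np.kron(np.kron(U1, U2), U3)
rho_f = U @ rho_in @ U.conj().T

# rho_f stays a valid density matrix (unit trace, Hermitian)
assert np.isclose(np.trace(rho_f).real, 1.0)
assert np.allclose(rho_f, rho_f.conj().T)
```

The payoff of equation (13) would then follow as the trace of $\rho_f$ against the measurement operators of equations (11)--(12).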
\section{Results and discussions}
To analyze the effects of correlated noise (memory) and decoherence on the dynamics of the three-player Prisoner's Dilemma quantum game, we consider the restricted game scenario where Alice and Bob are allowed to play the classical strategies, i.e., $\alpha _{1}=\alpha _{2}=\beta _{1}=\beta _{2}=0$, whereas Charlie is allowed to play the quantum strategies. In figure 2, we have plotted the players' payoffs as a function of the decoherence parameter $p$ for the dephasing channel. It is seen that the quantum player outscores
the classical players for all values of the decoherence parameter $p$ for
the memoryless ($\mu =0)$ case. It is shown that even for the maximum degree of memory, i.e., $\mu =1,$ the quantum player can outperform the classical players, in contrast to the two-player Prisoner's Dilemma quantum game. In addition, the decoherence effects persist for maximum correlation and the game does not become noiseless, contrary to the two-player case. In figure 3, we have plotted payoffs of the classical and the quantum players as a function of the memory parameter $\mu $ for $p=0.3$ and $0.7$ respectively. It is seen that memory compensates for the payoff
reduction due to decoherence. In figures 4 and 5, we have plotted Alice's
payoff as a function of her strategies $\alpha _{1}$ and $\theta _{1}$ for $%
p=\mu =0.3$ and $p=\mu =0.7$ respectively. It can be seen that the memory
has no effect on the Nash equilibrium of the game. It is evident from
figures 4 and 5 that the best strategy for Alice is $\alpha _{1}=\theta
_{1}=\pi /2,$ and $\beta _{1}=0$. It remains her best strategy for the full
range of the decoherence parameter $p$ and the memory parameter $\mu ,$
throughout the course of the game. Therefore, it can be inferred that
correlated noise has no effect on the Nash equilibrium of the game. In
comparison to the investigations of Cao et al. \cite{rr}, it is shown that
the new Nash equilibrium, appearing for a specific range of the decoherence
parameter $p,$ disappears under the effect of correlated noise. It can be seen that for the entire range of the decoherence parameter $p$ and the memory parameter $\mu ,$ the Nash equilibrium of the game does not change (see figures 4 and 5). Furthermore, the payoffs of the players increase with the addition of correlated noise, as can be seen from figures 4 and 5, for the entire ranges of the decoherence and the memory parameters.
\section{Conclusions}
We present a quantization scheme for the three-player Prisoner's Dilemma
game under the effect of decoherence and correlated noise. We study the
effects of decoherence and correlated noise on the game dynamics. We
consider a restricted game situation, where Alice and Bob are restricted to play the classical strategies, i.e., $\alpha _{1}=\alpha _{2}=\beta _{1}=\beta _{2}=0$, whereas Charlie is allowed to play the quantum strategies as well. It is shown that the quantum player is always better off for all values of the decoherence parameter $p$ as the memory parameter $\mu $ increases. It is seen that for the maximum degree of correlations, the effect of decoherence does not vanish, in contrast to the two-player Prisoner's Dilemma quantum game. The three-player game does not become noiseless, in contrast to the two-player case. It is also seen that for the maximum degree of memory, i.e., $\mu =1$, the quantum player can outscore the classical players for the entire range of the decoherence parameter $p$. The payoff reduction due to decoherence is controlled by the memory parameter throughout the course of the game. Furthermore, it is shown that the memory has no effect on the Nash equilibrium of the game.
\label{Introduction}
Understanding how spiral patterns form in disk galaxies is a long--standing issue in astrophysics. Two of the most influential theories to explain the formation of spiral structure in disk galaxies are the stationary density wave theory and swing amplification. The stationary density wave theory poses that spiral arms are static density waves \citep{Lindblad, LS64}. In this scenario spiral arms are stationary and long--lived. Swing amplification proposes instead that spiral structure arises from the local amplification of perturbations in a differentially rotating disk \citep{G, JT, SC, Sellwood, E11, Elena13}. According to this theory individual spiral arms would fade away in one galactic year and should be considered transient features. Numerical experiments suggest that non--linear gravitational effects would make spiral arms fluctuate in density locally but be statistically long--lived and self--perpetuating \citep{Elena13}.
To complicate the picture, many galaxies in the nearby universe are grand--design, bisymmetric spirals. These galaxies may show evidence of a galaxy companion, suggesting that the perturbations induced by tidal interactions could produce spiral features in disks by creating localized disturbances that grow by swing amplification \citep{k, B3, Ga, Elena16, P16}. Some studies have been devoted to exploring galaxy models with bar--induced spiral structure \citep{conto} and spiral features explained by a manifold \citep{conto, A}. It is also possible that a combination of these models is needed to describe the observed spiral structure. We refer the interested reader to comprehensive reviews of different theories of spiral structure in \cite{DB14} and to \cite{Shu} for detailed explanations of the origin of spiral structure in the stationary density wave theory.
The longevity of spiral structure can be tested observationally. In fact, in the stationary density wave theory, spiral arms are density waves moving with a single constant angular pattern speed. The angular speed of stars and gas equals the pattern speed at the corotation radius. Inside the corotation radius, material rotates faster than the spiral pattern. When the gas enters the higher--density region of spiral arms, it may experience a shock which may lead to star formation \citep{R69}. Consequently, the stars born in the molecular clouds in spiral arms eventually overtake the arms and move away from the spiral patterns as they age. This drift causes an age gradient across the spiral arms. If spiral arms have a constant angular speed, then we expect to find the youngest star clusters near the arm on the trailing side, and the oldest star clusters further away from the spiral arms inside the corotation radius \citep[e.g.,][]{M09}. Outside the corotation radius, the spiral pattern moves faster than the gas and leads to the opposite age sequence.
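The magnitude of this drift can be quantified with a small sketch. For a cluster of age $t$ on a circular orbit of angular speed $\Omega (R)$, the azimuthal offset from an arm rotating with constant pattern speed $\Omega _{p}$ is $\Delta \phi =(\Omega (R)-\Omega _{p})\,t$, positive inside corotation and negative outside. The flat 200~km/s rotation curve below is illustrative and is not taken from the galaxies studied here.

```python
import math

KMS_PER_KPC_TO_RAD_PER_MYR = 1.0227e-3  # 1 km/s ~ 1.0227 pc/Myr

def azimuthal_offset_deg(v_circ_kms, radius_kpc, omega_p, age_myr):
    """Angular drift (degrees) of a star cluster away from a spiral arm
    rotating with a constant pattern speed omega_p (km/s/kpc), after
    age_myr Myr, for material on a circular orbit at radius_kpc."""
    omega_star = v_circ_kms / radius_kpc  # km/s/kpc
    dphi = (omega_star - omega_p) * KMS_PER_KPC_TO_RAD_PER_MYR * age_myr
    return math.degrees(dphi)

# Illustrative values: a pattern speed of 32 km/s/kpc (of the order of
# the values tabulated below) and an assumed flat 200 km/s rotation curve.
inside = azimuthal_offset_deg(200.0, 3.0, 32.0, 10.0)   # inside corotation
outside = azimuthal_offset_deg(200.0, 9.0, 32.0, 10.0)  # outside corotation

assert inside > 0   # clusters drift ahead of the arm
assert outside < 0  # the pattern overtakes the clusters
```

With these numbers a 10~Myr-old cluster at 3~kpc has drifted roughly $20^{\circ}$ ahead of the arm, which is the kind of age gradient the observational tests described below search for.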
\citet{DP10} carried out numerical simulations of the age distribution of star clusters in four different spiral galaxy models, including a galaxy with a fixed pattern speed, a barred galaxy, a flocculent galaxy, and an interacting galaxy. The results of their simulations show that in a spiral galaxy with a constant pattern speed or in a barred galaxy, a clear age sequence across spiral arms from younger to older stars is expected. In the case of a flocculent spiral galaxy, no age gradient can be observed in their simulation. Also in the case of an interacting galaxy, a lack of an age gradient as a function of azimuthal distance from the spiral arms is predicted. A simulation of an isolated multiple--arm barred
spiral galaxy was performed by \cite{grand}, who explored the location of star particles as a function of age around the spiral arms. Their simulation takes into account radiative cooling and star formation. They found no significant spatial offset between star particles of different ages, suggesting that spiral arms in such a spiral galaxy are not consistent with the long--lived spiral arms predicted by the static or stationary density wave theory. In a recent numerical study, \cite{D17} looked in detail at the spatial distribution of stars with different ages in an isolated grand--design spiral galaxy. They found that star clusters of different ages are all concentrated along the spiral arms without a clear age pattern.
A simple test of the stationary density wave theory consists of looking for a colour gradient from blue to red across spiral arms due to the progression of star formation. It is important to note that this method can be affected by the presence of dust. Several observational studies have tried to test the stationary density wave theory by looking for colour gradients across the spiral arms. In an early study of the ($B-V$) colours and total star formation rates in a sample of spiral galaxies with and without grand design patterns, \cite{Bruce86} found no evidence for an excess of star formation due to the presence of a spiral density wave, and explained the blue spiral arm colours as a result of a greater compression of the gas compared to the old stars, with star formation following the gas. \cite{M09} studied the colour gradients across the spiral arms of 13 SA and SAB galaxies. Ten galaxies in their sample present the expected colour gradient across their spiral arms.
A number of observational studies have used the age of stellar clusters in nearby galaxies as a tool to test the stationary density wave theory. \citet{S09} studied the spatial distribution of 1580 stellar clusters in the interacting, grand--design spiral M51a from Hubble Space Telescope (HST) $UBVI$ photometry. They found no spatial offset between the azimuthal distribution of cluster samples of different age. Their results indicate that most of the young (age < 10~Myr) and old stellar clusters (age > 30~Myr) are located at the centers of the spiral arms. \cite{k10} also mapped the age of star clusters as a function of their location in M51a using HST data and found no clear pattern in the location of star clusters with respect to their age. Both above studies suggest that spiral arms are not stationary, at least for galaxies in tidal interaction with a companion. In order to study the spatial distribution of star--forming regions, \cite{Sanchez} produced an age map of six nearby grand--design and flocculent spiral galaxies. Only two grand--design spiral galaxies in their sample presented a stellar age sequence across the spiral arms as expected from stationary density wave theory.
In galaxies where spiral arms are long--lived and stationary as predicted by the static density wave theory, one would expect to find an angular offset among star formation and gas tracers of different age within spiral arms \citep{R69}. The majority of observational studies of the spiral density wave scenario have tried to examine such an angular offset \citep{vogel, Rand}. \cite{Tam8} detected an angular offset between HI (a tracer of the cold dense gas) and 24~$\rm \mu$m emission (a tracer of obscured star formation) in a sample of 14 nearby disk galaxies. An angular offset between CO (a tracer of molecular gas) and H$\rm \alpha$ (a tracer of young stars) was detected for 5 out of 13 spiral galaxies observed by \cite{Egusa}. In another observational work, \cite{Foyle} tested the angular offset between different star formation and gas tracers including HI, $\rm H_{2}$, 24~$\rm \mu $m, UV (a tracer for unobscured young stars) and 3.6~$\rm \mu$m emission (a tracer of the underlying old stellar population) for 12 nearby disk galaxies. They detected no systematic trend between the different tracers. Similarly, \cite{F13} found no significant angular offset between H$\rm \alpha$ and UV emission in NGC~4321. \cite{L13} found a large angular offset between CO and H$\rm \alpha$ in M51a while no significant offsets have been found between HI, 21~cm, and 24~$\rm \mu$m emissions. These searches for offsets are based on the assumption that the different tracers represent a time sequence of the way a moving density wave interacts with gas and triggers star formation. \cite{Elmegreen2014} used the S$^4$G survey \citep{Sheth} and discovered embedded clusters inside the dust lanes of several galaxies with spiral waves, suggesting that star formation can sometimes start quickly.
In a recent observational study, \cite{S17} carried out a detailed investigation of a spiral arm segment in M51a. They measured the radial offset of the star clusters of different ages (< 3~Myr, and 3--10~Myr) and star formation tracers (HII regions and 24~$\rm \mu$m) from their nearest spiral arm. No obvious spatial offset between star clusters younger and older than 3 Myr was found in M51a. They also found no clear trend in the radial offset of HII regions and 24~$\mu$m. Similarly, \cite{chandar17} compared the location of star clusters with different ages (< 6~Myr, 6--30~Myr, 30--100~Myr, 100--400~Myr, and > 400 Myr) with the spiral patterns traced by molecular gas, dust, young and old stars in M51a. They found cold molecular gas and dark dust lanes to be located along the inner edge of the arms while the outer edge is defined by the old stars (traced with 3.6~$\rm \mu$m) and young star clusters. The observed sequence in the spiral arm of M51a is in agreement with the prediction from stationary density wave theory. \cite{chandar17} also measured the spatial offset between molecular gas, young (< 10~Myr) and old star clusters (100--400~Myr) in the inner (2.0--2.5~kpc) and outer (5.0--5.5~kpc) spiral arms in M51a. They found an azimuthal offset between the gas and star clusters in the inner spiral arm zone, which is consistent with the spiral density wave theory. In the outer spiral arms, the lack of such a spatial offset suggests that the outer spiral arms do not have a constant pattern speed and are not static. \cite{chandar17} found no star cluster age gradient along four gas spurs (perpendicular to the spiral arms) in M51a.
In conclusion, there have been numerous observational studies aiming to test the longevity of the spiral structure. In many cases, the conclusions show conflicting results and the nature of spiral arms is still an open question.
The main goal of this study is to test whether spiral arms in disk galaxies are static and long--lived or locally changing in density and locally transient. This work is based on the Legacy ExtraGalactic UV Survey (LEGUS)\footnote{https://legus.stsci.edu} observations obtained with HST \citep{C15}. The paper is organized as follows: The survey and the sample galaxies are described in \S~\ref{The LEGUS Galaxy Samples}. The selection of the star cluster samples is presented in \S~\ref{s3}. We investigate the spatial distribution together with clustering of the selected clusters in \S~\ref{location}. In \S~\ref{Azimutahl distribution}, we describe the results and analysis and how we measure the spatial offset of our star clusters across spiral arms. In \S~\ref{2arms} we discuss whether the two spiral arms of our target galaxies have the same nature. In \S~\ref{chandra}, we use a non--LEGUS star cluster catalogue to measure the spatial offset of star clusters in M51a and we present our conclusions in \S~\ref{Summary}.
\section{The sample galaxies}
\label{The LEGUS Galaxy Samples}
LEGUS is an HST Cycle 21 Treasury programme that has observed 50 nearby star--forming dwarf and spiral galaxies within 12~Mpc. High--resolution images of these galaxies were obtained with the UVIS channel of the Wide Field Camera Three (WFC3), supplemented with archival Advanced Camera for Surveys (ACS) imaging when available, in five broad band filters, $NUV\,(F275W)$, $U \,(F336W)$, $B \,(F438W)$, $V \,(F555W)$, and $I \,(F814W)$. The pixel scale of these observations is $ \rm 0.04^{\arcsec} \, pix^{-1}$. A description of the survey, the observations, the image processing, and the data reduction can be found in \cite{C15}.
Face--on spiral galaxies with prominent spiral structures are interesting candidates to study stationary density wave theory. Therefore, three face--on spiral galaxies, namely NGC~1566, M51a, and NGC~628 were selected from the LEGUS survey for our study. The morphology, distance, corotation radius, and the pattern speed of each galaxy are listed in Table~\ref{tab:properties of galaxies}. The UVIS and ACS footprints of the pointings (red and yellow boxes, respectively) overlaid on Digitized Sky Survey (DSS) images of the galaxies are shown in Fig.~\ref{fig:galaxies} together with their HST red, green, and blue colour composite mosaics.
\begin{table*}
\caption{Fundamental properties of our target galaxies.}
\label{tab:properties of galaxies}
\begin{tabular}{lccccccc}
\hline
\hline
Galaxy & Morphology & D [Mpc]& $\rm M_{\star} \, (M_{\sun})$ & SFR (UV) $\rm(M_{\sun} \, yr^{-1}) $ & $\rm R_{cr}$ [$\mathrm{kpc}$] & $ \rm \Omega_{p}$ [$\rm km\, s^{-1}\, \rm kpc^{-1}$] & Ref \\
\hline
NGC~1566 & SABbc &18& $\rm 2.7\times 10^{10}$ & 2.026&10.6 & 23$\pm$2 &1\\
M51a & SAc &7.6&$\rm 2.4\times 10^{10}$ & 6.88&5.5 &38$\pm$7 &2 \\
NGC~628 & SAc &9.9 &$\rm 1.1\times 10^{10}$ & 3.6& 7 &32$\pm$2& 3 \\
\hline
\hline
\\
\end{tabular}
\vspace{1ex}
\raggedright Column 1, 2: Galaxy name and morphological type as listed in the NASA Extragalactic Database (NED) \\
\raggedright Column 3: Distance\\
\raggedright Column 4: Stellar mass obtained from the extinction--corrected B--band luminosity \\
\raggedright Column 5: Star formation rate calculated from the GALEX far--UV, corrected for dust attenuation \\
\raggedright Column 6: Co--rotation radius\\
\raggedright Column 7: Pattern speed \\
\raggedright Column 8: References for the co--rotation radii and pattern speeds: 1- \cite{A04}, 2- \cite{z4}, 3- \cite{Sakhibov}\\
\end{table*}
\subsection{NGC~1566}
NGC~1566, the brightest member of the Dorado group, is a nearly face--on (inclination = $\rm 37.3^{\circ}$) barred grand--design spiral galaxy with strong spiral structure \citep{Debra2}. The distance of NGC~1566 in the literature is uncertain and varies between 5.5 and 21.3~Mpc. In this study, we revised the distance of 13.2~Mpc listed in \cite{C15} and adopted a distance of 18~Mpc \citep{sabbi}. NGC~1566 has been morphologically classified as an SABbc galaxy because of its intermediate--strength bar. It hosts a low--luminosity active galactic nucleus (AGN) \citep{Combes}. The star formation rate and stellar mass of NGC~1566 within the LEGUS field of view are $\rm 2.0 \, M_{\sun}\, yr^{-1}$ and $\rm 2.7 \times 10^{10} \, M_{\sun}$, respectively \citep{sabbi}. Two sets of spiral arms can be observed in NGC~1566. The inner arms connect with the star--forming ring at 1.7~kpc \citep{S15}, which is covered by the LEGUS field of view (see Fig.~\ref{fig:galaxies}, top panel). The outer arms beyond 100~arcseconds (corresponding to 8~kpc) are weaker and smoother than the inner arms.
\begin{figure*}
\centering
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_pointings.pdf}
\end{subfigure}%
~
~ \hspace{-1.8cm}
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_reduced.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{M51_pointings.pdf}
\end{subfigure}%
~
~ \hspace{-1.8cm}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{NGC5194.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{NGC628_pointings.pdf}
\end{subfigure}%
~
~ \hspace{-1.8cm}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{NGC628.pdf}
\end{subfigure}
\caption{Left: UVIS (red boxes) and ACS (yellow boxes) footprints on DSS images of the galaxies NGC~1566, M51, and NGC~628 (from top to bottom, respectively). The horizontal bar in the lower left corner denotes the length scale of 60 arcsec. North is up and East to the left. Right: Colour composite images for the same galaxies, constructed from LEGUS imaging in the filters $F275W$ and $F336W$ (blue), $F438W$ and $F555W$ (green), and $F814W$ (red). The central UVIS pointing (white) of M51a was taken from the observations for proposal 13340 (PI: S. Van Dyk).}
\label{fig:galaxies}
\end{figure*}
\subsection{M51a}
M51a (NGC~5194) is a nearby, almost face--on (inclination = $\rm 22^{\circ}$) spiral galaxy located at a distance of 7.6~Mpc \citep{Tonry}. It is a grand--design spiral galaxy morphologically classified as SAc with strong spiral patterns \citep{Debra2}. M51a is interacting with a companion galaxy, M51b (NGC~5195). M51a has a star formation rate and a stellar mass of $\rm 6.9 \, M_{\sun}\, yr^{-1}$ and $\rm 2.4 \times 10^{10}\, M_{\sun}$, respectively \citep{Lee, Both}.
In total, five UVIS pointings were obtained as part of the LEGUS observations: four cover the center, the north--east, and the south--west regions of M51a, and one covers the companion galaxy M51b.
\subsection{NGC~628}
NGC~628 (M74) is the largest galaxy in its group. This nearby galaxy is seen almost face--on ($\rm i = 25.2^{\circ}$) and is located at a distance of 9.9~Mpc \citep{Oliver}. It has no bulge \citep{cor} and is classified as an SAc spiral galaxy. Its star formation rate is $\rm 3.6 \, M_{\sun}\, yr^{-1}$ and its stellar mass, obtained from the extinction--corrected B--band luminosity, is $\rm 1.1 \times 10^{10}\, M_{\sun}$ \citep{Lee, Both}. NGC~628 is a multiple--arm spiral galaxy \citep{Debra} with two well--defined spiral arms. It has weaker spiral patterns than NGC~1566 and M51a \citep{Debra2}. The LEGUS UVIS observations of NGC~628 consist of one central and one east pointing that were combined into a single mosaic for the analysis.
\section{Stellar cluster samples}
\label{s3}
\subsection{Selection from star cluster catalogues}
In this section, we provide a detailed explanation of the process adopted to select star cluster candidates in our target galaxies. A general description of the standard data reduction of the LEGUS sample can be found in \cite{C15}. A careful and detailed description of the cluster extraction, identification, classification, and photometry is given in \cite{Angela17} and \cite{messa}. Stellar cluster candidates were extracted with SExtractor \citep{Bertin} in the five standard LEGUS filters. The resulting cluster candidate catalogues include sources with a $V$--band concentration index (CI)\footnote{the magnitude difference between apertures of radius 1 pixel and 3 pixels} larger than the CI of star--like sources, which are detected in at least two filters with a photometric error $\leq$ 0.3 mag. The photometry of sources in each filter was corrected for the Galactic foreground extinction \citep{Schlafly}. In order to derive the cluster physical properties such as age, mass, and extinction, the spectral energy distribution (SED) of the clusters was fitted with Yggdrasil stellar population models \citep{Z11}. The uncertainties derived in the physical parameters of the star clusters are on average $\rm 0.1\, \rm dex$ \citep{Angela17}. For some of the LEGUS galaxies, star cluster properties were also estimated based on a Bayesian approach, using the Stochastically Lighting Up Galaxies (SLUG) code \citep{sila}. A detailed and complete explanation of the Bayesian approach can be found in \cite{krumholz}.
Each source in the stellar cluster catalogue that is brighter than $-6$ mag in the $V$--band and detected in at least four bands has been morphologically classified via visual inspection by three independent members of the LEGUS team \citep{katie15, Angela17}. The inspected clusters were divided into four morphological classes: Class~1 contains compact, symmetric, and centrally concentrated clusters; Class~2 includes compact clusters with a less symmetric light distribution; Class~3 represents less compact, multi--peaked cluster candidates with asymmetric profiles; and Class~4 consists of unwanted objects like single stars, multiple stars, or background sources. Unclassified objects were labeled as Class~0.
In addition, a machine--learning (ML) approach was tested to morphologically classify the stellar clusters in an automated fashion. A forthcoming paper (Grasha et al., in prep.) will present the ML code that was used for cluster classification in the LEGUS survey and the degree of agreement with human classification. An initial comparison between human and ML classification in M51a was already discussed by \citet{messa}.
For our analysis, we use stellar cluster properties estimated with Yggdrasil deterministic models based on the Padova stellar libraries (see \citet{Z11} for details) with solar metallicity, the Milky Way extinction curve \citep{Cardeli}, and the \cite{Kroupa} stellar initial mass function (IMF).
We selected clusters based on human visual classification for NGC~628, a combination of human and machine--learning classification for NGC~1566, and machine--learning classification only for M51a. Star clusters classified as Class~4 and Class~0 are excluded from our analysis. In total, there are 1573, 3374, and 1262 star cluster candidates classified as Class 1, 2, or 3 in NGC~1566, M51a, and NGC~628, respectively.
A detailed description of the properties of the final cluster catalogues of M51a and NGC~628 and their completeness can be found in \citet{messa} and \citet{Angela17}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{age_mass_NGC1566.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{age_mass_M51.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{age_mass_NGC628.pdf}
\end{subfigure}
\caption{Distribution of ages and masses of the star clusters (class 1, 2, and 3) in NGC~1566, M51a, and NGC~628. The colours represent different age bins: blue (the young sample), green (the intermediate--age sample), red (the old sample), and black (excluded star clusters). The number of clusters in each sample is shown in parentheses. The horizontal dotted lines in NGC~1566 show the applied mass cut of 5000 $\rm M_{\sun}$ up to the age of 100~Myr and $\rm 10^{4} \, \rm M_{\sun}$ up to the age of 200~Myr. The applied mass cut of 5000 $\rm M_{\sun}$ up to the age of 200~Myr in M51a and NGC~628 is also shown by horizontal dotted lines. The solid black lines show the 90\% completeness limit of 23.5 mag in the $V$--band in NGC~1566 and the magnitude cut of $\rm M_{V} = -6$ mag in M51a and NGC~628, respectively.}
\label {fig:age_mass}
\end{figure}
\begin{table*}
\centering
\caption{The number of star clusters in the \enquote{young}, \enquote{intermediate--age}, and \enquote{old} samples in our target galaxies.}
\label{tab:cluster sample}
\begin{tabular}{lcccc}
\hline
\hline
Galaxy & age (Myr) < 10 & 10 $ \leq $ age (Myr) < 50 & 50 $\leq $ age (Myr) $ \leq $ 200 \\
\hline
NGC 1566 & 392 & 679 &124 \\
M51a & 361 & 441 & 979 \\
NGC 628 & 77 & 111 & 302 \\
\hline
\hline
\end{tabular}
\end{table*}
\subsection {Selection of star clusters of different ages}
\label{selection of star clusters of different ages}
In this study, we use the age of star clusters in our galaxy sample as a tool to find a possible age gradient across the spiral arms predicted by the stationary density wave theory. Therefore, we group star clusters into three different cluster samples according to their ages.
The estimated physical properties of star clusters based on the Yggdrasil deterministic models are inaccurate for low--mass clusters \citep{krumholz}. A comparison between the deterministic approach based on Yggdrasil models and the Bayesian approach with SLUG models presented by \cite{krumholz} suggests that the derived cluster properties are uncertain at cluster masses below 5000 $\rm M_{\sun}$. We adopted the same mass cut--off for NGC~628 and M51a in our analysis. Using the luminosity corresponding to this mass, namely $\rm M_{V}$ = $-6$ mag ($\rm m_{V}$ = 23.4 and 23.98 mag for M51a and NGC~628, respectively), results in an age completeness limit of $\leq 200\, \rm Myr$. As shown in \citet{Angela17} and \citet{messa}, the magnitude cut at $\rm M_{V} < -6$ mag is more conservative than the magnitude limit corresponding to 90\% completeness in the recovery of sources. We have tested our results using different mass cuts as well as by removing any constraint on the limiting mass, and we have not observed any significant change in the age distributions of the clusters as a function of azimuthal distance. Thus, the results presented in \S~\ref{Azimutahl distribution} and \S~\ref{2arms} are robust against uncertainties in the determination of cluster physical properties.
NGC~1566 is the most distant galaxy within our LEGUS sample. Due to the large distance of this galaxy, the 90\% completeness limit ($\rm m_{V}$ = 23.5 mag) is significantly brighter than $\rm M_{V}$ = $-6$ mag. Therefore, in order to select star clusters in NGC~1566, we used the 90\% completeness limit and a mass cut of 5000 $\rm M_{\sun}$ for cluster ages up to 100~Myr and $\rm 10^{4}\, M_{\sun}$ for the 100--200~Myr old star clusters (see Fig.~\ref{fig:age_mass}). Applying these two criteria reduced our cluster samples from 1573 to 1195 clusters for NGC~1566, from 3374 to 1781 clusters for M51a, and from 1262 to 490 for NGC~628.
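The apparent--magnitude limits quoted above follow directly from the distance modulus, $m = M + 5\log_{10}(d/10\,\mathrm{pc})$. A minimal sketch of the conversion (the function name is illustrative), using the distances adopted in Table~\ref{tab:properties of galaxies}:

```python
import math

def apparent_mag(abs_mag, distance_mpc):
    """Apply the distance modulus m = M + 5 log10(d / 10 pc)."""
    return abs_mag + 5 * math.log10(distance_mpc * 1e6 / 10)

# M_V = -6 mag at the adopted distances of M51a (7.6 Mpc) and NGC 628 (9.9 Mpc)
print(round(apparent_mag(-6.0, 7.6), 2))   # 23.4
print(round(apparent_mag(-6.0, 9.9), 2))   # 23.98
```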
Then, we selected three cluster samples of different ages for each galaxy as follows:
\begin{description}
\item[$\bullet$] \enquote{Young} star clusters: age (Myr) < 10
\item[$\bullet$] \enquote{Intermediate--age} star clusters: 10 $\rm \leq$ age (Myr) < 50
\item[$\bullet$] \enquote{Old} star clusters: 50 $\rm \leq$ age (Myr) $\leq$ 200
\end{description}
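The selection above amounts to three boolean masks over the fitted cluster ages; a sketch with an illustrative (hypothetical) age array:

```python
import numpy as np

# hypothetical fitted cluster ages in Myr, for illustration only
ages_myr = np.array([3.0, 8.0, 12.0, 40.0, 75.0, 200.0, 350.0])

young        = ages_myr < 10
intermediate = (ages_myr >= 10) & (ages_myr < 50)
old          = (ages_myr >= 50) & (ages_myr <= 200)
excluded     = ages_myr > 200   # beyond the age completeness limit

print(young.sum(), intermediate.sum(), old.sum(), excluded.sum())  # 2 2 2 1
```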
The number of star clusters in the \enquote{young}, \enquote{intermediate--age}, and \enquote{old} samples is shown in Tab.~\ref{tab:cluster sample}.
Fig.~\ref{fig:age_mass} displays the age--mass diagram of star clusters in NGC~1566, M51a, and NGC~628. The young, the intermediate--age, and the old star cluster samples are shown in blue, green, and red colors, respectively. The excluded star clusters (due to the mass cut) are shown in black. The horizontal and vertical dotted lines show the applied mass cut of $ \rm 5000\, \rm M_{\sun}$ and its corresponding completeness limit at a stellar age of $ 200\, \rm Myr$, respectively.
\section{Spatial distribution and clustering of star clusters}
\label{location}
In Fig.~\ref{fig:clusters}, we plot the spatial distribution of star clusters of different ages in the galaxies NGC~1566, M51a, and NGC~628. The young, intermediate--age, and old stellar cluster samples are shown in blue, green, and red, respectively. In general, we observe a similar trend in our target galaxies: First, the young and the intermediate--age star clusters mostly populate the spiral arms rather than the interarm regions. This is particularly evident for NGC~1566 and M51a, which show strong and clear spiral structures in young and intermediate--age star clusters. Second, the old star clusters are less clustered and more widely spread compared to the young and intermediate--age star cluster samples.
Our findings are similar to other literature results on the spatial distribution of star clusters of different ages: \cite{D17}, using LEGUS HST data, found that in NGC~1566 the 100~Myr old star clusters clearly trace the spiral arms, while in NGC~628 star clusters older than 10~Myr show only weak spiral structure. \cite{chandar17}, using other HST data, observed that M51a shows weak spiral structure in older star clusters (>100~Myr).
\begin{figure*}
\centering
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_s1.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_s2.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_s3.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{M51_s1.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{M51_s2.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{M51_s3.pdf}
\end{subfigure}%
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{NGC628_s1.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{NGC628_s2.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{NGC628_s3.pdf}
\end{subfigure}%
\caption{The spatial distribution of star clusters of different ages in the galaxies NGC~1566, M51a, and NGC~628 (from top to bottom) superimposed on the $B$--band images. The blue, green, and red circles show the young (age (Myr) < 10), intermediate--age (10 $ \leq $ age (Myr) < 50), and old star clusters (50 $\leq $ age (Myr) $ \leq $ 200), respectively. The black outlines show the UVIS footprints. The horizontal bar in the lower left corner denotes the length of 2~kpc. North is up and East to the left.}
\label {fig:clusters}
\end{figure*}
Clustering of star clusters has been observationally investigated for a number of local star--forming galaxies \citep[e.g.,][]{Efremov,EE}. In a detailed study of clustering of the young stellar population in NGC~6503 based on the LEGUS observations, \cite{d15} found that younger stars are more clustered than older ones. \cite{katie15} investigated the spatial distribution of the star clusters in NGC~628 from the LEGUS sample. Their findings confirmed that the degree of clustering increases with decreasing age. More recently, \citet{grasha17a} studied the hierarchical clustering of young star clusters in a sample of six LEGUS galaxies. Their results suggested that the youngest star clusters are strongly clustered, that the degree of clustering quickly drops for clusters older than 20~Myr, and that galactic shear appears to set the largest size of the hierarchy in each galaxy \citep{grasha17b}.
Adopting a similar approach as \cite{katie15}, we use the two--point correlation function to test whether or not the clustering distribution of the clusters in our selected age bins shows the expected age dependence. The two--point correlation function $\rm \omega (\theta)$ is a powerful statistical tool for quantifying the probability of finding two clusters with an angular separation $\rm \theta$ against a random, non--clustered distribution \citep{peebles}. Here we use the Landy--Szalay \citep{LS} estimator, which has little sensitivity to the presence of edges and masks in the data:
\begin{equation}
\omega(\theta) = \frac{r (r-1)}{n (n-1)}\frac{DD}{RR} - \frac{(r-1)}{n}\frac{DR}{RR}+1,
\end{equation}
where $ n$ and $r$ are the total number of data and random points, respectively. $ DD$, $ RR$, and $ DR$ are the total numbers of data--data, random--random, and data--random pair counts with a separation $\rm \theta \pm d\theta$, respectively. We construct a random distribution of star clusters that has the same sky coverage and masked regions (e.g., the ACS chip gap) as the images of each galaxy.
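For catalogues of this size, the pair counts entering the estimator can be evaluated by brute force; a numpy sketch of the estimator above, using Euclidean separations as a stand--in for angular ones (function names are illustrative, not the code used in the analysis):

```python
import numpy as np

def pair_counts(a, b, bins):
    """Histogram of pair separations; auto-pairs are counted once when a is b."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1))
    if a is b:
        d = d[np.triu_indices(len(a), k=1)]  # unordered pairs, no self-pairs
    else:
        d = d.ravel()                        # full cross count (n * r pairs)
    return np.histogram(d, bins=bins)[0]

def landy_szalay(data, rand, bins):
    """DD and RR are unordered pair counts, DR is the full cross count."""
    n, r = len(data), len(rand)
    DD = pair_counts(data, data, bins)
    RR = pair_counts(rand, rand, bins)
    DR = pair_counts(data, rand, bins)
    return (r * (r - 1)) / (n * (n - 1)) * DD / RR - (r - 1) / n * DR / RR + 1
```

As a sanity check, feeding the same point set as both data and random catalogue yields $\omega = 2/n$ in every populated bin.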
Fig.~\ref{fig:two_point} displays the two--point correlation function for the star clusters in different age bins as defined for our galaxy samples. The blue, green, and red colours represent the young, intermediate--age, and old star cluster samples in each galaxy, respectively. The error bars on the two--point correlation function were estimated using a bootstrapping method with 1000 bootstrap resamples.
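The bootstrap step can be sketched generically: resample the input catalogue with replacement, recompute the statistic, and take the scatter of the resampled values as the error (the function name and defaults are illustrative):

```python
import numpy as np

def bootstrap_error(values, statistic, n_boot=1000, seed=1):
    """Standard deviation of `statistic` over bootstrap resamples of `values`."""
    rng = np.random.default_rng(seed)
    resampled = [statistic(rng.choice(values, size=len(values), replace=True))
                 for _ in range(n_boot)]
    return np.std(resampled, ddof=1)
```

For the mean of an $N$--point sample this recovers the familiar $s/\sqrt{N}$ scaling.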
The general distribution of the star cluster samples in the target galaxies shows a similar trend: Independent of the presence of spiral arms, young clusters show hierarchical structure, whilst the old star clusters show a non--clustered, smooth distribution.
\begin{figure}
\centering
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{2p_NGC1566.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{2p_M51.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{2p_NGC628.pdf}
\end{subfigure}
\caption{Two--point correlation function for the star cluster samples of different ages as a function of angular distance (arcseconds) in NGC~1566, M51a, and NGC~628. The young, intermediate--age, and old star cluster samples are shown in blue, green, and red, respectively. The error bars were computed based on a bootstrapping method. The number of star clusters in each age bin is listed in parentheses.}
\label {fig:two_point}
\end{figure}
\section{Are the spiral arms static density waves?}
\label{Azimutahl distribution}
As discussed in \S~\ref{Introduction}, the stationary density wave theory predicts that the age of stellar clusters inside the corotation radius increases with increasing distance from the spiral arms. In other words, we expect to find a shift in the locations of stellar clusters of different ages.
In order to test whether the distribution of star clusters of different ages in our target galaxies agrees with the expectations from the stationary density wave theory, we need to quantify the azimuthal offset between star clusters of different ages.
\subsection{Spiral arm ridge lines definition}
First of all, we need to locate the spiral arms of our galaxy sample. We wish to define a specific location in each spiral arm so we can measure the relative positions of the star clusters in a uniform way. We use the dust lanes for this purpose because they are narrow and well--defined on optical images.
As gas flows into the potential minima of a density wave, it gets compressed and forms dark dust lanes in the inner part of the spiral arms, where star formation is then likely to occur \citep{R69}. We have used the $B$--band images for this purpose since most of the emission is due to young OB stars and dark obscuring dust lanes can be better identified in this band.
To better define the average positions of the dust lanes, we used a Gaussian kernel (with a sigma of 10 pixels) to smooth the images, reduce the noise, and enhance the spiral structure. In the smoothed images the dust lanes are clearly visible as dark ridges inside the bright spiral arms. We defined these dark spiral arm ridge lines manually. For the remainder of this paper, we refer to the southern arm and northern arm as \enquote{Arm~1} and \enquote{Arm~2}, respectively. Fig.~\ref{fig:arms} presents the defined spiral arm ridge lines (red lines) overplotted on the smoothed $B$--band images of NGC~1566, M51a, and NGC~628.
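The smoothing step can be reproduced with a separable Gaussian convolution; a numpy--only sketch (in practice any standard image--processing routine would do), assuming the sigma of 10 pixels used above:

```python
import numpy as np

def gaussian_smooth(img, sigma=10.0):
    """Convolve an image with a normalized Gaussian, truncated at 4 sigma."""
    r = int(4 * sigma)
    x = np.arange(-r, r + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # two 1-D passes are equivalent to one 2-D Gaussian convolution
    out = np.apply_along_axis(np.convolve, 0, img, kernel, mode='same')
    out = np.apply_along_axis(np.convolve, 1, out, kernel, mode='same')
    return out
```

Because the kernel is normalized, the total flux of a source well inside the frame is preserved by the smoothing.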
\begin{figure}
\centering
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{arms_NGC1566.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{arms_M51.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{arms_NGC628.pdf}
\end{subfigure}
\caption{The location of spiral arm ridge lines is shown by red lines overplotted on the smoothed $B$--band images of NGC~1566, M51a, and NGC~628. We refer to the southern arm and northern arm as \enquote{Arm~1} and \enquote{Arm~2}, respectively. The two black dashed circles in each panel mark the onset of the bulge and the location of the co--rotation radius of the galaxies. The horizontal bar in the lower left corner denotes a length scale of 2~kpc. North is up and East to the left.}
\label {fig:arms}
\end{figure}
\subsection{Measuring azimuthal offset}
Knowing the position of star clusters and spiral arm ridge lines in our target galaxies allowed us to measure the azimuthal distance of a star cluster from its closest spiral arm, assuming that it rotates on a circular orbit.
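A sketch of this measurement, assuming the ridge line is tabulated as azimuth versus deprojected galactocentric radius (function and argument names are illustrative):

```python
import numpy as np

def azimuthal_offset(r_cluster, phi_cluster, arm_r, arm_phi):
    """Signed azimuthal distance (degrees) of a cluster from a ridge line.

    The ridge-line azimuth is interpolated at the cluster's galactocentric
    radius (arm_r must be sorted); the result is wrapped to [-180, 180).
    """
    phi_arm = np.interp(r_cluster, arm_r, arm_phi)
    return (phi_cluster - phi_arm + 180.0) % 360.0 - 180.0
```

Under this convention a positive offset places the cluster ahead of the ridge line, and a cluster can be assigned to whichever arm yields the smaller absolute offset.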
We limited our analysis to the star clusters located in the disk where spiral arms exist. The disk of a galaxy can be defined by its rotation curve. The rotational velocity increases when moving outwards from the central bulge--dominated part and becomes flat in the disk--dominated part of the galaxy. We derived a radius of 2~kpc for the bulge--dominated part of our galaxies using the rotation curves of \cite{k2000} for NGC~1566, \cite{sofi2, sofi1} for M51a, and \cite{combes} for NGC~628. Furthermore, we limited our analysis to star clusters located inside the corotation radius. If stationary density waves are the dominant mechanism driving star formation in spiral galaxies we expect to find an age gradient from younger to older clusters inside the corotation radius.
The bulge--dominated region and co--rotation radius of each galaxy are shown in Fig.~\ref{fig:arms}. The adopted corotation radii of the galaxies are listed in Tab.~\ref{tab:properties of galaxies}.
Fig.~\ref{fig:hist} (left panels) shows the normalized distribution of the azimuthal distance of star clusters in the three age bins from their closest spiral arm ridge line in NGC~1566, M51a, and NGC~628. The error bars in each sample were calculated by dividing the square root of the number of clusters in each bin by the total number of clusters. We note that an azimuthal distance of zero degrees shows the location of the spiral arm ridge lines and not the center of the arms. Positive (negative) azimuthal distributions indicate that a cluster is located in front of (behind) the spiral arm ridge lines. Blue, green, and red colours represent the young, intermediate--age, and old star cluster samples, respectively.
Fig.~\ref{fig:hist} (right panels) shows the cumulative distribution function of star clusters as a function of the azimuthal distance. In order to test whether the samples come from the same distribution, we used a two--sample Kolmogorov--Smirnov test (hereafter K--S test). Since we aim at finding the age gradient in front of the spiral arms, the K--S test was only calculated for star clusters with positive azimuthal distances. The probability that two samples are drawn from the same distribution (p--values) and the maximum difference between pairs of cumulative distributions (D) are listed in Tab.~\ref{tab3}.
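The K--S statistic $D$ is the maximum distance between the two empirical cumulative distributions; a self--contained numpy sketch (in practice a library routine such as scipy's ks\_2samp also supplies the p--value):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample K-S statistic: max distance between the empirical CDFs."""
    xs, ys = np.sort(x), np.sort(y)
    grid = np.concatenate([xs, ys])     # the sup is attained at a sample point
    cdf_x = np.searchsorted(xs, grid, side='right') / len(xs)
    cdf_y = np.searchsorted(ys, grid, side='right') / len(ys)
    return np.max(np.abs(cdf_x - cdf_y))
```

Identical samples give $D = 0$, while two completely disjoint samples give $D = 1$.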
\begin{figure*}
\centering
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{hist_NGC1566.pdf}
\end{subfigure}%
~
~ \hspace{-0.5cm}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{cum_NGC1566.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{hist_M51.pdf}
\end{subfigure}%
~
~ \hspace{-0.5cm}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{cum_M51.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{hist_NGC628.pdf}
\end{subfigure}%
~
~ \hspace{-0.5cm}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{cum_NGC628.pdf}
\end{subfigure}
\caption{The normalized distribution of azimuthal distance (in degrees) of the star cluster samples from their closest spiral arm (left panels) and the cumulative distribution function of star clusters of different ages as a function of the azimuthal distance (in degrees) in NGC~1566, M51a, and NGC~628. The young (< 10~Myr), intermediate--age (10--50~Myr), and old star cluster samples (50--200~Myr) are shown in blue, green, and red, respectively. The number of star clusters located in the disk and inside the corotation radius of each galaxy is listed in parentheses.}
\label {fig:hist}
\end{figure*}
In the case of NGC~1566 (Fig.~\ref{fig:hist}, top), we see that the young and intermediate--age star cluster samples peak close to the location of the spiral arm ridge lines (azimuthal distances of 0--5 degrees) while the old sample peaks further away from the ridge lines (azimuthal distances of 5--10 degrees). The derived p--values are lower than the test's significance level (0.05) for the null hypothesis that the two samples are drawn from the same distribution. As a consequence, our three star cluster samples are unlikely to be drawn from the same population. A clear age gradient across the spiral arms can be observed in NGC~1566, which is in agreement with the expectation from stationary density wave theory. The existence of such a pattern supports the picture of an age sequence in the models of a grand--design spiral galaxy and a barred galaxy suggested by \cite{DP10, dimit17}.
No obvious age gradient from younger to older clusters is seen in the azimuthal distributions of the star cluster samples in M51a (Fig.~\ref{fig:hist}, middle). What is remarkable here is that the older star clusters are located closer to the spiral arm ridge lines than the young and intermediate--age star clusters. The K--S test indicates that the probability that the young star cluster sample is drawn from the same distribution as the intermediate--age and old star cluster samples is more than 10\%. The derived p--value for the intermediate--age and old cluster samples is lower than the significance level of the K--S test and rejects the null hypothesis that the two samples are drawn from the same distribution. The lack of an age pattern is consistent with the age trend observed for an interacting galaxy, modeled on M51a, suggested by \cite{DP10}. Our result is compatible with a number of observational studies that have found no indication of the spatial offset expected from the stationary density wave theory in M51a \citep{S09, k10, Foyle, S17}.
There is no evident trend in the azimuthal distribution of star clusters in NGC~628 (Fig.~\ref{fig:hist}, bottom). The majority of the young star clusters tend to be located further away from the ridge lines (azimuthal distances of 20--25 degrees). The calculated p--values from the K--S test are larger than 0.05, which suggests weak evidence against the null hypothesis. As a result, the young, intermediate--age, and old star cluster samples are drawn from the same distribution. The absence of an age gradient across the spiral arms in NGC~628 is consistent with a simulated multiple--arm spiral galaxy by \cite{grand}.
\section{The origin of two spiral arms}
\label{2arms}
An observational study by \cite{Egusa17}, based on measuring azimuthal offsets between the stellar mass (from optical and near--infrared data) and gas mass distributions (from CO and HI data) in the two spiral arms of M51a, suggests that the origin of these spiral arms differs: one spiral arm obeys the stationary density wave theory while the other does not.
In another recent study of M51a, \cite{chandar17} quantified the spatial distribution of star clusters with different ages relative to different segments of the two spiral arms of M51a traced in the 3.6~$\mu$m image. They observed a similar trend for the western and eastern arms: the youngest star clusters (< 6~Myr) are found near the spiral arm segments, and the older clusters (100--400~Myr) show an extended distribution.
In this section, we test whether measuring the azimuthal offset of star cluster samples from each spiral arm individually leads to different results. We assume that a star cluster whose distance from Arm~1 is smaller than its distance from Arm~2 belongs to Arm~1 and vice versa.
Fig.~\ref{fig:age_hist} shows the normalized distribution of ages of star clusters associated with Arm~1 (shown in red) and Arm~2 (shown in blue) in each of the galaxies. No significant differences between the age distribution of star clusters belonging to the two spiral arms in our target galaxies can be observed. Also, the K--S test indicates that the age distributions of star clusters relative to Arm~1 and Arm~2 in each galaxy are drawn from the same population.
\begin{figure}
\centering
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_arm_clusters.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{M51_arm_clusters.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{NGC628_arm_clusters.pdf}
\end{subfigure}
\caption{The distribution of the ages of star clusters associated with Arm~1 (red) and Arm~2 (blue) in NGC~1566, M51a, and NGC~628. The number of star clusters relative to Arm~1 and Arm~2 is listed in parentheses.}
\label {fig:age_hist}
\end{figure}
In Fig.~\ref{fig:hist-arms} we compare the normalized azimuthal distribution of the three young, intermediate--age, and old star cluster samples relative to Arm~1 (left panels) and Arm~2 (right panels) in our target galaxies. As before, our analysis was limited to the star clusters positioned in the disk and inside the corotation radius of our target galaxies.
\begin{figure*}
\centering
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_arm1.pdf}
\end{subfigure}%
~
~ \hspace{-0.5cm}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{NGC1566_arm2.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{M51_arm1.pdf}
\end{subfigure}%
~
~ \hspace{-0.5cm}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{M51_arm2.pdf}
\end{subfigure}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{NGC628_arm1.pdf}
\end{subfigure}%
~
~ \hspace{-0.5cm}
\begin{subfigure}[t]{0.49\textwidth}
\includegraphics[width=1\linewidth]{NGC628_arm2.pdf}
\end{subfigure}
\caption{The normalized distribution of azimuthal distance (in degrees) of the star cluster samples belonging to Arm~1 (left panels) and Arm~2 (right panels) in NGC~1566, M51a, and NGC~628. Blue, green, and red colours present the young (<10~Myr), intermediate--age (10--50~Myr), and old (50--200~Myr) star cluster samples, respectively. The number of star clusters corresponding to Arm~1 and Arm~2 is listed in parentheses. The error bars in each sample were calculated by dividing the square root of the number of clusters in each bin by the total number of clusters.}
\label {fig:hist-arms}
\end{figure*}
The upper panels of Fig.~\ref{fig:hist-arms} exhibit a noticeable age gradient across both spiral arms of NGC~1566. The young star clusters are highly concentrated towards the locations of Arm~1 and Arm~2, while the older ones peak further away from the two spiral arms.
The second row of panels of Fig.~\ref{fig:hist-arms} shows the azimuthal distance of the star cluster samples across the two arms of M51a. This galaxy displays an offset in the location of young and old star clusters across Arm~1. The young star clusters peak close to Arm~1 (at azimuthal distances of 2--6 degrees) while the old ones are positioned further away (at azimuthal distances of 6--10 degrees). Even though M51a shows an age gradient across Arm~1 at first glance, the K--S test does not indicate significant differences between the young and old star cluster samples (all derived p--values are larger than the test's significance level). We do not observe any shift in the azimuthal distribution of the star cluster samples across Arm~2 in M51a.
In the case of NGC~628, no obvious age gradient across Arm~1 and Arm~2 is observed (the lower panels of Fig.~\ref{fig:hist-arms}). It is important to note that our results are inconclusive for the young star clusters associated with Arm~2 due to the small number statistics. Hence, we also explored the change in the azimuthal distribution of the star clusters by including clusters with masses < 5000 $\rm M_{\sun}$ and ages > 200 Myr. The observed differences are not significant and the general trend is the same as before.
Thus, measuring the azimuthal distance of the star clusters from the two individual spiral arms in each galaxy suggests that the two spiral arms of our target galaxies may have the same physical origin.
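The per--bin error bars quoted in the captions of Figs.~\ref{fig:age_hist} and \ref{fig:hist-arms} (square root of the bin count divided by the total number of clusters) reduce to the following computation, sketched here with illustrative counts:

```python
import math

# Illustrative per-bin cluster counts for one age sample (not our data).
counts = [5, 12, 20, 9, 4]
total = sum(counts)

# Normalized histogram and the Poisson error on each bin.
normalized = [n / total for n in counts]
errors = [math.sqrt(n) / total for n in counts]
```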
\section{Comparison with the non--LEGUS cluster catalogue of M51}
\label{chandra}
\begin{table*}
\centering
\caption{The maximum difference between pairs of cumulative distributions (D) of azimuthal distance of star clusters and the probability that two samples are drawn from the same distribution (p--values) of the two sample K--S test in NGC~1566, M51a, and NGC~628.}
\label{tab3}
\begin{tabular}{llccccc}
\hline \hline
\multirow{2}{*}{Galaxy} & \multicolumn{2}{c}{Young \& Intermediate--age} & \multicolumn{2}{c}{Young \& Old} & \multicolumn{2}{c}{Intermediate--age \& Old} \\ \cline{2-7}
& \multicolumn{1}{c}{D} & p--value & D & p--value & D & p--value \\ \hline
NGC~1566 & 0.15 & $3.78\times 10^{-3}$ & 0.31 & $2.88\times 10^{-5}$ & 0.26 & $6.19\times 10^{-5}$ \\
M51a & 0.15 & 0.10 & 0.13 & 0.10 & 0.17 & $2.4\times 10^{-3}$ \\
NGC~628 & 0.21 & 0.49 & 0.17 & 0.47 & 0.19 & 0.10 \\ \hline \hline
\end{tabular}
\end{table*}
In this section, we use the \cite{chandar16} catalogue (hereafter CH16 catalogue) to measure the azimuthal offsets of star clusters with different ages in M51a and to compare the results with our analysis based on the LEGUS catalogue. We caution that the south--eastern region of M51a is not covered by the LEGUS observations. We also investigated whether our results are biased due to the absence of star clusters from that region.
\cite{chandar16} provided a catalogue of 3816 star clusters in M51a based on HST ACS/WFC2 images obtained in the equivalents of $UBVI$ and H$\rm \alpha$ filters. \cite{messa} compared the age distributions of star clusters in common between the LEGUS and CH16 catalogues. They observed that a large number of young star clusters (age < 10~Myr) in \cite{chandar16} have a broad age range (1--100~Myr) in the LEGUS catalogue. They argued that the discrepancies in the estimated ages are due to the use of different filter combinations.
In Fig.~\ref{fig:age_mass_chandar}, we show the distribution of ages and masses of star clusters in M51a from the CH16 catalogue. In order to compare our results, we considered a mass--limited sample with masses > 5000 $\rm M_{\sun}$ and ages < 200~Myr and selected the same age bins as before: the young (<10~Myr), intermediate--age (10--50~Myr), and old (50--200~Myr) star cluster samples.
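The sample selection described above reduces to simple cuts on mass and age; a minimal sketch (the catalogue entries and field names are hypothetical):

```python
# Hypothetical catalogue rows; field names are illustrative only.
clusters = [
    {"age_myr": 5,   "mass_msun": 8000},
    {"age_myr": 30,  "mass_msun": 6000},
    {"age_myr": 120, "mass_msun": 12000},
    {"age_myr": 300, "mass_msun": 9000},   # excluded: older than 200 Myr
    {"age_myr": 8,   "mass_msun": 2000},   # excluded: below the mass cut
]

# Mass-limited sample: > 5000 Msun and < 200 Myr.
selected = [c for c in clusters if c["mass_msun"] > 5000 and c["age_myr"] < 200]

def age_bin(age):
    """Assign the same age bins used throughout: young, intermediate, old."""
    if age < 10:
        return "young"
    elif age < 50:
        return "intermediate"
    return "old"  # 50--200 Myr after the cuts above

bins = [age_bin(c["age_myr"]) for c in selected]
```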
\begin{figure}
\centering
\includegraphics[width=0.49\textwidth]{age_mass_chandar.pdf}
\caption{The distribution of ages and masses of the 3816 star clusters in M51a, based on the CH16 catalogue. The young (<10~Myr), intermediate--age (10--50~Myr), and old (50--200~Myr) star clusters are shown in blue, green, and red, respectively. The black points indicate star clusters excluded due to the applied mass cut and the imposed completeness limit. The number of clusters in each sample is listed in parentheses. The horizontal and vertical dotted lines show the applied mass cut of 5000 $\rm M_{\sun}$ and the corresponding detection completeness limit of 200~Myr, respectively.}
\label {fig:age_mass_chandar}
\end{figure}
In Fig.~\ref{fig:clusters-ch16}, we plot the spatial distribution of the young, intermediate--age, and old star clusters in M51a based on the CH16 catalogue. As we can see, M51a displays a very clear and strong spiral pattern in the young star clusters. The intermediate--age star clusters tend to be located along the spiral arms while the old ones are more scattered and populate the inter--arm regions. Recently, \cite{chandar17}, using the CH16 catalogue, found that the youngest star clusters (< 6~Myr) are concentrated in the spiral arms (defined based on 3.6~$\mu$m observations). The older star clusters (6--100~Myr) are also found close to the spiral arms but they are more dispersed, and the spiral structure is not clearly recognisable in the oldest star clusters (> 400~Myr).
\begin{figure*}
\centering
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{M51_s1_ch16.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{M51_s2_ch16.pdf}
\end{subfigure}%
~
~ \hspace{-1.54cm}
\begin{subfigure}[t]{0.40\textwidth}
\includegraphics[width=1\linewidth]{M51_s3_ch16.pdf}
\end{subfigure}%
\caption{The spatial distribution of the young (blue), intermediate--age (green), and old (red) star clusters in M51a taken from the CH16 catalogue. The black outlines show the area covered by the LEGUS observations. The horizontal bar indicates a length of 2~kpc, corresponding to $54\arcsec$. North is up and East is to the left.}
\label {fig:clusters-ch16}
\end{figure*}
\begin{figure*}
\centering
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{M51_chandar.pdf}
\end{subfigure}%
~
~ \hspace{0 cm}
\begin{subfigure}[t]{0.50\textwidth}
\includegraphics[width=1\linewidth]{cum_M51_chandar.pdf}
\end{subfigure}
\caption{The normalized azimuthal distribution (left) and the cumulative distribution function as a function of the azimuthal distance (right) of three star cluster samples in M51a, based on the CH16 catalogue. The young, intermediate--age, and old star clusters are shown in blue, green, and red, respectively. The number of star clusters in each age bin (located in the disk and inside the co--rotation radius of M51a) is listed in parentheses. The error bars in each sample were calculated by dividing the square root of the number of clusters in each bin by the total number of clusters.}
\label {fig:azi_ch}
\end{figure*}
In order to quantify the possible spatial offset in the location of the three young, intermediate--age, and old star cluster samples from the CH16 catalogue across the spiral arms, we computed the normalized azimuthal distribution and corresponding cumulative distribution function of the star cluster samples in Fig.~\ref{fig:azi_ch}. We applied our analysis only to the star clusters positioned in the disk and inside the co--rotation radius of M51a (2.0--5.5~kpc). Our result demonstrates that the three young, intermediate--age, and old star cluster samples peak at an azimuthal distance of 6 degrees from the location of the spiral arms. We observe no obvious offsets between the azimuthal distances of the three star cluster age samples in M51a. \cite{chandar17}, using the same cluster catalogue, quantified the azimuthal offset of molecular gas (from PAWS and HERACLES) and young (<10~Myr) and intermediate--age (100--400~Myr) star clusters in the inner (2--2.5~kpc) and outer (5--5.5~kpc) annuli of the spiral arms. They found that in the inner annuli the young star clusters show an offset of 1~kpc from the molecular gas while there is no offset between the molecular gas and young and old star clusters in the outer portion of the spiral arms.
Adopting the CH16 catalogue, we found that there is no noticeable age gradient across the spiral arms of M51a, which is in agreement with our finding based on the LEGUS star cluster catalogue.
\section{DISCUSSION AND CONCLUSIONS}
\label{Summary}
The stationary density wave theory predicts that the age of star clusters increases with increasing distance away from the spiral arms. A simple picture of the stationary density wave theory therefore leads to a clear age gradient across the spiral arms. In this study, we test the theory that spiral arms are static features with a constant pattern speed, using the ages and positions of star clusters relative to the spiral arms.
We use high--resolution imaging observations obtained by the LEGUS survey \citep{C15} for three face--on LEGUS spiral galaxies, NGC~1566, M51a, and NGC~628.
We have measured the azimuthal distance of the LEGUS star clusters from their closest spiral arm to quantify possible spatial offsets in the location of star clusters of different ages (< 10~Myr, 10--50~Myr, and 50--200~Myr) across the spiral arms. We found that the nature of the spiral arms is not the same across our target galaxies. The main results are summarized as follows:
\begin{itemize}
\item Our detailed analysis of the azimuthal distribution of star clusters indicates that there is an age sequence across spiral arms in NGC~1566. NGC~1566 shows a strong bar and bisymmetric arms typical of a massive self--gravitating disk \citep{Elena15}. We speculate that when disks are very self--gravitating the bar and the two--armed features dominate a large part of the galaxy, producing an almost constant pattern speed. The observed trend is also in agreement with what was found by \cite{DP10} in simulations of a grand design and a barred spiral galaxy.
\item We find no age gradient across the spiral arms of M51a. This galaxy shows weaker arms and a weaker bar and hence a less self--gravitating disk. The absence of an age sequence in M51a indicates that the grand--design structures of this galaxy are not the result of a steady--state density wave, with a fixed pattern speed and shape, as in the early analytical models. More likely, the spiral is a density wave that is still changing its shape and amplitude with time in reaction to the recent tidal perturbations. A possible mechanism to explain the formation and presence of grand--design structures in spiral galaxies is an interaction with a nearby companion \citep{Toomre 72, k, B3}. Since such an interaction is obviously occurring in M51a, tidal interactions could be the dominant mechanism for driving its spiral patterns. \cite{DP10} simulated M51a with an interacting companion (M51b), and observed no age gradient across the tidally induced grand--design spiral arms.
Our findings are consistent with the results of several other observational studies, which did not find age gradients as expected from the spiral density wave theory in M51a \citep{S09, k10, Foyle, S17}.
\item NGC~628 is a multiple--arm spiral galaxy with weak spiral arms consistent with a pattern speed decreasing with radius and multiple corotation radii. In this case we find no significant offset among the azimuthal distributions of star clusters with different ages, which is consistent with the swing amplification theory. The lack of such an age offset is in agreement with an earlier analysis of NGC~628 \citep{Foyle}, and consistent with the spatial distribution of star clusters with different ages in the simulated multiple--arm spiral galaxy by \cite{grand}.
\end{itemize}
\section*{Acknowledgements}
This work is based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5--26555. These observations are associated with program 13364. Support for Program 13364 was provided by NASA through a grant from the Space Telescope Science Institute.
This research has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
A.A. acknowledges the support of the Swedish Research Council (Vetenskapsr{\aa}det) and the Swedish National Space Board (SNSB). D.A.G kindly acknowledges financial support by the German Research Foundation (DFG) through programme GO 1659/3--2.
\section*{Data Availability}
Data from the GEUVADIS project is available through ArrayExpress database (www.ebi.ac.uk/arrayexpress) under accession number E-GEUV-6.
Data from the EGA project is available through European Genome-Phenome Archive (ega-archive.org) under Study ID: EGAS00001001895 and Dataset ID: EGAD00001002714.
The source code for {\sc BREM}{} is freely available at \href{https://github.com/bayesomicslab/BREM}{https://github.com/bayesomicslab/BREM}.
\section{Discussion}
\label{sec:con}
In this work, we presented {\sc BREM}{}, a new hierarchical \textbf{B}ayesian admixture model for the \textbf{r}econstruction of \textbf{e}xcised \textbf{m}RNA and quantifying differential SME usage.
The structure of our graphical model depends on both modelling assumptions of the data generating process and an interval graph that encodes relationships between excisions (which we prove is a parsimonious encoding).
We developed efficient inference algorithms based on a polynomial--time computation of node covers and local search over independent sets.
We presented the new problem formulation of SME reconstruction, which interpolates between the local splicing and full-transcript views of alternative splicing.
To enable the comparison between local splicing, full-transcript, and excision based methods, we developed two partial homogeneity scores to match computed transcript segments to reference annotations. Finally, we demonstrated that {\sc BREM}{} is accurate in terms of SME reconstruction and identifying differential expression when compared to four state-of-the-art methods on simulated data and that it captures relevant biological signal in bulk and single-cell RNA-seq data.
There are several interesting directions for future work, both in terms of SME modelling and addressing limitations of {\sc BREM}{}.
First, a natural extension to the parametric admixture model presented here comes from placing nonparametric priors on both the individual specific ($\theta$) and global ($\beta$) SME distributions~\parencite{teh2006hierarchical}.
An immediate benefit of this Bayesian nonparametric modelling is that model selection of $K$ is integrated with the inference algorithm; moreover, the complexity of the model can adapt to new samples, for example, from a different tissue or disease condition.
Second, explicit modelling of the count-based nature of single cell RNA-seq data could, in theory, be accommodated with varying data likelihoods in our probabilistic model.
Third, although we defined our differential SME model based on expression profiles across SMEs in a gene, it could similarly be constructed to test differential SMEs within a gene.
Fourth, SME-QTLs are a natural analog to splicing QTLs (sQTLs)~\parencite{gtex2015genotype} and transcript ratio QTLs (trQTLs)~\parencite{lappalainen2013transcriptome}, but may require extensive experimental validation to evaluate.
Lastly, reads covering a single exon could be incorporated to improve abundance estimation, model allele specific expression, or detect alternative transcription start or end sites and retained introns (all of which cannot be detected by {\sc BREM}{} due to its focus on excised mRNA).
\section{Introduction}\label{aba:sec1}
Alternative splicing (AS) is characterized by the excision of pre-mRNA segments (typically intronic RNA) by the RNA-protein spliceosome complex and enables a single gene to produce multiple distinct and functionally diverse protein isoforms~\parencite{wilkinson2020rna}.
Alternative splicing is both prevalent, affecting an estimated 95\% of human protein-coding genes~\parencite{pan2008deep}, and integral for human adaptation~\parencite{barbosa2012evolutionary,keren2010alternative,merkin2012evolutionary}, gene regulation and tissue identity~\parencite{boudreault2016global,kornblihtt2013alternative,barbosa2012evolutionary,baralle2017alternative}, and disease etiology and drug resistance~\parencite{tazi2009alternative,lee2016therapeutic,yang2019aberrant}.
Given the importance of AS in developmental biology and disease etiology, considerable effort has been devoted to computationally infer both the structure and expression of alternatively spliced transcripts across differing cellular contexts.
High-throughput single cell and bulk RNA sequencing (scRNA-seq and RNA-seq respectively) provide experimental platforms for discovering and quantifying alternative splicing from short-read data.
After sequencing, reads are typically mapped to a reference genome with a splice-aware aligner that accounts for intronic gaps in the read alignment~\parencite{dobin2013star}.
Reads that map to a region for which intronic RNA was removed (splice junctions) are informative of the latent transcript diversity of the sample and can be assembled into putative transcripts \textit{de novo} or with reference transcriptome annotations.
Quantification is then determined as a function of the number of reads mapped to a specific transcript.
However, the computational characterization of AS is challenging due to biological variability and technological limitations.
First, both the structure and frequency of spliced transcripts (hereafter, \textit{transcripts} for brevity), differ by population, sex, tissue, and cell type \parencite{blekhman2010sex,ongen2015alternative,lappalainen2013transcriptome,park2018expanding,gtex2020gtex}.
Second, the structure of transcripts is often unknown or incomplete for many cell types, cell states, or non-model organisms~\parencite{morillon2019bridging}.
Third, transcripts have significant overlap of both retained and excised sequence making it difficult to distinguish the transcript of origin from short-read sequencing.
Lastly, the short read-lengths of high-throughput sequencing technologies limit the number of splice junctions observed in any single observation.
Long-read sequencing technologies yield observations with many more splice junctions but suffer from higher costs, larger error rates, and lower throughput~\parencite{mantere2019long}.
Despite these challenges, a significant number of isoform reconstruction and quantification methods have been developed with different modelling assumptions and reconstruction goals~\parencite{aguiar2018bayesian,vaquero2016new,li2018annotation,trapnell2013differential}.
Among these annotation-based methods, the majority reconstruct \textit{full-length} transcripts defined by their \textit{composite exons}.
The Bayesian isoform discovery and individual specific quantification (BIISQ) method models transcript reconstruction with a nonparametric Bayesian hierarchical model, where samples are mixtures of transcripts sampled from a population transcript distribution~\parencite{aguiar2018bayesian}.
While BIISQ was shown to have high accuracy on low abundance isoforms, it requires both the genes and the composite exon coordinates, and is unable to construct isoform transcripts that deviate from this reference annotation.
Cufflinks and StringTie are two methods that construct full-length transcripts and can operate both with or without transcript annotations.
Cufflinks reconstructs transcripts as minimum paths in an associated graph, where the aligned reads are vertices, and edges denote the compatibility of isoforms~\parencite{trapnell2010transcript}.
StringTie models transcript reconstruction using maximum network flow on a splice graph, where paths and read coverage inform isoform composition and quantification respectively~\parencite{pertea2015stringtie}.
Both are well-established state-of-the-art methods, but consider samples individually during the initial reconstruction.
For many genes, this reconstruction problem is underdetermined, uncertainty in 5' or 3' splice sites makes it difficult to identify constituent exons, and variability of read depths due to technical artifacts or biological biases obfuscates reconstruction and quantification~\parencite{mcintyre2011rna}.
In fact, full-length transcripts can be difficult to reconstruct and quantify even when transcriptome annotations are known~\parencite{vaquero2016new}.
\begin{figure*}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/overview.png}
\caption{\textbf{{\sc BREM}{} overview.}
(A) Junction reads are extracted from short-read RNA-seq data and (B) used to construct an interval graph where nodes are mRNA excisions and edges connect two mRNA excisions if they overlap.
{\sc BREM}{} is an admixture model in which the structure of the sequences of mRNA excisions (SMEs) is informed by the interval graph.
(C) Posterior inference algorithms yield SMEs and counts of reads mapped to SMEs, which are used to compute differential SME usage.
Local splicing methods would conflate the first two transcripts, while full-length methods may struggle to differentiate the last two transcripts due to their large sequence overlap.
In both cases, these issues may affect differential splicing estimates.
}
\label{fig:overview}
\end{figure*}
Recently, several methods have been proposed that focus on the RNA that is excised from pre-mRNA transcripts and relax the requirement of reconstructing full transcripts.
The local splicing and hierarchical model rMATS detects differential usage of exons through the comparison of exon-inclusions in junction reads among five different alternative splicing events~\parencite{shen2014rmats}.
Interestingly, LeafCutter focuses on the mRNA that is excised rather than the constituent exons of a transcript to identify local splicing events.
First, LeafCutter computes local splicing events from RNA-Seq data then constructs a graph $G_L = (V_L,E_L)$ where vertices, $V_L$, are excisions and edges, $E_L$, connect excisions that share a donor or acceptor splice site~\parencite{li2018annotation}.
Subsequently, differential splicing of excised sequences in the connected components of $G_L$ is computed using a Dirichlet-Multinomial generalized linear model on read counts.
LeafCutter does not suffer the same disadvantages of methods that use exonic sequences or attempt to reconstruct full-length transcripts, though at the expense of the inability to identify certain splicing events like alternative transcription start sites.
These methods are ideal for emerging technologies like scRNA-seq and tissue-- or disease--specific splicing, since these applications suffer from low coverages and incomplete annotations; instead of exonic sequences, they use sequences that overlap mRNA excisions (or \textit{junctions}), which are much more easily inferred from short read sequences at lower coverages and without transcript or exon annotations~\parencite{li2018annotation,gtex2020gtex}.
While these \textit{local} splicing methods do not suffer from the same disadvantages as the \textit{transcript-based} methods, they are limited to singular splicing events (or small neighborhoods around an event)~\parencite{li2018annotation,shen2014rmats,vaquero2016new}.
As a result, these methods may conflate multiple transcripts that share splice events, making (a) quantification and downstream haplotype analysis difficult, and (b) differential expression subject to ambiguity of transcript-level contributions~(Fig.~\ref{fig:overview}).
Here, we propose a hierarchical \textbf{B}ayesian admixture model for the \textbf{r}econstruction of \textbf{e}xcised \textbf{m}RNA ({\sc BREM}{}), a novel approach to isoform reconstruction and differential expression.
Unlike Cufflinks and StringTie, {\sc BREM}{} considers all samples jointly in a formal probabilistic model; further, {\sc BREM}{} does not require exon or transcript-level annotations and enables \textit{local}-to-\textit{full-length} transcript reconstruction~(Fig.~\ref{fig:overview}).
First, we define the \textit{sequence of mRNA excisions (SME) reconstruction problem}, focusing on the assembly of excised mRNA into sequences of mRNA excisions (hereafter, simply \textit{excisions}) from RNA-seq data.
Then, we develop a novel hierarchical Bayesian admixture model to solve the SME reconstruction problem and a differential splicing workflow based on model parameter estimates in a generalized linear model; admixture refers to samples being modelled as collections of sequence reads that are themselves sampled from global mixture components (SMEs).
We demonstrate the theoretical compactness of {\sc BREM}{} and develop Gibbs Sampling and local search-based inference algorithms that model the discovery of new SMEs as computing independent sets in an interval graph.
We demonstrate increased precision, recall, and F1 score for transcript reconstruction on simulated data using both well-established and novel measures and highly accurate and sensitive differential SME identification.
Lastly, we evaluate {\sc BREM}{} on both bulk and scRNA sequencing data based on transcript reconstruction, novelty of transcripts produced, sensitivity to hyperparameters, and a functional analysis of differentially expressed SMEs, demonstrating that {\sc BREM}{} captures relevant biological signal.
\section*{Competing interest statement}
The authors declare no competing interests.
\section*{Acknowledgements}
We are gracious to the GEUVADIS and EGAS00001001895 projects for providing open and easy access to experimental data.
MH, DM, and DA were funded by DA's University of Connecticut start-up research funds.
\printbibliography
\end{refsection}
\newpage
\begin{refsection}
\include{supp}
\printbibliography
\end{refsection}
\end{document}
\section{Methods}\label{sec:met}
We are given RNA sequencing data $\mathcal{D}$ that has been aligned to a reference genome.
Let the samples be indexed by $i$ for $i \in \{1,\dots,N\}$ and the total number of junction reads per sample be $J_i$.
Though this is a non-trivial problem, we assume for ease of exposition that aligned sequence reads can be assigned to a specific gene.
The \textbf{isoform reconstruction problem} aims to reconstruct, for each sample $i$, the full-length isoform transcripts as defined by their component exons.
Reads that overlap exon junctions (i.e. \textit{junction reads}) are highly informative for transcript reconstruction.
In contrast, the \textbf{splice event reconstruction problem} aims to identify singular splicing events that exist in any transcript expressed in $\mathcal{D}$.
Since there need not be assembly of transcripts, this problem is generally a simpler computational task than the \textit{isoform reconstruction problem}, but can still yield biologically interesting insights, e.g., differential usage of particular splice sites.
Here, we introduce the \textbf{SME reconstruction problem}: given aligned RNA-seq data $\mathcal{D}$, reconstruct sequences of co-occurring excised mRNA.
This problem interpolates between the isoform reconstruction problem and the splice event reconstruction problem as special cases: i.e., when the sequences are defined as full-length transcripts or singular splice sites.
Since methods that focus on local splicing events cannot compute reliable transcript abundances, we only consider the problem of differential expression between two groups.
Whenever the context is clear, we will refer to both differential usage of local splicing events, transcripts, and SMEs simply as \textit{differential expression}.
\subsection{{\sc BREM}{}: \textbf{B}ayesian \textbf{R}econstruction of \textbf{E}xcised \textbf{m}RNA}
\label{sec:modeldescription}
{\sc BREM}{} solves the SME reconstruction problem by representing samples as mixtures of SMEs sampled from a global distribution that is learned across samples (Fig.~\ref{fig:overview}).
SMEs are sequences of mRNA excisions (typically, but not limited to, introns) that can be assigned to the same transcript (i.e., they do not overlap).
Briefly, {\sc BREM}{} models samples as mixtures of SMEs, which are themselves mixtures of junction reads.
{\sc BREM}{} learns the structure of SMEs, a global distribution over SMEs, a mapping between junction reads and SMEs, and a sample-specific distribution of SMEs.
A separate model is built for each gene, which includes $i=1 \dots N$ samples that are collections of reads overlapping excision junctions.
Let the set of unique excisions within a gene be $V$, which is indexed by $v \in \{1, \dots, |V|\}$.
The $i^{th}$ sample consists of $1, \dots, J_i$ junction reads.
The goals are to reconstruct the latent SMEs and assign sample reads to SMEs for subsequent differential testing of SMEs.
{\sc BREM}{} consists of two major components: (a) combinatorial model for SME structure and (b) a probabilistic model for SME admixture.
\subsubsection{Combinatorial model for SME structure.}
\label{combstruct}
We represent excisions as intervals on the genome, defined as tuples: \textit{(start position, terminal position)} and SMEs as sequences of excisions.
The goal is to arrange excisions into $K$ SMEs, such that no SME contains two overlapping excisions.
To enforce this criterion, we compute a graph $G = (V, E)$, with $v \in V$ for each unique excision $v$ and $(v_1, v_2) \in E$ if $v_1 = (s_1, t_1)$ and $v_2 = (s_2, t_2)$ intersect, i.e. $\min(t_1, t_2) - \max(s_1, s_2) > 0$.
Note that independent sets in this graph correspond to valid SMEs for which no pair of excisions overlap.
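The construction above can be sketched directly (a minimal illustration with hypothetical excision coordinates, not code from our implementation):

```python
from itertools import combinations

# Excisions as (start, terminal) genomic intervals (hypothetical values).
excisions = [(100, 250), (200, 400), (300, 500), (450, 600)]

def overlap(a, b):
    """Two intervals conflict if min(t1, t2) - max(s1, s2) > 0."""
    return min(a[1], b[1]) - max(a[0], b[0]) > 0

# Edges of the interval graph G: pairs of excisions that conflict.
edges = [(u, v) for u, v in combinations(range(len(excisions)), 2)
         if overlap(excisions[u], excisions[v])]

def is_valid_sme(nodes):
    """An SME is valid iff its excisions form an independent set in G."""
    return not any(u in nodes and v in nodes for u, v in edges)
```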
In our probabilistic model, we enforce that two excisions should not be expressed in the same SME if they are connected in $G$ using Bernoulli random variables.
For instance, if $v_1$ and $v_2$ are excisions connected in $G$, then we can enforce $v_1 \oplus v_2$ where $\oplus$ is exclusive OR.
We create a Bernoulli random variable $b_{kv_1}$, where $b_{kv_1}=1$ if $v_1$ is in the $k^{th}$ SME and $0$ otherwise.
Similarly $(1-b_{kv_1})$ is $1$ if $v_2$ is present and $0$ otherwise.
Unfortunately, this strategy does not scale well, as the number of random variables in the probabilistic model would be proportional to the number of conflicts.
For example, consider the complete bipartite graph $K_{1,k}$ (or the star $S_k$).
This tree has a single internal node and $k$ leaves, which would generate $k$ Bernoulli random variables because there are $k$ conflicts.
However, notice that since the internal node is connected to all leaves, if the excision represented by the internal node is selected to be in the SME, none of the leaves may be added and the model can be described with a single random variable.
A parsimonious representation of SMEs reduces the number of parameters in our model, making inference more efficient and mitigating issues associated with model non-identifiability.
Consider the excision graph as defined earlier: $G=(V,E)$ where $v \in V$ for unique excision $v$ and $(v_1,v_2) \in E$ if $v_1$ and $v_2$ intersect.
Edges represent excisions that cannot be co-expressed in the same SME.
Let $C$ be the set of Bernoulli random variables required to encode all conflicts in $G(V,E)$.
\begin{proposition}
\label{theo:mvc}
Choosing the minimum number of variables required to encode all conflicts between excisions, i.e., computing $C$ such that $|C|$ is minimum, can be done in time $O(|E|)$.
\end{proposition}
Since each edge in $E$ denotes two excisions that cannot coexist in the same cluster, we need at least one incident node of each edge to exist in $C$; this is a node cover of $G$.
A minimum node cover has the smallest cardinality among all node covers and therefore a corresponding $C$ for which $|C|$ is the smallest.
Since $G$ is an interval graph, computation of a minimum node cover can be done in $O(|E|)$ time~\parencite{marathe1992generalized}.
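For interval graphs, a minimum vertex cover is the complement of a maximum independent set, and the latter is found by the classic earliest-finish greedy. The sketch below illustrates this; it runs in $O(|V|\log|V|)$ because of the sort, rather than the $O(|E|)$ bound cited above, and the function name is our own:

```python
def min_vertex_cover_intervals(excisions):
    """Minimum vertex cover of an interval graph: all intervals NOT in a
    maximum independent set, where the MIS is built by the earliest-finish
    greedy (scan intervals in order of terminal position)."""
    order = sorted(range(len(excisions)), key=lambda i: excisions[i][1])
    mis, last_end = set(), float("-inf")
    for i in order:
        s, t = excisions[i]
        if s >= last_end:  # no strict overlap with the last chosen interval
            mis.add(i)
            last_end = t
    return set(range(len(excisions))) - mis
```

The returned set plays the role of $C$: one Bernoulli variable per cover node suffices to encode every conflict edge.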
\subsubsection{Probabilistic model for SME admixture.}
The probabilistic component of {\sc BREM}{} consists of a model for SME structure that is shared across all samples (Fig.~\ref{fig:overview}, B and Fig.~\ref{fig:graphicalmodel}, left) and a model for the SME composition of a specific sample (Fig.~\ref{fig:overview}, C and Fig.~\ref{fig:graphicalmodel}, right); complete model details can be found in \S \ref{supmoddets}.
The structure of an SME consists of the inclusion or exclusion of excisions.
We place an explicit beta-Bernoulli prior on excisions to control the sparsity of SMEs.
\begin{align*}
b_{kv} & \sim Bernoulli(\pi_k), \forall v\in C \\
s.t. & \hspace{20pt} \bm{b_{k\cdot}} \in \Omega \\
\pi_k & \sim Beta(r,s)
\end{align*}
for hyperparameters $r$ and $s$ and the space of valid SMEs $\Omega$, which is defined by all subsets of excisions in which there exists no two elements that conflict in $G$; thus, $\Omega$ is equivalent to the \textit{set of all (not necessarily maximal) independent sets in $G$}.
In total, we only instantiate $|C|$ Bernoulli variables since we can encode all $v \in C$ using $b_{kv}$ and all $\hat{v} \notin C$ with variables of the form $(1-b_{kv})$ for some $v \in C$.
If a single $\hat{v} \notin C$ is adjacent to two or more $v \in C$, one adjacent $v$ is selected at random for the encoding.
Importantly, $r$ and $s$ can be adjusted to encourage shorter or longer SMEs by affecting the prior probability of excision inclusion (see \S \ref{trecon}).
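The effect of $r$ and $s$ can be checked directly: marginally, the beta-Bernoulli prior includes an excision with probability $r/(r+s)$, so raising this mean favors longer SMEs. A Monte Carlo sketch (our own illustration, not model code):

```python
import numpy as np

def prior_inclusion_rate(r, s, n=100_000, rng=None):
    """Monte Carlo estimate of the marginal prior probability that an
    excision is included in an SME under the beta-Bernoulli prior; this
    should be close to r / (r + s)."""
    if rng is None:
        rng = np.random.default_rng(0)
    pi = rng.beta(r, s, size=n)       # pi_k ~ Beta(r, s)
    b = rng.binomial(1, pi)           # b_kv ~ Bernoulli(pi_k)
    return b.mean()
```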
We model the $k^{th}$ SME, $\beta_k$, as a degenerate Dirichlet distribution whose dimension is controlled by the beta-Bernoulli prior~\parencite{wang2009decoupling,aguiar2018bayesian}.
Intuitively, we discourage excisions to occupy the same SME based on the structure of the excision interval graph $G$.
In SME $k$, we enforce these constraints through the $|V|$-dimensional $\bm{b_k} = (b_{k1}, \dots, b_{k|V|})$ vector, in which $b_{kv}$ selectively turns off or on dimension $v$.
Note that there are only a total of $|C|$ unique $b_{kv}$ as some of these variables are repeated due to excision constraints.
\begin{equation}
\label{dirk}
\beta_k \sim Dirichlet_{|V|}(\bm{\eta} \odot \bm{b_k})
\end{equation}
where $\bm{\eta} = (\eta_1, \dots, \eta_{|V|})$ is a hyper-parameter and notation $\odot$ signifies element-wise vector multiplication.
Equation \ref{dirk} also highlights non-identifiability issues when $|C|$ is not minimum.
For example, consider an excision graph $G=(V,E)$ where $V=\{v_1,v_2\}$ and $E=\{(v_1,v_2)\}$ and a non-parsimonious encoding that has a variable for each excision: $C=\{b_{kv_1},b_{kv_2}\}$.
Setting $b_{kv_1}=1$ and $b_{kv_2}=1$ is equivalent to $b_{kv_1}=0$ and $b_{kv_2}=0$.
In general, consider a clique of size $L$.
Selection of any subset of $C$ such that $|C|>1$ results in the same degeneracy of $\beta_k$.
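A degenerate Dirichlet draw as in Equation \ref{dirk} can be sketched by sampling a standard Dirichlet over the active dimensions only, with the masked dimensions pinned to exactly zero (illustrative code; the function name is ours):

```python
import numpy as np

def sample_degenerate_dirichlet(eta, b, rng):
    """Sample beta_k ~ Dirichlet(eta * b): dimensions with b_v = 0 are
    fixed at zero; the remaining dimensions follow a Dirichlet restricted
    to the active excisions."""
    beta = np.zeros(len(eta))
    active = np.flatnonzero(b)
    beta[active] = rng.dirichlet(np.asarray(eta, dtype=float)[active])
    return beta
```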
The model for the SME composition of a specific sample describes both the distribution over SMEs and a mapping between junction reads and SMEs.
The proportion of SMEs in sample $i$ is modelled by $\theta_i$, which follows a $K$ dimensional Dirichlet distribution.
The $k^{th}$ dimension represents the probability of observing an excision from SME $k$.
\begin{equation*}
\label{eq:gentheta}
\theta_{i} \sim Dirichlet_K(\bm{\alpha})
\end{equation*}
where $\bm{\alpha}= (\alpha_1, \dots, \alpha_K)$ are hyperparameters.
Sample $i$ has $J_i$ observations of junction reads that overlap one or more excisions.
The assignment of junction read $j$ in sample $i$ to a SME is denoted by $z_{ij} \in \{1, \dots, K\}$ and follows a Multinomial distribution.
\begin{equation*}
\label{eq:genz}
z_{ij} \sim Multinomial(\theta_{i})
\end{equation*}
The data likelihood is represented by observed random variables $w_{ij}$, modelling the $j^{th}$ junction read in sample $i$ ($w_{ij} \in \{1, \dots, |V|\}$) and follows a Multinomial distribution.
\begin{equation*}
\label{eq:genw}
w_{ij} \sim Multinomial (\beta_{z_{ij}})
\end{equation*}
Here, the parameter for the Multinomial is the $\beta$ selected by variable $z_{ij}$.
\subsection{Inference Algorithm}
\label{sec:inf}
We develop a Gibbs sampling algorithm to fit our model; the algorithm first samples parameter values from their priors.
Then, for each latent variable $z$, we sample from their complete conditionals, i.e., the probability of $z$ given all other random variables in the model (see \S \ref{sec:supp_gibbs}).
To determine Gibbs sampling convergence, we used Relative Fixed-Width Stopping Rules (RFWSR)~\parencite{flegal2015relative}.
RFWSR sequentially checks the width of a confidence interval relative to a threshold (here, $\sigma = 0.001$) based on the effective sample size.
Most variables yield efficient updates (full derivations are provided in the supplemental materials \S \ref{sec:supp_gibbs}); however, special consideration is required for $z$ and $b$.
\subsubsection{Sampling \texorpdfstring{$\bm{z}$}{\textbf{z}}.}
Gibbs sampling can be inefficient, particularly for admixture models where the likelihood computation requires iterating over the full, typically high dimensional data~\parencite{hoffman2013stochastic}.
We improved the efficiency of our inference algorithms by exploiting the low dimensionality of junction reads.
In admixture modelling, the dimension of the data is typically much larger than the number of distinct data items in a sample.
For example, in topic modelling, the number of words in the vocabulary is much larger than the number of unique words in a document.
In our context, however, the number of distinct excisions in a gene is much smaller than the total number of junction reads.
We can exploit the fact that we treat the expression of junction reads as draws from a Multinomial.
Given probabilities $p_1, p_2, \dots, p_K$ such that $\sum_{i = 1}^{K}p_i = 1$ and $S$ as the number of draws, the naive algorithm for sampling from a discrete distribution divides the interval $[0,1]$ into $K$ segments with the length equal to $p_1, p_2, \dots, p_K$.
A number is sampled from the Uniform distribution $\mathcal{U}(0,1)$, and the matched category is found using binary search; this procedure is repeated $S$ times.
The sampling algorithm requires $\mathcal{O}(K)$ time for initialization, and then $\mathcal{O}(S \log(K))$ time for sampling~\parencite{startek2016asymptotically}.
In our setting, for a given gene and excision, $K$ is the number of SMEs and the number of draws, $S$, is the number of times the excision appeared in the gene.
So, using this scheme to sample the latent variable for SME assignment yields a complexity of $\mathcal{O}(K + S \log(K))$, compared with $\mathcal{O}(SK + S\log(K))$, which saves significant time when $S \gg K$.
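In practice, the same idea can be pushed further: all $S$ repeated draws for one excision can be collected in a single multinomial call, and the explicit per-read labels recovered only if needed. A sketch (function names are ours, assuming NumPy):

```python
import numpy as np

def assign_excision_counts(p, S, rng):
    """Draw S SME assignments for one excision with a single multinomial
    call, instead of S independent categorical draws; returns a length-K
    vector of counts summing to S."""
    return rng.multinomial(S, p)

def counts_to_assignments(counts):
    """Optionally expand the counts back into explicit per-read z labels."""
    return np.repeat(np.arange(len(counts)), counts)
```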
\subsubsection{Sampling \texorpdfstring{$\bm{b}$}{\textbf{b}}.}
\label{sec:algorithm}
Each iteration of Gibbs sampling requires sampling $\bm{b_k}$, which is non-trivial since the distribution of $\bm{b_k}$ is defined over independent sets of $G$.
At each iteration we perform a local search among valid configurations (independent sets) in $\Omega$ using a novel local search algorithm for independent sets.
At iteration \textit{t}, given the current configuration $\bm{b_k}^{t}$ and $\beta_k^{t}$ we select $\Phi = \{\phi_1, \phi_2, \dots, \phi_T\}$ valid configurations (Alg. \ref{alg:localindsearch}), among which we sample according to a Multinomial distribution (Sec. \ref{sec:supp_localsearch}).
\begin{equation*}
\bm{b_k}^{t+1}\sim Multinomial(\phi_1^{t}, \phi_2^{t}, \dots, \phi_T^{t})
\end{equation*}
After Gibbs Sampling converges, {\sc BREM}{} collapses SMEs with the same excision configuration.
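A minimal sketch of one style of local move over independent sets (flip a single excision and keep only configurations that remain conflict-free) is shown below; this illustrates the idea and is not the exact Alg.~\ref{alg:localindsearch}:

```python
def valid_neighbors(b, edges):
    """Enumerate configurations reachable from b by flipping one excision,
    keeping only those that remain independent sets of the conflict graph."""
    n = len(b)
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    out = []
    for v in range(n):
        cand = list(b)
        cand[v] = 1 - cand[v]
        # turning an excision ON is valid only if no selected neighbor conflicts
        if cand[v] == 0 or all(cand[u] == 0 for u in adj[v]):
            out.append(tuple(cand))
    return out
```

The valid configurations returned here play the role of $\Phi$, from which one configuration is drawn according to a Multinomial distribution.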
\subsubsection{Bounding the number of SMEs.}
In order to guarantee the constraints are respected, we need to compute a lower bound on the number of SMEs.
The minimum number of SMEs is equal to the chromatic number of $G$.
Since $G$ is an interval graph, this number is the same as the number of vertices in the maximum clique ($K$).
Therefore, we set the minimum number of SMEs to $K$ such that there exists at least one SME for each $b$ variable.
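For interval graphs this bound is easy to compute: the maximum clique size equals the maximum number of simultaneously open intervals, found by a sweep over endpoints. A sketch (the function name is ours):

```python
def min_num_smes(excisions):
    """Chromatic number of the excision interval graph = size of its maximum
    clique, computed by sweeping interval endpoints."""
    events = []
    for s, t in excisions:
        events.append((s, 1))    # interval opens
        events.append((t, -1))   # interval closes
    # process close events before open events at equal coordinates, since
    # touching intervals (t1 == s2) do not strictly overlap
    events.sort(key=lambda e: (e[0], e[1]))
    depth = best = 0
    for _, delta in events:
        depth += delta
        best = max(best, depth)
    return best
```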
\subsection{Differential SMEs}
Here, we define our generalized linear model (GLM) to compute differential SME usage using the fitted model parameters in {\sc BREM}{}.
We quantify differential SME usage based on the expression profile across all SMEs for $2$ groups of samples in each gene.
The $z$ variables represent the mapping of an excision observation to a SME.
Let the counts across all unique junction reads for sample $i$ be denoted $z_{i}$.
Then, we can express $z_{i}$ as a Dirichlet-multinomial
\begin{equation*}
z_{i1},\dots,z_{iJ_i} \sim DirMult \left(\sum_j z_{ij}, \alpha \odot p_i \right)
\end{equation*}
where $p_{ij} = \frac{\exp(x_i \beta_j + \mu_j)}{\sum_k \exp(x_i \beta_k + \mu_k)}$. We set $\alpha \sim \mathrm{Gamma}(1 + 10^{-4}, 10^{-4})$ to stabilize maximum likelihood estimation~\parencite{li2018annotation}.
Finally, to test differential SMEs between two groups, we construct two models: (a) a DirMult GLM where we set $x_i=0$ for one group and $x_i=1$ for the other and (b) a DirMult GLM where all $x_i=0$.
Differential SMEs are quantified by a likelihood ratio test with $K-1$ degrees of freedom, where $K$ is the number of SMEs.
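Given the fitted log-likelihoods of the two DirMult GLMs, the test itself reduces to a chi-square tail probability. A sketch, assuming SciPy is available (the function name and signature are ours):

```python
from scipy.stats import chi2

def lrt_pvalue(loglik_null, loglik_alt, K):
    """Likelihood ratio test for differential SME usage: the statistic
    2 * (l_alt - l_null) is referred to a chi-square distribution with
    K - 1 degrees of freedom."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return chi2.sf(stat, df=K - 1)
```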
\section{Related Work}
\label{sec:rw}
Methods for AS characterization can be grouped into three categories based on their assumed input annotations: no reference annotations (\textit{de novo} assembly), gene transcripts and their exon composition (transcript annotation-based), and gene starting and ending positions only (transcript annotation-free).
\textit{De novo} transcriptome assembly methods, like Trinity~\parencite{haas2013novo} and ABySS~\parencite{birol2009novo}, compute transcripts from unaligned sequence reads, typically without the benefit of reference annotations.
When a reference genome sequence is well characterized, transcript annotation-based and annotation-free methods have been shown to produce more accurate transcripts and quantifications~\parencite{marchant2016comparing}; since our focus is on species with well-categorized genome sequences, we restrict our attention to methods that assume sequence reads can be mapped to a genome reference.
Transcript annotation-based and annotation-free isoform reconstruction methods begin by aligning RNA-seq reads to a reference genome using a splice-aware aligner~\parencite{langmead2012fast,dobin2013star}.
The overwhelming majority of these methods reconstruct full-length transcripts as ordered sets of exons, focusing on the RNA that is retained.
The Bayesian isoform discovery and individual specific quantification (BIISQ) method models transcript reconstruction with a nonparametric Bayesian hierarchical model, where samples are mixtures of transcripts sampled from a population transcript distribution~\parencite{aguiar2018bayesian}.
While BIISQ was shown to have high accuracy on low abundance isoforms, it requires both the genes and the composite exon coordinates, and is unable to construct isoform transcripts that deviate from this reference annotation.
Cufflinks and StringTie are two methods that construct full-length transcripts and can operate both with or without transcript annotations.
Cufflinks reconstructs transcripts as minimum paths in an associated graph, where the aligned reads are vertices, and edges denote the compatibility of isoforms~\parencite{trapnell2010transcript}.
StringTie models transcript reconstruction using maximum network flow on a splice graph, where paths and read coverage inform isoform composition and quantification respectively~\parencite{pertea2015stringtie}.
Both are well-established state-of-the-art methods, but consider samples individually during the initial reconstruction.
Additionally, all aforementioned methods are restricted to constructing full-length isoforms, a problem that is made challenging by exon boundaries that are difficult to identify and variability in read depths across transcripts.
A more recent class of isoform reconstruction and quantification methods focus on characterizing local splicing events.
The local splicing and hierarchical model rMATS detects differential usage of exons through the comparison of exon-inclusions in junction reads among five different alternative splicing events~\parencite{shen2014rmats}.
Interestingly, LeafCutter focuses on the mRNA that is excised rather than the constituent exons of a transcript to identify local splicing events.
First, LeafCutter computes local splicing events from RNA-Seq data then constructs a graph $G_L = (V_L,E_L)$ where vertices, $V_L$, are excisions and edges, $E_L$, connect excisions that share a donor or acceptor splice site~\parencite{li2018annotation}.
Subsequently, differential splicing of excised sequences in the connected components of $G_L$ is computed using a Dirichlet-Multinomial generalized linear model on read counts.
LeafCutter does not suffer the same disadvantages of methods that use exonic sequences or attempt to reconstruct full-length transcripts, though at the expense of the inability to identify certain splicing events like alternative transcription start sites.
These methods also may fail to capture interactions between splicing events on the same transcript and may conflate transcripts that share splice events.
For example, if two transcripts share an excision, the read counts on the shared junction will be summed, conflating the two transcripts and potentially masking differential expression across two populations (Figure~\ref{fig:overview} C).
Our method, {\sc BREM}{}, is situated in between full-length transcript and local splicing methods~(Fig.~\ref{fig:overview}).
It benefits from the transcript annotation-free nature of excisions, while also supporting local splicing events, full-length transcripts, and variable lengths in between.
{\sc BREM}{} also shares similarities with BIISQ in that it considers all samples jointly and is defined as a formal probabilistic model, enabling the quantification of uncertainty and direct interpretation of fitted model parameters that are used to both explore the results and develop a method for differential testing.
\section{Results}
\label{sec:results}
All transcript annotation-free AS characterization methods must first reconstruct spliced transcripts based only on aligned RNA-seq data and approximate gene starting and ending coordinates.
Here, we consider four state-of-the-art methods for AS characterization: rMATS~\parencite{shen2014rmats}, LeafCutter~\parencite{li2018annotation}, Cufflinks~\parencite{trapnell2010transcript}, and StringTie~\parencite{pertea2015stringtie}.
These four methods range from single splicing event to full-length transcripts and so we refer to their reconstructed output collectively as \textit{transcript segments}.
SMEs can be interpreted as the sequence of mRNA excisions within a transcript segment.
Since these methods have different targets for reconstruction, comparing them presents a number of challenges.
First, transcript segments must be mapped to a reference annotation to evaluate reconstruction accuracy.
Computed full-length transcripts may include or exclude a subset of excisions or differ slightly in excision coordinates.
Methods that consider single splice events may produce many slightly different versions of the same excision and are attempting to solve a less complex problem than full-length transcript reconstruction.
Second, each method computes different abundance measures that may be unavailable to competing methods (e.g., FPKM is poorly defined for singular splicing events).
Therefore, we develop two measures for matching computed transcript segments of any size to a reference set of transcripts and focus on the evaluation of differential splicing for whichever transcript segment is produced by each method.
\subsection{Evaluation Criteria}
To appropriately evaluate methods that compute transcript segments of varying size we define two measures based on excision matching.
If the set of expressed transcripts is known \textit{a priori}, computed excisions can be evaluated using variants of homogeneity scores and partial precision and recall~\parencite{aguiar2018bayesian}, however that is not the case here.
Since we can compute excisions from exons, but not vice versa, we define transcript segments by the set of their component excisions.
Reconstructed transcript segments may include exons from disparate transcripts.
With a known reference, we compute the number of excisions that appear in any reference transcript and normalize by the total number of excisions.
We define the $k^{th}$ computed transcript segment $T_k$ as a subset of excisions, or, $T_k \subseteq \{1,\dots,V\}$.
Let the set of reference transcripts be $T^t$
and the $v^{th}$ excision be $e_v$.
The set $T^t$ either represents a simulated baseline or known experimental transcripts from a well characterized cell type.
The \textit{partial homogeneity score (phs)} for transcript $T_k$ in sample $i$ can be computed as
\begin{equation*}
s_i^{phs}(T_k)=\max_{T \in T^t} \frac{\sum_{e_j \in T_k} \mathbbm{1}\left[e_j \in T \right]}{ |T_k| }
\end{equation*}
where $\mathbbm{1}[e_j \in T]$ is $1$ if $e_j$ matches an excision in $T$ and $0$ otherwise.
Here, we consider two excisions $e_v$ and $e_w$ as matching if the donor and acceptor splice sites of $e_v$ are at most $6$ bases from the donor and acceptor splice sites of $e_w$.
The score $s_i^{phs}$ enforces that excisions are sampled from the same true transcript and is normalized by the size of the computed transcript segment.
We also define $\hat{s}_i^{phs}$, which normalizes computed transcript segments by the true transcript length.
\begin{equation*}
\hat{s}_i^{phs}(T_k)=\max_{T \in T^t} \frac{\sum_{e_j \in T_k} \mathbbm{1}\left[e_j \in T \right]}{ |T| }
\end{equation*}
Both scores $s_i^{phs}$ and $\hat{s}_i^{phs}$ are related to the Jaccard index but importantly emphasize different goals.
Score $s_i^{phs}$ will tend to produce better scores for methods that compute shorter transcript segments; as long as the shorter transcript segments are accurate (they are contained within true transcripts), this score will be close to $1$.
In contrast, $\hat{s}_i^{phs}$ prefers longer transcript segments and will be close to $1$ if the computed transcript is both accurate and full-length.
Either $s_i^{phs}$, $\hat{s}_i^{phs}$, Jaccard index, or some linear combination thereof can be used depending on the goals of the study.
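The two per-segment scores, including the $6$-base matching tolerance on donor and acceptor sites, can be sketched as follows (function names are our own illustration):

```python
def matches(e, f, tol=6):
    """Two excisions match if both donor and acceptor splice sites are
    within tol bases of each other."""
    return abs(e[0] - f[0]) <= tol and abs(e[1] - f[1]) <= tol

def phs(T_k, refs, tol=6, normalize_by_ref=False):
    """Partial homogeneity scores for one computed transcript segment T_k:
    s^phs normalizes the best reference match by |T_k|; with
    normalize_by_ref=True, the score s-hat^phs normalizes by |T| instead."""
    best = 0.0
    for T in refs:
        hits = sum(any(matches(e, f, tol) for f in T) for e in T_k)
        denom = len(T) if normalize_by_ref else len(T_k)
        best = max(best, hits / denom)
    return best
```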
Finally, let the set of computed transcripts be $T_{(i)}^c=\{T_k\}$.
An overall score for sample $i$ can then be computed as
\begin{gather*}
s_i^{phs}=\frac{\sum_{T_k \in T_{(i)}^c} s_i^{phs}(T_k)}{|T_{(i)}^c|} \qquad \text{and} \qquad \hat{s}_i^{phs}=\frac{\sum_{T_k \in T_{(i)}^c} \hat{s}_i^{phs}(T_k)}{|T_{(i)}^c|}
\end{gather*}
To compute precision and recall, we first match computed transcript segment $T_k$ to the true transcript $T \in T^t$ with maximum $s_i^{phs}$ or $\hat{s}_i^{phs}$.
Let the matched transcript be $T^*$.
Then, we label each excision $e_j \in T_k$ as a true positive (TP) if $\mathbbm{1}\left[e_j \in T^* \right]=1$ and a false positive (FP) otherwise.
Excisions are labeled as false negatives (FN) if they exist in a true transcript but were not included in any computed transcript.
The F1 score is computed as the harmonic mean of precision and recall, which are computed as: $precision=\frac{TP}{TP+FP}$ and $recall=\frac{TP}{TP+FN}$.
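These definitions can be sketched directly, assuming each computed segment has already been matched to its best reference transcript (the function name and argument layout are ours):

```python
def prf1(computed, matched_refs, true_excisions, tol=6):
    """Precision, recall, and F1 over excisions: each computed transcript
    segment is compared against its matched reference; false negatives are
    true excisions missed by every computed segment."""
    def match(e, f):
        return abs(e[0] - f[0]) <= tol and abs(e[1] - f[1]) <= tol
    tp = fp = 0
    for T_k, T_star in zip(computed, matched_refs):
        for e in T_k:
            if any(match(e, f) for f in T_star):
                tp += 1
            else:
                fp += 1
    covered = [any(match(e, f) for T_k in computed for e in T_k)
               for f in true_excisions]
    fn = covered.count(False)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```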
\subsection{Data}
\label{sec:sim-res}
\subsubsection{Simulations}
We evaluate isoform reconstruction with extensive simulations from the Polyester simulator~\parencite{frazee2015polyester}.
We consider protein coding genes from reference chromosomes of the GENCODE comprehensive gene annotation version V34 (human genome version GRCh38/hg38)~\parencite{frankish2019gencode}.
We generated a diverse set of genes by randomly sampling from GENCODE until we had at least $60$ genes in each of the following transcript-count (isoform) categories: $\{2, 3, 5, 7, 10, 15, 25\}$ ($420$ genes in total).
For each gene, we simulated $800$ samples at $50x$ coverage and then downsampled each gene to produce new datasets of $25x$ and $5x$ coverage.
The samples were simulated using $8$ groups of $100$ samples each with different fold changes ($1$, $1$, $1$, $1.1$, $1.25$, $1.5$, $3$, and $5$) to allow for estimation of false discoveries and differential splicing sensitivity~\parencite{li2018annotation}.
The number of reads varied per sample based on a negative binomial distribution for read counts~\parencite{frazee2015polyester}.
The output FASTA files from Polyester were aligned to the human genome (version GRCh38/hg38) using STAR aligner with default parameters and GENCODE v34 annotations~\parencite{dobin2013star}.
In total, we simulated $1260$ genes ($420$ genes $\times$ $3$ coverages), yielding over a million BAM files.
Data simulation steps are detailed in Sec.~\ref{sec:supp_data_sim} (See Fig. \ref{fig:data_sim} for additional information).
\subsubsection{Experimental Data}
To evaluate our differential SME model, we consider both bulk and single-cell sequencing experimental data.
The GEUVADIS data contains bulk RNA-seq data from lymphoblastoid cell lines in $465$ individuals.
The samples provided from this data set are ethnically diverse and span five populations: Utah residents with northern and western European ancestry (CEU), Finnish from Finland (FIN), British from England and Scotland (GBR), Toscani from Italia (TSI), and Yoruba from Ibadan, Nigeria (YRI); each population consists of $89-95$ samples.
In our differential SME analysis, we grouped the CEU, FIN, GBR, and TSI populations into European (EUR) and classified the YRI population as African (AFR).
We also consider single-cell data from the European Genome-Phenome Archive (EGA).
This data was used to investigate the response of monocytes to bacterial and viral stimuli in two populations, each with $100$ males self-reported as having predominately African ancestry (AFB) or European ancestry (EUB) within Ghent Belgium~\parencite{rotival2019defining}.
Up to five samples from peripheral blood mononuclear cells were collected for each individual resulting in $970$ total samples.
One sample remained untreated, while the four other samples were exposed over 6 hours to bacterial lipopolysaccharide (LPS), synthetic triacylated lipopeptide (Pam$_3$CSK$_4$), imidazoquinoline compound (R848), and human seasonal influenza A virus (IAV).
We compared the untreated samples with the group of treated samples to evaluate differential SMEs.
\subsection{Preprocessing}
\label{sec:preprocessing}
The input to our model is the set of junction reads mapped to a reference genome.
In this work, we map reads to the reference genome using STAR aligner (V. 2.7.3a).
We used gene annotations whenever possible, including in STAR alignments, since some methods require gene annotations and the STAR documentation strongly recommends using them when available.
Gene annotations were also used to generate the BAM files for the GEUVADIS data.
However, gene annotations are not required to execute {\sc BREM}{}.
\subsubsection{Excision Extraction}
\label{sec:intron}
We build a model for each gene independently based on approximate gene starting and terminal coordinates.
Given approximate gene coordinates, we extract reads in each sample that overlap excisions (junction reads) using RegTools (version 0.5.1).
The intervals of genes that overlap are combined.
We refined the set of excisions by removing reads that do not map uniquely (e.g., due to paralogous genes), short excisions ($<50$ bp), long excisions ($>500{,}000$ bp), and false positive splice junctions identified by Portcullis~\parencite{mapleson2018efficient}.
The extracted junctions from the mapped reads form the input to {\sc BREM}{}.
\subsection{Model Selection}
\label{sec:exp-res}
{\sc BREM}{} assumes that the number of SMEs ($K$) is given as input.
However, the number of SMEs should not be less than the chromatic number of the excision interval graph, or, equivalently, the size of the maximum independent set ($IS$) of the complement graph.
Using the interval graph property, we can compute the chromatic number in polynomial time.
Then, for each gene, we trained our model with $K = IS + x$, where $x \in \{0,2,4,6,8,10,12,14,16\}$ and selected the model with the highest \textit{predictive likelihood}.
Predictive likelihood is commonly used to perform model selection on admixture and topic models~\parencite{wallach2009evaluation} and is less prone to overfitting than likelihood.
To select hyperparameters, we implemented a grid search on held-out genes where $\alpha \in \{0.001, 0.01, 1, 5, 10\}$, $\eta \in \{0.01, 1, 5, 10\}$, and $r, s \in \{1, 5, 10\}$ (with $r = s$).
Throughout the subsequent experiments, we set $\eta = 0.01$ and $\alpha=r=s=1$ for both simulated and experimental data.
For convergence, we check RFWSR every 50 iterations after burn-in (500 iterations) and stop sampling after 100 iterations if RFWSR$<\sigma$.
\subsection{Transcript Reconstruction}
\label{trecon}
We applied {\sc BREM}{}, Cufflinks, LeafCutter, StringTie, and rMATS to reconstruct transcripts, splice events, or SMEs in all $1260$ ($420 \times 3$) simulated genes.
Before computing precision, recall, and F1 score, the computed transcript segments must be matched to true transcripts; here, we quantify this using the partial homogeneity scores~(Fig.~\ref{fig:phsf}).
\begin{figure}[h]
\centering
\includegraphics[trim={0 0 0 0}, clip, width=0.85\textwidth]{figs/bioinf_X1_bamie_phs_equations_violin_2.png}
\caption{\textbf{Transcript segment matching to reference.} Violin plots for $s^{phs}$, $\hat{s}^{phs}$ and their harmonic mean across five methods in the simulated data. The horizontal lines show the quartiles in each of the plots.}
\label{fig:phsf}
\end{figure}
\noindent LeafCutter and rMATS match true transcripts well when normalizing by the number of excisions in the computed transcript segments ($s^{phs}$); however, when normalizing by the true transcript, the $\hat{s}^{phs}$ score for both methods predictably decreases due to the size of transcript segments produced.
Since Cufflinks and StringTie both aim to reconstruct full-length transcripts, they perform comparatively well when normalizing by the size of the true transcript; however, Cufflinks' score decreases dramatically when normalizing by the size of the computed transcript, indicating that its computed transcript lengths, in terms of excisions, are inaccurate.
\begin{figure}
\centering
\includegraphics[trim={0 0 0 0}, clip, width=0.75\textwidth]{figs/Fig3_3in2.png}
\caption{\textbf{Performance on simulated data.} Precision, recall and F1 Score on simulated data for {\sc BREM}{} (blue), Cufflinks (orange), LeafCutter (green), StringTie (red) and rMATS (purple) based on \textbf{(a)} $S^{phs}$ and \textbf{(b)} $\hat{S}^{phs}$.}
\label{fig:prf1}
\end{figure}
Interestingly, StringTie does not suffer from the same significant decrease as Cufflinks, though both StringTie and Cufflinks exhibited high variance.
In comparison, {\sc BREM}{} demonstrated far less variability in $s^{phs}$ and $\hat{s}^{phs}$ than Cufflinks and StringTie, while maintaining high performance.
Next, to evaluate the impact of the parameter that controls the number of SMEs ($K$), we varied $K$ from the chromatic number in the excision interval graph (equivalently, size of the maximum independent set ($IS$) in the complement graph) to $IS + 16$.
The trends for both $s^{phs}$~(Fig.~\ref{fig:fig1}, top) and $\hat{s}^{phs}$~(Fig.~\ref{fig:fig1}, bottom) are similar: as $K$ increases, precision increases initially and then remains flat, while recall decreases monotonically.
This is likely due to two factors.
First, {\sc BREM}{} collapses SMEs with the same excision configuration after convergence.
This means that the \textit{effective K} is much lower when $K$ is much larger than the number of alternative transcripts.
Second, {\sc BREM}{} benefits from the flexibility of additional SMEs initially, but eventually, when $K \gg IS$, {\sc BREM}{} learns SMEs that are low abundance and noisy.
Having matched computed transcripts with true transcripts, we next evaluated each method with respect to precision, recall, and F1 score for the top match using $s^{phs}$~(Fig.~\ref{fig:prf1}a) and $\hat{s}^{phs}$~(Fig.~\ref{fig:prf1}b).
First, rMATS is highly selective, exhibiting high precision regardless of the length of the transcript.
This, however, is to be expected since rMATS scores consistently high when considering $s^{phs}$, but also consistently low when considering $\hat{s}^{phs}$.
Since rMATS is concerned only with singular splicing events, in either case, the task is less difficult.
On the other hand, LeafCutter performs some local assembly of splicing events into clusters and thus has a more difficult assembly task than rMATS, though it performs similarly in terms of F1 score.
Both Cufflinks and StringTie exhibit high variance, but perform considerably better than the local splicing methods in terms of recall.
{\sc BREM}{}, situated between these extremes, achieves higher precision for most genes than the full-length transcript methods and substantially higher recall and F1 with lower standard errors.
We also tested precision, recall, and F1 score as a function of the complexity of the overlap graph (defined by the number of edges).
For genes yielding complex graphs ($|E| > 200$), {\sc BREM}{} achieves the highest recall and F1 Score, while rMATS is the most precise~(Fig.~\ref{fig:prf_nf}).
Importantly, this shows that {\sc BREM}{} performs well when there is substantial overlap among the transcripts.
As a function of the number of excisions, {\sc BREM}{} also achieves the highest recall (Fig \ref{fig:prf_nodes}).
The flexibility of our admixture modelling allows {\sc BREM}{} to focus on producing high confidence transcript segments rather than fixing the size to be small (e.g., individual splice events) or large (full-length transcript).
Next, we tested the sensitivity of {\sc BREM}{} to model parameters; in particular, we tested how the mean posterior SME length (denoted $|SME|$ and defined by the number of excisions in a SME) varied as a function of $K$, $r$, and $s$. We trained the model setting $r, s \in \{0.1, 1, 10, 100\}$ and $K \in \{IS, IS + 5, IS + 10, IS + 15\}$.
The parameter $K$ did not correlate with $|SME|$, likely due to {\sc BREM}{} collapsing posterior SMEs with the same excision usage.
However, as we increased the prior mean of $Beta(r,s)$, $|SME|$ also increased~(Fig.~\ref{fig:sensitivity_rs}).
This is consistent with the interpretation of $r$ and $s$ in the model: $r$ and $s$ control the prior probability of including an excision in SMEs.
As the mean of $Beta(r,s)$ increases, larger SMEs become more likely in the posterior.
However, this relationship is not strictly monotonic, as other model parameters, properties of the transcripts, and stochasticity of model inference interact with the effect of $r$ and $s$ on $|SME|$.
{\sc BREM}{} is also fast, with the running time increasing linearly as a function of $K$, $|C|$, and the average number of junction reads across samples (Fig. \ref{fig:runtime_2}).
\subsection{Differential Expression in Simulated Data}
Each simulated gene consisted of $8$ groups of $100$ samples with fold changes $1$, $1$, $1$, $1.1$, $1.25$, $1.5$, $3$, and $5$.
We computed differential expression for each method and all pairwise groupings of the samples ($28$ in total).
Pairwise comparisons between the first three groups enabled estimation of false discoveries.
We randomized the processing order of genes and allocated each method a full week on a $128$ core computer to process the simulated data (Table~\ref{tab:ds}).
StringTie, rMATS, LeafCutter, and {\sc BREM}{} all finished in less than a day, while Cufflinks only finished $21.4\%$ of configurations.
Additionally, recent comparisons have shown higher ability to detect differential splicing for rMATS, StringTie, and LeafCutter when compared to Cufflinks~\parencite{li2018annotation,shen2014rmats,pertea2015stringtie}; thus, we excluded Cufflinks from the comparison.
{\sc BREM}{} achieved the highest sensitivity and accuracy of identifying differential usage (of SMEs), though StringTie achieved relatively high sensitivity with an impressively high specificity ($0.996$).
\subsection{Differential Expression in Experimental Data}
We applied {\sc BREM}{} and our differential SME model to both GEUVADIS and EGA datasets.
After filtering genes expressed at low levels and those without conflicts in the excision interval graph, we applied {\sc BREM}{} to infer SMEs in $3983$ and $4278$ genes in the GEUVADIS and EGA data respectively.
Using our results on the precision and recall for {\sc BREM}{} with varying $K$~(Fig.~\ref{fig:fig1}), we set $K=IS+4$.
We then applied our Dirichlet Multinomial model to compute differential SME usage across the two data sets.
We used the super population (African vs. European) to group samples in GEUVADIS and treatment status in the EGA dataset.
After multiple comparisons correction using Benjamini-Hochberg~\parencite{benjamini1995controlling}, p-values were well-calibrated~(Fig.~\ref{fig:exp}) and we observed $2105$ and $1961$ genes with significant differential SME usage in the GEUVADIS and EGA data (FDR corrected $p<0.05$).
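The Benjamini--Hochberg correction used above can be reproduced in a few lines; a minimal Python sketch of the standard procedure (not the exact code used in our pipeline):

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (FDR q-values) for a 1-D array."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                          # ascending p-values
    ranked = p[order] * m / np.arange(1, m + 1)    # p_(i) * m / i
    # enforce monotonicity, starting from the largest p-value
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adj = np.empty(m)
    adj[order] = np.clip(ranked, 0, 1)             # back to original order
    return adj

# genes with adjusted p < 0.05 are called differential
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.9]
adj = benjamini_hochberg(pvals)
significant = adj < 0.05
```

Adjusted values are monotone in the raw p-values, so thresholding the adjusted array at $0.05$ reproduces the FDR-controlled gene calls.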
\begin{figure}
\centering
\includegraphics[trim={20 30 0 10},width=0.60\textwidth]{figs/bioinf_sensitivity_analysis_.png}
\caption{\textbf{Sensitivity analysis of SME size with respect to the parameters $r$ and $s$.} X-axis depicts combinations of $\pi$ variable prior parameters, ordered by $\pi$ mean, i.e., $\frac{r}{r+s}$. In y-axis, we compute the average SME size.}
\label{fig:sensitivity_rs}
\end{figure}
\begin{table}[h!]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{Method} & \textbf{Accuracy} & \textbf{Sensitivity} & \textbf{Specificity} \\ \hline
StringTie & $0.253$ & $0.164$ & $\bm{0.996}$ \\ \hline
rMATS & $0.130$ & $0.0284$ & $0.990$ \\ \hline
LeafCutter & $0.131$ & $0.0292$ & $0.980$ \\ \hline
{\sc BREM}{} & $\bm{0.303}$ & $\bm{0.233}$ & $0.889$ \\ \hline
\end{tabular}
\caption{\textbf{Differential Splicing Results on Simulated Data.}}
\label{tab:ds}
\end{table}
We conducted a gene ontology (GO) analysis using genes with differential SME expression as the target list and all genes input into {\sc BREM}{} as the background list~\parencite{eden2009gorilla}.
In the GEUVADIS data, the top $12$ GO terms in the Biological Process ontology ranked by p-value (p $<2.97 \times 10^{-6}$) referenced regulation of biomolecular processes.
This is consistent with a growing body of evidence that suggests splicing plays a major role in regulating gene expression~\parencite{gehring2020anything,gutierrez2015tissue} and metabolism~\parencite{kozlovski2017role,annalora2017alternative,qiao2019comprehensive}.
In the Molecular Function ontology, alternative splicing plays an integral role in the top $18$ GO terms, which reference ATP, DNA, drug, and other molecular binding (p $<5.69 \times 10^{-5}$)~\parencite{sciarrillo2020role,ji2020silico}.
In the EGA data, both molecular function and biomolecular processes exhibited significant associations with regulation of and binding to kinase proteins (GO:0046330, GO:0043507, GO:0046328, GO:0019901; p $<9.7 \times 10^{-4}$).
Alternative splicing is known to (a) regulate the binding of kinase proteins~\parencite{kelemen2013function} and (b) increase kinase protein diversity~\parencite{anamika2009functional}.
\subsection{Novel Splice Junctions, SMEs and Transcripts}
We quantified the total number and percentages of novel versus known splice junctions and SMEs or transcripts in both GEUVADIS and EGA datasets.
Since our processing pipeline focuses on excisions (and is thus similar to LeafCutter's) and we are testing reconstruction, we compared our results only to Cufflinks and StringTie.
We only consider SMEs that are expressed in $10$ or more samples, where the $k^{th}$ SME is considered expressed in sample $i$ if there exist $10$ or more assignments with $z_{ij}=k$.
We allowed inferred junction locations to differ by at most $6$ nucleotide bases from the reference to be considered matching.
We followed the recommended pipelines for StringTie and Cufflinks and merged per-sample assemblies.
We considered an SME or computed transcript as novel if it was not a subset of an annotated transcript (and known otherwise).
Splice junctions are considered novel if they do not exist in the reference (and known otherwise).
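The expression filter above follows directly from its definition; a minimal Python sketch with illustrative names (the ragged list `Z` stands in for the per-sample assignment lists $z_{ij}$; this is not the pipeline's internal API):

```python
from collections import Counter

def expressed_smes(Z, min_count=10, min_samples=10):
    """Z[i] lists the SME assignment z_ij of every excision read j in sample i.
    SME k is 'expressed' in sample i if z_ij = k for >= min_count reads, and
    it is kept overall if it is expressed in >= min_samples samples."""
    samples_per_sme = Counter()
    for assignments in Z:
        for k, c in Counter(assignments).items():
            if c >= min_count:
                samples_per_sme[k] += 1
    return {k for k, n in samples_per_sme.items() if n >= min_samples}

# toy check: SME 0 passes the per-sample threshold in both samples, SME 1 never does
Z = [[0] * 12 + [1] * 3, [0] * 15 + [1] * 2]
kept = expressed_smes(Z, min_count=10, min_samples=2)
```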
All methods produce far fewer novel SMEs, transcripts, and splice junctions in GEUVADIS compared to the EGA data (Fig. \ref{fig:novel}, a and b).
This may be due to the reference transcripts of lymphoblastoid cell lines being more well characterized than monocytes or due to the differences in sequencing platforms (bulk versus single cell RNA-seq).
{\sc BREM}{} and Cufflinks produced larger proportions of known to novel splice junctions and SMEs or transcripts in GEUVADIS, whereas StringTie produced far more novel transcripts (Fig. \ref{fig:novel}, c and d).
This discrepancy was larger in the EGA data, where StringTie produced much higher proportions of novel transcripts ($<5\%$ were observed in the reference).
\section*{Supplementary Materials}
\label{sec:supp}
\addcontentsline{toc}{section}{Appendices}
\renewcommand{\thesubsection}{S\arabic{subsection}}
\renewcommand{\thefigure}{S\arabic{figure}}
\setcounter{figure}{0}
\setcounter{subsection}{0}
\subsection{Related Work}
\input{related_work.tex}
\subsection{Additional Model Details}
\label{supmoddets}
\subsubsection{Notations}
Variables and indices, parameters, hyper-parameters, and sets are as follows:
\begin{itemize}
\item $V$ is the set of unique excisions, indexed by $v$; its size is denoted by $|V|$.
\item $N$ is the number of samples, which are indexed by $i$.
\item $J_i$ is the number of excisions in the $i$th sample.
The excisions of a sample are indexed by $j$. The samples do not all necessarily have the same length; furthermore, some samples might lack some of the excisions from the set of unique excisions $V$.
\item $K$ is the number of sequences of mRNA excisions (SMEs).
For the $j$th excision in the $i$th sample ($i\in \{1, \dots,N\}$ and $j \in \{1, \dots, J_i\}$), we assign an SME $k \in \{1, \dots, K\}$.
\item Graph $G = (V, E)$, where $V$ is the set of excisions and there is an edge between two excisions \textit{iff} their intersection is non-empty.
\item $\Omega$ is the set of all the independent sets in $G$.
\item $\mathcal{N}_v = \{u|\{u, v\} \in E(G)\}$ is the set of all the neighbors of node $v$ in the interval graph $G$.
\item $\phi^{it}_{k}$ is the configuration selected for SME $k$ in iteration $it$ and follows a Multinomial distribution \\ ($ \sim Multinomial (\phi_{k1}, \phi_{k2}, \dots, \phi_{kt}, \dots, \phi_{kT})$).
\item $C$ is the set of all Bernoulli random variables required for encoding all conflicts in $G(V,E)$, and $|C|$ is equal to the size of the minimum node cover of $G$.
\item Hyper-parameter $\bm{\alpha} = (\alpha_1, \dots, \alpha_K)$ is a $K$-dimensional vector and prior for $\theta$ variable.
\item For the $i$th sample, variable $\theta_i \sim Dirichlet_K(\bm{\alpha})$ is a $K$-dimensional Dirichlet distribution and represents the proportions of the SMEs in sample $i$.
So $\bm{\theta}$ is an $N \times K$ matrix such that each row gives the distribution of SMEs for a sample, and $\theta_{ik}$ is the proportion of SME $k$ in sample $i$ ($\bm{\theta} \in \mathbb{R}^{N \times K}$).
\item Variable $z_{ij}$ is the SME assignment for $j$th excision in $i$th sample. It can take a natural value between $1$ and $K$ and follows a Multinomial distribution ($\bm{Z} \in \{1, \dots, K\}^{N \times J}$ and $z_{ij} \sim Multinomial(\theta_i)$).
\item Hyper-parameters $r$ and $s$ are priors for $\pi$ Beta distribution.
\item Variable $\pi_k \sim Beta(r,s), \forall k=\{1, \dots, K\}$, so $\bm{\pi}$ is a $K$-dimensional vector and prior for Bernoulli variable $\bm{b}$.
\item Hyper-parameter $\bm{\eta} = (\eta_1, \dots, \eta_{|V|})$ is a $|V|$-dimensional vector and prior for $\beta$ variable.
\item For SME $k$, $\beta_k \sim Dirichlet_{|V|}(\bm{\eta} \odot \bm{b_{k}})$ is a $|V|$-dimensional Dirichlet which represents the distribution of the SME $k$ over the excisions. $|V|$-dimensional vector $\bm{b_{k}} = (b_{k1}, \dots, b_{k|V|})$ (also written as $\bm{b_{k.}}$) is the $k$th row of the $\bm{b}$ matrix and collects the Bernoulli variables for all the unique excisions.
The \textit{dot} in $\bm{b_{k.}}$ means all the unique excisions in row $k$.
Notation $\odot$ is element-wise multiplication.
Matrix $\bm{\beta}$ is $K \times |V|$, and the element in the $k$th row and $v$th column gives the proportion of excision $v$ in SME $k$, so $\bm{\beta} \in \mathbb{R}^{K \times |V|}$.
Note that the Bernoulli variables can turn off/on certain dimensions of $\beta$ variables.
\item In the $i$th sample, the $j$th excision is $w_{ij}$, which is observed and follows a Multinomial distribution ($w_{ij} \sim Multinomial(\beta_{z_{ij}})$). Matrix $\bm{W}$ is $N \times J$ ($\bm{W} \in \{1, \dots, |V|\}^{N \times J}$), and $w_{ij}$ is the element in its $i$th row and $j$th column, i.e., the $j$th (observed) excision in the $i$th sample.
Note that in $\bm{W}$, row $i$ corresponds to sample $i$, but not all rows have the same number of columns, owing to the differences in the number of excisions between samples; \emph{i.e.}, row $i$ has exactly $J_i$ columns (elements), corresponding to the excisions in sample $i$. We call $\bm{W}$ a matrix for ease of notation, but it is actually a list of lists.
The same explanation applies to the matrix $\bm{Z}$.
\item $\oplus$ is exclusive OR.
\item $\odot$ is element-wise vector multiplication.
\end{itemize}
\newpage
\subsubsection{Graphical Model}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{figs/ismb_model.png}
\caption{Graphical model for {\sc BREM}{}. The variables $\bm{\pi}$, $\bm{b}$, and $\bm{\beta}$ control the global sequence of mRNA excisions (SME) structure, while $\bm{w}$, $\bm{z}$, and $\bm{\theta}$ control the sample-specific distribution of SMEs.}
\label{fig:graphicalmodel}
\end{figure}
\begin{align*}
\theta_i & \sim Dirichlet_K(\bm{\alpha}), &\forall i \in \{1, \dots, N\}\\
z_{ij} & \sim Multinomial(\theta_i), & \forall i \in \{1, \dots, N\}, \forall j \in \{1, \dots J_i\}\\
w_{ij} & \sim Multinomial(\beta_{z_{ij}}), &\forall i \in \{1, \dots, N\}, \forall j \in \{1, \dots J_i\}\\
\beta_k & \sim Dirichlet_{|V|}(\bm{\eta} \odot \bm{b_{k}}), &\forall k \in \{1, \dots, K\}\\
b_{kv} & \sim Bernoulli(\pi_k), &\forall k \in \{1, \dots, K\}, \forall v \in \{1, \dots, |C|\}\\
\pi_k & \sim Beta(r,s), &\forall k \in \{1, \dots, K\}
\end{align*}
In the calculations, B(.) in Dirichlet distribution is
\begin{align*}
B(\bm{\alpha}) &= \frac{\prod_{i=1}^K \Gamma(\alpha_i)}{\Gamma(\sum_{i=1}^K \alpha_i)}
\end{align*}
in which $\bm{\alpha}$ is the vector of concentration parameters and $K \ge 2$ is the number of SMEs
in the Dirichlet. For positive integers $n$, the Gamma function satisfies $\Gamma(n)=(n-1)!$.
$\bm{C}$ is an $N \times K$ matrix, in which $c_{ik}$ is the number of excisions in the $i$th sample that have been assigned to SME $k$.
$\bm{\lambda}$ is a $K\times |V|$ matrix, in which $\lambda_{kv}$ is the number of times excision $v$ has been assigned to SME $k$.
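Both count matrices can be tallied in a single pass over the assignments; a minimal Python sketch, with ragged lists standing in for the ``matrices'' $\bm{Z}$ and $\bm{W}$ (all names are illustrative, not {\sc BREM}{}'s internal API):

```python
import numpy as np

def count_matrices(Z, W, K, V):
    """C[i, k] = number of excisions in sample i assigned to SME k;
    lam[k, v] = number of times excision v is assigned to SME k."""
    N = len(Z)
    C = np.zeros((N, K), dtype=int)
    lam = np.zeros((K, V), dtype=int)
    for i, (z_row, w_row) in enumerate(zip(Z, W)):
        for z_ij, w_ij in zip(z_row, w_row):
            C[i, z_ij] += 1
            lam[z_ij, w_ij] += 1
    return C, lam

Z = [[0, 0, 1], [1]]          # SME assignments, ragged across samples
W = [[2, 0, 1], [1]]          # observed excision ids from V = {0, 1, 2}
C, lam = count_matrices(Z, W, K=2, V=3)
```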
\subsection{Inference - Gibbs Sampling}
\label{sec:supp_gibbs}
We first compute the complete conditionals of all the variables in the model and then sample the variables according to the order described in this section.
\subsubsection{Complete Conditional of \texorpdfstring{$\bm{\theta_i}$}{Ti}}
\begin{align}
p(\theta_i|\bm{\alpha}, z_{i, j=1:J_i}) & \propto p(z_{i, j=1:J_i}, \bm{\alpha}, \theta_i) \nonumber \\
&= \prod_{j=1}^{J_i} p(z_{ij}|\theta_i) p(\theta_i|\bm{\alpha}) \label{eq:theta2}
\end{align}
The first term in Eq. \ref{eq:theta2} is a Multinomial and the second term is a $K$-dimensional Dirichlet. So for updating $\theta_i$ we have:
\begin{align*}
p(\theta_i|\bm{\alpha}, z_{i, j=1:J_i}) & \propto \frac{B(\bm{\alpha}+\bm{c_{i.}})}{B(\bm{\alpha})} \times Dirichlet_K (\bm{\alpha}+\bm{c_{i.}})
\end{align*}
where $\bm{c_{i.}}$ is a $K$-dimensional vector containing the counts of excisions assigned to each SME in sample $i$.
\begin{align*}
\bm{c_{i.}} = [c_{i,k=1}, c_{i, k=2}, \dots, c_{i, k=K}]
\end{align*}
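Since the posterior is again Dirichlet, the Gibbs step for $\theta_i$ is a single conjugate draw; a minimal Python sketch (illustrative names, not {\sc BREM}{}'s internal API):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_theta(alpha, c_i, rng):
    """theta_i | z ~ Dirichlet(alpha + c_i), where c_i holds the
    per-SME counts of excision assignments in sample i."""
    return rng.dirichlet(np.asarray(alpha, dtype=float) + np.asarray(c_i, dtype=float))

# K = 3 SMEs; sample i has 5 excisions in SME 1 and 2 in SME 3
theta_i = sample_theta(alpha=[1.0, 1.0, 1.0], c_i=[5, 0, 2], rng=rng)
```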
\subsubsection{Complete Conditional of \texorpdfstring{$\bm{Z}$}{\textbf{Z}}}
Let us consider the complete conditional of a single variable $z_{ij}$ (the SME assignment of the $j$th excision in the $i$th sample):
\begin{align*}
p(z_{ij}|\theta_i, w_{ij}, \bm{\beta_{1:K}}) & \propto p(z_{ij}, w_{ij}, \theta_i, \bm{\beta_{1:K}})\\
& \propto p(w_{ij}|z_{ij},\bm{\beta_{1:K}}) p(z_{ij}|\theta_i)
\end{align*}
The notation $\bm{\beta_{1:K}}$ means that variable $z_{ij}$ is dependent on $\beta_1$ to $\beta_K$.
Since $z_{ij} \in \{1, 2, \dots, K\}$ (discrete random variable), complete conditional for an assignment $z_{ij} = k$ would be
\begin{align*}
p(z_{ij} = k|\theta_i, w_{ij}, \bm{\beta_{1:K}}) &= \frac{p(z_{ij}=k| \theta_i) p(w_{ij}| z_{ij}=k, \bm{\beta_{1:K}})}{\sum_{k=1}^K p(z_{ij}=k| \theta_i) p(w_{ij}| z_{ij}=k, \bm{\beta_{1:K}})}
\end{align*}
And in sample $i$, for SME $k$:
\begin{align*}
p(z_{ij} = k|\theta_i, w_{ij}, \bm{\beta_{1:K}}) & \propto p(z_{ij}=k| \theta_i) p(w_{ij}| z_{ij}=k, \bm{\beta_{1:K}})\\
& \propto \theta_{ik}^{c_{ik}}\theta_{ik}^{\alpha_k-1} \times \prod_{v=1}^{|V|} \beta_{kv}^{\lambda_{kv}} \beta_{kv}^{\eta_{v}b_{kv}-1}
\end{align*}
The probability of assigning the (unique) excision $v$ (in any position $j$ in sample $i$) to SME $k$:
\begin{align*}
p(z_{ij} = k|\theta_i, w_{ij} = v, \bm{\beta_{1:K}}) &= \frac{\theta_{ik} \beta_{kv}}{\sum_{k=1}^{K}\theta_{ik} \beta_{kv}}
\end{align*}
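The resulting categorical update for $z_{ij}$ simply normalizes $\theta_{ik}\beta_{kv}$ over $k$; a minimal Python sketch (illustrative names):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_z(theta_i, beta, v, rng):
    """Draw z_ij for an observed excision w_ij = v:
    P(z_ij = k) is proportional to theta_i[k] * beta[k, v]."""
    p = theta_i * beta[:, v]
    return rng.choice(len(theta_i), p=p / p.sum())

theta_i = np.array([0.7, 0.3])
beta = np.array([[0.9, 0.1],     # SME 0 concentrates on excision 0
                 [0.1, 0.9]])    # SME 1 concentrates on excision 1
z = sample_z(theta_i, beta, v=0, rng=rng)
```

For $v=0$ the posterior probability of SME $0$ is $0.63/0.66 \approx 0.95$, so repeated draws land on SME $0$ the vast majority of the time.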
\subsubsection{Complete Conditional of \texorpdfstring{$\bm{\beta}$}{\textbf{B}}}
For SME $k$, $\beta_k$ is a $|V|$-dimensional Dirichlet distribution over the unique excisions. We used Bernoulli variables $b$ that restrict the number of excisions in one SME (the size of $\beta_k$ variables) by defining $\beta_k \sim Dirichlet_{|V|}(\eta_1 b_{k1}, \dots, \eta_{|V|}b_{k|V|})$. So $\bm{\beta}$ is a degenerate Dirichlet distribution.
\begin{align*}
p(\beta_k|\bm{W}, \bm{Z}, \bm{b_{k.}}) & \propto p(\beta_k, w_{..}, z_{..}, \bm{b_{k.}})\\
&= p(w_{..}|z_{..},\beta_k) p(\beta_k| \bm{b_{k.}},\bm{\eta})
\end{align*}
For SME $k$:
\begin{align*}
p(\beta_k|\bm{W}, \bm{Z}, \bm{b_{k.}}) & \propto \prod_{v=1}^{|V|} \beta_{kv}^{\lambda_{kv}} \times \frac{\Gamma (\sum_{v=1}^{|V|} \eta_v b_{kv})}{\prod_{v=1}^{|V|} \Gamma(\eta_v b_{kv})} \times \prod_{v=1}^{|V|} \beta_{kv}^{\eta_v b_{kv}-1} \\
& \propto \prod_{v=1}^{|V|} \beta_{kv}^{\lambda_{kv}} \times \beta_{kv}^{\eta_v b_{kv}-1} \\
& \propto Dir_{|V|}(\bm{\lambda_{k.}}+\bm{\eta} \odot \bm{b_{k.}})
\end{align*}
where the vector $\bm{\lambda_{k.}}$ collects the counts of excisions that have been assigned to SME $k$:
\begin{equation*}
\bm{\lambda_{k.}} = [\lambda_{k, v=1}, \dots, \lambda_{k, v=|V|}]
\end{equation*}
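The conjugate draw for $\beta_k$ uses concentration $\bm{\lambda_{k.}}+\bm{\eta}\odot\bm{b_{k.}}$; a minimal Python sketch in which the degenerate dimensions ($b_{kv}=0$) are handled with a tiny positive floor so the sampler stays defined (an implementation convenience, not part of the model; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_beta(lam_k, eta, b_k, rng, eps=1e-12):
    """beta_k | . ~ Dirichlet(lam_k + eta * b_k); excisions with b_kv = 0
    get a near-zero concentration, approximating the degenerate Dirichlet."""
    conc = np.asarray(lam_k, dtype=float) + np.asarray(eta, dtype=float) * np.asarray(b_k, dtype=float)
    return rng.dirichlet(np.maximum(conc, eps))

# excision 1 is switched off (b_k1 = 0), so its proportion collapses to ~0
beta_k = sample_beta(lam_k=[4, 0, 7], eta=[0.5, 0.5, 0.5], b_k=[1, 0, 1], rng=rng)
```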
\subsubsection{Complete Conditional of \texorpdfstring{$\bm{b}$}{\textbf{b}}}
Let the constrained space be $\Omega$, the set of all independent sets of the excisions' interval graph $G$ (see Section \ref{sec:modeldescription}).
Here, we consider computing the Gibbs updates for independent sets $\{\Phi_1,\dots,\Phi_T\} \in \Omega$.
For ease of exposition, we consider $b_{kv}$ given a configuration $\hat{\Phi}$.
For an excision $v$, we compute the probability of occurrence of $v$ in SME $k$. This probability is obtained by the complete conditional of $b_{kv}$.
Note: For computing $b_{kv}$, we need to consider relevant dimensions of Dirichlet.
For example, in a SME $k$, for the calculation of complete conditional for $b_{kv} = 1$, such dimensions include all the excisions that are not in the neighborhood of $v$ ($\{v'|v'\notin \mathcal{N}_v\}$).
$\mathcal{N}_v$ is the set of all the neighbors of $v$, and does not include $v$ itself (open neighborhood).
We denote $\bm{b^{(-kv)}}$ as the vector of $b$ variables with $b_{kv}$ removed and suppress hyperparameters for readability when appropriate.
\begin{align*}
p(b_{kv}=1| \bm{\beta}, \bm{\pi}, \bm{b^{(-kv)}},\bm{W}, \bm{Z}, \bm{\theta}) \propto p(\bm{\beta}, \bm{\pi}, \bm{b},\bm{W}, \bm{Z}, \bm{\theta})\\
\propto p(b_{kv}=1| \beta_k,\pi_k,\bm{b_{k.}})\\
= p(b_{kv}=1|\pi_k) p(\pi_k|r,s)p(\beta_k|\bm{b_{k.}}, \bm{\eta}) \label{eq:gibbsb1}\\
= p(b_{kv} = 1|\pi_k) \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \pi_k^{r-1} (1 - \pi_k)^{s-1} \\ \frac{\Gamma(\sum_{i \in \hat{\Phi} \cup \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \cup \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \cup \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}\\
\propto p(b_{kv}=1|\pi_k) \pi_k^{r-1} (1 - \pi_k)^{s-1} \\ \frac{\Gamma(\sum_{i \in \hat{\Phi} \cup \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \cup \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \cup \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}\\
\propto \pi_k \pi_k^{r-1} (1 - \pi_k)^{s-1} \\ \frac{\Gamma(\sum_{i \in \hat{\Phi} \cup \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \cup \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \cup \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}\\
\propto \pi_k^{r} (1 - \pi_k)^{s-1} \frac{\Gamma(\sum_{i \in \hat{\Phi} \cup \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \cup \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \cup \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}
\end{align*}
which is the product of a $Beta(r+1,s)$ and a degenerate Dirichlet.
In SME $k$, to compute the complete conditional for $b_{kv} = 0$, we involve the other excisions except $v$, so:
\begin{align*}
p(b_{kv}=0| \bm{\beta}, \bm{\pi}, \bm{b^{(-kv)}},\bm{W}, \bm{Z}, \bm{\theta}) \propto p(\bm{\beta}, \bm{\pi}, \bm{b},\bm{W}, \bm{Z}, \bm{\theta}) \\
\propto p(b_{kv}=0| \beta_k,\pi_k,\bm{b_{k.}})
\propto p(b_{kv}=0|\pi_k)p(\pi_k|r,s)p(\beta_k|b_{k.}, \eta) \\
= p(b_{kv}=0|\pi_k) \frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \pi_k^{r-1} (1 - \pi_k)^{s-1} \frac{\Gamma(\sum_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}\\
\propto p(b_{kv}=0|\pi_k) \pi_k^{r-1} (1 - \pi_k)^{s-1} \frac{\Gamma(\sum_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}\\
\propto (1-\pi_k) \pi_k^{r-1} (1 - \pi_k)^{s-1} \frac{\Gamma(\sum_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}\\
\propto \pi_k^{r-1} (1 - \pi_k)^{s} \frac{\Gamma(\sum_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \eta_i b_{ki})}{\prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}}\Gamma(\eta_i b_{ki})} \prod_{i \in \hat{\Phi} \setminus \{b_{kv}\}} \beta_{ki}^{\eta_i b_{ki} - 1}
\end{align*}
which is the product of a $Beta(r,s+1)$ and a degenerate Dirichlet.
In general, we have $T$ independent sets $\{\Phi_1,\dots,\Phi_T\}$ and the update is computed by sampling
\begin{equation}
Cat\left( \frac{p(\Phi_1)}{\sum_{i=1}^T p(\Phi_i)},\dots,\frac{p(\Phi_T)}{\sum_{i=1}^T p(\Phi_i)} \right)
\end{equation}
We develop two algorithms for computing $\{\Phi_1,\dots,\Phi_T\}$.
First, we compute $p(b_{kv}=1|\cdot)$ and $p(b_{kv}=0|\cdot)$.
Then, since every node neighboring $v$ must currently be off, we compute $p(b_{ki}=1,b_{kv}=0)$ for each $i \in \mathcal{N}_v$ for which setting $b_{ki}=1$ yields a valid configuration.
Second, we update the SME structure by moving from one independent set to another that is ``close''.
In the interval graph of excisions $G = (V,E)$, let $\Omega$ be the set of all independent sets and $\phi_{kt} \in \Omega$ be the $t$\textsuperscript{th} locally generated valid configuration for SME $k$. We define the neighborhood of $\phi_{kt}$ as $\mathcal{N}(\phi_{kt})$, i.e., the set of nodes that intersect some node in $\phi_{kt}$ (or $\mathcal{N}(\phi_{kt}) = \{u| \{u,v\} \in E\text{ for some }v \in \phi_{kt}\}$). Then $p(\phi_{kt})$ is computed as follows:
\begin{align*}
p(\phi_{kt}| \bm{\beta}, \bm{\pi}, \bm{b},\bm{W}, \bm{Z}, \bm{\theta}) \propto \pi_k^{r+|\phi_{kt}|-1} (1-\pi_k)^{s+|\mathcal{N}(\phi_{kt})|-1} \frac{\Gamma(\sum_{i \in V\setminus \mathcal{N}(\phi_{kt})} \eta_i b_{ki})}{\prod_{i \in V\setminus \mathcal{N}(\phi_{kt})}\Gamma(\eta_i b_{ki})} \prod_{i \in V\setminus \mathcal{N}(\phi_{kt})} \beta_{ki}^{\eta_i b_{ki} - 1}
\end{align*}
\subsubsection{Complete Conditional of \texorpdfstring{$\pi_k$}{\textbf{Pi}}}
We define $m_k$ equal to the number of excisions that are selected in SME $k$, \emph{i.e.} the excisions whose corresponding Bernoulli variable is $1$ in the current Gibbs iteration:
\begin{equation*}
m_k = \sum_{v \in V} \bm{1}[b_{kv}=1], \hspace{20pt} \forall k \in \{1, \dots, K\}
\end{equation*}
\begin{align*}
p(\pi_k|\bm{b_{k.}},r,s)& \propto p(\pi_k, \bm{b_{k.}},r,s) \\
& =p(\pi_k|r,s)p(b_{k.}|\pi_k)\\
& =\frac{\Gamma(r+s)}{\Gamma(r)\Gamma(s)} \times \pi_k^{r-1}(1-\pi_k)^{s-1} \times \pi_k^{m_k} (1-\pi_k)^{|V|-m_k} \\
& \propto \pi_k^{r+m_k-1}(1-\pi_k)^{s+|V|-m_k-1} \\
&=\frac{\Gamma(r+m_k)\Gamma(s+|V|-m_k)}{\Gamma(r+s+|V|)} \times Beta(r+m_k, s+|V|-m_k)\\
& \propto Beta(r+m_k, s+|V|-m_k)
\end{align*}
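The $\pi_k$ step is a Beta draw whose counts are $m_k$ and $|V|-m_k$; a minimal Python sketch (illustrative names):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_pi(b_k, r, s, rng):
    """pi_k | b_k ~ Beta(r + m_k, s + |V| - m_k), with m_k the number
    of active excisions (b_kv = 1) in SME k."""
    b_k = np.asarray(b_k)
    m_k = int(b_k.sum())
    return rng.beta(r + m_k, s + b_k.size - m_k)

# |V| = 5 excisions, 3 of them active in SME k, with a flat Beta(1, 1) prior
pi_k = sample_pi(b_k=[1, 0, 1, 1, 0], r=1.0, s=1.0, rng=rng)
```

With these counts the posterior is $Beta(4, 3)$, whose mean $4/7 \approx 0.57$ shows up in repeated draws.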
\subsubsection{Likelihood}
The likelihood measures how likely the data is to be generated according to the generative model.
\begin{align*}
p(\bm{W}|\bm{\beta}, \bm{Z}) & \propto p(\bm{W}, \bm{\beta}, \bm{Z}) \\
& \propto p(\bm{W}|\bm{\beta}, \bm{Z})p(\bm{Z}|\bm{\theta}) \\
& \propto \prod_{i=1}^N \prod_{k=1}^{K} \prod_{v=1}^{|V|} \beta_{kv}^{\xi^{(i)}_{kv}}
\end{align*}
where $\xi_{kv}^{(i)}$ is the number of times excision $v$ is assigned to SME $k$ in the sample $i$.
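Because the likelihood is a product of fitted proportions raised to assignment counts, it is best evaluated in log space; a minimal Python sketch (illustrative names):

```python
import numpy as np

def log_likelihood(Z, W, beta):
    """Sum over samples i and excisions j of log beta[z_ij, w_ij];
    equivalent to sum_{i,k,v} xi[i,k,v] * log beta[k,v]."""
    total = 0.0
    for z_row, w_row in zip(Z, W):
        for z_ij, w_ij in zip(z_row, w_row):
            total += np.log(beta[z_ij, w_ij])
    return total

beta = np.array([[0.5, 0.5],
                 [0.25, 0.75]])
ll = log_likelihood(Z=[[0, 1], [1]], W=[[0, 1], [1]], beta=beta)
```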
\subsubsection{Gibbs sampling algorithm}
The order of sampling is as follows:
\begin{align*}
& p(\theta_i|\bm{\alpha}, z_{i, j=1:J_i}) \propto \prod_{j=1}^{J_i} p(z_{ij}|\theta_i) p(\theta_i|\bm{\alpha}) \\
& p(z_{ij}|\theta_i, w_{ij}, \bm{\beta_{1:K}}) \propto p(w_{ij}|z_{ij},\bm{\beta_{1:K}}) p(z_{ij}|\theta_i) \\
& p(\beta_k|\bm{W}, \bm{Z}, \bm{b_{k.}}) \propto p(w_{..}|z_{..},\beta_k) p(\beta_k| \bm{b_{k.}},\bm{\eta}) \\
& p(b_{kv}=1| \bm{\beta}, \bm{\pi}, \bm{b^{(-kv)}},\bm{W}, \bm{Z}, \bm{\theta}) \propto p(b_{kv}=1| \beta_k,\pi_k,\bm{b_{k.}}) \\
& p(\pi_k|\bm{b_{k.}},r,s) \propto p(\pi_k|r,s)p(\bm{b_{k.}}|\pi_k)
\end{align*}
\subsubsection{Local search algorithm}
\label{sec:supp_localsearch}
We are given $\beta_k$, the proportion of excisions in SME $k$, $T$, the number of local independent sets, $S$, the set of nodes in SME $k$ (current configuration), $G = (V,E)$, the interval graph of the excisions, and $\omega(\bar{G})$, the size of maximum clique in the complement graph of $G$.
Then, Algorithm \ref{alg:localindsearch} outputs set $\Phi$, which includes $T$ local independent sets.
Since $G$ is an interval graph, independent sets can be computed efficiently~\cite{andrade2012fast}.
The algorithm first decides whether to add or remove elements from the current configuration by sampling from a Bernoulli whose success probability decreases with the size of the current configuration.
Then, excisions are selectively added or removed with probability proportional to $\beta_k$.
After Gibbs Sampling converges, {\sc BREM}{} collapses SMEs with the same excision configuration.
\newpage
\begin{algorithm}
\caption{Local Independent Set Search}
\label{alg:localindsearch}
\textbf{Input:} $ \beta_k$, $T$, $S$, $G=(V,E)$, $\omega(\bar{G})$\\
\textbf{Output:} $\Phi$
\hrule
\begin{algorithmic}[1]
\State $\Phi \gets \emptyset$
\While {$|\Phi| < T$}
\State $r \gets \mathcal{B}ern(1-\frac{|S|}{\omega(\bar{G})})$ \Comment{Sample proportional to $\omega(\bar{G})$}
\If{$r = 1$}
\State $\mathcal{N}_S \gets \{u\in V(G)| \{u,v\} \in E(G)\text{ for some }v \in S\}$ \Comment{Set S neighborhood}
\State $free \gets V \setminus (\mathcal{N}_S \cup S)$
\If{$free \neq \emptyset$}
\State $sel \gets Cat(\beta_{k,i \in free})$ \Comment{\parbox[t]{.45\linewidth}{Among the nodes that if added, keeps S independent set, select based on their $\beta$ distribution}}
\State $S \gets S \cup \{sel\}$
\State $\Phi.append(S)$
\EndIf
\Else
\State $del \gets Cat(1-\beta_{k,i\in S})$ \Comment{\parbox[t]{.5\linewidth}{Among S, Select based on their $\beta$ distribution}}
\State $S \gets S \setminus \{del\}$
\State $\Phi.append(S)$
\EndIf
\EndWhile
\end{algorithmic}
\end{algorithm}
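Algorithm \ref{alg:localindsearch} translates almost line-for-line into code; a minimal Python sketch over a plain neighbor dictionary (the interval-graph machinery and the $Cat(\cdot)$ draws are simplified, and all names are illustrative rather than {\sc BREM}{}'s implementation):

```python
import numpy as np

def local_independent_sets(beta_k, T, S, adj, omega_bar, rng):
    """Generate T independent sets near the current configuration S.
    adj[v] is the neighbor set of excision v; omega_bar bounds |S|."""
    Phi = []
    S = set(S)
    while len(Phi) < T:
        grow = rng.random() < 1.0 - len(S) / omega_bar
        if grow:
            # nodes that keep S an independent set when added
            blocked = set().union(*(adj[v] for v in S)) if S else set()
            free = [v for v in adj if v not in blocked and v not in S]
            if not free:
                continue
            w = np.array([beta_k[v] for v in free])
            S = S | {free[rng.choice(len(free), p=w / w.sum())]}
        elif S:
            cand = sorted(S)
            w = np.array([1.0 - beta_k[v] for v in cand])
            if w.sum() == 0:
                w = np.ones(len(cand))
            S = S - {cand[rng.choice(len(cand), p=w / w.sum())]}
        else:
            continue
        Phi.append(frozenset(S))
    return Phi

# path graph 0-1-2: {0, 2} is the unique maximum independent set
adj = {0: {1}, 1: {0, 2}, 2: {1}}
beta_k = {0: 0.45, 1: 0.1, 2: 0.45}
rng = np.random.default_rng(4)
Phi = local_independent_sets(beta_k, T=5, S={0}, adj=adj, omega_bar=2, rng=rng)
```

Every returned configuration stays an independent set, since additions are restricted to non-neighbors of $S$ and removals cannot create conflicts.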
\subsection{Additional Notes for the Minimum Node Cover Algorithm}
\label{sec:node}
To identify the minimum node cover of an interval graph $G = (V,E)$ we followed an incremental algorithm proposed by~\cite{marathe1992generalized}.
The proof of correctness specifies that nodes are included in the vertex cover set only if their presence is absolutely essential.
Leveraging the properties of the interval graph, the algorithm first orders the vertices according to a PEO (Perfect Elimination Ordering) \cite{golumbic2004algorithmic} known as IG ordering \cite{ramalingam1988unified} in linear time and space on the order of $|E|$.
Then, at each iteration, the minimum index among the vertices connected to the current vertex is obtained.
This index captures the nesting property of the maximal cliques in the graph.
Finally, by updating a weight counter associated with each node according to this index, the necessity of adding each vertex to the minimum node cover is assessed.
Since calculating the minimum index for the vertices takes linear time and, for each vertex, the number of weight updates equals the degree of that vertex (overall $\mathcal{O}(\sum_{v\in V} d_v)$, where $d_v$ is the degree of vertex $v$), the overall time complexity of the algorithm is $\mathcal{O}(|E|)$, where $|E|$ is the cardinality of the edge set of the interval graph.
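As a cross-check of the output (not the algorithm of \cite{marathe1992generalized} itself), the minimum node cover of an interval graph can also be obtained as the complement of a maximum independent set, which the classic right-endpoint greedy finds; a minimal Python sketch:

```python
def min_node_cover(intervals):
    """intervals: list of (start, end) excisions, treated as closed intervals.
    Greedy selection by right endpoint yields a maximum independent set in an
    interval graph; the remaining vertices form a minimum node cover."""
    order = sorted(range(len(intervals)), key=lambda i: intervals[i][1])
    mis, last_end = set(), float("-inf")
    for i in order:
        s, e = intervals[i]
        if s > last_end:        # disjoint from everything chosen so far
            mis.add(i)
            last_end = e
    return [i for i in range(len(intervals)) if i not in mis]

# three mutually overlapping excisions plus one disjoint one:
# the cover must pick two of the three overlapping intervals
cover = min_node_cover([(0, 10), (5, 15), (8, 12), (20, 30)])
```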
\subsection{Preprocessing of Samples}
\label{sec:supp_data_sim}
In this section, we present the commands we ran in the preprocessing step.
\subsubsection{STAR}
\begin{verbatim}
STAR --runThreadN 20 --genomeDir ../genome_data/genome_index/ \
--outFileNamePrefix ./person_${i}_ \
--twopassMode Basic --outSAMstrandField intronMotif \
--outSAMtype BAM SortedByCoordinate \
--readFilesIn ${1}/person_${i}_1.fa ${1}/person_${i}_2.fa
\end{verbatim}
GEUVADIS BAM files were downloaded from ArrayExpress (accession E-GEUV-6), which were generated by aligning fastq files using TopHat version 2.0.9 and human genome assembly version hg19.
We generated the EGA and simulated data BAM files using the STAR aligner (version 2.7.3a).
In this example, $\${1}$ is the directory where the files are located.
The input file 'person\_*\_1.fa' is a collection of genes for the $i^{th}$ sample on the forward sequence, and the second file is the collection of genes on the reverse sequence.
This allows us to use twopassMode; we also used intronMotif in order to obtain spliced alignments (XS).
Once the files have been aligned, they are separated into individual BAM files so that one gene is processed at a time.
\subsubsection{Regtools}
\begin{verbatim}
regtools junctions extract -s 0 -a 6 -m 50 -M 500000
\end{verbatim}
Regtools was used for efficient filtering of the junctions.
Here, the two \%s placeholders are the BAM file and the output name, respectively.
On all data used in this project (EGA, GEUVADIS, and simulations), we used the following flags:
\begin{itemize}
\item -s: finds XS/unstranded flags
\item -a: minimum anchor length into exon ($6$ bp)
\item -m: minimum intron size ($50$ bp)
\item -M: Maximum intron size ($500000$ bp)
\end{itemize}
\subsubsection{Portcullis}
The first step of portcullis is preparing the FASTA file of the reference genome; we present here an example used in the data simulations.
\begin{verbatim}
portcullis prep -t 20 -v --force -o
GRCh38.primary_assembly.genome.fa
\end{verbatim}
Portcullis was run on our simulations and both experimental results.
Here \%s is the name of the output folder and BAM file.
\begin{verbatim}
portcullis junc -t 20 -v -o
--intron_gff
\end{verbatim}
The next step is to extract junctions in a GFF format.
Here \%s refers to the name of the folder to search.
\begin{verbatim}
portcullis filt -t 20 -v -n --max_length 500000 \
--min_cov 30 -o
--intron_gff portcullis_all.junctions.tab
\end{verbatim}
Finally, we filter excisions; we keep only excisions with a maximum length of $500000$ and a minimum coverage of $30$.
Here again, \%s points to the gene folder to search for the input files.
After portcullis is complete, we keep excisions with a 90\% overlap between regtools and portcullis.
\subsection{Additional Data Details}
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{figs/data2.png}
\caption{Histogram of the minimum node cover set size with respect to the number of unique intron excisions in a) simulated data b) GEUVADIS c) EGA.}
\label{fig:data_sim}
\end{figure}
\newpage
\subsection{Additional Results on Simulated Data}
\subsubsection{{\sc BREM}{} performance as a function of \texorpdfstring{$\bm{k}$}{\textbf{k}}.}
We evaluated the impact of the parameter that controls the number of SMEs ($K$) on the precision and recall; we varied $K$ from the chromatic number in the excision interval graph (equivalently, the size of the maximum independent set ($IS$) in the complement graph) to $IS + 16$.
Fig.~\ref{fig:fig1} shows precision and recall for $s^{phs}$ and $\hat{s}^{phs}$ in top and bottom.
\begin{figure}[h]
\centering
\includegraphics[trim={0 0 0 0}, clip, width=0.75\textwidth]{figs/both_bamie_relative_k_eq2_eq3.pdf}
\caption{Precision and recall for $s^{phs}$ (top) and $\hat{s}^{phs}$ (bottom) in models where $k = IS + x$, $x \in \{2, 4, 6, 8, 10, 12, 14, 16\}$ and $IS$ is the size of maximum independent set in the complement of the interval graph.}
\label{fig:fig1}
\end{figure}
\subsubsection{Performance of the model when alternative transcripts have substantial overlap in sequence content}
We evaluated the performance of the methods when there is substantial overlap in the interval graph.
We considered genes yielding complex graphs, i.e., those where the number of excision overlaps exceeds 200 ($|E| > 200$).
Fig.~\ref{fig:prf_nf} shows the performance metrics for the different methods.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{figs/bioinf_complexity_supp.png}
\caption{Performance in complex genes where the number of edges exceeds $200$. The x-axis shows the number of edges in the interval graph and the y-axis is the performance metric.}
\label{fig:prf_nf}
\end{figure}
\newpage
\subsubsection{Performance of the methods as function of number of unique intron excisions.}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{figs/bioinf_complexity_supp_n_nodes.png}
\caption{Performance as a function of the number of excisions. The x-axis shows the number of nodes in the interval graph (unique intron excisions) and the y-axis is the performance metric.}
\label{fig:prf_nodes}
\end{figure}
\subsubsection{{\sc BREM}{} running time}
Since the number of iterations varied across different runs of the same gene, we computed the running time of {\sc BREM}{} per iteration for the simulated groups of genes.
We omitted the group of $17$ genes that had an average number of junction reads that was larger than $60,000$ since this group was small relative to the sample size ($1260$ genes).
Then we plotted the running time as a function of $K$, the size of the minimum node cover, and the average number of junction reads (Fig.~\ref{fig:runtime_2}). The model was trained on a server with Intel(R) Xeon(R) Gold 6242 @ 2.80GHz CPUs.
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/run_time4.png}
\caption{{\sc BREM}{} running time (Sec. per iteration) as a function of a) number of SMEs ($K$), b) size of the minimum node cover set, c) average sample size.}
\label{fig:runtime_2}
\end{figure}
\newpage
\subsection{Additional Results on Experimental Data}
\subsubsection{P-value calibration.}
\begin{figure}[h!]
\centering
\includegraphics[width=1\textwidth]{figs/p_vals.pdf}
\caption{Differential SMEs are significantly enriched in the GEUVADIS and EGA data.}
\label{fig:exp}
\end{figure}
\newpage
\subsubsection{Novel Introns and SMEs in Experimental data}
We computed the count and percentage of introns and SMEs that are present in or absent from the annotation reference file.
Since our processing pipeline focuses on excisions (and is thus similar to LeafCutter) and we are testing reconstruction, we compared our results only to Cufflinks and StringTie.
After running the methods on experimental genes, we merged single individual transcript reconstructions to produce a single file per gene for Cufflinks and StringTie.
In {\sc BREM}{}, an SME is considered expressed if it has more than 10 junctions mapped to it and the number of samples expressing that SME is larger than 10.
Then, in the merged file per gene, if an SME in {\sc BREM}{} or a transcript in StringTie and Cufflinks is a subset of any of the annotated transcripts, we count it as present; otherwise it is counted as absent (Fig.~\ref{fig:novel}, a and b).
For the percentage plots, we take the percentage of present or absent for splice junctions and SMEs or transcripts separately (Fig.~\ref{fig:novel}, c and d).
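The present/absent test described above amounts to a subset check on junction sets; a minimal sketch (representing junctions as \texttt{(start, end)} pairs is our assumption):

```python
def is_present(sme_junctions, annotated_transcripts):
    """True if the SME's junction set is contained in the junction set of
    at least one annotated transcript; otherwise the SME counts as absent."""
    sme = set(sme_junctions)
    return any(sme <= set(tx_junctions) for tx_junctions in annotated_transcripts)
```

This check is applied per gene to the merged reconstructions, with the annotated transcripts taken from the reference annotation file.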
\begin{figure}[h]
\centering
\includegraphics[trim={0 0 0 0}, clip, width=1\textwidth]{figs/merged.png}
\caption{The count and percentage of mRNA excisions and SMEs or transcripts that are present or absent in the annotation file across {\sc BREM}{}, Cufflinks, and StringTie for experimental data. The two top plots show the counts of present and absent junctions for SMEs or transcripts in a) GEUVADIS and b) EGA.
The two plots in the bottom row show the percentage of present or absent junctions for SMEs or transcripts in c) GEUVADIS and d) EGA.}
\label{fig:novel}
\end{figure}
\newpage
\section{Introduction}\label{sec:INT}
A well-known and quite interesting solution in General Relativity~(GR) is a geometrical bridge connecting two faraway regions in the Universe. It can even turn out that the bridge connects two different Universes. Hermann Weyl was the first to discuss, in 1921, this concept of a wormhole or bridge~\cite{Weyl1921}. After that, a famous example of a static wormhole appeared, now known as an Einstein-Rosen bridge~\cite{Einstein1935}. According to discussions in the more recent literature, a traversable wormhole admits superluminal travel as a global effect of the spacetime topology, making the object a very interesting concept in modern theoretical physics (see, e.g.,~\cite{Houndjo:2012}~-~\cite{Rahaman:2007}). In general, a wormhole may be visualized as a tunnel with two mouths or ends, through which observers may safely traverse. The wormhole concept can be presented in terms of the metric, with several constraints which any solution must satisfy in order to qualify as a wormhole. The metric of a static wormhole can be written as~\cite{Morris:1988}
\begin{equation}\label{eq:WHMetric}
ds^{2} = -U(r) dt^{2} + \frac{dr^{2}}{V(r)} + r^{2}d\Omega^{2},
\end{equation}
where $d\Omega^{2} = d\theta^{2} + \sin^{2}\theta d\phi^{2}$ and $V(r) = 1-b(r)/r$. The function $b(r)$ in Eq.~(\ref{eq:WHMetric}) is called the shape function, since it represents the spatial shape of the wormhole. The redshift function $U(r)$ and the shape function $b(r)$ are bound to obey the following conditions~\cite{Morris:1988}:
\begin{enumerate}
\item The radial coordinate $r$ lies in the range $r_{0} \leq r < \infty$, where $r_{0}$ is the radius of the throat, the surface of minimal area connecting the two regions.
\item At the throat, $r=r_{0}$, $b(r_{0}) = r_{0}$, and for the region out of the throat $1- b(r)/r > 0$.
\item $b^{\prime}(r_{0}) < 1$ (with the $\prime$ meaning derivative with respect to $r$); i.e., it should obey the flaring out condition at the throat.
\item $b(r)/r \to 0$, as $|r| \to \infty$, for asymptotic flatness of the space-time geometry.
\item $U(r)$ must be finite and non-vanishing at the throat $r_{0}$.
\end{enumerate}
However, in theory, it could be possible that the wormhole solution is not asymptotically flat, i.e., that the condition $b(r)/r \to 0$ is not satisfied and the wormhole is non-traversable. It is known from studies in the recent literature that, in these cases, to make the wormhole traversable one can effectively glue an exterior flat geometry onto the interior geometry at some junction radius and thus obtain a useful result. Below, for some of the exact wormhole models considered in this paper, we will see that they actually are non-traversable wormholes, and this is the reason why this procedure could become potentially important for us. But, on the other hand, since the study of such models would be cumbersome, and it lies beyond the scope of the present paper, we will omit this treatment here. Another interesting aspect concerning traversable wormholes is that their possible existence is due to exotic matter at the throat, thus violating the null energy condition~(see, for instance,~\cite{Morris:1988}~-~\cite{Visser2002}). This simply implies that the exotic matter either induces very strong negative pressures, or that the energy density is negative, as seen by static observers.
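For later reference, for an anisotropic fluid with energy density $\rho$ and radial and lateral pressures $P_{r}$, $P_{l}$ (introduced in Sect.~\ref{sec:WMFE}), the energy conditions are used throughout this paper in the standard pointwise form
\begin{equation}
\mathrm{NEC}:\;\; \rho + P_{i} \geq 0; \qquad \mathrm{WEC}:\;\; \rho \geq 0 \;\; \mathrm{and} \;\; \rho + P_{i} \geq 0; \qquad \mathrm{DEC}:\;\; \rho \geq 0 \;\; \mathrm{and} \;\; \rho - P_{i} \geq 0,
\end{equation}
with $i = r, l$.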
An important point is that the link between the existence of matter with negative pressure, needed to construct wormhole configurations, and the explanation of the recently discovered accelerated expansion of the Universe has generated renewed interest in wormholes. It is well known that, in the case of GR, an energy source generating a negative pressure is necessary in order to accelerate the expansion of the Universe (\cite{Bamba2012}~-~\cite{Khurshudyan2017i}, and references therein). The same source could in principle be used to construct a wormhole configuration for distant travel. There are actually different dark energy models, including fluid models such as the Chaplygin and van der Waals gases, with non-linear equations of state. In the recent literature there are also various ways to describe dark energy, thus motivating different studies, some of which have much in common with the models discussed here. On the other hand, in order to make a specific dark energy model work competitively well, one needs to invoke additional ideas, like a non-gravitational interaction between dark energy and dark matter. A non-gravitational interaction can be useful to solve the cosmological coincidence problem as well, as has been discussed in various papers using phase-space analysis. But, also, a non-gravitational interaction can suppress or generate future time singularities. Detailed discussions of some of these topics can be found in the references at the end of this paper.
An alternative way to avoid dark energy and non-gravitational interactions of any sort, in the explanation of the observational data, is to consider modified theories of gravity. In the recent literature there are several well-motivated modifications of GR that have been used to construct wormholes, black holes, gravastars, and other kinds of star models. The advantage of a modification, making it very attractive for different applications, is the possibility of avoiding the need to introduce any sort of dark energy~(see, e.g., \cite{Harko:2011kv}~-~\cite{Bamba:2008ut}). More precisely, a generic modification will add a term to the field equations which, in comparison with the field equations of GR, will be interpreted as dark energy. A modified theory can be constructed by changing either the geometric or the matter part of the theory. In other words, each modification comes with its particular interpretation of the energy content of the Universe, responsible for its dynamics and physics.
Consideration of extra material contributions can, on the other hand, turn into a viable modified theory of gravity, as in the case of $f(\textit{R}, \textit{T})$ gravity, where $\textit{T}$ is the trace of the energy-momentum tensor, given by the following form of the total action~\cite{Harko:2011kv}
\begin{equation}\label{eq:Action}
S = \frac{1}{16 \pi} \int{ d^{4}x\sqrt{-g} f(\textit{R}, \textit{T}) } + \int{d^{4}x\sqrt{-g} L_{m}},
\end{equation}
where $f(\textit{R}, \textit{T})$ is an arbitrary function of the Ricci scalar, $R$, and of the trace of the energy-momentum tensor $T$, while $g$ is the metric determinant, and $L_{m}$ the matter Lagrangian density, related
to the energy-momentum tensor as
\begin{equation}
T_{ij} = -\frac{2}{\sqrt{-g}} \left[ \frac{\partial (\sqrt{-g} L_{m}) }{\partial g^{ij}} - \frac{\partial}{\partial x^{k}}
\frac{\partial(\sqrt{-g}L_{m})}{\partial(\partial g^{ij}/\partial x^{k})} \right].
\end{equation}
A priori, we would expect that the material corrections yielding this $f(\textit{R}, \textit{T})$ gravity could come from the existence of imperfect fluids. On the other hand, quantum effects, such as particle production, can also become a motivation to consider matter-content-modified theories of gravity. However, each of these specific modifications (changes of the classical matter part of the theory) must be dealt with carefully, in order to avoid misleading interpretations of the results' physical meaning. Actually, $f(\textit{R}, \textit{T})$ gravity seems well suited to address wormhole construction issues and, being free from such misleading aspects, has been very intensively considered in the recent literature. On the other hand, however, wormholes have not been detected yet and our final aim, as of now, in the study of these solutions can only be to improve our theoretical knowledge of them. The growing number of papers that address different aspects of wormholes aim at clarifying their physical nature, and this forces us to make different assumptions about their matter content, some of which can make the field equations too complicated to be treated analytically. Fortunately, in the literature we have various interesting exact wormhole models, obtained for GR and some modified theories of gravity. The models of the present paper will also be dealt with analytically, and will provide a new class of wormhole solutions not reported elsewhere.
In particular, we will be interested in finding new exact static wormhole models assuming different hypotheses for their matter content, in the frame of $f(\textit{R}, \textit{T})$ gravity with the action given by Eq.~(\ref{eq:Action}). In other words, we will construct exact wormhole models assuming that the energy density of the wormhole matter can be described by one of the following expressions: $\rho(r) = \alpha R(r) + \beta R^{\prime}(r)$, $\rho(r) = \alpha R^{2}(r) + \beta R^{\prime}(r)$ or $\rho(r) = \alpha R(r) + \beta R^{2}(r)$, respectively, and with $f(\textit{R}, \textit{T}) = R + 2 \lambda T$. Studying a particular wormhole solution, corresponding to $\rho(r) = \alpha R(r) + \beta R^{\prime}(r)$, we will conclude that, for appropriate values of the parameters of the model, we can expect violation of the NEC in terms of the pressure $P_{r}$, and of the DEC in terms of $P_{l}$, at the throat. But, on the other hand, $\rho \geq 0$, and validity of the NEC and DEC in terms of the pressures $P_{l}$ and $P_{r}$, will be assured everywhere, including at the throat of the wormhole. Therefore, we will report also violation of the WEC in terms of the $P_{r}$ pressure, while it will be satisfied in terms of the other pressure, $P_{l}$. Moreover, for the same model, our study will also conclude that, in the case $\beta > 0$, we will mainly observe regions where the violation of the NEC in terms of $P_{r}$ causes a violation of the DEC in terms of the $P_{l}$ pressure. However, if we consider a domain where $\beta < 0$, then we will find regions where both energy conditions in terms of both pressures are valid, simultaneously. We must also mention that the validity of the WEC in this case will also be observed owing to the fact that $\rho \geq 0$ is satisfied. A similar situation will be obtained when considering the impact of the $\alpha$ parameter on the validity of the energy conditions. 
A detailed analysis of the energy conditions for other models is to be found in the appropriate subsections below.
In the second part of the paper, corresponding to a different choice for $f(\textit{R}, \textit{T}) = R + \gamma R^{2} + 2 \lambda T$ gravity, in addition to the form of the matter-energy density, we will also specify the functional form of the shape function, and will establish the possible existence of appropriate static wormhole configurations; i.e. we will find the forms of the pressures $P_{r}$ and $P_{l}$ yielding a static traversable wormhole solution. In particular, we assume the two energy density profiles: $\rho(r) = \alpha R(r) + \beta R^{2}(r)$, and $\rho(r) = \alpha R(r) + \beta r^{3} R^{2}(r)$, respectively, to describe the matter content of the wormhole. Further, we will take the functional form of the shape function to be $b(r) = \sqrt{\hat{r}_{0} r}$~(where $\hat{r}_{0}$ is a constant). Study of the model with $\rho(r) = \alpha R(r) + \beta R^{2}(r)$ will lead to the result that there is a region where both energy conditions, i.e. NEC~($\rho + P_{r} \geq 0$ and $\rho + P_{l} \geq 0$) and DEC~($\rho - P_{r} \geq 0$ and $\rho - P_{l} \geq 0$), in terms of both pressures, are valid. In all cases, the Ricci scalar has the form
\begin{equation}\label{eq:RicciConst}
R(r) = \frac{2 b^{\prime}(r)}{r^{2}},
\end{equation}
which is obtained directly from the wormhole metric Eq.~(\ref{eq:WHMetric}). In what follows, as already done above, we will omit the argument $r$, writing $R$ instead of $R(r)$.\\
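As a quick illustration, for the shape function $b(r) = \sqrt{\hat{r}_{0} r}$ adopted in Sect.~\ref{sec:RR2T}, Eq.~(\ref{eq:RicciConst}) gives
\begin{equation}
b^{\prime}(r) = \frac{1}{2}\sqrt{\frac{\hat{r}_{0}}{r}}, \qquad R = \frac{2 b^{\prime}(r)}{r^{2}} = \frac{\sqrt{\hat{r}_{0}}}{r^{5/2}},
\end{equation}
so the curvature decays monotonically away from the throat.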
The paper is organized as follows. In Sect.~\ref{sec:WMFE} we present a detailed form of the field equations to be solved, for $f(\textit{R}, \textit{T}) = R + 2 \lambda T$ and $f(\textit{R}, \textit{T}) = R + \gamma R^{2} + 2 \lambda T$ gravity, respectively. In Sect.~\ref{sec:RT} three exact wormhole solutions will be discussed, assuming that the matter content of the wormhole can be described by one of the following energy density profiles $\rho(r) = \alpha R(r) + \beta R^{\prime}(r)$, $\rho(r) = \alpha R^{2}(r) + \beta R^{\prime}(r)$ and $\rho(r) = \alpha R(r) + \beta R^{2}(r)$, when $f(\textit{R}, \textit{T}) = R + 2 \lambda T$. In Sect.~\ref{sec:RR2T} two wormhole solutions will be obtained, by assuming that $f(\textit{R}, \textit{T}) = R + \gamma R^{2} + 2 \lambda T$, and that the profile of the wormhole matter is one of the following: $\rho(r) = \alpha R(r) + \beta R^{2}(r)$, and $\rho(r) = \alpha R(r) + \beta r^{3} R^{2}(r)$. In all cases, a detailed study of the validity of the energy conditions will be carried out. Finally, Sect.~\ref{sec:Discussion} is devoted to a discussion and conclusions, with indication of future possible directions to pursue in order to complete this research project.
\section{Field equations}\label{sec:WMFE}
In this section, we address some issues that are crucial to construct exact traversable wormhole solutions. Following Refs.~\cite{Cataldo:2011} and~\cite{Rahaman:2007}, we consider the case $U(r) = 1$. Moreover, the explicit form of the field equations for both gravities will be given. To proceed, let us assume that $L_{m}$ depends on the metric components only, which means that
\begin{equation}
T_{ij} = g_{ij}L_{m} - 2 \frac{\partial L_{m}}{\partial g^{ij}}.
\end{equation}
Varying the action, Eq.~(\ref{eq:Action}), with respect to the metric $g_{ij}$, provides the field equations
$$f_{R}(\textit{R}, \textit{T}) \left( R_{ij} - \frac{1}{3} R g_{ij} \right) + \frac{1}{6} f(\textit{R}, \textit{T}) g_{ij} = 8\pi G \left( T_{ij} - \frac{1}{3} T g_{ij} \right) -f_{T}(\textit{R}, \textit{T}) \left( T_{ij} - \frac{1}{3} T g_{ij} \right)$$
\begin{equation}
-f_{T}(\textit{R}, \textit{T}) \left( \theta_{ij} - \frac{1}{3} \theta g_{ij} \right) + \nabla_{i}\nabla_{j} f_{R}(\textit{R}, \textit{T}) ,
\end{equation}
with $f_{R}(\textit{R}, \textit{T}) = \frac{\partial f(\textit{R}, \textit{T}) }{\partial R}$, $f_{T}(\textit{R}, \textit{T}) = \frac{\partial f(\textit{R}, \textit{T}) }{\partial T}$ and
\begin{equation}
\theta_{ij} = g^{\alpha \beta} \frac{\delta T_{\alpha \beta}}{\delta g^{ij}}.
\end{equation}
To obtain wormhole solutions, we make a further assumption, namely that $L_{m} = - \rho$, in order not to imply the vanishing of the extra force. Now, if we take into account that $f(\textit{R}, \textit{T}) = R + 2 f(T)$, with $f(T) = \lambda T$~($\lambda$ is a constant), we can rewrite the above equations as
\begin{equation}\label{eq:G}
G_{ij} = (8\pi + 2\lambda) T_{ij} + \lambda (2\rho + T)g_{ij},
\end{equation}
where $G_{ij}$ is the usual Einstein tensor. After some algebra, for three of the components of the field equations, Eq.~(\ref{eq:G}), we get~\cite{Moraes:2017c}
\begin{equation}\label{eqF1}
\frac{b^{\prime}(r)}{r^{2}} = (8\pi + \lambda)\rho - \lambda (P_{r} + 2P_{l}),
\end{equation}
\begin{equation}\label{eqF2}
-\frac{b(r)}{r^{3}} = \lambda \rho + (8\pi + 3\lambda)P_{r} + 2\lambda P_{l},
\end{equation}
\begin{equation}\label{eqF3}
\frac{b(r) - b^{\prime}(r)r}{2r^{3}} = \lambda \rho + \lambda P_{r} + (8\pi + 4 \lambda) P_{l},
\end{equation}
where we have used the static wormhole metric given by Eq.~(\ref{eq:WHMetric}). To derive the above equations, we have considered an anisotropic fluid with matter content of the form $T^{i}_{j} = \mathrm{diag}(-\rho, P_{r}, P_{l}, P_{l})$, where $\rho = \rho(r)$ is the energy density, while $P_{r} = P_{r}(r)$ and $P_{l} = P_{l}(r)$ are the radial and lateral pressures, respectively, the latter being measured perpendicularly to the radial direction. The trace, $T$, of the energy-momentum tensor reads $T = -\rho + P_{r} + 2P_{l}$. Moreover, Eqs.~(\ref{eqF1})~-~(\ref{eqF3}) admit the solutions
\begin{equation}\label{eq:rho}
\rho = \frac{b^{\prime}(r) }{r^{2}(8 \pi + 2 \lambda )},
\end{equation}
\begin{equation}\label{eq:Pr}
P_{r} = - \frac{b(r)}{r^{3}(8\pi + 2\lambda )},
\end{equation}
and
\begin{equation}\label{eq:Pl}
P_{l} = \frac{b(r) - b^{\prime}(r)r}{2r^{3}(8\pi + 2\lambda )}.
\end{equation}
It is obvious that, when imposing a form for the energy density, the shape function $b(r)$ is obtained by direct integration of Eq.~(\ref{eq:rho}).
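As a consistency check (ours, not part of the original derivation), the expressions for $\rho$, $P_{r}$ and $P_{l}$ above can be verified symbolically to solve Eqs.~(\ref{eqF1})~-~(\ref{eqF3}) for an arbitrary shape function, and to obey the identity $\rho + P_{r} + 2P_{l} = 0$; e.g., with SymPy:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
lam = sp.symbols('lambda', real=True)
b = sp.Function('b')(r)          # arbitrary shape function b(r)
D = 8*sp.pi + 2*lam

# candidate solutions of the field equations
rho = sp.diff(b, r) / (r**2 * D)
Pr = -b / (r**3 * D)
Pl = (b - r*sp.diff(b, r)) / (2 * r**3 * D)

# residuals of the three field equations; all should vanish identically
res1 = sp.simplify(sp.diff(b, r)/r**2 - ((8*sp.pi + lam)*rho - lam*(Pr + 2*Pl)))
res2 = sp.simplify(-b/r**3 - (lam*rho + (8*sp.pi + 3*lam)*Pr + 2*lam*Pl))
res3 = sp.simplify((b - r*sp.diff(b, r))/(2*r**3)
                   - (lam*rho + lam*Pr + (8*sp.pi + 4*lam)*Pl))
trace_identity = sp.simplify(rho + Pr + 2*Pl)
```

The identity $\rho + P_{r} + 2P_{l} = 0$ follows directly by summing the three expressions, and will be useful below.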
To finish this section, we recall some aspects concerning the above calculations, which will yield the equations required to construct wormhole solutions in
\begin{equation}
f(R,T) = R + \gamma R^{2} + 2 f(T)
\end{equation}
gravity. In particular, it is easy to see that for the wormhole metric Eq.~(\ref{eq:WHMetric}), for $\hbox{$\rlap{$\sqcup$}\sqcap$} f_{R}$, we have
\begin{equation}
\hbox{$\rlap{$\sqcup$}\sqcap$} f_{R} = \left ( 1-\frac{b(r)}{r} \right ) \left( \frac{f^{\prime}_{R}}{r} + f^{\prime \prime}_{R} + \frac{f^{\prime}_{R} (b(r) - b^{\prime}(r) r)}{2r^{2} (1 - b(r)/r)} \right),
\end{equation}
while
\begin{equation}
\nabla_{1}\nabla_{1} f_{R} = \frac{f^{\prime}_{R}(b(r) - b^{\prime}(r)r)}{2r^{2} (1-b(r)/r)} + f^{\prime \prime}_{R},
\end{equation}
\begin{equation}
\nabla_{2} \nabla_{2} f_{R} = r\left ( 1-\frac{b(r)}{r} \right ) f^{\prime}_{R},
\end{equation}
$\nabla_{0} \nabla_{0} f_{R} = 0$ and $\nabla_{3} \nabla_{3} f_{R} = r\left ( 1-\frac{b(r)}{r} \right ) f^{\prime}_{R} \sin^{2}\theta$. Therefore, after some algebra, for the field equations we obtain
\begin{equation}
\frac{b^{\prime}(r)}{r^{2}} = 8\pi \rho - \frac{\gamma}{2}R^{2} - \lambda T + \hbox{$\rlap{$\sqcup$}\sqcap$} f_{R},
\end{equation}
$$
-\frac{b(r)}{r^{3}} = 8 \pi P_{r} + 2 \lambda (P_{r} + \rho) + \frac{\gamma}{2}R^{2} + \lambda T + 2 \gamma R \left( \frac{b(r)-b^{\prime}(r)r}{r^{3}}\right) + $$
\begin{equation}
+ \frac{b(r) - b^{\prime}(r)r}{2r^{2}} f^{\prime}_{R} + \left( 1 - \frac{b(r)}{r}\right) f^{\prime \prime}_{R} - \hbox{$\rlap{$\sqcup$}\sqcap$} f_{R},
\end{equation}
and
\begin{equation}
\frac{b(r) - b^{\prime}(r)r}{2r^{3}} = 8 \pi P_{l} + 2 \lambda(P_{l} + \rho) - \gamma R \frac{b(r) + b^{\prime}(r)r}{r^{3}} + \frac{\gamma}{2}R^{2} + \lambda T + \frac{1}{r} \left( 1 - \frac{b(r)}{r}\right) f^{\prime}_{R} - \hbox{$\rlap{$\sqcup$}\sqcap$} f_{R}.
\end{equation}
\section{Models in $f(\textit{R}, \textit{T}) = R + 2 \lambda T$ gravity}\label{sec:RT}
In this section we perform an analysis of three different exact static wormhole models, taking into account that $f(\textit{R}, \textit{T}) = R + 2 \lambda T$.
\subsection{Matter with $\rho(r) = \alpha R(r) + \beta R^{\prime}(r)$}
Let us study wormhole formation in the presence of matter when its energy density is given by
\begin{equation}\label{eq:RRprime_rho}
\rho(r) = \alpha R(r) + \beta R^{\prime}(r),
\end{equation}
where the prime means derivative with respect to $r$, while $R(r)$ is the Ricci scalar given by Eq.~(\ref{eq:RicciConst}). With such an assumption, a direct integration of Eq.~(\ref{eq:rho}) gives a wormhole solution described by the following shape function
\begin{equation}\label{eq:RRprime_br}
b(r) = c_2-\frac{4 \beta c_1 (\lambda +4 \pi ) e^{\frac{-A r}{4 \beta \lambda +16 \pi \beta }} \left(32 \beta ^2 (\lambda +4 \pi )^2+ A^2 r^2 +8 \beta (\lambda +4 \pi ) A r\right)}{A^3},
\end{equation}
where $A = 4 \alpha \lambda +16 \pi \alpha -1$. Despite the long and complicated form of the shape function, Eq.~(\ref{eq:RRprime_br}), the derivative has a very simple form, as
\begin{equation}
b^{\prime}(r) = c_{1} r^2 e^{\frac{-A r}{4 \beta \lambda +16 \pi \beta }}.
\end{equation}
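Despite its length, the quoted shape function can indeed be checked to have exactly this derivative; a short symbolic verification (ours, with SymPy) reads:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
c1, c2, alpha, beta, lam = sp.symbols('c_1 c_2 alpha beta lambda', real=True)
L = lam + 4*sp.pi                      # shorthand for lambda + 4 pi
A = 4*alpha*lam + 16*sp.pi*alpha - 1   # as defined in the text

# the shape function b(r) quoted above (note 4*beta*lam + 16*pi*beta = 4*beta*L)
b = c2 - (4*beta*c1*L*sp.exp(-A*r/(4*beta*L))
          * (32*beta**2*L**2 + A**2*r**2 + 8*beta*L*A*r)) / A**3

# difference between b'(r) and the claimed simple form; should vanish
residual = sp.simplify(sp.diff(b, r) - c1*r**2*sp.exp(-A*r/(4*beta*L)))
```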
Therefore, after some algebra, for the explicit form of the energy density we get
\begin{equation}\label{eq:RRprime_rhoEXP}
\rho = \frac{c_{1}}{2 \lambda +8 \pi } e^{\frac{-A r}{4 \beta \lambda +16 \pi \beta }},
\end{equation}
and it is not hard to find the explicit forms of the $P_{r}$ and $P_{l}$ pressures from Eqs.~(\ref{eq:Pr}) and~(\ref{eq:Pl}), respectively. After some algebra, we obtain
\begin{equation}\label{eq:RRprime_Pr}
P_{r} = -\frac{c_{2}}{2 (\lambda +4 \pi) r^3} + \frac{2 \beta c_{1} e^{\frac{-A r}{4 \beta \lambda +16 \pi \beta }} \left(32 \beta ^2 (\lambda +4 \pi )^2+8 \beta (\lambda +4 \pi ) A r + A^{2} r^{2}\right)}{A^3 r^3},
\end{equation}
and
\begin{equation}\label{eq:RRprime_Pl}
P_{l} = \frac{c_{2}}{4 (\lambda +4 \pi ) r^3} - \frac{ c_{1} e^{\frac{-A r}{4 \beta \lambda +16 \pi \beta }} (4 \beta (\lambda +4 \pi )+A r) \left(32 \beta ^2 (\lambda +4 \pi )^2+ A^{2}r^{2}\right)}{4 A^3 (\lambda +4 \pi ) r^3},
\end{equation}
respectively. Now, let us discuss a particular wormhole solution described by $\rho$, $P_{r}$ and $P_{l}$ given by Eq.~(\ref{eq:RRprime_rhoEXP}), Eq.~(\ref{eq:RRprime_Pr}) and Eq.~(\ref{eq:RRprime_Pl}), respectively.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=80 mm]{Fig1_a.jpeg} &&
\includegraphics[width=80 mm]{Fig1_b.jpeg} \\
\end{array}$
\end{center}
\caption{Behavior of the shape function $b(r)$ for the model described by Eq.~(\ref{eq:RRprime_rho}) (left plot). We see that the solution for $b(r)$ satisfies $1- b(r)/r > 0$, for $r > r_{0}$. The rhs plot shows that the DEC in terms of the $P_{r}$ pressure, and the NEC in terms of the $P_{l}$ pressure are valid everywhere, and that $\rho \geq 0$ also holds everywhere. On the other hand, the same plot demonstrates that the NEC and the DEC in terms of the $P_{r}$ and $P_{l}$ pressures, respectively, are not valid at the throat of the wormhole. This particular wormhole model has been obtained for $c_{1} = 0.05$, $c_{2} = 1.275$, $\alpha = 1.5$, $\beta = 2.5$ and $\lambda = 1$.}
\label{fig:Fig0}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=80 mm]{Fig2_a.jpeg} &&
\includegraphics[width=80 mm]{Fig2_b.jpeg} \\
\includegraphics[width=80 mm]{Fig2_c.jpeg} &&
\includegraphics[width=80 mm]{Fig2_d.jpeg} \\
\end{array}$
\end{center}
\caption{Behavior of the NEC in terms of the pressures $P_{r}$ and $P_{l}$, respectively, is depicted on the top panel. The NEC in terms of $P_{r}$ is given by the top-left plot, while the top-right plot represents the NEC in terms of the $P_{l}$ pressure. The bottom panel shows the behavior of the DEC in terms of both pressures. In particular, the bottom-left plot corresponds to the behavior of the DEC in terms of the $P_{r}$ pressure. The graphical behavior of the DEC in terms of the $P_{l}$ pressure is presented on the bottom-right plot. The model is described by Eq.~(\ref{eq:RRprime_rho}) and the depicted graphical behavior for the energy conditions has been obtained for $c_{2} = 1.5$, $c_{1} = 0.5$, $\alpha = 0.5$, $\lambda = -10$, and for different negative values of the $\beta$ parameter.}
\label{fig:Fig0_a}
\end{figure}
A particular wormhole solution can be found, for instance, if we consider $c_{1} = 0.05$, $c_{2} = 1.275$, $\alpha = 1.5$, $\beta = 2.5$ and $\lambda = 1$. This is a wormhole model, the throat of which occurs at $r_{0} \approx 0.8$ and $b^{\prime}(r_{0}) \approx 0.02$. The graphical behaviors of the shape function and $1-b(r)/r$ are presented on the left plot of Fig.~(\ref{fig:Fig0}). On the other hand, the graphical behavior of the energy conditions can be found on the right plot of the same figure. In particular, for this specific wormhole solution we should expect a violation of the NEC in terms of the pressure $P_{r}$ and of the DEC in terms of $P_{l}$, at the throat. On the other hand, $\rho \geq 0$, and the validity of the NEC and DEC in terms of the pressures $P_{l}$ and $P_{r}$ can be observed everywhere, including at the throat of the wormhole. Therefore, we will observe also the violation of the WEC in terms of the $P_{r}$ pressure, while it will remain valid in terms of the $P_{l}$ pressure.
In general, our study shows that, for the case $\beta > 0$, we will mainly get regions where the violation of the NEC in terms of $P_{r}$ causes a violation of the DEC in terms of the $P_{l}$ pressure. However, if we consider the domain $\beta < 0$, then we can find regions where both energy conditions in terms of both pressures are satisfied at the same time. The plots of Fig.~(\ref{fig:Fig0_a}) correspond to an example of one of these valid regions, where both the NEC and the DEC in terms of both pressures are fulfilled, and this for $c_{2} = 1.5$, $c_{1} = 0.5$, $\alpha = 0.5$, $\lambda = -10$, and for different values of the $\beta$~($<0$) parameter. Also, it should be mentioned that the validity of the WEC in this case follows from the fact that $\rho \geq 0$ is also satisfied. A similar situation has been reached when we have studied the impact of the $\alpha$ parameter on the validity of the energy conditions.
\subsection{Matter with $\rho(r) = \alpha R^{2}(r) + \beta R^{\prime}(r)$}
Now, we will concentrate our attention on another exact static wormhole model, which can be described by the following shape function
\begin{equation}\label{R2Rprime_br}
b(r) = \frac{\beta \left(8 \beta (\lambda +4 \pi ) \left(r \text{Li}_2\left(A_{1}\right)-4 \beta (\lambda +4 \pi ) \text{Li}_3\left(A_{1}\right)\right)+r^2 \log \left(1-A_{1} \right)\right)}{2 \alpha }+c_4,
\end{equation}
where $A_{1} = -\frac{8 e^{\frac{r}{4 \lambda \beta +16 \pi \beta }} \alpha (\lambda +4 \pi )}{c_3}$, while $\text{Li}_2$ and $\text{Li}_3$ are the polylogarithm functions of orders $2$ and $3$, respectively. This solution for the shape function $b(r)$ has been obtained by assuming that the matter content is described by the following energy density
\begin{equation}\label{eq:R2Rprime_rho}
\rho = \alpha R(r)^2+\beta R^{\prime}(r).
\end{equation}
On the other hand, as
\begin{equation}
b^{\prime}(r) = \frac{r^2}{8 \alpha (\lambda +4 \pi )+ c_{3} e^{-\frac{r}{4 \beta \lambda +16 \pi \beta }}},
\end{equation}
then, similarly to the previous case, for $\rho$, $P_{r}$ and $P_{l}$ we get
\begin{equation}\label{eq:R2Rprime_rhosol}
\rho = \frac{1}{16 \alpha (\lambda +4 \pi )^2+2 c_{3} (\lambda +4 \pi ) e^{-\frac{r}{4 \beta \lambda +16 \pi \beta }}},
\end{equation}
\begin{equation}\label{eq:R2Rprime_Pr}
P_{r} = -\frac{c_{4}}{2 (\lambda +4 \pi ) r^3} - \frac{\beta \left(8 \beta (\lambda +4 \pi ) \left(r \text{Li}_2\left(A_{1}\right)-4 \beta (\lambda +4 \pi ) \text{Li}_3\left(A_{1}\right)\right)+r^2 \log \left (1-A_{1}\right)\right)}{4 \alpha (\lambda +4 \pi ) r^3},
\end{equation}
and
\begin{equation}\label{eq:R2Rprime_Pl}
P_{l} = -\frac{1}{2}\left(\rho + P_{r}\right) = -\frac{1}{4 (\lambda +4 \pi ) r^3} \left ( \frac{r^3}{8 \alpha (\lambda +4 \pi )+ c_{3} e^{-\frac{r}{4 \beta \lambda +16 \pi \beta }}} + 2 (\lambda +4 \pi ) r^3 P_{r} \right ),
\end{equation}
respectively.
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=80 mm]{Fig3_a.jpeg} &&
\includegraphics[width=80 mm]{Fig3_b.jpeg} \\
\end{array}$
\end{center}
\caption{A plot of the shape function $b(r)$ for the model described by Eq.~(\ref{eq:R2Rprime_rho}) is presented on the lhs. It readily shows that the solution for $b(r)$ satisfies $1- b(r)/r > 0$, for $r > r_{0}$. The rhs plot proves that the DEC in terms of the $P_{r}$ pressure, and the NEC in terms of the $P_{l}$ pressure are valid everywhere, and that $\rho \geq 0$ also holds everywhere. On the other hand, the same plot also shows that the NEC and the DEC in terms of the pressures $P_{r}$ and $P_{l}$, respectively, are not valid at the throat of the wormhole. This specific wormhole model has been obtained for $c_{3} = 7.23$, $c_{4} = 1.5$, $\alpha = 0.5$, $\beta = -0.05$, and $\lambda = -5$. The throat occurs for $r_{0} = 0.8$ and $b^{\prime}(r_{0}) \approx 0.015$.}
\label{fig:Fig0_1}
\end{figure}
\begin{figure}[h!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=80 mm]{Fig4_a.jpeg} &&
\includegraphics[width=80 mm]{Fig4_b.jpeg} \\
\end{array}$
\end{center}
\caption{The graphical behavior of the NEC in terms of the $P_{r}$ pressure is presented on the left plot. The right plot represents the graphical behavior of $\rho$. The model is described by Eq.~(\ref{eq:R2Rprime_rho}) and the presented graphical behavior has been obtained for $c_{3} = 1.23$, $c_{4} = 0.5$, $\alpha = 0.5$, $\lambda = 5$, and for different negative values of the $\beta$ parameter.}
\label{fig:Fig0_b}
\end{figure}
The depicted behaviors of the shape function and the energy conditions for a specific wormhole model, corresponding to $c_{3} = 7.23$, $c_{4} = 1.5$, $\alpha = 0.5$, $\beta = -0.05$, and $\lambda = -5$, can be found in Fig.~(\ref{fig:Fig0_1}). The throat of this specific wormhole occurs at $r_{0} = 0.8$ and $b^{\prime}(r_{0}) \approx 0.0154$. The study of this particular case shows that the energy conditions have the same qualitative behavior as in the case of the model with the energy density described by Eq.~(\ref{eq:RRprime_rho}). Therefore, we will omit further discussion of this issue, to save space. Rather, we would like to concentrate our attention on some region where both the NEC and WEC in terms of the $P_{r}$ pressure are fulfilled, since $\rho \geq 0$. We can see this in Fig.~(\ref{fig:Fig0_b}). The behavior of the NEC and $\rho$ depicted there corresponds to $c_{3} = 1.23$, $c_{4} = 0.5$, $\alpha = 0.5$, and $\lambda = 5$, and to some negative values of the $\beta$ parameter. Moreover, we also see that, for the same case, the NEC in terms of $P_{l}$ will be valid too, but only for $\beta \in [-0.1, 0.0]$. On the other hand, the DEC in terms of the $P_{r}$ pressure will not be valid at all, while the DEC in terms of $P_{l}$ will be fulfilled. In all cases, the parameter space can be divided in such a way that, in each region, some group of energy conditions is valid. In future analyses, once we are able to constrain the equation of state of the wormhole matter content, it will be possible to clearly identify the region of the model parameter space on which we should concentrate our attention.
In the next subsection we will consider other exact static wormhole solutions which, in order to be traversable, must be described by a constant shape function. When the shape function obtained is not constant, despite the complex form of the matter energy density, the corresponding solutions describe non-traversable wormholes, since they cannot satisfy the asymptotic flatness requirement. In summary, the study of these new models shows that viable traversable wormholes are obtained only for those values of the model parameters for which the shape function turns out to be constant.
\subsection{Matter with $\rho(r) = \alpha R(r) + \beta R^{2}(r)$}\label{ss:ssec_3}
The study of another case shows that, if we assume the matter content of the wormhole to be
\begin{equation}\label{eq:RR2_rho}
\rho = \alpha R(r) + \beta R^{2}(r),
\end{equation}
then we obtain two wormhole solutions. One solution will describe a wormhole with a constant shape function, i.e., $b(r) = const$~(which means that we have a traversable wormhole solution), while the second solution will describe a wormhole with
\begin{equation}\label{eq:RR2_br}
b(r) = c_5-\frac{r^3 (4 \alpha \lambda +16 \pi \alpha -1)}{24 \beta (\lambda +4 \pi )},
\end{equation}
where $c_{5}$ is the integration constant. Now, let us concentrate our attention on the last case, that is, when the shape function is given by Eq.~(\ref{eq:RR2_br}). In particular, for $\rho$, $P_{r}$ and $P_{l}$, we get
\begin{equation}
\rho = \frac{1-4 \alpha (\lambda +4 \pi )}{16 \beta (\lambda +4 \pi )^2},
\end{equation}
\begin{equation}
P_{r} = \frac{1}{48 (\lambda +4 \pi )^2} \left( \frac{4 \alpha (\lambda +4 \pi )-1}{\beta }-\frac{24 c_{5} (\lambda +4 \pi )}{r^3} \right),
\end{equation}
and
\begin{equation}
P_{l} = \frac{1}{48 (\lambda +4 \pi )^2} \left( \frac{4 \alpha (\lambda +4 \pi )-1}{\beta }+\frac{12 c_{5} (\lambda +4 \pi )}{r^3} \right),
\end{equation}
respectively, using Eqs.~(\ref{eq:rho}),~(\ref{eq:Pr}), and~(\ref{eq:Pl}). Moreover, the NEC in terms of $P_{r}$ and $P_{l}$, reads
\begin{equation}
\rho + P_{r} = \frac{1}{24 (\lambda +4 \pi )^2} \left( \frac{1-4 \alpha (\lambda +4 \pi )}{\beta }-\frac{12 c_{5} (\lambda +4 \pi )}{r^3} \right),
\end{equation}
and
\begin{equation}
\rho + P_{l} = \frac{6 \beta c_{5} (\lambda +4 \pi )+r^3 (1-4 \alpha (\lambda +4 \pi ))}{24 \beta (\lambda +4 \pi )^2 r^3}.
\end{equation}
On the other hand, the DEC in terms of $P_{r}$ and $P_{l}$, reads
\begin{equation}
\rho - P_{r} = \frac{6 \beta c_{5} (\lambda +4 \pi )+r^3 (1-4 \alpha (\lambda +4 \pi ))}{12 \beta (\lambda +4 \pi )^2 r^3},
\end{equation}
and
\begin{equation}
\rho - P_{l} = \frac{r^3 (1-4 \alpha (\lambda +4 \pi ))-3 \beta c_{5} (\lambda +4 \pi )}{12 \beta (\lambda +4 \pi )^2 r^3},
\end{equation}
respectively. However, further study shows that only the solutions with constant shape function represent traversable wormholes. In the other cases, we will have non-traversable wormhole models. Moreover, exact wormhole solutions possessing the same properties can be constructed with $\rho = \alpha R(r) + \beta R^{-2}(r)$, $\rho = \alpha R(r) + \beta r R^{2}(r)$, $\rho = \alpha R(r) + \beta r^{-1} R^{2}(r)$, $\rho = \alpha R(r) + \beta r^{2} R^{2}(r)$ and $\rho = \alpha R(r) + \beta r^{3} R^{2}(r)$, as well. In other words, in these cases the study also confirms that requiring the shape function to satisfy the necessary constraints, including the asymptotic flatness requirement, leaves only the wormhole models with constant shape function: the admissible values of the model parameters make the shape function $b(r)$ a constant. On the other hand, as in the two previous cases, the NEC and the DEC in terms of the $P_{r}$ and $P_{l}$ pressures, respectively, are not valid at the throat of the wormhole; their validity is observed only far from the throat. Another family of exact wormholes of the same nature can be constructed when the matter energy density has the following form
\begin{equation}
\rho = \alpha r^m R(r) \log (\beta R(r)).
\end{equation}
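The NEC and DEC combinations listed above for the model with shape function given by Eq.~(\ref{eq:RR2_br}) can be cross-checked symbolically. The following short script (an illustrative check using SymPy, not part of the derivation) verifies that each quoted combination is term-by-term consistent with the expressions for $\rho$, $P_{r}$ and $P_{l}$:

```python
import sympy as sp

r, alpha, beta, lam, c5 = sp.symbols('r alpha beta lambda c_5')
L = lam + 4*sp.pi  # shorthand for (lambda + 4*pi)

# rho, P_r, P_l for the shape function of Eq. (RR2_br), as quoted above
rho = (1 - 4*alpha*L) / (16*beta*L**2)
Pr  = ((4*alpha*L - 1)/beta - 24*c5*L/r**3) / (48*L**2)
Pl  = ((4*alpha*L - 1)/beta + 12*c5*L/r**3) / (48*L**2)

# NEC and DEC combinations as quoted in the text
nec_r = ((1 - 4*alpha*L)/beta - 12*c5*L/r**3) / (24*L**2)
nec_l = (6*beta*c5*L + r**3*(1 - 4*alpha*L)) / (24*beta*L**2*r**3)
dec_r = (6*beta*c5*L + r**3*(1 - 4*alpha*L)) / (12*beta*L**2*r**3)
dec_l = (r**3*(1 - 4*alpha*L) - 3*beta*c5*L) / (12*beta*L**2*r**3)

assert sp.simplify(rho + Pr - nec_r) == 0
assert sp.simplify(rho + Pl - nec_l) == 0
assert sp.simplify(rho - Pr - dec_r) == 0
assert sp.simplify(rho - Pl - dec_l) == 0
print("all NEC/DEC combinations consistent")
```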
We already mentioned that, in theory, we can glue an exterior flat geometry into the interior geometry at some junction radius, making these solutions represent traversable wormholes. However, an interesting question relevant to the models presented above arises. Namely, what is the role of $R^{\prime}(r)$ in the traversable wormhole formation process? On the other hand, another interesting question is: does the assumption $L_{m} = -\rho$, with the matter energy density considered in this subsection, prevent the formation of traversable wormholes? This should be answered, as well. We hope to discuss these issues in a forthcoming paper.
\section{Some models in $f(\textit{R}, \textit{T}) = R + \gamma R^{2} + 2 \lambda T$ gravity}\label{sec:RR2T}
In this section we want to address another interesting question concerning the models obtained in Sect.~\ref{ss:ssec_3}, i.e., the models which, for $f(\textit{R}, \textit{T}) = R + 2 \lambda T$ gravity, yield non-traversable wormholes. We will here consider these models from another viewpoint. Namely, if the reason for non-traversability were the considered form of $f(\textit{R}, \textit{T})$ gravity, then we could change this and consider, for instance, gravity of the form $f(\textit{R}, \textit{T}) = R + \gamma R^{2} + 2 \lambda T$. On the other hand, in order to construct exact wormhole models, we take advantage of the following shape function
\begin{equation}\label{eq:brgiven}
b(r) = \sqrt{\hat{r}_{0} r}.
\end{equation}
It is easy to see that it describes a traversable wormhole. In this case, according to the structure of the equations, we just need to reconstruct the forms of the $P_{r}$ and $P_{l}$ pressures from two algebraic equations.
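Indeed, the traversability of Eq.~(\ref{eq:brgiven}) can be verified numerically. The sketch below (for the value $\hat{r}_{0} = 2$ used later in this section) checks the throat condition $b(r_{0}) = r_{0}$, which gives $r_{0} = \hat{r}_{0}$, the flaring-out condition $b^{\prime}(r_{0}) = 1/2 < 1$, and asymptotic flatness $b(r)/r \to 0$:

```python
import math

r0_hat = 2.0
b  = lambda r: math.sqrt(r0_hat * r)        # shape function b(r) = sqrt(r0_hat * r)
db = lambda r: 0.5 * math.sqrt(r0_hat / r)  # its derivative b'(r)

r0 = r0_hat                        # throat: b(r0) = r0  =>  r0 = r0_hat
assert abs(b(r0) - r0) < 1e-12     # throat condition holds
assert db(r0) < 1.0                # flaring-out: b'(r0) = 1/2 < 1
assert b(1e8) / 1e8 < 1e-3         # asymptotic flatness: b(r)/r -> 0
print(r0, db(r0))                  # 2.0 0.5
```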
Let us, as an example, study the model described by Eq.~(\ref{eq:RR2_rho}). After some algebra, both pressures can be written in the following way
\begin{equation}\label{eq:Prgiven}
P_{r} = -\frac{2 r^2 (4 \alpha (\lambda +4 \pi )+1) \sqrt{r \hat{r}_{0}}+70 \gamma \sqrt{r \hat{r}_{0}}+8 \beta (\lambda +4 \pi ) \hat{r}_{0}-71 \gamma \hat{r}_{0}}{8 (\lambda +4 \pi ) r^5},
\end{equation}
and
\begin{equation}\label{eq:Plgiven}
P_{l} = \frac{256 \pi ^2 \left(\alpha r^2 \sqrt{r \hat{r}_{0}}+\beta \hat{r}_{0}\right) + B + C}{16 \lambda (\lambda +4 \pi ) r^5},
\end{equation}
where
$$B = 8 \pi \left(2 r^2 (8 \alpha \lambda -1) \sqrt{r \hat{r}_{0}}+50 \gamma \sqrt{r \hat{r}_{0}}+16 \beta \lambda \hat{r}_{0}-57 \gamma \hat{r}_{0}\right),$$ and
$$C = \lambda \left(2 r^2 (8 \alpha \lambda -1) \sqrt{r \hat{r}_{0}}+170 \gamma \sqrt{r \hat{r}_{0}}+16 \beta \lambda \hat{r}_{0}-185 \gamma \hat{r}_{0}\right).$$
This means that we have a traversable wormhole model, whose shape function is given by Eq.~(\ref{eq:brgiven}), whose matter content is described by Eq.~(\ref{eq:RR2_rho}) for the energy density, and whose $P_{r}$ and $P_{l}$ pressures are given by Eqs.~(\ref{eq:Prgiven}) and~(\ref{eq:Plgiven}), respectively. Moreover, the plotted behaviors of the NEC and DEC in terms of both pressures, $P_{r}$ and $P_{l}$, are shown in Fig.~(\ref{fig:Fig1_a}). These behaviors have been obtained for $\hat{r}_{0} = 2$, $\lambda = -15$, $\beta = 0.7$, $\alpha = 1.5$, and for different values of the $\gamma$ parameter, responsible for the $R^{2}$ contribution to gravity. We see that there is a region where both energy conditions in terms of both pressures are fulfilled. Moreover, we also checked that, for the same region presented in Fig.~(\ref{fig:Fig1_a}), the SEC is valid and, since $\rho \geq 0$, we also expect fulfillment of the WEC. However, another interesting situation, which deserves our attention, has been observed for $\hat{r}_{0} = 2$, $\lambda = 15$, $\beta = 0.7$ and $\alpha = -1.5$, with different values for the $\gamma$ parameter. In particular, we have observed that, for small $r$ and for the considered $\gamma \in [0,3]$, the NEC in terms of both pressures is not valid. Moreover, fulfillment in both cases will be achieved for the same $r$ and $\gamma$. The validity of the DEC in terms of $P_{l}$ will be observed in the whole considered region, while the DEC in terms of $P_{r}$, for small $r$,
will not be satisfied. This is the same behavior we have observed for the other models of this paper. But why is this particular model interesting for us? Because further analysis shows that, for small $r$, we will have $\rho < 0$, which means that the WEC is also violated. Now, in the case of the previous model, the violation of the WEC was due to the non-validity of $\rho + P \geq 0$, while we always had $\rho \geq 0$; here, instead, the violation comes from the violation of both $\rho + P \geq 0$ and $\rho \geq 0$. We also see that, for the case considered, the parameter $\gamma$ does not play a role in the validity of the energy conditions. Definitely, the family of wormholes presented here requires further study to be better understood.
\begin{figure}[tp!]
\begin{center}$
\begin{array}{cccc}
\includegraphics[width=80 mm]{Fig5_a.jpeg} &&
\includegraphics[width=80 mm]{Fig5_b.jpeg} \\
\includegraphics[width=80 mm]{Fig5_c.jpeg} &&
\includegraphics[width=80 mm]{Fig5_d.jpeg} \\
\end{array}$
\end{center}
\caption{The behavior of the NEC in terms of the $P_{r}$ and $P_{l}$ pressures is depicted on the top panel. The NEC in terms of $P_{r}$ corresponds to the top-left plot, while the top-right plot is for the NEC in terms of the $P_{l}$ pressure. The bottom panel represents the graphical behavior of the DEC in terms of both pressures. In particular, the bottom-left plot corresponds to the behavior of the DEC in terms of the $P_{r}$ pressure. The plot of the DEC in terms of the pressure $P_{l}$ is presented on the bottom-right plot. The model is described by Eq.~(\ref{eq:RR2_rho}) and the plotted behavior for the energy conditions is for the values $\hat{r}_{0} = 2$, $\lambda = -15$, $\beta = 0.7$, $\alpha = 1.5$, and for different values of the parameter $\gamma$, responsible for the $R^{2}$ contribution to gravity.}
\label{fig:Fig1_a}
\end{figure}
Another candidate for a traversable wormhole can be a model described by the following matter content
\begin{equation}
\rho(r) = \alpha R(r) + \beta r^{3} R^{2}(r),
\end{equation}
with
\begin{equation}
P_{r} = -\frac{8 \beta (\lambda +4 \pi ) r^3 \hat{r}_{0}+2 r^2 (4 \alpha (\lambda +4 \pi )+1) \sqrt{r \hat{r}_{0}}+70 \gamma \sqrt{r \hat{r}_{0}}-71 \gamma \hat{r}_{0}}{8 (\lambda +4 \pi ) r^5},
\end{equation}
and
\begin{equation}
P_{l} = \frac{256 \pi ^2 r^2 \left(\alpha \sqrt{r \hat{r}_{0}}+\beta r \hat{r}_{0}\right) + B_{1} + C_{1}}{16 \lambda (\lambda +4 \pi ) r^5},
\end{equation}
where
$$B_{1} = \lambda \left(16 \beta \lambda r^3 \hat{r}_{0}+2 r^2 (8 \alpha \lambda -1) \sqrt{r \hat{r}_{0}}+5 \gamma \left(34 \sqrt{r \hat{r}_{0}}-37 \hat{r}_{0}\right)\right),$$ and
$$C_{1} = 8 \pi \left(16 \beta \lambda r^3 \hat{r}_{0}+2 r^2 (8 \alpha \lambda -1) \sqrt{r \hat{r}_{0}}+\gamma \left(50 \sqrt{r \hat{r}_{0}}-57 \hat{r}_{0}\right)\right),$$
qualitatively having the same behavior in terms of the energy conditions.
\section{\large{Discussion and conclusions}}\label{sec:Discussion}
In this paper, we have constructed a number of wormhole models corresponding to the family $f(\textit{R}, \textit{T})$ of extended theories of gravity. We have restricted ourselves to the two cases $f(\textit{R}, \textit{T}) = R + 2 \lambda T$ and $f(\textit{R}, \textit{T}) = R + \gamma R^{2} + 2 \lambda T$, respectively ($T = \rho + P_{r} + 2P_{l}$ being the trace of the energy-momentum tensor). In the first case, we have investigated three different wormhole models, assuming that the energy density profile of the wormhole matter can be parametrized by the Ricci scalar. Specifically, we have considered the following three possibilities for $\rho$, namely $\rho(r) = \alpha R(r) + \beta R^{\prime}(r)$, $\rho(r) = \alpha R^{2}(r) + \beta R^{\prime}(r)$ and $\rho(r) = \alpha R(r) + \beta R^{2}(r)$, to describe the wormhole matter. In the first two cases, we have proven the possibility of traversable wormhole formation. Moreover, the study of a particular wormhole solution described, e.g., by $\rho(r) = \alpha R(r) + \beta R^{\prime}(r)$, shows that for appropriate values of the parameters of the model, one can expect violation of the NEC in terms of $P_{r}$, and of the DEC in terms of the $P_{l}$ pressure, at the throat. On the other hand, $\rho \geq 0$, and the validity of both the NEC and DEC in terms of the $P_{l}$ and $P_{r}$ pressures, respectively, can be checked everywhere, including at the wormhole throat. Therefore, we also observe violation of the WEC in terms of the $P_{r}$ pressure, while it is still valid in terms of the $P_{l}$ pressure. Moreover, for the same model, our study also shows that, generically, for $\beta > 0$, regions will be found where the violation of the NEC in terms of $P_{r}$ will induce a violation of the DEC in terms of $P_{l}$. However, if we consider $\beta < 0$, then we can encounter domains where both energy conditions, in terms of both pressures, are valid at the same time.
On the other hand, for the second model, described by $\rho(r) = \alpha R^{2}(r) + \beta R^{\prime}(r)$, a particular traversable wormhole solution has been found, provided we take $c_{3} = 7.23$ and $c_{4} = 1.5$~(the integration constants), $\alpha = 0.5$, $\beta = -0.05$, and $\lambda = -5$, respectively. The throat of this specific wormhole occurs for $r_{0} = 0.8$ and $b^{\prime}(r_{0}) \approx 0.0154$. The study of this particular case shows that the energy conditions have the same qualitative behavior as in the case of the previous model. However, further analysis shows the existence of regions where the NEC and WEC in terms of the $P_{r}$ pressure can be valid, since $\rho \geq 0$, and this can be achieved for $c_{3} = 1.23$, $c_{4} = 0.5$, $\alpha = 0.5$, $\lambda = 5$, and for some negative values of the parameter $\beta$. Moreover, we also saw that, for the same case, the NEC in terms of $P_{l}$ will be valid only for $\beta \in [-0.1, 0.0]$. On the other hand, the DEC in terms of the $P_{r}$ pressure will not be valid at all, while the DEC in terms of $P_{l}$ will be fulfilled.
In summary, we can claim that, in all cases, the parameter space can be split in such a way that some set of the energy conditions is indeed fulfilled. In the future, if we manage to constrain the equation of state of the wormhole matter, then it might be possible to clearly identify the region of the model parameter space on which one should further concentrate attention.
We now briefly mention other important aspects of our study, when the energy density profile of the wormhole matter is given by the following form
$\rho(r) = \alpha R(r) + \beta R^{2}(r)$. In this case our analysis has led to two solutions for the shape function. The constant solution describes a traversable wormhole, whereas non-constant shape functions describe non-traversable wormhole solutions. Moreover, further study also shows that non-traversable exact wormhole solutions can be found for the cases when $\rho = \alpha R(r) + \beta R^{-2}(r)$, $\rho = \alpha R(r) + \beta r R^{2}(r)$, $\rho = \alpha R(r) + \beta r^{-1} R^{2}(r)$, $\rho = \alpha R(r) + \beta r^{2} R^{2}(r)$, $\rho = \alpha R(r) + \beta r^{3} R^{2}(r)$, and $\rho = \alpha r^m R(r) \log (\beta R(r))$, respectively. In theory, we know that, in the case of a non-traversable wormhole solution, we can glue an exterior flat geometry into the interior geometry at some junction radius, making these solutions represent traversable wormholes. This procedure can be put to work here. On the other hand, a sequence of interesting questions relevant to the models discussed above remains to be answered. In particular, what is the main role of the term $R^{\prime}(r)$ in the traversable wormhole formation procedure? Another question is: does the assumption $L_{m} = -\rho$, with the matter energy density considered in this paper, radically prevent the formation of traversable wormholes? Or is there a different reason for that? We expect to be able to answer these questions in a forthcoming paper.
Additionally, in the second part of the paper, we have dealt with two particular exact traversable wormhole models, for $f(\textit{R}, \textit{T}) = R + \gamma R^{2} + 2 \lambda T$ gravity.
We have considered, in particular, the two matter profiles given by $\rho(r) = \alpha R(r) + \beta R^{2}(r)$, and $\rho(r) = \alpha R(r) + \beta r^{3} R^{2}(r)$, respectively. Both of them describe wormhole models with shape function $b(r) = \sqrt{\hat{r}_{0} r}$~(where $\hat{r}_{0}$ is a constant). For this case, we have also studied the validity of the energy conditions and, specifically, in the case of the model with $\rho(r) = \alpha R(r) + \beta R^{2}(r)$, we have observed that there is a region where both energy conditions, i.e., the NEC~($\rho + P_{r} \geq 0$ and $\rho + P_{l} \geq 0$), and DEC~($\rho - P_{r} \geq 0$ and $\rho - P_{l} \geq 0$), in terms of both pressures, are fulfilled.
To finish, we have here obtained a number of new, exact wormhole solutions, which in several cases are also traversable. The new models have been derived by assuming specific forms of the wormhole matter profile, parametrized by the Ricci scalar. The study of the validity of the energy conditions reveals that the solutions constructed exhibit a rich behavior, it being always possible to find some regions in which the NEC and the DEC, in terms of both pressures, are fulfilled simultaneously. On the other hand, the regions where some of the energy conditions are violated may also prove very useful, in the future, when some astronomical information on wormholes and their matter content starts to be accumulated. The study carried out here is merely an initial step towards a deeper investigation of these new types of wormholes. In particular, we still need to understand what is the main role of the $R^{\prime}(r)$ term in traversable wormhole formation. Another interesting question is the following: is the assumption $L_{m} = -\rho$, with the matter energy density considered in this paper, crucial in order to prevent the formation of traversable wormholes? Is there another reason for that? These issues are relevant, in view of the situation observed in Sect.~\ref{ss:ssec_3}. We expect to clarify them in forthcoming papers involving, in particular, the study of the shadows and gravitational lensing properties of the new wormholes.
\section*{Acknowledgements}
EE has been partially supported by MINECO (Spain), Project FIS2016-76363-P, by the Catalan Government 2017-SGR-247, and by the CPAN Consolider Ingenio 2010 Project.
MK is supported in part by Chinese Academy of Sciences President's International Fellowship Initiative Grant (No. 2018PM0054).
Mental health disorders are a significant problem in the United States and worldwide, with the number of adults struggling with mental illness increasing every day \cite{czeisler2020mental}. Two common mental health illnesses are Post-Traumatic Stress Disorder (PTSD) and depression, which affects 250 million people worldwide \cite{world2017depression}. PTSD symptoms include sleep disturbances, hyperarousal, and persistent intrusive memories of trauma, while depression symptoms include fatigue, markedly diminished interest in most activities, and depressed mood \cite{american2013diagnostic}. Wearable devices can detect alterations in daily activity and other behavioral patterns that could result from worsening symptoms.
Wearable devices are ideal tools for monitoring because they provide a passive data collection method that can track high-risk individuals and detect when a user's physical behavior becomes anomalous or suggests a negative change in prognosis. Actigraphy data has been used to estimate disturbances in sleep \cite{long2017actigraphy} and diagnostic and severity information for mental health disorders \cite{tahmasian2013clinical, khawaja2014actigraphy, hvolby2008actigraphic}, though previous work suggests detecting sleep disturbances experienced by people with mental health disorders is markedly more difficult \cite{biddle2015accuracy, inman1990sleep, chung2010relationship}. Furthermore, actigraphy-derived rest--activity metrics and variability have been previously investigated in depression cohorts \cite{burton2013activity, krafty2019measuring}. Deep learning methods could learn meaningful representations from the actigraphy data. Also, unsupervised learning is well suited for this problem space because the data without the labels derived from clinical surveys could be utilized. This research represents the first attempt to apply unsupervised deep learning methods to actigraphy data for feature extraction and representation.
In this work, we aim to develop models to estimate PTSD and depression as determined by clinical surveys using locomotor activity measured from a wearable device. We developed two models that used actigraphy data from a four-week window previous to the clinical surveys.
\section{Methods}
\subsection{Dataset and Preprocessing}
The data used in this work is part of the Advancing Understanding of RecOvery afteR traumA (AURORA) study dataset, which consists of individuals enrolled in the emergency department within 72 hours after experiencing a traumatic event \cite{mclean2020aurora}.
For the current study, we present the analysis of the data from 1113 participants enrolled between July 31, 2017, and July 31, 2019. Participants were $35\pm13.1$ years old and 36\% male. Traumatic events that qualified automatically for study enrollment were motor vehicle collision, physical assault, sexual assault, fall $>10$ feet, or mass casualty incidents. Participants were asked to wear a research wristwatch (Verily Life Sciences, San Francisco) at least 21 hours a day for the study period and at subsequent times that vary by the study participant.
\begin{figure}[b]
\centering
\includegraphics[width=0.80\textwidth]{figures/fused.pdf}
\caption{(a) Daily actigraphy levels for one participant. (b) CNN-LSTM and (c) variational autoencoder models.}
\label{fused}
\end{figure}
Clinical follow-up surveys via web-based or phone assessments captured the mental health symptoms at eight weeks after initial evaluation. The Posttraumatic Stress Disorder Checklist for DSM-5 (PCL-5) was used to measure PTSD symptoms and participants with PCL-5 score greater than 28 were labeled as PTSD \cite{blevins2015posttraumatic}. Scored depression variables from PROMIS Depression - Short Form 8b (PROM-Dep8b) were used to measure depression symptoms, with a threshold of 60 for depression \cite{amtmann2014comparing}. One item from Pittsburgh Sleep Quality Index Addendum \cite{insana2013validation} (``how often did you awaken from sleep with severe anxiety or panic’’) was used to assess the difficulty in staying asleep (PanicSleep). Also, the participant's general physical health status in the 30 days pre-trauma was used as a feature to test if it could increase the performance of models based on passive actigraphy data. This pre-health score is a derived normative score based on questions from the 12-Item Short Form Health Survey (SF-12) \cite{ware199612}.
The raw 3D accelerometer data were collected from the research wristwatch at a sampling frequency of 30 Hz. These data were converted to activity counts, which measure movement intensity. The Z-axis actigraphy data were bandpass filtered from 0.25-11 Hz to eliminate extremely slow or fast movements \cite{Ancoli-Israel2003}. Then, the maximum values inside non-overlapping one-second windows were summed for each 30-second `epoch' of data \cite{Borazio2014}. Fig. \ref{fused} (a) is a ``double plot'' that shows activity levels measured over 28 days. Each column of the plot is created by stacking two consecutive days of data. Participants with more than 50\% missingness were excluded from the analysis, and missing data sections were replaced with zeros.
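The epoching procedure just described can be sketched as follows. This is an illustrative reimplementation, not the exact study pipeline; in particular, the Butterworth filter and its order are assumptions, since the text only specifies the 0.25-11 Hz passband:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def activity_counts(z, fs=30, epoch_s=30):
    # Bandpass the z-axis signal to 0.25-11 Hz (4th-order Butterworth is an
    # assumption), take the maximum absolute value in each non-overlapping
    # 1-s window, then sum those per-second maxima over each 30-s epoch.
    sos = butter(4, [0.25, 11.0], btype='bandpass', fs=fs, output='sos')
    zf = np.abs(sosfiltfilt(sos, z))
    n_sec = len(zf) // fs
    per_sec = zf[:n_sec * fs].reshape(n_sec, fs).max(axis=1)
    n_ep = n_sec // epoch_s
    return per_sec[:n_ep * epoch_s].reshape(n_ep, epoch_s).sum(axis=1)

counts = activity_counts(np.random.randn(30 * 3600))  # one hour of toy data
print(counts.shape)  # (120,)
```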
\subsection{Models and Experimental Setup}
The outcomes of the clinical surveys were used to create the classes for the binary classification experiments. In the first experiment, the PCL-5 survey was used by itself, while in the second experiment, PCL-5 and PanicSleep survey scores were used. Lastly, all three surveys were combined to find the participants who experienced both depression and PTSD symptoms to create the unhealthy class. In the experiments, all participants from the unhealthy class were used, and the over-represented healthy class was under-sampled so that the results were not biased due to the unequal class prevalence. For the deep learning models, internal 5-fold cross-validation was performed. All experiments were repeated 30 times on external folds with different random samples from the majority (healthy) class.
A variational autoencoder (VAE) is an unsupervised generative model that has an encoding phase, in which the input data is projected onto lower-dimensional latent representations, and a decoding phase that reconstructs the input, as shown in Fig. \ref{fused} (c). However, in the VAE model, the encoder is trained under the restriction that the latent representations follow a Gaussian distribution $N(Z_{\mu},Z_{\sigma})$. In this work, unlabelled actigraphy maps were used to train a convolutional VAE with two 2D convolution layers (Conv2D) with 16 and 32 filters, respectively, and kernel sizes of 3. The number of units in the dense layers was set to 16. The numbers of filters in the Conv2DTranspose layers were 32, 16, and 1. The embedding dimension of the VAE was 8. The model was trained for 30 epochs with a batch size of 128. Then, the latent representation of the actigraphy maps ($z_{act}$) was used as input features to a logistic regression model in binary classification experiments.
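The final classification stage can be sketched as follows. Here the 8-dimensional latents $z_{act}$ are replaced by random placeholders (in the real pipeline they come from the trained VAE encoder), so the score printed is only chance level; the class sizes mirror the third experiment of Table \ref{VAEresults}:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
z_act = rng.normal(size=(605, 8))        # placeholder for the 8-d VAE latents
y = np.r_[np.zeros(494), np.ones(111)]   # 494 healthy / 111 unhealthy

# Under-sample the majority (healthy) class to match the minority class,
# as done before each repetition of the experiments.
keep = np.r_[rng.choice(np.where(y == 0)[0], size=111, replace=False),
             np.where(y == 1)[0]]

clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, z_act[keep], y[keep], cv=5, scoring='roc_auc')
print(round(auc.mean(), 2))  # near 0.5 on random features
```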
Secondly, an alternative supervised CNN-LSTM model was trained to estimate mental health outcomes from clinical surveys. The number of filters in the Conv1D layer was set to 32, and the kernel size was 3. The number of units in the LSTM and the dense layer was set to 20. Actigraphy data were inputted as 24-hour subsequences, and the model was trained for 30 epochs with a batch size of 32.
Lastly, 100 healthy and 100 unhealthy artificial actigraphy maps were generated with VAE models by using randomly sampled encoding vectors. The artificial data was used in the training step of the CNN-LSTM model to test if the performance will be improved.
\section{Results and Discussion}
\begin{figure}[htp]
\centering
\includegraphics[width=0.55\textwidth]{figures/latenttraversal.pdf}
\caption{Latent traversals of the pre-trained VAE model. Each row reconstructs an actigraphy map as the value of one latent dimension is traversed over $[-2, 2]$, while keeping the values of all other latent variables fixed. Most of the variables show a decrease of daily energy, while $z_{8}$ shows a circadian phase change.}
\label{traversals}
\end{figure}
In this work, we analyzed actigraphy data to estimate mental health outcomes after acute trauma. First, we used a convolutional VAE to extract unsupervised features from the four-week actigraphy data plots. The latent variables from the VAE model were fed into a logistic regression classifier for the binary classification task. We visualized what the different VAE latent representations learned by plotting their traversals as shown in Fig. \ref{traversals}. Then, we compared the performance with fully-supervised CNN-LSTM models and also used artificially generated data from the VAE model to enhance the performance of the CNN-LSTM approach.
We observed that when the unsupervised features extracted with the VAE model were combined with the physical health before the traumatic event (captured with the SF-12), the model achieved an AUC of $0.64$ and an accuracy of $0.60$ in differentiating healthy participants from participants showing PTSD and depression symptoms as determined by clinical surveys. When the model was reduced to passive data only (by removing the SF-12 feature), the AUC dropped by 3\%, but the accuracy was unchanged. The model performance was also tested to identify participants with PTSD and sleep disturbance, as shown in Table \ref{VAEresults}. The CNN-LSTM model had an accuracy of $0.56$ and an AUC of $0.57$ in classifying healthy participants and participants showing PTSD and depression symptoms. By incorporating the artificial data generated by the VAE, the recall of the model increased from $0.45$ to $0.60$, while the other metrics did not change.
\begin{table}[t!]
\centering
\caption{VAE model mental health outcome estimation performance. Results are reported as mean(standard deviation) of the external folds of each experiment.}
\label{VAEresults}
\begin{tabular}{@{}ccccccc@{}}
\toprule
\textbf{Outcome} & \textbf{Features} & \textbf{\begin{tabular}[c]{@{}c@{}}Num.\\ healthy/\\ unhealthy\end{tabular}} & \textbf{Acc.} & \textbf{AUC} & \textbf{Precision} & \textbf{Recall}\\ \midrule
{\small PCL-5} & $z_{act}$ & 554/559 & 0.55(0.01) & 0.56(0.01) & 0.55(0.01) & 0.51(0.01)\\ \midrule
\begin{tabular}[c]{@{}c@{}}{\small PanicSleep}\\ {\small PCL-5}\end{tabular} & $z_{act}$ & 520/149 & 0.59(0.02) & 0.61(0.03) & 0.59(0.02) & 0.57(0.02)\\ \midrule
\begin{tabular}[c]{@{}c@{}}{\small PanicSleep}\\ {\small PCL-5}\\ {\small PROM-Dep8b}\end{tabular} & $z_{act}$ & 494/111 & 0.60(0.03) & 0.61(0.03) & 0.60(0.04) & 0.58(0.03)\\ \midrule
\begin{tabular}[c]{@{}c@{}}{\small PanicSleep}\\ {\small PCL-5}\\ {\small PROM-Dep8b}\end{tabular} & \begin{tabular}[c]{@{}c@{}}$z_{act}$\\ {\small SF-12}\end{tabular} & 479/108 & 0.60(0.02) & 0.64(0.03) & 0.61(0.02) & 0.57(0.03)\\ \bottomrule
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{CNN-LSTM model mental health outcome estimation performance. Results are reported as mean(standard deviation) of the external folds of each experiment.}
\label{CNNLSTMresults}
\begin{tabular}{@{}ccccccc@{}}
\toprule
\textbf{Outcome} & \textbf{Features} & \textbf{\begin{tabular}[c]{@{}c@{}}Num.\\ healthy/\\ unhealthy\end{tabular}} & \textbf{Acc.} & \textbf{AUC} & \textbf{Precision} & \textbf{Recall} \\ \midrule
{\small PCL-5} & $z_{act}$ & 554/559 & 0.52(0.01) & 0.53(0.01) & 0.53(0.02) & 0.40(0.04) \\ \midrule
\begin{tabular}[c]{@{}c@{}}{\small PanicSleep}\\ {\small PCL-5}\end{tabular} & $z_{act}$ & 520/149 & 0.56(0.03) & 0.59(0.03) & 0.58(0.04) & 0.48(0.06)\\ \midrule
\begin{tabular}[c]{@{}c@{}}{\small PanicSleep}\\ {\small PCL-5}\\ {\small PROM-Dep8b}\end{tabular} & $z_{act}$ & 494/111 & 0.56(0.04) & 0.57(0.04) & 0.58(0.05) & 0.45(0.08)\\ \bottomrule
\end{tabular}
\end{table}
In conclusion, by leveraging unsupervised learning, the VAE model extracts more informative features and achieves higher accuracy than the CNN-LSTM. The VAE approach compresses four weeks' worth of actigraphy data into an $8$-dimensional vector. This method could therefore yield substantial memory savings in applications with more data streams or in long-term studies, and could be adapted to different and novel devices.
\begin{ack}
The funding for the study was provided by NIMH U01MH110925, the US Army Medical Research and Materiel Command, The One Mind Foundation, and The Mayday Fund. Verily Life Sciences and Mindstrong Health provided some of the hardware and software used to perform study assessments.
\end{ack}
\section{Introduction} \label{s:intro}
The number of possible social choice rules is huge, and even the number
of those singled out in the literature for analysis is large.
Researchers have tried many different axioms in order to classify and
characterize these rules, sometimes leading to impossibility theorems.
New rules are still being introduced, and the subject is far from tidy.
A promising unifying framework is that of \emph{distance
rationalization} (abbreviated DR), whereby some subset $D$ (the
\emph{consensus set}) of the set of elections is distinguished. That
subset is further partitioned into finitely many subsets, on each of which there is a
different social outcome. We choose a distance measure $d$ on elections,
and for each election $E$ outside $D$, an outcome is chosen socially if
and only if it is chosen in some election in $D$ minimizing the distance
under $d$ to $E$.
This approach
``decomposes'' a rule into simpler components $D$ and $d$.
Although the basic idea is quite old (arguably going back to Condorcet's maximum likelihood approach to voting), systematic study of this approach began with
Nitzan, Lerer and Campbell \cite{Nitz1981, LeNi1985, CaNi1986}. Their work shows that almost
every known rule can be represented
in the DR framework, and the main interest in the subject is when we can
choose the distance and consensus notions to be natural and computationally tractable. More recently, Elkind, Faliszewski, and Slinko \cite{EFS2012, EFS2015, ES2015}
have further developed the theory, following Meskanen and Nurmi \cite{MeNu2008}.
In particular, they have focused on the important class of \emph{votewise} distances and
obtained useful sufficient conditions on $(D,d)$ so that the induced rule
satisfies desirable properties such as monotonicity, anonymity, and
homogeneity.
\subsection{Our contribution} \label{ss:contrib}
We deal with several foundational and definitional issues,
many of which have not been discussed by previous authors (in some cases
because the level of generality they used was not sufficient to
distinguish these concepts). Some of our
contribution consists of a more efficient and rigorous presentation of
known material. Section~\ref{s:defs} develops the basic notation and terminology. Our
approach is similar to that taken by previous researchers, but there are
some improvements. We aim to operate generally (for example, by using a hemimetric
rather than a metric, and considering social choice and social welfare functions in a single analysis), and explicitly distinguish several concepts that have
sometimes been conflated in previous work. In Section~\ref{s:foundation}
we give necessary and sufficient conditions for a rule to be distance rationalizable, improving slightly on results of the abovementioned authors. We pose an interesting question regarding uniqueness of representation
in the DR framework, which does not appear to have been noticed before. We give
a counterexample in Section~\ref{ss:unique}.
In Section~\ref{s:quot} we explain how
equivalence relations and symmetries between elections allow us to describe DR
rules more compactly, and make the connection between the original profile-based definitions and the quotient representations explicit. The distinction between compatible and totally compatible distances is important and new, and the idea of a distance being \emph{simple} with respect to an equivalence relation is also new as far as we know. None of our results in this section rely on the distance being votewise and are proved for general consensuses; the applications therefore generalize results of Elkind, Faliszewski and Slinko \cite{EFS2015}. In particular, we make the connection between $\ell^1$-votewise distances and
the Earth Mover distance, relating the subject of distance rationalization of anonymous rules to the theory of optimal transportation and maximum weight matchings. We believe that this new connection will prove
fruitful in future work.
We apply the above results to neutrality and anonymity, obtaining complete characterizations in Propositions~\ref{prop:neut} and~\ref{prop:anon}. In Section~\ref{s:homog} we deal with homogeneity, which is not quite covered by the results on groups. Our approach shows that the reason Dodgson's rule is not homogeneous is that the equivalence relation is induced by the action of a monoid (``group without inverses'') that is not a group.
Specializing to votewise distances, we concentrate in Section~\ref{s:VMP} on what we term the Votewise Minimizer Property, which is a way of requiring the consensus and distance to combine well. This allows us to give improved sufficient conditions for DR rules to satisfy homogeneity, consistency, and continuity.
\section{Basic definitions} \label{s:defs}
We use standard concepts of social choice theory. Not all of these concepts have completely
standardized names. We shall need to deal with several candidate and voter sets simultaneously, which explains the generality of our definitions.
However in many cases it suffices to deal with a fixed finite voter and candidate set.
\begin{defn}
\label{def:rankings}
We fix an infinite set $C^* = \{c_1, c_2, \dots \}$ of potential \hl{candidates} and an infinite set $V^*= \{v_1, v_2, \dots \}$ of potential
\hl{voters}. Let $C\subseteq C^*$. For each $s\geq 1$, an
\hl{$s$-ranking} is a strict linear order of $s$ elements chosen from
$C$. The set of all $s$-rankings is denoted $L_s(C)$. When $C$ is finite and $s = |C|$, we
write simply $L(C)$. When $s=1$, we identify $L_1(C)$ with $C$ in the natural way.
\end{defn}
\begin{remark}
When $C$ is finite, of size $m$ say, the set $L_s(C)$ consists of strict linear orderings of $s$ distinct elements of $C$ and has size $m(m-1)\cdots (m-s+1)$. By fixing a default linear
ordering on $C$, we can interpret elements of $L_s(C)$ as partial permutations of $C$ in the usual way.
\end{remark}
\begin{defn}
\label{def:profiles}
A \hl{profile} is a function $\pi: V \to L(C)$ where $V\subset V^*$ and $C\subset C^*$ are finite.
We denote the set of all profiles by $\mathcal{P}$. An \hl{election} is a triple $(C,V, \pi)$
with $\pi\in \mathcal{P}$ and $\pi:V \to L(C)$. We denote the set of all elections with fixed $C$ and $V$ by
$\mathcal{E}(C, V)$, and the set of all elections by $\mathcal{E}$.
\end{defn}
\begin{remark}
By definition $\pi(v)\in L(C)$ for each $v\in V$.
If $C$ is linearly ordered as described above, then $\pi(v)^{-1}$ denotes the inverse permutation, and for each
$c\in C$, $r(\pi(v),c):=\pi(v)^{-1}(c)$ gives the rank of $c$ in $v$'s preference order.
Of course, $C$ and $V$ are implicit in the definition of $\pi$, so strictly speaking an election is completely determined by a profile.
We distinguish the two concepts because we sometimes want to deal with several different voter or candidate sets at the same time, and because
$C$ is not really completely determined --- any superset of $C$ would also work.
\end{remark}
\begin{defn}
\label{def:rules}
A \hl{social rule of size $s$} is a function $R$ that takes each election $E = (C,V, \pi)$ to
a nonempty subset of $L_s(C)$. When there is a unique $s$-ranking chosen, the
word ``rule'' becomes ``function''. When $s=1$, we have the usual
\hl{social choice function}, and when $s=m$ the usual \hl{social welfare
function}.
For each subset $D$ of $\mathcal{E}$ we can consider a \hl{partial social rule with domain $D$}
to be defined as above, but with domain restricted to $D$. We denote the domain of a partial social rule $R$ by $D(R)$. If $R$ and $R'$ are partial social rules such that $D(R) \subseteq D(R')$ and $R(E) = R'(E)$ for all $E \in D(R)$ then we say that $R'$ \hl{extends} $R$.
\end{defn}
\begin{remark}
Most previous work has dealt only with the cases $s=1$ and $s=m$.
\end{remark}
\subsection{Consensus}
\label{ss:consensus}
Intuitively, a consensus is simply a socially agreed unique outcome on some set of
elections. We now define it formally.
\begin{defn}
\label{def:cons}
An $s$-\hl{consensus} is a partial social function $\mathcal{K}$ of
size $s$. The domain $D(\mathcal{K})$ of
$\mathcal{K}$ is called an \hl{$s$-consensus set}
and is partitioned into the inverse images $\mathcal{K}_r:=
\mathcal{K}^{-1}(\{r\})$.
\end{defn}
\begin{remark}
Note that we allow $\mathcal{K}_r$ to be empty. This happens rarely for natural rules in the distance
rationalizability framework, because it implies that there is no election for which $r$ is the unique social choice.
However it is technically useful and allows us to deal with varying sets of candidates.
\end{remark}
It often makes sense to ensure coherence between the various values of $s$ for which we formalize a given consensus notion.
\begin{defn}
\label{def:restrict}
Let $\mathcal{K}$ be a $1$-consensus. For each $s$ we define an $s$-consensus $\mathcal{K}_{(s)}$ (the \hl{$s$-restriction} of $\mathcal{K}$) as follows. Since $\mathcal{K}$ is a $1$-consensus, $\mathcal{K}_c$ is defined for each candidate $c$. Given $E=(C,V, \pi)\in \mathcal{E}$, define $E_{-c}$ to be the election $(C\setminus\{c\}, V, \pi')$, where $\pi'$ is obtained from $\pi$ by erasing $c$ from each ranking.
Let $D_2$ be the set of all elections $E=(C, V, \pi)$ such that $E$ and $E_{-c}$ both belong to the domain of $\mathcal{K}$, where $c = \mathcal{K}(E)$. Letting $c' =
\mathcal{K}(E_{-c})$, define $\mathcal{K}_{(2)}$ on $D_2$ to output the $2$-ranking $cc'$. Continue by induction, reducing the domain at each step if necessary, until a single $s$-ranking is output.
\end{defn}
Several specific consensuses have been described in the literature. Here we unify the presentation
of several of the most common ones.
\begin{defn}(qualified majority consensus)
\label{def:sunam}
Let $1/2 \leq \alpha < 1$. The \hl{$(\alpha, s)$-majority consensus}
$\unam^{(\alpha, s)}$ is the $s$-consensus with domain consisting of
all elections with the following property: there is some fraction $p> \alpha$ of the voters,
all of whom agree on the order of the top $s$
candidates. The consensus choice is this common $s$-ranking.
Special cases:
\begin{itemize}
\item When $\alpha = 1/2$, we obtain the usual \hl{majority $s$-consensus} $\maj^s$.
\item The limiting value as $\alpha \to 1$ gives the case of unanimity. We denote this by $\sunam^s$. When $s=|C|$, we simply write $\sunam$ (called the
\hl{strong unanimity consensus}),
whereas when $s = 1$, for consistency with previous authors we denote it $\wunam$, the \hl{weak unanimity consensus}.
\end{itemize}
\end{defn}
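The membership test for this consensus is easy to make concrete. The following Python sketch (not part of the formal development; the candidate names and profile are illustrative, and a profile is represented as a sequence of rankings listed top to bottom) returns the consensus $s$-ranking when strictly more than a fraction $\alpha$ of voters agree on the order of the top $s$ candidates, and \texttt{None} when the election lies outside the domain:

```python
from collections import Counter

def alpha_majority_consensus(profile, alpha, s):
    """Return the consensus s-ranking if strictly more than a fraction
    alpha of the voters agree on the order of the top s candidates,
    and None if the election lies outside the consensus domain."""
    counts = Counter(r[:s] for r in profile)   # top-s prefixes of each vote
    prefix, k = counts.most_common(1)[0]       # most popular prefix
    return prefix if k > alpha * len(profile) else None

# Illustrative profile: two of three voters share the prefix ('a', 'b').
profile = (('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'a', 'c'))
```

Since $\alpha \geq 1/2$, a prefix exceeding the threshold is automatically the most common one, so examining only the most popular prefix is enough.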
\begin{remark}
In general, the $s$-restriction of $\unam^{(\alpha, 1)}$ is not $\unam^{(\alpha, s)}$: if a majority rank $a$ first, and a majority of those rank $b$ above all candidates other than $a$, it is not necessarily the case that a majority of votes have $ab$ at the top (the fraction is more than $2\alpha - 1$, however). However, a majority of the original voters rank $b$ either first or second. The $s$-restriction is the consensus for which more than fraction $\alpha$ of voters agree on the top candidate, more than $\alpha$ agree on the top two, etc.
However, $\sunam^s$ is indeed the $s$-restriction of $\wunam$: if all voters rank $a$ first and all rank $b$ over all candidates other than $a$, then all agree on the ranking $ab$, etc.
\end{remark}
\begin{defn}(qualified Condorcet consensus)
\label{def:cond}
Let $1/2 \leq \alpha < 1$. The $\alpha$-\hl{Condorcet consensus} $\cond^{\alpha}$ has
domain consisting of all elections for which an $\alpha$-Condorcet
winner exists. That is, there is a (necessarily unique) candidate $c$ such that for any other candidate $c'$, a fraction strictly greater than $\alpha$ of voters rank $c$ over $c'$.
We define $\cond^{(\alpha, s)}$ to be the $s$-restriction of $\cond^{\alpha}$.
Special cases:
\begin{itemize}
\item When $\alpha = 1/2$ we denote this by $\cond$, the usual Condorcet consensus.
\item When $\alpha \to 1$, we obtain $\sunam$.
\end{itemize}
\end{defn}
\subsection{Distances}
\label{ss:metrics}
We require a notion of distance on elections. We aim to be as general as
possible.
\begin{defn}(distance)
\label{def:dist}
A \hl{distance} (or \hl{hemimetric}) on $\mathcal{E}$ is a function
$d:\mathcal{E} \times \mathcal{E} \to \ensuremath{\reals_+} \cup \{\infty\}$ that satisfies the
identities
\begin{itemize}
\item $d(x,x) = 0$,
\item $d(x,z) \leq d(x,y) + d(y,z)$.
\end{itemize}
A \hl{pseudometric} is a distance that also satisfies
\begin{itemize}
\item $d(x,y) = d(y,x)$.
\end{itemize}
A \hl{quasimetric} is a distance that also satisfies
\begin{itemize}
\item $d(x,y) = 0 \Rightarrow x = y$.
\end{itemize}
A \hl{metric} is a distance that is both a quasimetric and a pseudometric.
We call a distance \hl{standard} if $d(E, E') = \infty$ whenever $E$ and $E'$ have
different sets of voters or candidates (this term has not been used in previous literature).
\end{defn}
\begin{eg}
\label{eg:insdel}
Let $d_{del}(E,E')$ (respectively $d_{ins}(E,E')$) be defined as the
minimum number of voters we must delete from (insert into) election $E$
in order to reach election $E'$ (or $+\infty$ if $E'$ can never be
reached). Each of $d_{ins}$ and $d_{del}$ is a nonstandard quasimetric.
\end{eg}
\begin{eg} (shortest path distances)
\label{eg:geodesic}
Consider a digraph $G$ with nodes indexed by elements of $\mathcal{E}$,
and some edge relation between elections. Define $d$ to be the (unweighted) shortest path distance in $G$. This is a quasimetric. It is a metric if
the underlying digraph is a graph. For example, $d_H, d_K, d_{ins}, d_{del}$
are defined via essentially this construction. Note that to specify such a distance it suffices to specify for which pairs
$E, E'$ we have $d(E,E') = 1$. On the other hand, not every quasimetric is a shortest path distance, even after scaling by a constant: for example, if two points are at distance $3$, then some pair of points must be at distance $2$.
\end{eg}
\begin{eg}(some strange distances)
\label{eg:weird dist}
The following distances will be useful for existence results later.
Let $R$ be a rule.
The first is a metric used by Campbell and Nitzan \cite{CaNi1986}. Define $d$ as follows.
$$
d(E, E') =
\begin{cases}
0 & \text{if $E=E'$}\\
1 & \text{if $|R(E)| = 1$ and $R(E) \subseteq R(E')$}\\
1 & \text{if $|R(E')| = 1$ and $R(E') \subseteq R(E)$}\\
2 & \text{otherwise}.\\
\end{cases}
$$
We claim that $d$ is a metric. The only non-obvious axiom is the triangle inequality. It suffices to consider the case where $E, E', E''$ are distinct. Then $d(E,E') + d(E', E'') \geq 1+1=2\geq d(E, E'')$, yielding the result.
The second distance is a variant of the first, where instead we define $d(E, E') = 0$ if and only if $E=E'$ or $R(E) = R(E')$ and $|R(E)| = 1$. This is a pseudometric, since elections with the same unique winner are at distance zero. To prove the triangle inequality, first note that $d(E, E') \leq 1$ if and only if $R(E) \subseteq R(E')$ and $|R(E)| = 1$, or the analogous condition with $E$ and $E'$ exchanged holds. If $2\geq d(E,E'') > d(E,E') + d(E',E'')$, then at least one of the two terms on the right is $0$ and the other is at most $1$. Thus (without loss of generality) $E$ and $E'$ have a common unique winner under $R$ and $R(E') \subseteq R(E'')$, yielding the contradiction $d(E,E'') \leq 1$.
The third distance is the shortest path metric defined as follows: there is an edge joining $E$ and $E'$
if and only if $|R(E')| = 1$ and $R(E') \subseteq R(E)$, or the same with $E$ and $E'$ exchanged (these are the same as the cases defining $d(E,E') = 1$ in the definition of the Campbell--Nitzan distance).
\end{eg}
\subsubsection{Votewise distances}
\label{sss:votewise}
One commonly used class of distances consists of the \hl{votewise}
distances formalized in \cite{EFS2015}, which we now define after some preliminary work. They are each based on
distances on $L(C)$. See \cite{Diac1988} for basic
information about metrics on the symmetric group.
\begin{eg}
\label{eg:dist Sn}
The most commonly used such distances on $L(C)$ are as follows.
\begin{itemize}
\item the \hl{discrete metric} $d_H$, defined by
$$d_H(\rho, \rho') =
\begin{cases} 0 \quad \text{if $\rho = \rho'$} \\
1 \quad \text{otherwise}.
\end{cases}
$$
\item the \hl{inversion metric} $d_K$ (also called the swap, bubblesort or Kendall-$\tau$ metric), where $d_K(\rho, \sigma)$ is the
minimal number of swaps of adjacent elements required to convert $\rho$ to $\sigma$.
\item \hl{Spearman's footrule} $d_S$, defined by
$$d_S(\rho, \rho'):= \sum_{c\in C} |r(\rho, c) -r(\rho',c)|.$$
\end{itemize}
\end{eg}
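These three distances are straightforward to compute. Below is a small Python sketch (not from the paper; rankings are represented as tuples of candidates listed from top to bottom, and the candidate names are illustrative):

```python
from itertools import combinations

def d_H(rho, sigma):
    """Discrete metric: 0 if the rankings coincide, 1 otherwise."""
    return 0 if rho == sigma else 1

def d_K(rho, sigma):
    """Inversion (Kendall-tau) metric: the number of candidate pairs the
    two rankings order differently, which equals the minimal number of
    adjacent swaps converting one ranking into the other."""
    pos = {c: i for i, c in enumerate(sigma)}
    return sum(1 for a, b in combinations(rho, 2) if pos[a] > pos[b])

def d_S(rho, sigma):
    """Spearman's footrule: the total displacement of the candidates."""
    pos_r = {c: i for i, c in enumerate(rho)}
    pos_s = {c: i for i, c in enumerate(sigma)}
    return sum(abs(pos_r[c] - pos_s[c]) for c in rho)

rho, sigma = ('a', 'b', 'c'), ('c', 'a', 'b')
```

On this pair the values are $d_H = 1$, $d_K = 2$ and $d_S = 4$, consistent with the Diaconis--Graham inequalities $d_K \leq d_S \leq 2d_K$.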
\begin{defn}
\label{def:norm}
A \hl{seminorm} on a real vector space $X$ is a real-valued function $N$ satisfying the identities
\begin{itemize}
\item $N(x+y) \leq N(x)+N(y)$
\item $N(\lambda x) = |\lambda| N(x)$
\end{itemize}
for all $x,y \in X$ and all $\lambda\in \mathbb{R}$. Note that this implies that $N(0) = 0$ and
$N(x) \geq 0$ for all $x\in X$.
A \hl{norm} is a seminorm that also satisfies
\begin{itemize}
\item $N(x) = 0 \Rightarrow x=0$.
\end{itemize}
\end{defn}
\begin{remark}
Every seminorm induces a pseudometric via $d(x,y) = ||x-y||$. This is a metric if and only if the
seminorm is a norm.
\end{remark}
\begin{eg}
\label{eg:norm}
Consider an $n$-dimensional space $X$ with fixed basis $e_1, \dots, e_n$ and corresponding
coefficients $x_i$ for each element $x\in X$. Fix $p$ with $1\leq p < \infty$ and define the
$\ell^p$-norm on $X$ by
$$
||x||_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}.
$$
When $p = \infty$ we define the $\ell^\infty$ norm by
$$
||x||_\infty = \max_{1\leq i \leq n} |x_i|.
$$
\end{eg}
\begin{defn} (votewise distances)
\label{def:votewise}
Choose a family $\{N_n\}_{n\geq 1}$ of
seminorms, where $N_n$ is defined on $\mathbb{R}^n$. Fix candidate set $C$ and voter set $V$, and choose a distance $d$ on $L(C)$. Extend $d$ to a
function on $\mathcal{P}(C,V)$ by taking $n = |V|$ and defining for $\sigma, \pi
\in \mathcal{P}(C,V)$
$$d^{N_n}(\pi,\sigma):= N_n(d(\pi_1, \sigma_1), \dots, d(\pi_n, \sigma_n)). $$
This yields a distance on elections having the
same set of voters and candidates. We complete the definition of the extended distance
(which we denote by $d^N$) on $\mathcal{E}$ by declaring it to be standard.
We use the abbreviation $d^p$ for $d^{\ell^p}$, and sometimes we even
use just $d$ for $d^N$ if the meaning is clear.
\end{defn}
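For the $\ell^p$ family of seminorms, the lifting in Definition~\ref{def:votewise} can be sketched as follows (Python; the base distance $d_H$ and the sample profiles are illustrative, and a profile over a fixed voter set is represented as a tuple of rankings):

```python
def d_H(rho, sigma):
    """Discrete metric on rankings."""
    return 0 if rho == sigma else 1

def votewise(d, p):
    """Lift a distance d on rankings to profiles (equal-length tuples of
    rankings over the same candidates) via the l^p norm; p = float('inf')
    gives the max norm."""
    def dist(pi, sigma):
        coords = [d(u, v) for u, v in zip(pi, sigma)]
        if p == float('inf'):
            return max(coords)
        return sum(x ** p for x in coords) ** (1.0 / p)
    return dist

hamming = votewise(d_H, 1)  # d_H^1: how many voters' votes differ

pi    = (('a', 'b', 'c'), ('b', 'a', 'c'), ('c', 'b', 'a'))
sigma = (('a', 'b', 'c'), ('a', 'b', 'c'), ('c', 'b', 'a'))
```

Here \texttt{hamming(pi, sigma)} counts the single voter whose ranking was changed, matching the bribery interpretation of the Hamming metric below.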
\begin{remark}
Note that if $d$ is a metric and $N$ is a norm, then $d^N$ is a metric.
\end{remark}
\begin{eg} (famous votewise distances)
\label{example:dist}
The distances $d^1_H$ and $d^1_K$ are
called respectively the \hl{Hamming metric} and \hl{Kemeny metric}.
The Hamming metric measures the number of voters whose preferences
must be changed in order to convert one profile to another, and as such has an interpretation in
terms of bribery. The Kemeny metric measures how many swaps of adjacent candidates are
required, and is related to models of voter error.
Among the many other votewise metrics, we single out $d^1_S$, sometimes called
the \hl{Litvak distance}.
\end{eg}
\subsubsection{Tournament distances}
\label{sss:tour dist}
Some distances depend only on the net support for candidates.
\begin{eg} (tournament distances)
\label{example:dist 2}
Given an election $E= (C, V, \pi)$, we form the pairwise majority digraph $\Gamma(E)$ with
nodes indexed by the candidates, where the arc from $a$ to $b$ has
weight equal to the \emph{net support} for $a$ over $b$ in a pairwise contest.
Formally, there is an arc from $a$ to $b$ whose weight equals $n_{ab} - n_{ba}$, where
$n_{ab}$ denotes the number of rankings in $\pi$ in which $a$ is above $b$.
Let $M(E)$ be the weighted adjacency matrix of $\Gamma(E)$ (with respect to an arbitrarily chosen
fixed ordering of $C$). Given a seminorm $N$ on the space of all $|C| \times |C|$ real matrices,
we define the $N$-\hl{tournament distance} by
$$
d^N(E, E') = N(M(E) - M(E')).
$$
A closely related distance is defined in the
analogous way, but where each element of the adjacency matrix is
replaced by its sign ($1$, $0$, or $-1$). We call this the
$N$-\hl{reduced tournament distance}. We denote the special cases
where $N$ is the $\ell^1$ norm on matrices by $d^T$ and $d^{RT}$ respectively. A (reduced)
tournament distance cannot be a metric, even if $N$ is a norm, because
it does not distinguish points (the mapping $E \mapsto M(E)$ is not one-to-one). However, it is a pseudometric.
\end{eg}
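The construction above can be sketched directly (Python; the profiles and candidate names are illustrative, and we work with lists of rankings rather than the formal election triples):

```python
from itertools import combinations

def majority_matrix(profile, candidates):
    """Weighted adjacency matrix of Gamma(E): the (a, b) entry is the
    net support n_ab - n_ba for a over b."""
    idx = {c: i for i, c in enumerate(candidates)}
    M = [[0] * len(candidates) for _ in candidates]
    for ranking in profile:
        pos = {c: i for i, c in enumerate(ranking)}
        for a, b in combinations(candidates, 2):
            s = 1 if pos[a] < pos[b] else -1    # is a ranked above b?
            M[idx[a]][idx[b]] += s
            M[idx[b]][idx[a]] -= s
    return M

def d_T(p1, p2, candidates):
    """l^1 tournament distance: entrywise l^1 norm of M(E) - M(E')."""
    M1, M2 = majority_matrix(p1, candidates), majority_matrix(p2, candidates)
    return sum(abs(x - y) for r1, r2 in zip(M1, M2) for x, y in zip(r1, r2))

cands = ('a', 'b', 'c')
unanimous = [('a', 'b', 'c')] * 3                          # net support 3 everywhere
cycle = [('a', 'b', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b')]  # Condorcet cycle
```

The reduced tournament distance $d_{RT}$ is obtained the same way after replacing each entry of $M$ by its sign.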
\subsection{Combining consensus and distance}
\label{ss:combine}
In order for a rule to be definable via the DR construction, it is
necessary that the first of the following properties holds. The second avoids trivialities and ensures that some theorems in Section~\ref{s:foundation} are true. We shall assume both properties
from now on.
\begin{defn} \label{def:distinguish}
Let $d$ be a distance on $\mathcal{E}$ and $\mathcal{K}$ a consensus. Say that $(\mathcal{K}, d)$
\hl{distinguishes consensus choices} if whenever $x\in \mathcal{K}_r, y \in \mathcal{K}_{r'}$
and $r\neq r'$, then $d(x,y) > 0$.
\end{defn}
We use a distance to extend a consensus to a social rule in the natural
way. The choice at a given election $E$ consists of all $s$-rankings $r$
whose consensus set $\mathcal{K}_r$ minimizes the distance to $E$. We
introduce the idea of a score in order to use our intuition about
positional scoring rules.
\begin{defn} (DR scores and rules)
\label{def:rules}
Suppose that $\mathcal{K}$ is an $s$-consensus and
$d$ a distance on $\mathcal{E}$. Fix an election $E\in \mathcal{E}$.
The \hl{$(\mathcal{K}, d, E)$-score} of $r\in L_s(C^*)$ is defined by
$$
|r| : = \inf_{E'\in \mathcal{K}_r} d(E, E').
$$
The rule $R:=\R(\mathcal{K}, d)$ is defined by
\begin{equation}
\label{eq:argmin}
R(E) = \arg\min_{r} |r|.
\end{equation}
We say that $R$ is \hl{distance rationalizable} (DR) with respect to
$(\mathcal{K}, d)$.
\end{defn}
\begin{remark}
Note that if $\mathcal{K}_r$ is empty, then $|r| = \infty$. DR scores are defined so that they are
nonnegative, and higher score corresponds to larger distance. This is not consistent with
the usual scoring rule interpretation in Example~\ref{eg:scoring}, but the two notions of score
are closely related. Our DR scores have the form $M - s$ where $s$ is the score
associated with the scoring rule and $M$ depends on $E$ but not on any $r\in
L_s(C)$.
\end{remark}
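For a concrete instance of the definition above: under the strong unanimity consensus $\sunam$ with the Kemeny metric $d_K^1$, the nearest election in $\sunam_r$ with the same voters as $E$ is the one in which every voter reports $r$, so the score of $r$ is $\sum_{v\in V} d_K(\pi(v), r)$, and $\R(\sunam, d_K^1)$ is Kemeny's rule, as Table~\ref{t:DR egs} records. A brute-force Python sketch (the profile is illustrative, and the enumeration is only feasible for small candidate sets):

```python
from itertools import combinations, permutations

def d_K(rho, sigma):
    """Kendall-tau distance between two rankings."""
    pos = {c: i for i, c in enumerate(sigma)}
    return sum(1 for a, b in combinations(rho, 2) if pos[a] > pos[b])

def dr_kemeny(profile, candidates):
    """R(sunam, d_K^1): the score of ranking r is the d_K^1-distance from
    the profile to the unanimous profile in which everyone reports r;
    the winners are the rankings of minimal score."""
    score = {r: sum(d_K(v, r) for v in profile) for r in permutations(candidates)}
    best = min(score.values())
    return sorted(r for r, s in score.items() if s == best)

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
```

On this profile the ranking $abc$ has score $2$ and every other ranking scores strictly more, so it is the unique Kemeny winner.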
\subsection{Some specific rules}
\label{ss:spec rules}
Table~\ref{t:DR egs} presents a few known rules in this framework. Most of the rules in the
table are well known. We single out the following less obvious references. The \hl{modal ranking rule} was investigated by Caragiannis, Procaccia and Shah \cite{CPS2014}. The \hl{voter replacement rule}
(VRR) was
defined essentially as a missing entry in such a table \cite{EFS2012}.
The entries marked ``trivial'' are so labelled because in those cases every election not in
$\mathcal{K}$ is at distance $+\infty$ from every $\mathcal{K}_r$. Missing entries reflect the limits of the authors' knowledge,
and may have established names. Our table overlaps with that in \cite{MeNu2008} --- note that the
$(\cond, d_H^1)$ entry is incorrect in that reference, as pointed out by Elkind, Faliszewski
and Slinko \cite{EFS2012}. Our table also overlaps one presented by Elkind, Faliszewski
and Slinko \cite{EFS2015}.
\begin{table}
\begin{tabular}{c|cccc}
$\mathcal{K} / d$ & $\sunam$ & $\wunam$ & $\cond$ & $ \cond^m$ \\
\hline
$d_K^1$ & Kemeny & Borda & Dodgson & \\
$d_H^1$ &modal ranking& plurality & VRR & \\
$d_S^1$ &Litvak &Borda&Dodgson &\\
$d_T$ &Kemeny&Borda&maximin& \\
$d_{RT}$ &Copeland &Copeland & Copeland & Slater\\
$d_{ins}$ &trivial &trivial & maximin & \\
$d_{del}$ &modal ranking & plurality &Young & \\
\end{tabular}
\caption{Some known rules in the DR framework (see discussion in Section~\ref{ss:spec rules})}
\label{t:DR egs}
\end{table}
\begin{eg} (scoring rules)
\label{eg:scoring}
The \hl{positional scoring rule} defined by a family of \hl{weight vectors} $w:=w^{(m)}$ satisfying $w_1
\geq \dots \geq w_m, w_1 > w_m$ elects all candidates with maximal score, where
the score of $a$ in the profile $\pi$ is defined as $\sum_{v\in V}
w_{r(\pi(v), a)}$. The positional scoring rule defined by $w$ has the form $\R(\wunam,
d_w^1)$ where $d_w$ is the distance on rankings defined by $$d_w(\rho,
\rho') = \sum_{c\in C} |w_{r(\rho,c)} - w_{r(\rho',c)}|.$$
\end{eg}
\begin{remark}
Note that $d_w$ is a metric on $L_s(C)$ if and only if $w_1, \dots, w_s$
are all distinct. The score of $r$ under the rule defined by $w$ is the
difference $nw_1 - |r|$. For example, for Borda with $m$ candidates (corresponding to $w =(m-1, m-2, \dots, 1, 0)$), the
maximum possible score of a candidate $c$ is $(m-1)n$, achieved only for
those elections in $\wunam_c$. The score of $c$ under Borda is exactly
$(m-1)n - K$ where $K$ is the total number of swaps of adjacent
candidates needed to move $c$ to the top of all preference orders in
$\pi(E)$.
Plurality (corresponding to $w = (1,0,0, \dots, 0)$) and Borda are special cases, where $d_w$ simplifies to $d_H^1$
and $d_S^1$ respectively. As far as the distance to $\wunam$ or $\cond$ is
concerned, $d_S^1$ and $d_K^1$ are proportional, but
they are not proportional in general \cite[p. 298--299]{MeNu2008}.
\end{remark}
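These computations can be sketched directly (Python; the weight vector and profile are illustrative):

```python
def positional_winners(profile, candidates, w):
    """Winners under the positional scoring rule with weight vector w:
    candidate c earns w[k] from each voter ranking c in position k."""
    score = {c: 0 for c in candidates}
    for ranking in profile:
        for k, c in enumerate(ranking):
            score[c] += w[k]
    top = max(score.values())
    return sorted(c for c, s in score.items() if s == top)

def d_w(w, rho, sigma):
    """The distance on rankings induced by the weight vector w."""
    pos_r = {c: i for i, c in enumerate(rho)}
    pos_s = {c: i for i, c in enumerate(sigma)}
    return sum(abs(w[pos_r[c]] - w[pos_s[c]]) for c in rho)

borda = (2, 1, 0)   # Borda weights for m = 3
profile = [('a', 'b', 'c'), ('b', 'a', 'c'), ('a', 'b', 'c')]
```

Here the Borda scores are $a\colon 5$, $b\colon 4$, $c\colon 0$, and indeed $5 = (m-1)n - K$ with $K = 1$, the single adjacent swap needed to move $a$ to the top of the second vote.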
\if01
\begin{remark}
Consider the set of all elections for which all
positional scoring rules yield the same unique winner. By convexity of
the set of weight vectors, it suffices to check the finite set of rules
defined by weight vectors $(1, 1, \dots, 1, 0, \dots , 0)$ where the
number of $1$'s is fixed and at most $k-1$; in other words the \hl{$k$-approval rules} for $1\leq
k \leq m-1$. This just describes the Lorenz consensus in another way.
\end{remark}
\fi
\begin{eg} (Copeland's rule)
\hl{Copeland's rule} can be represented as $\R(\cond, d_{RT})$. Indeed,
in an election $E$, the Copeland score of a candidate $c$ (the number
of points it scores in pairwise contests with other candidates) equals
$m-1-s$, where $s$ is the minimum number of pairwise results that must be changed
for $E$ to change to an election that belongs to $\cond_c$.
\end{eg}
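A direct sketch of the Copeland computation (Python; the tie convention, scoring $0$ for a pairwise tie, and the sample profiles are our own illustrative choices):

```python
from itertools import combinations

def copeland_winners(profile, candidates):
    """Copeland score of c = number of candidates that c beats in a
    strict pairwise majority contest (ties score 0 for both here)."""
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        net = sum(1 if r.index(a) < r.index(b) else -1 for r in profile)
        if net > 0:
            wins[a] += 1
        elif net < 0:
            wins[b] += 1
    top = max(wins.values())
    return sorted(c for c, s in wins.items() if s == top)

profile = [('a', 'b', 'c'), ('a', 'b', 'c'), ('b', 'c', 'a')]
cycle   = [('a', 'b', 'c'), ('b', 'c', 'a'), ('c', 'a', 'b')]
```

In the Condorcet cycle every candidate beats exactly one other, so all three tie with score $1$, matching the fact that each is one pairwise-result change away from being a Condorcet winner.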
Every rule $\R(\mathcal{K}, d)$, where $\mathcal{K}$ is a $1$-consensus, automatically
yields a social rule $\R^s(\mathcal{K}, d)$ of size $s$ as follows.
\begin{defn}
\label{def:score}
Let $1\leq s \leq m$ and suppose that $\mathcal{K}$ is a $1$-consensus and $d$
a distance on $\mathcal{E}$. We define a social rule $\R^s(\mathcal{K}, d)$ of size
$s$ by choosing $s$ elements in increasing order of score (if there are ties in the scores, we consider all possible such orderings).
\end{defn}
\begin{remark}
$\R^s(\mathcal{K}, d)$ is single-valued if and only if the lowest $s$ scores of candidates are distinct.
Note that if the $s$-consensus $\mathcal{K}'$ is a restriction of the $1$-consensus $\mathcal{K}$, it is not necessarily the case that $\R(\mathcal{K}', d)
= \R^s(\mathcal{K}, d)$. For example, $\sunam$ is a restriction of $\wunam$, and $\R(\wunam, d_K^1)$ is a social choice rule, namely Borda's rule. By the above, we can also define the social welfare version of Borda's rule. However, $\R(\sunam, d_K^1)$ is Kemeny's rule. The social choice rule obtained by taking the top element of the ranking given by Kemeny's rule is also sometimes called Kemeny's rule. All four rules mentioned here are different.
\end{remark}
\section{Existence and uniqueness}
\label{s:foundation}
The DR framework is not very restrictive without further assumptions on
$\mathcal{K}$ and $d$, as shown by Campbell and Nitzan \cite{CaNi1986}.
\subsection{Existence}
\label{ss:exist}
We give necessary and sufficient conditions, an improvement on
\cite[Prop. 4.4]{CaNi1986} and \cite[Thm 2]{EFS2015}.
\begin{defn}
\label{def:Kmax}
For each rule $R$, there is a unique \hl{maximum consensus}
$\mathcal{K}^{\max}(R)$, namely the one whose consensus set $D^{\max}$ consists of all
elections on which $R$ gives a unique output, which we take to be the consensus choice.
\end{defn}
\begin{remark}
Most rules commonly used in practice have ties, so that the domain of $\mathcal{K}^{\max}(R)$ is
smaller than the domain of $R$. For example, if a social choice rule satisfies anonymity (symmetry with respect to voters) and neutrality (symmetry with respect to candidates) and is faced with a profile containing exactly one of each possible preference order, it must select all candidates as winners.
\end{remark}
\begin{defn}
\label{def:nonimp}
The \hl{unique image} of an $s$-rule $R$ is the set of all $r\in L_s(C)$ which occur as the unique
winner in some election. That is, there exists $E\in \mathcal{E}$ such that $R(E) = \{r\}$.
The \hl{image} of the rule is the set of all $r\in L_s(C)$ which occur as a winner in some election. That is, there exists $E\in \mathcal{E}$ such that $r\in R(E)$.
The rule satisfies \hl{nonimposition} if every $r\in L_s(C)$ occurs as a unique winner somewhere ---
in other words, the unique image of $R$ equals $L_s(C)$.
\end{defn}
\begin{remark}
Although slightly confusing (the image should perhaps be a set of subsets rather than their union), this is the standard terminology for set-valued mappings in mathematics.
\end{remark}
We need to rule out the possibility of an election being at infinite distance from all consensus elections. There is no problem with an election being equidistant from all nonempty consensus sets, but without this assumption, empty consensus sets will (by convention) also be at the same distance.
\begin{defn}
\label{def:nontriv}
Say that $(\mathcal{K}, d)$ is \hl{nontrivial} if for each $E\in \mathcal{E}$ there is some $r$ for which $d(E, \mathcal{K}_r) < \infty$.
\end{defn}
\begin{prop}
\label{prop:gen}
Let $\mathcal{K}$ be a consensus and $R$ a rule. There exists a nontrivial distance rationalization
$R = \R(\mathcal{K}, d)$ if and only if the following two conditions hold:
\begin{enumerate}[(i)]
\item $R$ extends $\mathcal{K}$;
\item the image of $R$ equals the image of $\mathcal{K}$.
\end{enumerate}
Furthermore, $d$ can be chosen to be a metric.
\end{prop}
\begin{proof}
The first condition is necessary: because of the assumption that $\mathcal{K}$ distinguishes consensus choices, if $E\in \mathcal{K}_r$ then $d(E, \mathcal{K}_r) = 0$ but $d(E, F) > 0$ for all $F\in \mathcal{K}_{r'}$ and all $r'\neq r$. The second condition is necessary: the image of $\mathcal{K}$ is contained in the image of $R$ because $R$ extends $\mathcal{K}$, and by the nontriviality assumption, if $\mathcal{K}_r = \emptyset$ then $r$ is not a winner at any election.
Now assume that the two conditions hold. Let $d$ denote either of the first two distances in
Example~\ref{eg:weird dist} and let $S = \R(\mathcal{K}, d)$. We claim that $S = R$ (note that rationalization is nontrivial because the distances are finite). Let $E\in \mathcal{E}$. Since $R$ extends $\mathcal{K}$ the result is immediate if $E\in \mathcal{K}_r$ for some $r$, because then $R(E) = \{r\}$ and $S(E) = \{r\}$ since $d$ distinguishes consensus choices.
Now suppose that $E$ is not a member of $\mathcal{K}_r$ for any $r$. Note that
$d(E, \mathcal{K}_r) \geq 2$ if $r\not\in R(E)$. For each $r\in R(E)$, by assumption $r$ is in the image of $\mathcal{K}$, so there is $F\in \mathcal{K}_r$ with $d(E, F) = 1$ (for the first and third distances, or the second if $|R(E)| > 1$) or $F\in \mathcal{K}_r$ with $d(E,F) = 0$ (for the second distance, if $|R(E)| = 1$). Thus $S(E)$ is precisely the set of $r$ for which $r\in R(E)$, in other words $S(E) = R(E)$.
\end{proof}
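The flavor of this construction can be checked computationally. The sketch below uses a two-valued distance of our own devising (not necessarily either distance from Example~\ref{eg:weird dist}), together with a toy rule and consensus satisfying conditions (i) and (ii), and verifies that the rule is recaptured by the distance rationalization.

```python
# Toy check of the construction behind the proposition. The distance
# below is our own two-valued sketch, not a distance from the text.

# Elections are opaque labels; the consensus assigns a unique winner
# to some of them, and the rule R extends it (conditions (i) and (ii)).
K = {"a": {"e1"}, "b": {"e2"}}                       # consensus sets K_r
R = {"e1": {"a"}, "e2": {"b"}, "e3": {"a"},
     "e4": {"a", "b"}, "e5": {"b"}}                  # rule to recapture

def winner_of(F):
    """Return the r with F in K_r (K distinguishes consensus choices)."""
    for r, elections in K.items():
        if F in elections:
            return r
    return None

def d(E, F):
    """0 on the diagonal; 1 if F is a consensus for a winner at E; else 2."""
    if E == F:
        return 0
    return 1 if winner_of(F) in R[E] else 2

def dr_rule(E):
    """argmin_r d(E, K_r): the distance-rationalized rule."""
    dist = {r: min(d(E, F) for F in Fs) for r, Fs in K.items()}
    m = min(dist.values())
    return {r for r, v in dist.items() if v == m}

assert all(dr_rule(E) == R[E] for E in R)   # R is recaptured exactly
```

Note that this $d$ is a distance but not a metric; as in the proposition, a metric can also be arranged.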
\begin{cor}
\label{cor:gen}
Let $R$ be a rule. There exists a distance $d$ and consensus $\mathcal{K}$ such that $R = \R(\mathcal{K}, d)$ if and only if the image of $R$ equals its unique image.
\end{cor}
\begin{proof}
Let $\mathcal{K} = \mathcal{K}^{\max}(R)$. The image of $\mathcal{K}$ is precisely the unique image of $R$, and
$R$ clearly extends $\mathcal{K}$, so this follows directly from Proposition~\ref{prop:gen}.
\end{proof}
\begin{remark}
In Proposition~\ref{prop:gen}, if we assume that $R$ and $\mathcal{K}$ satisfy nonimposition, then condition (ii) is satisfied, and $R$ is
distance rationalizable if and only if it extends $\mathcal{K}$ \cite[Prop. 4.4]{CaNi1986}; in that case the third distance from Example~\ref{eg:weird dist} can be used. Note that in general the third distance does not work --- consider a rule for which only one consensus set $\mathcal{K}_a$ is nonempty, yet the rule returns the two-element set $\{b,c\}$, disjoint from $\{a\}$, at some election $E$. The third distance would then yield $\{a,b,c\}$ at $E$, a contradiction.
However the
assumption of nonimposition is
not necessary --- consider the rule in which $R(E) = \{r\}$ for every
election $E$, and choose $d$ to be the discrete metric.
\end{remark}
Thus if $\mathcal{K}$ is specified, the question of existence is settled.
For example, every social welfare rule satisfying the usual unanimity axiom (if every voter has the same preference order, the rule outputs precisely this common ranking) can be rationalized with
respect to $\sunam$.
In view of the flexibility of the DR framework, it is clear that the key idea is to
make an appropriate choice of a ``small" $\mathcal{K}$ and ``natural" $d$ so as to recapture
rule $R$ via $R= \R(\mathcal{K}, d)$.
\subsection{Uniqueness}
\label{ss:unique}
We now turn to the question of uniqueness. The construction in the proof of
Proposition~\ref{prop:gen} shows that changing both $\mathcal{K}$ and $d$ can lead to the same rule. When $\mathcal{K}$ is fixed and $d$ varies, the rule often changes. However it sometimes does not change, as can be seen from Table~\ref{t:DR egs}. A general class of examples where the rule does not change is discussed in Section~\ref{ss:group}.
Similarly, when $d$ is fixed and $\mathcal{K}$ varies,
the rule sometimes does not change. For example, consider Copeland's rule,
which can be described as $\R(\cond, d_{RT})$. It can also be described as
$\R(\wunam, d_{RT})$, because for each $a\in C$, every point of $\cond_a$ is at
distance zero from $\wunam_a$ with respect to $d_{RT}$.
The standard examples such as Borda's, Kemeny's, and Copeland's rules all behave well when we extend the consensus set beyond the one used to define them. This leads us to the following question: if $R$ has the form $\R(\mathcal{K}, d)$, and $\mathcal{K}'$ is a consensus that extends $\mathcal{K}$, is it necessarily the case that $R = \R(\mathcal{K}', d)$? In particular, does $R= \R(\mathcal{K}^\text{max}(R),d)$? The answer is no in general, as we now show.
\begin{eg}
\label{eg:hillas no}
We will define $R = \R(\mathcal{K}, d)$. First, let $C=\{a,b,c,c'\}$ and let $V$ be a voter set of size $n$. Let $G$ be the graph having the following two connected components. The first one contains all elections where every voter ranks $a$ or $b$ first, together with all elections where the number of voters ranking $a$ first equals the number of voters ranking $b$ first. The other component contains all remaining elections. Within each component, two elections are linked precisely when they differ in a single vote. Define $d$ to be the shortest path distance determined by $G$.
Now, define $\mathcal{K}_{c'}$ to be the second component and $\mathcal{K}_c$ to be the set of elections where every voter ranks $c$ or $c'$ first. Let $\mathcal{K}_a$ be the set of elections where every voter ranks $a$ or $b$ first but $a$ receives more first places than $b$, and $\mathcal{K}_b$ the set of elections where every voter ranks $a$ or $b$ first but $b$ receives more first places than $a$. Let $R=\R(\mathcal{K},d)$.
Consider an election $E$ in which $a$ and $b$ each receive the same number $x/2$ of first places. Clearly, if $x=0$ then $E\in \mathcal{K}_c$. If instead $x=n$, then $E$ is at distance $1$ from $\mathcal{K}_a$ and $\mathcal{K}_b$. Indeed, $R(E)=\{c\}$ if and only if $x<n/2$, and $R(E)=\{a,b\}$ if and only if $x>n/2$.
So indeed, $\mathcal{K}^{\max}(R)$ contains all elections except those in which $a$ and $b$ each receive the same number $x/2$ of first places with $x\geq n/2$. Now, let $R'=\R(\mathcal{K}^{\max}(R),d)$. We still consider elections $E$ in which $a$ and $b$ each receive $x/2$ first places, but we note that $R'(E)=\{c\}$ if and only if $x<3n/4$. Thus, $R'\not=R$.
\end{eg}
\section{Quotients}
\label{s:quot}
Symmetries of voting rules occur very often in practice. In this section, we
show how to express distance rationalization using only symmetric objects and functions.
We start with general equivalence relations, then equivalence relations induced by actions of symmetry groups, and then consider special cases of such actions. In Sections~\ref{ss:anon}, \ref{s:homog} and~\ref{ss:neutral}, we apply the general results to anonymity and homogeneity, and to neutrality and reversal symmetry.
We use a general equivalence relation $\sim$ on $\mathcal{E}$, which we shall specialize in later sections. All our definitions in this section are understood to be with respect to $\sim$. For example, we
may refer to ``compatibility" and ``total compatibility" without mentioning $\sim$ directly.
Let $\overline{E}$ denote the equivalence class of $E$, and let $\mathcal{Q}$ denote the
set of equivalence classes. The usual quotient map $E \mapsto \overline{E}$ takes $\mathcal{E}$ onto $\mathcal{Q}$.
\begin{defn}
\label{def:commute}
Let $R$ be a partial social rule. Then $R$ is \hl{compatible} with $\sim$ if
$R(E) = R(E')$ whenever $\overline{E} = \overline{E'}$.
\end{defn}
\begin{remark}
In usual mathematical terms, $R$ is compatible with $\sim$ if and only if it is an \hl{invariant} for $\sim$ or a \hl{morphism} for $\sim$.
\end{remark}
\begin{defn}
If $R$ is compatible then we may define a mapping $\overline{R}$ on $\mathcal{Q}$ via $\overline{R}(\overline{E}) = R(E)$ for every
$E\in \mathcal{E}$ (it is well-defined precisely because of compatibility of $R$). We call $\overline{R}$ a \hl{partial social rule on $\mathcal{Q}$}.
\end{defn}
\begin{remark}
We shall apply this construction later, where $\sim$ is the relation defining anonymity or homogeneity, in which case everything makes sense because the projection to the quotient space does not change the candidate sets. However, if the projection does affect the candidate sets (as with neutrality) the result may look strange and the interpretation rather uninteresting, although the theorems will be correct. For example, a rule compatible with the equivalence relation defining neutrality must be the constant rule which chooses the same $r$ at every election, or the rule that chooses all possible $r$ at every election. We discuss this more in Section~\ref{ss:neutral}.
\end{remark}
\subsection{Totally compatible distances}
\label{ss:tot comp}
\begin{defn}
\label{def:tot comp}
A distance $d$ is \hl{totally compatible} with $\sim$ if $d(E, E') = d(F, F')$ whenever
$\overline{E}= \overline{F}$ and $\overline{E'} = \overline{F'}$.
\end{defn}
\begin{remark}
\label{r:tot comp}
In usual mathematical terms, $d$ is totally compatible if and only if it is an invariant for the equivalence relation $\sim_2:= \left( \sim \times \sim \right)$ on $\mathcal{E} \times \mathcal{E}$ for which
$(a,b) \sim_2 (c,d)$ if and only if $a\sim c$ and $b\sim d$. Provided that $\sim$ is not contained in the identity relation (so some equivalence class
has size greater than $1$), a totally compatible $d$ is not a quasimetric, because whenever $\overline{E} = \overline{E'}$, necessarily $d(E, E') = 0$.
\end{remark}
A totally compatible distance relates directly to a distance on $\mathcal{Q}$. The proof of the next result is immediate from the definitions.
\begin{prop}
\label{prop:quot dist}
The set of distances on $\mathcal{Q}$ and the set of totally compatible distances on $\mathcal{E}$ are in bijection under the map $d\leftrightarrow \delta$ defined as follows. Given $d$, define $\delta(\overline{E}, \overline{E'}) = d(E, E')$. Given $\delta$, define $d(E, E') = \delta(\overline{E}, \overline{E'})$.
\hfill\ensuremath{\square}
\end{prop}
We want to define DR rules on $\mathcal{Q}$.
\begin{defn}
\label{def:DR quot}
Let $\delta$ be a distance on $\mathcal{Q}$ and $\mathcal{K}$ a consensus on $\mathcal{Q}$. The rule $\R(\mathcal{K}, \delta)$ is defined using the analogue of \eqref{eq:argmin}.
\end{defn}
\begin{prop}
\label{prop:DR compat}
The following conditions are equivalent for a social rule $R$ on $\mathcal{E}$.
\begin{enumerate}[(i)]
\item $R$ is compatible and distance rationalizable.
\item $R = \R(\mathcal{K}, d)$ where $\mathcal{K}$ is compatible and $d$ is totally compatible.
\item $\overline{R}$ is distance rationalizable on $\mathcal{Q}$.
\end{enumerate}
\end{prop}
\begin{proof}
Suppose that the first condition holds.
We use the consensus $\mathcal{K}:=\mathcal{K}^{\max}(R)$ from Definition~\ref{def:Kmax} (which is compatible
because $R$ is compatible). We can recapture $R$ as $\R(\mathcal{K}, d)$ by
defining $d$ to be the second distance in Example~\ref{eg:weird dist}. Let $R' = \R(\mathcal{K}, d)$. Then if $E \in D(\mathcal{K})$, necessarily $R'(E) = R(E)$. If $E \not \in D(\mathcal{K})$ then
$R'(E)$ is precisely the set of $r$ for which $r\in R(E)$, namely $R(E)$. Thus $R' = R$. It remains only to check that $d$ is totally compatible. Since $d$ is defined in terms only of the images $R(E)$ and $R$ is compatible, this follows immediately.
Suppose that the second condition holds. Define $\delta(\overline{E}, \overline{E'}) = d(E, E')$ (this is well-defined since $d$ is totally compatible). Then $\overline{R} = \R(\overline{\mathcal{K}}, \delta)$, where $\overline{\mathcal{K}}$ is the consensus on $\mathcal{Q}$ induced by $\mathcal{K}$.
Finally, suppose that the third condition holds, say $\overline{R} = \R(K, \delta)$ for some consensus $K$ and distance $\delta$ on $\mathcal{Q}$. Define $\mathcal{K}, d$ by composing $K, \delta$ with the projection to $\mathcal{Q}$. Then $R = \R(\mathcal{K}, d)$ and $R$ is compatible since $R(E) = \overline{R}(\overline{E})$.
\end{proof}
The distance and consensus used in the proof of Proposition~\ref{prop:DR compat} are rather unnatural (note that the first and third distances in Example~\ref{eg:weird dist} would not even work, being metrics). We now consider more natural constructions that relate to the original distance and use the equivalence relation explicitly.
\subsection{Quotient distances}
\label{ss:quot dist}
When dealing with equivalence classes, the obvious idea is to use a quotient distance \cite{DeDe2009}. This concept is relatively little-known.
\begin{defn}
\label{def:quot dist}
We define $\overline{d}: \mathcal{Q} \times \mathcal{Q} \to \mathbb{R}_+$ to be the \hl{quotient distance}
induced by $\sim$.
\end{defn}
\begin{remark}
The standard construction of quotient distance $\overline{d}$ is as follows:
\begin{equation}
\label{eq:quot}
\overline{d}(x,y) = \inf \sum_{i=1}^k d(E_i, E'_i)
\end{equation}
where the infimum is taken over all \emph{admissible paths}, namely
paths such that $E'_i \sim E_{i+1}$ for $1\leq i \leq k-1$, $E=E_1, E' = E'_k$, $E$ projects
to $x$ and $E'$ to $y$.
\end{remark}
We now focus on a special situation where $\overline{d}$ has a much simpler formula.
\begin{defn}
\label{def:d bar}
Let $d$ be a distance on $\mathcal{E}$. Define $\tilde{d}$ on $\mathcal{Q}$ by
$$
\tilde{d}(x,y) = \inf_{\overline{E} = x, \overline{E'} = y} d(E,E').
$$
\end{defn}
\begin{remark}
If $d$ is totally compatible with $\sim$, then in the definition of
$\tilde{d}$, all the distances on the right are the same, so that $\tilde{d}(x,y) = d(E, E')$ whenever
$\overline{E} = x, \overline{E'} = y$.
\end{remark}
\begin{defn}
\label{def:simple}
A distance on $\mathcal{E}$ is \hl{simple} for $\sim$ if $\tilde{d} = \overline{d}$.
\end{defn}
\begin{remark}
Every totally compatible distance is simple, because in the definition of
$\overline{d}$ the minimum is achieved when $k=1$ (use induction on $k$ together with total compatibility and the triangle inequality:
since $E_1' \sim E_2$, $d(E, E_1') + d(E_2, E_2') = d(E, E_1') + d(E_1', E_2') \geq d(E, E_2')$).
\end{remark}
Non-simple distances do arise in our framework.
\begin{eg}
\label{eg:nonsimple}
Let $\sim$ be the equivalence relation on $\mathcal{E}$ defined as follows: $E= (C, V, \pi) \sim E'= (C, V, \pi')$ if and only if there are precisely two top-ranked candidates in all the votes in $\pi \cup \pi'$ (each other election forms its own singleton equivalence class). If the two candidates in question are $x, y$, denote by $\mathcal{E}_{xy}$ the equivalence class so defined.
Let $d= d_H$ and suppose $E = (C, V, \pi)\in \mathcal{E}_{ab}, E'= (C, V, \pi')\in \mathcal{E}_{cd}$, where $\{a,b\} \cap \{c,d\} = \emptyset$. Then $\tilde{d}(\overline{E}, \overline{E'})=n$. However
$\overline{d}(\overline{E}, \overline{E'}) = 2$, since $E$ and $E'$ are each equivalent to elections that are at distance $1$ from $\mathcal{E}_{bc}$. Thus $d_H$ is not simple for $\sim$.
\end{eg}
\begin{prop}\label{prop:simple compat}
Let $d$ be a simple distance and $\mathcal{K}$ a compatible consensus.
Then for every $r$ and $E$, $\overline{d}(\overline{E},
\overline{\mathcal{K}}_r) = d(E, \mathcal{K}_r)$.
Thus if $\R(\mathcal{K}, d)$ is compatible, then it satisfies
$\overline{\R(\mathcal{K}, d)} = \R(\overline{\mathcal{K}}, \overline{d})$.
\end{prop}
\begin{proof}
We have
$$
\overline{d}(\overline{E}, \overline{\mathcal{K}}_r)
= \min_{E'\in \mathcal{K}_r} \overline{d}(\overline{E}, \overline{E'})
= \min_{E'\in \mathcal{K}_r} \tilde{d}(\overline{E}, \overline{E'})
= \min_{E'\in\mathcal{K}_r} \min_{E''\sim E'} d(E, E'')
= \min_{E''\in \mathcal{K}_r} d(E, E'') = d(E, \mathcal{K}_r).
$$
The first equality holds by definition of distance to a set, the second because $d$ is simple, the
third by definition of $\tilde{d}$, the fourth by compatibility of $\mathcal{K}$ and the fifth for the same reason as
the first.
Now let $S = \R(\overline{\mathcal{K}}, \overline{d})$. Then $S(\overline{E}) = R(E)$ for all $E$, by the displayed computation. If $R:=\R(\mathcal{K}, d)$ is compatible then $\overline{R}$ exists and $\overline{R}(\overline{E}) = R(E)$ for all $E$. Thus $S=\overline{R}$.
\end{proof}
\begin{remark}
The condition that the distance be simple is necessary. For example, consider the setup of Example~\ref{eg:nonsimple}, where the consensus sets have the form $\mathcal{E}_{xy}$.
\end{remark}
\begin{remark}
The condition that $\R(\mathcal{K}, d)$ is compatible is not always satisfied, as we see when studying homogeneity in Section~\ref{s:homog}. In Section~\ref{ss:group} we give sufficient conditions for it to be satisfied automatically.
\end{remark}
\subsection{Symmetry groups}
\label{ss:group}
Equivalence is a form of symmetry between elections. Proposition~\ref{prop:simple compat} is clearly useful, but the only simple distances we have seen so far are
totally compatible ones, for which the result is obvious. We introduce a strengthening of equivalence that will yield simple distances. We apply it
in later subsections to discuss anonymity, neutrality, reversal symmetry and homogeneity.
We recall some basics of the theory of group actions on sets. Let $X$ be a set and $G$ a subgroup of the group of all permutations of $X$. The \hl{orbit} of $x\in X$ under $G$ is the set of all $g(x)$ as $g$ ranges over $G$.
\begin{defn}
\label{def:equiv}
Let $\sim$ be an equivalence relation on $\mathcal{E}$ and let $G$ be a group acting on $\mathcal{E}$ via morphisms. In other words, for each $g\in G$, $E \sim E'$ implies $g(E) \sim g(E')$.
If the equivalence classes of $\sim$ are precisely the orbits under the action of $G$, then we say $\sim$ is \hl{induced by} $G$.
The distance $d$ is \hl{$G$-equivariant} if $G$ acts via isometries:
$$
d(g(E), g(E')) = d(E, E') \qquad \text{for all $E, E' \in \mathcal{E}, g\in G$}.
$$
The partial social rule $R$ is \hl{$G$-invariant} if $R(g(E)) = R(E)$ for all $E\in D(R), g\in G$.
\end{defn}
\begin{prop}
\label{prop:DR invariant}
Suppose that $G$ is a group that induces $\sim$ via an action on $\mathcal{E}$. The following conditions are equivalent for a social rule $R$.
\begin{enumerate}[(i)]
\item $R$ is $G$-invariant and distance rationalizable.
\item $R = \R(\mathcal{K}, d)$ where $\mathcal{K}$ is $G$-invariant and $d$ is $G$-invariant.
\item $R = \R(\mathcal{K}, d)$ where $\mathcal{K}$ is $G$-invariant and $d$ is $G$-equivariant.
\item $\overline{R}$ is distance rationalizable on $\mathcal{Q}$.
\end{enumerate}
\end{prop}
\begin{proof}
The first, second and fourth parts are equivalent by Proposition~\ref{prop:DR compat}. The second implies the third by definition. Suppose that the third condition holds. It remains to show that $R$ is $G$-invariant. Fix arbitrary $r$ and $g\in G$. Since $\mathcal{K}$ is $G$-invariant, $g(\mathcal{K}_r) = \mathcal{K}_r$.
Then $d(E,\mathcal{K}_r)= d(g(E), g(\mathcal{K}_r)) = d(g(E), \mathcal{K}_r)$. Thus $R(E) = R(g(E))$, yielding the first condition.
\end{proof}
However, the proof of Proposition~\ref{prop:DR invariant} does not give a relationship between the distances used in parts (ii) and (iii). We now proceed to clarify this. We first give an important sufficient condition for a distance to be simple.
\begin{prop}
\label{prop:simple}
Let $G$ be a group, and suppose that $\sim$ is induced by $G$, while $d$ is $G$-equivariant. Then $d$ is simple.
Furthermore
$$
\overline{d}(\overline{E}, \overline{E}') = \min_{g\in G} d(E,g(E')) = \min_{g\in G} d(g(E), E').
$$
\end{prop}
\begin{proof}
We show that for each $x,y\in \mathcal{Q}$, the minimum value of $k$ for
paths achieving the minimum in \eqref{eq:quot} is always $1$. Assume that this is not the case, so
there exist $x,y$, a minimum $k>1$ and admissible paths such that
$$
\overline{d}(x,y) = \sum_{i=1}^k d(E_i, E'_i).
$$
Choose $g\in G$ so that $g(E_k) = E'_{k-1}$ (possible since $\sim$ is induced by $G$).
Then by $G$-equivariance and the triangle inequality
\begin{align*}
d(E_{k-1}, E'_{k-1}) + d(E_k, E'_{k}) &= d(E_{k-1}, E'_{k-1}) + d(g(E_k), g(E'_{k})) \\ & = d(E_{k-1}, E'_{k-1}) + d(E'_{k-1}, g(E'_k)) \\
&\geq d(E_{k-1}, g(E'_k)).
\end{align*}
This contradicts the minimality of $k$, and this contradiction shows that $d$ is simple.
The displayed formulae for $\overline{d}$
follow immediately: since $d$ is simple, $\overline{d} = \tilde{d}$, and all $E'$ projecting to
$y$ are equivalent, so each can be mapped onto any other via some $g$.
\end{proof}
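The formula of Proposition~\ref{prop:simple} is easy to test in a toy model outside the election setting (an illustration of ours, not an example from the text): take strings of fixed length under the cyclic rotation group, with the Hamming distance, which is rotation-equivariant. The quotient objects are then necklaces, and the quotient distance is a minimum over rotations of either argument.

```python
from itertools import product

def hamming(s, t):
    """Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(s, t))

def rotations(s):
    """Orbit of s under the cyclic group acting by rotation."""
    return {s[i:] + s[:i] for i in range(len(s))}

def quotient_dist(s, t):
    """min_g d(s, g(t)): the formula from the proposition."""
    return min(hamming(s, u) for u in rotations(t))

# The same minimum is obtained rotating either argument,
# because the group acts by isometries:
for s, t in product(["aab", "abb", "bbb"], repeat=2):
    assert quotient_dist(s, t) == min(hamming(u, t) for u in rotations(s))

assert quotient_dist("aab", "aba") == 0   # same orbit (same necklace)
assert quotient_dist("aab", "abb") == 1
```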
\begin{prop}
\label{prop:nice dist}
Suppose that $R = \R(\mathcal{K}, d)$ where $\mathcal{K}$ is $G$-invariant and $d$ is $G$-equivariant.
Then $\overline{R} = \R(\overline{\mathcal{K}}, \tilde{d})$.
\end{prop}
\begin{proof}
$R$ is $G$-invariant by Proposition~\ref{prop:DR invariant}, so $\overline{R}$ exists. By Proposition~\ref{prop:simple}, $d$ is simple. The result follows from Proposition~\ref{prop:simple compat}.
\end{proof}
\begin{remark} Also $R = \R(\mathcal{K}, d')$
where $d'$ is $G$-invariant and $\overline{d'} = \tilde{d}$. In fact
$d'(E,E')$ equals the familiar quantity $\min_{g,g'\in G} d(g(E), g'(E'))$.
\end{remark}
\subsection{Neutrality and reversal symmetry}
\label{ss:neutral}
Proposition~\ref{prop:DR invariant} does not apply to the study of these properties, because rather than $R(g(E)) = R(E)$, we want $R(g(E)) = g(R(E))$: the output of the rule changes in a consistent way. This is because the group action changes the candidates, unlike the case with anonymity.
A social rule of size $s$ is a mapping taking each election to a set of $s$-rankings. If a group $G$ acts on the set of rankings, then there is a natural induced action on social rules: $g(R)$ is the rule for which $g(R)(E) = g(R(E))$. If the group acts on $s$-rankings for all $s$, we can say more.
\begin{defn}
Suppose that $G$ is a group acting on $L(C^*)$ such that it maps $L_s(C^*)$ to itself for every $s$. The partial social rule $R$ is \hl{$G$-equivariant} if the identity $R(g(E)) = g(R(E))$ holds.
\end{defn}
An example of this is \hl{reversal symmetry}. A social welfare rule satisfies reversal symmetry if turning all input rankings upside down also reverses the output. The group in question is the group of order $2$, generated by $g$ say. All our examples so far of distances on rankings have satisfied reversal symmetry.
\begin{remark}
Reversal symmetry has been defined for social choice rules as: if $a$ is the unique winner in the original profile, then $a$ is not a winner in the reversed profile.
For example, the social welfare version of Borda's rule satisfies this. However for social choice rules which do not come from social welfare rules, this is inconsistent with our definition above. In the case of DR rules, there is no difficulty, because each social choice rule yields a social welfare rule as in Definition~\ref{def:score}.
\end{remark}
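As a sanity check of the social welfare notion, the following sketch (our own, on a toy tie-free profile) verifies that the social welfare version of Borda's rule reverses its output ranking when every vote is turned upside down.

```python
def borda_ranking(votes):
    """Social welfare Borda: rank candidates by total Borda score.

    Each vote is a string listing candidates from top to bottom."""
    m = len(votes[0])
    score = {c: 0 for c in votes[0]}
    for v in votes:
        for pos, c in enumerate(v):
            score[c] += m - 1 - pos
    # sort by decreasing score (this profile has no ties)
    return "".join(sorted(score, key=score.get, reverse=True))

def reverse_profile(votes):
    """The group of order 2 acts by turning every vote upside down."""
    return [v[::-1] for v in votes]

votes = ["abc", "abc", "bac"]
assert borda_ranking(votes) == "abc"
# reversing all inputs reverses the output ranking
assert borda_ranking(reverse_profile(votes)) == borda_ranking(votes)[::-1]
```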
It is straightforward to prove an analogue of Proposition~\ref{prop:DR invariant} that extends to the case of $G$-equivariant rules. We omit the details.
\begin{prop}
\label{prop:reversal}
If $\mathcal{K}$ and $d$ satisfy reversal symmetry then so does $\R(\mathcal{K}, d)$.
Conversely, if $R$ satisfies reversal symmetry then $R = \R(\mathcal{K}, d)$ where $\mathcal{K}$ and $d$ both do.
\hfill\ensuremath{\square}
\end{prop}
For example, since $\sunam, d_H^1$ and $d_K^1$ each satisfy reversal symmetry, so do the modal ranking rule and Kemeny's rule.
If a group $G$ acts on $C^*$ then there is a natural induced action on $s$-rankings, whereby the ranking $a_1\cdots a_s$ maps to $g(a_1)\cdots g(a_s)$. An example of this involves \hl{neutrality}. In this case $G$ is the group of all permutations of the candidates.
Neutrality is a very natural
condition for consensuses and for distances, and is satisfied by all our main examples. It means that the identities of candidates are not relevant because each candidate is treated symmetrically.
\begin{prop}
\label{prop:neut}
Let $\mathcal{K}$ be a neutral consensus and $d$ a neutral distance. Then $\R(\mathcal{K}, d)$ is neutral. Conversely, if $R$ is neutral and distance rationalizable then $R = \R(\mathcal{K}, d)$ where
$\mathcal{K}$ and $d$ are neutral.
\hfill\ensuremath{\square}
\end{prop}
\subsection{Anonymity}
\label{ss:anon}
We now apply Proposition~\ref{prop:DR invariant} directly.
First we discuss the concept of anonymity. Several authors use a fixed finite
voter set
and define a rule to be anonymous if the rule is invariant under permutations
of the set. This deals with the order of voters, but not their identities.
On the other hand, allowing arbitrary identities leads us to issues of
classes that are not sets, category theory, etc. Our convention that there
is a single countably infinite set of voters allows us to deal both with
the order and identity of voters.
We start with an example that explains the need to distinguish
$G$-equivariance from total compatibility.
\begin{eg} \label{eg:anon dist}
Consider $R:=\R(\wunam, d^1_H)$, plurality rule. The consensus $\wunam$
is anonymous, because identities and order of voters are not important.
The distance $d^1_H$ is \emph{not}
totally anonymous, but it is anonymous. Consider the case $E = (C,V,\pi), E' = (C,V, \pi')$,
where $C = \{a,b\}$, $V = \{v_1, v_2\}$,
$\pi = (ab, ba)$, $\pi' = (ba, ab)$. Then $d(E, E') \neq 0 = d(E,E)$, although each of $E$ and $E'$ is obtained from the other by permutation of the
set of voters.
\end{eg}
We can use the results of Section~\ref{ss:group} directly.
\begin{defn}
\label{def:anon2}
Let $G$ be the group of all bijections of $V^*$. For any set $X$, we define an action of $G$ on functions in the usual way by $g\cdot f (v) = f(g(v))$.
In particular for each $C$ we can apply this to $X = L(C)$. This allows us to define an action on $\mathcal{E}$ via $g\cdot (C, V, \pi):= (C, g(V), g\cdot \pi)$. Let $\sim$ be the
equivalence relation induced by this action. A partial rule is \hl{anonymous} if it is compatible with $\sim$. A distance is \hl{anonymous} if
it is $G$-equivariant, and \hl{totally anonymous} if it is totally compatible with $\sim$.
We denote $\mathcal{Q}$ by $\mathcal{V}$ and call it the set of \hl{anonymous profiles} or \hl{voting situations}.
\end{defn}
\begin{remark}
An anonymous profile is completely determined by the numbers of voters having each preference order, and hence is encoded by
a \hl{multiset} $\nummap(E)$ on $L(C)$ of weight $|V|$ (we call $\nummap$ the \hl{vote number map} --- note that $E \sim E'$ if and only if $\nummap(E) = \nummap(E')$).
A rule is anonymous if its output depends only on the anonymous profile, and not the particular voters or their order.
\end{remark}
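The vote number map $\nummap$ is straightforward to implement as a multiset; two profiles are then equivalent precisely when their multisets agree. A minimal sketch (the encoding of votes as strings and voters as labels is our own):

```python
from collections import Counter

def nu(profile):
    """Vote number map: the multiset of votes, forgetting voter identities."""
    return Counter(profile.values())

# Two elections over the same candidates; voters differ in identity and order.
E  = {"v1": "ab", "v2": "ba", "v3": "ba"}
Ep = {"v7": "ba", "v2": "ab", "v9": "ba"}

assert nu(E) == nu(Ep)          # E ~ E': the same anonymous profile
assert nu(E) != nu({"v1": "ab", "v2": "ab", "v3": "ba"})
```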
\begin{eg}
\label{eg:anon}
Let $C=\{a, b\}$. Let $g$ be the bijection of $V^*$ that transposes $v_1$ and $v_2$. Let $E = (C, \{v_1\}, \{ba\}), E' = (C, \{v_1, v_2\}, \{ba,ab\})$. Then $g(E')$ is an election in which $v_1$ votes $ab$ and $v_2$ votes $ba$, while in $g(E)$, $v_2$ votes $ba$ and $v_1$ does not vote.
\end{eg}
\subsubsection{(Totally) anonymous distances}
\label{sss:anon dist}
The next result is obvious, but useful. Part (ii) was observed by Elkind, Faliszewski and Slinko \cite[Proposition~1]{EFS2015}.
\begin{prop}
\label{prop:tot anon}
The following results hold.
\begin{enumerate}[(i)]
\item Every (reduced) tournament distance is totally anonymous.
\item A votewise distance is anonymous if and only if its underlying seminorm is symmetric.
\item A votewise distance based on a norm
cannot be totally anonymous.
\end{enumerate}
\end{prop}
\begin{proof}
The first part is clear because the definition uses only the number of
votes of each type. The second follows immediately from the definitions. For the third part, use the idea of Example~\ref{eg:anon dist}.
\end{proof}
\begin{eg}
\label{eg:votewise anon}
For an anonymous votewise distance, we can let $S=\{s_1^{n_1}, \dots, s_k^{n_k}\}$
denote the multiset of weight $n$ corresponding to the $n$-tuple $(a_1,
\dots, a_n)\in \mathbb{R}^n$. We can define $N(S) = N_n(a)$, where $n =
\sum_i n_i$ can be computed knowing only $S$.
For example, consider the $\ell^p$ norm for $1\leq p \leq \infty$ defined on $\mathbb{R}^n$. This yields an anonymous
votewise distance when coupled with any underlying distance $d$ on $L(C)$.
\end{eg}
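For instance, a symmetric $\ell^p$ norm can be evaluated directly from the multiset $S$, since both $n$ and the norm depend only on the entries with multiplicity. A minimal sketch (our own encoding, storing $S$ as a counter):

```python
from collections import Counter

def lp_from_multiset(S, p):
    """Evaluate the symmetric l^p norm from the multiset S = {s_i^(n_i)}.

    Symmetry means the value depends only on the multiset of entries,
    so n = sum_i n_i and the norm are computable from S alone."""
    n = sum(S.values())
    assert n > 0
    return sum(cnt * abs(s) ** p for s, cnt in S.items()) ** (1.0 / p)

a = [2.0, -1.0, 2.0, 1.0]
S = Counter(a)                       # the multiset {2.0^2, -1.0^1, 1.0^1}
# agrees with the l^2 norm computed from the tuple itself
assert abs(lp_from_multiset(S, 2) - sum(t * t for t in a) ** 0.5) < 1e-9
```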
\begin{eg} \label{eg:Hamming-quot}
Consider the Hamming distance $d:=d_H^1$. For each $x,y\in \mathcal{V}$,
the distance $\overline{d}(x, y)$ is the minimum number of voters whose
votes must be changed in order to transform $x$ into $y$. For example if
$x\in \mathcal{V}$ has $2$ $abc$ voters and $3$ $bac$ voters,
while $y$ has $2$ $bac$ voters and $3$ $cba$ voters, then
$\overline{d}(x,y) = 3$. Note that for
the Kemeny metric $d:=d_K^1$, $\overline{d}(x,y) = 8$.
\end{eg}
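The values in Example~\ref{eg:Hamming-quot} can be confirmed by brute force: $\overline{d}(x,y)$ is a minimum over matchings of the voters of $x$ to the voters of $y$, and for five voters it suffices to enumerate all $5!$ matchings. The encoding of rankings as strings is our own.

```python
from itertools import permutations

def kemeny(u, v):
    """Kemeny distance: the number of pairs ranked oppositely by u and v."""
    pos_u = {c: i for i, c in enumerate(u)}
    pos_v = {c: i for i, c in enumerate(v)}
    cs = list(u)
    return sum((pos_u[a] < pos_u[b]) != (pos_v[a] < pos_v[b])
               for i, a in enumerate(cs) for b in cs[i + 1:])

def hamming_vote(u, v):
    """Hamming distance on single votes: 1 if they differ, else 0."""
    return int(u != v)

def quotient_dist(x, y, d):
    """Minimum over matchings of voters -- brute force for this small case."""
    return min(sum(d(u, v) for u, v in zip(x, p)) for p in permutations(y))

x = ["abc", "abc", "bac", "bac", "bac"]   # 2 abc-voters and 3 bac-voters
y = ["bac", "bac", "cba", "cba", "cba"]   # 2 bac-voters and 3 cba-voters

assert quotient_dist(x, y, hamming_vote) == 3   # change 3 votes
assert quotient_dist(x, y, kemeny) == 8         # as in the example
```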
\subsubsection{Anonymous DR rules}
\label{sss:anon rule}
Because of the special form of the equivalence relation for anonymity (it does not touch the candidate sets), a partial social rule on $\mathcal{V}$ has a nice form. Indeed, many authors define voting rules directly on $\mathcal{V}$.
We simply define an anonymous rule in the DR framework by choosing a consensus notion $K$ on $\mathcal{V}$ and a distance $\delta$ on $\mathcal{V}$, and using the analogue of~\eqref{eq:argmin}.
Because $\mathcal{V}$ can be described as multisets which correspond geometrically to histograms, this may allow us to create interesting anonymous rules using geometric intuition.
The next characterization, which follows directly from Propositions~\ref{prop:DR invariant} and~\ref{prop:nice dist}, answers positively a question raised in \cite[p. 362, discussion after Prop. 4]{EFS2015}.
\begin{prop}
\label{prop:anon}
If $\mathcal{K}$ and $d$ are anonymous, then $\R(\mathcal{K}, d)$ is anonymous and $\overline{R} = \R(\overline{\mathcal{K}}, \tilde{d})$. Conversely if $R$ is anonymous and distance rationalizable, then $R = \R(\mathcal{K}', d')$ where $\mathcal{K}'$ is anonymous and $d'$ is totally anonymous.
\hfill\ensuremath{\square}
\end{prop}
This applies to all consensuses described so far, and to all votewise distances based on symmetric seminorms, in addition to tournament distances. Thus all rules in Table~\ref{t:DR egs} are anonymous.
Note that elements of $\mathcal{V}$ can be encoded by multisets which are essentially histograms. A standard measure of distance between histograms is the \hl{Earth Mover} or \emph{transportation} distance. The interpretation in our situation, when $d$ is anonymous
and standard, is that we must move voter mass between types of voters while incurring
the minimum cost (distance). In fact in this case $\tilde{d}$ is exactly the Earth Mover distance based on $d$. Computing it is a special case of the \emph{linear assignment problem} of operations research. The minimum can be computed in polynomial time via the ``Hungarian method" \cite{Munk1957}. An equivalent formulation of the problem is to find a minimum weight matching in a bipartite graph.
\section{Homogeneity}
\label{s:homog}
In this case we use a slightly different equivalence relation.
\begin{defn}
\label{def:distmap}
Let $E = (C,V, \pi)$ be an election, where $n=|V|$.
The \hl{vote distribution} associated to $E$ is the probability distribution on $L(C)$ induced by
the multiset $\nummap(E)$, which we denote $\distmap(E)$. The vote distribution map defines an equivalence relation $\sim$ on $\mathcal{E}$ in the usual way. We denote the
quotient space by $\mathcal{P}$.
\end{defn}
\begin{defn}
\label{def:homog}
A rule is \hl{homogeneous} if and only if it is compatible with $\sim$. A distance is called \hl{totally homogeneous} if it is totally compatible with $\sim$.
\end{defn}
\begin{remark}
In other words, for a homogeneous rule, the set of winners depends only on the probability
distribution of voter types --- cloning each voter the same number of times makes no difference to the result.
Our definition of homogeneity implies anonymity, because the equivalence relation used for anonymity refines the one used in
this section. Some authors do not make it clear whether they consider homogeneous rules to be anonymous, because they give a definition in terms of profiles in which the cloned voters occupy a particular position. Of course, the two definitions are the same in the presence of anonymity.
It is important to note that $\sim$ is not induced by a group action. Rather, there is a monoid (a ``group without inverses") acting. On $\mathcal{V}$ there is an action of the positive integers under multiplication
where for each $x\in \mathcal{V}$, $k\cdot x$ is the voting situation formed by adding $k-1$ copies of each voter. A rule is homogeneous if it is anonymous and invariant under the action of this monoid. Thus, for example, starting with an election $E$ and doubling or tripling the number of voters will lead to equivalent elections $2E, 3E$, but there is not
necessarily any way to get from $3E$ to $2E$ via an element of the monoid, because of the lack of inverses. This has important consequences, as we now see.
\end{remark}
The above remark shows that Proposition~\ref{prop:DR invariant} does not necessarily apply to homogeneity. In fact the conclusion is known to be false.
\begin{eg}
\label{eg:dodgson}
Consider $\mathcal{K} = \cond$ and $d = d^1_K$. The rule $\R(\mathcal{K}, d)$ is known as \hl{Dodgson's rule}
and is known not to be homogeneous, although it is anonymous. For example, consider the following example of Fishburn \cite{Fish1977} with $C=\{a_1, \dots, a_7, x\}$. We start with $a_1\dots a_7$, and consider all its $7$ cyclic permutations. We then insert $x$ between the $4$th and $5$th entries in each case, so $x$ is always in $5$th position.
Then $d^1_K(E, \cond_x) = 7$ ($x$ must switch past each $a_i$ exactly once) but $d^1_K(E, \cond_{a_i}) = 6$ for each $i$, because, for example, $a_1$ must switch past $a_7, a_6, a_5$ respectively $3,2,1$ times.
However, let $k\geq 1$ and consider the election $kE$. Then $k^{-1}d^1_K(kE, \cond_x) \to 3.5$ as $k\to \infty$ (because we need only just over $1/2$ a switch per $a_i$), while $k^{-1}d^1_K(kE, \cond_{a_i}) \to 4.5$ (because, for example, $a_1$ must switch past $a_7, a_6, a_5$ respectively just over $2.5,1.5,0.5$ times).
In the analogue of the proof of Proposition~\ref{prop:DR invariant}, we can conclude only that $kE$
minimizes the distance to elements of $D(\mathcal{K})$ of the form $kE'$, but not to all of $D(\mathcal{K})$.
\end{eg}
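Fishburn's numbers can be verified by brute force. The sketch below is our own code (not from \cite{Fish1977}): it encodes $x$ as candidate $0$ and $a_i$ as $i$, and computes a Dodgson score as the minimum number of adjacent swaps needed to make a candidate a Condorcet winner; only swaps that lift the candidate itself can help, so the search may range over per-vote lift amounts.

```python
def fishburn_profile():
    # 7 cyclic permutations of a_1 .. a_7 (labelled 1..7), with x (= 0)
    # inserted between the 4th and 5th entries, so x is always 5th
    votes = []
    for v in range(7):
        order = [((v + t) % 7) + 1 for t in range(7)]
        votes.append(order[:4] + [0] + order[4:])
    return votes

def is_condorcet_winner(votes, c):
    n = len(votes)
    for rival in set(votes[0]) - {c}:
        wins = sum(vote.index(c) < vote.index(rival) for vote in votes)
        if 2 * wins <= n:               # needs a strict majority vs every rival
            return False
    return True

def dodgson_score(votes, c, cap=10):
    """Minimum number of adjacent swaps, each lifting c one place in one
    vote, needed to make c a Condorcet winner (None if more than cap)."""
    tops = [vote.index(c) for vote in votes]        # maximal lift per vote

    def lifted(vote, lift):
        vote = vote[:]
        p = vote.index(c)
        vote.insert(p - lift, vote.pop(p))
        return vote

    def search(i, left, lifts):
        if i == len(votes):
            return left == 0 and is_condorcet_winner(
                [lifted(v, l) for v, l in zip(votes, lifts)], c)
        return any(search(i + 1, left - lift, lifts + [lift])
                   for lift in range(min(left, tops[i]) + 1))

    for budget in range(cap + 1):       # iterative deepening over swap budgets
        if search(0, budget, []):
            return budget
    return None

votes = fishburn_profile()
assert dodgson_score(votes, 0) == 7                   # x: one swap per a_i
assert dodgson_score(votes, 1) == dodgson_score(votes, 5) == 6
```

By the cyclic symmetry of the profile every $a_i$ has score $6$, so each $a_i$ beats $x$ under Dodgson's rule on $E$, while on $kE$ for large $k$ the order reverses, as described above.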
\begin{remark}
The same example shows that $\R(\cond, d^1_H)$, the Voter Replacement Rule, is not homogeneous. In this case the limiting distances to $\cond_x$ and $\cond_{a_i}$ are $1.75$ and $1.5$, while the distances for the original $E$ are both equal to $2$.
\end{remark}
In order to prove a result similar to Proposition~\ref{prop:DR invariant}, we need a strong condition on $\mathcal{K}$.
We call an anonymous consensus \hl{divisible} if every element of $\mathcal{K}_r$ with $kn$ voters has the
form $kE$ where $E$ has $n$ voters. This is a very strong condition --- taking $n=1$ shows that $\mathcal{K}$
is extended by $\sunam$ (up to possible permutation of the winners).
We now generalize \cite[Thm 8]{EFS2015}, which dealt with the case where
$\mathcal{K} = \sunam$ and $d$ is $\ell^p$-votewise, based on an underlying pseudometric.
\begin{defn}
\label{def:homog dist}
An anonymous distance on $\mathcal{E}$ is \hl{homogeneous} if for each $k\geq
1$ and each $E,E'\in \mathcal{E}$,
$$d(E, E') = d(kE, kE').$$
A family of symmetric seminorms $N$ is \hl{homogeneous} if $N_{nk}(x^{(k)})
= N_n(x)$ for all $x\in \mathbb{R}^n$ and all $k\geq 1$. Here $x^{(k)}$
denotes the element of $\mathbb{R}^{nk}$ obtained by concatenating $k$
copies of $x$.
\end{defn}
\begin{remark}
The reader should avoid confusion by noting
that the term \emph{homogeneous} is often used for the different property of a seminorm
expressed by the identity $N(\lambda x) = |\lambda| N(x)$.
\end{remark}
\begin{remark}
Let $d$ be a standard distance and $N$ a symmetric seminorm. Then $d^N$ can be normalized to be homogeneous. Explicitly, let $d_*^N(E, E') = n^{-1} d^N(E, E')$ where $E = (C,V, \pi)$ and $|V| = n$. The DR rules defined by $d^N$ and $d_*^N$ are the same, since we are only scaling the distance by a constant factor.
\end{remark}
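For the $\ell^p$ seminorms specifically, dividing by $n^{1/p}$ gives a family that is homogeneous in the sense of Definition~\ref{def:homog dist} (for $p=1$ this is the $n^{-1}$ scaling of the remark above); a quick numerical check (our sketch):

```python
def lp_seminorm(x, p):
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def normalized(x, p):
    # scale the l^p seminorm on R^n by n^(-1/p); for p = 1 this is the
    # n^(-1) scaling used in the remark above
    return lp_seminorm(x, p) / len(x) ** (1.0 / p)

x = [3, 0, 1, 2]
for k in (2, 3, 5):
    xk = x * k                  # x^(k): concatenation of k copies of x
    for p in (1, 2):
        assert abs(normalized(xk, p) - normalized(x, p)) < 1e-12
```

At a fixed number of voters the rescaling multiplies all candidate distances by the same constant, so the induced DR rule is unchanged.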
\begin{prop}
\label{prop:divis}
Let $\mathcal{K}$ be a homogeneous divisible consensus and $d$ a homogeneous distance. Then
$\R(\mathcal{K}, d)$ is homogeneous.
\end{prop}
\begin{proof}
The proof of Proposition~\ref{prop:DR invariant} adapts directly to this case, as described above.
\end{proof}
Thus we recapture the well-known fact that Kemeny's rule $\R(\sunam, d_K^1)$ is homogeneous. Proposition~\ref{prop:divis} shows, for example, that
although Dodgson's
rule can be rationalized with respect to $\sunam$ and some distance (since Dodgson's rule satisfies the unanimity axiom),
no such distance can be homogeneous.
\section{The Votewise Minimizer Property}
\label{s:VMP}
The failure of $\R(\mathcal{K}, d)$ to inherit various conditions from $\mathcal{K}$ is related to the fact that minimization does not respect various operations. Roughly speaking, votewise distances combine better with votewise consensuses. We now make some technical (and rather strong) definitions that allow for several positive results when dealing with votewise distances.
\begin{defn}
\label{def:CMP}
Let $\mathcal{K}$ be a compatible consensus and $d$ a compatible distance. Say that $(\mathcal{K}, d)$ has
the \hl{compatible minimizer property} (CMP) if for each $E, E'\in \mathcal{E}$ with $E\sim E'$ and each $r$,
$d(E, \mathcal{K}_r) = d(E', \mathcal{K}_r)$.
\end{defn}
\begin{remark}
If $\sim$ is induced by a group action then the CMP is automatically satisfied, as used in the proof of Proposition~\ref{prop:DR invariant}.
The analogue of Proposition~\ref{prop:DR invariant} does not hold for general equivalence relations, as we saw in Example~\ref{eg:dodgson}. However, with the additional assumption of the CMP, everything works well.
\end{remark}
\begin{prop}
\label{prop:CMP}
Let $\mathcal{K}$ be a compatible consensus and $d$ a compatible distance, and suppose that $(\mathcal{K}, d)$ satisfies the CMP. Then $\R(\mathcal{K}, d)$ is compatible.
\end{prop}
\begin{proof}
Let $E, E'\in \mathcal{E}$ with $E\sim E'$. By the CMP, $d(E, \mathcal{K}_r) = d(E', \mathcal{K}_r)$ for all $r$; in particular the minimizing values of $r$ are the same.
\end{proof}
\begin{eg}
\label{eg:cond homog}
If $d$ is totally compatible then the CMP is automatically satisfied. Thus, for example, every rule $\R(\mathcal{K}, d)$, where $d$ is a tournament distance and $\mathcal{K}$ is anonymous and homogeneous, is anonymous and homogeneous.
\end{eg}
\begin{eg}
\label{eg:VMP no}
Consider the
election $E=(C,V,\pi)$ where $C=\{a,b\}$, $V$ has size $5$, and $\pi =
\{ab,ab,ba,ba,ba\}$. Then $d(E, \cond_a) = 1$ for $d\in \{d_H, d_K\}$,
and every minimizer differs from $\pi$ only in that precisely one of the
$ba$ voters switches to $ab$. However, if we consider $3E$ then each minimizer requires not $3$, but $2$ switches. Thus $(\cond, d)$
does not satisfy the CMP with respect to the equivalence relation used to define homogeneity.
This also shows that
$(\maj,d)$ need not satisfy the CMP, because $\maj$
coincides with $\cond$ when $m=2$.
Thus we should not necessarily expect Dodgson's rule or the Voter Replacement Rule to be homogeneous, and indeed they are not, as Example~\ref{eg:dodgson} shows.
\end{eg}
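With two candidates, the Condorcet consensus is just a strict majority for $a$, so the minimizer computation in this example reduces to a one-line count (our sketch; one switched voter costs one unit under both $d_H$ and $d_K$ here, since $ab$ and $ba$ differ by a single adjacent swap):

```python
def switches_to_make_a_win(n_ab, n_ba):
    """Minimum number of ba-voters who must switch to ab so that a has a
    strict majority (with two candidates this is the Condorcet condition)."""
    need = (n_ab + n_ba) // 2 + 1      # strict majority threshold
    return max(0, need - n_ab)

assert switches_to_make_a_win(2, 3) == 1      # the election E
assert switches_to_make_a_win(6, 9) == 2      # 3E needs only 2 switches, not 3
```

The cost does not scale linearly with the number of clones, which is exactly the failure of the CMP described above.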
\begin{defn}
\label{definition:vote min}
Suppose that $d$ is votewise and anonymous, and $\mathcal{K}$ is anonymous.
Say that $(\mathcal{K}, d)$ satisfies the \hl{votewise minimizer property} (VMP) if the following condition is satisfied.
\begin{quotation}
For each $r\in L_s(C)$ and each election $E = (C,V,\pi)\in \mathcal{E}$, there
exists a minimizer $(C,V,\pi^*)\in \mathcal{K}_r$ of the distance from $E$ to
$\mathcal{K}_r$, such that for all $i$, $d(\pi_i,\pi^*_i)$ depends only on $\pi_i$
and $r$.
\end{quotation}
\end{defn}
\begin{prop}
\label{prop:VMP char}
If $(\mathcal{K}, d)$ satisfies the VMP, then
\begin{itemize}
\item $d(\pi_i, \pi_i^*)$ has the form $\delta(t,r)$ for some function $\delta$, where $t=\pi_i\in L(C)$ and $r\in L_s(C)$;
\item $d(E, \mathcal{K}_r)$ has the form $N(S)$ where $S$ is the
multiset of all values of $\delta(t, r)$ counted with multiplicity.
\end{itemize}
\end{prop}
\begin{proof}
This follows directly from the definitions.
\end{proof}
\begin{eg}
\label{eg:VMP formula}
Let $\mathcal{K} = \wunam$ and $d = d_K$, and $N = \ell^2$. For each $E=(C,V,\pi)\in \mathcal{E}$ and $a\in C$, $d(E, \wunam_a) = N(d(\pi_1, \pi_1^*), \dots , d(\pi_n, \pi_n^*))$.
We can take $\pi^*$ to be the ranking derived from $\pi$ by swapping $a$ to the top. Thus
$d(E, \wunam_a)^2 = \sum_{t\in L(C)} n(t) r(t,a)^2$, where $n(t)$ is the number of times $t$ occurs in $\pi$ and $r(t,a)$ is the number of candidates ranked above $a$ in $t$.
\end{eg}
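The closed form can be checked directly against the votewise minimizer (our sketch; $r(t,a)$, the number of candidates ranked above $a$ in $t$, is exactly the Kendall cost of swapping $a$ to the top):

```python
from itertools import combinations
from collections import Counter

def kendall(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings."""
    p1 = {c: i for i, c in enumerate(r1)}
    p2 = {c: i for i, c in enumerate(r2)}
    return sum((p1[x] < p1[y]) != (p2[x] < p2[y])
               for x, y in combinations(r1, 2))

def lift_to_top(r, a):
    return (a,) + tuple(c for c in r if c != a)

profile = [("b", "a", "c"), ("c", "b", "a"), ("a", "c", "b"), ("c", "b", "a")]
# squared distance via the votewise minimizer pi* (a swapped to the top) ...
direct = sum(kendall(r, lift_to_top(r, "a")) ** 2 for r in profile)
# ... and via the closed form, with r(t, a) = number of candidates above a
closed = sum(n * t.index("a") ** 2 for t, n in Counter(profile).items())
assert direct == closed == 9
```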
\begin{prop}
\label{prop:VMPimpCMP}
Let $\mathcal{K}$ be an anonymous consensus and $d$ a votewise anonymous and homogeneous distance.
If $(\mathcal{K}, d)$ satisfies the VMP, then it satisfies the CMP with respect to the equivalence relation defining homogeneity.
\end{prop}
\begin{proof}
For each $E$ and $r$, there is a minimizer of $d(E, \mathcal{K}_r)$ for which the distance has the form $N(S)$, where $S$ is the multiset of values of $d(\pi_i, \pi'_i)$ occurring. Thus it depends only on the equivalence class with respect to anonymity. By homogeneity of $d$, it in fact depends only on the equivalence class with respect to homogeneity.
\end{proof}
Example~\ref{eg:VMP no} shows that the VMP is not always satisfied, and Proposition~\ref{prop:VMP} gives sufficient conditions for it to be satisfied.
\begin{prop}
\label{prop:VMP}
Let $d$ be an anonymous votewise distance on $\mathcal{E}$.
Suppose that the $s$-consensus $\mathcal{K}$ satisfies the following: for each $r\in L_s(C)$, there is a nonempty subset $S_r$ of $L(C)$ such that
$\mathcal{K}_r$ consists precisely of elections for which no voter has a ranking in $S_r$.
Then $(\mathcal{K}, d)$ satisfies the VMP.
\end{prop}
\begin{proof}
The minimizer in question is obtained by, for each $i$,
choosing a closest element of $L(C)\setminus S_r$ to $\pi_i$ under the underlying distance.
\end{proof}
\begin{eg}
\label{eg:VMP yes}
$(\sunam^s, d)$ satisfies the VMP for each $s$, because we can take $S_r$ to be the set of rankings which do not agree with $r$ in all of their top $s$ places. Any consensus which $\sunam^s$ extends also satisfies VMP. For example, we can choose one fixed ranking that does agree with $r$ in the top $s$ places, and define $S_r$ to be its complement. Note that this example is not neutral.
\end{eg}
\subsection{Homogeneity}
\label{ss:VMP homog}
So far we can only show homogeneity when using $\sunam$. We want to widen this to at least $\wunam$. We use a definition from Elkind, Faliszewski and Slinko \cite{EFS2015}.
\begin{defn}
We call a seminorm $N$ \hl{monotone in the positive orthant} if whenever
$0\leq x_i \leq y_i$ for all $i$, $N(x) \leq N(y)$.
\end{defn}
\begin{prop}
\label{prop:homog VMP} Suppose that $\mathcal{K}$ is homogeneous, $d^N$ is votewise, anonymous and homogeneous, $(\mathcal{K}, d^N)$ satisfies the VMP, and $N$ is monotone in the positive orthant.
Then $\R(\mathcal{K}, d^N)$ is homogeneous.
\end{prop}
\begin{proof}
Let $E\in \mathcal{E}$ and $k\geq 1$. First note that $d(E, \mathcal{K}_r) = d(kE, k\mathcal{K}_r) \geq d(kE, \mathcal{K}_r)$. We now prove the converse inequality.
By VMP, $d(E, \mathcal{K}_r) = N(S)$ where $S$ is the multiset of values $d(\pi, \pi^*)$. Also by VMP and homogeneity, $d(kE, \mathcal{K}_{r}) = N(kS')$ where $S'$ is the multiset of values $d(\pi, \pi^{**})$ (here the minimizer may depend on $k$, so $\pi^{**}$ may not equal $\pi^*$). Note that $S'$ is elementwise at least as great as $S$, because $\pi^*$ is a minimizer. By homogeneity and monotonicity in the positive orthant, $d(kE, \mathcal{K}_{r}) = N(S') \geq N(S) = d(E, \mathcal{K}_r)$, as required.
Proposition~\ref{prop:CMP} now gives the result.
\end{proof}
\begin{cor}
\label{cor:homog}
If $1\leq p \leq \infty$, then
$\R(\sunam^s, d^p)$ is homogeneous.
\end{cor}
\begin{remark}
The case $p = \infty$ allows for stronger results \cite[Thm 9]{EFS2015}.
\end{remark}
\subsection{Consistency}
\label{ss:consist}
Consistency, introduced by Young \cite{Youn1975}, deals with the effect of splitting the voter set into two parts.
\begin{defn}
\label{def:consist}
Let $E = (C, V, \pi)$ and $E' = (C, V', \pi') \in \mathcal{E}$ where $V \cap V' = \emptyset$. We define $E+E' = (C, V \cup V', \pi'')$, where
$$
\pi''(v) = \begin{cases} \pi(v) \qquad \text{if $v \in V$;} \\ \pi'(v) \qquad \text{if $v \in V'$.} \end{cases}
$$
A partial social rule $R$ is \hl{consistent} if whenever
$R(E) \cap R(E') \neq \emptyset$, necessarily $R(E) \cap R(E') = R(E+E')$.
\end{defn}
\begin{remark}
A consensus is consistent if and only if each consensus set is closed under the $+$ operation.
\end{remark}
The next result generalizes Elkind, Faliszewski and Slinko \cite[Thm 7]{EFS2015}.
\begin{prop}
\label{prop:consist VMP}
Suppose that $\mathcal{K}$ is consistent, $d$ is votewise with respect to a homogeneous norm and
$(\mathcal{K},d)$ satisfies the VMP. Then $\R(\mathcal{K}, d)$ is consistent.
\end{prop}
\begin{proof}
Let $E, E'\in \mathcal{E}$ such that $R(E) \cap R(E') \neq \emptyset$. We show that for all $r$ there are minimizers $m(E, r), m(E', r)$ and $m(E+E', r)$ such that $m(E,r) + m(E',r) = m(E+E', r)$. The result then follows just as in Proposition~\ref{prop:CMP}.
The claim follows easily from the VMP. Because minimization of the distance to $r$ occurs votewise, it
respects the split into $E$ and $E'$.
\end{proof}
\begin{cor}
\label{cor:consist}
If $1\leq p \leq \infty$, then
$\R(\sunam^s, d^p)$ is consistent.
\end{cor}
Recall that Kemeny's rule is consistent when properly considered as a social welfare rule, but not when considered as a social choice rule (this point may potentially confuse readers of \cite{EFS2015}).
\subsection{Continuity}
\label{ss:VMP contin}
After fixing an arbitrary ordering on $L(C)$, each partial social rule on $\simp_\mathbb{Q}(L(C))$ of size $1$ can be identified with
a function on a nonempty subset of the rational points of the $6$-simplex
$\simp_6$, with image contained in $C$. Rules defined at this level of generality are not easy to deal with. Young \cite{Youn1975} introduced the axiom of continuity.
\begin{defn}
\label{def:continuous}
An anonymous rule $R$ is \hl{continuous} if whenever $E = (C, V, \pi)$ and $E' = (C, V', \pi')$ are elections with $R(E) = \{r\}$,
then $R(kE + E') = \{r\}$ for all sufficiently large integers $k$.
\end{defn}
If $R$ is homogeneous, then it is continuous if and only if every vote distribution sufficiently close
to that of $E$ in the $\ell^1$-norm on $\simp(L(C))$ yields the same output as $E$. We do not know of any voting rule seriously considered in the literature that is not continuous.
We now give a slight generalization of a result of Elkind, Faliszewski and Slinko \cite[Thm 6]{EFS2015}.
\begin{prop}
\label{prop:contin VMP}
Suppose that $\mathcal{K}$ is continuous and homogeneous, $d$ is votewise with respect to a continuous homogeneous seminorm and $(\mathcal{K},d)$ satisfies the VMP. Then $\R(\mathcal{K}, d)$ is continuous.
\end{prop}
\begin{proof}
Let $E=(C, V, \pi), E'=(C, V', \pi') \in \mathcal{E}$ with $R(E) = \{r\}$. Thus there is $F=(C, V, \tau) \in \mathcal{K}_r$ such that for all $r'\neq r$ and all $F'\in \mathcal{K}_{r'}$,
$d(E, F) < d(E, F')$. Note that $d(E, F) = N(S)$ where $S$ is the multiset of all $d(\pi_i, \tau_i)$, and similarly $d(E, F') = N(S')$.
Fix $F''=(C, V', \pi'')\in \mathcal{K}_r$. Then $d(kE+E', kF+F'') = N(kS+S'')$, where $S''$ is the multiset of all $d(\pi'_i, \pi''_i)$. Similarly, $d(kE+E', F')=N(kS'+T)$ for some multiset $T$. By homogeneity of the seminorm and its continuity, we have for sufficiently large $k$, with $\varepsilon:=|V'|/k$,
$d(kE+E', kF+F'') = N(S+\varepsilon S'') < N(S'+\varepsilon T ).$
\end{proof}
\begin{cor}
\label{cor:contin}
If $1\leq p \leq \infty$, then $\R(\sunam^s, d^p)$ is continuous.
\end{cor}
\section{Conclusions and future work}
\label{s:conc}
We have clarified the relationship between distance rationalizability and axiomatic properties of social rules, and given improved necessary and sufficient conditions for rules to satisfy several of these axioms. The results show clearly that votewise distances combine better with votewise consensuses (which we define as those satisfying the VMP). The more complicated structure of consensuses such as $\cond^s$ compared to $\sunam^s$ is reflected in the failure of various properties to extend. Of course, VMP is a very strong property, and we
do not know of consensuses other than $\sunam^s$ that satisfy it
generally. However, the VMP and CMP may be satisfied by a particular $(\mathcal{K}, d)$ pair in a given application.
What seems clear is that votewise distances work best with ``votewise consensuses", and Condorcet consensus with tournament distances. Mixing the two yields rules such as Dodgson's and the Voter Replacement Rule which fail to satisfy basic properties such as homogeneity.
We have only a few sufficient conditions for homogeneity of a DR rule. If the rule is not homogeneous, a homogeneous rule similar to the original may be found.
In \cite{Fish1977} a way around the nonhomogeneity of Dodgson's and Young's rules was
found, by using a limiting process to redefine the distance. This is unsatisfactory --- it is not even clear that the limit exists. Presumably using the construction $\R(\overline{\mathcal{K}}, \overline{d})$ may work, but it is not completely clear to us.
Systematic exploration of the space of rules $\R(\sunam^s, d^p)$ where $d$ is a neutral distance on rankings, may well unearth new rules with desirable properties. These rules are already known to be continuous, neutral, anonymous, homogeneous and consistent. Other possibly desirable properties may also be satisfied: Kemeny's rule, which falls into this class, also
satisfies a Condorcet property for social welfare rules \cite{YoLe1978} while scoring rules are also monotonic as social choice rules.
To our knowledge, $\ell^p$ votewise distances with $1 < p < \infty$ have not been
studied systematically. Also, in addition to the discrete, inversion and Spearman metrics on rankings discussed here, there are many interesting distances on rankings yet to be explored. Besides votewise distances, there are many other interesting distances on $\mathcal{V}$ and $\mathcal{P}$, which may yield useful new social rules. Such \hl{distances on multisets} and \hl{statistical distances} are heavily used in many application areas \cite{DeDe2009}.
In a related forthcoming work, the present authors use the framework of distance rationalizability of anonymous and homogeneous rules to study the decisiveness of such rules. We expect other applications, for example by using different groups of symmetries.
\printbibliography
\end{document}
\comment{what can we say in general about DR and quotient distances before moving to simple ones?}
\comment{MW: I don't see why Fishburn 1977 construction works - why does the limit exist? We should define it better, using the equivalence relation somehow. What is the relation with $\R(\overline{\mathcal{K}}, \overline{d})$ if $\R(\mathcal{K}, d)$ is not homogeneous?}
\comment{MW:how to show Condorcet rules are continuous?}
\comment{MW: is there a standard way to make a graphic distance into a metric, and if so does it generalize EFS2012 procedure?}
\begin{remark}
\comment{BH - please fix up this example, or just remove it if too difficult - it is about VMP}
It is not sufficient to assume that the consensus sets have some missing rankings. Take three distinct rankings $r_1, r_2, r_3$, and a consensus $\mathcal{K}_{r_1}$
being something like \verb+"at least 2 ..."+.
\end{remark}
\begin{prop}
\label{prop:short path}
\comment{MW quote result or give proof, use median term}
A quasimetric on $\mathcal{E}$ is a shortest path distance for some nonempty
edge relation on $\mathcal{E}$ if and only if it takes integer values, and
for each $y, x\in \mathcal{E}$ such that $2\leq d(y,x) < \infty$, there is
$z\in \mathcal{E}, z\neq x, z\neq y$ such that $d(y,z) + d(z,x) = d(y,x)$.
\end{prop}
\begin{proof}
Each shortest path quasimetric satisfies the given conditions. For the
converse, suppose that $d$ is a quasimetric on $\mathcal{E}$ satisfying the
given conditions. It follows that for every $y,x$, there are $x_0=x,
\dots, x_k=y$ such that $d(y,x) = k$ and each $d(x_i, x_{i+1}) = 1$.
Define an arc between $E$ and $E'$ if and only if $d(E,E') = 1$. Let
$d'$ be the shortest path distance on the associated digraph. It follows
by induction on the minimal value of $k$ that $d = d'$.
\end{proof}
Another question concerns changing the set of candidates. For some
consensuses that are coherent in a way described below, there is a
natural way to redefine a $1$-consensus $\mathcal{K}$ as an $s$-consensus, for
$s > 1$. We simply construct the ranking from the top. The first element
$a$ is the value of the $1$-consensus, and the next is the value when
the $1$-consensus is applied to the candidate set $C\setminus\{a\}$. We
continue inductively. The coherence condition is that under deletion of
candidate $a$, $\mathcal{K}^C$ maps into $\mathcal{K}^{C\setminus\{a\}}$, for all
$a\in C$ (where $\mathcal{K}^C$ denotes the value of $\mathcal{K}$ with candidate
set $C$). For example, this condition is satisfied by $\cond$. In
general, when this latter condition is not satisfied, we must reduce
$\mathcal{K}$ at each iteration. For example, we obtain $\sunam$ from $\wunam$
by such a procedure.
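The top-down construction just described can be sketched as follows (our own illustration, using the Condorcet winner as the value of the $1$-consensus; the rule is partial, returning \texttt{None} outside its domain):

```python
def condorcet_winner(profile, candidates):
    """The Condorcet winner of the profile, or None if there is none."""
    for c in candidates:
        if all(2 * sum(r.index(c) < r.index(d) for r in profile) > len(profile)
               for d in candidates if d != c):
            return c
    return None

def rank_from_top(profile, candidates, winner_rule):
    """Build a ranking from the top: repeatedly take the 1-consensus value
    and delete it from the candidate set (partial: None = outside domain)."""
    remaining, ranking = list(candidates), []
    while remaining:
        restricted = [tuple(c for c in r if c in remaining) for r in profile]
        w = winner_rule(restricted, remaining)
        if w is None:
            return None
        ranking.append(w)
        remaining.remove(w)
    return ranking

profile = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
assert rank_from_top(profile, ["a", "b", "c"], condorcet_winner) == ["a", "b", "c"]
```

The coherence condition in the text guarantees that deleting the previous winner keeps the restricted election inside the domain at each round.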
\begin{prop}
\label{pr:div}
Let $\mathcal{K}$ be a homogeneous $s$-consensus. Then $\mathcal{K}$ is divisible if and only if $D(\overline{\mathcal{K}})$ is a union of corners of $\simp_\mathbb{Q}$.
\end{prop}
\begin{proof}
Suppose that $\mathcal{K}$ is divisible and let $E\in \mathcal{K}_r$. Write $E=(C, V, \pi)$ as the sum $E_1 + \dots + E_k$ where $k = |V|$ and
each $E_i= (C, \{v_i\}, \pi_i)$ is an election with a single voter. By divisibility, $E = kE'$ where $E' \in \mathcal{K}_r$.
Thus $E'$ must have only a single voter, and so $\pi$ must be a unanimous profile. Conversely, if the domain of $\mathcal{K}$ is a union of
corners, clearly every profile is unanimous and hence each $E\in \mathcal{K}_r$ has the form $kE'$ for $E'\in \mathcal{K}_r$,
whenever $k$ is a divisor of $|V(E)|$.
\end{proof}
\begin{defn}[Single-peaked consensus]
\label{def:sp}
Consider the set of \hl{single-peaked} elections (those for which
there is a fixed ordering of $C$ with respect to which the following is
true: for each voter $v$, there is an ``ideal" element $c_i$ such that
if $k\leq j \leq i$ or $i \leq j \leq k$, $v$ prefers $c_i$ to $c_j$ to
$c_k$).
If $n:=|V|$ is odd, the median of the ideal elements $c_i$ is the consensus winner.
In this case, for each $r\in L_1(C)$, every single-peaked election with median element $r$ also belongs to $\cond_r$. When $n$ is even, it is not clear how to define $\SP$.
\end{defn}
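The claim that the median peak is a Condorcet winner for odd $n$ (Black's median voter theorem) is easy to check numerically; in this sketch (our code) voters rank candidates on an axis by distance to their ideal point, which yields single-peaked preferences:

```python
def single_peaked_profile(ideals, m):
    """Each voter ranks candidates 0..m-1 by distance to an ideal point on
    the axis (ties broken toward the left); such profiles are single-peaked."""
    return [tuple(sorted(range(m), key=lambda c: (abs(c - i), c)))
            for i in ideals]

def is_condorcet_winner(profile, c, m):
    n = len(profile)
    return all(2 * sum(r.index(c) < r.index(d) for r in profile) > n
               for d in range(m) if d != c)

ideals = [0, 1, 1, 3, 4]                    # an odd number of voters
median = sorted(ideals)[len(ideals) // 2]
profile = single_peaked_profile(ideals, 5)
assert median == 1 and is_condorcet_winner(profile, median, 5)
```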
\begin{defn}
\label{def:lorenz}
Consider the set of elections for which $a$ is the winner if and only if
$a$ has at least as many first-place votes as each other candidate, at least as many
first- and second-place votes, and so on (eventually some such inequality must be strict).
We denote this consensus $\scor$. It has been called the \hl{Lorenz consensus}.
\end{defn}
\begin{remark}
$\wunam$ is a refinement of $\scor$ but otherwise there is no refinement relation
between $\scor$ and any of the other consensuses above.
\end{remark}
\comment{proof is wrong, but can we put a condition on consensus to rescue the result?}
\begin{prop}
\label{pr:hillas positive}
If $R=\R(\mathcal{K},d)$ and $d$ is a shortest path distance, then $R = \R(\mathcal{K}^{\epsilon},d)$ for all $\epsilon$. In particular, $R = \R(\mathcal{K}^{\max}(R),d)$.
\end{prop}
\begin{proof}
Consider an election $E$. If $E\in\mathcal{K}^{\epsilon}$ then it has the same winner according to $R$ and $\R(\mathcal{K}^{\epsilon},d)$ because $\mathcal{K}$ is a refinement of $\mathcal{K}^{\epsilon}$. Now suppose that
$E\not\in\mathcal{K}^{\epsilon}$ and consider a ranking $r$. The distance from $E$ to $\mathcal{K}_r$ is finite if and only if there exists a path in the underlying graph of $d$ from $E$ to $\mathcal{K}_r$. Then, by definition of $\mathcal{K}^{\epsilon}$, the last $\lfloor\epsilon\rfloor$ points of this path are in $\mathcal{K}^{\epsilon}$, and all the other points are not. So the distance from $E$ to $\mathcal{K}^{\epsilon}_r$ is at most $d(E,\mathcal{K}_r)-\lfloor\epsilon\rfloor$. Conversely, $d(E,\mathcal{K}_r)-\lfloor\epsilon\rfloor\leq d(E,\mathcal{K}^{\epsilon}_r)$, because one can extend a path from $E$ to $\mathcal{K}^{\epsilon}_r$ to a path from $E$ to $\mathcal{K}_r$ using at most $\lfloor\epsilon\rfloor$ further steps.
Now, for any $E\in\mathcal{K}^{\max}(R)$, there exists $\epsilon$ such that $E\in\mathcal{K}^{\epsilon}$. Then, since $\mathcal{K}^{\epsilon}$ is a refinement of $\mathcal{K}^{\max}(R)$, $E$ has the same winner according to $R$ and to $\R(\mathcal{K}^{\max}(R),d)$.
\end{proof}
\begin{eg}[Quasimetrics]
Quasimetrics occur in situations when there is asymmetry in the cost of changing a vote.
For example, it may be much more costly (for social reasons) to change a profile away from
unanimity than towards it. Some rules, for example Young's rule, are defined in terms of deletion
of voters and for this nonstandard distances are needed. For example,
let $d'_{del}(E,E')$ (respectively $d'_{ins}(E,E')$) be defined as the
minimum number of voters we must delete from (insert into) election $E$
in order to reach election $E'$ (or $+\infty$ if $E'$ can never be
reached). Each is nonstandard and a quasimetric. Their symmetrized versions, which are metrics \cite{EFS2012}, are still nonstandard.
\end{eg}
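A minimal sketch of these two quasimetrics, with elections encoded as multisets of ballots (our encoding):

```python
from collections import Counter

INF = float("inf")

def d_del(E1, E2):
    """Minimum number of voters to delete from E1 to reach E2 (inf if
    E2 can never be reached); elections are lists of ballots."""
    if Counter(E2) - Counter(E1):        # E2 contains a voter E1 lacks
        return INF
    return sum((Counter(E1) - Counter(E2)).values())

def d_ins(E1, E2):
    """Minimum number of voters to insert into E1 to reach E2."""
    return d_del(E2, E1)

E, F = ["ab", "ab", "ba"], ["ab", "ba"]
assert d_del(E, F) == 1 and d_ins(E, F) == INF
assert d_del(F, E) == INF and d_ins(F, E) == 1   # asymmetric: a quasimetric
```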
If $1\leq s'\leq s$, the \hl{$s'$-restriction} of an $s$-ranking $r$ is the $s'$-ranking of the top $s'$ elements of $r$ in the same order.
\comment{Is there a way to define restriction of partial social function nicely? Is it useful?}
We have already seen how one partial social rule of size $s$ can extend another. Sometimes, there are different values of $s$ involved, and we can define a unified rule for all $s$.
\begin{defn}
\label{def:s-restrict}
Let $R$ be a partial social rule of size $1$ with domain $D$. Fix a linear order on $C^*$. For each $s\geq 1$ we define a partial social rule $R_s$ on domain $D_s \subseteq D$. Inductively, if $R_s$ has already been defined on $D_s$, we proceed as follows. Given $E=(C,V,\pi)\in D_s$, we can delete the top element of $C$ from all data, to obtain $E' = (C',V,\pi')$. Let $D_{s+1}$ be the set of all $E$ for which $E'\in D$. For such $E$, let $R_{s+1}(E)$ be the set of all elements of the form $r_s\cdot c$ where $r_s\in R_s(E)$ and $c\in R(E')$.
Informally, we are voting by rounds: we first find the winner, then the winner from the remaining candidates, and so on. The domain naturally shrinks as $s$ increases.
We call $R_{s'}$ the \hl{$s'$-restriction} of $R$.
\comment{MW: still not working! use coherence of consensus?}
\end{defn}
\begin{remark}
We will see some examples in the next section. Note that the domain of the restriction is smaller than the domain of the original, and the output is more detailed.
\end{remark}
\section{Introduction}
Little is known about the morphological instability of the solid-liquid interface when a thin layer of moving fluid separates the developing solid from its surroundings. For example, icicle growth involves a complex moving boundary problem with phase change.
When an icicle grows, a thin water film from the melting snow and ice at the root of the icicle flows down along its surface and refreezes onto it by releasing latent heat of solidification to the ambient air below 0 $^{\circ}$C. During the icicle growth, ice does not grow uniformly, but ring-like ripples are often observed on its surface. \cite{Maeno94} By supplying water continuously from the top of a wooden round stick and of a gutter on an inclined plane set in a cold room below 0 $^{\circ}$C, a ripple pattern similar to that observed on natural icicles is produced on the ice surface. \cite{Matsuda97}
Surprisingly, the distance between two adjacent peaks, for experimentally produced ripples as well as for those on natural icicles, is always on the order of a centimeter.
Theoretical works aimed at explaining the underlying dynamic instability that produces ripples are recent. \cite{Ogawa02, Ueno03, Ueno04, Ueno07, Ueno09}
A stability analysis for the ice-water interface disturbance was developed based on heat flow in the water and atmosphere, and thin film water flow dynamics.
From the initial model, it was found that the ripple wavelength is given by $\lambda=2\pi h_{0}\Pec_{l}/\alpha_{\rm max}$, and that the ripples should move down the icicle. \cite{Ogawa02} Here $h_{0}$ is the mean thickness of the water layer, $\Pec_{l}$ is the P\'{e}clet number, a dimensionless number defined as the ratio of the heat transfer due to the water flow to that due to thermal diffusion in the water layer, and $\alpha_{\rm max}$ is the dimensionless wave number at which the amplification rate of the ice-water interface disturbance attains its maximum.
By considering different boundary conditions from those used in the initial model, a quite different ripple formation mechanism was developed. \cite{Ueno03, Ueno04} A new formula to determine the wavelength of ripples was derived: $\lambda=2\pi (a^{2}h_{0}\Pec_{l}/3)^{1/3}$, which contains two characteristic lengths $h_{0}$ and $a$. \cite{Ueno07} Here $a$ is the capillary length associated with the surface tension of the water-air surface. In the new model, the influence of the shape of the water-air surface on the growth condition of the ice-water interface was taken into account. Therefore, another length scale $a$ was introduced. The new model also predicted that ripples should move upward. The upward ripple translation was already suggested by the observation that many tiny air bubbles were trapped in the upper side of any protruded part of ripples during the icicle growth, and lined up upward. \cite{Maeno94} However, there was no theoretical explanation for the upward ripple translation mechanism.
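An order-of-magnitude check of the formula $\lambda=2\pi (a^{2}h_{0}\Pec_{l}/3)^{1/3}$ is instructive; the numbers below are our illustrative assumptions, not values taken from the experiments:

```python
import math

# Illustrative values (assumptions for this sketch):
a  = 2.7e-3    # capillary length of water, m
h0 = 1.0e-4    # mean thickness of the water film, m
Pe = 10.0      # Peclet number of the film flow (dimensionless)

lam = 2 * math.pi * (a ** 2 * h0 * Pe / 3) ** (1.0 / 3.0)
assert 0.005 < lam < 0.02     # of order one centimeter
```

With these magnitudes the formula indeed reproduces the centimeter-scale spacing observed on natural and laboratory icicles.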
Both models yield one-centimeter scale wavelength, but the translational direction of the ice ripples is opposite. Recently we solved numerically the same governing equations with the same boundary conditions as those used in the initial model. However, the numerically obtained amplification rate of the ice-water interface disturbances showed positive values for all wave numbers, \cite{Ueno09} which means that $\alpha_{\rm max}$ does not exist and there is no mechanism to select a characteristic length.
On the other hand, the analytical results for the amplification rate and the translation velocity of ice ripples obtained in the new model were in good agreement with those numerically calculated. Moreover, there was also good agreement between the theoretical predictions of the dependence of ripple wavelength on slope angles of the inclined plane and water supply rates and our experimental results. Finally, upward ripple motion at about half-speed of the mean growth rate of icicle radius was observed experimentally as theoretically predicted, but downward traveling ripples were not observed. \cite{Ueno09}
\begin{figure}
\begin{center}
\includegraphics[width=12cm,height=12cm,keepaspectratio,clip]{fig1.eps}
\end{center}
\caption{Schematic view of the layer ahead of the water-air surface. Ice is covered with a supercooled water film. The $x$ axis is parallel to the direction of the supercooled water flow and the $y$ axis is normal to it. $T_{la}$ is the temperature at the water-air surface. (a) is the situation in absence of airflow. A linear air temperature distribution $\bar{T}_{a}(y)$ was assumed. (b) is the situation in presence of airflow. $\bar{U}_{a}(x,y)$ and $\bar{T}_{a}(x,y)$ are undisturbed velocity and temperature distributions. $h_{0}$ and $\delta$ are the thickness of the water layer and that of the thermal boundary layer, respectively. $g$ is the gravitational acceleration and $\theta$ is the angle with respect to the horizontal. The flowing supercooled water layer, not to scale, is much thinner than the thickness of the thermal boundary layer.}
\label{fig:ice-water-air}
\end{figure}
In the previous theoretical models, \cite{Ogawa02, Ueno03, Ueno04, Ueno07, Ueno09} ice was covered with a supercooled water layer and there was no airflow around icicles. The latent heat released at the ice-water interface was assumed to be transferred in the air by thermal diffusion through the water layer. As shown in Fig. \ref{fig:ice-water-air} (a), for simplicity, a linear air temperature distribution $\bar{T}_{a}(y)$ was assumed. \cite{Ueno04} From the energy conservation at the ice-water interface and water-air surface, the mean growth rate of icicle radius is given by $\bar{V}=-K_{a}T_{\infty}/(L\delta)$, where $K_{a}$ is the thermal conductivity of air, $L$ is the latent heat per unit volume and $T_{\infty}$ is the air temperature at a distance $\delta$ from the water-air surface. \cite{Ueno04}
The thickness $h_{0}$ of the water layer changes with the water supply rate. \cite{Benjamin57, Landau59} However, since $\bar{V}$ does not contain $h_{0}$, the icicle growth rate does not depend on the water supply rate. \cite{Maeno94, Ueno04}
Since $\bar{V}$ contains the parameters $T_{\infty}$ and $\delta$, the ice growth rate is controlled by the rate of latent heat loss from the water-air surface to the surrounding air.
However, it was not possible to estimate the value of $\bar{V}$ because the physical meaning of the assumed distance $\delta$ was unclear.
Recently, the growth of icicles has been treated as a free boundary problem to find an ideal growing shape for icicles. \cite{Short06} The latent heat transferred from the icicle surface to the surrounding air through the water layer raises the air temperature and hence changes the air density, which is temperature dependent. Since the density decreases with increasing temperature, a buoyancy force arises and warmer air moves up along the ice surface. This effect is restricted to a thin layer ahead of the water-air surface, as shown in Fig. \ref{fig:ice-water-air} (b).
Short $\etal$ emphasized the importance of heat transfer through such a convective boundary layer around icicles, and derived a formula for the ice growth velocity normal to the icicle's surface. The form is the same as $\bar{V}$ mentioned above, but a critical difference is that the length $\delta$ in the paper \cite{Short06} is the boundary layer thickness. Hence, it was possible to estimate the values of $\delta$ and $\bar{V}$ if the value of an unknown parameter in $\delta$ was given. \cite{Short06, Ueno07} It was also suggested that similarity solutions for the coupled Navier-Stokes and heat transfer equations in the Boussinesq approximation can provide the basis for an understanding of the boundary layer. \cite{Short06} In this paper, the value of the unknown parameter in $\delta$ is determined by obtaining the similarity solutions.
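As a rough numerical illustration (a sketch, not taken from the cited papers), $\bar{V}=-K_{a}T_{\infty}/(L\delta)$ can be evaluated with the material constants used later in this paper, taking for $\delta$ an assumed boundary-layer thickness of a few millimetres and an assumed ambient temperature $T_{\infty}=-10$ $^{\circ}$C:

```python
# Rough estimate of the mean growth rate V = -K_a*T_inf/(L*delta).
# K_a and L are the values quoted in this paper; delta = 6.6 mm and
# T_inf = -10 deg C are assumed illustrative values.
K_a = 0.024        # thermal conductivity of air [J/(m K s)]
L = 3.3e8          # latent heat per unit volume [J/m^3]
T_inf = -10.0      # assumed ambient air temperature [deg C]
delta = 6.6e-3     # assumed boundary-layer thickness [m]

V = -K_a * T_inf / (L * delta)                 # [m/s]
print(f"V = {V:.2e} m/s = {V*3.6e6:.2f} mm/h")
```

The result, of the order of 0.4 mm/h, is a plausible magnitude for radial icicle growth.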
Since heat transfer can be greatly influenced by the upward natural convection airflow, the question is whether the enhancement of heat transfer due to convection affects the wavelength of ripples on icicles.
On the other hand, it is known that the wavelengths are almost independent of the length of the icicles and the ambient air temperature.
In order to clarify these problems, in this paper, a linear stability analysis was performed on the ice-water interface disturbance during the ice growth in the presence of a supercooled water film flow and a natural convection airflow.
\section{Theory}
Instead of dealing with the elongated carrot-shaped geometry of the icicle, \cite{Short06} ice growth on a flat gutter on an inclined plane of finite length will be considered. The following theoretical analysis is restricted to two-dimensional vertical cross-sections of the gutter, as shown in Fig. \ref{fig:ice-water-air}. The origin of the $x$ axis is the bottom of the gutter and the $y$ axis is normal to it.
What is new here is that the effect of a natural convection airflow is incorporated into the previous theoretical frameworks \cite{Ueno03, Ueno04, Ueno07, Ueno09} with modifications of some of the boundary conditions, which lets us treat in a unified way the heat flow in the ice, water and air through a disturbed ice-water interface and water-air surface, as well as the thin water film flow and the airflow.
\subsection{Governing equations}
The velocity components in the $x$ and $y$ directions in the water layer, $u_{l}$ and $v_{l}$, are governed by the Navier-Stokes equations driven by gravity and the continuity equation: \cite{Landau59}
\begin{equation}
\frac{\partial u_{l}}{\partial t}
+u_{l}\frac{\partial u_{l}}{\partial x}
+v_{l}\frac{\partial u_{l}}{\partial y}
=-\frac{1}{\rho_{l}}\frac{\partial p_{l}}{\partial x}
+\nu_{l}\left(\frac{\partial^{2}u_{l}}{\partial x^{2}}
+\frac{\partial^{2}u_{l}}{\partial y^{2}}\right)-g\sin\theta,
\label{eq:geq-ul}
\end{equation}
\begin{equation}
\frac{\partial v_{l}}{\partial t}
+u_{l}\frac{\partial v_{l}}{\partial x}
+v_{l}\frac{\partial v_{l}}{\partial y}
=-\frac{1}{\rho_{l}}\frac{\partial p_{l}}{\partial y}
+\nu_{l}\left(\frac{\partial^{2}v_{l}}{\partial x^{2}}
+\frac{\partial^{2}v_{l}}{\partial y^{2}}\right)-g\cos\theta,
\label{eq:geq-vl}
\end{equation}
\begin{equation}
\frac{\partial u_{l}}{\partial x}+\frac{\partial v_{l}}{\partial y}=0,
\label{eq:continuity-water}
\end{equation}
where $\nu_{l}=1.8 \times 10^{-6}$ ${\rm m^{2}/s}$ and $\rho_{l}=1.0 \times 10^{3}$ ${\rm kg/m^{3}}$ are the kinematic viscosity and the density of water, respectively, $g$ is the gravitational acceleration and $p_{l}$ is the pressure in the water. $\theta$ is the angle with respect to the horizontal, as shown in Fig. \ref{fig:ice-water-air}.
On the other hand, employing the Boussinesq approximation, the velocity components in the $x$ and $y$ directions in the air, $u_{a}$ and $v_{a}$, are governed by the following equations driven by buoyancy force and the continuity equation: \cite{Landau59}
\begin{equation}
\frac{\partial u_{a}}{\partial t}
+u_{a}\frac{\partial u_{a}}{\partial x}
+v_{a}\frac{\partial u_{a}}{\partial y}
=-\frac{1}{\rho_{\infty}}\frac{\partial (p_{a}-p_{a0})}{\partial x}
+\nu_{a}\left(\frac{\partial^{2}u_{a}}{\partial x^{2}}
+\frac{\partial^{2}u_{a}}{\partial y^{2}}\right)+g\beta(T_{a}-T_{\infty})\sin\theta,
\label{eq:geq-ua}
\end{equation}
\begin{equation}
\frac{\partial v_{a}}{\partial t}
+u_{a}\frac{\partial v_{a}}{\partial x}
+v_{a}\frac{\partial v_{a}}{\partial y}
=-\frac{1}{\rho_{\infty}}\frac{\partial (p_{a}-p_{a0})}{\partial y}
+\nu_{a}\left(\frac{\partial^{2}v_{a}}{\partial x^{2}}
+\frac{\partial^{2}v_{a}}{\partial y^{2}}\right)+g\beta(T_{a}-T_{\infty})\cos\theta,
\label{eq:geq-va}
\end{equation}
\begin{equation}
\frac{\partial u_{a}}{\partial x}+\frac{\partial v_{a}}{\partial y}=0,
\label{eq:continuity-air}
\end{equation}
where $p_{a}$ is the pressure in air, $p_{a0}$ the static pressure, $\rho_{\infty}$ the density of air at the temperature $T_{\infty}$, $\nu_{a}=1.3 \times 10^{-5}$ ${\rm m^{2}/s}$ and $\beta=3.7 \times 10^{-3}$ ${\rm K^{-1}}$ are, respectively, the kinematic viscosity and the volumetric coefficient of thermal expansion for air.
The continuity equations (\ref{eq:continuity-water}) and (\ref{eq:continuity-air}) can be satisfied by introducing the stream functions $\psi_{l}$ and $\psi_{a}$ such that
$u_{l}=\partial \psi_{l}/\partial y$,
$v_{l}=-\partial \psi_{l}/\partial x$,
$u_{a}=\partial \psi_{a}/\partial y$ and
$v_{a}=-\partial \psi_{a}/\partial x$.
Neglecting viscous dissipation in the energy equation, the equations for the temperatures in the ice $T_{s}$, water $T_{l}$ and air $T_{a}$ are \cite{Landau59}
\begin{equation}
\frac{\partial T_{s}}{\partial t}
=\kappa_{s}\left(\frac{\partial^{2} T_{s}}{\partial x^{2}}
+\frac{\partial^{2} T_{s}}{\partial y^{2}}\right),
\label{eq:geq-Ts}
\end{equation}
\begin{equation}
\frac{\partial T_{l}}{\partial t}
+u_{l}\frac{\partial T_{l}}{\partial x}
+v_{l}\frac{\partial T_{l}}{\partial y}
=\kappa_{l}\left(\frac{\partial^{2} T_{l}}{\partial x^{2}}+\frac{\partial^{2} T_{l}}{\partial y^{2}}\right),
\label{eq:geq-Tl}
\end{equation}
\begin{equation}
\frac{\partial T_{a}}{\partial t}
+u_{a}\frac{\partial T_{a}}{\partial x}
+v_{a}\frac{\partial T_{a}}{\partial y}
=\kappa_{a}\left(\frac{\partial^{2} T_{a}}{\partial x^{2}}
+\frac{\partial^{2} T_{a}}{\partial y^{2}}\right),
\label{eq:geq-Ta}
\end{equation}
where $\kappa_{s}=1.15 \times 10^{-6}$ ${\rm m^{2}/s}$, $\kappa_{l}=1.33 \times 10^{-7}$ ${\rm m^{2}/s}$ and $\kappa_{a}=1.87 \times 10^{-5}$ ${\rm m^{2}/s}$ are the thermal diffusivities of ice, water and air, respectively. Equations (\ref{eq:geq-ua}), (\ref{eq:geq-va}), (\ref{eq:continuity-air}) and (\ref{eq:geq-Ta}) are the new part added to the previous formulation. \cite{Ueno03, Ueno04, Ueno07, Ueno09}
\subsection{Boundary conditions at the ice-water interface and water-air surface}
\subsubsection{Hydrodynamic boundary conditions}
Neglecting the density difference between ice and water,
both velocity components $u_{l}$ and $v_{l}$ at a disturbed ice-water interface, $y=\zeta(t,x)$, must satisfy the no-slip condition:\cite{Ogawa02}
\begin{equation}
u_{l}|_{y=\zeta}=0,
\hspace{1cm}
v_{l}|_{y=\zeta}=0.
\label{eq:bc-ul-vl-zeta}
\end{equation}
The kinematic condition at a disturbed water-air surface, $y=\xi(t,x)$, is \cite{Benjamin57}
\begin{equation}
\frac{\partial \xi}{\partial t}+u_{l}|_{y=\xi}\frac{\partial \xi}{\partial x}=v_{l}|_{y=\xi}.
\label{eq:bc-kinematic-xi}
\end{equation}
The continuity of velocities of water film flow and airflow at the water-air surface is \cite{Yih67}
\begin{equation}
u_{l}|_{y=\xi}=u_{a}|_{y=\xi},
\qquad
v_{l}|_{y=\xi}=v_{a}|_{y=\xi}.
\label{eq:bc-ul-vl-ua-va-xi}
\end{equation}
The condition for continuity of shear stress at the water-air surface is \cite{Craik66, Yih67}
\begin{equation}
\rho_{l}\nu_{l}\left(\frac{\partial u_{l}}{\partial y}\Big|_{y=\xi}
+\frac{\partial v_{l}}{\partial x}\Big|_{y=\xi}\right)
=\rho_{a}\nu_{a}\left(\frac{\partial u_{a}}{\partial y}\Big|_{y=\xi}
+\frac{\partial v_{a}}{\partial x}\Big|_{y=\xi}\right).
\label{eq:bc-shear-stress-xi}
\end{equation}
The difference of the normal stress on either side of the water-air surface must be the capillary force resisting displacement: \cite{Landau59, Craik66, Yih67}
\begin{equation}
-p_{a}|_{y=\xi}+2\rho_{a}\nu_{a}\frac{\partial v_{a}}{\partial y}\Big|_{y=\xi}
-\left(-p_{l}|_{y=\xi}+2\rho_{l}\nu_{l}\frac{\partial v_{l}}{\partial y}\Big|_{y=\xi}\right)
=-\gamma\frac{\partial^{2}\xi}{\partial x^2},
\label{eq:bc-normal-stress-xi}
\end{equation}
where $\gamma=7.6 \times 10^{-2}$ N/m is the surface tension of the water-air surface. The boundary conditions (\ref{eq:bc-ul-vl-zeta}) and (\ref{eq:bc-kinematic-xi}) are the same as those used in the previous papers. \cite{Ogawa02, Ueno03, Ueno04, Ueno07, Ueno09} Since an airflow is taken into account in this paper, the continuity condition for the water film and airflow velocities at the water-air surface is new, and the shear and normal stress conditions are modified from those in the previous papers. \cite{Ogawa02, Ueno03, Ueno04, Ueno07, Ueno09}
\subsubsection{Thermodynamic boundary conditions}
The following boundary conditions are exactly the same as those in the previous papers. \cite{Ueno03, Ueno04, Ueno07, Ueno09}
The continuity condition of temperature is imposed at the ice-water interface:
\begin{equation}
T_{l}|_{y=\zeta}=T_{s}|_{y=\zeta}=T_{sl}+\Delta T_{sl},
\label{eq:Tsl}
\end{equation}
where $T_{sl}$ is the temperature at the flat ice-water interface and $\Delta T_{sl}$ is a deviation from it when the ice-water interface is disturbed.
The energy conservation at the ice-water interface is
\begin{equation}
L\left(\bar{V}+\frac{\partial \zeta}{\partial t} \right)
=K_{s}\frac{\partial T_{s}}{\partial y}\Big|_{y=\zeta}
-K_{l}\frac{\partial T_{l}}{\partial y}\Big|_{y=\zeta},
\label{eq:heatflux-zeta}
\end{equation}
where $L=3.3 \times 10^{8}$ ${\rm J/m^{3}}$ is the latent heat per unit volume, and $K_{s}=2.22$ ${\rm J/(m\,K\,s)}$ and $K_{l}=0.56$ ${\rm J/(m\,K\,s)}$ are thermal conductivities of ice and water, respectively.
The continuity condition of temperature is imposed at the water-air surface:
\begin{equation}
T_{l}|_{y=\xi}=T_{a}|_{y=\xi}=T_{la},
\label{eq:Tla}
\end{equation}
where $T_{la}$ is a temperature at the water-air surface.
The energy conservation at the water-air surface is
\begin{equation}
-K_{l}\frac{\partial T_{l}}{\partial y}\Big|_{y=\xi}
=-K_{a}\frac{\partial T_{a}}{\partial y}\Big|_{y=\xi},
\label{eq:heatflux-xi}
\end{equation}
where $K_{a}=0.024$ ${\rm J/(m\,K\,s)}$ is the thermal conductivity of air.
In the initial model, \cite{Ogawa02} the continuity condition of temperature at the ice-water interface and water-air surface was
$T_{s}|_{y=\zeta}=T_{l}|_{y=\zeta}=T_{sl}$ and $T_{l}|_{y=\xi}=T_{a}|_{y=\xi}=T_{la}+\Delta T_{la}$, where $\Delta T_{la}$ is a deviation from $T_{la}$ when the water-air surface is disturbed.
Instead, we use Eqs. (\ref{eq:Tsl}) and (\ref{eq:Tla}) as in the previous papers. \cite{Ueno03, Ueno04, Ueno07, Ueno09}
The difference between these boundary conditions led to critically different results for the two models mentioned in the Introduction of this paper.
When the chemical potential of water equals that of ice, it seems reasonable to assume the boundary condition $T_{s}|_{y=\zeta}=T_{l}|_{y=\zeta}=T_{sl}$ at the ice-water interface, then $T_{sl}$ is the equilibrium freezing temperature ($T_{sl}=0$ $^{\circ}$C for pure water).
As in the paper, \cite{Butler02} however, the chemical potential of water is not necessarily equal to that of ice, because the ice-water coexistence considered here is expected to be in a non-equilibrium state in the presence of an external disturbance at the water-air surface and a shearing water flow.
The deviation $\Delta T_{sl}$ at the ice-water interface caused by the external disturbance does not disappear by thermal diffusion in the water, because the thermal relaxation time for a temperature fluctuation with a wavelength of about 1 cm, corresponding to the ice ripple wavelength, is much longer than the time defined by the inverse of the shear rate of the water film flow considered here. In other words, the equilibrium state at the ice-water interface is not attained in the presence of a shearing water flow. \cite{Ueno09} We will see in \ref{sec:heatflux} that $\Delta T_{sl}$ depends on the temperature distribution in the water layer subject to the external disturbance at the water-air surface.
On the other hand, since shear stress has a value of zero at the water-air surface, the deviation $\Delta T_{la}$ at the water-air surface disappears by thermal diffusion in the air. Hence, the temperature at the water-air surface remains at $T_{la}$, which will be determined in \ref{sec:solutionTa}.
\subsection{Perturbation}
As shown in Fig. \ref{fig:ice-water-air}, only a one-dimensional perturbation in the $x$ direction of the ice-water interface with a small amplitude $\zeta_{k}$ is considered: $\zeta(t,x)=\zeta_{k}{\rm exp}[\sigma t+i kx]$,
where $k$ is the wave number and $\sigma=\sigma^{(r)}+i \sigma^{(i)}$. Here $\sigma^{(r)}$ and $v_{p} \equiv -\sigma^{(i)}/k$ are the amplification rate and the phase velocity of the perturbation, respectively.
$\xi$, $\psi_{l}$, $\psi_{a}$, $p_{l}$, $p_{a}$, $T_{s}$, $T_{l}$ and $T_{a}$ are separated into unperturbed steady and perturbed parts as follows:
$\xi=h_{0}+\xi'$,
$\psi_{l}=\bar{\psi}_{l}+\psi'_{l}$,
$\psi_{a}=\bar{\psi}_{a}+\psi'_{a}$,
$p_{l}=\bar{P}_{l}+p'_{l}$,
$p_{a}=\bar{P}_{a}+p'_{a}$,
$T_{s}=\bar{T}_{s}+T'_{s}$,
$T_{l}=\bar{T}_{l}+T'_{l}$
and
$T_{a}=\bar{T}_{a}+T'_{a}$.
The corresponding perturbation of the water-air surface with a small amplitude $\xi_{k}$ is
$\xi'(t,x)=\xi_{k}{\rm exp}[\sigma t+i kx]$.
As in the previous papers, \cite{Ueno03, Ueno04, Ueno07, Ueno09} the following calculation is based on a linear stability analysis taking into account only the first order of $\zeta_{k}$. The quasi-stationary approximation is also used: the time dependence of the perturbed part of equations can be neglected because the time evolution of the ice-water interface perturbation is considerably slow compared to that of the above perturbation fields. \cite{Caroli92}
\subsection{Equations of flow and temperature distributions in the air boundary layer}
The natural convection airflow considered here is restricted to a boundary layer regime and to conditions that lead to a similarity solution, that is, to a description of the flow by ordinary differential equations and boundary conditions in terms of a single coordinate $\eta(x,y)$. Under this assumption the unperturbed quantities $\bar{\psi}_{a}(x,y)$ and $\bar{T}_{a}(x,y)$ are expressed as follows:\cite{Gebhart73}
\begin{equation}
\bar{\psi}_{a}=u_{a0}\delta_{0}\bar{F}_{a}(\eta)=\nu_{a}Gr\bar{F}_{a}(\eta), \qquad
\bar{T}_{a*}=\frac{\bar{T}_{a}-T_{\infty}}{T_{la}-T_{\infty}},
\label{eq:basic-psi-T}
\end{equation}
where $\eta=(y-h_{0})/\delta_{0}$, $\delta_{0}=4x/Gr$ and $u_{a0}=\nu_{a}Gr^{2}/(4x)$. Here $Gr=4(Gr_{x}/4)^{1/4}$ is the modified local Grashof number, $Gr_{x}=g\beta\Delta T_{a}x^{3}/\nu_{a}^{2}$ being the local Grashof number. $\Delta T_{a}=T_{la}-T_{\infty}$ is the temperature difference between the water-air surface and the ambient air temperature far away. $x$ is the distance from the bottom of the gutter.
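A quick numerical check of these definitions (a sketch, not from the original papers), using the air properties given above with $\Delta T_{a}=10$ $^{\circ}$C; it reproduces the values of $\delta_{0}$ and $u_{a0}$ quoted later in Sec. \ref{sec:linearization}:

```python
# Boundary-layer scales from the definitions above:
# Gr_x = g*beta*dT*x^3/nu_a^2,  Gr = 4*(Gr_x/4)^(1/4),
# delta_0 = 4x/Gr,  u_a0 = nu_a*Gr^2/(4x).
g, beta, nu_a = 9.8, 3.7e-3, 1.3e-5
dT = 10.0                                   # Delta T_a = T_la - T_inf [K]

def scales(x):
    Gr_x = g * beta * dT * x**3 / nu_a**2   # local Grashof number
    Gr = 4.0 * (Gr_x / 4.0) ** 0.25         # modified local Grashof number
    return 4.0 * x / Gr, nu_a * Gr**2 / (4.0 * x)   # (delta_0 [m], u_a0 [m/s])

for x in (0.1, 1.0):
    d0, ua0 = scales(x)
    print(f"x = {x:.1f} m: delta_0 = {d0*1e3:.1f} mm, u_a0 = {ua0:.2f} m/s")
```

This gives $\delta_{0}\approx 3.7$ mm and $u_{a0}\approx 0.38$ m/s at $x=0.1$ m, and $\delta_{0}\approx 6.6$ mm and $u_{a0}\approx 1.2$ m/s at $x=1.0$ m.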
Applying the boundary layer approximation to the Boussinesq equations (\ref{eq:geq-ua}), (\ref{eq:geq-va}), (\ref{eq:continuity-air}) and (\ref{eq:geq-Ta}), $\bar{\psi}_{a}(x,y)$ and $\bar{T}_{a}(x,y)$ are governed by \cite{Landau59, Schlichting99}
\begin{equation}
\frac{\partial \bar{\psi}_{a}}{\partial y}\frac{\partial^{2}\bar{\psi}_{a}}{\partial x \partial y}
-\frac{\partial \bar{\psi}_{a}}{\partial x}\frac{\partial^{2} \bar{\psi}_{a}}{\partial y^{2}}
=\nu_{a}\frac{\partial^{3}\bar{\psi}_{a}}{\partial y^{3}}+g\beta(\bar{T}_{a}-T_{\infty})\sin\theta,
\label{eq:geq-ua-basic}
\end{equation}
\begin{equation}
\frac{\partial \bar{\psi}_{a}}{\partial y}\frac{\partial \bar{T}_{a}}{\partial x}
-\frac{\partial \bar{\psi}_{a}}{\partial x}\frac{\partial \bar{T}_{a}}{\partial y}
=\kappa_{a}\frac{\partial^{2} \bar{T}_{a}}{\partial y^{2}}.
\label{eq:geq-Ta-basic}
\end{equation}
When Eq. (\ref{eq:basic-psi-T}) is substituted into Eqs. (\ref{eq:geq-ua-basic}) and (\ref{eq:geq-Ta-basic}), the dimensionless functions $\bar{F}_{a}$ and $\bar{T}_{a*}$ are obtained from the two coupled ordinary differential equations: \cite{Landau59}
\begin{equation}
\frac{d^{3}\bar{F}_{a}}{d\eta^{3}}
=-3\bar{F}_{a}\frac{d^{2}\bar{F}_{a}}{d\eta^{2}}
+2\left(\frac{d\bar{F}_{a}}{d\eta}\right)^{2}-\bar{T}_{a*}\sin\theta,
\label{eq:geq-basicFa}
\end{equation}
\begin{equation}
\frac{d^{2}\bar{T}_{a*}}{d\eta^{2}}
=-3Pr_{a}\bar{F}_{a}\frac{d\bar{T}_{a*}}{d\eta},
\label{eq:geq-basicTa}
\end{equation}
where $Pr_{a}=\nu_{a}/\kappa_{a}=0.7$ is the Prandtl number of air.
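Equations (\ref{eq:geq-basicFa}) and (\ref{eq:geq-basicTa}), together with the boundary conditions adopted later in this paper ($\bar{F}_{a}|_{\eta=0}=0$ and $d\bar{F}_{a}/d\eta|_{\eta=0}=0$ under the approximation discussed in Sec. \ref{sec:linearization}, plus $d\bar{F}_{a}/d\eta|_{\eta=\infty}=0$, $\bar{T}_{a*}|_{\eta=0}=1$ and $\bar{T}_{a*}|_{\eta=\infty}=0$), constitute the classical free-convection similarity problem. The following minimal sketch solves it numerically for a vertical wall, $\theta=\pi/2$; the use of scipy.integrate.solve\_bvp is an assumed tool choice, not the authors' code:

```python
import numpy as np
from scipy.integrate import solve_bvp

Pr_a, theta = 0.7, np.pi / 2          # Prandtl number of air, vertical wall

# State y = [F, F', F'', T, T'] with
#   F''' = -3 F F'' + 2 (F')^2 - T sin(theta),   T'' = -3 Pr F T'
def rhs(eta, y):
    F, dF, d2F, T, dT = y
    return np.vstack([dF, d2F,
                      -3*F*d2F + 2*dF**2 - T*np.sin(theta),
                      dT, -3*Pr_a*F*dT])

# F(0) = 0, F'(0) = 0, F'(inf) = 0, T(0) = 1, T(inf) = 0
def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1], ya[3] - 1.0, yb[3]])

eta = np.linspace(0.0, 8.0, 81)       # eta = 8 approximates infinity
y0 = np.zeros((5, eta.size))
y0[3] = np.exp(-eta)                  # decaying temperature guess
y0[4] = -np.exp(-eta)
sol = solve_bvp(rhs, bc, eta, y0)

G_bar = -sol.sol(0.0)[4]              # \bar{G}_{a*} = -dT/deta at eta = 0
print(f"converged: {sol.status == 0},  -dT/deta(0) = {G_bar:.3f}")
```

The resulting $-d\bar{T}_{a*}/d\eta|_{\eta=0}\approx 0.50$ for $Pr_{a}=0.7$ is the quantity $\bar{G}_{a*}$ that appears below in Eqs. (\ref{eq:geq-fa}) and (\ref{eq:geq-Ha}).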
We assume stream function disturbance $\psi'_{a}$ and temperature disturbance $T'_{a}$ in the air to be of the form:
\begin{equation}
\psi'_{a}=u_{a0}f_{a}(\eta)\xi_{k}{\rm exp}[\sigma t +ikx], \qquad
T'_{a}=H_{a}(\eta)\bar{G}_{a}\xi_{k}{\rm exp}[\sigma t +ikx],
\label{eq:pert-psi-T}
\end{equation}
where $f_{a}$ and $H_{a}$ are the dimensionless disturbance amplitude functions, and $\bar{G}_{a} \equiv -\partial \bar{T}_{a}/\partial y|_{y=h_{0}}$.
When $\psi_{a}=\bar{\psi}_{a}+\psi'_{a}$ and $T_{a}=\bar{T}_{a}+T'_{a}$ are substituted into the complete equations (\ref{eq:geq-ua}), (\ref{eq:geq-va}) and (\ref{eq:geq-Ta}), we obtain the differential equations for the functions $f_{a}$ and $H_{a}$:
\begin{eqnarray}
\frac{d^{4}f_{a}}{d\eta^4}
&=&-3\bar{F}_{a}\frac{d^{3}f_{a}}{d\eta^{3}}
+\left(2\mu_{a}^{2}+i\mu_{a}Gr\frac{d\bar{F}_{a}}{d\eta}\right)\frac{d^{2}f_{a}}{d\eta^{2}}
+\left\{\mu_{a}^{2}\left(3\bar{F}_{a}+2\eta\frac{d\bar{F}_{a}}{d\eta}\right)
+\frac{d^{2}\bar{F}_{a}}{d\eta^{2}}\right\}\frac{df_{a}}{d\eta}\nonumber \\
&& -\left\{\mu_{a}^{4}+\mu_{a}^{2}(6+i\mu_{a}Gr)\frac{d\bar{F}_{a}}{d\eta}
+(2+i\mu_{a}Gr)\frac{d^{3}\bar{F}_{a}}{d\eta^{3}}\right\}f_{a} \nonumber \\
&& -\bar{G}_{a*}\frac{dH_{a}}{d\eta}\sin\theta+i\mu_{a}\bar{G}_{a*}H_{a}\cos\theta,
\label{eq:geq-fa}
\end{eqnarray}
\begin{eqnarray}
\frac{d^{2}H_{a}}{d\eta^{2}}
&=&-3Pr_{a}\bar{F}_{a}\frac{dH_{a}}{d\eta}
+\left\{\mu_{a}^{2}+Pr_{a}(-1+i\mu_{a}Gr)\frac{d\bar{F}_{a}}{d\eta}\right\}H_{a}\nonumber \\
&&-\frac{Pr_{a}}{\bar{G}_{a*}}(2+i\mu_{a}Gr)\frac{d\bar{T}_{a*}}{d\eta}f_{a},
\label{eq:geq-Ha}
\end{eqnarray}
where $\mu_{a}=k\delta_{0}$ is the dimensionless wave number normalized by the length $\delta_{0}$, and $\bar{G}_{a*} \equiv -d\bar{T}_{a*}/d\eta|_{\eta=0}$, whose value depends on the Prandtl number. In the stability analysis, \cite{Gebhart73} $\bar{v}_{a}=-\partial\bar{\psi}_{a}/\partial x$ and $\partial\bar{T}_{a*}/\partial x$ were neglected because the derivatives of the unperturbed field quantities $\bar{F}_{a}$ and $\bar{T}_{a*}$ with respect to $x$ were assumed to be much smaller than those with respect to $y$. In this paper, however, these quantities in Eqs. (\ref{eq:geq-fa}) and (\ref{eq:geq-Ha}) are retained because if we neglect them, $\sigma^{(r)}$ and $v_{p}$ do not converge to zero as $\mu_{a}$ approaches zero.
\subsection{Equations of flow and temperature distributions in the water layer}
The stream function disturbance $\psi'_{l}$ and temperature disturbance $T'_{l}$ in the water layer are assumed to be of the form: \cite{Ueno03, Ueno04, Ueno07, Ueno09}
\begin{equation}
\psi'_{l}=u_{l0}f_{l}(y_{*})\zeta_{k}{\rm exp}[\sigma t +ikx], \qquad
T'_{l}=H_{l}(y_{*})\bar{G}_{l}\zeta_{k}{\rm exp}[\sigma t +ikx],
\label{eq:pert-psil-Tl}
\end{equation}
where $y_{*}=y/h_{0}$, and $f_{l}$ and $H_{l}$ are the dimensionless disturbance amplitude functions. It is also assumed that the unperturbed temperature distribution in the water layer is linear, then $\bar{G}_{l} \equiv -\partial \bar{T}_{l}/\partial y|_{y=h_{0}}=(T_{sl}-T_{la})/h_{0}$.
When $\psi_{l}=\bar{\psi}_{l}+\psi'_{l}$ is substituted into Eqs. (\ref{eq:geq-ul}) and (\ref{eq:geq-vl}), the perturbed part yields the following Orr-Sommerfeld equation for $f_{l}$: \cite{Ueno03, Ueno09}
\begin{equation}
\frac{d^{4}f_{l}}{dy_{*}^{4}}
=\left(2\mu_{l}^{2}+i\mu_{l} \Rey_{l}\bar{U}_{l*}\right)\frac{d^{2}f_{l}}{dy_{*}^{2}}
-\left\{\mu_{l}^{4}+i\mu_{l} \Rey_{l}\left(\mu_{l}^{2}\bar{U}_{l*}+\frac{d^{2}\bar{U}_{l*}}{dy_{*}^{2}}\right)\right\}f_{l},
\label{eq:geq-fl}
\end{equation}
where $\mu_{l}=kh_{0}$ is the dimensionless wave number normalized by the length $h_{0}$, $\bar{U}_{l*}(y_{*})$ is the dimensionless velocity distribution in the water layer in the unperturbed state, and $\Rey_{l}\equiv u_{l0}h_{0}/\nu_{l}=3Q/(2l\nu_{l})$ is the Reynolds number. Here $Q/l$ is the water supply rate per unit width.
When $T_{l}=\bar{T}_{l}+T'_{l}$ are substituted into (\ref{eq:geq-Tl}), the perturbed part yields the equation for $H_{l}$: \cite{Ueno03, Ueno09}
\begin{equation}
\frac{d^{2}H_{l}}{dy_{*}^{2}}
=(\mu_{l}^{2}+i\mu_{l} \Pec_{l}\bar{U}_{l*})H_{l}
-i\mu_{l} \Pec_{l}\frac{d\bar{T}_{l*}}{d y_{*}}f_{l},
\label{eq:geq-Hl}
\end{equation}
where $\bar{T}_{l*}(y_{*})\equiv (\bar{T}_{l}(y_{*})-T_{sl})/(T_{sl}-T_{la})=-y_{*}$ is the dimensionless temperature distribution in the water layer in the unperturbed state, and $\Pec_{l}\equiv u_{l0}h_{0}/\kappa_{l}=3Q/(2l\kappa_{l})$ is the ${\rm P\acute{e}clet}$ number.
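For orientation (an illustrative sketch, not from the original papers), the Reynolds and ${\rm P\acute{e}clet}$ numbers defined above can be evaluated for the typical water supply rates $Q/l=10 \sim 100$ [(ml/h)/cm] quoted later in Sec. \ref{sec:linearization}:

```python
# Re_l = 3Q/(2 l nu_l) and Pe_l = 3Q/(2 l kappa_l) for the water film.
nu_l = 1.8e-6        # kinematic viscosity of water [m^2/s]
kappa_l = 1.33e-7    # thermal diffusivity of water [m^2/s]

def ml_per_h_per_cm(q):
    """Convert a supply rate per width from (ml/h)/cm to m^2/s."""
    return q * 1e-6 / 3600.0 / 1e-2

for q in (10.0, 100.0):
    Ql = ml_per_h_per_cm(q)
    Re = 1.5 * Ql / nu_l
    Pe = 1.5 * Ql / kappa_l
    print(f"Q/l = {q:5.1f} (ml/h)/cm: Re_l = {Re:.2f}, Pe_l = {Pe:.1f}")
```

The Reynolds number stays of order unity or below, consistent with a laminar film, while the ${\rm P\acute{e}clet}$ number reaches a few tens.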
\subsection{\label{sec:linearization}Linearization of boundary conditions}
First, linearizing Eq. (\ref{eq:bc-ul-vl-zeta}) at $y=0$ yields, to the first order in $\zeta_{k}$,
\begin{equation}
\frac{df_{l}}{dy_{*}}\Big|_{y_{*}=0}+\frac{d\bar{U}_{l*}}{dy_{*}}\Big|_{y_{*}=0}=0, \qquad
f_{l}|_{y_{*}=0}=0.
\label{eq:ul-vl-h0}
\end{equation}
From the linearization of Eq. (\ref{eq:bc-kinematic-xi}) at $y=h_{0}$, the relation between the amplitude of the water-air surface and that of the ice-water interface is obtained: $\xi_{k}=-(f_{l}|_{y_{*}=1}/\bar{U}_{l*}|_{y_{*}=1})\zeta_{k}$. \cite{Ueno03, Ueno04, Ueno07, Ueno09}
Second, linearizing Eq. (\ref{eq:bc-ul-vl-ua-va-xi}) at $y=h_{0}$ yields, to the zeroth order in $\xi_{k}$,
\begin{equation}
\frac{d\bar{F}_{a}}{d\eta}\Big|_{\eta=0}=\frac{u_{l0}}{u_{a0}}\bar{U}_{l*}|_{y_{*}=1},
\qquad
\bar{F}_{a}|_{\eta=0}=0,
\label{eq:ua-va-h0}
\end{equation}
and to the first order in $\xi_{k}$,
\begin{eqnarray}
\frac{df_{a}}{d\eta}\Big|_{\eta=0}=-\frac{d^{2}\bar{F}_{a}}{d\eta^{2}}\Big|_{\eta=0}
+\frac{\delta_{0}}{h_{0}}\frac{u_{l0}}{u_{a0}}
\left\{\frac{d\bar{U}_{l*}}{dy_{*}}\Big|_{y_{*}=1}-\left(\frac{df_{l}}{dy_{*}}\Big|_{y_{*}=1}\Big/f_{l}|_{y_{*}=1}\right)
\bar{U}_{l*}|_{y_{*}=1}\right\},
\nonumber \\
f_{a}|_{\eta=0}=-\frac{u_{l0}}{u_{a0}}\bar{U}_{l*}|_{y_{*}=1}.
\label{eq:ua-va-xi}
\end{eqnarray}
The values of $u_{a0}$ are 0.38 m/s at $x=0.1$ m and 1.2 m/s at $x=1.0$ m for $\Delta T_{a}=10$ $^{\circ}$C.
On the other hand, the surface velocity of the water layer,
$u_{l0}=[g\sin\theta/(2\nu_{l})]^{1/3}[3Q/(2l)]^{2/3}$,
is about $0.78 \sim 3.62$ cm/s for typical values of $Q/l=10 \sim 100$ [(ml/h)/cm] and $\theta=\pi/2$. It should be noted that the velocity of the water film flow is much less than that of airflow. Therefore, the first equation in (\ref{eq:ua-va-h0}) and the second equation in (\ref{eq:ua-va-xi}) are approximated as $d\bar{F}_{a}/d\eta|_{\eta=0}=0$ and $f_{a}|_{\eta=0}=0$, respectively.
Even though a thin layer of water flows down the ice surface, the no-slip condition at the water-air surface of the flowing water film is nearly satisfied for the velocities:
$\bar{u}_{a}=\partial \bar{\psi}_{a}/\partial y$,
$\bar{v}_{a}=-\partial \bar{\psi}_{a}/\partial x$ and
$v'_{a}=-\partial \psi'_{a}/\partial x$.
The values of $\delta_{0}$ are 3.7 mm at $x=0.1$ m and 6.6 mm at $x=1.0$ m for $\Delta T_{a}=10$ $^{\circ}$C.
On the other hand, the mean thickness of the water layer,
$h_{0}=[3\nu_{l}/(g\sin\theta)Q/l]^{1/3}$,
is about $53 \sim 115$ $\mu$m for values of $Q/l=10 \sim 100$ [(ml/h)/cm] and $\theta=\pi/2$.
Since $\delta_{0}/h_{0} \gg 1$, the second term on the right hand side of the first equation in Eq. (\ref{eq:ua-va-xi}) cannot be neglected. Hence, the no-slip condition cannot be applied to
$u'_{a}=\partial \psi'_{a}/\partial y$
at the water-air surface of the flowing water film.
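The film scales quoted above follow directly from the formulas for $u_{l0}$ and $h_{0}$; a short numerical check (a sketch, not the authors' code) for $\theta=\pi/2$:

```python
# Surface velocity u_l0 = [g sin(theta)/(2 nu_l)]^(1/3) [3Q/(2l)]^(2/3)
# and mean film thickness h_0 = [3 nu_l/(g sin(theta)) * Q/l]^(1/3).
import math

g, nu_l, theta = 9.8, 1.8e-6, math.pi / 2

def film(Ql):                      # Ql: water supply rate per unit width [m^2/s]
    u_l0 = (g*math.sin(theta)/(2*nu_l))**(1/3) * (1.5*Ql)**(2/3)
    h_0 = (3*nu_l/(g*math.sin(theta)) * Ql)**(1/3)
    return u_l0, h_0

for q in (10.0, 100.0):            # Q/l in (ml/h)/cm
    Ql = q * 1e-6 / 3600.0 / 1e-2  # convert to m^2/s
    u, h = film(Ql)
    print(f"Q/l = {q:5.1f}: u_l0 = {u*100:.2f} cm/s, h_0 = {h*1e6:.0f} um")
```

This reproduces $u_{l0}\approx 0.78 \sim 3.6$ cm/s and $h_{0}\approx 53 \sim 115$ $\mu$m for $Q/l=10 \sim 100$ [(ml/h)/cm].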
Third, linearizing Eq. (\ref{eq:bc-shear-stress-xi}) at $y=h_{0}$ yields, to the zeroth order in $\xi_{k}$,
\begin{equation}
\frac{d\bar{U}_{l*}}{dy_{*}}\Big|_{y_{*}=1}
=\frac{\rho_{a}\nu_{a}(u_{a0}d^{2}\bar{F}_{a}/d\eta^{2}|_{\eta=0})/\delta_{0}}{\rho_{l}\nu_{l}u_{l0}/h_{0}}
\equiv R_{\tau_{al}},
\label{eq:shear-stress-h0}
\end{equation}
and to the first order in $\xi_{k}$,
\begin{eqnarray}
\frac{d^{2}f_{l}}{dy_{*}^{2}}\Big|_{y_{*}=1}
+\left(-\frac{d^{2}\bar{U}_{l*}}{dy_{*}^{2}}\Big|_{y_{*}=1}\Big/\bar{U}_{l*}|_{y_{*}=1}+\mu_{l}^{2}\right)f_{l}|_{y_{*}=1}
\nonumber \\
=-\frac{\rho_{a}\nu_{a}}{\rho_{l}\nu_{l}}\left(\frac{h_{0}}{\delta_{0}}\right)^{2}\frac{u_{a0}}{u_{l0}}
\left\{\frac{d^{2}f_{a}}{d\eta^{2}}\Big|_{\eta=0}+\frac{d^{3}\bar{F}_{a}}{d\eta^{3}}\Big|_{\eta=0}+\mu_{a}^{2}f_{a}|_{\eta=0}\right\}
f_{l}|_{y_{*}=1}/\bar{U}_{l*}|_{y_{*}=1},
\label{eq:shear-stress-xi}
\end{eqnarray}
where $R_{\tau_{al}}$ on the right hand side of Eq. (\ref{eq:shear-stress-h0}) can be regarded as approximately the ratio of the shear stress of the airflow at the water-air surface to that of the water film flow at the ice-water interface.
It is assumed that
$p'_{l}=\rho_{l}u_{l0}^{2}\Pi_{l}(y_{*})(\zeta_{k}/h_{0}){\rm exp}[\sigma t+i kx]$ and
$p'_{a}=\rho_{a}u_{a0}^{2}\Pi_{a}(\eta)(\xi_{k}/\delta_{0}){\rm exp}[\sigma t+i kx]$, where $\Pi_{l}$ and $\Pi_{a}$ are dimensionless amplitudes.
Substituting these forms into Eq. (\ref{eq:bc-normal-stress-xi}) and linearizing them at $y=h_{0}$ yields, to the first order in $\xi_{k}$,
\begin{eqnarray}
\frac{d^{3}f_{l}}{dy_{*}^{3}}\Big|_{y_{*}=1}
-(i\mu_{l} \Rey_{l}\bar{U}_{l*}|_{y_{*}=1}+3\mu_{l}^{2})\frac{df_{l}}{dy_{*}}\Big|_{y_{*}=1}
+i\left(\mu_{l}\Rey_{l}\frac{d\bar{U}_{l*}}{dy_{*}}\Big|_{y_{*}=1}+\alpha/\bar{U}_{l*}|_{y_{*}=1}\right)f_{l}|_{y_{*}=1}
\nonumber \\
=-\frac{\rho_{a}\nu_{a}}{\rho_{l}\nu_{l}}\left(\frac{h_{0}}{\delta_{0}}\right)^{3}\frac{u_{a0}}{u_{l0}}
\left\{
\frac{d^{3}f_{a}}{d\eta^{3}}\Big|_{\eta=0}
-\left(i\mu_{a}Gr\frac{d\bar{F}_{a}}{d\eta}\Big|_{\eta=0}+3\mu_{a}^{2}\right)\frac{df_{a}}{d\eta}\Big|_{\eta=0} \right.
\nonumber \\
\left.
+i\mu_{a}Gr\frac{d^{2}\bar{F}_{a}}{d\eta^{2}}\Big|_{\eta=0}f_{a}|_{\eta=0}+\bar{G}_{a*}H_{a}|_{\eta=0}\sin\theta
\right\}
f_{l}|_{y_{*}=1}/\bar{U}_{l*}|_{y_{*}=1},
\label{eq:normal-stress-xi}
\end{eqnarray}
where
\begin{equation}
\alpha=2(\cot\theta)\mu_{l}+\frac{2}{\sin\theta}\left(\frac{a}{h_{0}}\right)^{2}\mu_{l}^{3},
\label{eq:alpha}
\end{equation}
represents a parameter relevant to the restoring force due to the surface tension and gravity acting on the water-air surface. \cite{Ueno03, Benjamin57} Here $a=[\gamma/(\rho_{l}g)]^{1/2}$ is the capillary length associated with the surface tension $\gamma$ of the water-air surface. \cite{Landau59}
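A small numerical sketch (not from the original papers) of the capillary length $a$ and the restoring-force parameter $\alpha$ of Eq. (\ref{eq:alpha}); the value $h_{0}=53$ $\mu$m is a typical film thickness quoted above, and $\mu_{l}=0.03$ is an assumed illustrative wave number:

```python
# Capillary length a = sqrt(gamma/(rho_l*g)) and the parameter
# alpha(mu_l) = 2 cot(theta) mu_l + (2/sin(theta)) (a/h_0)^2 mu_l^3
# for a vertical wall (theta = pi/2) and an assumed h_0 = 53 um.
import math

gamma, rho_l, g = 7.6e-2, 1.0e3, 9.8
theta, h_0 = math.pi / 2, 53e-6

a = math.sqrt(gamma / (rho_l * g))          # capillary length [m]

def alpha(mu_l):
    return (2*math.cos(theta)/math.sin(theta))*mu_l \
        + (2/math.sin(theta))*(a/h_0)**2*mu_l**3

print(f"a = {a*1e3:.2f} mm, alpha(mu_l=0.03) = {alpha(0.03):.3f}")
```

Since $a\approx 2.8$ mm while $h_{0}$ is tens of micrometres, the surface-tension term dominates $\alpha$ on a vertical wall despite the $\mu_{l}^{3}$ factor.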
From the boundary condition (\ref{eq:shear-stress-h0}) and the no-slip condition $\bar{U}_{l*}|_{y_{*}=0}=0$, the velocity profile in the water layer is given by $\bar{U}_{l*}=y_{*}^{2}+(R_{\tau_{al}}-2)y_{*}$. However, $R_{\tau_{al}}$ is extremely small because both the ratio of the viscosity of air to that of water,
$\rho_{a}\nu_{a}/\rho_{l}\nu_{l}$,
and the ratio $h_{0}/\delta_{0}$ are much smaller than 1.
Therefore, the shear stress-free condition, $d\bar{U}_{l*}/dy_{*}|_{y{*}=1}=0$, holds at the unperturbed water-air surface. Thus the velocity profile in the water layer is still the half-parabolic form, $\bar{U}_{l*}=y_{*}^{2}-2y_{*}$, so that the values of $\bar{U}_{l*}|_{y_{*}=1}=-1$, $d\bar{U}_{l*}/dy_{*}|_{y_{*}=0}=-2$ and $d^{2}\bar{U}_{l*}/dy_{*}^{2}|_{y_{*}=1}=2$ are used in the above boundary conditions.
Similarly, since $\rho_{a}\nu_{a}/\rho_{l}\nu_{l} \ll 1$ and $h_{0}/\delta_{0} \ll 1$ on the right hand side of Eqs. (\ref{eq:shear-stress-xi}) and (\ref{eq:normal-stress-xi}), the influence of the perturbed part of shear and normal stresses due to airflow on the water film flow at the water-air surface is negligible. Therefore, the boundary conditions for the shear and normal stresses at the perturbed water-air surface become the same as those used in the previous papers. \cite{Ueno03, Ueno04, Ueno07, Ueno09}
Finally, linearizing Eq. (\ref{eq:Tla}) at $y=h_{0}$ yields, to the zeroth order in $\xi_{k}$,
$\bar{T}_{l*}|_{y_{*}=1}=-1$,
$\bar{T}_{a*}|_{\eta=0}=1$,
and to the first order in $\xi_{k}$,
\begin{equation}
H_{l}|_{y_{*}=1}+f_{l}|_{y_{*}=1}/\bar{U}_{l*}|_{y_{*}=1}=0, \qquad
H_{a}|_{\eta=0}=1.
\label{eq:Tla-xi}
\end{equation}
Linearizing Eq. (\ref{eq:heatflux-xi}) at $y=h_{0}$ yields, to the first order in $\xi_{k}$,
\begin{equation}
\frac{dH_{l}}{dy_{*}}\Big|_{y_{*}=1}
-\frac{h_{0}}{\delta_{0}}\left(-\frac{dH_{a}}{d\eta}\Big|_{\eta=0}\right)f_{l}|_{y_{*}=1}/\bar{U}_{l*}|_{y_{*}=1}=0.
\label{eq:heatflux-xi-h0}
\end{equation}
It is convenient to define
\begin{equation}
G'^{(r)}_{a}\equiv \frac{h_{0}}{\delta_{0}}\left(-\frac{dH_{a}^{(r)}}{d\eta}\Big|_{\eta=0}\right),\qquad
G'^{(i)}_{a}\equiv \frac{h_{0}}{\delta_{0}}\left(-\frac{dH_{a}^{(i)}}{d\eta}\Big|_{\eta=0}\right),
\label{eq:Gar-Gai}
\end{equation}
which represent the real and imaginary parts of the perturbed part of the air temperature gradient at the water-air surface.
It should be noted that Eq. (\ref{eq:geq-fl}) can be independently solved with the boundary conditions (\ref{eq:ul-vl-h0}), (\ref{eq:shear-stress-xi}) and (\ref{eq:normal-stress-xi}) without considering the influence of airflow. Therefore, $f_{l}$ in Eqs. (\ref{eq:Tla-xi}) and (\ref{eq:heatflux-xi-h0}) takes the same form as that in the absence of airflow.
The perturbed part of temperature in the water layer is affected by the airflow through the perturbed part of the air temperature gradient in Eq. (\ref{eq:heatflux-xi-h0}).
\subsection{Dispersion relation}
From the perturbed part of Eqs. (\ref{eq:Tsl}) and (\ref{eq:heatflux-zeta}), the dispersion relation for the perturbation of the ice-water interface is given by \cite{Ueno03, Ueno04, Ueno07, Ueno09}
\begin{equation}
\sigma=\frac{\bar{V}}{h_{0}}\left\{-\frac{dH_{l}}{dy_{*}}\Big|_{y_{*}=0}+K^{s}_{l}\mu_{l} (H_{l}|_{y_{*}=0}-1)\right\},
\label{eq:dispersion}
\end{equation}
where $K^{s}_{l}=K_{s}/K_{l}=3.96$ is the ratio of the thermal conductivity of ice to that of water. The real and imaginary parts of Eq. (\ref{eq:dispersion}) give the dimensionless amplification rate $\sigma_{*}^{(r)}\equiv \sigma^{(r)}/(\bar{V}/h_{0})$ and the dimensionless phase velocity $v_{p*}\equiv -\sigma^{(i)}/(k\bar{V})$, respectively,
\begin{equation}
\sigma_{*}^{(r)}=-\frac{dH_{l}^{(r)}}{dy_{*}}\Big|_{y_{*}=0}+K^{s}_{l}\mu_{l}(H_{l}^{(r)}|_{y_{*}=0}-1),
\label{eq:amplificationrate}
\end{equation}
\begin{equation}
v_{p*}=-\frac{1}{\mu_{l}}\left(-\frac{dH_{l}^{(i)}}{dy_{*}}\Big|_{y_{*}=0}+K^{s}_{l}\mu_{l}H_{l}^{(i)}|_{y_{*}=0}\right),
\label{eq:phasevelocity}
\end{equation}
where $H_{l}^{(r)}$ and $H_{l}^{(i)}$ are the real and imaginary parts of $H_{l}$.
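Once a solution $H_{l}$ of Eq. (\ref{eq:geq-Hl}) is available, extracting $\sigma_{*}^{(r)}$ and $v_{p*}$ from Eq. (\ref{eq:dispersion}) is simple complex arithmetic, as the following sketch shows; the boundary values below are hypothetical placeholders, not computed solutions:

```python
# Evaluate sigma_*^(r) and v_p* from the dispersion relation
#   sigma_* = -dH_l/dy_*|_0 + K_s/K_l * mu_l * (H_l|_0 - 1),
# given H_l and its derivative at y_* = 0.  The two boundary values
# below are hypothetical placeholders for illustration only.
K_sl = 3.96                      # K_s / K_l
mu_l = 0.03                      # dimensionless wave number k h_0 (assumed)

Hl0 = 0.2 + 0.1j                 # hypothetical H_l|_{y_*=0}
dHl0 = -0.8 - 0.3j               # hypothetical dH_l/dy_*|_{y_*=0}

sigma_star = -dHl0 + K_sl * mu_l * (Hl0 - 1.0)
sigma_r = sigma_star.real                 # dimensionless amplification rate
v_p = -sigma_star.imag / mu_l             # dimensionless phase velocity
print(f"sigma_r* = {sigma_r:.3f}, v_p* = {v_p:.3f}")
```

The sign of the real part decides growth or decay of the interface perturbation, and the sign of $v_{p*}$ decides whether ripples migrate upward or downward.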
The numerical procedure for obtaining the wavelength and phase velocity of ice ripples is as follows.
First, Eq. (\ref{eq:geq-fl}) is solved with the boundary conditions (\ref{eq:ul-vl-h0}), (\ref{eq:shear-stress-xi}) and (\ref{eq:normal-stress-xi}). Substituting the obtained solution $f_{l}$ into Eq. (\ref{eq:ua-va-xi}), then Eqs. (\ref{eq:geq-basicFa}), (\ref{eq:geq-basicTa}), (\ref{eq:geq-fa}) and (\ref{eq:geq-Ha}) must be solved simultaneously for a given $Gr$ with the following boundary conditions:
Eq. (\ref{eq:ua-va-h0}),
$d\bar{F}_{a}/d\eta|_{\eta=\infty}=0$,
$\bar{T}_{a*}|_{\eta=0}=1$,
$\bar{T}_{a*}|_{\eta=\infty}=0$,
Eq. (\ref{eq:ua-va-xi}),
$df_{a}/d\eta|_{\eta=\infty}=0$,
$f_{a}|_{\eta=\infty}=0$,
$H_{a}|_{\eta=0}=1$ and
$H_{a}|_{\eta=\infty}=0$.
Here, it is assumed that
$u'_{a}|_{y=\infty}=\partial \psi'_{a}/\partial y|_{y=\infty}=0$,
$v'_{a}|_{y=\infty}=-\partial \psi'_{a}/\partial x|_{y=\infty}=0$, and
$T_{a}'|_{y=\infty}=0$. \cite{Gebhart73}
Substituting the obtained solutions $f_{l}$ and $H_{a}$ into the boundary conditions (\ref{eq:Tla-xi}) and (\ref{eq:heatflux-xi-h0}), Eq. (\ref{eq:geq-Hl}) is solved. Finally, substituting the obtained solution $H_{l}$ into Eqs. (\ref{eq:amplificationrate}) and (\ref{eq:phasevelocity}) and replacing $\mu_{l}$ with $(h_{0}/\delta_{0})\mu_{a}$, it is possible to calculate the amplification rate $\sigma_{*}^{(r)}$ and phase velocity $v_{p*}$ with respect to $\mu_{a}$.
\section{Results}
\subsection{\label{sec:solutionTa}Solutions of temperature distributions in the air boundary layer}
In the absence of airflow, Eq. (\ref{eq:geq-Ha}) yields $d^{2}H_{a}/d\eta^{2}=\mu_{a}^{2}H_{a}$. With the boundary conditions $H_{a}|_{\eta=0}=1$ and $H_{a}|_{\eta=\infty}=0$, the solution is given by $H_{a}={\rm exp}(-\mu_{a}\eta)$, and hence
$G'^{(r)}_{a}=(h_{0}/\delta_{0})\mu_{a}=kh_{0}=\mu_{l}$ and $G'^{(i)}_{a}=0$.
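As a quick numerical cross-check (an illustration, not part of the original analysis), the no-airflow solution $H_{a}={\rm exp}(-\mu_{a}\eta)$ can be reproduced by solving the two-point boundary value problem $d^{2}H_{a}/d\eta^{2}=\mu_{a}^{2}H_{a}$, $H_{a}|_{\eta=0}=1$, $H_{a}|_{\eta=\infty}=0$ on a truncated domain; the sketch below uses SciPy's solve_bvp with an assumed truncation length:

```python
import numpy as np
from scipy.integrate import solve_bvp

mu_a = 4.8     # dimensionless wave number used in Fig. 2(a)
eta_max = 5.0  # truncation of the semi-infinite domain (assumed large enough)

# H'' = mu_a^2 * H rewritten as a first-order system y = (H, H')
def rhs(eta, y):
    return np.vstack([y[1], mu_a**2 * y[0]])

def bc(ya, yb):
    # H(0) = 1 and H(eta_max) = 0
    return np.array([ya[0] - 1.0, yb[0]])

eta = np.linspace(0.0, eta_max, 200)
y0 = np.zeros((2, eta.size))        # trivial initial guess (problem is linear)
sol = solve_bvp(rhs, bc, eta, y0)

# deviation from the analytic solution exp(-mu_a * eta)
err = np.max(np.abs(sol.sol(eta)[0] - np.exp(-mu_a * eta)))
```

The numerical solution agrees with ${\rm exp}(-\mu_{a}\eta)$ to well within the solver tolerance, confirming the analytic form quoted above.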
In the presence of airflow, as shown in Fig. \ref{fig:tempprofiles-Ga} (a), $H_{a}^{(r)}$ decreases more rapidly than the exponential function, and $H_{a}^{(i)}$ acquires non-zero values. Therefore, as shown in Fig. \ref{fig:tempprofiles-Ga} (b), the value of $G'^{(r)}_{a}$ is greater than $\mu_{l}$ and $G'^{(i)}_{a}$ acquires non-zero values.
From Eq. (\ref{eq:heatflux-xi}), the energy conservation equation at the unperturbed water-air surface is $-K_{l}\partial\bar{T}_{l}/\partial y|_{y=h_{0}}=-K_{a}\partial\bar{T}_{a}/\partial y|_{y=h_{0}}$. When the linear temperature profile $\bar{T}_{l}$ in the water layer and the exact temperature profile $\bar{T}_{a*}$ in the air boundary layer are substituted into the above energy conservation equation,
$T_{la}$ in Eq. (\ref{eq:Tla}) is obtained as
\begin{equation}
T_{la} \approx T_{sl}+\frac{K_{a}}{K_{l}}\frac{h_{0}}{\delta_{0}/\bar{G}_{a*}}T_{\infty}.
\label{eq:Tla-airflow}
\end{equation}
From Eq. (\ref{eq:heatflux-zeta}), the energy conservation equation at the unperturbed ice-water interface is
$L\bar{V}=K_{l}(T_{sl}-T_{la})/h_{0}$.
Substituting Eq. (\ref{eq:Tla-airflow}) into this equation
yields
\begin{equation}
\bar{V} \approx -\frac{K_{a}T_{\infty}}{L(\delta_{0}/\bar{G}_{a*})}.
\label{eq:V-airflow}
\end{equation}
If $\delta_{0}/\bar{G}_{a*}$ in Eqs. (\ref{eq:Tla-airflow}) and (\ref{eq:V-airflow}) is identified with the thickness $\delta$ shown in Fig. \ref{fig:ice-water-air} (a), then $T_{la}$ and $\bar{V}$ of the previous papers \cite{Ueno04, Ueno07, Ueno09, Short06}, as well as the $\bar{V}$ mentioned in the Introduction of this paper, are recovered.
The linear temperature profile in the air assumed in the previous papers \cite{Ueno04, Ueno07, Ueno09, Short06}
is shown by the dashed line in Fig. \ref{fig:tempprofiles-Ga} (a) and is expressed as $\bar{T}_{a*}=1-\bar{G}_{a*}\eta$. Here, $\bar{G}_{a*}=-d\bar{T}_{a*}/d\eta|_{\eta=0}$ can be estimated numerically, yielding a value of about 0.5.
Using our notation, the boundary layer thickness in Ref. \cite{Short06} is expressed as $\delta=C\delta_{0}/\sqrt{2}$,
which must be equal to $\delta=\delta_{0}/\bar{G}_{a*}$.
From this, the parameter $C$ is determined as $C=\sqrt{2}/\bar{G}_{a*}\approx 2.8$.
Since $\bar{G}_{a*}$ is obtained from the solution of Eqs. (\ref{eq:geq-basicFa}) and (\ref{eq:geq-basicTa}), $\delta$ depends on the Prandtl number of air.
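As an order-of-magnitude check of Eq. (\ref{eq:V-airflow}), one can insert standard material constants (the values below are assumed textbook numbers, not taken from this paper) together with $\delta=\delta_{0}/\bar{G}_{a*}=13.4$ mm from Table \ref{tab:tableI}:

```python
# Rough numerical check of Eq. (V-airflow); all material constants below are
# assumed textbook values, not values quoted in this paper.
K_a = 0.024      # thermal conductivity of air [W/(m K)] (assumed)
L_vol = 3.06e8   # latent heat of fusion per unit volume of ice [J/m^3] (assumed)
T_inf = -10.0    # ambient air temperature [deg C]
delta = 13.4e-3  # thermal boundary layer thickness delta_0/G_bar [m] (Table I)

V_bar = -K_a * T_inf / (L_vol * delta)  # growth rate [m/s], Eq. (V-airflow)
V_mmh = V_bar * 3.6e6                   # convert m/s -> mm/h
```

The result, about 0.2 mm/h, is consistent with the entry for $T_{\infty}=-10$ $^{\circ}$C and $x=1.0$ m in Table \ref{tab:tableI}.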
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig2a.eps}\qquad
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig2b.eps}
\end{center}
\caption{For $Q/l=50$ $[{\rm (ml/h)/cm}]$, $\theta=\pi/2$, $x=1.0$ m and $\Delta T_{a}=10$ $^{\circ}$C,
(a) air temperature distribution $\bar{T}_{a*}$, and distributions of ${\rm exp}(-\mu_{a}\eta)$, $H_{a}^{(r)}$ and $H_{a}^{(i)}$ at the dimensionless wave number of $\mu_{a}=4.8$.
(b) perturbed part of the air temperature gradient $G'_{a}\equiv (h_{0}/\delta_{0})(-dH_{a}/d\eta|_{\eta=0})$ at the water-air surface:
in the absence of airflow $G'_{a}=\mu_{l}$;
in the presence of airflow $G'^{(r)}_{a}$ and $G'^{(i)}_{a}$ are the real and imaginary parts of $G'_{a}$. Here $\mu_{a}=10$ corresponds to the wavelength of 4.1 mm when $\delta_{0}=6.6$ mm.}
\label{fig:tempprofiles-Ga}
\end{figure}
\subsection{Approximate solutions of flow and temperature distributions in the water layer}
Since $\delta_{0}$ is of the same order as the characteristic length scale of ripples, we cannot use the long wavelength approximation, and the higher-order terms in $\mu_{a}$ in Eqs. (\ref{eq:geq-fa}) and (\ref{eq:geq-Ha}) have to be retained. On the other hand, since the water layer thickness $h_{0}$ is much smaller than the characteristic length scale of ripples, we can neglect the higher-order terms in $\mu_{l}$ in Eqs. (\ref{eq:geq-fl}), (\ref{eq:geq-Hl}), (\ref{eq:shear-stress-xi}) and (\ref{eq:normal-stress-xi}). Using the long wavelength approximation, $f_{l}$ and $H_{l}$ can be calculated approximately, as in the previous papers. \cite{Ueno03, Ueno04, Ueno07, Ueno09}
Changing the variable from $y_{*}$ to $z=1-y_{*}$, the general solution of Eq. (\ref{eq:geq-Hl}) is expressed as: \cite{Ueno03, Ueno09}
\begin{equation}
H_{l}(z)=C_{1}\phi_{1}(z)+C_{2}\phi_{2}(z)+i\mu_{l} \Pec_{l}\int_{0}^{z}\left\{\phi_{2}(z)\phi_{1}(z')-\phi_{1}(z)\phi_{2}(z')\right\}f_{l}(z')dz',
\label{eq:sol-Hl}
\end{equation}
where $\phi_{1}$ and $\phi_{2}$ are solutions of the homogeneous part of Eq. (\ref{eq:geq-Hl}).
From Eqs. (\ref{eq:Tla-xi}) and (\ref{eq:heatflux-xi-h0}), we obtain
$C_{1}=f_{l}|_{z=0}$ and
$C_{2}=(h_{0}/\delta_{0})(-dH_{a}/d\eta|_{\eta=0})f_{l}|_{z=0}$, respectively,
because $\phi_{1}|_{z=0}=1$, $\phi_{2}|_{z=0}=0$, $d\phi_{1}/dz|_{z=0}=0$ and $d\phi_{2}/dz|_{z=0}=1$.
Consequently, $H_{l}$ is expressed as
\begin{eqnarray}
H_{l}(z)
&=&f_{l}|_{z=0}\left\{\phi_{1}(z)+\frac{h_{0}}{\delta_{0}}\left(-\frac{dH_{a}}{d\eta}\Big|_{\eta=0}\right)\phi_{2}(z)\right\}
\nonumber \\
&& +i\mu_{l} \Pec_{l} \int_{0}^{z}\left\{\phi_{2}(z)\phi_{1}(z')-\phi_{1}(z)\phi_{2}(z')\right\}f_{l}(z')dz'.
\label{eq:finalsol-Hl}
\end{eqnarray}
For typical values of $h_{0}$ and $u_{l0}$, $\Rey_{l}\sim 1$ and $\Pec_{l} \sim 10$; hence $\mu_{l}\Rey_{l} \ll 1$ and $\mu_{l}\Pec_{l} \sim 1$ for the length scale of ripples on icicles. Therefore, we can neglect the $\mu_{l}\Rey_{l}$ term in Eqs. (\ref{eq:geq-fl}) and (\ref{eq:normal-stress-xi}). This corresponds to neglecting the inertia term of the full Orr-Sommerfeld equation. \cite{Ueno07} Furthermore, expanding $\phi_{1}$ and $\phi_{2}$ with respect to $\mu_{l}\Pec_{l}$
up to first order is sufficient. Indeed, the justification for these approximations was confirmed by our recent numerical analysis. \cite{Ueno09} Hence, it is sufficient to use the following approximate solutions:
\begin{equation}
f_{l}(z)=\frac{1}{6+i\alpha}(6+i\alpha z-6z^{2}-i\alpha z^{3}),
\label{eq:fl}
\end{equation}
\begin{equation}
\phi_{1}(z)
=1-i\left(\frac{1}{2}z^{2}-\frac{1}{12}z^{4}\right)\mu_{l}\Pec_{l},
\label{eq:phi1}
\end{equation}
\begin{equation}
\phi_{2}(z)
=z-i\left(\frac{1}{6}z^{3}-\frac{1}{20}z^{5}\right)\mu_{l} \Pec_{l}.
\label{eq:phi2}
\end{equation}
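The boundary values quoted above follow directly from the polynomial forms (\ref{eq:fl})-(\ref{eq:phi2}). The symbolic sketch below (an illustration, not part of the paper's derivation) checks $\phi_{1}|_{z=0}=1$, $d\phi_{1}/dz|_{z=0}=0$, $\phi_{2}|_{z=0}=0$, $d\phi_{2}/dz|_{z=0}=1$, and that $f_{l}$ vanishes at the ice-water interface $z=1$:

```python
import sympy as sp

z, alpha, mupe = sp.symbols('z alpha mupe', real=True)
I = sp.I

# Eqs. (fl), (phi1) and (phi2); mupe stands for mu_l * Pe_l
f_l = (6 + I*alpha*z - 6*z**2 - I*alpha*z**3) / (6 + I*alpha)
phi1 = 1 - I*(z**2/2 - z**4/12)*mupe
phi2 = z - I*(z**3/6 - z**5/20)*mupe

checks = [
    sp.simplify(phi1.subs(z, 0) - 1),               # phi1(0) = 1
    sp.simplify(sp.diff(phi1, z).subs(z, 0)),       # phi1'(0) = 0
    sp.simplify(phi2.subs(z, 0)),                   # phi2(0) = 0
    sp.simplify(sp.diff(phi2, z).subs(z, 0) - 1),   # phi2'(0) = 1
    sp.simplify(f_l.subs(z, 1)),                    # f_l vanishes at z = 1
]
```

All five expressions reduce to zero, confirming the stated boundary values.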
Since the direction of the $x$ axis in Fig. \ref{fig:ice-water-air} is opposite to that in the previous papers, \cite{Ueno03, Ueno04, Ueno07, Ueno09} the sign of $\bar{U}_{l*}$ in this paper is reversed. This leads to functional forms of $f_{l}$, $\phi_{1}$ and $\phi_{2}$ that differ from those in the previous papers.
In the presence of airflow, using the approximate solutions (\ref{eq:fl}), (\ref{eq:phi1}) and (\ref{eq:phi2}), Eqs. (\ref{eq:amplificationrate}) and (\ref{eq:phasevelocity}) yield
\begin{eqnarray}
\sigma_{*}^{(r)}&=&
\frac{
G'^{(r)}_{a}\left\{36-\frac{3}{2}\alpha(\mu_{l}\Pec_{l})\right\}
+G'^{(i)}_{a}\left\{6\alpha+9\mu_{l}\Pec_{l}\right\}
-\frac{3}{2}\alpha(\mu_{l}\Pec_{l})}{36+\alpha^{2}}\nonumber \\
&&+K^{s}_{l}\mu_{l}
\frac{
G'^{(r)}_{a}\left\{36-\frac{7}{10}\alpha(\mu_{l}\Pec_{l})\right\}
+G'^{(i)}_{a}\left\{6\alpha+\frac{21}{5}\mu_{l}\Pec_{l}\right\}
-\frac{7}{10}\alpha(\mu_{l}\Pec_{l})-\alpha^{2}}
{36+\alpha^{2}},
\label{eq:amp-airflow}
\end{eqnarray}
\begin{eqnarray}
v_{p*}&=&\frac{1}{\mu_{l}}\left[
\frac{-\frac{1}{4}\alpha^{2}(\mu_{l}\Pec_{l})
+G'^{(r)}_{a}\left\{6\alpha+9\mu_{l}\Pec_{l}\right\}
-G'^{(i)}_{a}\left\{36-\frac{3}{2}\alpha(\mu_{l}\Pec_{l})\right\}}{36+\alpha^{2}} \right. \nonumber \\
&& \left.+K^{s}_{l}\mu_{l}
\frac{6\alpha-\frac{7}{60}\alpha^{2}(\mu_{l}\Pec_{l})
+G'^{(r)}_{a}\left\{6\alpha+\frac{21}{5}\mu_{l}\Pec_{l}\right\}
-G'^{(i)}_{a}\left\{36-\frac{7}{10}\alpha(\mu_{l}\Pec_{l})\right\}}
{36+\alpha^{2}}\right].
\label{eq:vp-airflow}
\end{eqnarray}
On the other hand, in the absence of airflow, since $G'^{(r)}_{a}=\mu_{l}$ and $G'^{(i)}_{a}=0$ as mentioned above, Eqs. (\ref{eq:amp-airflow}) and (\ref{eq:vp-airflow}) reduce to the previous dispersion relation: \cite{Ueno03, Ueno09}
\begin{eqnarray}
\sigma_{*}^{(r)}
&=&
\frac{
\mu_{l}\left\{36-\frac{3}{2}\alpha(\mu_{l}\Pec_{l})\right\}
-\frac{3}{2}\alpha(\mu_{l}\Pec_{l})}{36+\alpha^{2}}
+K^{s}_{l}\mu_{l}
\frac{
\mu_{l}\left\{36-\frac{7}{10}\alpha(\mu_{l}\Pec_{l})\right\}
-\frac{7}{10}\alpha(\mu_{l}\Pec_{l})-\alpha^{2}}{36+\alpha^{2}},\nonumber \\
\label{eq:Uamp-noairflow}
\end{eqnarray}
\begin{eqnarray}
v_{p*}&=&\frac{1}{\mu_{l}}
\left[\frac{-\frac{1}{4}\alpha^{2}(\mu_{l}\Pec_{l})
+\mu_{l}\left\{6\alpha+9\mu_{l}\Pec_{l}\right\}}{36+\alpha^{2}}
+K^{s}_{l}\mu_{l}\frac{6\alpha-\frac{7}{60}\alpha^{2}(\mu_{l}\Pec_{l})
+\mu_{l}\left\{6\alpha+\frac{21}{5}\mu_{l}\Pec_{l}\right\}}{36+\alpha^{2}}\right].\nonumber \\
\label{eq:Uvp-noairflow}
\end{eqnarray}
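The reduction of Eqs. (\ref{eq:amp-airflow}) and (\ref{eq:vp-airflow}) to Eqs. (\ref{eq:Uamp-noairflow}) and (\ref{eq:Uvp-noairflow}) under $G'^{(r)}_{a}=\mu_{l}$ and $G'^{(i)}_{a}=0$ can be verified numerically. The sketch below implements both pairs of formulas verbatim (the function names are ours):

```python
def sigma_vp_airflow(Gr, Gi, mu_l, alpha, muPe, Ks):
    """Eqs. (amp-airflow) and (vp-airflow); Gr, Gi are G'a^(r), G'a^(i)."""
    D = 36 + alpha**2
    s = ((Gr*(36 - 1.5*alpha*muPe) + Gi*(6*alpha + 9*muPe)
          - 1.5*alpha*muPe) / D
         + Ks*mu_l*(Gr*(36 - 0.7*alpha*muPe) + Gi*(6*alpha + 4.2*muPe)
                    - 0.7*alpha*muPe - alpha**2) / D)
    v = (1/mu_l)*((-0.25*alpha**2*muPe + Gr*(6*alpha + 9*muPe)
                   - Gi*(36 - 1.5*alpha*muPe)) / D
                  + Ks*mu_l*(6*alpha - (7/60)*alpha**2*muPe
                             + Gr*(6*alpha + 4.2*muPe)
                             - Gi*(36 - 0.7*alpha*muPe)) / D)
    return s, v

def sigma_vp_noairflow(mu_l, alpha, muPe, Ks):
    """Eqs. (Uamp-noairflow) and (Uvp-noairflow)."""
    D = 36 + alpha**2
    s = ((mu_l*(36 - 1.5*alpha*muPe) - 1.5*alpha*muPe) / D
         + Ks*mu_l*(mu_l*(36 - 0.7*alpha*muPe)
                    - 0.7*alpha*muPe - alpha**2) / D)
    v = (1/mu_l)*((-0.25*alpha**2*muPe + mu_l*(6*alpha + 9*muPe)) / D
                  + Ks*mu_l*(6*alpha - (7/60)*alpha**2*muPe
                             + mu_l*(6*alpha + 4.2*muPe)) / D)
    return s, v

# spot check of the reduction at sample parameter values
samples = [(0.5, 1.3, 2.0, 3.96), (0.1, 0.4, 9.0, 3.96), (1.2, 2.5, 0.7, 3.96)]
max_dev = max(
    abs(a - n)
    for mu_l, alpha, muPe, Ks in samples
    for a, n in zip(sigma_vp_airflow(mu_l, 0.0, mu_l, alpha, muPe, Ks),
                    sigma_vp_noairflow(mu_l, alpha, muPe, Ks))
)
```

Setting $G'^{(r)}_{a}=\mu_{l}$ and $G'^{(i)}_{a}=0$ reproduces the no-airflow results exactly, as expected.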
\subsection{\label{sec:wavelength}Wavelength and translation velocity of ripples}
For the water supply rate per width $Q/l=50$ [(ml/h)/cm] and the angle $\theta=\pi/2$, Figs. \ref{fig:sim-mua-amp-vp} (a) and (b) show the numerically obtained dimensionless amplification rate $\sigma_{*}^{(r)}=\sigma^{(r)}/(\bar{V}/h_{0})$ and dimensionless translation velocity $v_{p*}=v_{p}/\bar{V}$ versus the dimensionless wave number $\mu_{a}=k\delta_{0}$, respectively. The wave number of ripples that one expects to observe is that for which the amplification rate is maximum. We also define the value of $v_{p*}$ at the wave number at which $\sigma_{*}^{(r)}$ acquires its maximum value.
In the presence of airflow, $\sigma_{*}^{(r)}$ acquires a maximum value of $\sigma^{(r)}_{*\rm max}=0.085$ at $\mu_{a}=4.8$ (solid line in Fig. \ref{fig:sim-mua-amp-vp} (a)). Since the wave number $k$ is normalized by $\delta_{0}$, the corresponding wavelength is 8.6 mm from $\lambda=2\pi\delta_{0}/\mu_{a}$. Here we have used $\delta_{0}=6.6$ mm estimated from the two parameters $x=1.0$ m and $\Delta T_{a}=10$ $^{\circ}$C.
At $\mu_{a}=4.8$, $v_{p*}=0.48$ as represented by the solid line in Fig. \ref{fig:sim-mua-amp-vp} (b).
On the other hand, in the absence of airflow, Eq. (\ref{eq:Uamp-noairflow}) acquires a maximum value of $\sigma^{(r)}_{*\rm max}=0.054$ at $\mu_{a}=4.3$ (dashed line in Fig. \ref{fig:sim-mua-amp-vp} (a)), which corresponds to the wavelength of $\lambda=9.6$ mm. At $\mu_{a}=4.3$, $v_{p*}=0.59$, as represented by the dashed line in Fig. \ref{fig:sim-mua-amp-vp} (b).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig3a.eps}
\hspace{1cm}
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig3b.eps}
\\[5mm]
\includegraphics[width=8cm,height=8cm,keepaspectratio,clip]{fig3c.eps}
\end{center}
\caption{For $Q/l=50$ $[{\rm (ml/h)/cm}]$, $\theta=\pi/2$ and $\delta_{0}=6.6$ mm,
(a) dimensionless amplification rate $\sigma_{*}^{(r)}=\sigma^{(r)}/(\bar{V}/h_{0})$ versus dimensionless wave number $\mu_{a}=k\delta_{0}$;
(b) dimensionless phase velocity $v_{p*}=v_{p}/\bar{V}$ versus dimensionless wave number $\mu_{a}$. Solid and dashed lines indicate the presence and absence of airflow, respectively.
(c) The behaviour of the real and imaginary parts of the perturbed temperature gradient at the water-air surface with respect to $\mu_{a}$, in the presence of airflow (solid lines) and in the absence of airflow (dashed lines).
Here $\mu_{a}=100$ corresponds to the wavelength of 413 $\mu$m when $\delta_{0}=6.6$ mm.}
\label{fig:sim-mua-amp-vp}
\end{figure}
Any disturbance near the solidification front can be initiated by non-uniformity in temperature in the vicinity of the ice-water interface. Since the water layer considered here is very thin, we cannot neglect the influence of external disturbance at the water-air surface on the growth condition of the ice-water interface.
In order to determine the growth condition from the dispersion relation (\ref{eq:dispersion}), it is necessary to obtain the perturbed temperature amplitude $H_{l}$ in the water layer. $H_{l}$ must satisfy the boundary condition (\ref{eq:heatflux-xi-h0}) which includes the perturbed air temperature gradient at the water-air surface.
Using Eq. (\ref{eq:Gar-Gai}) and $\bar{U}_{l*}|_{y_{*}=1}=-1$, the real and imaginary parts of Eq. (\ref{eq:heatflux-xi-h0}) can be written as follows:
\begin{equation}
-\frac{dH^{(r)}_{l}}{dy_{*}}\Big|_{y_{*}=1}=G'^{(r)}_{a}f^{(r)}_{l}|_{y_{*}=1}-G'^{(i)}_{a}f^{(i)}_{l}|_{y_{*}=1}, \qquad
-\frac{dH^{(i)}_{l}}{dy_{*}}\Big|_{y_{*}=1}=G'^{(r)}_{a}f^{(i)}_{l}|_{y_{*}=1}+G'^{(i)}_{a}f^{(r)}_{l}|_{y_{*}=1}.
\label{eq:dHlr-dHli-airflow}
\end{equation}
Since $G'^{(r)}_{a}=\mu_{l}$ and $G'^{(i)}_{a}=0$ in the absence of airflow, Eq. (\ref{eq:dHlr-dHli-airflow}) reduces to the previous results: \cite{Ueno09}
\begin{equation}
-\frac{dH^{(r)}_{l}}{dy_{*}}\Big|_{y_{*}=1}=\mu_{l}f^{(r)}_{l}|_{y_{*}=1}, \qquad
-\frac{dH^{(i)}_{l}}{dy_{*}}\Big|_{y_{*}=1}=\mu_{l}f^{(i)}_{l}|_{y_{*}=1}.
\label{eq:dHlr-dHli-noairflow}
\end{equation}
The solid and dashed lines in Fig. \ref{fig:sim-mua-amp-vp} (c) show the behaviour of
$-dH^{(r)}_{l}/dy_{*}|_{y_{*}=1}$, $-dH^{(i)}_{l}/dy_{*}|_{y_{*}=1}$ in Eq. (\ref{eq:dHlr-dHli-airflow})
and of
$\mu_{l}f^{(r)}_{l}|_{y_{*}=1}$, $\mu_{l}f^{(i)}_{l}|_{y_{*}=1}$ in Eq. (\ref{eq:dHlr-dHli-noairflow})
with respect to $\mu_{a}$. It can be seen that $-dH^{(r)}_{l}/dy_{*}|_{y_{*}=1}$ and $\mu_{l}f^{(r)}_{l}|_{y_{*}=1}$ increase for small $\mu_{a}$.
In the absence of airflow, the rate of latent heat loss due to thermal diffusion from the water-air surface to the air changes locally by the water-air surface disturbance. \cite{Ueno09} On the other hand, in the presence of airflow, the rate of latent heat loss is enhanced by the airflow, more so than in the case of thermal diffusion. However, as shown in Fig. \ref{fig:sim-mua-amp-vp} (c), non-uniformity of the rate of latent heat loss at the water-air surface decreases with an increase in $\mu_{a}$ because of the action of the restoring force on the water-air surface, which causes the amplitude of the water-air surface disturbance to decrease. \cite{Ueno03, Ueno04, Ueno07, Ueno09} This effect is due to the parameter $\alpha$ in $f_{l}$ in Eqs. (\ref{eq:dHlr-dHli-airflow}) and (\ref{eq:dHlr-dHli-noairflow}) and is more effective for large wave numbers.
The physical meaning of the non-zero values of $-dH^{(i)}_{l}/dy_{*}|_{y_{*}=1}$ in Eq. (\ref{eq:dHlr-dHli-airflow}) and $\mu_{l}f^{(i)}_{l}|_{y_{*}=1}$ in Eq. (\ref{eq:dHlr-dHli-noairflow}) will be discussed in Sec. \ref{sec:heatflux}.
An approximation of Eq. (\ref{eq:amp-airflow}) makes the above discussion clearer.
We note that the second term in Eq. (\ref{eq:amp-airflow}) is smaller than the first term, and the wave number at which $\sigma_{*}^{(r)}$ acquires a maximum value is almost the same as that without the second term. \cite{Ueno09}
Therefore, extracting the most dominant term from the first term in (\ref{eq:amp-airflow}) and using (\ref{eq:alpha}), we obtain
\begin{equation}
\sigma_{*}^{(r)}
\approx
\frac{36G'^{(r)}_{a}-\frac{3}{2}\alpha(\mu_{l}\Pec_{l})}{36}=G'^{(r)}_{a}-\frac{\Pec_{l}}{12}\left(\frac{a}{h_{0}}\right)^{2}\mu_{l}^{4},
\label{eq:amp-approx-airflow}
\end{equation}
at $\theta=\pi/2$. As mentioned above, the non-uniformity of the air temperature gradient at the water-air surface is the trigger of the ice-water interface instability, which is represented by the positive term $G'^{(r)}_{a}$ in Eq. (\ref{eq:amp-approx-airflow}).
In the absence of airflow, since $G'^{(r)}_{a}=\mu_{l}$, we find from $d\sigma_{*}^{(r)}/d\mu_{l}=0$ that $\sigma_{*}^{(r)}$ acquires a maximum value at $\mu_{l}=[3(h_{0}/a)^2/\Pec_{l}]^{1/3}$. From this, an approximate formula is obtained to determine the wavelength of the ripples: $\lambda=2\pi(a^{2}h_{0}\Pec_{l}/3)^{1/3}$, \cite{Ueno07, Ueno09} as mentioned in the Introduction in this paper.
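The stated extremum can be recovered symbolically from Eq. (\ref{eq:amp-approx-airflow}) with $G'^{(r)}_{a}=\mu_{l}$. The sketch below (an illustrative check, not part of the paper's derivation) verifies that $d\sigma_{*}^{(r)}/d\mu_{l}=0$ at $\mu_{l}=[3(h_{0}/a)^{2}/\Pec_{l}]^{1/3}$ and that the corresponding wavelength is $\lambda=2\pi(a^{2}h_{0}\Pec_{l}/3)^{1/3}$:

```python
import sympy as sp

mu, Pe, a, h0 = sp.symbols('mu Pe a h0', positive=True)

# Eq. (amp-approx-airflow) with G'a^(r) replaced by mu_l (no-airflow case)
sigma = mu - Pe/12*(a/h0)**2*mu**4

# claimed position of the maximum
mu_max = (3*(h0/a)**2/Pe)**sp.Rational(1, 3)
slope = sp.simplify(sp.diff(sigma, mu).subs(mu, mu_max))  # should vanish

# wavelength lambda = 2*pi*h0/mu_l at the maximum, compared with the
# approximate formula 2*pi*(a^2*h0*Pe/3)^(1/3); the difference should be zero
lam_diff = 2*sp.pi*h0/mu_max - 2*sp.pi*(a**2*h0*Pe/3)**sp.Rational(1, 3)
```

Both checks confirm the extremum and the wavelength formula quoted above.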
On the other hand, in the presence of airflow, the value of $G'^{(r)}_{a}$ is greater than $\mu_{l}$, as shown in Fig. \ref{fig:tempprofiles-Ga} (b). This indicates that the natural convection airflow enhances the destabilization of the ice-water interface compared to the destabilization due to the thermal diffusion. However, it is difficult to express the dependence of $G'^{(r)}_{a}$ on $\mu_{a}$ analytically.
The stabilization of the ice-water interface is dominated by the negative term in Eq. (\ref{eq:amp-approx-airflow}). The stabilization mechanism due to the action of the restoring force of the surface tension and gravity on the water-air surface is not relevant to the airflow. Although the value of $\sigma^{(r)}_{*\rm max}$ in the presence of airflow is greater than that in its absence, the wavelengths determined from the most unstable mode have nearly the same value in both cases.
However, there is a considerable difference in $v_{p*}$. In the absence of airflow, $v_{p*}>0$ for all $\mu_{a}$, as shown by the dashed line in Fig. \ref{fig:sim-mua-amp-vp} (b). On the other hand, in the presence of airflow, $v_{p*}$ takes negative values in the small wave number region because the terms with $G'^{(i)}_{a}$ in Eq. (\ref{eq:vp-airflow}) are dominant there. The solid line in Fig. \ref{fig:sim-mua-amp-vp} (b) indicates that the sign of $v_{p*}$ changes from negative to positive at $\mu_{a}=3.7$. What determines the sign of $v_{p*}$ will be discussed in Sec. \ref{sec:heatflux}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig4a.eps}
\hspace{2mm}
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig4b.eps}
\\[5mm]
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig4c.eps}
\end{center}
\caption{(a) The wavelength versus $\sin\theta$ at $Q/l=$160/3 [(ml/h)/cm].
(b) The wavelength versus $Q/l$ at $\theta=\pi/2$.
(c) The phase velocity versus $Q/l$ at $\theta=\pi/2$.
Solid lines indicate the absence of airflow; \cite{Ueno03, Ueno07}
dashed lines and dashed-dotted lines indicate the presence of airflow for $Gr=609$ and $Gr=108$, respectively.}
\label{fig:wavelength-angle-Qoverl-airflow}
\end{figure}
Figures \ref{fig:wavelength-angle-Qoverl-airflow} (a) and (b) show the dependence of the wavelength of ripples on $\sin\theta$ at $Q/l=160/3$ [(ml/h)/cm] and that on $Q/l$ [(ml/h)/cm] at $\theta=\pi/2$, respectively. We have determined these wavelengths from the value of $\mu_{a}$ at which $\sigma_{*}^{(r)}$ acquires a maximum value for a given $Q/l$ and $\theta$. The solid lines indicate the absence of airflow, whereas the dashed and dashed-dotted lines indicate the presence of airflow.
Although the wavelengths of ripples in the presence of airflow are slightly shorter than those in its absence, the dependence of the wavelengths on the angle and water supply rate shows almost the same behaviour as the experimental results (see Fig. 8 of Ref. \cite{Ueno09}).
When determining the wavelengths in Fig. \ref{fig:wavelength-angle-Qoverl-airflow} (a) and (b), we have used
$\delta_{0}=6.6$ mm ($x=1.0$ m and $\Delta T_{a}=10$ $^{\circ}$C) for the dashed lines and
$\delta_{0}=3.7$ mm ($x=0.1$ m and $\Delta T_{a}=10$ $^{\circ}$C) for the dashed-dotted lines.
Table \ref{tab:tableI} shows the wavelengths obtained from various values of $\delta_{0}=4x/Gr$ using different combinations of $x$ and $\Delta T_{a}$. It is found that the wavelength increases with an increase in both $x$ and $\Delta T_{a}$. This suggests that $G'^{(r)}_{a}$ in Eq. (\ref{eq:amp-approx-airflow}) must include the modified local Grashof number $Gr$.
However, the dependence of the wavelength $\lambda$ on $x$ and $\Delta T_{a}$ through $Gr$ is extremely weak compared to that of $\bar{V}$, $T_{la}$ and $\delta$. This result is consistent with the fact that the wavelength of ripples on icicles is nearly independent of the vertical position on the icicle and of the ambient air temperature.
Table \ref{tab:tableI} also shows that the value of $v_{p*}$ increases with a decrease in $\Delta T_{a}$ and a decrease in $x$.
Since $v_{p*}$ has positive values, the ripple with the most unstable mode moves only upwards.
Figure \ref{fig:wavelength-angle-Qoverl-airflow} (c) shows the dependence of $v_{p*}$ on $Q/l$.
The range of variation of $v_{p*}$ with $Q/l$ in the presence of airflow (dashed and dashed-dotted lines) is larger than that in its absence (solid line), and the dependence of $v_{p*}$ on $Gr$ is stronger than that of the wavelength $\lambda$. Therefore, we can say that $v_{p*}$ is sensitive to the parameters characterizing the air boundary layer.
\begin{table}[ht]
\caption{\label{tab:tableI} For $Q/l=50$ [(ml/h)/cm] and $\theta=\pi/2$, the dependence of modified local Grashof number, $Gr$, ice growth rate, $\bar{V}$, temperature at water-air surface, $T_{la}$, thickness of thermal boundary layer, $\delta=\delta_{0}/\bar{G}_{a*}$, wavelength of ripple, $\lambda$, and dimensionless translation velocity of ripple, $v_{p*}$, on air temperature far away, $T_{\infty}$, and position from the bottom of the gutter, $x$.}
\begin{ruledtabular}
\begin{tabular}{ccccccc}
&\multicolumn{4}{c}{$x=1.0$ m} \\
\hline
$T_{\infty}$ ($^{\circ}$C) & $Gr$ & $\bar{V}$ (mm/h) & $T_{la}$ ($\times 10^{-3}$ $^{\circ}$C)
& $\delta$ (mm) & $\lambda$ (mm) & $v_{p*}$ \\
\hline
-5 & 512 & 0.08 & -1.2 & 16.1 & 8.3 & 0.65 \\
-10 & 609 & 0.20 & -2.9 & 13.4 & 8.6 & 0.48 \\
-15 & 674 & 0.32 & -4.9 & 12.1 & 8.7 & 0.41 \\
-20 & 724 & 0.47 & -7.0 & 11.2 & 8.7 & 0.37 \\
\hline
&\multicolumn{4}{c}{$x=0.1$ m} \\
\hline
-5 & 91 & 0.14 & -2.1 & 9.6 & 7.9 & 0.89 \\
-10 & 108 & 0.33 & -5.0 & 7.9 & 8.3 & 0.71 \\
-15 & 120 & 0.56 & -8.4 & 7.0 & 8.4 & 0.63 \\
-20 & 129 & 0.81 & -12.1 & 6.5 & 8.5 & 0.56 \\
\end{tabular}
\end{ruledtabular}
\end{table}
\subsection{\label{sec:heatflux}Heat flux at the ice-water interface and water-air surface}
We assume a dimensionless small perturbation of the ice-water interface with an infinitesimal initial amplitude $\delta_{b}=\zeta_{k}/h_{0}$:
\begin{equation}
y_{*}=\zeta_{*}=\delta_{b}\Imag[{\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})]
=\delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})],
\label{eq:zeta}
\end{equation}
where $\sigma_{*}=\sigma/(\bar{V}/h_{0})$, $t_{*}=(\bar{V}/h_{0})t$, $x_{*}=x/h_{0}$, $\delta_{b}(t_{*})\equiv \delta_{b}{\rm exp}(\sigma^{(r)}_{*}t_{*})$ and $\Imag$ denotes the imaginary part of its argument.
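The second equality in Eq. (\ref{eq:zeta}) rests on the identity $\Imag[{\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})]={\rm exp}(\sigma^{(r)}_{*}t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})]$, which follows from $v_{p*}=-\sigma^{(i)}_{*}/\mu_{l}$. A brief numerical check (with sample values; $\mu_{l}$ is an assumed illustrative number) reads:

```python
import numpy as np

# sample dimensionless values: sigma_r and v_p are taken from the most
# unstable airflow mode in Fig. 3; mu_l is an assumed illustrative value
sigma_r, v_p, mu_l = 0.085, 0.48, 0.5
sigma_i = -mu_l*v_p                 # from v_p* = -sigma_*^(i) / mu_l
t = 2.0
x = np.linspace(0.0, 20.0, 100)

# left- and right-hand sides of the identity (with delta_b = 1)
lhs = np.imag(np.exp((sigma_r + 1j*sigma_i)*t + 1j*mu_l*x))
rhs = np.exp(sigma_r*t)*np.sin(mu_l*(x - v_p*t))
```

The two sides agree to machine precision, so the exponential form and the travelling-wave form of the disturbance are equivalent.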
The corresponding perturbation of the water-air surface with an infinitesimal initial amplitude $\delta_{t}=\xi_{k}/h_{0}$ is given by
\begin{eqnarray}
y_{*}=\xi_{*}&=&1+\Imag[\delta_{t}{\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})] \nonumber \\
&=&1+[(f_{l}^{(r)}|_{y_{*}=1})^{2}+(f_{l}^{(i)}|_{y_{*}=1})^2]^{1/2}
\delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})-\Theta_{\xi_{*}}],
\label{eq:xi}
\end{eqnarray}
where the relation $\delta_{t}=f_{l}|_{{y_{*}}=1}\delta_{b}$ for the amplitude is used, and $\Theta_{\xi_{*}}$ is a phase difference between the water-air surface and the ice-water interface.
Since $f_{l}|_{y_{*}=1}$ depends on the wave number through the parameter $\alpha$, the amplitude and phase of the water-air surface relative to the ice-water interface change depending on the wavelength of the ice-water interface disturbance. \cite{Ueno03, Ueno04, Ueno07, Ueno09}
The temperatures in the water layer and ice are expressed in the dimensionless forms: \cite{Ueno09}
\begin{equation}
T_{l*}(y_{*})\equiv \frac{T_{l}(y_{*})-T_{sl}}{T_{sl}-T_{la}}
=-y_{*}
+\delta_{b}\Imag[H_{l}(y_{*}){\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})],
\label{eq:Tl}
\end{equation}
\begin{equation}
T_{s*}(y_{*}) \equiv \frac{T_{s}(y_{*})-T_{sl}}{T_{sl}-T_{la}}
=\delta_{b}{\rm exp}(\mu_{l} y_{*})\Imag[(H_{l}|_{y_{*}=0}-1){\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})],
\label{eq:Ts}
\end{equation}
and the temperature in the air boundary layer is expressed as
\begin{equation}
T_{a*}(\eta)
=\bar{T}_{a*}(\eta)
+\Imag\left[\left(-\frac{d\bar{T}_{a*}}{d\eta}\Big|_{\eta=0}\right)
H_{a}(\eta)\frac{\xi_{k}}{\delta_{0}}{\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})\right],
\label{eq:Ta}
\end{equation}
where we note that $y$ is normalized by $h_{0}$ in the water layer and ice, but $y$ is normalized by $\delta_{0}$ in the air boundary layer.
We define the perturbed parts of the dimensionless heat flux from the ice-water interface to the water and from the ice to the ice-water interface as
$q_{l*}\equiv \Imag [-\partial T'_{l*}/\partial y_{*}|_{y_{*}=\zeta_{*}}]$ and
$q_{s*}\equiv \Imag[-K^{s}_{l}\partial T'_{s*}/\partial y_{*}|_{y_{*}=\zeta_{*}}]$, respectively,
where $T'_{l*}$ and $T'_{s*}$ represent the perturbed terms in Eqs. (\ref{eq:Tl}) and (\ref{eq:Ts}).
Hence, the total heat flux from the ice-water interface to the water and ice is expressed as follows: \cite{Ueno09}
\begin{eqnarray}
q_{ls*} &\equiv& q_{l*}-q_{s*}
=\delta_{b}\Imag\left[\left\{-\frac{dH_{l}}{dy_{*}}\Big|_{y_{*}=0}+K^{s}_{l}\mu_{l}(H_{l}|_{y_{*}=0}-1)\right\}
{\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})\right] \nonumber \\
&=&\left[\left\{-\frac{dH_{l}^{(r)}}{dy_{*}}\Big|_{y_{*}=0}+K^{s}_{l}\mu_{l}(H_{l}^{(r)}|_{y_{*}=0}-1)\right\}^{2}
+\left\{-\frac{dH_{l}^{(i)}}{dy_{*}}\Big|_{y_{*}=0}+K^{s}_{l}\mu_{l}H_{l}^{(i)}|_{y_{*}=0}\right\}^{2}\right]^{1/2} \nonumber \\
&&\times \delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})-\Theta_{q_{ls*}}],
\label{eq:ql-qs}
\end{eqnarray}
where $\Theta_{q_{ls*}}$ is a phase difference between the total heat flux $q_{ls*}$ at $y_{*}=\zeta_{*}$ and the ice-water interface.
We also define the perturbed part of dimensionless heat flux from the water-air surface to the air as $q_{a*}\equiv \Imag[-\partial T'_{a*}/\partial \eta|_{\eta=\xi'/\delta_{0}}]$,
where $T'_{a*}=T'_{a}/(T_{la}-T_{\infty})$ represents the perturbed term in Eq. (\ref{eq:Ta}). Hence,
\begin{eqnarray}
q_{a*}&=&-\frac{\zeta_{k}}{\delta_{0}}
\Imag\left[\frac{dH_{a}}{d\eta}\Big|_{\eta=0}f_{l}|_{y_{*}=1}{\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})\right] \nonumber \\
&=&
\left[\left(G'^{(r)}_{a}f_{l}^{(r)}|_{y_{*}=1}-G'^{(i)}_{a}f_{l}^{(i)}|_{y_{*}=1}\right)^{2}
+\left(G'^{(r)}_{a}f_{l}^{(i)}|_{y_{*}=1}+G'^{(i)}_{a}f_{l}^{(r)}|_{y_{*}=1}\right)^{2}\right]^{1/2} \nonumber \\
&& \times \delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})-\Theta_{q_{a*}}],
\label{eq:qa}
\end{eqnarray}
where $\Theta_{q_{a*}}$ is a phase difference between the heat flux $q_{a*}$ at $\eta=\xi'/\delta_{0}$ and the ice-water interface.
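The passage from the complex amplitudes to the magnitude-and-phase forms in Eqs. (\ref{eq:ql-qs}) and (\ref{eq:qa}) uses the identity $\Imag[A\,e^{i\theta}]=|A|\sin(\theta-\Theta)$ with $\Theta=-\arg A$. A minimal numerical illustration (the amplitude value is hypothetical):

```python
import numpy as np

def amp_phase(A):
    """Magnitude |A| and phase shift Theta such that
    Im[A * exp(i*theta)] = |A| * sin(theta - Theta)."""
    return np.abs(A), -np.angle(A)

# hypothetical complex amplitude, standing in for, e.g.,
# -dH_l/dy_*|0 + K_l^s * mu_l * (H_l|0 - 1) in Eq. (ql-qs)
A = 0.7 - 1.3j
mag, Theta = amp_phase(A)

theta = np.linspace(0.0, 2.0*np.pi, 50)
lhs = np.imag(A*np.exp(1j*theta))
rhs = mag*np.sin(theta - Theta)
```

This is how the phase differences $\Theta_{q_{ls*}}$ and $\Theta_{q_{a*}}$ are extracted from the complex prefactors of ${\rm exp}(\sigma_{*}t_{*}+i\mu_{l}x_{*})$.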
\begin{figure}[ht]
\begin{center}
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig5a.eps}\hspace{5mm}
\includegraphics[width=7cm,height=7cm,keepaspectratio,clip]{fig5b.eps}\\[2mm]
\includegraphics[width=8cm,height=8cm,keepaspectratio,clip]{fig5c.eps}
\end{center}
\caption{(a) and (b) are illustrations of the time evolution of an initial disturbance of the ice-water interface from $t_{*}=0$ to $t_{*}=1/\sigma^{(r)}_{*\rm max}$. The solid arrows in the water film and the dashed arrows in the air boundary layer show the directions of the supercooled water flow and the airflow, respectively. The arrows labelled $q_{ls*}$ and $q_{a*}$ indicate the positions of maximum heat flux at the ice-water interface and the water-air surface, respectively.
(a) represents the disturbance of $\mu_{a}=4.3$ in the absence of airflow.
(b) represents the disturbance of $\mu_{a}=4.8$ in the presence of airflow.
(c) represents the phase shifts, with respect to $\mu_{a}$, of the water-air surface, $\Theta_{\xi_{*}}$,
of the total heat flux at the ice-water interface, $\Theta_{q_{ls*}}$,
and of the heat flux at the water-air surface, $\Theta_{q_{a*}}$,
relative to the ice-water interface.}
\label{fig:heatflux-sla-phase}
\end{figure}
Figures \ref{fig:heatflux-sla-phase} (a) and (b) illustrate the time evolution of the ice-water interface with an initial amplitude of $\delta_{b}=0.05$, for the wave number $\mu_{a}=4.3$ in the absence of airflow and for $\mu_{a}=4.8$ in the presence of airflow, respectively. The respective wave number represents the fastest growing mode, at which $\sigma^{(r)}_{*}$ acquires a maximum value, as shown by the dashed and solid lines in Fig. \ref{fig:sim-mua-amp-vp} (a).
The arrows on the ice-water interface and the water-air surface show the position of the maximum of $q_{ls*}$
and that of $q_{a*}$.
Using Eq. (\ref{eq:dHlr-dHli-airflow}), Eq. (\ref{eq:qa}) can be written as
$q_{a*}=[(-dH^{(r)}_{l}/dy_{*}|_{y_{*}=1})^{2}+(-dH^{(i)}_{l}/dy_{*}|_{y_{*}=1})^2]^{1/2}
\delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})-\Theta_{q_{a*}}]$.
Therefore, non-zero values of $-dH^{(i)}_{l}/dy_{*}|_{y_{*}=1}$ in Eq. (\ref{eq:dHlr-dHli-airflow}) contribute to the imaginary part of $q_{a*}$, and cause the phase shift of $q_{a*}$ relative to the ice-water interface.
In the absence of airflow, as shown in Fig. \ref{fig:heatflux-sla-phase} (a), $q_{a*}$ is largest at each protruded part of the water-air surface because the isotherm in the air is symmetrical around the protruded part. As shown in Fig. \ref{fig:heatflux-sla-phase} (c), the water-air surface shifts to the positive $x_{*}$ direction by $\Theta_{\xi_{*}}$ relative to the ice-water interface. In the absence of airflow, since $G'^{(r)}_{a}=\mu_{l}$ and $G'^{(i)}_{a}=0$, Eq. (\ref{eq:qa}) yields
$q_{a*}=\mu_{l}[(f_{l}^{(r)}|_{y_{*}=1})^{2}+(f_{l}^{(i)}|_{y_{*}=1})^2]^{1/2}
\delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})-\Theta_{q_{a*}}]$.
Comparing this to Eq. (\ref{eq:xi}), it is found that the position of the maximum of $q_{a*}$ also shifts to the positive $x_{*}$ direction by $\Theta_{q_{a*}}=\Theta_{\xi_{*}}$.
However, the position of the maximum of $q_{ls*}$ shifts by $\Theta_{q_{ls*}}$ to the upper side of the protruded part of the ice-water interface.
On the other hand, in the presence of an upward airflow shown by the dashed arrow in Fig. \ref{fig:heatflux-sla-phase} (b), the isotherms in the air boundary layer are no longer symmetrical around each protruded part.
The isotherms become closer on the lower side of the protruded part of the water-air surface due to the upward airflow. Hence, $q_{a*}$ is largest on the lower side of the protruded part, as shown in Fig. \ref{fig:heatflux-sla-phase} (b).
By comparing Fig. \ref{fig:heatflux-sla-phase} (a) to (b),
first, it is found that the position of the maximum of $q_{a*}$ in the absence of airflow is always on the protruded part of the water-air surface, but that this position is changed by the presence of airflow and depends on the wave number $\mu_{a}$.
As shown in Fig. \ref{fig:heatflux-sla-phase} (c), the sign of $\Theta_{q_{a*}}$ in the presence of airflow changes from negative to positive at $\mu_{a}=5.7$, which corresponds to the change of sign of $-dH^{(i)}_{l}/dy_{*}|_{y_{*}=1}$ in Fig. \ref{fig:sim-mua-amp-vp} (c).
Second, there is a critical difference between the phase shift $\Theta_{q_{ls*}}$ in the absence of airflow and that in its presence. In the absence of airflow, the position of the maximum of $q_{ls*}$ shifts to the upper side of the protruded part of the ice-water interface with an increase in $\mu_{a}$ (see $\Theta_{q_{ls*}}$ (no airflow) in Fig. \ref{fig:heatflux-sla-phase} (c)). In this case, the sign of $v_{p*}$ is positive as shown by the dashed line in Fig. \ref{fig:sim-mua-amp-vp} (b).
Figure \ref{fig:heatflux-sla-phase} (a) shows that the ripple at $\mu_{a}=4.3$ moves upwards at $v_{p*}=0.59$. The displacement in the dimensional form is about 11 $h_{0}$ after the dimensionless time $1/\sigma^{(r)}_{*\rm max}=1/0.054$.
On the other hand, in the presence of upward airflow, the position of the maximum of $q_{ls*}$ is on the lower side of the protruded part of the ice-water interface for $0<\mu_{a}<3.7$, whereas that is on the upper side for $\mu_{a}>3.7$ (see $\Theta_{q_{ls*}}$ (airflow) in Fig. \ref{fig:heatflux-sla-phase} (c)). We showed that the sign of $v_{p*}$ changes from negative to positive at $\mu_{a}=3.7$ by the solid line in Fig. \ref{fig:sim-mua-amp-vp} (b).
Therefore, the sign of $v_{p*}$ is related to the sign of $\Theta_{q_{ls*}}$.
The ripples move down in the mode $0<\mu_{a}<3.7$, whereas they move up in the mode $\mu_{a}>3.7$. However, the ripple with the most unstable mode of $\mu_{a}=4.8$ is expected to be observed. Figure \ref{fig:heatflux-sla-phase} (b) shows that the ripple at $\mu_{a}=4.8$ moves upwards at $v_{p*}=0.48$. The displacement in the dimensional form is about 6 $h_{0}$ after the dimensionless time $1/\sigma^{(r)}_{*\rm max}=1/0.085$.
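The displacements quoted above follow directly from the dimensionless translation velocity and the maximum amplification rate: over one amplitude e-folding time $t_{*}=1/\sigma^{(r)}_{*\rm max}$, a ripple travels $v_{p*}/\sigma^{(r)}_{*\rm max}$ in units of $h_{0}$. A quick numerical check of the two cases:

```python
# Displacement of a ripple (in units of h0) over one amplitude
# e-folding time t* = 1/sigma_r, given translation velocity v_p*.
def ripple_displacement(v_p, sigma_r):
    return v_p / sigma_r

# No-airflow case (mu_a = 4.3): v_p* = 0.59, sigma*_max = 0.054
d_no_flow = ripple_displacement(0.59, 0.054)   # ~11 h0

# Airflow case (mu_a = 4.8): v_p* = 0.48, sigma*_max = 0.085
d_flow = ripple_displacement(0.48, 0.085)      # ~6 h0

print(round(d_no_flow), round(d_flow))
```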
Substituting Eqs. (\ref{eq:Tl}) and (\ref{eq:Ts}) into Eq. (\ref{eq:Tsl}), the dimensionless form of $\Delta T_{sl}$ can be written as:
\begin{equation}
\Delta T_{sl*}
=[(H_{l}^{(r)}|_{y_{*}=0}-1)^{2}+(H_{l}^{(i)}|_{y_{*}=0})^{2}]^{1/2}
\delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})-\Theta_{T_{\zeta_{*}}}],
\label{eq:DeltaTsl}
\end{equation}
where $\Theta_{T_{\zeta_{*}}}$ is a phase difference between the temperature at $y_{*}=\zeta_{*}$ and the ice-water interface. $H_{l}^{(r)}$ and $H_{l}^{(i)}$ in (\ref{eq:DeltaTsl}) are determined by solving (\ref{eq:geq-Hl}) with the boundary conditions (\ref{eq:Tla-xi}) and (\ref{eq:heatflux-xi-h0}). Since the water film flow is not affected by the natural convection airflow, the forms of $\bar{U}_{l*}|_{y_{*}=1}$ and $f_{l}|_{y_{*}=1}$ in (\ref{eq:Tla-xi}) and (\ref{eq:heatflux-xi-h0}) are the same as those in the absence of airflow. However, $dH_{a}/d\eta|_{\eta=0}$ in (\ref{eq:heatflux-xi-h0}) in the presence of airflow is different from that in the absence of airflow.
As a result of the change in the temperature gradient at the water-air surface, the solution $H_{l}$ changes and causes a different distribution of $\Delta T_{sl*}$ at the ice-water interface. The position of the maximum of $\Delta T_{sl*}$ changes depending on that of the rate of latent heat loss at the water-air surface. \cite{Ueno09} That is why $\Delta T_{sl}$ in Eq. (\ref{eq:Tsl}) was considered as the spatial temperature non-uniformity caused by the external disturbance at the water-air surface. The heat flux $q_{s*}$ in the ice in the vicinity of the ice-water interface is caused by the deviation $\Delta T_{sl*}$, which contributes to the second term in Eq. (\ref{eq:dispersion}).
\section{Summary and Discussion}
A morphological instability theory has been elaborated for ice growth under a water film flow with a free surface and a natural convection airflow, within a linear stability analysis. This theory provides a unified treatment of the heat flow in ice, water and air through a disturbed ice-water interface and water-air surface, a thin water film flow and an airflow, taking into account the influence of the shape of the water-air surface on the growth condition of the ice-water interface disturbance.
Even though the natural convection airflow was introduced, the shear stress-free condition at the unperturbed water-air surface still held. Moreover, the influence of the perturbed part of shear and normal stresses due to natural convection airflow on the water film flow was negligible. Consequently, the perturbed distribution of water film flow could be obtained without considering the influence of the airflow. However, since the rate of latent heat loss from the water-air surface to the surrounding air is affected by the airflow, the perturbed temperature distribution in the water layer is different from that in the absence of airflow.
In the absence of airflow, the position of the maximum of heat flux $q_{a*}$ at the water-air surface is at the protruded part of the water-air surface. In the presence of airflow, that of $q_{a*}$ is not necessarily at the protruded part.
Depending on the position of the maximum of $q_{a*}$, that of $q_{ls*}$ at the ice-water interface changes. We find that the position of the maximum growth rate of the ice-water interface disturbance is shifted upward relative to the position of the maximum of $q_{a*}$.
We also find that although the airflow causes the amplification rate of the ice-water interface disturbance to increase by the enhancement of the rate of latent heat loss from the water-air surface to the surrounding air, the wavelength of ice ripples is not significantly affected by the natural convection airflow.
On the other hand, the mean ice growth rate $\bar{V}$ and the ripple translation velocity $v_{p}$ depend on the parameters characterizing the air boundary layer.
We mention the importance of the influence of the temperature distribution in water film flow on the growth condition of the ice-water interface disturbance even though the water layer is very thin. If we can neglect the temperature distribution within the water layer, and focus on only the temperature distribution in the air, Eq. (\ref{eq:heatflux-zeta}) is replaced by $L(\bar{V}+\partial \zeta/\partial t)=-K_{a}\partial T_{a}/\partial y|_{y=\xi}$. Linearizing this equation at $y=h_{0}$ yields, to the zeroth order in $\xi_{k}$, $\bar{V}=-K_{a}T_{\infty}/(L\delta_{0}/\bar{G}_{a*})$,
which is identical to Eq. (\ref{eq:V-airflow}).
The first order in $\xi_{k}$ gives
$\sigma=(\bar{V}/h_{0})(h_{0}/\delta_{0})(-dH_{a}/d\eta|_{\eta=0})f_{l}|_{y_{*}=1}$,
whose real part is approximately expressed as
$\sigma_{*}^{(r)}
=G'^{(r)}_{a}f_{l}^{(r)}|_{y_{*}=1}-G'^{(i)}_{a}f_{l}^{(i)}|_{y_{*}=1}
=(36G'^{(r)}_{a}+6\alpha G'^{(i)}_{a})/(36+\alpha^{2}) \approx G'^{(r)}_{a}$.
Comparing this to Eq. (\ref{eq:amp-approx-airflow}), it is found from Fig. \ref{fig:tempprofiles-Ga} (b) that the ice-water interface is always unstable because the stabilizing term is absent. It should be noted that the stabilizing term in Eq. (\ref{eq:amp-approx-airflow}) was obtained from the solution of the perturbed temperature distribution in the water film flow. Although the heat transfer through the air boundary layer is the deciding factor in the growth rate $\bar{V}$, in order to obtain the growth condition of the ice-water interface disturbance, it is important to determine the perturbed temperature distribution in the water layer as well as that in the air boundary layer.
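The intermediate equality above can be checked numerically if one takes $f_{l}^{(r)}|_{y_{*}=1}=36/(36+\alpha^{2})$ and $f_{l}^{(i)}|_{y_{*}=1}=-6\alpha/(36+\alpha^{2})$, the form implied by the quoted result (an assumption here, since those expressions appear earlier in the paper); the parameter values below are purely illustrative:

```python
# Numerical check (illustrative) that
#   G'r*f_r - G'i*f_i == (36*G'r + 6*alpha*G'i)/(36 + alpha^2)
# under the assumed forms f_r = 36/(36+alpha^2), f_i = -6*alpha/(36+alpha^2).
def sigma_r(Gr, Gi, alpha):
    denom = 36.0 + alpha**2
    f_r, f_i = 36.0 / denom, -6.0 * alpha / denom
    return Gr * f_r - Gi * f_i

def sigma_r_closed(Gr, Gi, alpha):
    return (36.0 * Gr + 6.0 * alpha * Gi) / (36.0 + alpha**2)

# hypothetical parameter values, for the check only
for alpha in (0.1, 0.5, 2.0):
    assert abs(sigma_r(0.08, 0.05, alpha) - sigma_r_closed(0.08, 0.05, alpha)) < 1e-14
```

For small $\alpha$ the closed form is dominated by the $36G'^{(r)}_{a}$ term, consistent with the approximation $\sigma_{*}^{(r)} \approx G'^{(r)}_{a}$.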
Although the wavelengths theoretically obtained in Table \ref{tab:tableI} are in agreement with experimental results, \cite{Maeno94, Matsuda97, Ueno09} several questions arise concerning the values of $\bar{V}$ and $T_{la}$. The measured growth rates of the icicle radius in the experiment were about $1.4 \sim 5.3$ mm/h for air temperatures in the range $-4.9 \sim -28.8$ $^\circ$C at zero wind speed. \cite{Maeno94}
Also, the mean radial growth rate of ice grown on a 6-mm diameter round stick was 1.7 mm/h (see Figure 9 (a) \cite{Ueno09}). This experiment was conducted in a cold room, where large temperature fluctuations of $\pm3$ $^\circ$C around $-9$ $^\circ$C were observed. Substituting the measured value into the energy conservation equation $L\bar{V}=-K_{l}T_{la}/h_{0}$ at the ice-water interface yields a degree of supercooling of the water layer of $T_{la}=-0.03$ $^\circ$C. In fact, the values of $T_{la}$ and $\bar{V}$ calculated from Eqs. (\ref{eq:Tla-airflow}) and (\ref{eq:V-airflow}) are smaller than the measured values by one order of magnitude.
If the value of the boundary layer thickness $\delta$ is less than that estimated from the natural convection boundary layer, Eqs. (\ref{eq:Tla-airflow}) and (\ref{eq:V-airflow}) suggest that the values of $T_{la}$ and $\bar{V}$ should increase. It is known that it is somewhat difficult to grow icicles with significant ripples in the steady calm conditions of icicle formation. \cite{Maeno94} Therefore, instead of assuming a calm environment for ice growth, different heat transfer mechanisms need to be considered. Also, it is necessary to predict or to measure the mean ice growth rate $\bar{V}$ accurately in order to estimate the displacement of ripples.
We have to be careful in the measurement of the displacement of ripples because $v_{p}$ depends on environmental conditions, as mentioned above.
Finally, limitations of the proposed theory must be mentioned.
First, it was assumed that ice was grown in a flat gutter on an inclined plane, considering a perturbation around the flat ice surface.
However, as shown in Table \ref{tab:tableI}, for a given air temperature $T_{\infty}$, since the ice growth rate $\bar{V}$ depends on $x$, the actual grown ice thickness on the gutter varies locally. If heat conduction through the ice to the substrate is negligible, the ice thickness $b_{0}$ in the unperturbed state is proportional to the time. \cite{Ueno09} The angle that tangent vector to the ice-water interface at $(x,b_{0})$ makes with respect to the positive $x$ direction is given by $\phi(x,t)=\cos^{-1}[\{1+(db_{0}/dx)^{2}\}^{-1/2}]$.
Making use of Eq. (\ref{eq:V-airflow}), $x=h_{0}x_{*}$ and $t=(h_{0}/\bar{V})t_{*}$,
$\phi(x,t)$ gradually changes in time from the initial flat ice surface by
$\cos^{-1}[\{1+\{(d\bar{V}/dx)t\}^{2}\}^{-1/2}]
=\cos^{-1}[\{1+\{t_{*}/(4x_{*})\}^{2}\}^{-1/2}]$.
However, the change is negligible except for small $x_{*}$ because $t_{*}/x_{*} \ll 1$ even after 10 hours in the range of $0.1 \leq x \leq 1$ m. The actual geometry of the icicle is that of an elongated carrot shape. \cite{Short06} In this case too, since icicle's surfaces are nearly vertical, we can neglect the change in the slope $db_{0}/dx$ in $\phi(x,t)$ except for the tip region.
Hence, the use of air boundary layer under the assumption of a flat ice surface is valid, \cite{Short06} and the local variation in the thickness $h_{0}$ and the surface velocity $u_{l0}$ of the water film in the unperturbed state is negligible as ice grows.
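The smallness of the slope change can be illustrated with the expression above; the ratio $t_{*}/x_{*}=0.01$ used below is a hypothetical stand-in for the small values quoted in the text:

```python
import math

# Angle (in radians) of the ice-surface tangent after growth,
# phi = arccos([1 + (t*/(4 x*))^2]^(-1/2)); for t*/x* << 1 this
# reduces to phi ~ t*/(4 x*), since arccos((1+s^2)^(-1/2)) = arctan(s).
def slope_angle(t_over_x):
    s = t_over_x / 4.0
    return math.acos(1.0 / math.sqrt(1.0 + s**2))

phi = slope_angle(0.01)          # assumed small ratio t*/x*
print(math.degrees(phi))         # a small fraction of a degree
```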
However, for ice growth on aircraft wings and aerial cables, the local change in $\phi$ in time is remarkable compared to the icicle growth, so that we have to consider a morphological instability around curved ice surfaces in the unperturbed state, and $h_{0}$ and $u_{l0}$ are no longer constant over the curved ice surface. This is relevant to the problems on solidification on surfaces of arbitrary curvature. \cite{Myers02}
Second, in our linear stability analysis, a small perturbation of the ice-water interface was assumed:
$y_{*}=\zeta_{*}=\delta_{b}(t_{*})\sin[\mu_{l}(x_{*}-v_{p*}t_{*})]$.
However, since the amplitude $\delta_{b}(t_{*})=\delta_{b}{\rm exp}(\sigma^{(r)}_{*}t_{*})$ in $\zeta_{*}$ and in the corresponding fields increases exponentially with time when $\sigma^{(r)}_{*}>0$, the non-linear terms for the perturbation in the governing equations and boundary conditions are no longer small.
Even though the linear approximation only describes the initial evolution of the perturbation, there was good agreement between the wavelengths predicted from our linear stability analysis and experimentally observed wavelengths of finite amplitude ripples. However, it is needless to say that the linear theory is unable to clarify further features related to ripple development, and the question arises of the value of the saturation amplitude, and of how the perturbation amplitude evolves towards this value. \cite{Caroli92} This leads us to extend the linear perturbation calculation to higher orders in the perturbation amplitude. \cite{Wollkind70} Such an amplitude expansion generalizes the time evolution equation of the amplitude of the ice-water interface from $d\delta_{b}(t_{*})/dt_{*}=\sigma^{(r)}_{*}\delta_{b}(t_{*})$ to a nonlinear amplitude evolution equation. In order to implement it, algebraically complicated calculations are needed.
Third, for the relatively weak flow considered here, the free shear stress condition at the water-air surface was still satisfied, and the water film flow was driven by gravity only. However, in the presence of a strong airflow around aircraft wings and aerial cables, the water film flow is driven by gravity and aerodynamic forces. Due to the strong air shear stress exerted on the water-air surface, the distribution of water film flow must be modified from the half-parabolic form $\bar{U}_{l*}=y_{*}^{2}-2y_{*}$ to $\bar{U}_{l*}=y_{*}^{2}+(R_{\tau_{al}}-2)y_{*}$, as discussed in \ref{sec:linearization}.
It is known that the aerodynamic forces, as modified by the accreted ice, are significant in determining the wind drag and lift on iced structures. However, the traditional approach to wet icing modeling has been based on mass and energy conservation only and has ignored the dynamics of the surface flow of unfrozen water. \cite{Farzaneh08} When the airflow and the water film flow are coupled, the distribution of shear and normal stresses at the water-air surface may influence the temperature distribution in the water layer. The action of an aerodynamic force on the water-air surface, and the resulting morphological instability of the ice-water interface, have to be considered. These issues are beyond the scope of the analysis developed here.
Removing these restrictions will be the subject of future research.
\begin{acknowledgements}
This study was carried out within the framework of the NSERC/Hydro-Qu$\acute{\rm e}$bec/UQAC Industrial Chair on Atmospheric Icing of Power Network Equipment (CIGELE) and the Canada Research Chair on Engineering of Power Network Atmospheric Icing (INGIVRE) at the Universit$\acute{\rm e}$ du Qu$\acute{\rm e}$bec $\grave{\rm a}$ Chicoutimi.
The authors would like to thank all CIGELE partners (Hydro-Qu$\acute{\rm e}$bec, Hydro One, R$\acute{\rm e}$seau Transport d'$\acute{\rm E}$lectricit$\acute{\rm e}$ (RTE) and $\acute{\rm E}$lectricit$\acute{\rm e}$ de France (EDF), Alcan Cable, K-Line Insulators, Tyco Electronics, Dual-ADE, and FUQAC) whose financial support made this research possible.
\end{acknowledgements}
\subsection{Double beta decay}
\label{ch:cha41}
The most promising way to distinguish between Dirac and Majorana neutrinos is neutrinoless double beta decay (0\mbox{$\nu\beta\beta$ decay}{})
\begin{equation}
(Z,A) \rightarrow (Z+2,A) + 2 e^- \quad (\Delta L =2)
\end{equation}
which is only possible if
neutrinos are massive Majorana particles.
The measured quantity is called effective Majorana neutrino mass \mbox{$\langle m_{\nu_e} \rangle$ } and given by
\begin{equation}
\label{eq:ema}\mbox{$\langle m_{\nu_e} \rangle$ } = \mid \sum_i U_{ei}^2 \eta_i m_i \mid
\end{equation}
with the relative CP-phases $\eta_i = \pm 1$, $U_{ei}$ as the mixing
matrix elements and
$m_i$ as the
corresponding mass eigenvalues.
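As an illustration of how the relative CP phases act in Eq.~(\ref{eq:ema}), the sketch below evaluates \mbox{$\langle m_{\nu_e} \rangle$ } for an assumed two-state mixing; all numerical values are hypothetical:

```python
# Effective Majorana mass |sum_i U_ei^2 eta_i m_i| for assumed
# (hypothetical) mixing parameters; eta_i = +-1 are relative CP phases.
def m_eff(U2, eta, m):
    return abs(sum(u * e * mi for u, e, mi in zip(U2, eta, m)))

U2 = [0.7, 0.3]       # |U_e1|^2, |U_e2|^2 (assumed)
m  = [0.1, 0.1]       # mass eigenvalues in eV (assumed)

same     = m_eff(U2, [+1, +1], m)   # 0.10 eV: phases add
opposite = m_eff(U2, [+1, -1], m)   # 0.04 eV: partial cancellation
print(same, opposite)
```

The second case shows that opposite CP phases can suppress \mbox{$\langle m_{\nu_e} \rangle$ } well below the individual mass eigenvalues.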
From the experimental point of view, the evidence for 0\mbox{$\nu\beta\beta$ decay} is a peak in the sum energy
spectrum of the electrons
at the
Q-value of the involved transition.
The best limit is coming from the Heidelberg-Moscow experiment resulting in a
bound of \cite{bau99}
(Fig.\ref{pic:heimo})
\begin{equation}
\label{eq:thalb}
\mbox{$T_{1/2}^{0\nu}$} > 5.7 \cdot 10^{25} y \rightarrow \mbox{$\langle m_{\nu_e} \rangle$ } < 0.2 eV \quad (90 \% CL)
\end{equation}
having a sensitivity of $\mbox{$T_{1/2}^{0\nu}$} > 1.6 \cdot 10^{25} y$.
Eq.~(\ref{eq:ema}) has to be modified in the case of heavy neutrinos ($m_{\nu}
\mbox{$\stackrel{>}{\sim}$ } 1$ MeV). For such heavy neutrinos the mass can no longer be neglected in the
neutrino propagator resulting in an A-dependent
contribution
\begin{equation}
\mbox{$\langle m_{\nu_e} \rangle$ } = \mid \sum_{i=1,light}^N U^2_{ei} m_i + \sum_{h=1,heavy}^M F (m_h,A)
U^2_{eh} m_h \mid
\end{equation}
By comparing these limits for isotopes with different atomic mass,
interesting limits
on the mixing angles and \mbox{$\nu_\tau$} parameters for an MeV \mbox{$\nu_\tau$}
can be obtained \cite{hal83,zub97}.
\begin{figure}[bht]
\begin{center}
\epsfig{file=heimopeak.eps,width=7cm,height=5cm}
\caption{Observed sum energy spectrum of the electrons around the expected 0\mbox{$\nu\beta\beta$ decay}{} line position
obtained by the
Heidelberg-Moscow experiment. No signal peak is seen. The two
different spectra correspond to data sets with (black) and
without (grey) pulse
shape discrimination.}
\label{pic:heimo}
\end{center}
\end{figure}
\paragraph{Future}
Several upgrades are planned to improve the existing half-life limits, only three are mentioned here, for
details see \cite{zub98}.
The next to come is NEMO-3, a giant TPC using double beta emitters up to 10 kg in form of thin foils,
which should start operation in 2000.
Even more ambitious would be the usage of
large amounts of materials (in the order of several hundred kg to tons)
like enriched \mbox{$^{136}Xe$ } added to
scintillators
\cite{rag94}, 750 kg $TeO_2$ in form of cryogenic bolometers (CUORE) \cite{fio98} or a
huge cryostat containing several hundred detectors of
enriched \mbox{$^{76}Ge$ } with a total mass of 1 ton (GENIUS) \cite{kla98}.
\section*{Terrestrial neutrino mass searches}
\medskip
{\it K. Zuber$^a$}\\
$^a$ Lehrstuhl f\"ur Exp. Physik IV, Universit\"at Dortmund, 44221 Dortmund
\end{center}
\setcounter{section}{1}
\subsection{Introduction}
Neutrinos play a fundamental role in several fields of physics from cosmology down to
particle physics. Even more, the observation of a non-vanishing rest mass of neutrinos would
have a big impact on our present model of particle physics and might guide towards grand
unified
theories. Currently three pieces of evidence exist showing effects of massive neutrinos: the deficit in
solar neutrinos, the zenith angle dependence of atmospheric neutrinos and the excess events
observed by LSND. These effects are explained with the help of neutrino
oscillations, thus depending on $\Delta m^2 = m_2^2 - m_1^2$, where $m_1, m_2$ are the neutrino mass
eigenvalues, and therefore are not absolute mass measurements.
For a recent review on the physics of massive neutrinos see \cite{zub98}.
\input massnel.tex
\input massmu.tex
\input masstau.tex
\input bb.tex
\input mamo.tex
\input ref.tex
\end{document}
\subsection{Magnetic moment of the neutrino}
Another possibility to check the neutrino character and mass is the search for its magnetic moment{}.
In the case of Dirac neutrinos{}, it can be shown that neutrinos can have a magnetic moment
due to loop diagrams which is proportional to
their mass and is given by \cite{lee77,mar77}
\begin{equation}
\mbox{$\mu_{\nu}$} = \frac{3 G_F e}{8 \sqrt{2} \pi^2} m_{\nu} = 3.2 \cdot 10^{-19} (\frac{m_\nu}{eV}) \mbox{$\mu_B$}
\end{equation}
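The quoted prefactor can be checked numerically: writing $\mu_B = e/(2m_e)$ in natural units gives $\mu_\nu/\mu_B = 3 G_F m_e m_\nu/(4\sqrt{2}\pi^2)$, and the standard values of $G_F$ and $m_e$ then reproduce $3.2\cdot 10^{-19}$ per eV of neutrino mass:

```python
import math

# Check the prefactor mu_nu = 3.2e-19 (m_nu/eV) mu_B.
# In natural units mu_B = e/(2 m_e), so
# mu_nu/mu_B = 3 G_F m_e m_nu / (4 sqrt(2) pi^2).
G_F = 1.16637e-23   # Fermi constant in eV^-2
m_e = 0.510999e6    # electron mass in eV

coef = 3.0 * G_F * m_e / (4.0 * math.sqrt(2.0) * math.pi**2)
print(coef)   # ~3.2e-19 per eV of neutrino mass
```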
In the case of neutrino masses in the eV range, this is far too small to be observed
and to have any significant
effects in
astrophysics. Nevertheless
there exist GUT-models, which are able to increase the magnetic moment without increasing the mass
\cite{pal92}. However,
Majorana neutrinos have a vanishing static moment because of CPT invariance.
The existence of diagonal terms in the magnetic moment matrix would therefore prove
the
Dirac-character of neutrinos.
Non-diagonal terms in the moment matrix are possible for both types of neutrinos
allowing transition moments of the form \mbox{$\nu_e$} - $\bar{\nu}_\mu$.\\
Limits on magnetic moments arise from \mbox{$\nu_e$} $e$ - scattering experiments and
astrophysical considerations. The
differential cross section for \mbox{$\nu_e$} $e$ -
scattering in presence of a magnetic moment is given by
\begin{eqnarray}
\frac{d \sigma}{dT} = \frac{G_F^2 m_e}{2 \pi}
[(g_V + x+g_A)^2 +
(g_V + x- g_A)^2 (1-\frac{T}{E_\nu})^2 \\
+ (g_A^2 -
(x+g_V)^2)\frac{m_e T}{E_\nu^2}] + \frac{\pi \alpha^2 \mbox{$\mu_{\nu}$} ^2}{m_e^2}
\frac{1-T/E_\nu}{T}
\end{eqnarray}
where T is the kinetic energy of the recoiling electron and
$x$ denotes the neutrino form factor related to its mean square charge radius $\langle r^2 \rangle$
\begin{equation}
x=\frac{2 m_W^2 }{3} \langle r^2 \rangle \sin^2 \theta_W , \qquad x \rightarrow -x \quad \mbox{for}
\quad \bar{\mbox{$\nu_e$} }
\end{equation}
The contribution associated with the charge radius can be neglected in the case $\mu_\nu
\stackrel{>}{\sim} 10^{-11} \mbox{$\mu_B$} $.
As can be seen, the largest effect of a magnetic moment can be observed in the low
energy region, and because of
destructive interference
of the electroweak terms, searches with antineutrinos would be preferred. The obvious sources
are therefore nuclear
reactors. Experiments done so far give limits of \mbox{$\mu_{\nu}$} {} $<1.8 \cdot 10^{-10} \mu_B$ (\mbox{$\nu_e$} ), \mbox{$\mu_{\nu}$} $<7.4
\cdot
10^{-10} \mu_B$ (\mbox{$\nu_\mu$} ) and \mbox{$\mu_{\nu}$} {} $<5.4 \cdot 10^{-7} \mu_B$ (\mbox{$\nu_\tau$} ). Also bounds for a magnetic moment of a sterile
neutrino,
discussed in
more detail later, can be obtained from a Primakoff like conversion in $\nu$N scattering
if there is a mixing with \mbox{$\nu_\mu$} . \\
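The statement that the largest effect appears at low energies can be read off the $(1-T/E_\nu)/T$ factor of the magnetic-moment term; a shape-only sketch (overall constants dropped, energies in MeV assumed purely for illustration):

```python
# Relative size of the magnetic-moment term
#   pi*alpha^2*mu^2/m_e^2 * (1 - T/E_nu)/T
# at two recoil energies (shape only; the constant prefactor is dropped).
def mag_shape(T, E_nu):
    return (1.0 - T / E_nu) / T

E_nu = 3.0                              # assumed antineutrino energy, MeV
low, high = mag_shape(0.1, E_nu), mag_shape(1.0, E_nu)
print(low / high)                       # >10: low thresholds win
```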
Astrophysical limits are somewhat more stringent but also more model dependent.
To improve the experimental situation new experiments are taking data or are under construction. The most
advanced is the
MUNU experiment \cite{ams97} currently running at
the Bugey
reactor. It consists of a 1 m$^3$ TPC loaded with CF$_4$ under a pressure of 5 bar. The usage
of a TPC will not only allow the electron energy to be measured but, for the first time in such
experiments, also the scattering angle, making the
reconstruction of the neutrino energy possible.
In case of no magnetic moment the expected count rate is 9.5 per
day increasing to 13.4 per day if
$\mbox{$\mu_{\nu}$} = 10^{-10} \mbox{$\mu_B$} $ for an energy threshold of 500 keV. The estimated
background is 6 events per day. The expected
sensitivity level is down to $\mbox{$\mu_{\nu}$} = 3 \cdot 10^{-11} \mbox{$\mu_B$} $ . The usage
of a low background Ge-NaI
spectrometer at a shallow depth near a reactor has
also been considered \cite{bed97}. The usage of large low-level detectors
with a low-energy threshold
of a few keV in underground laboratories is also under investigation. The reactor
would be replaced
by a strong $\beta$-source. Calculations for a scenario of a 1-5 MCi $^{147}$Pm
source (endpoint
energy of 234.7 keV) in combination with a 100 kg low-level NaI(Tl) detector with a
threshold of about 2
keV
can be found in \cite{bar96}. Using a $^{51}$Cr source within the BOREXINO experiment would also allow
stringent limits to be put on $\mbox{$\mu_{\nu}$} $.
\subsection{Mass measurement of the muon neutrino {}}
The way to obtain limits on \mbox{$m_{\nu_\mu}$} is given by the two-body decay of
the $\pi^+$.
A precise measurement of the muon momentum $p_{\mu}$ and
knowledge of $m_{\mu}$
and
$m_{\pi}$ is required.
This measurement was done at PSI, resulting in a limit of \cite{ass96}
\begin{equation}
\mbox{$m_{\nu_\mu}$} ^2 = (-0.016 \pm 0.023) MeV^2 \quad \rightarrow \quad \mbox{$m_{\nu_\mu}$} < 170 keV (90
\%CL)
\end{equation}
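Behind this limit is the two-body kinematic relation $\mbox{$m_{\nu_\mu}$}^2 = m_\pi^2 + m_\mu^2 - 2 m_\pi \sqrt{m_\mu^2 + p_\mu^2}$. The sketch below uses a self-consistent muon momentum rather than the measured one, so it only illustrates the method and its sensitivity to $p_\mu$:

```python
import math

# Two-body decay pi+ -> mu+ + nu_mu at rest:
# m_nu^2 = m_pi^2 + m_mu^2 - 2 m_pi sqrt(m_mu^2 + p_mu^2).
m_pi, m_mu = 139.57018, 105.65837   # MeV

def m_nu2(p_mu):
    return m_pi**2 + m_mu**2 - 2.0 * m_pi * math.sqrt(m_mu**2 + p_mu**2)

# Muon momentum for a massless neutrino; feeding it back must give
# m_nu^2 = 0, showing how sensitive m_nu^2 is to p_mu.
p0 = (m_pi**2 - m_mu**2) / (2.0 * m_pi)
print(p0, m_nu2(p0))   # ~29.79 MeV, ~0
```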
A new idea looking for pion decay in flight using the g-2 storage ring at BNL has been proposed
recently \cite{cus99}. Because the g-2 ring would act as a high resolution spectrometer an
exploration
of \mbox{$m_{\nu_\mu}$} down to 8 keV seems possible. Such a bound would have some far reaching consequences:
First of all it would be the largest step on any neutrino mass improvement within the last 20
years (Fig.\ref{pic:pdg}). Secondly it would bring any magnetic moment calculated within the
standard model and associated with \mbox{$\nu_\mu$} down to a
level of vanishing
astrophysical importance. Furthermore, it would once and for all exclude the possibility that a 17 keV
mass eigenstate is the dominant contribution to \mbox{$\nu_\mu$} . Possibly the largest impact is on
astrophysical topics. All bounds on neutrino properties derived from stellar evolution are typically
valid for neutrino masses below about 10 keV, so they would then apply for \mbox{$\nu_\mu$} as well. For example, plasma
processes like $\gamma \rightarrow \nu \bar{\nu}$ would contribute to stellar energy losses and significantly delay
helium ignition, unless the neutrino has a magnetic moment smaller than $\mu_{\nu} < 3 \cdot 10^{-12} \mu_B$
\cite{raf99} much more stringent than laboratory bounds.
\begin{figure}
\begin{center}
\epsfig{file=neu.eps,width=8cm,height=6cm}
\caption{Evolution of neutrino mass limits over the last 15 years using the Particle Data Group values.
Extrapolated values are given for 2000 and 2002. Electron neutrino limits are given for $\beta$-decay (black
diamonds) and SN
1987A (green diamonds), for \mbox{$\nu_\mu$} {} as triangles and \mbox{$\nu_\tau$} as squares. As can be seen, the proposed measurement
of $m_{\mbox{$\nu_\mu$} }$ at
the g-2 experiment {} would result in the largest factor obtained. The mass scale corresponds to eV (\mbox{$\nu_e$} ), keV
(\mbox{$\nu_\mu$} ) and MeV (\mbox{$\nu_\tau$} ) respectively.}
\label{pic:pdg}
\end{center}
\end{figure}
\subsection{Mass measurements of the electron neutrino}
The classical way to determine the mass of $\bar{\mbox{$\nu_e$} }$ (which is identical to $m_{\nu_e}$
assuming CPT invariance) is the
investigation of the
electron spectrum in beta decay.
A finite neutrino mass reduces the phase space and leads to a
change in the shape
of the electron spectrum.
In case several mass
eigenstates contribute, the total electron spectrum is given by a
superposition
of the individual
contributions
\begin{equation}
N(E) \propto F(E,Z) \cdot p \cdot E \cdot (Q-E) \cdot \sum^3_{i=1}
\sqrt{(Q-E)^2 - m_i^2}
\mid U_{ei}^2
\mid
\end{equation}
where F(E,Z) is the Fermi-function, $m_i$ are the mass eigenvalues, $U_{ei}^2$ are
the mixing matrix elements connecting weak and mass eigenstates and $E,p$ are energy and momentum
of the emitted electron. The different involved $m_i$ produce kinks
in the Kurie-plot
where the size of the kinks is a measure
for the corresponding mixing
angle. This was discussed in connection with the now ruled out 17 keV
neutrino. A new sensitive search for kinks in the region 4--30 keV using
$^{63}$Ni
was done recently, resulting in an overall upper limit of $U_{e2}^2 < 10^{-3}$ \cite{hol99}.\\
Searches for an eV-neutrino are done near the endpoint region of isotopes with low Q - values.
The preferred isotope under study is tritium, with an endpoint energy of about 18.6 keV.
When extracting a neutrino mass limit from their data, most experiments done in the past end up with
negative $m_{\nu}^2$ fit values,
which need
not have a common origin.
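The sensitivity to an eV-scale mass sits entirely in the last few eV below the endpoint, as a toy evaluation of the neutrino phase-space factor shows (single mass state, illustrative numbers only):

```python
import math

# Neutrino phase-space factor (Q-E)*sqrt((Q-E)^2 - m^2) for a single
# mass eigenstate; the spectrum vanishes for Q - E < m.
def phase_space(E, Q=18600.0, m=0.0):   # energies in eV
    eps = Q - E
    if eps <= m:
        return 0.0
    return eps * math.sqrt(eps**2 - m**2)

E = 18600.0 - 5.0                # 5 eV below the endpoint
ratio = phase_space(E, m=2.5) / phase_space(E, m=0.0)
print(ratio)                     # ~0.87: visible suppression

# no counts closer to the endpoint than Q - m
assert phase_space(18600.0 - 2.0, m=2.5) == 0.0
```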
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\epsfig{file=combine_q3_q4_q5.eps,width=8cm,height=6cm} &
\epsfig{file=re187spec.eps,width=7cm,height=5cm}
\end{tabular}
\caption{left: Mainz 1998 electron spectrum near the endpoint of tritium decay. The
signal/background ratio is increased by a factor
of 10 in comparison with the
1994 data. The Q-value of 18.574 keV marks the center of mass of the rotation-vibration
excitations of the
molecular ground state of the daughter ion $^3HeT^+$. right: \mbox{$^{187}Re$ } $\beta$-spectrum obtained with a
cryogenic bolometer by the Genoa group. Calibration peaks can also be seen.}
\label{pic:mainz}
\end{center}
\end{figure}
For a detailed discussion of the experiments see \cite{hol92,ott95}.
While until
1990 mostly magnetic spectrometers were used
for the measurements, the new experiments in Mainz and Troitzk use electrostatic retarding
spectrometers \cite{lob85,pic92}.
Fig.\ref{pic:mainz} shows the
present electron spectrum near the
endpoint as obtained with the Mainz spectrometer.
The current obtained limits are 2.8 eV (95 \% CL) ($m_{\nu}^2 = - 3.7 \pm 5.3 (stat.) \pm 2.1 (sys.) eV^2$)
\cite{wei99} and 2.5 eV (95 \% CL)
($m_{\nu}^2 = - 1.9 \pm 3.4 (stat.) \pm 2.2 (sys.) eV^2$)
\cite{lob99} respectively. The final sensitivity should be around 2 eV.\\
Beside this, the Troitzk experiment observed excess counts in the
region of interest,
which can be described by a monoenergetic line a few eV below the endpoint.
Even more, a semiannual modulation of the line position is observed \cite{lob99}. Clearly
further
measurements are needed to investigate this effect. Considerations exist of building a new, larger-scale
version of such a spectrometer to probe neutrino masses below 1 eV.\\
A complementary strategy is followed by using cryogenic microcalorimeters. Because these
experiments measure the total energy released,
final state effects are not important. This method allows the investigation
of the $\beta$-decay of \mbox{$^{187}Re$ }, which has the lowest Q-value of all $\beta$-emitters (Q=2.67
keV). Furthermore the associated half-life measurement would be quite important, because
the \mbox{$^{187}Re$ } - $^{187}$Os pair is a well known cosmochronometer and a more precise half -
life measurement would sharpen the dating of events in the early universe like the formation of
the solar system.
Cryogenic bolometers were built in the form of metallic Re as well as AgReO$_4$ crystals, and
$\beta$ - spectra (Fig.\ref{pic:mainz}) were measured \cite{gat99} \cite{ale99}, but at present
the experiments
are not
giving any limits on neutrino masses. Investigations into using this kind of technique also for
calorimetric measurements on tritium \cite{dep99} and on $^{163}$Ho \cite{meu98} are currently under way.
Measuring accurately
branching ratios of atomic transitions or the internal bremsstrahlung spectrum in $^{163}$Ho
is interesting because this would result directly in a limit on \mbox{$m_{\nu_e}$} {}.
\subsection{Mass measurement of the tau neutrino {}}
The present knowledge of the mass of \mbox{$\nu_\tau$} stems from measurements with
ARGUS, CLEO, OPAL, DELPHI and ALEPH (see \cite{pas97}).
Practically all experiments use the $\tau$-decay into five charged pions
$\tau \rightarrow \mbox{$\nu_\tau$} + 5\pi^{\pm} (\pi^0)$
with a branching ratio of BR = ($9.7 \pm 0.7) \cdot 10^{-4}$. To increase the
statistics CLEO, OPAL, DELPHI
and ALEPH
extended their search by including the 3 $\pi$ decay mode. But even with its
unfavourable statistics,
the 5-prong decay is much more sensitive, because the mass of the
hadronic system peaks at about 1.6
GeV, while the 3-prong system is dominated by the $a_1$ resonance at
1.23 GeV. While ARGUS obtained their limit by investigating the invariant mass of the
5 $\pi$-system, ALEPH, CLEO and OPAL
performed a two-dimensional analysis by including the energy of
the hadronic system.
The most
stringent one is given by ALEPH \cite{bar98}
\section{Introduction}
\label{Introduction}
In previous studies~\cite{Cea:2002wx,Cea:2005td} on the vacuum dynamics of
pure non-abelian gauge theories we found that the deconfinement
temperature depends on the strength of an external abelian
chromomagnetic field. In particular we ascertained that the deconfinement
temperature decreases when the strength of the applied field is
increased and eventually goes to zero. It is not difficult to see here an
analogy with the reversible Meissner effect in the case of ordinary
superconductors (where the system goes to normal even at zero temperature
if the magnetic field is strong enough)
and therefore we referred to it as ``vacuum color
Meissner effect''. We have also verified that the same effect
is not present in the case of abelian gauge theories, so that
it seems to be directly linked to the non-abelian nature of
the gauge group.
The dependence of the deconfinement temperature on
applied external fields is surely linked to the dynamics
underlying color confinement, therefore in our opinion,
apart from possible phenomenological implications,
such an effect could shed light on confinement/deconfinement
mechanisms.
On this basis we believe that it is important to
test if the effect continues to hold and how it qualitatively
changes when switching on fermionic degrees of freedom.
One important aim of the present work is therefore to investigate
the dependence of the deconfinement temperature
on the strength of an external abelian chromomagnetic field
in the case of full QCD with two flavors.
A second important and relevant issue regards the relation
between deconfinement and chiral symmetry restoration.
As is well known, the two phenomena appear to be coincident
in ordinary QCD, while they are not so in different theories
(like QCD with adjoint fermions~\cite{Karsch:1998qj,Engels:2005rr,Lacagnina:2006sk}).
A simple explanation of this fact is not yet known
and may be strictly linked to the very dynamics of color confinement.
An important contribution towards a clear understanding of
this phenomenon could be to study whether it is stable
against the variation of external parameters: both
theoretical and numerical
studies~\cite{McLerran:2007qj,Hands:2006ve,Alles:2006ea,Conradi:2007kr}
have been performed in that sense for the case of QCD in
the presence of a finite density of baryonic matter.
In the present work we investigate the same issue for the case
of an external field, i.e. we will ascertain whether
deconfinement and chiral symmetry restoration do coincide
also in the presence of a constant chromomagnetic field.
The paper is organized as follows.
In Section 2 we review our method to treat a background field on the lattice.
In Sections 3 and 4 we discuss numerical results and finally, in Section 5, we
present our conclusions.
\section{External fields on the lattice}
\label{extfields}
In this section we review our method to study the dynamics
of lattice gauge theories in the presence of background fields. In particular,
we focus on the case of constant chromomagnetic fields.
\subsection{The method}
\label{themethod}
In Refs.~\cite{Cea:1997ff,Cea:1999gn} we introduced a lattice gauge invariant
effective action $\Gamma[\vec{A}^{\text{ext}}]$ for an external background
field $\vec{A}^{\text{ext}}$:
\begin{equation}
\label{Gamma}
\Gamma[\vec{A}^{\text{ext}}] = -\frac{1}{L_t} \ln
\left\{
\frac{{\mathcal{Z}}[\vec{A}^{\text{ext}}]}{{\mathcal{Z}}[0]}
\right\}
\end{equation}
where $L_t$ is the lattice size in time direction and
$\vec{A}^{\text{ext}}(\vec{x})$ is the continuum gauge potential of the
external static background field. ${\mathcal{Z}}[\vec{A}^{\text{ext}}]$ is the
lattice functional integral
\begin{equation} \label{Zetalatt}
{\mathcal{Z}}[\vec{A}^{\text{ext}}] =
\int_{U_k(\vec{x},x_t=0)=U_k^{\text{ext}}(\vec{x})} {\mathcal{D}}U \; e^{-S_W}
\,, \end{equation}
with $S_W$ the standard pure gauge Wilson action.
The functional integration is performed over the lattice links, but constraining
the spatial links belonging to a given time slice (say $x_t=0$) to be
\begin{equation}
\label{coldwall}
U_k(\vec{x},x_t=0) = U^{\text{ext}}_k(\vec{x})
\,,\,\,\,\,\, (k=1,2,3) \,\,,
\end{equation}
$U^{\text{ext}}_k(\vec{x})$ being the elementary parallel transports
corresponding to the external continuum
gauge potential $\vec{A}^{\text{ext}}(x)=\vec{A}^{\text{ext}}_a(x) \lambda_a/2$.
Note that the temporal links are not constrained.
${\mathcal{Z}}[0]$ is defined analogously, but adopting a zero
external field, i.e. with $U^{\text{ext}}_k(\vec{x})$ fixed to the
identity element of the gauge group.
In the case of a static background field which does not vanish at infinity we
must also impose that, for each time slice $x_t \ne 0$, spatial links exiting
from sites belonging to the spatial boundaries are fixed according to
eq.~(\ref{coldwall}). In the continuum this last condition amounts to the
requirement that fluctuations over the background field vanish at infinity.
The partition function defined in eq.~(\ref{Zetalatt}) is also known
as lattice
Schr\"odinger functional~\cite{Luscher:1992an,Luscher:1995vs} and in the
continuum corresponds to the Feynman kernel~\cite{Rossi:1980jf}. Note that, at
variance with the usual formulation of the lattice Schr\"odinger
functional~\cite{Luscher:1992an,Luscher:1995vs}, where a cylindrical
geometry is adopted, our lattice has a hypertoroidal geometry, i.e.
the first and the last time slice are identified and periodic boundary
conditions are assumed in the time direction, so that the constraint
given in eq.~(\ref{Zetalatt}) should actually read
$U_k(\vec{x},L_t)=U_k(\vec{x},0)=U^{\text{ext}}_k(\vec{x})$.
With this prescription, $S_W$
in eq.~(\ref{Zetalatt}) is allowed to be the standard Wilson action.
The lattice effective action $\Gamma[\vec{A}^{\text{ext}}]$ defined
by eq.~(\ref{Gamma}) is given in terms of the
lattice Schr\"odinger functional, which is invariant under time-independent gauge
transformations of the background
field~\cite{Luscher:1992an,Luscher:1995vs}; therefore it is gauge
invariant too. It corresponds to the vacuum
energy, $E_0[\vec{A}^{\text{ext}}]$,
in the presence of the background field, measured with respect to
the vacuum energy,
$E_0[0]$,
with $\vec{A}^{\text{ext}}=0$
\begin{equation}
\label{vacuumenergy}
\Gamma[\vec{A}^{\text{ext}}] \quad \longrightarrow \quad E_0[\vec{A}^{\text{ext}}]-E_0[0] \,.
\end{equation}
The relation above holds in the limit of infinite temporal lattice size $L_t \to \infty$;
on finite lattices this amounts to taking $L_t$ sufficiently large
to single out the ground-state contribution to the energy.
For finite values of $L_t$, however, having adopted the prescription
of periodic boundary conditions in time direction, the functional
integral in eq.~(\ref{Zetalatt}) can be naturally interpreted as the
thermal partition function
${\mathcal{Z_T}}[\vec{A}^{\text{ext}}]$~\cite{Gross:1981br}
in the presence of the background field $\vec{A}^{\text{ext}}$, with
the temperature given by $T=1/(a L_t)$.
In this case the gauge invariant
effective action in eq.~(\ref{Gamma}) is replaced by
the free energy functional defined as
\begin{equation}
\label{freeenergy}
{\mathcal{F}}[\vec{A}^{\text{ext}}] = -\frac{1}{L_t} \ln
\left\{
\frac{{\mathcal{Z_T}}[\vec{A}^{\text{ext}}]}{{\mathcal{Z_T}}[0]}
\right\} \; .
\end{equation}
When the physical temperature is sent to zero
the free energy functional reduces to the vacuum energy functional,
eq.~(\ref{Gamma}).
Let us now consider the extension of the above formalism to full QCD,
i.e. including dynamical fermions, which is relevant for the present
work. In the presence of dynamical fermions
the thermal partition functional
becomes~\cite{Cea:2004ux}
\begin{eqnarray}
\label{ZetaT}
\mathcal{Z}_T \left[ \vec{A}^{\text{ext}} \right] &=
&\int_{U_k(L_t,\vec{x})=U_k(0,\vec{x})=U^{\text{ext}}_k(\vec{x})}
\mathcal{D}U \, {\mathcal{D}} \psi \, {\mathcal{D}} \bar{\psi} e^{-(S_W+S_F)}
\nonumber \\&=& \int_{U_k(L_t,\vec{x})=U_k(0,\vec{x})=U^{\text{ext}}_k(\vec{x})}
\mathcal{D}U e^{-S_W} \, \det M \,,
\end{eqnarray}
where $S_F$ is the fermion action and $M$ is the fermionic matrix.
The spatial links are still constrained to values corresponding
to the external background field, whereas
the fermionic fields are not constrained.
The relevant quantity is still the free energy functional defined as
in eq.~(\ref{freeenergy}).
Actually, a direct numerical evaluation of the ratio of partition
functions appearing in eq.~(\ref{freeenergy}) turns out to be quite
difficult. Even though techniques have recently been developed to deal
with similar problems~\cite{deForcrand:2000fi,D'Elia:2006vg}, we adopt the more conventional
strategy~\cite{DelDebbio:1994sx,Cea:1997ff}
of computing instead
a susceptibility of the free energy functional, in particular its derivative
$F^\prime$
with respect to the inverse gauge coupling $\beta$, which can be
easily evaluated and is also more appropriate for the aim of the
present study. $F^\prime$ is defined as
\begin{equation}
\label{deriv}
F^\prime(\beta) =
\frac{\partial {\mathcal{F}}(\beta)}{\partial \beta} =
\left \langle
\sum_{x,\mu < \nu}
\frac{1}{3} \, \text{Re}\, {\text{Tr}}\, U_{\mu\nu}(x) \right\rangle_0 \\
- \left\langle \sum_{x,\mu< \nu} \frac{1}{3} \, \text{Re} \, {\text{Tr}} \, U_{\mu\nu}(x)
\right\rangle_{\vec{A}^{\text{ext}}} \,,
\end{equation}
where the subscripts on the averages indicate the value of the
external field.
Only unconstrained plaquettes are taken into account in the sum
in eq.~(\ref{deriv}).
Observing that $F[\vec{A}^{\text{ext}}] = 0$ at $ \beta = 0$, we
may eventually obtain $F[\vec{A}^{\text{ext}}]$ from
$F^{\prime}[\vec{A}^{\text{ext}}]$ by numerical integration:
\begin{equation}
\label{trapezu1}
F[\vec{A}^{\text{ext}}] = \int_0^\beta
F^{\prime}[\vec{A}^{\text{ext}}] \,d\beta^{\prime} \; .
\end{equation}
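As a simple illustration of eq.~(\ref{trapezu1}) (a sketch, not the analysis code actually used in this work), the cumulative integral of $F^\prime$ can be carried out with a trapezoidal rule; the $\beta$ grid and the $F^\prime$ values below are placeholders, not measured data.

```python
# Illustrative sketch: recover F(beta) from measurements of its derivative
# F'(beta) by trapezoidal integration, using F = 0 at beta = 0 as boundary value.
# The beta grid and fprime values below are made-up placeholders, not real data.

def integrate_trapezoid(betas, fprime):
    """Cumulative trapezoidal integral of fprime over the grid betas."""
    free_energy = [0.0]  # F = 0 at beta = 0
    for i in range(1, len(betas)):
        step = betas[i] - betas[i - 1]
        free_energy.append(free_energy[-1]
                           + 0.5 * (fprime[i] + fprime[i - 1]) * step)
    return free_energy

betas = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
fprime = [0.0, 0.1, 0.3, 0.9, 0.5, 0.2]   # placeholder F' measurements
print(integrate_trapezoid(betas, fprime))
```

The trapezoidal rule is exact for a linear integrand, which makes the routine easy to validate.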
\subsection{A constant chromomagnetic field on the lattice}
\label{constantfield}
Let us now define a static constant abelian chromomagnetic field on the lattice.
In the continuum the gauge potential giving rise to a static constant abelian chromomagnetic field
directed along spatial direction $\hat{3}$ and direction $\tilde{a}$
in the color space can be written in the following form:
\begin{equation}
\label{su3pot}
\vec{A}^{\text{ext}}_a(\vec{x}) =
\vec{A}^{\text{ext}}(\vec{x}) \delta_{a,\tilde{a}} \,, \quad
A^{\text{ext}}_k(\vec{x}) = \delta_{k,2} x_1 H \,.
\end{equation}
In SU(3) lattice gauge theory
the constrained lattice links (see eq.~(\ref{coldwall})) corresponding to
the continuum gauge potential eq.~(\ref{su3pot}) are (choosing $\tilde{a}=3$, i.e. abelian
chromomagnetic field along direction $\hat{3}$ in color space)
\begin{equation}
\label{t3links}
\begin{split}
& U^{\text{ext}}_1(\vec{x}) =
U^{\text{ext}}_3(\vec{x}) = {\mathbf{1}} \,,
\\
& U^{\text{ext}}_2(\vec{x}) =
\begin{bmatrix}
\exp(i \frac {a g H x_1} {2}) & 0 & 0 \\ 0 & \exp(- i \frac {a g H
x_1} {2}) & 0
\\ 0 & 0 & 1
\end{bmatrix}
\,.
\end{split}
\end{equation}
We will refer to this case as $T_3$ abelian chromomagnetic field,
which will be our choice in the present work. Of course it is possible
to choose various alternatives, like an abelian field along
the direction of the $T_8$ generator, or along different combinations
of $T_3$ and $T_8$.
Since our lattice has the topology of a torus,
the magnetic field turns out to be quantized
\begin{equation}
\label{quant} a^2 \frac{g H}{2} = \frac{2 \pi}{L_1}
n_{\text{ext}} \,, \qquad n_{\text{ext}}\,\,\,{\text{integer}}\,.
\end{equation}
In the following $n_{\text{ext}}$ will be used to parameterize the
external field strength.
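The constrained link of eq.~(\ref{t3links}) together with the quantization condition eq.~(\ref{quant}) can be sketched as follows (an illustrative snippet with hypothetical values of $n_{\text{ext}}$ and $L_1$; since the link is diagonal, only its diagonal entries are stored):

```python
# Sketch: diagonal entries of the constrained link U_2^ext of eq. (t3links)
# for a quantized T_3 abelian chromomagnetic field, a^2 gH/2 = 2 pi n_ext/L1.
import math, cmath

def u2_ext(x1, n_ext, l1):
    """Diagonal of U_2^ext at spatial coordinate x1 (lattice units)."""
    phase = 2.0 * math.pi * n_ext * x1 / l1   # = a g H x1 / 2
    return [cmath.exp(1j * phase), cmath.exp(-1j * phase), 1.0 + 0j]

u = u2_ext(x1=3, n_ext=1, l1=32)
assert all(abs(abs(z) - 1.0) < 1e-12 for z in u)   # unitary (diagonal phases)
assert abs(u[0] * u[1] * u[2] - 1.0) < 1e-12       # det U = 1: an SU(3) element
```

With the quantized field strength the link returns to the identity after a full winding $x_1 \to x_1 + L_1$, consistent with the toroidal geometry.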
\section{Numerical simulations and results}
\label{numericalresults}
We have studied full QCD dynamics with two flavors of staggered
fermions in the presence of a constant
chromomagnetic field. The simulations have been performed
on lattices $32^3\times8$ and $64\times32^2\times8$.
We used a slight modification of the standard HMC R-algorithm~\cite{Gottlieb:1987mq}
for two degenerate flavors of
staggered fermions with quark mass
$a m_q = 0.075$. According to our previous discussion, the links which are frozen
are not evolved during the molecular dynamics trajectory and the corresponding conjugate
momenta are set to zero.
We have collected about 2000
thermalized trajectories for each value of $\beta$.
Each trajectory consists of $125$ molecular dynamics
steps and has total length $1$. The computer simulations have
been performed using computer facilities at the
INFN apeNEXT computing center in Rome.
\subsection{The critical coupling}
As is well known,
the pure SU(3) gauge system undergoes a deconfinement
phase transition at a given critical temperature and this happens even in the unquenched
case (see Ref.~\cite{Laermann:2003cv} for an up-to-date review).
In our earlier studies~\cite{Cea:2002wx,Cea:2005td} we found that the critical coupling in pure
non-abelian gauge theories is shifted towards lower values by immersing the system in a constant chromomagnetic
background field: that means lower temperatures on lattices where the
temporal extent in lattice units is kept constant ($T = 1/(L_t a)$).
The main purpose of the present study is to verify if this effect survives
in the presence of dynamical fermions. We refer in the
following to the constant abelian background field
defined in Eqs.~(\ref{su3pot}) and (\ref{t3links}).
In order to evaluate the critical gauge coupling we measure
$F^\prime[\vec{A}^{\text{ext}}]$ (eq.~(\ref{deriv})),
the derivative of the free energy with respect to the gauge coupling $\beta$, as a function of $\beta$.
We found that $F^\prime[\vec{A}^{\text{ext}}]$ displays
a peak in the critical region
where it can be parameterized as
\begin{equation}
\label{peak-form}
F^{\prime}(\beta,L_t)
= \frac{a_1(L_t)}{a_2(L_t) [\beta - \beta^*(L_t)]^2 +1} \,.
\end{equation}
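A minimal sketch of how the parameterization eq.~(\ref{peak-form}) locates the critical coupling (in the actual analysis $a_1$, $a_2$ and $\beta^*$ are obtained from a fit to the measured $F^\prime$; the parameter values below are hypothetical):

```python
# Sketch: the parameterization of eq. (peak-form) has its maximum a1 at
# beta = beta* and half width 1/sqrt(a2); the critical coupling is read off
# as the peak position. The parameter values here are hypothetical.
def fprime_model(beta, a1, a2, beta_star):
    return a1 / (a2 * (beta - beta_star) ** 2 + 1.0)

a1, a2, beta_star = 40.0, 900.0, 5.4851
grid = [5.40 + 0.0001 * i for i in range(1600)]       # scan beta in [5.40, 5.56)
values = [fprime_model(b, a1, a2, beta_star) for b in grid]
beta_peak = grid[values.index(max(values))]
print(beta_peak)                                      # peak position ~ beta*
```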
In figure~\ref{Fig1}
\FIGURE[ht]{\label{Fig1}
\includegraphics[width=0.85\textwidth,clip]{figure_1.eps}
\caption{The derivative of the free energy eq.~(\ref{deriv}) with
respect to the gauge coupling (left axis, blue circles), and the
chiral condensate eq.~(\ref{chiralcond}) (right axis, red squares)
versus $\beta$. The vertical line represents the position of the
peak in the derivative of the free energy.}
}
we show an example of $F^\prime$ measured for
$n_{\text{ext}} = 1$ on a $32^3 \times 8$ lattice. In the same figure
we display also the chiral condensate
\begin{equation}
\label{chiralcond}
\langle \bar{\psi} \psi \rangle = \langle \frac{1}{V} \, \frac{N_f}{4} \,\, \text{Tr}\, M^{-1} \rangle
\end{equation}
and the numerical data point out that the peak in the derivative of
the free energy corresponds to the drop in the chiral condensate;
the latter is a signal of the transition leading to chiral symmetry
restoration.
In figure~\ref{Fig2}
\FIGURE[ht]{\label{Fig2}
\includegraphics[width=0.85\textwidth,clip]{figure_2.eps}
\caption{The derivative of the free energy as in figure~\ref{Fig1}
together with the absolute value of the Polyakov loop eq.~(\ref{Polyakov}).
Vertical dotted line as in figure~\ref{Fig1}.}
}
we compare the derivative of the free energy with
the absolute value of the Polyakov loop
\begin{equation}
\label{Polyakov}
P = \frac{1}{V_s} \sum_{\vec{x}} \frac{1}{3} \, {\text{Tr}} \prod_{x_4=1}^{L_t} U_4(x_4,\vec{x}) \; .
\end{equation}
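The definition eq.~(\ref{Polyakov}) can be illustrated on a toy configuration with diagonal, time-independent temporal links (an illustrative snippet, not a thermalized Monte Carlo configuration):

```python
# Sketch: Polyakov loop of eq. (Polyakov) for temporal links of the form
# diag(exp(i phi), exp(-i phi), 1), constant in time; the product over x4
# then just multiplies each phase by L_t.
import cmath

def polyakov_diag(phases_per_site, l_t):
    total = 0.0 + 0.0j
    for phi in phases_per_site:
        tr = cmath.exp(1j * phi * l_t) + cmath.exp(-1j * phi * l_t) + 1.0
        total += tr / 3.0
    return total / len(phases_per_site)

# completely ordered configuration (all temporal links = identity): P = 1
print(polyakov_diag([0.0] * 8, l_t=8))
```

A vanishing value, as expected in the confined phase, is obtained e.g. for $\phi = 2\pi/(3 L_t)$, for which the trace of the looped link is zero.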
We can see that also in this case the rise of the Polyakov loop
(expected at the
deconfining phase transition) corresponds
to the peak of the derivative of the free energy.
Moreover, in figure~\ref{Fig3}
\FIGURE[ht]{\label{Fig3}
\includegraphics[width=0.85\textwidth,clip]{figure_3.eps}
\caption{The derivative of the free energy as in figure~\ref{Fig1}
together with the susceptibility of the gauge action.}
}
the derivative of the free energy is
displayed together with the plaquette susceptibility
(susceptibility of the gauge action).
It is evident from this figure that, within our statistical
uncertainties, the peaks of the two quantities coincide.
Similar results are obtained by looking at the susceptibilities
of the Polyakov loop and of the chiral condensate.
From the above arguments we may draw some partial conclusions:
the critical
coupling of the phase transition can be located by looking at the peak
of the derivative of the free energy; moreover, as in the case
of zero external field and within statistical uncertainties,
a single transition seems to be present
where both deconfinement and chiral symmetry restoration take place.
It is worth noting that since the measurement of the derivative of
the free energy is simply related to the measurement of the gauge
plaquette we may have a good evaluation of the critical coupling
with a relatively small sample of measurements.
As stated before, our aim is to find whether the critical coupling depends on the strength of the applied constant
chromomagnetic field. To this purpose we have varied the strength of
the external field by tuning the parameter $n_{\text{ext}}$
and we have searched for the phase transition signalled by the
peak of the derivative of the free energy.
We have found that the critical coupling indeed shifts towards lower
values as the external field strength is increased.
In figure~\ref{Fig4}
\FIGURE[ht]{\label{Fig4}
\includegraphics[width=0.85\textwidth,clip]{figure_4.eps}
\caption{The derivative of the free energy eq.~(\ref{deriv}) versus $\beta$
for some values of the strength of the constant chromomagnetic field
parameterized (see eq.~(\ref{quant})) by the integer $n_{\text{ext}}$.
The yellow full circles correspond to runs performed with
different machines (APEmille) and algorithms as a check.}
}
we display the derivative of the free energy for
three values of $n_{\text{ext}}$,
obtained on a $32^3\times8$ lattice,
together with the fit curves given by eq.~(\ref{peak-form}).
As one can see, the position of the peaks shifts towards lower $\beta$ as the external field strength is increased.
In Table~\ref{table1}
\TABLE[t]{
\setlength{\tabcolsep}{0.9pc}
\centering
\caption[]{The values of the critical coupling versus the external field strengths.}
\begin{tabular}{ccc}
\hline
\hline
\multicolumn{1}{c}{lattice size}
& \multicolumn{1}{c}{$n_{\text{ext}}$}
& \multicolumn{1}{c}{$\beta_c$} \\
\hline
$\qquad 32 \times 32 \times 32 \times 8 \qquad $ & $\qquad 0 \qquad$ & 5.4851 (202) \\
$\qquad 64 \times 32 \times 32 \times 8 \qquad $ & $\qquad 1 \qquad$ & 5.4288 (128)\\
$\qquad 32 \times 32 \times 32 \times 8 \qquad $ & $\qquad 1 \qquad$ & 5.3808 (128)\\
$\qquad 32 \times 32 \times 32 \times 8 \qquad $ & $\qquad 2 \qquad$ & 5.3228 (90)\\
$\qquad 32 \times 32 \times 32 \times 8 \qquad $ & $\qquad 3 \qquad$ & 5.2888 (44)\\
$\qquad 32 \times 32 \times 32 \times 8 \qquad $ & $\qquad 4 \qquad$ & 5.2659 (48)\\
$\qquad 32 \times 32 \times 32 \times 8 \qquad $ & $\qquad 5 \qquad$ & 5.2680 (38)\\
\hline
\hline
\end{tabular}
\label{table1}
}
we report the values of the critical couplings versus
$n_{\text{ext}}$.
Figure~\ref{Fig5}
\FIGURE[ht]{\label{Fig5}
\includegraphics[width=0.85\textwidth,clip]{figure_5.eps}
\caption{The susceptibility of the absolute value of the Polyakov loop together with
the susceptibility of the chiral condensate. The vertical full line represents
the position of the peak in the derivative of the free energy for chromomagnetic
field strength $n_{\text{ext}}=5$. The vertical dotted lines give the error region.
Red and blue full lines are the best fits with the same parameterization as in eq.~(\ref{peak-form}) respectively to
the susceptibility of the (absolute value of the) Polyakov loop and
to that of the chiral condensate.}
}
displays instead the susceptibility of the
absolute value of the Polyakov loop together with the
susceptibility of the chiral condensate in the peak region for
the largest explored value of the external chromomagnetic field
($n_{\text{ext}} = 5$).
It is worthwhile to note that, as mentioned earlier,
the position of the peak of the Polyakov loop susceptibility ($\beta=5.2719(164)$)
and the position of the peak of the chiral condensate susceptibility
($\beta=5.2694(84)$) are consistent with each other
and with the position of the peak obtained from the derivative of the
free energy ($\beta=5.2680(38)$). This confirms the conclusion
stated above: the chiral and the deconfinement
transitions are shifted towards lower temperatures by the presence
of the external field in an equal way, i.e. they continue
to be coincident within statistical errors even for $n_{\text{ext}} \neq 0$.
The numerical results obtained so far lead us to conclude that the critical coupling depends
on the strength of the background constant chromomagnetic field. On the other hand for pure SU(3) gauge theory we
obtained~\cite{Cea:2003un} that the value of the critical coupling is not changed by a monopole
background field.
Similar results are expected in the presence of
dynamical fermions~\cite{Carmona:2002ty,Cea:2004ux,D'Elia:2005ta},
however we will verify this fact explicitly for the present case.
We recall here the definition
of an abelian monopole background field on the lattice; for more
details and physical results see ref.~\cite{Cea:2004ux}.
It is well known that for SU(3) gauge theory the maximal abelian group is
U(1)$\times$U(1), therefore we may introduce two independent types
of abelian monopoles using respectively the Gell-Mann matrices
$\lambda_3$ and $\lambda_8$ or their linear combinations.
In the following we shall consider the abelian monopole field related to
the $\lambda_3$ diagonal generator.
In the continuum the abelian monopole field is given by
\begin{equation}
\label{monop3su3}
g \vec{b}^a({\vec{x}}) = \delta^{a,3} \frac{n_{\mathrm{mon}}}{2}
\frac{ \vec{x} \times \vec{n}}{|\vec{x}|(|\vec{x}| -
\vec{x}\cdot\vec{n})} \,,
\end{equation}
where $\vec{n}$ is the direction of the Dirac string and,
according to the Dirac quantization condition, $n_{\text{mon}}$ is
an integer. The lattice links corresponding to the abelian
monopole field eq.~(\ref{monop3su3}) are (we choose $\vec{n}=\hat{x}_3$)
\begin{equation}
\label{t3linkssu3}
\begin{split}
U_{1,2}^{\text{ext}}(\vec{x}) & =
\begin{bmatrix}
e^{i \theta^{\text{mon}}_{1,2}(\vec{x})} & 0 & 0 \\ 0 & e^{- i
\theta^{\text{mon}}_{1,2}(\vec{x})} & 0 \\ 0 & 0 & 1
\end{bmatrix}
\, \\ U^{\text{ext}}_{3}(\vec{x}) & = {\mathbf 1} \,,
\end{split}
\end{equation}
with $\theta^{\text{mon}}_{1,2}(\vec{x})$ defined as
\begin{equation}
\label{thetat3su2}
\begin{split}
\theta^{\text{mon}}_1(\vec{x}) & = -\frac{a n_{\text{mon}}}{4}
\frac{(x_2-X_2)}{|\vec{x}_{\text{mon}}|}
\frac{1}{|\vec{x}_{\text{mon}}| - (x_3-X_3)} \,, \\
\theta^{\text{mon}}_2(\vec{x}) & = +\frac{a n_{\text{mon}}}{4}
\frac{(x_1-X_1)}{|\vec{x}_{\text{mon}}|}
\frac{1}{|\vec{x}_{\text{mon}}| - (x_3-X_3)} \,,
\end{split}
\end{equation}
where $(X_1,X_2,X_3)$ are the monopole coordinates,
$\vec{x}_{\text{mon}} = (\vec{x} - \vec{X})$.
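For illustration, the angles of eq.~(\ref{thetat3su2}) (with the lattice spacing set to $a=1$) and the corresponding diagonal link phases of eq.~(\ref{t3linkssu3}) can be sketched as follows; the site and monopole coordinates are arbitrary examples:

```python
# Sketch: evaluate the monopole link angles of eq. (thetat3su2) (lattice
# spacing a set to 1) and form the corresponding diagonal link phases of
# eq. (t3linkssu3). Monopole position and site coordinates are hypothetical.
import math, cmath

def monopole_angles(x, X, n_mon):
    """x, X: 3-vectors (site and monopole coordinates); returns (theta1, theta2)."""
    d = [x[i] - X[i] for i in range(3)]
    r = math.sqrt(sum(c * c for c in d))
    denom = r * (r - d[2])          # |x_mon| (|x_mon| - (x3 - X3))
    theta1 = -0.25 * n_mon * d[1] / denom
    theta2 = +0.25 * n_mon * d[0] / denom
    return theta1, theta2

t1, t2 = monopole_angles((5.0, 7.0, 2.0), (0.5, 0.5, 0.5), n_mon=10)
u1_phase = cmath.exp(1j * t1)       # upper-left entry of U_1^ext
u2_phase = cmath.exp(1j * t2)
assert abs(abs(u1_phase) - 1.0) < 1e-12 and abs(abs(u2_phase) - 1.0) < 1e-12
```

Along the positive $x_1$ axis only $\theta_2$ is nonzero, reflecting the $\vec{x}\times\vec{n}$ structure of eq.~(\ref{monop3su3}).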
The monopole background field is introduced
by constraining (see eq.~(\ref{t3linkssu3})) the spatial links exiting from the sites
at the boundary of the time slice $x_t=0$. As for the spatial links exiting from sites
at the boundary of the other time slices ($x_t \ne 0$), they are constrained according to eq.~(\ref{t3linkssu3}) as well.
We have performed numerical simulations
in the presence of an abelian monopole background field
with monopole charge $n_{\text{mon}}=10$
(again for QCD with two staggered flavors of mass $a m_q=0.075$).
The critical coupling has been located by looking at the
\FIGURE[ht]{\label{Fig6}
\includegraphics[width=0.85\textwidth,clip]{figure_6.eps}
\caption{The derivative of the free energy for the abelian monopole background field
eq.~(\ref{monop3su3}) in the peak region, together with (full line) the best fit
eq.~(\ref{peak-form}).}
}
peak of the derivative of the free energy (see figure~\ref{Fig6}). We find
\begin{equation}
\label{monopole_peak}
\beta_c = 5.4873(192) \,.
\end{equation}
We have also performed simulations
in the absence of any external chromomagnetic field,
finding that the susceptibilities of the Polyakov loop,
of the chiral condensate and of the plaquette display
a peak at
\begin{equation}
\label{beta_c_wth_f}
\beta_c = 5.495 (25) \;.
\end{equation}
Notably, this value of the critical coupling without external field is consistent, within our statistical uncertainty, with the value we get
when we consider an abelian monopole field as background field.
Therefore we can conclude that, as we found~\cite{Cea:2004ux} in the case of pure SU(3) gauge theory,
the abelian monopole field has no effect on the position of the critical coupling.
\section{Deconfinement temperature and critical field strength}
In previous studies~\cite{Cea:2005td} of pure lattice gauge
theories we looked for the possible dependence of the deconfinement
temperature on the strength of an external (chromo)magnetic field.
In particular we studied SU(2) and SU(3) l.g.t.'s both in (2+1)
and (3+1) dimensions and U(1) l.g.t. both in 4 dimensions and in
(2+1) dimensions. In fact, in the case of non-abelian gauge
theories, irrespective of the number of dimensions, we found that
the deconfinement temperature depends on the strength of the
constant chromomagnetic background field
(similar studies have been performed within a different framework
in refs.~\cite{Skalozub:1999bf,Demchik:2006qj}).
On the other hand, for U(1) gauge theory
we found no evidence for a dependence of
the critical coupling on the strength of an external magnetic
field. In particular, as is well known, 4-dimensional U(1) l.g.t.
undergoes a transition from a confined phase to a Coulomb phase: our
analysis showed~\cite{Cea:2005td} that the location of the
confinement-Coulomb phase transition is not changed by varying the
strength of an applied constant magnetic field. The same analysis has been performed for
compact quantum electrodynamics in (2+1) dimensions where it is
known~\cite{Polyakov:1976fu} that at zero temperature external
charges are confined for all values of the coupling and it is well
ascertained that the confining mechanism is the condensation of
magnetic monopoles which gives rise to a linear confining potential
and a non-zero string tension. Even in this case we verified that
the critical temperature for deconfinement does not depend on the
strength of an external magnetic field. As a consequence, we concluded that
the dependence of the critical coupling on the strength of the
external chromomagnetic field is a peculiar feature of non-abelian
theories.
The main aim of the present investigation is to
extend our study to non-abelian gauge theories in the presence
of dynamical fermions.
To this end, having determined in the previous section the critical couplings corresponding to different external field strengths, we now try to estimate the critical temperature:
\begin{equation}
\label{criticaltemp}
T_c = \frac{1}{a(\beta_c, m_q) L_t} \,,
\end{equation}
where $L_t$ is the lattice temporal size and $a(\beta_c, m_q)$
is the lattice spacing at the given critical coupling $\beta_c$.
In the case of pure SU(3) gauge theory, in order to evaluate $a(\beta_c)$, we used the string tension~\cite{Cea:2005td}.
We found that the values of the critical temperature versus the
square root of the external field strength can be fitted
by a linear parameterization. By extrapolating to zero external field strength we obtained:
\begin{equation}
\label{Tczerofield}
\frac{T_c(0)}{\sqrt{\sigma}} = 0.643(15)
\end{equation}
in very good agreement with the estimate $T_c/\sqrt{\sigma}=0.640(15)$ in the literature~\cite{Teper:1998kw}.
At the same time the intercept of the line with the zero temperature axis furnished an
estimate of the critical field strength (i.e. the limit value above which the
gauge system is in the deconfined phase even at very low temperatures)
\begin{equation}
\label{criticalfield}
\sqrt{gH_c} = (2.63 \pm 0.15) \sqrt{\sigma} = (1.104 \pm 0.063) {\text{ GeV}} = 6.26(2) \times 10^{19} \text{ Gauss}
\end{equation}
using for the physical value of the string tension $\sqrt{\sigma}=420 \text{ MeV}$.
The same analysis can be performed by means of the improved lattice scale introduced
in Refs.~\cite{Allton:1996kr,Edwards:1998xf}
\begin{equation}
\label{lambdaimproved}
\widetilde{\Lambda}=\frac{1}{a} f(g^2) (1+c_2 \hat{a}(g)^2 + c_4 \hat{a}(g)^4) \,, \;\; \hat{a}(g)\equiv \frac{f(g^2)}{f(g^2=1)}
\end{equation}
where $g$ is the gauge coupling, $c_2=0.195(16)$, $c_4=0.0562(45)$, $\widetilde{\Lambda}/\sqrt{\sigma}=0.0138(12)$
and $f(g^2)$ is the 2-loop scaling function
\begin{equation}
\label{asympscaling}
f(g^2) = (b_0 g^2)^{-b_1/2b_0^2} \,\, \exp\left(- \frac{1}{2 b_0 g^2} \right)
\end{equation}
with
\begin{equation}
\label{coeffs}
\begin{split}
&b_0 = \frac{1}{16 \pi^2} \left[ 11 \frac{N_c}{3} - \frac{2}{3} N_f \right] \\
&b_1 = \left(\frac{1}{16 \pi^2} \right)^2 \left[ \frac{34}{3} N_c^2 -
\left( \frac{10}{3} N_c + \frac{N_c^2 -1}{N_c} \right) N_f \right]
\, ;
\end{split}
\end{equation}
$N_c$ is the number of colors and $N_f$ is the number of flavors.
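The scaling function eq.~(\ref{asympscaling}) with the coefficients of eq.~(\ref{coeffs}) can be evaluated as in the following sketch (illustrative code, not the analysis scripts used for the paper); recall that for the Wilson action $g^2 = 2N_c/\beta$:

```python
# Sketch: two-loop scaling function f(g^2), eq. (asympscaling), with the
# universal coefficients b0, b1 of eq. (coeffs).
import math

def b0(nc, nf):
    return (11.0 * nc / 3.0 - 2.0 * nf / 3.0) / (16.0 * math.pi ** 2)

def b1(nc, nf):
    return ((34.0 / 3.0) * nc ** 2
            - (10.0 * nc / 3.0 + (nc ** 2 - 1.0) / nc) * nf) \
           / (16.0 * math.pi ** 2) ** 2

def f_twoloop(g2, nc=3, nf=2):
    B0, B1 = b0(nc, nf), b1(nc, nf)
    return (B0 * g2) ** (-B1 / (2.0 * B0 ** 2)) * math.exp(-1.0 / (2.0 * B0 * g2))

# a(beta) is proportional to f(g^2): larger beta (smaller g^2) gives a finer lattice
print(f_twoloop(6.0 / 5.50), f_twoloop(6.0 / 5.30))
```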
Obviously the analysis of pure gauge data in units of $\widetilde{\Lambda}$ gives results consistent with
the same analysis done using the scale of the string tension (see Ref.~\cite{Cea:2005td}). In particular
a linear extrapolation towards zero external field gives:
\begin{equation}
\label{estrapsu3}
\frac{T_c}{\widetilde{\Lambda}} = 45.05 (1.02)
\end{equation}
in agreement with $T_c/\widetilde{\Lambda}=46.38(1.09)$ obtained from
$T_c/\sqrt{\sigma}=0.640(15)$.
Moreover the critical field strength turns out to be
\begin{equation}
\label{criticalfield2}
\frac{\sqrt{gH_c}}{\widetilde{\Lambda}} = 209.6 \pm 3.07
\end{equation}
which corresponds to $\sqrt{gH_c}=1.21(11)\ \text{GeV}$, in agreement with the estimate $\sqrt{gH_c}=1.10(6)\ \text{GeV}$
given in eq.~(\ref{criticalfield}).
Let us turn now to the $N_f=2$ case. Also in this case we have to face the
problem of fixing the physical scale.
In order to reduce the systematic effects involved in this procedure
we will consider the ratios
\begin{equation}
\label{ratios}
\frac{T_c(gH)}{T_c} \qquad \text{vs.} \qquad \frac{\sqrt{gH}}{T_c}
\end{equation}
where $T_c$ is the critical temperature without external field.
The above quantities can be obtained once
the ratio of the lattice spacings at the respective couplings is
known. A rough estimate of this ratio can be inferred by using the
2-loop scaling function $f(g^2)$ given in eq.~(\ref{asympscaling})
for $N_f = 2$. A better estimate could be obtained, as in the quenched
case, by adopting an improved scaling function
$f(g^2) (1+c_2 \hat{a}(g)^2 + c_4 \hat{a}(g)^4)$. We do not know, however,
the values of $c_2$ and $c_4$ for $N_f = 2$. In a first
approximation we will fix $c_2$ and $c_4$ to their quenched
values given above. In figure~\ref{Fig7}
\FIGURE[ht]{\label{Fig7}
\includegraphics[width=0.85\textwidth,clip]{figure_7.eps}
\caption{The critical temperature $T_c(gH)$ at a given strength of the chromomagnetic
background field in units of the critical temperature $T_c$ without external field versus
the square root of the strength of the background field in the same units.
Red circles are obtained by adopting the improved scaling function.
The blue line is the linear best fit. The blue circle on the horizontal axis is
the linear extrapolated value for the critical background field.
Green squares are obtained by adopting the 2-loop scaling function.
The blue dashed line is the linear best fit. The blue square on the horizontal axis is
the linear extrapolated value for the critical field.}
}
the quantities
reported in eq.~(\ref{ratios}) are displayed for both choices
described above, i.e. 2-loop asymptotic scaling and improved scaling.
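This procedure can be sketched as follows (the quenched values of $c_2$ and $c_4$ are assumed for $N_f = 2$, as discussed above, and the $\beta_c$ values are taken from Table~\ref{table1}); the snippet is meant to reproduce only the qualitative trend, not the quoted numbers:

```python
# Sketch: ratio T_c(gH)/T_c(0) = a(beta_c(0))/a(beta_c(gH)) from the improved
# scaling function for N_f = 2, with c2, c4 frozen to their quenched values
# (an approximation, as explained in the text).
import math

NC, NF = 3, 2
B0 = (11.0 * NC / 3.0 - 2.0 * NF / 3.0) / (16.0 * math.pi ** 2)
B1 = ((34.0 / 3.0) * NC ** 2
      - (10.0 * NC / 3.0 + (NC ** 2 - 1.0) / NC) * NF) / (16.0 * math.pi ** 2) ** 2
C2, C4 = 0.195, 0.0562                     # quenched values, assumed for N_f = 2

def f(g2):
    return (B0 * g2) ** (-B1 / (2.0 * B0 ** 2)) * math.exp(-1.0 / (2.0 * B0 * g2))

def a_improved(beta):
    """Lattice spacing (in units of the Lambda parameter) with improved scaling."""
    g2 = 2.0 * NC / beta
    ahat = f(g2) / f(1.0)
    return f(g2) * (1.0 + C2 * ahat ** 2 + C4 * ahat ** 4)

beta_c0, beta_c5 = 5.4851, 5.2680          # Table 1: n_ext = 0 and n_ext = 5
ratio = a_improved(beta_c0) / a_improved(beta_c5)
print(ratio)     # below one: T_c decreases with the external field strength
```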
The main result of our investigation is clear: even in the presence of
dynamical quarks the critical temperature decreases with the strength
of the chromomagnetic field; moreover a linear fit to our data can be
extrapolated to very low temperatures, leading to the prediction of
a critical field strength above which strongly interacting matter
should be deconfined at all temperatures.
However, as can be appreciated from figure~\ref{Fig7}, the exact value
of the critical field strength is largely dependent on the choice
of the physical scale. In particular, assuming the 2-loop
scaling function we obtain
\begin{equation}
\label{gHTcaf}
\frac{\sqrt{gH_c}}{T_c} = 26.8 (5) \,,
\end{equation}
while assuming the improved scaling function we obtain
\begin{equation}
\label{gHTc}
\frac{\sqrt{gH_c}}{T_c} = 4.29 (10) \,.
\end{equation}
If the deconfinement temperature at zero field strength is taken
to be of the order of $170$ MeV, this corresponds to
$\sqrt{gH_c}$ in the range $0.7$--$4.5$ GeV.
It is clear that, in order to get a reliable estimate of the critical
field strength, suitable for phenomenological purposes,
one should get a better determination of the physical
scale of our lattices. Moreover, one should work at a fixed value
of the pion mass (as close as possible to its physical value)
as well: that is beyond the scope of the present investigation
and will be the subject of further studies.
To conclude this section we consider our measurements of the chiral condensate. In figure~\ref{Fig8}
\FIGURE[ht]{\label{Fig8}
\includegraphics[width=0.85\textwidth,clip]{figure_8.eps}
\caption{The chiral condensate eq.~(\ref{chiralcond}) versus $\beta$ for some values
of the constant chromomagnetic background field.
In the inset the region corresponding to the phase transition has been magnified.}
}
we display the chiral condensate versus the gauge coupling for several values of
the external field strength. Our numerical data show that, at least in the critical region, the value of the chiral condensate depends
on the strength of the applied field. Interestingly enough,
similar results for the chiral condensate have been found in ref.~\cite{Alexandre:2001pa}.
\section{Summary and Conclusions}
In this paper we have studied how a constant chromomagnetic field
perturbs the QCD dynamics.
In particular we focused on the theory at finite temperature
and we have found that, analogously to what happens in the pure
gauge theory~\cite{Cea:2005td}, the critical temperature
depends on the strength of the constant chromomagnetic background
field: it decreases as the external field is increased
and we have inferred,
as an extrapolation of our results,
that eventually the system is always deconfined for strong enough field
strengths. We estimated this critical field strength to be of the order
of 1~GeV, which is a typical QCD scale~\cite{Kabat:2002er}.
Notice that our estimate is of course affected by several systematic
uncertainties, such as the one related to the determination of the physical
scale: for that reason we consider it only as an order of magnitude
of the real expected critical strength. In fact a more reliable determination
usable for phenomenological purposes should be performed by working
at a fixed value of the pion
mass (as close as possible to its physical value); that
is beyond the scope of the present work and will be the subject
of future investigations.
By comparing the critical couplings determined from the
derivative of the free energy functional with those determined
from the susceptibility of the chiral condensate and of the Polyakov
loop we have ascertained that, even in the presence of an external
chromomagnetic background field and at least
up to the field strengths explored in the present work,
the critical temperatures
where deconfinement and chiral symmetry restoration take place
coincide within errors.
Another intriguing aspect we have found is the dependence of the
chiral condensate on the chromomagnetic field strength.
This last point deserves further studies.
In order to get a deeper
understanding of our results, we also plan to study the effect of the
background field on the equation of state of QCD.
\section{A note to the history of RF signal receiving and measurement techniques}
In the early days of radio-frequency (RF) engineering the available instrumentation
for measurements
was rather limited.
Besides elements acting on the heat developed by RF power (bi-metal contacts and
resistors with a very high temperature coefficient) only point/contact diodes,
and to some extent vacuum tubes, were available as signal detectors.
For several decades the slotted measurement line, see Section~\ref{VSWRsect},
was the only commonly used instrument to measure impedances
and complex reflection coefficients.
Around 1960 the tedious work with these coaxial and waveguide measurement lines became
considerably simplified with the availability of the vector network analyzer. At the same time the first
sampling oscilloscopes with 1~GHz bandwidth arrived on the market. This was possible due to progress in
solid-state (semiconductor) technology and advances in microwave elements (microstrip lines). Reliable,
stable and easily controllable microwave sources are the backbone of spectrum and network analyzers, as
well as sensitive (low-noise) receivers. The following sections focus on signal receiving devices
such as spectrum analyzers. An overview of network analysis is given later in Section~\ref{NetAnaSect}.
\section{Basic definitions, elements and concepts}
Before discussing key RF measurement devices, a brief overview of the most important components used in these devices and the related basic concepts are presented.
\subsection{Decibel}
Since the unit decibel (dB) is frequently used in RF engineering, a short introduction and definition of the terms are given. The decibel is a unit used to express relative differences between quantities, e.g.\ of signal power. It is expressed as the base-10 logarithm of the ratio of the powers between two signals:
\begin{equation}
P\text{ [dB]} = 10 \cdot \text{log}(P/P_{0}).
\label{dbpower}
\end{equation}
It is also common to express the signal amplitude in dB. Since power is proportional to the square of the signal amplitude, a voltage ratio in dB is expressed as:
\begin{equation}
V\text{ [dB]} = 20 \cdot \text{log}(V/V_{0}).
\label{dbvoltage}
\end{equation}
In Eqs. (\ref{dbpower}) and (\ref{dbvoltage}), $P_{0}$ and $V_{0}$ are the reference power and voltage,
respectively. A given value in dB is the same for power ratios as for voltage ratios. It is important to
note that there are no `power dB' or `voltage dB' as dB values always express a ratio.
Conversely, the absolute power and voltage can be obtained from dB values by
\begin{eqnarray}
P = P_{0} \cdot 10^{\frac{P\text{ [dB]}}{10}}, \\
V = V_{0} \cdot 10^{\frac{V\text{ [dB]}}{20}}.
\label{conversion}
\end{eqnarray}
The advantage of using a logarithmic scale as the unit of measurement is twofold:
\begin{itemize}
\item [i)] typical RF signal powers tend to span several orders of magnitude; and
\item [ii)] signal attenuation losses and gains can simply be computed by subtraction and addition.
\end{itemize}
Table \ref{dB} helps to familiarize the reader with signal ratios and the associated dB values.
\begin{table}[t]
\caption{Overview of common dB values and their conversion into power and voltage ratios}
\begin{center}
\begin{tabular}{ccc}
\hline
\hline
& \bfseries Power ratio & \bfseries Voltage ratio \\
\hline
$-$20 dB & 0.01 & 0.1 \\
$-$10 dB & 0.1 & 0.32 \\
$-$6 dB & 0.25 & 0.5 \\
$-$3 dB & 0.50 & 0.71 \\
$-$1 dB & 0.79 & 0.89 \\
0 dB & 1 & 1 \\
1 dB & 1.26 & 1.12 \\
3 dB & 2.00 & 1.41 \\
6 dB & 4 & 2 \\
10 dB & 10 & 3.16 \\
20 dB & 100 & 10 \\
$n \cdot 10$ dB & 10$^{n}$ & 10$^{n\text{/2}}$ \\
\hline
\hline
\end{tabular}
\end{center}
\label{dB}
\end{table}
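The conversion rules of Eqs. (\ref{dbpower})--(\ref{conversion}) and a few rows of Table \ref{dB} can be reproduced with a few lines of Python (a minimal sketch; the function names are ours):

```python
import math

def db_from_power_ratio(p, p0=1.0):
    """Eq. (dbpower): 10*log10(P/P0)."""
    return 10.0 * math.log10(p / p0)

def db_from_voltage_ratio(v, v0=1.0):
    """Eq. (dbvoltage): 20*log10(V/V0)."""
    return 20.0 * math.log10(v / v0)

def power_ratio_from_db(db):
    return 10.0 ** (db / 10.0)

def voltage_ratio_from_db(db):
    return 10.0 ** (db / 20.0)

# Reproduce a few rows of the table:
print(round(power_ratio_from_db(3), 2))    # ~2.0
print(round(voltage_ratio_from_db(3), 2))  # ~1.41
print(round(power_ratio_from_db(-6), 2))   # 0.25

# Gains and losses in a signal chain simply add in dB:
chain_db = 20 - 3 - 1  # amplifier gain, cable loss, connector loss
print(round(power_ratio_from_db(chain_db), 1))  # 16 dB -> ~39.8
```

The last lines illustrate point ii) above: the chain gain is obtained by simple addition in dB instead of multiplication of linear ratios.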
Absolute levels are expressed using a specific reference value; these dB systems are not based on SI units. Strictly speaking, the reference value should be included in parentheses when giving a dB value, e.g. +3 dB (1 W) indicates 3 dB at $P_{0} = 1$ W, thus 2 W. However, it is more common to add some typical reference values as letters after the unit, e.g.\ dBm defines dB using a reference level of $P_{0} = 1$ mW.
Thus, 0 dBm corresponds to $-$30 dBW, where dBW indicates a reference level of $P_{0} = 1$ W. Often a reference impedance of 50 $\Omega$ is assumed.
Other common units are:
\begin{itemize}
\item [i)] dBmV for small voltages with $V_{0}$ = 1~mV; and
\item [ii)] dBmV/m for the electric field strength radiated from an antenna with reference field strength $E_{0} = 1$ mV/m.
\end{itemize}
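Converting between dBm and absolute power or voltage is a frequent bookkeeping task; a minimal Python sketch (assuming the common 50 $\Omega$ reference impedance; function names are ours):

```python
import math

def dbm_to_watts(p_dbm):
    """dBm is referenced to P0 = 1 mW."""
    return 1e-3 * 10.0 ** (p_dbm / 10.0)

def watts_to_dbm(p_w):
    return 10.0 * math.log10(p_w / 1e-3)

def dbm_to_vrms(p_dbm, z0=50.0):
    """RMS voltage across the reference impedance (50 ohm assumed)."""
    return math.sqrt(dbm_to_watts(p_dbm) * z0)

print(dbm_to_watts(0))                  # 0 dBm = 1 mW
print(round(watts_to_dbm(1.0), 0))      # 1 W = +30 dBm (= 0 dBW)
print(round(dbm_to_vrms(0) * 1e3, 1))   # ~223.6 mV rms at 0 dBm into 50 ohm
```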
\begin{figure}[h]%
\centering
\includegraphics[width=0.7\linewidth]{Diode_equivalent_circuit}
\caption{Simplified equivalent circuit of a diode detector (w/o parasitic elements)}%
\label{circdiode}%
\end{figure}
\subsection{The RF diode}
\label{RFdiodesec}
One of the most important elements, found even today inside the most sophisticated RF measurement devices, is the fast RF diode or \textit{Schottky} diode. The basic metal--semiconductor junction has an intrinsically very fast switching time of well below a picosecond, provided that the geometric size, and hence the junction capacitance of the diode, is sufficiently small. However, the unavoidable and voltage-dependent junction capacitance will lead to limitations of the maximum operating frequency.
The simplified equivalent circuit of such a diode is depicted in Fig. \ref{circdiode} and
an example of a commonly used \textit{Schottky} diode is shown in Fig. \ref{diode1}.
\begin{figure}[t]%
\centering
\includegraphics[width=50mm]{RF_diode_1}%
\caption{A typical \textit{Schottky} diode. The RF input of this detector diode is on the left and the video output on the right (courtesy \textit{Agilent}).}%
\label{diode1}%
\end{figure}
One of the most important properties of any diode is its IV-characteristic,
which is the relation of the current passing the diode as a function of the applied voltage~\cite{vendelin}.
This relation is depicted graphically for two different types of diodes in Fig. \ref{kenn}.
It shows that, for small signals, the diode is a non-ideal commutator (in contrast to the ideal commutator shown in Fig. \ref{comm}). Note that it is not possible to apply large signals, since this kind of diode would burn out.
\begin{figure}[t]%
\centering
\includegraphics[width=70mm]{Kennlinie.pdf}%
\caption{Current as a function of voltage for different diode types (LBSD = low barrier \textit{Schottky} diode)}%
\label{kenn}%
\end{figure}
\begin{figure}[t]%
\centering
\includegraphics[width=60mm]{ideal_commutator.pdf}%
\caption{The current--voltage relation of an ideal commutator with threshold voltage}%
\label{comm}%
\end{figure}
Although \textit{Schottky} diodes with rather large power-handling capability exist (they can withstand more than 9\,kV and several tens of amperes), they are not suitable for microwave applications due to their large junction capacitance.
\begin{figure}[t]%
\centering
\includegraphics[width=80mm]{square_law_region.pdf}%
\caption{Relation between input power and output voltage}%
\label{squarelaw}%
\end{figure}
The region where the output voltage is proportional to the input power is called the square-law region (Fig. \ref{squarelaw}).
In this region the input power is proportional to the square of the input voltage and the output signal is proportional to the input power, hence the name square-law region.
The transition between the linear region and the square-law region is typically
between $-$10 and $-$20 dBm (Fig. \ref{squarelaw}).
For a more detailed description, see~\cite{src:oxford}.
There are some fundamental limitations when using diodes as detectors. The output signal of a diode (essentially DC or modulated DC if the RF is amplitude modulated) does not contain any phase information. In addition, the sensitivity of a diode limits the input level range to about $-$60 dB at best, which is not sufficient for many applications.
The minimum detectable power level of a RF diode is specified by the `tangential sensitivity', which typically amounts to $-$50 to $-$55 dBm for 10 MHz video bandwidth at the detector output \cite{thumm}.
To overcome these limitations, a more sophisticated method to utilize the RF diode is required. This method is presented in the next section.
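The square-law behaviour can be illustrated with the idealized Shockley diode equation (a rough numerical sketch; the saturation current and $nV_T$ below are arbitrary example values, and parasitic elements are ignored):

```python
import math

I_S = 1e-6    # assumed saturation current [A]
N_VT = 0.025  # ideality factor times thermal voltage [V]

def diode_current(v):
    """Shockley diode equation (idealized, no parasitic elements)."""
    return I_S * (math.exp(v / N_VT) - 1.0)

def detected_dc(amplitude, samples=10000):
    """Average (DC) diode current for a sinusoidal drive voltage."""
    total = 0.0
    for k in range(samples):
        v = amplitude * math.sin(2.0 * math.pi * k / samples)
        total += diode_current(v)
    return total / samples

# In the square-law region the DC output scales with amplitude^2,
# i.e. with input power: doubling the amplitude quadruples the output.
ratio = detected_dc(0.002) / detected_dc(0.001)
print(round(ratio, 2))  # ~4.0
```

The quadratic term of the exponential dominates for small drive levels, which is precisely the square-law region of Fig. \ref{squarelaw}.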
\subsection{Mixer}
\label{Mixersec}
For the detection of very small RF signals, a device with a linear response over a wide range of signal levels (from 0 dBm (= 1 mW) down to the thermal noise floor of $-$174 dBm/Hz = 4$\cdot 10^{-21}$ W/Hz) is highly desirable. An RF mixer provides these features by using one, two or four diodes in different configurations (Fig. \ref{mixers}).
A mixer is essentially a frequency multiplier with a very high dynamic range,
implementing in its simplest form the function
\begin{equation}
f_{1}(t) \cdot f_{2}(t) \text{\hspace{0.2cm}with } f_{1}(t) = \text{RF signal\hspace{0.2cm} and } f_{2}(t) = \text{ local oscillator (LO) signal}\hspace{0.2cm}
\label{mixer1}
\end{equation}
or more explicitly, for two sinusoidal signals
with amplitudes $a_{i}$ and frequencies $f_{i}$ ($i = 1, 2$),
\begin{equation}
a_{1}\cos(2\pi f_{1}t + \varphi) \cdot a_{2} \cos (2\pi f_{2}t) = \frac{1}{2}a_{1}a_{2}\left[ \cos (2\pi(f_{1} + f_{2})t + \varphi)
+ \cos (2\pi(f_{1} - f_{2})t + \varphi)\right].
\label{mixer2}
\end{equation}
Thus, we obtain a response at the intermediate-frequency (IF) port at the sum and difference frequencies of the local oscillator ($f_{\text{LO}} = f_{1}$) and RF ($f_{\text{RF}} = f_{2}$) signals.
Examples of different mixer configurations are shown in Fig. \ref{mixers};
they all use diodes to multiply the two applied signals, RF and LO.
These diodes operate like a switch, controlled by the frequency of the LO signal (Fig. \ref{mixprinc}).
The response of a mixer in the time
domain is depicted in Fig. \ref{mixerresponse}.
\begin{figure}[t]%
\centering
\includegraphics[width=80mm]{mixerconfig1.pdf}%
\caption{Examples of different mixer configurations}%
\label{mixers}%
\end{figure}%
\begin{figure}[tb]%
\centering
\includegraphics[width=100mm]{mixer_principle.pdf}%
\caption{Two circuit configurations interchanging with the frequency of the LO where the switches represent the diodes.}%
\label{mixprinc}%
\centering
\includegraphics[width=0.7\linewidth]{mixer_time_freq.pdf}%
\caption{Time
domain response of a mixer}%
\label{mixerresponse}%
\end{figure}%
The output signal is always in the ``linear regime'', provided that the mixer is not saturated with respect to the RF input signal. Note that, with respect to the LO signal, the mixer always has to be in saturation to ensure that the diodes operate almost as an ideal switch.
The phase of the RF signal is conserved in the output signal available at the IF output.
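The identity of Eq. (\ref{mixer2}), the heart of the mixing process, can be checked numerically; the tone frequencies, amplitudes and phase below are arbitrary example values:

```python
import math

F_RF, F_LO = 2.0e6, 1.8e6   # example tone frequencies [Hz]
A_RF, A_LO, PHI = 0.5, 1.0, 0.3

def mixer_out(t):
    """Ideal multiplying mixer: product of RF and LO signals."""
    return (A_RF * math.cos(2*math.pi*F_RF*t + PHI)
            * A_LO * math.cos(2*math.pi*F_LO*t))

def sum_diff(t):
    """Sum and difference frequencies with the RF phase preserved."""
    return 0.5 * A_RF * A_LO * (
        math.cos(2*math.pi*(F_RF + F_LO)*t + PHI)
        + math.cos(2*math.pi*(F_RF - F_LO)*t + PHI))

# The two expressions agree at arbitrary times, so the IF port carries
# F_RF + F_LO = 3.8 MHz and F_RF - F_LO = 0.2 MHz.
for t in (0.0, 1.23e-7, 7.7e-7):
    assert abs(mixer_out(t) - sum_diff(t)) < 1e-12
print("sum/difference identity verified")
```

Note that the RF phase $\varphi$ appears unchanged in both output terms, which is the phase-preservation property stated above.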
\subsection{Amplifier}
A linear amplifier, sometimes called a ``gain stage'',
augments the input signal by a factor which is usually indicated in decibels (dB).
The ratio between the output and the input signals is called the transfer function and its magnitude -- the voltage gain $G$ -- is measured in dB and given as
\begin{equation}
G\,[\text{dB}] = 20 \cdot \text{log} \left(\frac{{V}_{\text{RFout}}}{{V}_{\text{RFin}}}\right) \text{\hspace{0.2cm}or\hspace{0.2cm} } \frac{{V}_{\text{RFout}}}{{V}_{\text{RFin}}} = G\,[\text{lin}] = 10^{G\,[\text{dB}]/20}.
\label{gain}
\end{equation}
The circuit symbol of an amplifier is shown in Fig. \ref{ampli} together with its S-matrix.
\begin{figure}[H]%
\centering
\includegraphics[width=0.7\linewidth]{amplifier_symbol.pdf}%
\caption{Circuit symbol and S-matrix of an ideal amplifier}%
\label{ampli}%
\end{figure}
The bandwidth of an amplifier specifies the frequency range where it is usually operated,
see Fig.~\ref{3dbbandwidth}.
This frequency range is defined by the $-$3~dB points\footnote{%
The $-$3~dB points are the values left and right of a reference value,
typically the local maximum of the amplifier transfer function, and are 3 dB below that reference.}
of the magnitude response with respect to its maximum or nominal transmission gain,
dividing the magnitude transfer function of the amplifier into a pass-band
and stop-bands; at these points the transmitted power has dropped to one half of its maximum.
For an ideal amplifier the output signal would always be proportional to the input signal.
However, a real amplifier is non-linear: for larger signals the transfer characteristic
deviates from the linear behaviour that is valid for small-signal amplification.
When increasing the output power of an amplifier,
a point is reached where due to the non-linearities the small-signal gain
is reduced by 1~dB (Fig. \ref{1dB}).
This output power level defines the so-called 1~dB compression point, which is an important measure of the output power capability and thus of the dynamic range of the amplifier.
The transfer characteristic of an amplifier can be described in commonly used terms of RF engineering, i.e.\ the S-matrix, see Section~\ref{NetAnaSect}.
As implicitly contained in the S-matrix, both amplitude and phase information of any spectral component are preserved when passing through an ideal amplifier. For a real amplifier the element $G = {S}_{21}$ (transmission from port 1 to port 2) is not a constant, but a complex function of frequency. Also the elements $S_{11}$ and $S_{22}$ are not zero.
\begin{figure}[H]%
\centering
\includegraphics[width=65mm]{3db_band.pdf}%
\caption{Definition of the bandwidth}%
\label{3dbbandwidth}%
\end{figure}
\begin{figure}[H]%
\centering
\includegraphics[width=100mm]{IP_points_Witte_corr.pdf}%
\caption{Example for the 1 dB compression point \cite{witte}}%
\label{1dB}
\end{figure}
\subsection{Interception points of non-linear devices}
Important characteristics of non-linear devices are the intercept points. Here, only a brief overview is given; further information can be found in \cite{witte}.
The most relevant intercept point is the third-order intercept point (IP3).
Its importance derives from its straightforward determination,
plotting the input versus the output power on a logarithmic scale (Fig. \ref{1dB}).
The IP3 point is usually not measured directly, but is extrapolated from the data,
measured at much lower power levels in order to avoid overload
or damage of the device under test (DUT).
Applying two signals of closely spaced frequencies $f_{1}$ and $f_{2} = f_{1} + \Delta f$
simultaneously to the DUT, the third-order intermodulation products appear
at $2f_{2} - f_{1}$ (i.e. $\Delta f$ above $f_{2}$) and at $2f_{1} - f_{2}$ ($\Delta f$ below $f_{1}$).
This method is called the third-order intermodulation (TOI) measurement.
An example of an automated TOI measurement is shown in Fig.~\ref{ip3}.
\begin{figure}[t]%
\centering
\includegraphics[width=120mm]{ip3_auto.pdf}%
\caption{An example of an automated TOI measurement}%
\label{ip3}%
\end{figure}
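The extrapolation can be sketched with the standard textbook relation $\mathrm{OIP3} = P_{\mathrm{fund}} + \Delta/2$, which follows because the fundamentals rise with slope 1 and the IM3 products with slope 3 on a log-log plot; the power readings below are made-up examples, not data from Fig.~\ref{ip3}:

```python
# On a log-log plot the fundamentals rise with slope 1 and the IM3
# products with slope 3, so the extrapolated crossing point is
# OIP3 = P_fund + delta/2, with delta the measured spacing in dB
# between the fundamental and the IM3 line.

def oip3_dbm(p_fund_dbm, p_im3_dbm):
    delta = p_fund_dbm - p_im3_dbm
    return p_fund_dbm + delta / 2.0

# Made-up example readings (not device data):
p_fund = -10.0  # fundamental tone at the output [dBm]
p_im3 = -70.0   # third-order intermodulation product [dBm]
print(oip3_dbm(p_fund, p_im3))  # -10 + 60/2 = 20.0 dBm
```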
The transfer function of weakly non-linear devices can be approximated by a \textit{Taylor} expansion. Using $n$ higher order terms and plotting them together with an ideal linear device on a logarithmic scale results in two straight lines with different slopes ($x^{n} \stackrel{\text{log}}{\rightarrow} n \cdot \text{log }x$). Their intersection point is the intercept point of $n$th order. These points provide important information concerning the quality of non-linear devices.
In this context, the aforementioned 1 dB compression point of an amplifier is the intercept point of first order. For the measurement of the 1 dB compression point, see Section~\ref{1dBsect}.
Similar characterization techniques can also be applied for mixers, which, with respect to the LO signal, cannot be considered as weakly non-linear devices.
\subsection{The superheterodyne concept}
The word superheterodyne is composed of three parts: super (Latin: over), $\epsilon \tau \epsilon \rho \omega$ (hetero, Greek: different) and $\delta \upsilon \nu \alpha \mu \iota \sigma$ (dynamic, Greek: force), and can be translated as two forces superimposed\footnote{The direct translation (roughly) would be: another force becomes superimposed.}. Different abbreviations exist for the superheterodyne concept. In the USA it is often abbreviated by the simple word ``heterodyne'', and in Germany the shorter terms ``super'' or ``superhet'' are used.
\begin{figure}[bht]%
\centering
\includegraphics[width=120mm]{superhet.pdf}%
\caption{Schematic drawing of a superheterodyne radio receiver}%
\label{superhet}%
\end{figure}
A ``weak'' incident (RF) signal is subjected to non-linear superposition (i.e. mixing or multiplication)
with a ``strong'' sine-wave signal from a LO.
At the mixer output sum and difference frequencies of the RF and LO signals appear.
The LO signal can be tuned such that the IF output signal always remains at the same frequency,
or stays within a very narrow frequency band.
Therefore, a fixed-frequency bandpass with excellent transfer characteristics can be used,
which is cheaper and easier to realize than a variable bandpass of the same performance.
Also, gain stages (amplifiers) operating at the lower IF are of better quality and/or
more affordable.
A well-known application of this principle is any simple radio receiver (Fig. \ref{superhet}).
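The frequency bookkeeping of such a receiver can be sketched numerically, using the classic FM broadcast receiver as an example (the station frequency below is arbitrary):

```python
# For a fixed IF the LO is tuned so that |f_RF - f_LO| = f_IF; the
# "image" frequency is converted to the same IF and must be rejected
# before the mixer. Classic FM broadcast receiver numbers, with an
# arbitrary station frequency as example.

f_if = 10.7e6   # FM broadcast IF [Hz]
f_rf = 98.0e6   # wanted station [Hz]

f_lo = f_rf + f_if      # high-side LO injection
f_image = f_lo + f_if   # image frequency, 2*f_IF above the wanted signal

print(f_lo / 1e6)       # 108.7 MHz
print(f_image / 1e6)    # 119.4 MHz

# Both the wanted signal and the image satisfy |f - f_LO| = f_IF:
assert abs(abs(f_rf - f_lo) - f_if) < 1e-3
assert abs(abs(f_image - f_lo) - f_if) < 1e-3
```

The image lies $2 f_{\text{IF}}$ away from the wanted signal, which is why a higher IF relaxes the requirements on the input (image-rejection) filter.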
\section{Spectrum analyser}
\label{SpecAnasec}
RF spectrum analyzers can be found in virtually every control room of a modern particle accelerator. They are used for many aspects of beam diagnostics including Schottky signal acquisition and observation of RF signals. A spectrum analyzer is in principle very similar to a common superheterodyne broadcast receiver, except with respect to the choice of functions, change of parameters, and in general a more sophisticated, high quality design. It sweeps automatically through a specified frequency range, which corresponds to an automatic turning of the tuning knob on a radio. The signal is then displayed in the amplitude/frequency plane.
Originally, these kinds of measurement instruments were set up manually
and used a cathode ray tube (CRT) as display.
Nowadays, with the availability of low-cost, powerful digital electronics for control and
signal processing, basically every instrument can be remotely controlled.
A microprocessor permits fast and reliable settings of the instrument,
and an analog-digital-converter (ADC) in connection with digital signal processing hardware
performs the acquisition and pre-processing of the measured signal values.
The digital data processing enables extensive data treatment for error correction,
complex calibration routines and self tests, which are a great improvement for
RF signal measurements.
However, the user of such sophisticated systems may not always be aware of the
basic analogue signal path and processing that occur before the signals are digitized and
prepared for user interaction.
The basics of these analogue sections are discussed in the following.
In general, we distinguish two types of spectrum analyzers:
\begin{itemize}
\item the scalar spectrum analyzer (SA) and
\item the vector spectrum analyzer (VSA).
\end{itemize}
The SA provides information only on the amplitude of the applied signal,
while the VSA provides phase information as well.
\subsection{Scalar spectrum analyzer}
A common oscilloscope displays a signal in the amplitude-vs.-time format (time domain).
The SA follows a different approach and displays the RF signal in the frequency domain.
\begin{figure}[hbt]%
\centering
\includegraphics[width=0.7\linewidth]{amplitude_modulation.pdf}%
\caption{Example of amplitude modulation in time and frequency domains}%
\label{am}%
\end{figure}
One of the major advantages of the frequency-domain visualization lies in the
higher sensitivity to perturbations of periodic signals.
For example, a 2\% distortion of a sine-wave signal is already difficult to observe
on a time-domain display, but in the frequency domain on a logarithmic magnitude scale
the related ``harmonics'' (Fig. \ref{am}) are clearly visible
(here $-$40~dB below the main spectral line).
A very faint amplitude modulation (AM) of 10$^{-12}$ (power) on some sinusoidal signal
would be completely invisible on a time-domain trace, but can be displayed as two
sidebands 120 dB below the carrier in the frequency domain \cite{Schleifer}.
In the following we consider only ``classical'' SAs, based on a swept tuned band-pass filter
analysis (Fig. \ref{bp}), or utilizing the heterodyne receiver principle (Fig. \ref{sa}).
\begin{figure}[t]%
\centering
\includegraphics[width=0.7\linewidth]{tunable_BP.pdf}%
\caption{A tunable bandpass as a simple spectrum analyser (SA)}%
\label{bp}%
\end{figure}
The simplest form of a swept frequency spectrum analyzer is based on a tunable bandpass.
This may be a classical lumped element LC circuit or a YIG filter (YIG = yttrium iron garnet)
for frequencies $>$1~GHz.
The LC filter exhibits poor tuning, stability and resolution.
YIG filters are used in the microwave range (as preselectors) and for YIG oscillators.
Their tuning range is about one decade, with $Q$ values exceeding 1000.
For superior performance, the superheterodyne principle is applied basically in all commercial spectrum analyzers (Fig. \ref{superhet}).
\begin{figure}[t]%
\centering
\includegraphics[width=0.7\linewidth]{spectrum_analyzer.pdf}%
\caption{Block diagram of a spectrum analyzer}%
\label{sa}%
\end{figure}%
As already mentioned, the non-linear element (four-diode mixer or double-balanced mixer) delivers mixing products, like
\begin{equation}
f_{\text{signal}} = f_{\text{RF}} = f_{\text{LO}} \pm f_{\text{IF}}.
\label{eq:23a}
\end{equation}
Assuming an input frequency range $f_{\text{RF}}$ from 0 to 1~GHz for the spectrum analyzer shown in Fig. \ref{sa} and $f_{\text{LO}}$ ranging between 2 and 3~GHz, results in a frequency chart as shown in Fig. \ref{fchart}.
\begin{figure}[t]%
\centering
\includegraphics[width=0.4\linewidth]{freq_chart.pdf}%
\caption{Frequency chart of the SA of Fig. \ref{sa}, $f_{\text{IF}}$ = 2~GHz}%
\label{fchart}%
\end{figure}
Obviously, covering a wide range of input frequencies while rejecting the image response requires a sufficiently high IF. A similar situation occurs for AM and FM broadcast receivers (AM-IF = 455~kHz, FM-IF = 10.7~MHz). However, for a high IF (e.g. 2 GHz) a stable, narrow-band IF filter is very challenging to realize; therefore most SAs and high-quality
receivers use more than a single IF. Certain SAs have four different LOs, some fixed, some tunable. To achieve a large tuning range the first LO is variable, while for fine tuning (e.g. over a 20~kHz range) the third LO is varied.
Multiple mixing stages may also be necessary when downconverting to a lower IF (required when using high-$Q$ quartz filters) to ensure a good image response suppression of the mixers.
It can be demonstrated that the frequency of the $n^{\text{th}}$ LO must be higher than the (say) 80~dB bandwidth (BW) of the $(n - 1)^{\text{th}}$ IF band-pass filter. A disadvantage of multiple mixing is the possible generation of intermodulation lines if amplitude levels in the conversion chain are not carefully controlled.
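The frequency chart of Fig. \ref{fchart} follows directly from the mixing relation of Eq. (\ref{eq:23a}); a minimal sketch with the numbers quoted above:

```python
# Frequency plan of the SA discussed above: f_IF = 2 GHz, LO tuned
# from 2 to 3 GHz. The wanted response is f_RF = f_LO - f_IF; the
# image response f_LO + f_IF lands at 4-5 GHz and is easily rejected
# by a low-pass filter at the input.

F_IF = 2.0  # GHz

def responses(f_lo):
    """Input frequencies converted to the IF for a given LO setting."""
    return f_lo - F_IF, f_lo + F_IF  # (wanted, image)

for f_lo in (2.0, 2.5, 3.0):
    wanted, image = responses(f_lo)
    print(f_lo, wanted, image)
# The wanted response sweeps 0..1 GHz; the image stays 4 GHz higher.
```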
The requirements of a modern SA with respect to frequency generation and mixing are
\begin{itemize}
\item high resolution,
\item high stability (drift and phase noise),
\item wide tuning range,
\item no ambiguities
\end{itemize}
and, with respect to the amplitude response
\begin{itemize}
\item large dynamic range ($>$$100$ dB),
\item calibrated, stable amplitude response,
\item low internal distortions.
\end{itemize}
It is important to notice that the bandwidth $\Delta f$ of the IF band-pass filter
is linked to the sweep rate (or step width and rate when using a synthesizer):
\begin{equation}
\frac{\text{d}f}{\text{d}t} < (\Delta f)^{2}.
\label{eq:24a}
\end{equation}
In other words, the signal frequency has to remain stable within $\Delta T = 1/\Delta f$ for a given IF bandwidth $\Delta f$, which ensures steady-state conditions of the selected IF filter.
On many instruments the proper relation between $\Delta f$ and the optimum sweep rate is selected automatically, but it can always be altered manually (setting of the resolution bandwidth).
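Equation (\ref{eq:24a}) translates into a minimum sweep time for a given span; a rough sketch (real instruments apply an additional settling factor, here the assumed parameter $k$):

```python
# df/dt < (RBW)^2 implies a minimum time to sweep a given span that
# grows as span / RBW^2. Rough sketch; real instruments include an
# additional settling factor (here the assumed parameter k).

def min_sweep_time(span_hz, rbw_hz, k=1.0):
    return k * span_hz / rbw_hz**2

print(min_sweep_time(1e6, 1e3))  # 1 MHz span, 1 kHz RBW -> 1.0 s
print(min_sweep_time(1e6, 1e2))  # 10x smaller RBW -> 100x slower sweep
```

This quadratic penalty is why narrowing the resolution bandwidth by a factor of 10 slows the sweep by a factor of 100.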
Caution is advised when applying, but not necessarily displaying, two or more strong ($> 10\ \mbox{dBm}$) signals to the input. Third-order intermodulation products may appear (generated at the first mixer or amplifier) and could lead to misinterpretation of the signals to be analyzed.
Spectrum analyzers usually have a rather poor noise figure of 20--40 dB,
as they often do not use pre-amplifiers in front of the first mixer (dynamic range, linearity).
But, with a good pre-amplifier, the noise figure can be reduced to almost that of the pre-amplifier.
This configuration permits amplifier noise-figure measurements with a reasonable resolution
of about 0.5 dB.
The input of the amplifier to be tested is connected to the hot and cold terminations,
and the two corresponding traces on the SA display
are evaluated \cite{Schiek, Yip, Evans, Connor, Landstorfer}.
\subsection{Vector spectrum and fast Fourier transform analyzer}
The modern vector spectrum analyser (VSA) is essentially a combination of a two-channel
digital oscilloscope and a fast \textit{Fourier} transformation (FFT) based spectrum display.
The incoming signal is down-converted, band-pass (BP) filtered, and passed to an analog-to-digital converter (ADC) (generalized Nyquist for BP signals; $f_{\text{sample}} = 2 \ \cdot $ BW).
Fig.~\ref{vsa} shows a typical, simplified schematic of a modern VSA.
\begin{figure}[t]%
\centering
\includegraphics[width=145mm]{VSA1.pdf}%
\caption{Block diagram of a vector spectrum analyser}%
\label{vsa}%
\end{figure}
The digitized signal is split into I (in-phase) and Q (quadrature, 90 degree offset) components with respect to the phase of some reference oscillator. Without this reference, the term ``vector'' would be meaningless for a spectral component.
One of the great advantages of a VSA is that it easily allows the separation of AM and FM components.
An example of vector spectrum analyzer display and performance is given in Figs. \ref{ec1} and \ref{ec2}. Both figures were obtained during measurements of the electron cloud in the CERN Super Proton Synchrotron (SPS).
\begin{figure}[t]%
\centering
\includegraphics[width=0.7\linewidth]{Ecloud1.pdf}%
\caption{Single-sweep FFT display similar to a very slow scan on a swept spectrum analyser}%
\label{ec1}%
\end{figure}
\begin{figure}[tb]%
\centering
\includegraphics[width=0.7\linewidth]{Ecloud2.pdf}%
\caption{Spectrogram display containing about 200 traces as shown on the left-hand side in colour coding. Time runs from top to bottom.}%
\label{ec2}%
\end{figure}
\section{Noise basics}
The concept of ``noise'' was originally studied for audible sound caused by statistical variations of the air pressure with a wide flat spectrum (white noise). It is now also used for electrical signals, with the noise ``floor'' determining the lower limit of the signal transmission. Typical noise sources are: \textit{Brownian} movement of charges (thermal noise), variations of the number of charges involved in the conduction (flicker noise) and quantum effects (\textit{Schottky} noise, shot noise). Thermal noise is only emitted by structures with electromagnetic losses, which, by reciprocity, also absorb power. Pure reactances do not emit noise (emissivity = 0).
Different categories of noise have been defined:
\begin{itemize}
\item white, which has a flat spectrum,
\item pink, being low-pass filtered and
\item blue, being high-pass filtered.
\end{itemize}
In addition to the spectral distribution, the amplitude density distribution is also required in order to characterize a stochastic signal. For signals generated by superposition of many independent sources, the amplitude density has a \textit{Gaussian} distribution.
The noise power density delivered to a load by a black body is given by \textit{Planck's} formula:
\begin{equation}
\frac{N_{\text{L}}}{\Delta f} = hf\left(\text{e}^{hf/kT} - 1\right)^{-1},
\label{plank}
\end{equation}
where $N_{\text{L}}$ is the noise power delivered to the load, $h = 6.626 \cdot 10^{-34}$\,J\,s the \textit{Planck} constant and $k = 1.38065 \cdot 10^{-23}$\,J/K \textit{Boltzmann's} constant.
Equation (\ref{plank}) indicates a constant noise power density up to about 120 GHz (at 290 K, with 1\% error). Beyond that frequency the power density decays and there is no ``ultraviolet catastrophe'', i.e. the total integrated noise power is finite.
The radiated power density of a black body is given as
\begin{equation}
W_{\text{r}}(f,T) = \frac{hf^{3}}{c^{2}\left[\text{e}^{hf/kT} - 1\right]}.
\label{radpower}
\end{equation}
For $hf \ll kT$ the \textit{Rayleigh--Jeans} approximation of Eq. (\ref{plank}) holds:
\begin{equation}
N_{\text{L}} = kT \Delta f,
\label{rayleighjeans}
\end{equation}
where in this case $N_{\text{L}}$ is the power delivered to a matched load.
The noise voltage $v(t)$ of a resistor $R$ with no load is given as
\begin{equation}
\overline{v^{2}(t)} = 4 kT R\Delta f
\label{nlnoise}
\end{equation}
and the short-circuit current $i(t)$ by
\begin{equation}
\overline{i^{2}(t)} = 4 \frac{kT \Delta f}{R} = 4 kT G\Delta f,
\label{scircurr}
\end{equation}
where $v(t)$ and $i(t)$ are stochastic signals, and $G$ is $1/R$.
The linear averages $\overline{v(t)}, \overline{i(t)}$ vanish; the relevant quantities are the quadratic averages $\overline{v^{2}(t)}, \overline{i^{2}(t)}$.
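The \textit{Rayleigh--Jeans} noise power of Eq. (\ref{rayleighjeans}) and the noise voltage of Eq. (\ref{nlnoise}) can be evaluated numerically; a minimal sketch reproducing the familiar $-$174 dBm/Hz floor at 290 K:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant [J/K]

def noise_power_dbm(t_kelvin, bw_hz):
    """Available thermal noise power kT*df, expressed in dBm."""
    return 10.0 * math.log10(K_B * t_kelvin * bw_hz / 1e-3)

def noise_voltage_rms(t_kelvin, r_ohm, bw_hz):
    """Open-circuit noise voltage of a resistor, sqrt(4kTR*df)."""
    return math.sqrt(4.0 * K_B * t_kelvin * r_ohm * bw_hz)

# The familiar -174 dBm/Hz floor at T = 290 K:
print(round(noise_power_dbm(290.0, 1.0), 1))                 # ~ -174.0
# A 50-ohm resistor in 1 MHz bandwidth at room temperature:
print(round(noise_voltage_rms(290.0, 50.0, 1e6) * 1e6, 2))   # ~0.89 uV rms
```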
\begin{figure}[bht]%
\centering
\includegraphics[width=0.7\linewidth]{noisy_resistor.pdf}
\caption{Equivalent circuit of a noisy resistor terminated by a noiseless load }%
\label{resistor}%
\end{figure}
The available power (which is independent of $R$) is given by (see also Fig. \ref{resistor})
\begin{equation}
\frac{\overline{v^{2}(t)}}{4 R} = kT \Delta f,
\label{avpower}
\end{equation}
from which the spectral density function is defined as \cite{Schiek}
\begin{eqnarray}
\nonumber W_{\text{v}}(f) &=& 4kTR, \\
W_{\text{i}}(f) &=& 4kTG, \\
\nonumber \overline{v^{2}(t)} &=& \int_{f_{1}}^{f_{2}}W_{\text{v}}(f) \text{d}f.
\label{specdensfunc}
\end{eqnarray}
A noisy resistor may be composed of many elements (resistive network). Typically, it is made from a carbon grain structure, which has a homogeneous temperature. But if we consider a network of resistors with different temperatures, and hence with an inhomogeneous temperature distribution (Fig. \ref{noisyoneport}),
\begin{figure}[tb]%
\centering
\includegraphics[width=0.7\linewidth]{noisy_oneport.pdf}%
\caption{Noisy one-port with resistors of different temperatures \cite{Zinke, Schiek}}%
\label{noisyoneport}%
\end{figure}
the spectral density function becomes
\begin{equation}
W_{\text{v}} = \sum_{j}W_{\text{v}j} = 4kT_{\text{n}}R_{\text{i}},
\label{multires1}
\end{equation}
where $W_{\text{v}j}$ are the individual noise sources (Fig. \ref{equivsources}), $T_{\text{n}}$ is the total noise temperature, $R_{\text{i}}$ the total input resistance, and $\beta_{j}$ are coefficients indicating the fractional part of the input power dissipated in the resistor $R_{j}$. For simplicity it is assumed that all $W_{\text{v}j}$ are uncorrelated.
\begin{figure}[t]%
\centering
\includegraphics[width=0.7\linewidth]{equivalent.pdf}%
\caption{Equivalent sources for the circuit of Fig. \ref{noisyoneport}}%
\label{equivsources}%
\end{figure}
The relative contribution ($\beta_{j}$) of a lossy element to the total noise temperature is equal to the relative dissipated power multiplied by its temperature:
\begin{equation}
T_{\text{n}} = \beta_{1}T_{1} + \beta_{2}T_{2} + \beta_{3}T_{3} + \cdots = \sum_{j} \beta_{j}T_{j}.
\label{relcont}
\end{equation}
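Equation (\ref{relcont}) can be illustrated with a small numerical sketch. The resistor values and temperatures below are invented for the example; for a series connection the fraction of the input power dissipated in $R_j$ is simply $\beta_j = R_j/\sum_j R_j$:

```python
# Hypothetical series network: each resistor R_j at its own physical temperature.
# For a series connection the dissipated power splits proportionally to the
# resistances, so beta_j = R_j / sum(R_j) (sources uncorrelated, as in the text).
R = [50.0, 25.0, 25.0]   # Ohm (invented values)
T = [290.0, 77.0, 4.2]   # K (room temperature, liquid nitrogen, liquid helium)

R_total = sum(R)
beta = [r / R_total for r in R]
T_n = sum(b * t for b, t in zip(beta, T))   # Eq. (relcont)
print(f"beta = {beta}, T_n = {T_n:.1f} K")
```

The warm 50~$\Omega$ element dominates the result, even though the cold resistors make up half of the total resistance.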
A good example is the noise temperature of a satellite receiving system, whose key element is a directional antenna. The noise temperature of free space amounts roughly to 3 K. The losses in the atmosphere, an air layer of 10 to 20 km height, cause a noise temperature at the antenna output of about 10 to 50 K. This is well below our room temperature of 290 K.
So far, only pure resistors have been considered. Looking at complex impedances, it is evident that losses occur only through dissipation in Re($Z$). The available noise power is independent of the magnitude of Re($Z$), provided Re($Z$) $>$ 0. For Figs. \ref{noisyoneport} and \ref{equivsources}, Eq. (\ref{multires1}) still applies, except that $R_{\text{i}}$ is replaced by Re($Z_{\text{i}}$). However, in complex impedance networks the spectral power density $W_{\text{v}}$ becomes frequency dependent \cite{Zinke}.
The rules mentioned above apply to passive structures. A forward-biased Schottky diode (external power supply) has a noise temperature of about $T_{0}$/2 + 10\%. A biased Schottky diode is not in thermodynamic equilibrium and only half of the carriers contribute to the noise \cite{Schiek}. But, it represents a real 50~$\Omega$ resistor when properly forward biased. For transistors, in particular field-effect transistors (FETs), the physical mechanisms are somewhat more complicated. Noise temperatures of 50 K have been observed for a FET at 290 K physical temperature.
\subsection{Noise-figure measurements with the spectrum analyzer}
Consider an ideal (noiseless) amplifier, terminated at its input (and output) with a load at 290 K with an available power gain ($G_{\text{a}}$). At the output we measure \cite{Yip, HP}:
\begin{equation}
P_{\text{a}} = kT_{0}\Delta fG_{\text{a}}.
\label{output}
\end{equation}
For $T_{0} = 290$ K (sometimes 300 K), we obtain $kT_{0} = -174$~dBm/Hz
(dBm: decibel relative to 1 mW).
At the input we determine for a given signal $S_{\rm i}$ a certain signal-to-noise
ratio $S_{\rm i}/{N}_{\rm i}$, and at the output $S_{\rm o}/{N}_{\rm o}$,
from what the noise factor $F$ is defined as:
\begin{equation}
F = \frac{{S}_{\rm i}/{N}_{\rm i}}{{S}_{\rm o}/{N}_{\rm o}}
\label{noisefac}
\end{equation}%
and its logarithmic equivalent $\mathit{NF}$ follows as:
\begin{equation}
\mathit{NF} = 10 \log F
\label{noisefig}
\end{equation}
An ideal amplifier has $F = 1$ or $\mathit{NF} = 0$ dB. The noise temperature of this amplifier is 0 K, and signal and noise levels at the output are linearly increased by the gain.
A real amplifier adds some noise, which leads to a decrease in ${S}_{\rm o}/{N}_{\rm o}$ due to the added noise $N_{\text{a}}$:
\begin{equation}
F = \frac{N_{\text{a}}+N_{\rm i} G_{\text{a}}}{N_{\rm i} G_{\text{a}}}=
\frac{{N}_{\text{a}} + kT_{0}\Delta f G_{\text{a}}}{kT_{0}\Delta f G_{\text{a}}}.
\label{adnoise}
\end{equation}
For a linear system the minimum noise factor amounts to $F_{\text{min}} = 1$ or $\mathit{NF_{\text{min}}}=0$ dB, however, for non-linear systems one may experience a noise factor $F < 1$.
Noise factor and noise temperature are related by
\begin{equation}
T_{\text{e}} = \frac{N_{\text{a}}}{k\Delta f G_{\text{a}}} = T_0 (F-1)
\label{noisetemp}
\end{equation}
with $T_{\text{e}}$ being the equivalent temperature of a source impedance into a perfect, noise-free device
that would produce the same added noise $N_{\text{a}}$ \cite{HP}.
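The conversions between $\mathit{NF}$, $F$ and $T_{\text{e}}$ in Eqs.~(\ref{noisefig}) and (\ref{noisetemp}) are easily scripted; a sketch (function names chosen freely):

```python
import math

T0 = 290.0   # reference temperature, K

def nf_to_te(nf_db):
    """Equivalent noise temperature from noise figure in dB, Eq. (noisetemp)."""
    F = 10 ** (nf_db / 10)          # Eq. (noisefig) inverted
    return T0 * (F - 1)

def te_to_nf(Te):
    """Noise figure in dB from equivalent noise temperature."""
    return 10 * math.log10(1 + Te / T0)

print(f"NF = 3 dB  ->  T_e = {nf_to_te(3.0):.1f} K")
print(f"T_e = 50 K ->  NF = {te_to_nf(50.0):.2f} dB")
```

A 3~dB noise figure thus corresponds to roughly 289~K equivalent noise temperature, while the 50~K FET mentioned above has a noise figure of only about 0.7~dB.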
\begin{figure}[bht]%
\centering
\includegraphics[width=0.7\linewidth]{noisefigure_meas.pdf}%
\caption{Relation between source noise temperature $T_{\text{s}}$ and output power $P_{\text{out}}$ for an ideal (noise-free) and a real amplifier \cite{Yip, HP}.}%
\label{tempoutput}%
\end{figure}
The so-called $Y$-factor method is a popular way to measure the noise figure.
It is based on a switchable noise source with two calibrated values $N_1$ and $N_2$ for the noise temperature, e.g.\ $T_{\text{c}}$ and $T_{\text{h}}$, corresponding to ``cold'' and ``hot''.
Usually a dedicated noise diode is used as noise source, switched between non-bias and bias operation to provide the two noise temperatures.
The calibrated noise level is defined as \emph{excess noise ratio} ($\mathit{ENR}$):
\begin{equation}
\mathit{ENR}_{\text{dB}} = 10 \log \left(\frac{T_{\text{h}}-T_{\text{c}}}{T_0}\right)
\label{exessnoisedb}
\end{equation}
For most noise figure calculations the linear form is more useful:
\begin{equation}
\mathit{ENR} = 10^{\frac{\mathit{ENR}_{\text{dB}}}{10}}
\label{exessnoise}
\end{equation}
The noise source is connected to the amplifier or DUT to be analyzed, providing noise ``on'' ($N_2$) and ``off'' ($N_1$) conditions.
The ratio of these noise powers is called the \emph{$Y$-factor}:
\begin{equation}
Y = \frac{N_2}{N_1}
\label{yfactor}
\end{equation}
$Y$-factor and $\mathit{ENR}$ can be used to determine the noise slope of the DUT,
as illustrated in Fig.~\ref{tempoutput}.
The calibrated $\mathit{ENR}$ of the noise source represents a reference level for the input noise,
which allows the calculation of the internal (added) noise $N_{\text{a}}$ of the DUT:
\begin{equation}
N_{\text{a}} = k T_0 \Delta f G_{\text{a}} \left( \frac{\mathit{ENR}}{Y-1}-1 \right)
\label{intnoise}
\end{equation}
The SA, operating in automatized \emph{noise figure mode}, controls the noise diode, i.e.\ switching between ``hot'' (on) and ``cold'' (off) states, acquiring the DUT output signal, and computes -- based on the calibrated $\mathit{ENR}$ -- the total \emph{system noise factor}
\begin{equation}
F_{\text{sys}} = \frac{\mathit{ENR}}{Y-1}
\label{sysnoise}
\end{equation}
which includes noise contributions from all parts of the system.
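The $Y$-factor arithmetic of Eqs.~(\ref{exessnoise}), (\ref{yfactor}) and (\ref{sysnoise}) can be sketched in a few lines of Python; the $\mathit{ENR}$ and the hot/cold power readings below are invented for illustration:

```python
import math

ENR_dB = 15.0                  # calibrated ENR of the noise source (example value)
ENR = 10 ** (ENR_dB / 10)      # Eq. (exessnoise): linear ENR, about 31.6

# Invented measured noise powers in hot (on) and cold (off) states, linear units
N2, N1 = 4.162, 1.0
Y = N2 / N1                    # Eq. (yfactor)

F_sys = ENR / (Y - 1)          # Eq. (sysnoise)
NF_sys = 10 * math.log10(F_sys)
print(f"Y = {Y:.3f}, F_sys = {F_sys:.2f}, NF_sys = {NF_sys:.2f} dB")
```

With these example readings the system noise figure evaluates to about 10~dB; note that $F_{\text{sys}}$ here still contains the contribution of the instrument itself.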
In case the ``cold'' noise temperature $T_{\text{c}}\neq T_0= 290$~K, Eq.~\ref{sysnoise} becomes
\begin{equation}
F_{\text{sys}} = \frac{\mathit{ENR}-Y\,(T_{\text{c}}/T_0-1)}{Y-1}
\label{sysnoisecorr}
\end{equation}
For low $\mathit{ENR}$ noise sources, $T_{\text{h}}<10\,T_{\text{c}}$, an alternative equation holds:
\begin{equation}
F_{\text{sys}} = \frac{\mathit{ENR}\,(T_{\text{c}}/T_0)}{Y-1}
\label{sysnoisealt}
\end{equation}
If $Y$ is close to 1, i.e.\ $F_{\text{sys}}\gg \mathit{ENR}$, the system noise factor ``masks'' the noise generated by the noise source, making an accurate measurement difficult or impossible.
Therefore the $Y$-factor method is limited to noise figure measurements with $\mathit{NF}$ up to about 10~dB below the
$\mathit{ENR}$ of the noise source.
The literature explains a variety of other noise figure measurement methods \cite{Schiek,Connor,Landstorfer,Evans,Schiek2},
including the ``3~dB'' method \cite{HP} for the measurement of high noise figure devices,
where the $Y$-factor method is limited.
The noise figure of a cascade of amplifiers is given as \cite{Zinke, Schiek, Yip, HP, Schiek2}
\begin{equation}
F_{\text{total}} = F_{1} + \frac{F_{2} - 1}{G_{\text{a}1}} + \frac{F_{3} - 1}{G_{\text{a}1}G_{\text{a}2}} + \cdots .
\label{cascade}
\end{equation}
As Eq.~(\ref{cascade}) shows, the first amplifier in a cascade has a dominant effect on the total (system) noise figure, provided $G_{\text{a}1}$ is not too small and $F_{2}$ not too large. In order to select the best amplifier from a number of different units to be cascaded, the noise measure $M$
\begin{equation}
M = \frac{F - 1}{1 - (1/G_{\text{a}})}
\label{noisemeasure}
\end{equation}
helps to select the optimal unit: the amplifier with the smallest $M$ should be selected as the first unit in the cascade \cite{HP}.
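The cascade rule of Eq.~(\ref{cascade}) and the noise measure of Eq.~(\ref{noisemeasure}) can be sketched as follows; the two amplifier specifications are invented example values:

```python
def friis(F, G):
    """Total noise factor of a cascade, Eq. (cascade).
    F: linear noise factors of all stages; G: available gains of all but the last."""
    total, gain = F[0], 1.0
    for Fj, Gj_prev in zip(F[1:], G):
        gain *= Gj_prev
        total += (Fj - 1) / gain
    return total

def noise_measure(F, G):
    """Noise measure M of a single stage, Eq. (noisemeasure)."""
    return (F - 1) / (1 - 1 / G)

# Invented example: two amplifiers that could be cascaded in either order
F_A, G_A = 2.0, 10.0    # NF = 3 dB, gain = 10 dB
F_B, G_B = 4.0, 100.0   # NF = 6 dB, gain = 20 dB

print("A then B:", friis([F_A, F_B], [G_A]))   # 2 + 3/10  = 2.3
print("B then A:", friis([F_B, F_A], [G_B]))   # 4 + 1/100 = 4.01
print("M_A =", noise_measure(F_A, G_A), " M_B =", noise_measure(F_B, G_B))
```

Amplifier A has the smaller noise measure and indeed yields the lower total noise factor when placed first, as stated above.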
\section{Introduction to network analysis and S-parameters}
\label{NetAnaSect}
One of the most common measurement tasks in the field of RF engineering is the analysis of circuits and
electrical networks. Such networks can be a simple one-port (two-pole), containing only a few passive
components (resistors, inductors and capacitors) or they may be complex units, consisting of passive, active and/or
non-linear components with several input and output ports.
A vector network analyzer (VNA) is
one of the most versatile and valuable pieces of measurement equipment used in a
RF laboratory
or particle accelerator control room.
The network analysis is performed by exciting
the device under test (DUT) with a well-defined input signal in terms of frequency and amplitude,
and recording the response of the network, for each frequency step as complex value
of the reflection and/or transmission coefficients.
These are the coefficients of the scattering parameters (S-Parameter),
the properties to characterize a DUT at RF and microwave frequencies.
The best commercially available network analyzers can cover a frequency
range of ten (and more) orders of magnitude (from a few Hz to many GHz),
with a resolution down to 0.1~Hz.
In the following sections, scalar and vector network analyzers are introduced and measurement techniques for the
determination of S-parameters of networks are discussed.
S-parameters are basically defined only for linear networks. In the real world, many DUTs are at least
weakly non-linear (e.g.\ mixers, or active elements such as amplifiers). For the analysis of these devices
certain approximations or extensions of the definitions are required~\cite{src:xPar}.
Another interesting application is the determination of the beam transfer function (BTF), where the DUT is
a circulating particle beam in an accelerator.
\begin{figure}[t]
\begin{center}
\subfloat[ ]{
\includegraphics[width=0.35\textwidth]{1port_waves.pdf}
\label{fig:1portWave}
}
\subfloat[ ]{
\includegraphics[width=0.3\textwidth]{1port_VI}
\label{fig:1PortVI}
}
\caption{Wave quantities of a one-port (with two poles) and impedance $Z_L$:
(a) incident ($a_1$) and reflected ($b_1$) wave; (b) relation of $a_1$ and $b_1$ to $V_1$ and $I_1$.}
\end{center}
\end{figure}
\subsection{One-port networks}
In RF engineering, \emph{wave quantities} are preferred over currents
or voltages for the characterization of RF circuits.
We can distinguish between incident (a) and reflected waves (b).
The incident wave travels from a source to the DUT -- the reflected wave travels in the opposite direction.
This terminology is preferred, because in RF engineering the linear geometrical
dimensions of a circuit often are larger than 10\% of the corresponding free-space wavelength.
Wave functions are defined in time and \emph{spatial} coordinates, and for this reason are preferred to voltages and currents, which typically are only defined in time.
This also requires the
definition of a reference plane, i.e.\ the physical location in space to which the measurement refers.
Without this reference plane, e.g.\ the phase of the reflection coefficient would be undefined,
which would make vectorial measurements impossible.
Of course, a mathematically correct description of the DUT in terms of voltages and currents still
holds, and also will return correct results, but working with wave quantities turns out to be much
more convenient in practice.
Both network description methods -- if correctly applied -- have no fundamental limitation,
e.g. S-parameters can be used at very low frequencies and
voltage and current descriptions can also be used at very high frequencies.
Both methods are fully equivalent, for any frequency; the results are mutually convertible.
This fact is expressed by conversion rules, namely S-parameters can be converted
into impedances and vice versa.
The interface of the DUT to the outside world is formed by one or more \emph{pole pairs}, which are
commonly referred to as \emph{ports}. A device with one pair of poles (as in Fig.~\ref{fig:1portWave}) is
defined as one-port, where one incident ($a_1$) and one reflected ($b_1$) wave can propagate simultaneously.
The index of the wave quantities represents the number of the port.
The wave quantities can be determined from the voltage and current at the port. They are
related to each other
\Equation{equ:waveVI}{a_1 = \frac{V_1 + I_1 Z_0}{2 \sqrt{Z_0}}, \quad \quad
b_1 = \frac{V_1 - I_1 Z_0}{2 \sqrt{Z_0}},}
where $V_1$ and $I_1$ represent the voltage and current
respectively at the port as depicted in Fig.~\ref{fig:1PortVI}. $Z_0$ is an arbitrary reference impedance
(often, but not necessarily always, the characteristic impedance $Z_0 = Z_{\rm G} = 50~\Omega$ of the system).
The wave quantities have the dimension of $\sqrt{W}$ (see~\cite{FritzSpara}).
This normalization is important for the conservation of energy.
The power traveling towards the DUT
is calculated by $P_{\rm inc} = |a|^2$, the reflected power by $|b|^2$.
It is important to note that this
definition is mainly used in the USA -- in European notation,
the incident power is usually calculated by
$P_{\rm inc} = 0.5 |a|^2$.
These conventions have no impact on the calculation of S-parameters and only need to be
considered when the absolute power is of interest.
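Equation~\eqref{equ:waveVI} and the power relations can be cross-checked numerically. The phasor values below are arbitrary; the check uses the (US-convention, RMS-phasor) identity that $|a_1|^2 - |b_1|^2$ equals the power actually delivered to the load, Re($V_1 I_1^*$):

```python
import cmath

Z0 = 50.0         # real reference impedance, Ohm
ZL = 75 + 25j     # invented complex load impedance
V1 = 1.0 + 0.0j   # assume a 1 V (RMS) phasor across the port
I1 = V1 / ZL      # current flowing into the load

a1 = (V1 + I1 * Z0) / (2 * cmath.sqrt(Z0))   # Eq. (equ:waveVI)
b1 = (V1 - I1 * Z0) / (2 * cmath.sqrt(Z0))

# Net power carried towards the load: |a1|^2 - |b1|^2 = Re(V1 * conj(I1))
P_wave = abs(a1) ** 2 - abs(b1) ** 2
P_vi = (V1 * I1.conjugate()).real
print(P_wave, P_vi)
```

Both expressions agree, illustrating the equivalence of the wave and the voltage/current descriptions mentioned above.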
The reflection coefficient $\Gamma$ represents the ratio between the incident wave and the reflected wave of a
specific port. It is defined as
\Equation{equ:gamma}{
\Gamma = \frac{b_1}{a_1}.}
By substitution with Eq.~\eqref{equ:waveVI}, we can find a relation between the complex
(load) impedance $Z_L$ of a one-port and its complex reflection coefficient $\Gamma$:
\Equation{equ:zgamma}{
\Gamma = \frac{Z_L - Z_0}{Z_L + Z_0}.}
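Equation~\eqref{equ:zgamma} and its inverse, $Z_L = Z_0 (1+\Gamma)/(1-\Gamma)$, are easily scripted; a sketch (function names chosen freely, load values arbitrary):

```python
def z_to_gamma(ZL, Z0=50.0):
    """Reflection coefficient from load impedance, Eq. (equ:zgamma)."""
    return (ZL - Z0) / (ZL + Z0)

def gamma_to_z(gamma, Z0=50.0):
    """Inverse relation: load impedance from reflection coefficient."""
    return Z0 * (1 + gamma) / (1 - gamma)

# A few instructive cases
print(z_to_gamma(50.0))        # matched load   -> 0
print(z_to_gamma(0.0))         # short circuit  -> -1
print(z_to_gamma(1e12))        # ~open circuit  -> ~ +1
print(z_to_gamma(100 + 50j))   # complex load   -> complex Gamma
```

The three limiting cases (match, short, open) give $\Gamma = 0, -1, +1$, and the two functions are exact inverses of each other for Re($Z_L$) $>$ 0.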
\subsection{Two-port networks}
For electrical networks with two ports (e.g. attenuators, amplifiers) we find more quantities
to be measured.
Besides the reflection coefficients for each port, the transmission in forward and reverse
directions also needs to be characterized.
We now require the definition of the scattering parameters (S-parameters) for
two ports.
The idea is to describe how the incident energy on one port is scattered by the network
and exits through the other ports.
All possible signal paths through a two-port are shown in Fig.~\ref{fig:2port2}.
A two-port has four complex, frequency-dependent scattering parameters:
\Equation{}{
S_{11} = \frac{b_1}{a_1}, \quad\quad S_{12} = \frac{b_1}{a_2}, \quad\quad S_{21} = \frac{b_2}{a_1}, \quad\quad
S_{22} = \frac{b_2}{a_2}.}
Here $S_{11}$ and $S_{22}$ are equal to the reflection coefficients $\Gamma$ of their respective
ports -- but \emph{only} under the condition that the corresponding other port is terminated in its
characteristic impedance.
$S_{21}$ and $S_{12}$ are the forward and reverse transmission coefficients,
respectively.
The first index of the S-parameter defines at which port the outgoing wave is observed,
the second index defines at which port the network is excited.
This leads to the seemingly counterintuitive situation
that for forward transmission the corresponding S-parameter is $S_{21}$, not $S_{12}$.
The S-parameters are measured following exactly the same definition.
The internal source of the network analyzer excites
an incident wave on port one, namely $a_1$.
Now $b_1$ and $b_2$, the outgoing waves from the DUT, are
measured, which allows the determination of $S_{11}$ and $S_{21}$ (provided that port one
and port two are terminated with their characteristic impedances).
\Figure{0.4}{2port2}{fig:2port2}{All possible S-parameters of a two-port network}{[t]}
It is very important to \emph{always} terminate all ports of the DUT with their respective characteristic impedances.
In many situations this is $Z_0$, but there are cases where the characteristic impedance is different between port
one and port two, e.g.\ a transformer with a turns ratio of two, leading to an impedance transformation by
a factor of four. In this case the characteristic impedance would be for port one $50~\Omega$ and for port two
$12.5~\Omega$.
The termination prevents unwanted reflections and ensures the DUT is only excited by a single
incident wave. For practical S-parameter measurements this implies that any port of the DUT needs to be
connected to a matched load corresponding to the characteristic impedance of this port. This rule includes
in particular the port connected to the VNA output port, or in other words, the generator impedance has
also to match the impedance of the DUT. For example,
the analysis of a DUT with $25~\Omega$ characteristic impedance is not simply straightforward on a $50~\Omega$ network analyzer, unless special care is taken.
However, on a modern VNA a special calibration procedure allows the modification of the characteristic impedance of each VNA port to any value (within
a reasonable range from $> 5~\Omega$ to $< 500~\Omega$), and in this way to adapt to the requirements of the DUT.
However, the situation of the termination of ports becomes more complicated for the
characterization of beam elements, like beam pickups, kickers, and accelerating structures,
where strictly speaking the beam (waveguide) ports also need to be terminated in their
characteristic impedance.
Often simple solutions can be applied, like microwave absorbing foam, to avoid unwanted
reflections from open beam ports.
The S-parameters are an intrinsic property of the DUT and not a function of the incident power used for the
measurement (condition of linearity). Obviously, the S-parameters measured shall be independent of the
instrumentation used to perform the measurement.
Once all $n^2$ S-parameters for a given $n$-port network are measured, the properties of this network can be
described by a set of linear equations. For incident waves $a_1$ and $a_2$
of arbitrary phase and magnitude on a two-port, the outgoing or scattered waves $b_1$ and $b_2$ can be
determined
\begin{eqnarray}\label{equ:slinear}
b_1 = S_{11} a_1 + S_{12} a_2,\\
b_2 = S_{21} a_1 + S_{22} a_2.\nonumber
\end{eqnarray}
These equations can be written in matrix format, for convenience:
\begin{eqnarray}\label{equ:smatrix}
\vec{b} &=& \textbf{S} \; \vec{a}\\
\left[
\begin{array}{c}
b_1\\
b_2
\end{array}
\right]&=&
\left[
\begin{array}{cc}
S_{11} & S_{12}\\
S_{21} & S_{22}
\end{array}
\right]
\left[
\begin{array}{c}
a_1\\
a_2
\end{array}
\right].
\end{eqnarray}
The S-matrix is a linear model of the DUT. Its diagonal elements represent the reflection
coefficients of each port. The remaining elements characterize all possible signal transmission
paths between the ports. S-parameters are in general complex and a function of frequency. The set of
linear equations given by the S-matrix must be solved for a single frequency at a time. S-parameters are
typically acquired over a certain frequency range (span) for a number $N$ of discrete, equidistant frequency steps. With $N$ data
points, the system of equations has to be solved $N$ times.
A discussion of the general properties of the S-matrix can be found in~\cite{FritzSpara}.
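The linear model of Eqs.~\eqref{equ:slinear} and \eqref{equ:smatrix} can be sketched with numpy; the S-matrix entries below are invented single-frequency values for an amplifier-like two-port:

```python
import numpy as np

# Invented two-port S-matrix at one frequency: small input/output mismatch,
# little reverse isolation leakage, complex forward gain
S = np.array([[0.1 + 0.05j, 0.01],
              [3.0 - 1.0j,  0.2]])

a = np.array([1.0, 0.0])   # excite port 1 only; port 2 matched, so a2 = 0
b = S @ a                  # Eq. (equ:smatrix): b = S a

# With a2 = 0: b1/a1 = S11 and b2/a1 = S21, as stated in the text
print(b[0], b[1])
```

This also makes the termination rule explicit: only with $a_2 = 0$ (port 2 terminated in its characteristic impedance) do the measured ratios reduce to $S_{11}$ and $S_{21}$.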
\section{Scalar network analysis}
A scalar network analyzer measures only the amplitude, i.e.\ the magnitude of a -- reflected or transmitted -- signal, the
phase is not available. Consequently, only the absolute value (the magnitude) of the complex
S-parameters can be obtained.
Today, scalar network analyzers are basically obsolete; however, some of their key components and circuits are also found in VNAs, which makes this instrument a methodical way to introduce the concept of network analysis.
\Figure{0.7}{scalar1}{fig:scalarSimple}{A simple measurement set-up for the scalar transmission coefficient ($|S_{21}|$)}{[t]}
A simple network analysis set-up, as it was used more than 50 years ago, is shown in
Fig.~\ref{fig:scalarSimple}.
The measurement is performed in two steps: in the first step (Fig.~\ref{fig:scalarSimple}, left) the power of the incident signal ($V_1$) is measured without the DUT. Then the DUT is inserted (Fig.~\ref{fig:scalarSimple}, right), and $V_2$ is
measured.
Subsequently, the magnitude of the transmission coefficient is calculated:
\Equation{equ:magTx}{|S_{21}| \propto \frac{V_2}{V_1}.}
To obtain the results in decibels, a logarithmic amplifier was connected to the output of the detector.
It has a
logarithmic transfer function ($V_{\rm out} = \log V_{\rm in}$) and permits the display of a large dynamic range on a dB
scale.
Furthermore, mathematical operations like multiplication or division,
e.g.\ required for normalization in Eq.~\eqref{equ:magTx},
transform simply into an addition or subtraction, handled by operational amplifiers.
As detector, any kind of device converting the input RF signal into a DC voltage is applicable, assuming its transfer function is ``reasonably''\footnotemark\ proportional to the RF power. There are basically three possibilities to achieve
this:
\footnotetext{With the term ``reasonably'' we point out that many detectors have a non-linear relation between
input power and output voltage.}
\begin{description}
\item[Rectifier]
A fast \textit{Schottky} diode and a low-pass filter are used to convert the input RF signal to a DC voltage.
Operating the diode in its square-law region ($P_{\rm in} < -10$ dBm) results in an output voltage proportional
to the RF power; see Section~\ref{RFdiodesec}.\newline
Advantages: cheap, fast response (depending on $f_{\rm max}$ of the output filter). \newline
Limitations:
Commercially available RF power meters, based on \textit{Schottky} diodes, can operate from $-60$~dBm (limited by
tangential sensitivity) up to about +30~dBm (damage level). The non-linearity of the output signal versus input
power is compensated by electronic means (look-up table). Coaxial RF \textit{Schottky}
detectors are usually limited to maximum frequencies of approximately 100~GHz,
essentially determined by the coaxial connector technology available. Usually an input matching network is required to match the input impedance of the
\textit{Schottky} diode to $Z_0 = 50~\Omega$.
\item[Thermal measurement]
Several types of detectors based on heating effects are available for the measurement of RF power. In a
bolometer (thermistor or barretter), the high temperature coefficient of the thermal conductivity of certain
metals or metal alloys is exploited.
The temperature change $\Delta T$ of dissipated heat of the RF input signal is measured utilizing a
DC-based temperature measurement, while applying a correction of the non-linearities.
Barretters utilize the positive temperature coefficient of metals like
tungsten and platinum. Thermistors consist of a metal oxide with a strong negative temperature coefficient.
Another class of RF power meters based on heating is the thermo-element, which takes advantage of the
thermo-electrical coefficient of a junction between two different metals. A well-known example is the Sb-Bi
junction, which has a temperature coefficient of about $10^{-4}$ V/K, which is one of the highest values
available for this kind of detector. Even larger values can be achieved using semiconductor--metal junctions,
where thermoelectric coefficients of $250~\mu$V/K have been achieved. For further details, see \cite{src:thumm}.
\item[Mixer]
Multiplying two sinusoidal signals with different frequencies results in signals of sum and
difference frequencies at the multiplier's output; see Section~\ref{Mixersec}. Technically, this frequency mixing principle allows the conversion of a range of high-frequency signals to a much lower intermediate frequency (IF) band. The RF power measurement can then be performed much more simply at this IF.
\end{description}
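The mixing principle mentioned in the last item (sum and difference frequencies from a multiplication) can be verified with a short numpy sketch; the two frequencies are arbitrary example values:

```python
import numpy as np

fs = 1000.0                     # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)   # 1 s of samples -> 1 Hz FFT bin spacing
f_rf, f_lo = 80.0, 75.0         # invented "RF" and "LO" frequencies, Hz

# Ideal multiplying mixer: sin(a)*sin(b) = 0.5[cos(a-b) - cos(a+b)]
mixed = np.sin(2 * np.pi * f_rf * t) * np.sin(2 * np.pi * f_lo * t)
spectrum = np.abs(np.fft.rfft(mixed))

peaks = sorted(int(i) for i in np.argsort(spectrum)[-2:])
print(peaks)   # the two strongest bins: difference and sum frequencies
```

The spectrum of the product contains exactly two lines, at the difference (5 Hz) and sum (155 Hz) frequencies; a low-pass filter then selects the IF.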
\subsection{Automatic Gain Control (AGC)}
Often RF measurements are performed over a wide range of frequencies, requiring the signal strength, i.e.\ the amplitude $V_0$ of the source to be constant.
This is usually achieved by an active feedback loop (\textit{levelling}), keeping $V_0$ constant,
independent of the operation frequency.
Any feedback loop requires a process variable which has to be detected and
controlled to a well defined set point, here the output signal level $V_1$.
For the automatic gain control (AGC) loop in a NA e.g.\ a resistive
power divider can be used to provide this reference signal, while keeping inputs and outputs matched to $Z_0
= 50~\Omega$ (Fig.~\ref{fig:scalarFeedback}). For this example, the test signal arriving at the
DUT is reduced by 6 dB due to the insertion loss of the resistive power divider. However, the AGC feedback loop ensures
that the stimulus signal applied to the DUT always has a constant, well-defined power level over a wide frequency range.
\Figure{0.5}{scalar2}{fig:scalarFeedback}{Simplified circuit diagram of a typical automatic gain control}{[t]}
For the characterization of linear DUTs, only the ratio $V_2 / V_1$ is of interest, which is independent of the absolute value of $V_0$. In this case the S-parameter measurements do not require an AGC loop of the RF generator, but in
practice the gain control has many advantages, in particular for measurements on weakly non-linear
elements, such as amplifiers.
\Figure{0.6}{scalar3}{fig:scalarFeedbackCoupler}{Feedback loop of a typical automatic gain control (AGC)}{[t]}
\subsection{Directional couplers}
Replacing the resistive power divider by a directional coupler reduces the insertion loss substantially,
the principle is outlined in Fig.~\ref{fig:scalarFeedbackCoupler}. $V_1$ is an attenuated replica -- defined by the
coupling factor -- of the forward-traveling wave, which is used as reference for the gain control.
Typically, directional couplers with a coupling coefficient of $-$20~dB are used for this purpose; they offer a transmission attenuation in the main branch of less than 0.3 dB. In contrast to the resistive power splitter, the transmission-line based directional coupler has a limited frequency range and other limitations.
\Figure{0.7}{scalar4}{fig:scalarBidirectional}{Dual directional coupler in a network analyzer}{[t]}
Modern network analyzers (both scalar and vectorial versions) measure the forward-transmission, as well as the reflection coefficient of a DUT simultaneously, without the need to manually re-connect DUT ports. Each port of the instrument is equipped with a dual directional coupler, providing simultaneously replicas
of the incident and reflected waves from the DUT, see Fig.~\ref{fig:scalarBidirectional}. These
directional couplers, in combination with some required switches and attenuators are commonly called \emph{test set}.
In the early days, network
analyzers consisted of separate building blocks, like S-parameter test set, frequency generator, display and
controller unit. All these elements had to be connected by many external cables. Modern instruments have
all those building blocks integrated in a single unit, including advanced computer controls with digital data acquisition and post-processing.
Based on Fig.~\ref{fig:scalarBidirectional}, the reflection and transmission coefficients are defined as
\Equation{equ:refFwd}{
|S_{11}| \propto \frac{V_3}{V_1}, \quad \quad |S_{21}| \propto \frac{V_2}{V_1}.}
From the ratio of the reflected wave to the incident wave ($S_{11}$), valuable quantities like standing wave
ratio (SWR), reflection coefficient, impedance, admittance as well as return loss
of the DUT are determined. From the ratio of the transmitted wave to the incident wave ($S_{21}$), gain resp.\ insertion loss, the transmission coefficient, the insertion phase, and
group delay of the DUT can be characterized.
\section{Vector measurements}
A vector network analyzer (VNA) is able to measure the magnitude \emph{and phase} of a complex S-parameter.
There are different hardware configurations which implement this kind of RF instrument, e.g.\ six-port reflectometers, certain RF bridge methods, or superheterodyne RF network analyzers. Here only the latter will be introduced.
\subsection{The modern vector network analyzer}
A modern VNA contains a RF generator which produces the signal stimulating the DUT.
This signal is
usually generated by a synthesizer-type oscillator and is adjustable in very fine steps
over a large frequency
range, in a programmable manner.
Since all modern VNAs operate with analog and/or digital downconverters (mixing),
the generation of a tracking LO frequency is also necessary.
This tracking LO is typically generated by PLL
circuits and represents essentially a second oscillator following the
main frequency with a specified frequency offset.
The observation (IF) band signal is typically processed digitally, allowing bandwidth settings over a wide range, e.g.\
1~Hz to 20~MHz and more.
In all stages of the signal path the vectorial nature of the signal is preserved,
both phase and magnitude are processed, in the digital domain usually as I-Q (in-phase --
quadrature-phase) data, equivalent to real and imaginary parts.
Details on the internal signal processing of a VNA are found in \cite{src:fundVNA,
src:fundVNA2}. Note, similar to the spectrum analyzer, the sweep time and resolution bandwidth cannot be
adjusted independently.
A modern four-port vector network analyzer is shown in Fig.~\ref{fig:vectorPhoto}.
\Figure{3.0}{front_panel.png}{fig:vectorPhoto}{A modern four-port VNA}{[t]}
Although complete network analysis of any $N$-port can be performed with a two-port VNA, a four-port unit is extremely
convenient for many measurement tasks. It permits a quick analysis, e.g.\ of a directional coupler or a
three-port circulator, without the need for swapping cables; it also provides virtual balanced ports and many other valuable features.
\subsection{Time-domain transformation (synthetic pulse technique)}
For any linear system, the frequency domain information (data) can be converted
to the time domain by an inverse (fast) \textit{Fourier} transformation\footnotemark\ and vice versa, assuming the entire frequency vector data (magnitude and phase, or real and imaginary) is present.
This is the basis of the synthetic pulse technique,
available on many modern VNAs. It was commercially introduced by Hewlett-Packard in the 1980s for network analyzer applications.
\footnotetext{More precisely: by a discrete \textit{Fourier} transformation (DFT).
The fast \textit{Fourier} transformation (FFT) is just an optimized form of the DFT,
exploiting the symmetry of $2^n$ data samples, thus saving computation time.
However, both algorithms will produce
the same result for the same input data.}
It renders the VNA even more versatile, allowing one to display the impulse (\textit{Gaussian}) and/or step response of
the DUT, and to perform time-domain reflectometry (TDR) measurements. Typical applications of this
measurement technique are:
\begin{enumerate}
\item Localizing and evaluating discontinuities (faults) in transmission lines.
\item Separating the scattering properties of sections of complicated RF networks by time-domain gating.
\item Echo cancellation (in multipath environments).
\item The synthetic pulse time-domain reflectometry can be very useful in
trouble-shooting, e.g. of the accelerator beam-pipe.
By using waveguide modes
it was successfully used to detect an obstacle in the LHC beam-pipe.
\end{enumerate}
The only constraint on the applicability of the synthetic pulse measurement technique is that the DUT has to
be a \emph{linear} and \emph{time-invariant} (LTI) system.
\begin{figure}[t]
\begin{center}
\subfloat[ ]{
\includegraphics[width=0.5\textwidth]{td1}
\label{fig:iDFTcableS}
}
\subfloat[ ]{
\includegraphics[width=0.5\textwidth]{td2}
\label{fig:iDFTcableI}
}
\caption{Synthetic pulse measurement with a VNA: (a) step response; (b) impulse response. \newline
The measured frequency data is converted by
an inverse discrete Fourier transformation (iDFT)
to the time
domain. Now the synthetic impulse response of a transmission-line, here a coaxial cable, is displayed over time. The reflections of
the incident pulse on any irregularity or discontinuity, as well as from the end of the cable, are clearly identified. By measuring the time delay between the reference plane and the location of the irregularity, or the end of the cable (displayed as a pulse or step in the reflection coefficient), the electrical length of the cable can be calculated.
\label{fig:iDFTcable}
\end{center}
\end{figure}
A measurement example is shown in Fig.~\ref{fig:iDFTcable}.
A transmission line with a given length
and some perturbation is connected to a calibrated VNA.
The real part of the \textit{Fourier}-transformed reflection
coefficient ($S_{11}(\omega)$) is plotted versus time. The VNA permits the display of either the synthetic step
(Fig.~\ref{fig:iDFTcableS}) or the impulse response (Fig.~\ref{fig:iDFTcableI}). The step is simply obtained by
(numerical) integration of the impulse response data.
The incident synthetic pulse is scattered from the discontinuity, but also from the open end of the cable. The
travel time for the pulse can be read on the horizontal axis on the time-domain display. In this example we
measure a delay of $t_d = 22$ ns until the open end of the cable becomes visible. This time accounts for the
impulse traveling towards the open end \emph{and back}; thus, the factor $1/2$ has to be taken into account
when calculating physical length $l$ of the transmission-line:
\Equation{equ:lineLength}{
l = \frac{c}{\sqrt{\varepsilon_r}} \cdot \frac{1}{2} t_d.}
In this example the relative dielectric constant of the insulation in the coaxial cable is $\varepsilon_r = 2.3$ (PTFE Teflon),
which returns a cable length of $l = 2.2$ m. The same method can be applied for obtaining the position of
any irregularity or discontinuity (deformation, bad connector) along the cable. Nearly all VNAs with time-domain option permit
the designation of the velocity factor ($1 / \sqrt{\varepsilon_r}$ for a homogeneously filled transmission line) and thus
convert travel time or electrical length to physical distance on the display.
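As a quick cross-check, Eq.~\eqref{equ:lineLength} can be evaluated directly. The sketch below (Python, using the 22~ns delay and $\varepsilon_r = 2.3$ quoted in the example above) reproduces the roughly 2.2~m cable length; it is an illustration only.

```python
import math

# Numerical check of the cable-length relation: l = (c / sqrt(eps_r)) * t_d / 2.
# t_d = 22 ns and eps_r = 2.3 are the example values from the text.
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def cable_length(t_d, eps_r):
    """Physical length of a homogeneously filled line
    for a TDR round-trip delay t_d (seconds)."""
    v = C0 / math.sqrt(eps_r)   # phase velocity on the line
    return 0.5 * v * t_d        # factor 1/2: the pulse travels there and back

print(round(cable_length(22e-9, 2.3), 2))  # about 2.2 m
```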
Note that the step response shown in Fig.~\ref{fig:iDFTcableS} returns the local reflection factor versus time.
Along the cable it amounts to $\Gamma = 0$, except for the position of the irregularity, indicating a
well-matched $50~\Omega$ transmission line. At the end we notice a positive step to $\Gamma = 1$, indicating an
open circuit (see Table~\ref{tbl:ref}).
\begin{table}[t]
\caption{Important values of the reflection coefficient}
\begin{center}
\begin{tabular}{lrr}
\hline
\hline
\bfseries DUT & ${\pmb Z_{\pmb L}}$ & ${\pmb\Gamma}$ \\
\hline
Open circuit &$\infty$ &+1\\
Short circuit &0 &--1\\
Matched load &$Z_0$&0\\
Load &$Z_0/2$ &--1/3\\
Load &$2 Z_0$ &1/3\\
\hline
\hline
\end{tabular}
\end{center}
\label{tbl:ref}
\end{table}
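The values of Table~\ref{tbl:ref} follow from the standard relation $\Gamma = (Z_L - Z_0)/(Z_L + Z_0)$. The short Python sketch below (illustration only) reproduces them for a $50~\Omega$ system:

```python
import math

# Reflection coefficients of the table above, from the standard relation
# Gamma = (Z_L - Z_0) / (Z_L + Z_0); math.inf models the open circuit.

def reflection(z_load, z0=50.0):
    if math.isinf(z_load):
        return 1.0                       # open-circuit limit
    return (z_load - z0) / (z_load + z0)

for z in (math.inf, 0.0, 50.0, 25.0, 100.0):
    print(reflection(z))                 # +1, -1, 0, -1/3, +1/3
```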
The reflected pulse in the impulse response trace (Fig.~\ref{fig:iDFTcableI}), related to the open end of the cable,
does not reach unit amplitude due to the attenuation of the transmission line used
for this example -- a semi-rigid coaxial cable of approximately 2 m length. The amplitude of this reflection from the open end
indicates the attenuation over twice the electrical length of the cable at the equivalent center frequency
($f_{\text{max}} = 3$ GHz, $f_{\text{centre}}= 1.5$ GHz) of the measurement.
For practical applications of the synthetic pulse technique, certain basic
properties of the discrete Fourier transform should be kept in mind; they are summarized in Table~\ref{tbl:fft}.
Suppose, for example, that a long cable needs to be tested.
This requires a long time window to ensure that all multiple reflections have decayed to zero,
which in turn demands a sufficiently narrow frequency sampling.
The available time span is related to
$1/\Delta f$, and this reciprocal relation may cause issues if settings are kept in ``automatic'' mode.
\begin{table}[t]
\caption{Important characteristics of the FFT}
\begin{center}
\begin{tabular}{rcl}
\hline
\hline
\bfseries Time domain & & \bfseries Frequency domain\\
\hline
$T_{\text{max}}$ (time span) &$\leftrightarrow$& $\Delta f$ (frequency resolution)\\
$\Delta t$ (time resolution) &$\leftrightarrow$& $f_{\text{max}}$ (frequency span)\\
\hline
\hline
\end{tabular}
\end{center}\label{tbl:fft}
\end{table}
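The reciprocal relations of Table~\ref{tbl:fft} can be made concrete with a small sketch (Python; the 3~GHz span and 601 points below are assumed example settings, not VNA defaults):

```python
# Reciprocal time/frequency relations for an equidistant sweep
# from DC to f_max with n_points samples (low-pass mode).
# The 3 GHz / 601-point setting is an assumed example.

def time_domain_params(f_max, n_points):
    delta_f = f_max / (n_points - 1)  # frequency resolution (step size)
    t_max = 1.0 / delta_f             # unambiguous (alias-free) time span
    delta_t = 1.0 / f_max             # time resolution (order of magnitude)
    return delta_f, t_max, delta_t

df, t_max, dt = time_domain_params(3e9, 601)
print(df, t_max, dt)   # 5 MHz step -> 200 ns time window
```

A finer frequency step lengthens the usable time window, while a wider span sharpens the time resolution, exactly the trade-off discussed in the text.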
On the other hand, if a bad connector or cable damage needs to be located along a transmission-line, a high
resolution in time is required. Thus, the VNA has to measure over a wide frequency span ($f_{\text{max}}$).
Obviously, we would often like to use both a wide frequency span and a close spacing of the samples in the
frequency domain, but there is a practical limitation: the number of data points available. In
modern instruments this typically amounts to 60000 and, depending on the application,
compromises have to be accepted.
Performing time-domain measurements with the vector network analyzer calls for two basic modes,
the ``low-pass'', or the ``band-pass'' mode to be selected.
\subsubsection{Low-pass mode}
In low-pass mode the basic discrete \textit{Fourier} transformation algorithm is applied. This returns certain
constraints on the frequency-domain measurement data of the DUT (Fig.~\ref{fig:lpm}). The iDFT demands the
start frequency to be 0~Hz (DC), and data is acquired in equidistant frequency steps between start and stop frequency.
Since most VNAs cannot measure at very low frequencies, the data points from DC to the minimum operation
frequency of the VNA are extrapolated mathematically.
Data points for negative frequencies are derived from the measured
samples at the corresponding positive frequencies by complex conjugation. Compared to the bandpass mode, this
effectively doubles the number of data points available for the calculation of the time trace. For this
particular symmetry, the discrete \textit{Fourier} transformation returns a purely real-valued time trace. A
practical time domain reflectometry (TDR) measurement routine is setup as follows:
\begin{enumerate}
\item The DUT is connected, the port and type of measurement are selected (transmission or reflection).
\item The frequency range of interest and the number of data points are entered (this relates to the
time domain by Table~\ref{tbl:fft}).
\item After pushing the soft key, ``set frequency low-pass''\footnotemark, the instrument chooses the exact
sampling frequencies.
\item Once the sampling points are defined, the VNA has to be calibrated (open, short, load for reflection
measurements).
\end{enumerate}
\footnotetext{
This soft key may appear with slightly different naming, depending on the definitions of the manufacturer.}
In the low-pass mode, the trace appearing on the screen for time domain reflectometry (TDR) or time domain transmission (TDT)
is basically equivalent to what a real-time TDR or sampling oscilloscope would display; see Section~\ref{ch:compare}.
\begin{figure}[t]
\begin{center}
\subfloat[ ]{
\includegraphics[width=0.4\textwidth]{bp1}
\label{fig:lpm}
}
\hspace{15mm}
\subfloat[ ]{
\includegraphics[width=0.4\textwidth]{bp2}
\label{fig:bpm}
}
\caption{Sampling of frequency points for the different operating modes: (a) low-pass mode; (b) bandpass mode.}
\end{center}
\end{figure}
\subsubsection{Band-pass mode}
In band-pass mode (Fig.~\ref{fig:bpm}) the spectral lines
(frequency-domain data points) no longer need to be equidistant, nor extrapolated down to DC;
they just need to cover the
frequency range of interest, e.g.\ from $f_{\rm min} = 1.2$ GHz to $f_{\rm max} = 1.5$ GHz.
The start and stop
frequencies of the VNA can be chosen arbitrarily, which returns a high degree of flexibility and is especially
suited for the measurement of devices having a limited frequency range (example: waveguide-mode reflectometry).
The bandpass mode is the equivalent of a narrowband TDR (and also time-domain
transmission, TDT) using the synthetic pulse technique. It permits the display of the impulse response only,
since no extrapolated information on a DC component is available.
The measurement clearly identifies
position and size of perturbations along a transmission line, including waveguides.
Their characterization in terms of capacitive, inductive or resistive properties is possible, but not
straightforward~\cite{src:kruschdPaper}. Details on the
general properties and mathematical backgrounds of the low-pass and bandpass modes are found in
\cite{src:TDA, src:fundVNA2}.
\subsubsection{Windowing}
As the VNA always samples a limited frequency spectrum, starting at $f_{\text{min}}$ and stopping at
$f_{\text{max}}$, the acquired spectrum is clipped by a rectangular envelope.
Performing the iDFT, rectangular windowing artifacts show up in the time-domain data,
as compared in Fig.~\ref{fig:rectSpect}.
\begin{figure}[t]
\begin{center}
\subfloat[ ]{
\includegraphics[width=0.45\textwidth]{rectSpect}
\label{fig:spectLarge}
}
\hspace{11mm}
\subfloat[ ]{
\includegraphics[width=0.45\textwidth]{rectSpect2}
\label{fig:spectSmall}
}
\caption{(a) Infinite frequency span. (b) Limited frequency span.
The limited frequency span $\Delta f$ \protect\footnotemark
of the VNA leads to ``distortions'' of the time-domain synthetic pulse
measurement. The ideal response is convolved with a $\mathit{sinc}$ function,
whose characteristics depend on $\Delta f$.}
\label{fig:rectSpect}
\end{center}
\end{figure}
\footnotetext{not to be confused with the previous definition of $\Delta f$ for the equidistant frequency samples}
An infinite spectrum of constant density (shown in Fig.~\ref{fig:spectLarge}) leads to a Dirac-pulse
function in the time domain.
The Dirac pulse contains by definition all frequency components of equal
power. In Fig.~\ref{fig:spectSmall}, the spectrum is limited, for example,
by the maximum operation frequency
of the VNA, or by some user settings.
This can be expressed by multiplication of the ideal spectrum with a
rectangular function.
The iDFT of a rectangular function of width $\Delta f$ leads to a $\mathit{sinc}$ function
(sometimes denoted as $\mathit{si}$ function) in the time domain. This relation is shown in Eq.~\eqref{equ:ftSinc} and
graphically in Fig.~\ref{fig:rectSpect}.
\begin{align}\label{equ:ftSinc}
\begin{split}
\text{Frequency domain} \quad &\Longleftrightarrow \quad \text{Time domain}\\
\text{rect}\left(\frac{f}{\Delta f}\right) \quad &\Longleftrightarrow \quad \frac{\sin\left(\Delta f \pi
t\right)}{\pi t} = \Delta f \cdot \text{sinc}\left(\Delta f \pi t\right).
\end{split}
\end{align}
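Equation~\eqref{equ:ftSinc} can be evaluated directly. The sketch below (Python; the 3~GHz span is an assumed example value) shows the peak amplitude $\Delta f$ at $t=0$ and the first null at $t = 1/\Delta f$, which sets the achievable time resolution:

```python
import math

# Time trace corresponding to a rectangular spectrum of width delta_f:
# h(t) = sin(pi * delta_f * t) / (pi * t) = delta_f * sinc(pi * delta_f * t).

def rect_spectrum_response(t, delta_f):
    if t == 0.0:
        return delta_f          # limit sin(x)/x -> 1 for x -> 0
    return math.sin(math.pi * delta_f * t) / (math.pi * t)

df = 3e9                        # assumed 3 GHz frequency span
print(rect_spectrum_response(0.0, df))       # peak amplitude = delta_f
print(rect_spectrum_response(1.0 / df, df))  # first null at t = 1/delta_f
```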
To mitigate the effect of the rectangular clipping of the spectrum on the time-domain result, various weighting
functions are available. They smoothly attenuate the amplitude of the spectrum around $f_{\rm min}$
and $f_{\rm max}$ in band-pass and low-pass mode. This helps to reduce the
strong sidelobes (ringing) in the time domain.
However, the price to be paid is a
reduced pass-band, thus limiting the time resolution and the ability to distinguish between two
closely spaced impulses. The user has to select a reasonable trade-off among the window weighting functions,
depending on the requirements of the particular measurement.
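The sidelobe suppression offered by a smooth window can be checked numerically. The sketch below (Python; a 64-sample \textit{Hann} window is an assumed example, and the sidelobe search uses a coarse frequency grid) compares the worst sidelobe of a rectangular and a \textit{Hann} window:

```python
import cmath, math

# Worst sidelobe of a rectangular vs. a Hann window, evaluated directly
# from the DTFT magnitude. N = 64 samples is an assumed example size.

def dtft_mag(w, nu):
    """|W(nu)| for normalized frequency nu (cycles per sample)."""
    return abs(sum(wn * cmath.exp(-2j * math.pi * nu * n)
                   for n, wn in enumerate(w)))

def worst_sidelobe_db(w, first_null):
    """Largest sidelobe beyond the main lobe, relative to the peak."""
    peak = dtft_mag(w, 0.0)
    side = max(dtft_mag(w, k / 4096.0)
               for k in range(1, 2048) if k / 4096.0 > first_null)
    return 20.0 * math.log10(side / peak)

N = 64
rect = [1.0] * N
hann = [0.5 - 0.5 * math.cos(2.0 * math.pi * n / (N - 1)) for n in range(N)]

print(worst_sidelobe_db(rect, 1.0 / N))  # about -13 dB
print(worst_sidelobe_db(hann, 2.0 / N))  # about -31 dB: much weaker ringing
```

The Hann window buys roughly 18~dB of sidelobe suppression at the cost of a main lobe twice as wide, i.e.\ coarser time resolution.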
The effect of some window functions on main and sidelobes
is shown in the frequency domain(!) on a logarithmic scale in Fig.~\ref{fig:fftWndw}.
\Figure{0.7}{windows}{fig:fftWndw} {Typical window functions to suppress strong sidelobes}{[t]}
\subsubsection{Gating}
The gating option of the VNA allows one to eliminate or select parts of the time-domain signal,
provided they are reasonably well separated in the time-domain trace.
For example, the already mentioned
cable, connecting to the VNA port, is assumed to have an internal irregularity at a certain position.
By suitable selection of a time-domain gate (highlighted in Fig.~\ref{fig:fftGate} from $t\approx 18$~ns
to $t\approx 26$~ns), the desired portion of the time domain trace (here, the total reflection at the open cable end)
can be separated from the rest of the trace (set to zero).
This allows an analysis, e.g.\ by transformation back to the frequency domain, of the interesting part of the
circuit without influence of multiple reflections and perturbations from discontinuities, etc. (de-embedding).
For transmission
measurements, usually the \emph{first} arriving pulse in the time domain is selected, thus suppressing the
effect of all following reflections and related signals. For reflection measurements, the first, but also following
pulse response in the time-domain trace may be selected.
\Figure{0.6}{tdg}{fig:fftGate}{Only the signal in a certain time window is of interest. After selection, the FFT of this window will be calculated. Here the real values of the synthetic impulse response are shown on a linear scale.}{[t]}
The implemented time-domain gating function is not a ``brick wall'', but a soft switch applying a weighting function
similar to the iDFT window function.
As it is a \emph{non-linear} operation, it may generate additional frequency components which were not present
in the original signal. As a general practical guideline, the gate should not cut into a part of the signal trace that differs
from zero.
\begin{figure}[p]
\centerline{
\scalebox{1.0}{
\includegraphics*[width=1.0\textwidth]{examples}
} %
}
\caption{Examples of an arbitrary impedance, measured in TDR} %
\label{fig:fftExample} %
\end{figure} %
\subsubsection{Examples of synthetic pulse time-domain measurements}
A collection of measurement examples of simple DUTs is shown in Fig.~\ref{fig:fftExample}. For all
cases depicted, the VNA is set up in step-response operation. The traces from top to bottom show:
\begin{enumerate}
\item Matched load ($Z = Z_C$). As $\Gamma$ is equal to zero, the response is zero everywhere.
\item Moderate (resistive) mismatch ($Z = 2 Z_C$, e.g.\ $100~\Omega$ in a $50~\Omega$ system). During the first
200~ps the trace displays the well impedance-matched cable; then the reflection coefficient jumps to a positive,
constant value due to the impedance mismatch.
\item Capacitor. The TDR displays the capacitive load for a moment as a short circuit, and resumes with an
exponential function, as the capacitor is charged. The final state is equivalent to an open circuit, as expected.
\item Inductor. In the TDR the inductive load appears at $t = 200$~ps as an open circuit, followed by an exponential
decay function. The steady state results in a short circuit, as the inductor is fully conducting.
\end{enumerate}
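The qualitative behaviour of items 3 and 4 follows from the standard step responses of reactive terminations on a $Z_0$ line. The sketch below (Python; the 1~pF and 2.5~nH element values are assumed, not taken from the figure) evaluates these textbook expressions, with $t$ counted from the arrival of the step at the load:

```python
import math

# Idealized TDR step responses of reactive loads terminating a lossless
# Z0 line (standard textbook results). The element values are assumed.

def gamma_capacitor(t, C, z0=50.0):
    """Capacitive load: starts as a short, charges up to an open."""
    return 1.0 - 2.0 * math.exp(-t / (z0 * C))

def gamma_inductor(t, L, z0=50.0):
    """Inductive load: starts as an open, decays to a short."""
    return -1.0 + 2.0 * math.exp(-t * z0 / L)

C, L = 1e-12, 2.5e-9            # assumed 1 pF and 2.5 nH loads
print(gamma_capacitor(0.0, C))  # -1.0: initially a short circuit
print(gamma_inductor(0.0, L))   #  1.0: initially an open circuit
print(round(gamma_capacitor(1e-9, C), 3))  # ~1.0: steady state is an open
```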
\subsubsection{Comparison to true time-domain measurements}
\label{ch:compare}
There is a wide range of applications for the discussed synthetic pulse time-domain technique.
A VNA in
time-domain low-pass step mode has a very similar range of applications as a
TDR sampling oscilloscope.
However, the synthetic pulse method is limited to strictly linear systems;
therefore, the analysis
of transient or non-linear systems, e.g.\ the settling response of a microwave oscillator
after power-up, would not give very meaningful results.
In other words, for highly
non-linear and time-varying DUTs, true time-domain measurements based on pulse generators and oscilloscopes
are still indispensable, e.g.\ for an air traffic radar system, where we have linear but time-varying conditions.
The dynamic range of a typical
sampling oscilloscope is limited to about 60 to 80~dB with a maximum input signal of 1~V and a noise floor
around 0.1 to 1~mV (typical broadband oscilloscope).
The dynamic range of the VNA is $>100$~dB,
at similar maximum input
levels of approximately +10~dBm (some VNAs allow +20~dBm).
Both instruments use
basically the same kind of detector, either a balanced mixer (four diodes)
or a sampling head (two, four or six diodes);
the essential difference lies in the noise floor and the average signal power
arriving at the receiver input.
In the
case of the VNA, the measurement is based on a continuous-wave (CW) signal
with a bandwidth of a few Hz,
which, with appropriate filtering, yields a very good signal-to-noise ratio\footnotemark.
\footnotetext{Remember that the thermal noise power is proportional to the measurement bandwidth. Its density at room
temperature is --174 dBm/Hz.}
A traditional sampling oscilloscope acquires the data during a short time at a rather low repetition rate
(typically around 100~kHz up to a few MHz), with all the thermal noise power spread over the entire
frequency range (typically 20--50~GHz bandwidth).
With this low average signal power (around a microwatt), the signal spectral density
is orders of magnitude lower than in the VNA measurement procedure (which acquires signals continuously),
which explains the large difference in dynamic range (even without gain switching).
A more detailed discussion about time-domain reflectometry with vector network analysers can be found in
\cite{src:TDA}.
\subsection{Calibration methods}
The hardware of even an ``ultra-modern'' VNA is not perfect: e.g.\ the internal source
is not perfectly impedance matched to $50~\Omega$ (over the entire frequency range),
its internal directional couplers have a finite directivity, since no ideal (infinite) directivity exists in practice,
and finally the coaxial cables between VNA and DUT ports have frequency-dependent attenuation and dispersion
effects.
This calls for a calibration to compensate all these unwanted effects and to guarantee a precise, instrument-independent
analysis of the DUT.
There are several calibration procedures to eliminate some, or all of the mentioned deficiencies. The
easiest is called the ``response calibration'', typically applied for transmission, rarely for reflection measurements.
It is basically an $S_{21}$ (or $S_{12}$) transmission measurement of a quasi ``zero-length'' ideal transmission line,
by connecting the two cable ends of the two-port VNA with each other.
For the given VNA setting, i.e.\ start / stop frequency, \# of freq.\ points, resolution bandwidth, power level, etc., magnitude and phase are acquired and stored as $S_{21 \text{reference}}$
in the non-volatile memory for each frequency point.
Now, a DUT can be connected between the cable ports, with the connectors serving as \emph{reference planes}
of the calibrated system (VNA plus cables).
In calibrated mode the VNA performs:
\Equation{equ:respCal}{S_{21 \text{DUTcal.}} = \frac{S_{21 \text{DUTmeas.}}}{S_{21 \text{reference}}}.}
However, this simple calibration procedure
essentially eliminates only the frequency-dependent losses and phase-transfer functions
of the test cables.
The mismatch between cable and generator, and the impact of the finite directivity,
are still present.
A more sophisticated, and widely popular, calibration technique for
reflection measurements needs to
be performed:
the open, short and match technique. \newline
This technique covers the three independent error sources mentioned above:
finite directivity, generator mismatch and the transfer function of the cables.
\Figure{0.4}{errorNetwork}{fig:calErrorNw}{Error model of a VNA. The parameters $e_{xx}$ of the error network are determined by the calibration procedure and used to determine the true (corrected) result ($\Gamma_{\rm DUT}$)
based on the measured result ($\Gamma_{\rm M}$).}{[t]}
The VNA applies an internal error model, shown in Fig.~\ref{fig:calErrorNw}. The measured raw data acquired
by the instrument ($\Gamma_{\rm M}$) is distorted by certain systematic errors.
These errors are
modeled via four parameters: $e_{10}$, $e_{00}$, $e_{01}$, $e_{11}$, based on the error network model
of Fig.~\ref{fig:calErrorNw}.
The $e_{nn}$ are, in general, complex
and frequency-dependent parameters; furthermore, $e_{10} = e_{01}$.
The error parameters are extracted and stored when performing a suitable calibration method, i.e.\ open, short, match,
such that the true value of the DUT ($\Gamma_{\rm DUT}$) is calculated and presented accordingly.
In simple terms, we need to carry out three independent measurements for each frequency point,
to solve three coupled equations with three complex unknowns.
These error terms represent the above-mentioned effects as listed in Table~\ref{tbl:eTerm}.
\begin{table}[t]
\caption{Interpretation of VNA error terms}
\begin{center}
\begin{tabular}{ll}
\hline
\hline
\bfseries Error term & \bfseries Interpretation\\
\hline
$e_{10}$ & Reflection tracking\\
$e_{00}$ & Directivity\\
$e_{11}$ & Test-port match\\
\hline
\hline
\end{tabular}
\end{center}
\label{tbl:eTerm}
\end{table}
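The one-port correction itself is a short computation. The sketch below (Python) implements the standard one-port error model $\Gamma_{\rm M} = e_{00} + e_{10}e_{01}\,\Gamma_{\rm DUT}/(1 - e_{11}\Gamma_{\rm DUT})$ and its inversion; the numerical error terms are arbitrary illustrative values, not data from a real calibration:

```python
# Standard one-port error model and its inversion (cf. Fig. of the error
# network). The error terms below are arbitrary illustrative values.

def measured(gamma_dut, e00, e11, e10e01):
    """Raw reflection seen by the receiver for a true DUT reflection."""
    return e00 + e10e01 * gamma_dut / (1.0 - e11 * gamma_dut)

def corrected(gamma_m, e00, e11, e10e01):
    """Invert the error model to recover the true DUT reflection."""
    d = gamma_m - e00
    return d / (e10e01 + e11 * d)

e00, e11, e10e01 = 0.02 + 0.01j, 0.05 - 0.02j, 0.95 + 0.05j

gamma_dut = -0.33 + 0.10j                # hypothetical DUT reflection
raw = measured(gamma_dut, e00, e11, e10e01)
print(abs(corrected(raw, e00, e11, e10e01) - gamma_dut))  # ~0: round trip
# A matched load (Gamma = 0) directly reveals the directivity term e00:
print(measured(0.0, e00, e11, e10e01) == e00)
```

This also shows why three known standards suffice: each standard yields one complex equation in the three complex unknowns $e_{00}$, $e_{11}$ and $e_{10}e_{01}$ per frequency point.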
The unknowns of the error network are determined by performing calibration measurements with three different,
but known, calibration DUTs. These
calibration DUTs do not need to be perfect; only their electromagnetic properties need to be known with great
precision. The tabulated complex, frequency-dependent S-parameters of the calibration standards are
provided by the manufacturer of the calibration hardware (often referred to as a calibration kit), and
are stored in the VNA memory as calibration kit reference data.
Usually the calibration DUTs represent an open circuit, a short circuit and a matched load (termination),
enabling the VNA to
determine the frequency-dependent error model.
The error model is altered if different test cables are used, or if the
VNA settings are modified; a re-calibration is required under those circumstances.
Now the VNA continuously applies the error correction during the DUT measurement,
and the \emph{reference plane} is ``moved'' to the end of the test cables.
Only the DUT networks ``behind'' the reference plane are taken into account for the measurement.
The impact of the VNA calibration is demonstrated
in Fig.~\ref{fig:calib}, which presents a $S_{11}$ measurement of a high-quality $50~\Omega$
termination, with and without VNA calibration.
For an ideal termination, no reflection should be present, i.e.\ $S_{11}=0\equiv -\infty$~dB.
In this example the calibration of the VNA improves the measurement quality by 20~dB!
In case of a short (total reflection, $S_{11}=1\equiv 0$~dB), a non-calibrated $S_{11}$ response
typically displays a residual error of a fraction of a dB, up to a few dB, below the 0~dB line (same for an
open); after calibration this error reduces to a few millidecibels.
\Figure{0.7}{calib}{fig:calib}{$S_{11}$ measurement of a $50~\Omega$ termination with and without calibration. The calibration provides 20 dB improvement for this frequency range.}{[t]}
So far, we have covered the ``response calibration'' and the ``complete one-port calibration''.
To perform completely error-corrected transmission measurements, the ``full two-port calibration''
procedure has to be applied.
Therefore, the error model is expanded to include the errors of the receiving port,
requiring a calibration of each port based on the ``complete one-port calibration'' method just discussed.
In addition, for transmission, we need two further
standards, i.e.\ the ``response calibration'' and the ``isolation calibration''; however, the latter may often be omitted.
In summary, the ``full two-port calibration'' consists of a ``complete one-port calibration'' procedure for each port,
which requires open, short and match standards, plus the ``response calibration'' and, optionally, the
``isolation calibration''.
In total eight calibration measurements have to be performed to bring the VNA into the desired \emph{CAL} status.
\begin{figure}[t]
\begin{center}
\subfloat[ ]{
\includegraphics[width=0.35\textwidth]{calMan}
\label{fig:calKitsa}
}
\subfloat[ ]{
\includegraphics[width=0.3\textwidth]{calAuto.pdf}
\label{fig:calKitsb}
}
\caption{Typical calibration kits for a VNA: (a) manual (open, short, match); (b) electronic}
\label{fig:calKits}
\end{center}
\end{figure}
For measurements on devices with standard coaxial connectors, e.g.\ SMA or N-type,
calibration standards such as a termination, an open and a short circuit are available (shown in
Fig.~\ref{fig:calKitsa}).
As mentioned, to successfully perform the calibration procedure for the reflection coefficient, the tabulated
values, representing the electromagnetic properties of the calibration standards,
have to be present in the VNA.
Obviously, the tabulated parameters of the calibration kit do not have an infinite frequency resolution.
The instrument applies an interpolation procedure if the selected frequency points are not exactly at
the tabulated values of the calibration kit.
The calibration technique described so far is a well established industry standard for RF and microwave
VNA measurements.
However, it has a substantial disadvantage for the user: it is tedious and time consuming,
in particular if a calibration of a multiport VNA is required.\newline
Already the full two-port calibration requires eight calibration measurements to satisfy the
eight-term error model. The manual procedure of connection and de-connection of the calibration
standards is time consuming, boring, and prone to errors. The situation becomes even worse when performing
a full four-port calibration (32 connections and de-connections of standards). For this reason, the
electronic calibration kit method is available and now very popular. For this procedure, each port is
connected via the measurement cable to the electronic calibration box (shown in Fig.~\ref{fig:calKitsb}),
which holds the different calibration standards, and switches them automatically controlled by the VNA.
This method makes it possible to perform a full four-port calibration in less than a minute.
Again, as for the manual calibration method, the standards do not need to be perfect, but must be well known,
reproducible (switching) and stable. More details are found in \cite{src:fundVNA,src:fundVNA2}.
\subsection{1 \mbox{dB} compression point measurement}
\label{1dBsect}
A single tone sine-wave source is connected to the input of an amplifier and its amplitude level is gradually increased versus time.
Monitoring the output of this amplifier, we notice a proportional dependence between input and output powers for
small signal levels.
This proportionality is referred as the linear gain factor.
For higher input signal
levels this relationship no longer holds, since the amplifier is not a perfectly linear system
and suffers from ``saturation'' effects.
A fraction of the
output power will appear at other frequencies, which are higher order harmonics of the input signal.
Typically the second and third harmonics are dominant, and the related signal distortion is referred to as harmonic distortion.
In parallel, we observe a \emph{compression} of the gain for the fundamental signal.
The actual gain falls off below the small-signal gain response (Fig.~\ref{fig:1dBcompr}).
If this deviation amounts to
1~dB, we have reached the ``1~dB compression point''.
Typically the industry refers to the output power, when specifying the 1~dB compression point for their RF products.
\Figure{0.45}{1dB}{fig:1dBcompr}{Definition of the 1~dB compression point for an amplifier:
input vs.\ output power at the point where the power level falls below 1~dB from its (linearly) predicted value.}{[htb]}
The 1~dB compression point is an important figure of merit, used to characterize the linearity of
a RF system, in particular the performance of small-signal and power amplifiers.
It can be conveniently
measured with most VNAs in CW mode, i.e.\ by choosing a single frequency
and performing a power sweep.
In power sweep mode, the instrument displays a trace similar as shown in Fig.~\ref{fig:1dBcompr}.
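For a simple saturating-gain model, the compression point can also be located numerically. The sketch below (Python) uses the assumed soft-limiting model $P_{\rm out} = G\,P_{\rm in}/(1 + P_{\rm in}/P_{\rm sat})$; it illustrates the definition only and does not model any particular amplifier:

```python
import math

# 1 dB compression point for an assumed soft-limiting gain model
# P_out = G * P_in / (1 + P_in / P_sat). Illustration of the definition.

def gain_db(p_in, g_lin=100.0, p_sat=0.1):
    p_out = g_lin * p_in / (1.0 + p_in / p_sat)
    return 10.0 * math.log10(p_out / p_in)

def p1db(g_lin=100.0, p_sat=0.1):
    """Input power at which the gain is 1 dB below small-signal gain."""
    target = 10.0 * math.log10(g_lin) - 1.0
    lo, hi = 1e-9, 10.0 * p_sat     # gain_db decreases monotonically here
    for _ in range(100):            # bisection
        mid = 0.5 * (lo + hi)
        if gain_db(mid, g_lin, p_sat) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# analytic result for this model: P_in = (10**0.1 - 1) * P_sat
print(p1db(), (10.0**0.1 - 1.0) * 0.1)
```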
\section{Introduction to the Smith chart}
Even with today's availability of computer-aided design and circuit simulation software suites,
the \textit{Smith} chart is still a very valuable and important tool that facilitates the
interpretation of the (right half of the) complex impedance plane in terms of the S-parameters,
and the related calculations and measurements.
This section gives a brief overview of the concept, and more importantly, of how to use the chart.
Its definition, as well as an introduction of how to navigate on the chart are illustrated.
Some typical examples illustrate the broad range of applications of the \textit{Smith} chart.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{coax_measurement_line.pdf}
\caption{Schematic view of a measurement set-up used to determine the reflection coefficient as well as the voltage standing wave ratio of a device under test (DUT) \cite{meinkegundlach}.}
\label{coax}
\end{figure}
\subsection{Voltage standing wave ratio ($\mathit{VSWR}$)}
\label{VSWRsect}
With modern RF measurement equipment available today it is rather easy to precisely measure
the reflection factor $\Gamma$, even for complicated networks.
In the ``good old days'' though, this was performed by measuring the electrical field
strength\footnote{The electrical field strength was used, since its measurement was
considerably easier than that of the magnetic field.} along a slotted coaxial line,
which has a longitudinal slit that allows a small field probe to be slid to any location
along the line (Fig.~\ref{coax}).
This electric field probe, protruding into the field region of the coaxial line near the outer conductor, picked up an E-field signal, which was displayed on a microvoltmeter after rectification
via a microwave diode.
While moving the probe, field maxima and minima, as well as their position and spacing,
were recorded.
From this information the reflection factor $\Gamma$ and the voltage standing wave
ratio ($\mathit{VSWR}$ or $\mathit{SWR}$) were determined:
\begin{itemize}
\item $\Gamma$ is defined as the ratio of the electrical field strength $E$ of the reflected wave versus the forward-traveling wave:
\begin{equation}
\Gamma = \frac{E\text{ of reflected wave}}{E \text{ of forward-traveling wave}}.
\label{eq:1}
\end{equation}
\item The $\mathit{VSWR}$ is defined as the ratio of maximum to minimum measured voltages:
\begin{equation}
\mathit{VSWR} = \frac{V_\text{max}}{V_{\text{min}}} = \frac{1 + |\Gamma|}{1 - |\Gamma|}.
\label{eq:2}
\end{equation}
\end{itemize}
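The relation of Eq.~\eqref{eq:2} and its inverse are easily evaluated. The sketch below (Python, illustration only) reproduces the classic pair $|\Gamma| = 1/3 \leftrightarrow \mathit{VSWR} = 2$:

```python
# VSWR from the reflection coefficient magnitude, and the inverse
# relation. Simple numeric illustration.

def vswr(gamma_mag):
    return (1.0 + gamma_mag) / (1.0 - gamma_mag)

def gamma_from_vswr(s):
    return (s - 1.0) / (s + 1.0)

print(vswr(0.0))             # matched load: VSWR = 1
print(vswr(1.0 / 3.0))       # |Gamma| = 1/3 gives VSWR = 2
print(gamma_from_vswr(2.0))  # and back: 1/3
```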
Although today these measurements are far easier to conduct, the definitions of the aforementioned quantities are still valid. Moreover, their importance has not diminished in the field of microwave engineering: both the reflection coefficient and the $\mathit{VSWR}$ are still a vital part of the everyday life of a microwave engineer performing simulations or measurements.
\subsection{Definition of the \textit{Smith} chart}
The \textit{Smith} chart~\cite{smith2000} provides a graphical representation of $\Gamma$ that permits the determination of quantities like the $\mathit{VSWR}$, or the impedance of a device under test (DUT). It uses the bilinear \textit{Moebius} transformation, projecting the complex impedance plane on the complex $\Gamma$ plane:
\begin{equation}
\Gamma = \frac{Z - Z_{\text{0}}}{Z + Z_{\text{0}}} \hspace{0.5cm}\text{ with }\hspace{0.5cm} Z = R + \text{j}\,X.
\label{eq:3}
\end{equation}
As shown in Fig. \ref{scbasic}, the half--plane with positive real part of impedance $Z$ is mapped to the interior of the unit circle of the $\Gamma$ plane.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\textwidth]{Smith_basic.pdf}
\caption{Illustration of the \textit{Moebius} transformation from the complex impedance plane to the $\Gamma$ plane, commonly known as \textit{Smith} chart.}
\label{scbasic}
\end{figure}
\subsubsection{Properties of the transformation}
In general, this transformation has two main properties:
\begin{itemize}
\item generalized circles are transformed to generalized circles (note that a straight line is nothing else than a circle with infinite radius and is therefore mapped as circle to the \textit{Smith} chart);
\item angles are preserved locally.
\end{itemize}
Figure \ref{prop} illustrates how certain basic shapes transform between impedance and $\Gamma$ planes.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{smith_transform_properties.pdf}
\caption{Illustration of the transformation of basic shapes from the $Z$ to the $\Gamma$ plane.}
\label{prop}
\end{figure}
\begin{figure}[H]
\centering\includegraphics[width=0.75\linewidth]{Smith.pdf}
\caption{Example of a typical \textit{Smith} chart}
\label{smith}
\end{figure}
\subsubsection{Normalization}
The \textit{Smith} chart is usually normalized to a (real) reference impedance $Z_{\text{0}}$:
\begin{equation}
z = \frac{Z}{Z_{\text{0}}}.
\label{eq:4}
\end{equation}
This simplifies the transformation:
\begin{equation}
\Gamma = \frac{z - 1}{z + 1} \hspace{0.5cm} \Leftrightarrow \hspace{0.5cm} z = \frac{1 + \Gamma}{1 - \Gamma}.
\label{eq:5}
\end{equation}
Although $Z_0 = 50~\Omega$ is the most common reference impedance (typical characteristic impedance of coaxial cables) and many applications use this normalization, any other real, positive value is valid. \textit{Therefore, it is crucial to check the normalization assumed, before using any chart.}
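The forward and inverse mappings of Eq. (\ref{eq:5}) can be sketched in a few lines of code (assuming the usual $Z_0 = 50~\Omega$ normalization; the example load is arbitrary):

```python
def z_to_gamma(z):
    """Normalized impedance -> reflection coefficient, Eq. (5)."""
    return (z - 1) / (z + 1)

def gamma_to_z(gamma):
    """Reflection coefficient -> normalized impedance, Eq. (5)."""
    return (1 + gamma) / (1 - gamma)

Z0 = 50.0                  # assumed reference impedance
Z = complex(25.0, 25.0)    # arbitrary example load, 25 + j25 Ohm
z = Z / Z0                 # normalization, Eq. (4)
gamma = z_to_gamma(z)      # = -0.2 + 0.4j, inside the unit circle
```

Any impedance with positive real part yields $|\Gamma| < 1$, i.e.\ a point inside the unit circle, in agreement with Fig. \ref{scbasic}.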
To the unfamiliar eye, the \textit{Smith} chart appears confusing at first glance,
with the fine grid of the $Z$-plane mapped
to a dense grid of many circles on the chart (Fig. \ref{smith}).
\subsubsection{Admittance plane}
The \textit{Moebius} transformation which generates the \textit{Smith} chart also provides a mapping of the complex admittance plane ($Y = 1/Z$, or normalized $y = 1/z$) into the same chart:
\begin{equation}
\Gamma = -\frac{y - \text{1}}{y + \text{1}} = -\frac{Y - Y_{\text{0}}}{Y + Y_{\text{0}}} = - \frac{1/Z - 1/Z_{\text{0}}}{1/Z + 1/Z_{\text{0}}} = \frac{Z - Z_{\text{0}}}{Z + Z_{\text{0}}} = \frac{z - \text{1}}{z + \text{1}}.
\label{eq:6}
\end{equation}
Using this transformation results in the same chart, but mirrored at the center of the \textit{Smith} chart (Fig. \ref{admit}).
\begin{figure}[t]
\centering
\includegraphics[width=0.85\linewidth]{admittance.pdf}
\caption{Mapping of the admittance plane into the $\Gamma$ plane}
\label{admit}
\end{figure}
Often both mappings, the admittance and the impedance plane, are combined into one chart, which then looks even more overwhelming. For reasons of simplicity, all illustrations in this article use only the mapping from the impedance to the $\Gamma$ plane.
\subsection{Navigation in the \textit{Smith} chart}
The representation of circuit elements in the \textit{Smith} chart is discussed in this section, starting with some important points inside the chart. The following examples of circuit elements illustrate their representation in the chart.
\subsubsection{Important points}
There are three important points in the chart:
\begin{enumerate}
\item Open circuit with $\Gamma = 1, z \rightarrow \infty$.
\item Short circuit with $\Gamma = -1, z = 0$.
\item Matched load with $\Gamma = 0, z = 1$.
\end{enumerate}
They are all located on the real axis: at its beginning and its end, which also lie
on the outer circle (imaginary axis), and at the center of the \textit{Smith} chart (Fig. \ref{points}).
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{important_points.pdf}
\caption{Important points in the \textit{Smith} chart}
\label{points}
\end{figure}
The upper half of the chart is ``inductive'', since it corresponds to the positive imaginary part of the impedance. The lower half is ``capacitive'', as it corresponds to the negative imaginary part of the impedance.
Concentric circles around the center represent constant reflection factors (Fig. \ref{concentric}). Their radius is directly proportional to the magnitude of $\Gamma$; therefore, a radius of $1/\sqrt{2} \approx 0.707$ corresponds to a reflection of 3 dB (half of the power is reflected), whereas the outermost circle (radius = 1) represents total reflection.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{concentric_circles.pdf}
\caption{Illustration of circles representing a constant reflection factor}
\label{concentric}
\end{figure}
Evidently, matching problems are clearly visualized in the Smith chart, since a mismatch will lead to a reflection coefficient larger than 0, see Eq. (\ref{eq:7}).
\begin{equation}
\text{Power into the load = forward power - reflected power: }P = \frac{1}{2}\left(\left|a\right|^{2} - \left|b\right|^{2}\right) = \frac{\left|a\right|^{2}}{2}\left(1 - \left|\Gamma\right|^{2}\right).
\label{eq:7}
\end{equation}
In Eq. (\ref{eq:7}) the European notation is used%
\footnote{The commonly used notation in the USA: power = $\left|a\right|^{2}$. These conventions have no impact on the S-parameters, but they are relevant for absolute power calculations. Since this is rarely used in the context of \textit{Smith} chart gymnastics, the actual power definition used is not critical.}: power $= \left|a\right|^{2}/2$. Furthermore, it should be noted that the factor $(1 - \left|\Gamma\right|^{2})$ accounts for the loss due to the impedance mismatch.
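Equation (\ref{eq:7}) can be illustrated with a short sketch in the European convention (the wave amplitudes are arbitrary example values):

```python
def power_into_load(a, gamma):
    """Net power absorbed by the load, Eq. (7), European convention."""
    return 0.5 * abs(a) ** 2 * (1.0 - abs(gamma) ** 2)

# A matched load (Gamma = 0) absorbs all of the forward power:
p_matched = power_into_load(2.0, 0.0)    # = 2.0
# With |Gamma| = 0.5, a quarter of the power is reflected:
p_mismatch = power_into_load(2.0, 0.5)   # = 1.5
```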
Even though we limit ourselves here to the mapping of the impedance plane to the $\Gamma$ plane,
the admittance is simple to determine, since
\begin{equation}
\Gamma(\frac{1}{z}) = \frac{1/z - 1}{1/z + 1} = \frac{1 - z}{1 + z} = -\left(\frac{z - 1}{z + 1}\right)\text{ or } \Gamma(\frac{1}{z}) = - \Gamma(z).
\label{eq:8}
\end{equation}
In the \textit{Smith} chart this fact is visualized as a 180$^{\circ}$ rotation of the vector of a given impedance (Fig. \ref{imptoad}).
\begin{figure}[t]
\centering
\includegraphics[width=0.3\linewidth]{imp_to_adm.pdf}
\caption{Conversion of an impedance to the corresponding admittance in the \textit{Smith} chart}
\label{imptoad}
\end{figure}
\subsubsection{Impedance of simple, passive lumped element circuits}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{series.pdf}
\caption{Circular traces of reactances with varying value connected in series to a fixed impedance}
\label{series}
\includegraphics[width=0.4\linewidth]{parallel.pdf}
\caption{Circular traces of reactances with varying value connected in parallel to a fixed impedance}
\label{para}
\end{figure}
Consider a simple passive circuit: a lumped, reactive element (inductance $L$, or capacitance $C$)
of arbitrary value connected in series to a resistance $R$.
The corresponding signature of this circuit in the \textit{Smith} chart, obtained by varying the
inductance or capacitance,
is a circle.
The trace of this circle follows a clockwise (inductance) or anticlockwise
(capacitance) movement (Fig. \ref{series}).
If a lumped, reactive element is connected in parallel to $R$, the pattern is basically the same,
but rotated by 180$^{\circ}$ (Fig. \ref{para}).
It is equivalent to the discussed admittance mapping.
Summarizing both cases results in a simple rule for the navigation in the \textit{Smith} chart: \\
\\
\textit{Reactive elements connected in series follow the trajectory of a circle in the impedance plane. Inductances move clockwise, capacitances move anticlockwise when increasing their value.
Reactive elements connected in parallel follow a circular trajectory in the admittance plane,
clockwise for capacitances, anticlockwise for inductances.}\\
\\
This rule is illustrated in Fig. \ref{rule}.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{series_and_parallel.pdf}
\caption{Navigation in the \textit{Smith} chart when connecting reactive elements.}
\label{rule}
\end{figure}
\subsubsection{Impedance transformation using a transmission-line}
The S-matrix of an ideal, lossless transmission-line of physical length $l$ is given by
\begin{equation}
S = \left[
\begin{array}{cc}
0 & e^{-j\beta l} \\
e^{-j\beta l} & 0 \\
\end{array}
\right],
\label{eq:9}
\end{equation}
where $\beta = 2\pi/\lambda$ is the propagation coefficient at the wavelength $\lambda$ ($\lambda = \lambda_{\text{0}}$ for $\epsilon_{\text{r}} = 1$).
The lossless transmission-line changes only the phase between its ports.
Adding a short piece of transmission-line, e.g.\ a coaxial cable, in front of a load impedance will turn the corresponding circle of $Z_{\text{load}}$ clockwise, effectively transforming the reflection factor $\Gamma _{\text{load}}$ (without line) into the new reflection factor $\Gamma _{\text{in}} = \Gamma _{\text{load}}e^{-j2\beta l}$. Graphically speaking, the vector corresponding to $\Gamma_{\text{in}}$ is rotated clockwise by an angle of 2$\beta l$ (Fig. \ref{transmissionline}).
\begin{figure}[H]
\centering
\includegraphics[width=0.3\linewidth]{transmission_line.pdf}
\caption{Adding a lossless transmission-line of physical length $l$ to an impedance $Z_{\text{load}}$}
\label{transmissionline}
\end{figure}
The input impedance of a lossless transmission-line of characteristic impedance $Z_0$, terminated with $Z_{\text{load}}$ is given by:
\begin{equation}
Z_{\text{in}} = Z_0 \frac{Z_{\text{load}}+j Z_0 \tan(\beta l)}{Z_0+j Z_{\text{load}}\tan(\beta l)}
\label{eq:11}
\end{equation}
and, as mentioned above, the corresponding reflection coefficient is
\begin{equation}
\Gamma _{\text{in}} = \Gamma _{\text{load}}e^{-j2\beta l}
\label{eq:refl}
\end{equation}
Depending on the values of $\beta$, $Z_0$, $Z_{\text{load}}$, and $l$, the input impedance will be quite different from the load impedance $Z_{\text{load}}$.
Special cases are:
\begin{itemize}
\item $l=\lambda/2$: $Z_{\text{in}}=Z_{\text{load}}$
\item $l=\lambda/4$: $Z_{\text{in}}=Z^2_0/Z_{\text{load}}$ (impedance transformer)
\item $Z_{\text{load}}=Z_0$: $Z_{\text{in}}=Z_0$ (matched termination)
\item $Z_{\text{load}}=j X_{\text{load}}$: $Z_{\text{in}}=j X_{\text{in}}$ (reactive load $\Rightarrow$ reactive input impedance)
\item $l\ll\lambda$: $Z_{\text{in}}=Z_{\text{load}}$ (basically no line present)
\end{itemize}
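The special cases above can be verified numerically; the following sketch implements Eq. (\ref{eq:11}) directly (the example impedances are arbitrary):

```python
import cmath
import math

def z_in(z_load, beta_l, z0=50.0):
    """Input impedance of a lossless transmission-line, Eq. (11);
    beta_l is the electrical length beta*l in radians."""
    t = cmath.tan(beta_l)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

half_wave = z_in(100.0, math.pi)         # lambda/2 line: reproduces Z_load
quarter_wave = z_in(100.0, math.pi / 2)  # lambda/4 line: Z0^2/Z_load = 25 Ohm
matched = z_in(50.0, 1.234)              # Z_load = Z0: always Z0
shorted = z_in(0.0, 0.3)                 # purely reactive: j Z0 tan(beta l)
```

Numerically, $\tan(\pi/2)$ is merely a very large floating-point number, so the quarter-wave result is exact only up to rounding.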
Terminating a transmission-line with a short circuit, $Z_{\text{load}}=0$, simplifies Eq.~\ref{eq:11} to
\begin{equation}
Z_{\text{in}} = j Z_0\tan(\beta l)
\label{eq:tlshort}
\end{equation}
which results in an ``inductive'' or ``capacitive'' impedance behavior at the input, depending on the length of the line (see Fig.~\ref{tangens}).
Interestingly, adding a transmission-line of length $\lambda/4$ results in a change of $\Gamma$ by a factor of $-1$:
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{tangens.pdf}
\caption{Impedance of a transmission line as a function of its length $l$}
\label{tangens}
\end{figure}
\begin{equation}
\Gamma_{\text{in}} = \Gamma_{\text{load}} e^{-j 2\beta l} = \Gamma_{\text{load}} e^{-j 2(\frac{2\pi}{\lambda}) l} \stackrel{l=\frac{\lambda}{4}}{=} \Gamma_{\text{load}} e^{-j\pi} = -\Gamma_{\text{load}}.
\label{eq:12}
\end{equation}
Again, this is equivalent to inverting an impedance $z$ to its admittance $1/z$, or to the clockwise rotation of the impedance vector by 180$^{\circ}$. In particular, starting from a short circuit ($Z_{\text{load}}=0$, i.e.\ $-1$ in the \textit{Smith} chart), adding a transmission-line of length $\lambda/4$ transforms it into an open circuit ($+1$ in the \textit{Smith} chart), and vice versa.
\subsubsection{Two-port examples}
The general form of Eq.~\ref{eq:refl} returns the input reflection coefficient $\Gamma_{\text{in}}$
for a two-port network terminated with $Z_{\text{load}}$, i.e.\ with a reflection coefficient
$\Gamma_{\text{load}}$ at the output port:
\begin{equation}
\Gamma_{\text{in}} = {S}_{11} + \frac{{S}_{12} {S}_{21} \Gamma_{\text{load}}}{1 - {S}_{22} \Gamma_{\text{load}}}.
\label{eq:13}
\end{equation}
Let us evaluate some examples, defined by their S-matrices, which map their impedances to particular characteristic lines and circles on the \textit{Smith} chart.
For this illustration, a very simplified \textit{Smith} chart, consisting of just the outermost circle (imaginary axis) and the real axis, is used.
\subsubsubsection{Transmission-line of length $\lambda/16$}
\label{tl}
The S-matrix of a $\lambda/16$ transmission-line is
\begin{equation}
\text{S} = \left[
\begin{array}{cc}
0 & \text{e}^{-\text{j}\frac{\pi}{8}} \\
\text{e}^{-\text{j}\frac{\pi}{8}} & 0 \\
\end{array}
\right]
\label{eq:14}
\end{equation}
which leads to an input reflection coefficient of
\begin{equation}
\Gamma_{\text{in}} = \Gamma_{\text{load}} \text{e}^{-\text{j}\frac{\pi}{4}}
\label{eq:15}
\end{equation}
This corresponds to a rotation of the real axis of the \textit{Smith} chart by an angle of 45$^{\circ}$, and hence a change of the reference plane of the chart (Fig. \ref{tlsimple}). Consider, for example, a transmission-line terminated by a short, hence $\Gamma_{\text{load}} = -1$. The resulting input reflection coefficient is then $\Gamma_{\text{in}} = -\text{e}^{-\text{j}\frac{\pi}{4}}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{transmission_line_simple.pdf}
\caption{Rotation of the real axis, therefore the reference plane of the \textit{Smith} chart
when adding a transmission-line}
\label{tlsimple}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=0.4\linewidth]{attenuator.pdf}
\caption{Effect of an attenuator in the \textit{Smith} chart}
\label{att}
\end{figure}
\subsubsubsection{3~dB attenuator}
The S-matrix of a 3~dB attenuator is given by
\begin{equation}
\text{S} = \left[
\begin{array}{cc}
0 & \frac{\sqrt{2}}{2} \\
\frac{\sqrt{2}}{2} & 0 \\
\end{array}
\right].
\label{eq:16}
\end{equation}
The resulting reflection coefficient is
\begin{equation}
\Gamma_{\text{in}} = \frac{\Gamma_{\text{load}}}{2}
\label{eq:17}
\end{equation}
In the \textit{Smith} chart, the connection of such an attenuator causes the outermost circle to shrink to a radius
of 0.5, see Fig. \ref{att}%
\footnote{An attenuation of 3 dB corresponds to a reduction by a factor 2 in power.}.
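Both two-port examples can be cross-checked against Eq. (\ref{eq:13}) with a few lines of code (a minimal sketch, with the output port shorted so that $\Gamma_{\text{load}} = -1$):

```python
import cmath
import math

def gamma_in(s11, s12, s21, s22, gamma_load):
    """Input reflection coefficient of a terminated two-port, Eq. (13)."""
    return s11 + (s12 * s21 * gamma_load) / (1 - s22 * gamma_load)

# lambda/16 line, Eqs. (14)/(15): Gamma_in = Gamma_load * exp(-j*pi/4)
e = cmath.exp(-1j * math.pi / 8)
g_line = gamma_in(0, e, e, 0, -1.0)   # = -exp(-j*pi/4), still |Gamma| = 1

# 3 dB attenuator, Eqs. (16)/(17): Gamma_in = Gamma_load / 2
s = math.sqrt(2) / 2
g_att = gamma_in(0, s, s, 0, -1.0)    # = -0.5
```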
\subsubsection{Resistive load}
Fig. \ref{res} illustrates how the real axis is traversed when a resistive load varies over $0 < z < \infty$.
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{resistor.pdf}
\caption{A load resistor of variable value in the simplified \textit{Smith} chart. Since the impedance has a real part only, the trace remains on the real axis of the $\Gamma$ plane.}
\label{res}
\end{figure}
\subsection{Examples for applications of the Smith chart}
In this section two examples of typical RF problems demonstrate how the \textit{Smith} chart greatly facilitates their solutions.
\subsubsection{A step in the characteristic impedance}
Consider a junction between two infinitely short cables, the incoming one with a characteristic impedance
of $Z_1=50~\Omega$, the outgoing one with $Z_2=75~\Omega$ (Fig. \ref{junct}).
Both ports are matched in their characteristic impedance.
The incident waves are denoted by $a_{i}$ ($i = 1,2$), the reflected waves by $b_{i}$.
The reflection coefficient at port 1 follows as
\begin{equation}
\Gamma_{1} = \frac{Z_2 - Z_1}{Z_2 + Z_1} = \frac{75 - 50}{75 + 50} = +0.2.
\label{eq:18}
\end{equation}
\begin{figure}[b]
\centering
\includegraphics[width=0.4\linewidth]{junction.pdf}
\caption{Junction between two coaxial cables, one with $Z_1=50~\Omega$, the other with $Z_2=75~\Omega$ characteristic impedance. Infinitely short cables are assumed --
only the junction is considered.}
\label{junct}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.4\linewidth]{tlsolution.pdf}
\caption{Visualization of the two-port formed by the two cables of different characteristic impedances}
\label{tlsolution}
\end{figure}
Thus, the voltage of the reflected wave at port 1 is 20\% of the incident wave ($b_{1} = 0.2\,a_{1}$), and the reflected power at port 1 is $\Gamma^2_1=0.04\equiv$ 4\%.
From conservation of energy, the transmitted power has to be 96\%, i.e.\ $b_{2}^{2} = 1-\Gamma^2_1=0.96$.
The voltage transmission coefficient in this particular case computes to $t = 1 + \Gamma = 1.2$, so the output voltage of the transmitted wave at port 2 is \emph{higher} than the voltage of the incident wave at port 1:
$V_{\text{transmitted}} = V_{\text{incident}} + V_{\text{reflected}} = 1 + 0.2 = 1.2$.
Also, note that this structure is not symmetric ($S_{11}=+0.2 \neq S_{22}=-0.2$), but reciprocal
($S_{21} = S_{12}=\sqrt{1-\Gamma^2_1}$).
As all impedances are real, the corresponding vectors show up in the \textit{Smith} chart on the real axis (Fig. \ref{tlsolution}).
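The numbers of this example can be reproduced with a short sketch:

```python
import math

Z1, Z2 = 50.0, 75.0
gamma_1 = (Z2 - Z1) / (Z2 + Z1)        # Eq. (18): +0.2
p_refl = gamma_1 ** 2                  # 4 % of the power is reflected
p_trans = 1.0 - p_refl                 # 96 % is transmitted
t = 1.0 + gamma_1                      # voltage transmission: 1.2
s22 = (Z1 - Z2) / (Z1 + Z2)            # -0.2: the junction is not symmetric ...
s21 = math.sqrt(1.0 - gamma_1 ** 2)    # ... but reciprocal (S21 = S12)
```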
\subsubsection{Quality ($Q$) factor of a cavity}
The second example shows the calculation of the quality factor of a cavity resonator with help of the \textit{Smith} chart.
A cavity at or near one of its eigenmode resonances can be approximated by a parallel $RLC$ equivalent circuit (Fig. \ref{rlc}).
\begin{figure}[b]
\centering
\includegraphics[width=0.7\linewidth]{equivalent_circuit_cavity.pdf}
\caption{Equivalent circuit of a cavity near resonance. The transformer describes the coupling of the cavity
(typically $Z_{\text{shunt}} \approx 1$ M$\Omega$, as seen by the beam) to the generator (often $Z_G = 50~\Omega$).}
\label{rlc}
\end{figure}
The resonance condition is given as
\begin{equation}
\omega L = \frac{1}{\omega C}
\label{eq:19}
\end{equation}
from which the resonance frequency follows as
\begin{equation}
\omega_{\text{res}} = \frac{1}{\sqrt{LC}} \text{\hspace{1cm} or \hspace{1cm} }f_{\text{res}} = \frac{1}{2 \pi}\frac{1}{\sqrt{LC}}.
\label{eq:20}
\end{equation}
The impedance $Z$ of the cavity equivalent circuit is simply
\begin{equation}
Z(\omega) = \frac{1}{\frac{1}{R} + \text{j}\omega C + \frac{1}{\text{j}\omega L}}.
\label{eq:24}
\end{equation}
The 3 dB bandwidth $\Delta f$ refers to the points where Re($Z$) = Im($Z$), which correspond to two vectors with an argument of 45$^{\circ}$ (Fig. \ref{3db}) and an impedance of $|Z_{(-3~\text{dB})}| = 0.707 R = R/\sqrt{2}$.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{3db_bandwidth.pdf}
\caption{Schematic drawing of the 3 dB bandwidth in the impedance plane}
\label{3db}
\end{figure}
In general, the quality factor $Q$ of a resonant circuit is defined as the ratio of the stored energy $W$ over the energy dissipated $P$ in one oscillation cycle:
\begin{equation}
Q = \frac{\omega W}{P}.
\label{eq:21}
\end{equation}
However, the $Q$ factor for a resonance can also be calculated using the 3~dB bandwidth and the resonance frequency:
\begin{equation}
Q = \frac{f_{\text{res}}}{\Delta f}.
\label{eq:22}
\end{equation}
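For the parallel $RLC$ circuit of Fig. \ref{rlc}, the 3 dB bandwidth follows in closed form from Eq. (\ref{eq:24}): $|Z| = R/\sqrt{2}$ where $|\omega C - 1/(\omega L)| = 1/R$, and the two positive solutions differ by exactly $\Delta\omega = 1/(RC)$. A minimal sketch (the element values are arbitrary examples, not measured cavity data):

```python
import math

def z_cavity(w, r, l, c):
    """Impedance of the parallel RLC equivalent circuit, Eq. (24)."""
    return 1.0 / (1.0 / r + 1j * w * c + 1.0 / (1j * w * l))

def q_factor(r, l, c):
    """Q from resonance frequency and exact 3 dB bandwidth, Eq. (22)."""
    w_res = 1.0 / math.sqrt(l * c)     # Eq. (20), angular frequency
    delta_w = 1.0 / (r * c)            # exact 3 dB bandwidth (angular)
    return w_res / delta_w             # equals R * sqrt(C/L)

# Hypothetical equivalent circuit: R = 1 MOhm, L = 100 nH, C = 10 pF
q = q_factor(1e6, 100e-9, 10e-12)      # = 10000
```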
For a cavity, three different quality factors are defined:
\begin{itemize}
\item $Q_{0}$ (unloaded $Q$): $Q$ factor of the unperturbed system, i.e. the stand-alone cavity;
\item $Q_{\text{L}}$ (loaded $Q$): $Q$ factor of the cavity when connected to a generator and/or measurement circuits;
\item $Q_{\text{ext}}$ (external $Q$): $Q$ factor that describes the degradation of $Q_{0}$ due to the generator and/or diagnostic impedances.
\end{itemize}
All these $Q$ factors are linked via a simple relation:
\begin{equation}
\frac{1}{Q_{\text{L}}} = \frac{1}{Q_{0}} + \frac{1}{Q_{\text{ext}}}.
\label{eq:23}
\end{equation}
The coupling coefficient $\beta$ is then defined as
\begin{equation}
\beta = \frac{Q_{0}}{Q_{\text{ext}}}.
\label{eq:25}
\end{equation}
\emph{This coupling coefficient must not be confused with the propagation coefficient of transmission-lines, which is also denoted by $\beta$.}
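Combining Eq. (\ref{eq:23}) with Eq. (\ref{eq:25}) gives $Q_{\text{L}} = Q_{0}/(1+\beta)$; a short sketch (the $Q$ values are arbitrary examples):

```python
def loaded_q(q0, beta):
    """Loaded Q from unloaded Q and coupling coefficient, Eqs. (23), (25)."""
    q_ext = q0 / beta                        # Eq. (25)
    return 1.0 / (1.0 / q0 + 1.0 / q_ext)    # Eq. (23)

# Critical coupling (beta = 1) halves the unloaded Q:
ql_critical = loaded_q(10000.0, 1.0)   # -> 5000.0
# Overcritical coupling (beta = 2) reduces it further:
ql_over = loaded_q(9000.0, 2.0)        # -> 3000.0
```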
In the \textit{Smith} chart, a resonant circuit shows up as a circle (Fig. \ref{qfactor}, dashed red circle, shown in the ``detuned short'' position). The larger the circle, the stronger the coupling. Three types of coupling are distinguished, depending on the range of $\beta$ (= size of the circle, assuming the circle is in the ``detuned short'' position):
\begin{figure}[t]
\centering
\includegraphics[width=0.5\linewidth]{qfactor.pdf}
\caption{Evaluation of the different $Q$ factors of a resonant cavity with help of the \textit{Smith} chart}
\label{qfactor}
\end{figure}
\begin{itemize}
\item Undercritical coupling ($0 < \beta < 1$): the radius of the resonance circle is smaller than 0.5. Hence, the center of the chart ($\Gamma=0$) lies outside the circle.
\item Critical coupling ($\beta = 1$): the radius of the resonance circle is exactly 0.5. Hence, the circle crosses $\Gamma=0$ at the resonance frequency $f_{\text{res}}$.
\item Overcritical coupling ($1 < \beta < \infty$): the radius of the resonance circle is larger than 0.5. Hence, the center of the chart lies inside the circle.
\end{itemize}
In practice, the circle may be rotated around the origin due to the transmission lines between the resonant circuit and the measurement device.
From the different marked frequency points in Fig. \ref{qfactor} the 3~dB bandwidth, and thus the quality factors $Q_{0}$, $Q_{\text{L}}$ and $Q_{\text{ext}}$ are determined as follows:
\begin{itemize}
\item The unloaded $Q$ is determined from $f_{5}$ and $f_{6}$. The condition for these points is Re($Z$) = Im($Z$), with the resonance circle in the ``detuned short'' position.
\item The loaded $Q$ is determined from $f_{1}$ and $f_{2}$. The condition to find these points is $\left|\text{Im}(S_{11})\right| \rightarrow$ max.\ in ``detuned short'' position.
\item The external $Q$ is calculated from $f_{3}$ and $f_{4}$. The condition to determine these points is $Z$ = $\pm \text{j}$ in the ``detuned open'' position, which is equivalent to $Y$ = $\pm \text{j}$ in the ``detuned short'' position.
\end{itemize}
To determine the points $f_{1}$ to $f_{6}$ with a network analyzer, the following steps are applicable:
\begin{itemize}
\item $f_{1}$ and $f_{2}$: set the marker format to Re($S_{11}$) + j Im($S_{11}$) and determine the two points where $\left|\text{Im}(S_{11})\right|$ reaches its maximum.
\item $f_{3}$ and $f_{4}$: set the marker format to $Z$ and find the two points where $Z = \pm$ j.
\item $f_{5}$ and $f_{6}$: set the marker format to $Z$ and locate the two points where Re($Z$) = Im($Z$).
\end{itemize}
\section{Summary}
Some fundamental concepts of RF devices, instruments, and signal-processing techniques
have been presented in this introduction to RF measurement techniques.
The advantages of various measurement methods using spectrum and network analyzers
have been emphasized.
In the last section, the definition of the \textit{Smith} chart
and its usage were illustrated with several examples.
This article supports the practical part of the CAS RF course and the CAS 2018 special topic on beam instrumentation,
and serves as background information.
\section{Acknowledgments}
Greatest respect and many thanks go to \emph{Fritz Caspers}, who started this CAS RF training initiative!
Many contributions, ideas, and concepts in this text and in the lecture trace back to him
and to the numerous contributions and support of his former Ph.D.\ students!
\label{sec:intro}
Boij--S\"oderberg theory is the study of the
cone of Betti diagrams over the standard graded polynomial ring
$S=\Bbbk[x_1,\dots, x_n]$ and -- dually -- the
cone of cohomology tables of coherent sheaves on $\mathbb{P}^{n-1}_\Bbbk$,
where $\Bbbk$ is a field. The extremal rays of these
cones correspond to special modules and sheaves:
Cohen--Macaulay modules with pure resolutions (Definition~\ref{def:pure:res})
and supernatural sheaves (Definition~\ref{def:supernatural}), respectively.
Each set of extremal rays carries a partial order $\preceq$
(Definitions~\ref{defn:partial:deg} and \ref{defn:partial:root}) that
induces a simplicial decomposition of the corresponding cone.
Each partial order $\preceq$ is defined in terms of certain combinatorial
data associated to these special modules and sheaves.
For a module with a pure resolution, this data is a degree sequence, and for
a supernatural sheaf, this data is a root sequence.
Our main results reinterpret these partial orders $\preceq$
in terms of the existence of nonzero homomorphisms
between Cohen--Macaulay modules with pure resolutions
and between supernatural sheaves.
\begin{thm}\label{thm:poset:deg:main}
Let $\rho_d$ and $\rho_{d'}$ be extremal rays of the cone of Betti diagrams
for $S$ corresponding to Cohen--Macaulay modules with pure resolutions of
types $d$ and $d'$, respectively.
Then $\rho_d \preceq \rho_{d'}$ if and only if there
exist Cohen--Macaulay modules $M$ and $M'$ with pure resolutions of types
$d$ and $d'$, respectively, with $\operatorname{Hom}_S(M',M)_{\leq 0}\ne 0$.
\end{thm}
\begin{thm}\label{thm:poset:root:main}
Let $\rho_f$ and $\rho_{f'}$ be extremal rays of the cone of cohomology
tables for $\mathbb{P}^{n-1}$ corresponding to supernatural sheaves of types $f$
and $f'$, respectively.
Then $\rho_f\preceq \rho_{f'}$ if and only if there exist
supernatural sheaves $\mathcal{E}$ and $\mathcal{E}'$ of types $f$ and $f'$, respectively,
with $\operatorname{Hom}_{\mathbb{P}^{n-1}}(\mathcal{E}',\mathcal{E})\ne 0$.
\end{thm}
Though the statements of these two theorems are quite parallel,
Theorem~\ref{thm:poset:deg:main} is far more subtle than
Theorem~\ref{thm:poset:root:main}. Theorem~\ref{thm:poset:root:main}
follows nearly directly from the Eisenbud--Schreyer pushforward
construction of supernatural sheaves, but without modification,
it is not clear how to compare the modules constructed
in~\cite[\S5]{EiScConjOfBS07}.
We illustrate this via an example. Let $n=3$, $d=(0,2,3,5)$,
$d'=(0,3,9,10)$, and $M$ and $M'$ be finite length modules with
pure resolutions of types $d$ and $d'$, as constructed
in~\cite[\S5]{EiScConjOfBS07}.
We know of no method to produce a nonzero element of $\operatorname{Hom}(M,M')_{\leq 0}$,
even in this specific case.
The difficulty here stems from differences in the constructions of $M$ and
$M'$: the module $M$ is constructed by pushing forward a complex of
projective dimension $5$ along $\mathbb{P}^2\times (\mathbb{P}^1)^2\to \mathbb{P}^2$,
whereas $M'$ is constructed by pushing forward a complex of
projective dimension $10$ along $\mathbb{P}^2\times \mathbb{P}^2\times \mathbb{P}^5\to\mathbb{P}^2$.
Thus, the construction of~\cite[\S5]{EiScConjOfBS07} does not
even suggest that Theorem~\ref{thm:poset:deg:main} ought to be true.
Our motivation for conjecturing the statement of
Theorem~\ref{thm:poset:deg:main} -- and the first key idea behind its proof --
is based on a flexible version of the Eisenbud--Schreyer construction of
pure resolutions. This is Construction~\ref{modif:es} below,
and we show that the basic results of~\cite[\S5]{EiScConjOfBS07} can be
adapted to this construction. This extension enables us to use a single
projection map to simultaneously produce modules $N$ and $N'$ with pure
resolutions of types $d$ and $d'$. In the case under consideration,
we construct both $N$ and $N'$ by pushing forward complexes
of projective dimension $10$ along the projection map
$\mathbb{P}^2\times (\mathbb{P}^1)^7\to \mathbb{P}^2$.\footnote{We note that $M\ne N$
and $M'\ne N'$ in this example.}
We may then produce elements of $\operatorname{Hom}(N,N')_{\leq 0}$ by working with
the complexes on the source $\mathbb{P}^2\times (\mathbb{P}^1)^7$ of the projection map.
However, finding such a nonzero element poses a second technical challenge
in the proof of Theorem~\ref{thm:poset:deg:main}. This requires an explicit
and somewhat delicate computation involving the pushforward of a morphism
of complexes along the projection $\mathbb{P}^2\times (\mathbb{P}^1)^7\to \mathbb{P}^2$.
This computation is carried out in the proof of
Theorem~\ref{thm:deg:after:reduction}, thus providing a new understanding
of how certain modules with pure resolutions are related.
\begin{figure}
\begin{tikzpicture}[scale=1.2]
\draw[-](.5,.5)--(1,3);
\draw[-](.5,.5)--(2,2.5);
\draw[-](.5,.5)--(3,2.5);
\draw[-](.5,.5)--(3.6,2.8);
\draw[-](.5,.5)--(2.5,2.5);
\draw[dashed,-](.5,.5)--(3.2,3.5);
\draw[dashed,-,thick](.5,.5)--(2.2,3.5);
\draw[-](1,3)--(2,2.5)--(3,2.5)--(3.6,2.8)--(3.2,3.5)--(2.2,3.5)--cycle;
\draw[->,thick](3.6,2.2)--(7.6,2.2);
\draw (5.5,2.5) node { The partial order $\preceq$};
\draw (5.6,1.85) node { induces the fan structure.};
\end{tikzpicture}
\qquad
\begin{tikzpicture}[scale=1.2]
\draw[-](.5,.5)--(1,3);
\draw[-](.5,.5)--(2,2.5);
\draw[-](.5,.5)--(3,2.5);
\draw[-](.5,.5)--(3.6,2.8);
\draw[-](2,2.5)--(2.2,3.5);
\draw[-](3,2.5)--(3.2,3.5);
\draw[-](3.2,3.5)--(2.5,2.5);
\draw[-](2.2,3.5)--(2.5,2.5);
\draw[-](.5,.5)--(2.5,2.5);
\draw[dashed,-](.5,.5)--(3.2,3.5);
\draw[dashed,-,thick](.5,.5)--(2.2,3.5);
\draw[-](1,3)--(2,2.5)--(3,2.5)--(3.6,2.8)--(3.2,3.5)--(2.2,3.5)--cycle;
\end{tikzpicture}
\caption{
The partial order $\preceq$ on the extremal rays induces a simplicial
decomposition of the cone of Betti diagrams, where the
simplices correspond to chains of extremal rays with respect to the partial order.
This simplicial decomposition is essential to many applications of
Boij--S\"oderberg theory.}
\end{figure}
Besides providing greater insight into the structure of modules with pure
resolutions and supernatural sheaves, our results have two further
implications. First, the partial orders $\preceq$ are defined in
terms of the combinatorial data of degree sequences and root sequences
(see Sections~\ref{sec:prelim deg} and~\ref{sec:prelim root}), and
depend on the total order of $\mathbb{Z}$; thus, they are only formally
related to $S$ and $\mathbb{P}^{n-1}$. However, our reinterpretations of
$\preceq$ in terms of module- and sheaf-theoretic properties
suggest the naturality not only of $\preceq$, but also of the
induced simplicial decompositions of both cones. In other words,
while there exist graded modules whose Betti diagrams can be written
as a positive sum of pure tables in several ways, Theorem~\ref{thm:poset:deg:main}
suggests that the most natural of these decompositions is the
Boij--S\"oderberg decomposition produced
by~\cite{EiScConjOfBS07}*{Decomposition Algorithm}, and similarly for
Theorem~\ref{thm:poset:root:main} and cohomology tables.
A second implication involves the extension of
Boij--S\"oderberg theory to more complicated projective
varieties or graded rings.
For instance, the cone of free resolutions over a quadric hypersurface ring of
$\Bbbk[x,y]$ is described in~\cite{bbeg}. The extremal rays in this case correspond
to pure resolutions of finite or infinite length. We could thus consider
a partial order defined in parallel to Boij--S\"oderberg's original definition (based
on the combinatorial data of a degree sequence), or, following our result, we could consider
a partial order defined in terms of nonzero homomorphisms.
These partial orders are different in this hypersurface case; only the second definition leads to
a decomposition algorithm for Betti diagrams. See Example~\ref{ex:hypersurface} below for details.
For more general graded rings there even exist extremal rays that do
not correspond to pure resolutions. (Similar statements hold for more
general projective varieties.) There is thus no obvious extension of
Boij--S\"oderberg's original partial order to these cases. By
contrast, the reinterpretations of $\preceq$ provided by
Theorems~\ref{thm:poset:deg:main} and~\ref{thm:poset:root:main} are
readily applicable to arbitrary projective varieties and graded rings.
We discuss one such case in Example~\ref{ex:bigraded}.
Theorems~\ref{thm:poset:deg:main} and~\ref{thm:poset:root:main} hold
over an arbitrary field $\Bbbk$, and their proofs involve variants of
the constructions in~\cite{EiScConjOfBS07} for supernatural sheaves
and modules with pure resolutions. When $\operatorname{char}(\Bbbk)=0$, there also
exist equivariant constructions of supernatural vector
bundles~\cite{EiScConjOfBS07}*{Thm.~6.2} and of finite length modules
with pure resolutions~\cite{efw}*{Thm.~0.1}. For these we prove the most
natural equivariant analogues of
our main results.
\begin{thm}\label{thm:equivariant:deg}
Let $V$ be an $n$-dimensional $\Bbbk$-vector space with
$\operatorname{char}(\Bbbk)=0$, and let $\rho_d$ and $\rho_{d'}$ be
the extremal rays of the cone of Betti diagrams for $S=\operatorname{Sym}(V)$
corresponding to finite length modules
with pure resolutions of types $d$ and $d'$.
Then $\rho_d\preceq \rho_{d'}$ if and only if there
exist finite length ${\bf GL}(V)$-equivariant
modules $M$ and $M'$
with pure resolutions of types $d$ and $d'$, respectively,
with $\operatorname{Hom}_{{\bf GL}(V)}(M',M)_{\le 0}\ne 0$.
\end{thm}
\begin{thm}\label{thm:equivariant:root}
Let $V$ be an $n$-dimensional $\Bbbk$-vector space with
$\operatorname{char}(\Bbbk)=0$, and let $\rho_f$ and $\rho_{f'}$ be the extremal rays
of the cone of cohomology tables for $\mathbb{P}^{n-1} = \mathbb{P}(V)$
corresponding to supernatural vector bundles of types $f$ and $f'$.
Then $\rho_f\preceq \rho_{f'}$ if and only if there exist
${\bf GL}(V)$-equivariant supernatural vector bundles $\mathcal{E}$ and $\mathcal{E}'$ of
types $f$ and $f'$, respectively, with $\operatorname{Hom}_{{\bf GL}(V)}(\mathcal{E}',\mathcal{E})\ne 0$.
\end{thm}
The action of ${\bf GL}(V)$ has two orbits on the maximal ideals of $S$:
one consisting of the maximal ideal $(x_1, \dots, x_n)$ and the other
consisting of its complement. An equivariant Cohen--Macaulay module
therefore has only two options for its support, and hence either has finite
length or must be a free module. Thus the finite length hypothesis in
Theorem~\ref{thm:equivariant:deg} is the natural equivariant analogue
of the Cohen--Macaulay hypothesis in Theorem~\ref{thm:poset:deg:main}.
As above, the statement for pure resolutions is more subtle than the
corresponding statement for supernatural vector bundles. The modules
constructed in \cite[\S 3]{efw} do not have nonzero equivariant
homomorphisms between them, but the explicit combinatorics of the
representation theory involved suggests a minor modification which
does work. This also suggests how the maps should be defined in terms of
the explicit presentation of the modules; the remaining nontrivial step is to
show that these maps are in fact well-defined. The main obstacle is
that such maps must be compatible with the actions of both the
general linear group and the symmetric algebra, and the interplay
between the two is delicate. This key issue in the proof of
Theorem~\ref{thm:equivariant:deg} is accomplished through a careful
computation involving Pieri maps (combined with results from~\cite{sam}).
\subsection*{Outline}
In this paper, we first focus on the cone of Betti diagrams for $S$.
In Section~\ref{sec:prelim deg}, we prove the reverse implications of
Theorems~\ref{thm:poset:deg:main} and~\ref{thm:equivariant:deg}.
We then construct nonzero morphisms
between modules with pure resolutions.
Sections~\ref{sec:construct deg} and~\ref{sec:equiv:deg},
respectively, address the forward directions of
Theorems~\ref{thm:poset:deg:main} and~\ref{thm:equivariant:deg}.
We next address the cone of cohomology tables for $\mathbb{P}^{n-1}$.
In Section~\ref{sec:prelim root}, we prove the reverse
implications of Theorems~\ref{thm:poset:root:main}
and~\ref{thm:equivariant:root}. We then turn to the construction of
nonzero morphisms between supernatural sheaves:
Sections~\ref{sec:construct root} and~\ref{sec:equiv:root},
respectively, address the forward directions of
Theorems~\ref{thm:poset:root:main} and~\ref{thm:equivariant:root}.
Finally, we provide in Section~\ref{sec:extensions}
a brief discussion of how Theorem~\ref{thm:poset:deg:main} has been
applied in the study of Boij--S\"oderberg theory over other graded rings.
We suggest the survey~\cite{ES:ICMsurvey}
to the reader seeking additional background on Boij--S\"oderberg theory.
\subsection*{Acknowledgements}
We would like to thank J.~Burke, D.~Eisenbud, C.~Gibbons, W. F.~Moore, F.-O.~Schreyer, B.~Ulrich, and J.~Weyman
for helpful discussions. Significant parts of this work were completed
at the Pan American Scientific Institute Summer School on
``Commutative Algebra and its Connections to Geometry'' in Olinda,
Brazil, and when the second author visited Purdue University; we thank
both of these institutions for their hospitality. The computer
algebra system \texttt{Macaulay2} \cite{M2} provided valuable
assistance in studying examples.
\section{The poset of degree sequences}
\label{sec:prelim deg}
Let $M$ be a finitely generated graded $S$-module.
The \defi{$(i,j)$th graded Betti number} of $M$,
denoted $\beta_{i,j}(M)$, is $\dim_\Bbbk \mathrm{Tor}_i^S(\Bbbk, M)_j$.
The \defi{Betti diagram} of $M$ is a table, with rows indexed by $\mathbb{Z}$
and columns by $0, \dots, n$,
such that the entry in column $i$ and row $j$ is $\beta_{i,i+j}(M)$.
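For instance, suppose $S=\Bbbk[x,y]$ and $M = S/\langle x^2\rangle$ (a module that reappears below). Then the minimal free resolution of $M$ and its nonzero graded Betti numbers are

```latex
\[
0 \leftarrow M \leftarrow S \leftarrow S(-2) \leftarrow 0,
\qquad
\beta_{0,0}(M) = 1, \quad \beta_{1,2}(M) = 1,
\]
```

so the Betti diagram of $M$ has a $1$ in column $0$, row $0$, and a $1$ in column $1$, row $1$ (as $\beta_{1,2} = \beta_{1,1+1}$); in the terminology of Definition~\ref{def:pure:res} below, $M$ has a pure resolution of type $(0,2)$.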
A sequence $d=(d_0, \dots, d_n)\in \left( \mathbb{Z}\cup \{\infty\} \right)^{n+1}$
is called a \defi{degree sequence} for $S$ if $d_i>d_{i-1}$ for all $i$
(with the convention that $\infty>\infty$).
The \defi{length} of $d$, denoted $\ell(d)$,
is the largest integer $t$ such that $d_t$ is finite.
\begin{definition}\label{def:pure:res}
A graded $S$-module $M$ is said to have a \defi{pure resolution of type
$d$} if a
minimal free resolution of $M$ has the form
\[
0\leftarrow M \leftarrow S(-d_0)^{\beta_{0,d_0}} \leftarrow S(-d_1)^{\beta_{1,d_1}} \leftarrow
\dots
\leftarrow
S(-d_{\ell(d)})^{\beta_{\ell(d),d_{\ell(d)}}} \leftarrow 0.
\qedhere
\]
\end{definition}
For every degree sequence $d$, there exists a
Cohen--Macaulay module with a
pure resolution of type $d$~\cite{EiScConjOfBS07}*{Theorem~0.1} (see also~\cite{boij-sod1}*{Conjecture~2.4}, \cite{efw}*{Theorem~0.1}).
The Betti diagram of any finitely generated $S$-module can be written as a positive
rational combination of the Betti diagrams of Cohen--Macaulay modules with
pure resolutions (see~\cite{EiScConjOfBS07}*{Theorem~0.2} and~\cite{BoijSoderbergNonCM08}*{Theorem~2}).
The \defi{cone of Betti diagrams} for $S$
is the convex cone inside $\bigoplus_{j \in \mathbb{Z}}\mathbb{Q}^{n+1}$
generated by the Betti diagrams of all finitely generated $S$-modules.
Each degree sequence $d$ corresponds to a unique extremal ray of this cone,
which we denote by $\rho_d$, and
every extremal ray is of the form $\rho_d$ for some degree sequence $d$.
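Concretely, the ray $\rho_d$ is spanned by the Betti diagram of any Cohen--Macaulay module with a pure resolution of type $d$: by the Herzog--K\"uhl equations (a standard fact in Boij--S\"oderberg theory, quoted here without proof and used only for illustration), such a diagram satisfies $\beta_i \propto \prod_{j\neq i}|d_j-d_i|^{-1}$. The following sketch (ours) computes the resulting integral Betti sequence for a degree sequence with finite entries:

```python
from fractions import Fraction
from math import lcm, prod

def pure_betti(d):
    # Herzog-Kuhl proportions: beta_i ~ prod_{j != i} 1/|d_j - d_i|;
    # clear denominators to obtain an integral point on the ray rho_d.
    # (Degree sequences are strictly increasing, so entries are distinct.)
    raw = [Fraction(1, prod(abs(dj - di) for dj in d if dj != di))
           for di in d]
    m = lcm(*(f.denominator for f in raw))
    return [int(f * m) for f in raw]
```

For example, `pure_betti((0, 1, 2, 3))` returns the Koszul Betti numbers $[1,3,3,1]$, and `pure_betti((0, 2, 4))` returns $[1,2,1]$, matching $S/\langle x^2,y^2\rangle$ over $\Bbbk[x,y]$.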
\begin{definition}\label{defn:partial:deg}
For two degree sequences $d$ and $d'$, we say that $d\preceq d'$ and that
$\rho_d\preceq \rho_{d'}$ if $d_i\leq d_i'$ for all $i$.
\end{definition}
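Definition~\ref{defn:partial:deg} is purely coordinatewise, so comparability is straightforward to test; here is an illustrative sketch (ours, with $\infty$ encoded as `math.inf`, and with both sequences written at the same length by padding with $\infty$):

```python
from math import inf

def is_degree_sequence(d):
    # strictly increasing entries, with the convention that inf > inf
    return all(b > a or (a == inf and b == inf) for a, b in zip(d, d[1:]))

def leq(d, dp):
    # d <= d' in the partial order: d_i <= d'_i for all i
    return all(a <= b for a, b in zip(d, dp))
```

For example, $d=(0,2,4,5,6)$ and $d'=(1,2,4,7,\infty)$ (the pair studied in Example~\ref{ex:degseqs} below) satisfy $d\preceq d'$, while $(0,3,6)$ and $(0,2,7)$ are incomparable.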
This partial order induces a simplicial fan structure on the cone of Betti diagrams,
where simplices correspond to chains of degree sequences under the partial
order $\preceq$.
We now show that the existence of a nonzero homomorphism between
two modules with pure resolutions implies the comparability of their
corresponding degree sequences.
This result provides the reverse implications for
Theorems~\ref{thm:poset:deg:main} and~\ref{thm:equivariant:deg}.
\begin{prop}\label{prop:half:poset:deg}
Let $M$ and $M'$ be graded Cohen--Macaulay $S$-modules
with pure resolutions of types $d$ and $d'$, respectively.
If $\operatorname{Hom}(M',M)_{\leq 0}\ne 0$, then $d\preceq d'$.
\end{prop}
\begin{proof}
Write $\ell' = \ell(d')$ and $\ell = \ell(d)$. If $\ell' > \ell$, then
$\operatorname{codim} M' > \operatorname{codim} M$, and, by~\cite{BrHe:CM}*{Propositions~1.2.3
and~1.2.1}, $\operatorname{Hom}(M', M) = 0$.
Therefore we may assume that $\ell' \leq \ell$.
By hypothesis, we may fix a nonzero homomorphism
$\phi \in \operatorname{Hom}(M',M)_t$ for some $t\leq 0$.
Let $F_\bullet$ and $F'_\bullet$ be minimal graded free resolutions of $M$
and $M'$, respectively, and let $\left\{ \phi_i\colon F'_i \to F_i
\right\}_{i \geq 0}$ be the comparison maps in a lifting of $\phi$.
Suppose by way of contradiction that
there is a $j$ such that $d'_j < d_j$.
Since $d'_j < d_j$ and $\phi_j$ is a map of degree $t\leq 0$, the
generators of $F'_j$ map under $\phi_j$ to elements of $F_j$ of degree
at most $d'_j < d_j$; as $F_j$ is generated in degree $d_j$, this
forces $\phi_j = 0$. It follows that each $\phi_i$ with $j \leq i \leq
\ell'$ can be made zero after modifying the lifting by a homotopy.
Write $(-)^\vee = \operatorname{Hom}_S(-, S(-n))$.
Since $M$ and $M'$ are Cohen--Macaulay,
we note that $(F_\bullet)^\vee$ and $(F'_\bullet)^\vee$
are minimal graded free resolutions of
$\Ext^\ell_S(M,S(-n))$ and $\Ext^{\ell'}_S(M', S(-n))$.
Further, the maps $\{\phi_i^\vee\}_{i \geq 0}$ define an element of
$\Ext^{\ell-\ell'}_S\left(\Ext_S^\ell(M, S(-n)), \Ext_S^{\ell'}(M', S(-n))\right)$.
In fact, if we write
$N = \operatorname{coker} \left((F_{\ell'-1})^\vee \longrightarrow (F_{\ell'})^\vee\right)$,
then $(\phi_{\ell'})^\vee \colon N \longrightarrow \Ext_S^{\ell'}(M', S(-n))$
is the zero homomorphism.
Hence $\phi_i^\vee = 0$ for all $0 \leq i \leq \ell'$, and therefore $\phi = 0$, which is a contradiction.
\end{proof}
Proposition~\ref{prop:half:poset:deg} fails if we do not
assume that $M'$ is Cohen--Macaulay.
For example, consider $S = \Bbbk[x,y]$, $M = S/\langle x^2\rangle $, and
$M' = S \oplus \Bbbk$.
We used the hypothesis that $M'$ is Cohen--Macaulay
to have that $\operatorname{codim} M' = \ell(d')$ and that
$\operatorname{Hom}_S(F'_\bullet, S(-n))$ is a resolution.
\section{Construction of morphisms between modules with pure resolutions}
\label{sec:construct deg}
In Theorem~\ref{thm:poset:deg:main}, it is necessary to consider more
than just $\operatorname{Hom}(M',M)_0$.
For instance, if $n=2$, $d=(0,1,2)$, and $d'=(1,2,3)$, then any $M$ and
$M'$ with pure resolutions of types $d$ and $d'$ will be isomorphic to
$\Bbbk^m$ and $\Bbbk(-1)^{m'}$, respectively, for some integers $m,m'$.
In this case, $\operatorname{Hom}(M',M)_0=0$, whereas $\operatorname{Hom}(M',M)_{-1}\ne 0$.
However, it is possible to reduce to the consideration of $\operatorname{Hom}(M',M)_0$.
To do this, let $t := \min \{d'_i - d_i \mid d'_i\neq \infty \}$.
By replacing $d'$ by $d' - (t,\dots,t)$,
the forward direction of Theorem~\ref{thm:poset:deg:main}
is an immediate corollary of the following result.
\begin{thm}
\label{thm:deg:after:reduction}
Let $d \preceq d'$ be degree sequences for $S$ with
$d_j = d'_j$ for some $0 \leq j \leq \ell(d')$.
Then there exist finitely generated graded Cohen--Macaulay
modules $M$ and $M'$ with pure resolutions of types $d$ and $d'$, respectively,
with $\operatorname{Hom}(M',M)_0\ne 0$.
\end{thm}
\begin{remark}
The homomorphism group in Theorems~\ref{thm:poset:deg:main}
and~\ref{thm:deg:after:reduction}
is nonzero only for specific choices of the modules $M$ and $M'$.
For two degree sequences $d \preceq d'$, there exist many pairs of modules
$M$, $M'$ with pure resolutions of types $d$ and $d'$, respectively,
such that $\operatorname{Hom}(M',M)_{\leq 0}=0$.
For example, take $d=d'=(0,2,4)$,
$M = S/\< x^2,y^2 \>$, and $M' = S/\< l_1^2,l_2^2 \>$
for general linear forms $l_1$ and $l_2$.
As another example, consider $d = (0,3,6) \prec d'= (0,4,8)$. When $M
= S/\< x^3,y^3 \>$ and $M' = S/\< f,g \>$ for general quartic
forms $f$ and $g$, we again have $\operatorname{Hom}(M',M)_{\leq 0}=0$.
\end{remark}
The proof of Theorem~\ref{thm:deg:after:reduction} is given at the end
of this section and involves two main steps.
\begin{enumerate}
\item\label{eq:deg:step:1}
Construct twisted Koszul complexes $\mathcal{K}_\bullet$ and $\mathcal{K}_{\bullet}'$ on
a product $\mathbb{P}$ of projective spaces (including a copy of
$\mathbb{P}^{n-1}$) and push them forward along the
projection $\pi\colon \mathbb{P}\to \mathbb{P}^{n-1}$. This yields pure
resolutions $F_\bullet$ and $F_\bullet'$ of types $d$ and $d'$ that
respectively resolve modules $M$ and $M'$.
\item\label{eq:deg:step:2}
Show that there exists a morphism $h_\bullet\colon \mathcal{K}'_\bullet \to
\mathcal{K}_\bullet$ such that the induced map $\nu_\bullet \colon F_\bullet' \to
F_\bullet$ is not null-homotopic.
This yields a nonzero element $\psi\in \operatorname{Hom}_S(M', M)_0$.
\end{enumerate}
We achieve \eqref{eq:deg:step:1} by modifying the construction of pure
resolutions by Eisenbud and Schreyer \cite{EiScConjOfBS07}*{\S5}.
We replace their use of $\prod_i \mathbb{P}^{d_i - d_{i-1}}$ with a product of
copies of $\mathbb{P}^1$. This enables us to simultaneously
construct pure resolutions of types
$d$ and $d'$ and a nonzero map between the modules they resolve. The
details of \eqref{eq:deg:step:1} are contained in
Construction~\ref{modif:es}. For \eqref{eq:deg:step:2}, we
apply Construction~\ref{modif:es} so as to produce the morphism $h_\bullet$.
Checking that the induced map $\nu_\bullet$ is not null-homotopic
uses, in an essential way, the hypothesis that $d_j = d'_j$ for some $0
\leq j \leq \ell(d')$. Example~\ref{ex:degseqs} demonstrates these
arguments. Write $\mathbb{P}^{1 \times r}$ for the $r$-fold product of
$\mathbb{P}^1$.
\begin{construction}
[Modification of the Eisenbud--Schreyer construction of pure resolutions]
\label{modif:es}
The objects involved in this construction of a pure resolution $F_\bullet$
of type $d$ will be denoted by $\Kos^d_\bullet$, $\mathcal{K}_\bullet$, and $\mathcal{L}$.
The corresponding objects for the pure resolution $F'_\bullet$ of type $d'$
are $\Kos^{d'}_\bullet$, $\mathcal{K}'_\bullet$, and $\mathcal{L}'$. Let
\begin{equation}
\label{equation:rForKos}
r :=\max\{d_{\ell(d)}-d_0-\ell(d), d'_{\ell(d')}-d_0-\ell(d')\}
\end{equation}
and $\mathbb{P} := \mathbb{P}^{n-1}\times \mathbb{P}^{1\times r}$. On $\mathbb{P}$, fix the coordinates
\[
\left([x_1:x_2:\dots:x_n], [y^{(1)}_0:y^{(1)}_1], \ldots,
[y^{(r)}_0:y^{(r)}_1]\right)
\]
and consider the multilinear forms
\[
f_p := \sum_{i_0 + \cdots + i_r = p}
\left( x_{i_0}\cdot \prod_{j=1}^r y_{i_j}^{(j)} \right)
\qquad \text{for } p = 1,2,\dots,n+r.
\]
(Note that $i_0 \in \{1, \ldots, n\}$ and $i_j \in \{0,1\}$ for all $1 \leq
j \leq r$.) We now define
\begin{align*}
D & := \{d_0, d_0+1, \dots, d_0+\ell(d)+r\}, &
D' & := \{d_0, d_0+1, \dots, d_0+\ell(d')+r\}, \\
\delta & := (\delta_1< \dots< \delta_r) = D \ensuremath{\!\smallsetminus\!} d, &
\delta' & := (\delta'_1< \dots< \delta'_r) = D' \ensuremath{\!\smallsetminus\!} d', \\
a & := \delta - (d_0+1, \ldots, d_0+1), &
a' & := \delta' - (d_0+1, \ldots, d_0+1), \\
\mathcal{L} & := {\mathcal O}_\mathbb{P}(-d_0,a), \qquad \qquad \qquad \qquad \text{and} &
\mathcal{L}' & := {\mathcal O}_\mathbb{P}(-d_0,a').
\end{align*}
(We view $\delta$ and $\delta'$ as ordered sequences.)
Let $\Kos^{d}_\bullet$ be the Koszul complex on
$f_1, \dots, f_{\ell(d)+r}$, which is an acyclic complex of sheaves on
$\mathbb{P}$ of length $\ell(d)+r$ (see
\cite{EiScConjOfBS07}*{Proposition~5.2}). Let $\mathcal{K}_\bullet
:= \Kos^d_\bullet \otimes \mathcal{L}$. Let $\pi\colon\mathbb{P} \rightarrow
\mathbb{P}^{n-1}$ denote the projection onto the first factor. By repeated
application of~\cite{EiScConjOfBS07}*{Proposition~5.3},
$\pi_*\mathcal{K}_\bullet$ is an acyclic complex of sheaves on $\mathbb{P}^{n-1}$ of
length $\ell(d)$ such that each term is a direct sum of line
bundles. Taking global sections of this complex in all twists yields
the pure resolution $F_\bullet$ of a graded $S$-module (that is
finitely generated and Cohen--Macaulay). We
can write each free module $F_j$ explicitly as follows. If $s=\max\{i \mid a_i-d_j+d_0 \leq -2\}$,
then we have
\[
F_j = S(-d_j)^{\binom{\ell(d)+r}{d_j-d_0}}\otimes \left(
\bigotimes_{i=1}^s \mathrm{H}^1(\mathbb{P}^1, {\mathcal O}(a_i-d_j+d_0)) \right) \otimes
\left( \bigotimes_{i=s+1}^r \mathrm{H}^0(\mathbb{P}^1, {\mathcal O}(a_i-d_j+d_0)) \right) .
\]
Let $\Kos^{d'}_\bullet$ be the Koszul complex on
$f_1, \dots, f_{\ell(d')+r}$ and $\mathcal{K}_\bullet' := \Kos^{d'}_\bullet \otimes \mathcal{L}'$,
and define $F'_\bullet$ in a similar manner.
\end{construction}
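The numerical data in Construction~\ref{modif:es} is purely combinatorial and can be computed mechanically from $d$ and $d'$. The sketch below (ours, for illustration; $\infty$ is encoded as `math.inf`, and the function names are not from the construction) implements \eqref{equation:rForKos}, the definitions of $\delta$, $\delta'$, $a$, and $a'$, and the displayed rank of $F_j$, using $\dim \mathrm{H}^0(\mathbb{P}^1,{\mathcal O}(m))=m+1$ for $m\geq -1$ and $\dim \mathrm{H}^1(\mathbb{P}^1,{\mathcal O}(m))=-m-1$ for $m\leq -2$:

```python
from math import comb, inf

def length(d):
    # l(d): the largest t such that d_t is finite
    return max(i for i, v in enumerate(d) if v != inf)

def construction_data(d, dp):
    # the data r, delta, delta', a, a' of Construction (modif:es)
    l, lp, d0 = length(d), length(dp), d[0]
    r = max(d[l] - d0 - l, dp[lp] - d0 - lp)
    delta = sorted(set(range(d0, d0 + l + r + 1)) - set(d[:l + 1]))
    deltap = sorted(set(range(d0, d0 + lp + r + 1)) - set(dp[:lp + 1]))
    a = [x - (d0 + 1) for x in delta]
    ap = [x - (d0 + 1) for x in deltap]
    return r, delta, deltap, a, ap

def h_dim(m):
    # dim H^0(P^1, O(m)) = m + 1 if m >= -1; dim H^1(P^1, O(m)) = -m - 1 if m <= -2
    return m + 1 if m >= -1 else -m - 1

def rank_F(num_forms, dj, d0, a):
    # rank of F_j: binom(num_forms, d_j - d_0) times the P^1-cohomology dimensions,
    # where num_forms = l(d) + r is the number of Koszul forms used
    rk = comb(num_forms, dj - d0)
    for ai in a:
        rk *= h_dim(ai - dj + d0)
    return rk
```

For the degree sequences of Example~\ref{ex:degseqs} below, this reproduces $r=4$, $\delta=(1,3,7,8)$, $\delta'=(0,3,5,6)$, $a=(0,2,6,7)$, and $a'=(-1,2,4,5)$.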
The value of $r$ in \eqref{equation:rForKos} is the least integer such
that we are able to fit both the twists
$-d_0$ and $\min\{-d_{\ell(d)}, -d'_{\ell(d')}\}$
in the $\mathbb{P}^{n-1}$ coordinate of the bundles
of the complexes $\mathcal{K}_\bullet$ and $\mathcal{K}'_\bullet$. The
choices of $a$ and $a'$, which ensure that $F_\bullet$ and
$F'_\bullet$ are pure of types $d$ and $d'$, are dictated by the
homological degrees in $\mathcal{K}_\bullet$ and $\mathcal{K}_\bullet'$ that need to
be eliminated in each projection away from a $\mathbb{P}^1$ component of $\mathbb{P}$.
In Example~\ref{ex:degseqs}, these homological degrees are those with an
underlined $-1$ in Table~\ref{tab:propaedeuticTwists}.
Observe that $a - a' \in \mathbb{N}^r$ since $d\preceq d'$.
Thus there is a nonzero map
$h_\bullet\colon\mathcal{K}'_\bullet\to\mathcal{K}_\bullet$ that is induced by a
polynomial of multidegree $(0,a-a')$. In \eqref{eq:deg:step:2}, we
show that $\pi_*h_\bullet$ induces the desired nonzero map.
The following extended example contains all of the main ideas
behind the proof of Theorem~\ref{thm:deg:after:reduction}.
\begin{example}
\label{ex:degseqs}
Consider $d=(0,2,4,5,6)$ and $d'=(1,2,4,7) = (1,2,4,7,\infty)$.
Note that $d_2 = d'_2 = 4$, so that $d$ and $d'$ satisfy the hypotheses
of Theorem~\ref{thm:deg:after:reduction}.
Here $r=4$ and $\mathbb{P} = \mathbb{P}^{3}\times \mathbb{P}^{1\times 4}$.
On $\mathbb{P}$, we have the Koszul complexes
$\Kos^d_\bullet = \Kos_\bullet({\mathcal O}_\mathbb{P}; f_1, \dots, f_8)$
and $\Kos^{d'}_\bullet = \Kos_\bullet({\mathcal O}_\mathbb{P}; f_1, \dots, f_7)$.
There is a natural map $\Kos^{d'}_\bullet\to \Kos^{d}_\bullet$
induced by the inclusion
$\langle f_1, \dots, f_7\rangle \subseteq \langle f_1, \dots, f_8\rangle$.
Here we have
\begin{align*}
\delta & =(1,3,7,8), & \delta'=&\ (0,3,5,6), \\
a&=(0,2,6,7), & a'=&\ (-1,2,4,5), \\
\mathcal{K}_\bullet&=\Kos^d_\bullet\otimes {\mathcal O}_\mathbb{P}(0,a), &\text{and}\quad
\mathcal{K}'_\bullet=&\ \Kos^{d'}_\bullet\otimes {\mathcal O}_\mathbb{P}(0,a').
\end{align*}
Table~\ref{tab:propaedeuticTwists} shows the twists
in each homological degree of these complexes.
\begin{table}[h]
\parbox{5cm}{%
\begin{tabular}{|c|c|}%
\multicolumn{2}{c}{$d = (0, 2, 4, 5, 6)$}\\%
\hline
$i$ & Twist in $\mathcal{K}_i$\\\hline
0 &$(0,0, 2, 6, 7)$\\%
$-1$ &$(-1,\uuline{-1}, 1, 5, 6)$ \\%
$-2$ &$(-2,-2, 0, 4, 5)$ \\%
$-3$ &$(-3,-3, \uuline{-1}, 3, 4)$ \\%
$-4$ &$(-4,-4, -2, 2, 3)$ \\%
$-5$ &$(-5,-5, -3, 1, 2)$ \\%
$-6$ &$(-6,-6, -4, 0, 1)$ \\%
$-7$ &$(-7,-7, -5, \uuline{-1}, 0)$\\%
$-8$ &$(-8,-8, -6, -2, \uuline{-1})$ \\%
\hline%
\end{tabular}}
\parbox{5cm}{%
\begin{tabular}{|c|c|}%
\multicolumn{2}{c}{$d' = (1, 2, 4, 7)$} \\
\hline
$i$& Twist in $\mathcal{K}'_i$\\\hline
$ 0$ & $(0,\uuline{-1}, 2, 4, 5)$\\
$-1$ & $(-1,-2, 1, 3, 4)$\\
$-2$ & $(-2,-3, 0, 2, 3)$\\
$-3$ & $(-3,-4, \uuline{-1}, 1, 2)$\\
$-4$ & $(-4,-5, -2, 0, 1)$\\
$-5$ & $(-5,-6, -3, \uuline{-1}, 0)$\\
$-6$ & $(-6,-7, -4, -2, \uuline{-1})$\\
$-7$ & $(-7,-8, -5, -3, -2)$\\
\hline
\end{tabular}
\phantom{ALIGNMENT line}
\vspace*{1.5mm}
}
\caption{Twists appearing in $\mathcal{K}_\bullet$ and $\mathcal{K}_\bullet'$ in Example~\ref{ex:degseqs}.}
\label{tab:propaedeuticTwists}
\end{table}
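The twists recorded in Table~\ref{tab:propaedeuticTwists} can be generated mechanically: the term of $\Kos^d_\bullet$ in homological degree $-k$ is a direct sum of copies of ${\mathcal O}_\mathbb{P}(-k,\dots,-k)$, so tensoring with ${\mathcal O}_\mathbb{P}(0,a)$ adds $a$ to the last four coordinates. An illustrative sketch (ours):

```python
def twist(a, k):
    # twist of K_{-k} = Kos_{-k} (x) O(0, a) on P = P^3 x (P^1)^4
    return tuple([-k] + [ai - k for ai in a])

a, ap = [0, 2, 6, 7], [-1, 2, 4, 5]
table = [twist(a, k) for k in range(9)]    # rows for K_0, ..., K_{-8}
tablep = [twist(ap, k) for k in range(8)]  # rows for K'_0, ..., K'_{-7}
```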
Let $h$ be a nonzero homogeneous polynomial on
$\mathbb{P}$ of multidegree $(0,a-a')= (0,1,0,2,2)$. Then multiplication by
$h$ induces a nonzero map $h \colon \mathcal{K}_0' \to \mathcal{K}_0$. To write $h$,
we use matrix multi-index notation for the monomials in
$\Bbbk[y_0^{(1)}, y_1^{(1)}, \dots, y_0^{(4)}, y_1^{(4)}]$, where the
$i$th column represents the multi-index of the
$y^{(i)}$-coordinates. With this convention, fix
\[
h =
\mathbf{y}^{\left(\begin{smallmatrix}1&0&2&2\\0&0&0&0
\end{smallmatrix}\right)}
:= y^{(1)}_0\cdot \left( y^{(3)}_0\right)^2\cdot
\left(y^{(4)}_0\right)^{2}.
\]
Denote the induced map of complexes $\mathcal{K}_\bullet'\to \mathcal{K}_\bullet$ by
$h_\bullet$.
Taking the direct image of $h_\bullet$ along the natural projection
$\pi\colon \mathbb{P}\to\mathbb{P}^3$ and its global sections in all twists induces
a map $\nu_\bullet\colon F_\bullet'\to F_\bullet$.
We claim that $\nu_\bullet$ is not null-homotopic. This need not hold
for an arbitrary pair $d\preceq d'$; however, it does hold for a pair
of degree sequences satisfying the hypotheses of
Theorem~\ref{thm:deg:after:reduction}. We use the fact that
$d_2=d_2'=4$, as this implies that $\nu_2\colon F_2' \to F_2$ is a
matrix of scalars. Since $F_\bullet'$ and $F_\bullet$ are both
minimal free resolutions, it then follows that the map $\nu_2$ factors
through a null-homotopy only if $\nu_2$ is itself the zero map. Thus
it is enough to show that $\nu_2\ne 0$. For this, note that
\begin{align*}
F_2&=S(-4)^{\binom{8}{4}}\otimes \mathrm{H}^1(\mathbb{P}^1, {\mathcal O}(-4))\otimes \mathrm{H}^1(\mathbb{P}^1, {\mathcal O}(-2))\otimes \mathrm{H}^0(\mathbb{P}^1, {\mathcal O}(2))\otimes \mathrm{H}^0(\mathbb{P}^1, {\mathcal O}(3))\\
\text{and}\quad
F_2'&=S(-4)^{\binom{7}{4}}\otimes \mathrm{H}^1(\mathbb{P}^1, {\mathcal O}(-5))\otimes \mathrm{H}^1(\mathbb{P}^1,
{\mathcal O}(-2))\otimes \mathrm{H}^0(\mathbb{P}^1, {\mathcal O}(0))\otimes \mathrm{H}^0(\mathbb{P}^1, {\mathcal O}(1))
\end{align*}
and that $F_2$ and $F_2'$ have
$\mathrm{H}^1$ terms in precisely the same positions,
and similarly for the $\mathrm{H}^0$ terms.
We may then use~\cite{explicit}*{Lemma 7.3} to compute the map
$\nu_2\colon F'_2\to F_2$ explicitly. Since the matrix is too large
to be written down, we simply exhibit a basis element of $F'_2$ that
is not mapped to zero.
For $I=\{i_1<\dots <i_4\}$ a subset of either $\{1, \dots, 8\}$ or $\{1,
\dots, 7\}$, we use the notation $\epsilon_I:=\epsilon_{i_1}\wedge \dots
\wedge \epsilon_{i_4}$ to write $S$-bases for
$S(-4)^{\binom{8}{4}}$ and for $S(-4)^{\binom{7}{4}}$.
Choose the natural monomial bases for the
cohomology groups appearing in the tensor product expressions for $F_2$ and $F_2'$, and write these monomials in multi-index notation.
Recalling the above definition of $h$, we then have that
\[
\epsilon_{1,2,3,4}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}-4&-1&0&1\\-1&-1&0&0
\end{smallmatrix}\right)}
\]
is a basis element of $F_2'$. We compute
\begin{align*}
\nu_2
\left(\epsilon_{1,2,3,4}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}-4&-1&0&1\\-1&-1&0&0
\end{smallmatrix}\right)}\right)
& = \epsilon_{1,2,3,4}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}-4&-1&0&1\\-1&-1&0&0
\end{smallmatrix}\right)}\cdot h\\
& = \epsilon_{1,2,3,4}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}-4&-1&0&1\\-1&-1&0&0
\end{smallmatrix}\right)+\left(\begin{smallmatrix}1&0&2&2\\0&0&0&0
\end{smallmatrix}\right)}\\
& = \epsilon_{1,2,3,4}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}-3&-1&2&3\\-1&-1&0&0
\end{smallmatrix}\right)}.
\end{align*}
Since this yields a basis element of $F_2$, the map $\nu_2$ is nonzero,
so $\nu_\bullet$ is not null-homotopic.
\end{example}
\begin{proof}[Proof of Theorem~\ref{thm:deg:after:reduction}]
Construction~\ref{modif:es} yields
finitely generated graded Cohen--Macaulay modules $M$ and $M'$
that have pure resolutions $F_\bullet$ and $F'_\bullet$ of types $d$ and
$d'$, respectively.
To construct the desired nonzero map $\psi\colon M'\to M$,
we fix a generic homogeneous form $h$ on $\mathbb{P}$ of multidegree
$(0,a-a')$, which exists because $a-a' = \delta - \delta'\in\mathbb{N}^r$. Multiplication by $h$ induces a map
$h_\bullet\colon \mathcal{K}'_\bullet \to \mathcal{K}_\bullet$.
The functoriality of $\pi_*$ induces a map
$\pi_*\mathcal{K}'_\bullet\to\pi_*\mathcal{K}_\bullet$ that,
upon taking global sections in all twists, yields a map
$\nu_\bullet\colon F_\bullet' \to F_\bullet$.
Let $\psi \colon M'\to M$ be the map induced by $\nu_\bullet$.
To show that $\psi$ is nonzero, it suffices to show that $\nu_\bullet$
is not null-homotopic. Let $j$ be an index such that $d_j=d_j'$.
Then $F_j$ and $F_j'$ are both generated in the single degree $d_j$.
Since $F_\bullet$ and $F_\bullet'$ are minimal free resolutions,
$\nu_j\colon F_j'\to F_j$ is given by a matrix of scalars. Thus it
follows that $\nu_\bullet$ is null-homotopic only if $\nu_j$ is the
zero map. We now use the description of $\nu_j$ given
in~\cite{explicit}*{Lemma 7.3}. (The relevant homological degree in
both $\mathcal{K}_\bullet$ and $\mathcal{K}'_\bullet$ is $d_j-d_0$.)
Let $s=\max\{i \mid a_i-d_j+d_0\leq -2\}$ and let $s'=\max\{i \mid
a_i'-d_j'+d_0\leq -2\}$. Note that, since $d_j=d_j'$, the construction of
$a$ and $a'$ implies that $s=s'$. We then have
\begin{align*}
F_j & = S(-d_j)^{\binom{\ell(d)+r}{d_j-d_0}}\otimes \left(
\bigotimes_{i=1}^s \mathrm{H}^1(\mathbb{P}^1, {\mathcal O}(a_i-d_j+d_0)) \right) \otimes
\left( \bigotimes_{i=s+1}^r \mathrm{H}^0(\mathbb{P}^1, {\mathcal O}(a_i-d_j+d_0)) \right) \; \text{and} \\
F'_j & = S(-d_j)^{\binom{\ell(d')+r}{d_j-d_0}}\otimes \left(
\bigotimes_{i=1}^{s} \mathrm{H}^1(\mathbb{P}^1, {\mathcal O}(a'_i-d_j+d_0)) \right) \otimes
\left( \bigotimes_{i=s+1}^{r} \mathrm{H}^0(\mathbb{P}^1, {\mathcal O}(a'_i-d_j+d_0)) \right),
\end{align*}
where both $F_j$ and $F'_j$ have the same number of factors involving
$\mathrm{H}^0$ (and therefore also the same number involving $\mathrm{H}^1$). Hence
we can repeatedly apply~\cite{explicit}*{Lemma 7.3} to conclude that
$\nu_j$ is simply the map induced on cohomology by the map
$h_{d_j-d_0}\colon \mathcal K_{d_j-d_0}'\to \mathcal K_{d_j-d_0}$.
We now fix a specific value of $h$ and show that $\nu_{j}\ne 0$.
Let $c:=a-a'\in \mathbb N^r$ and write $c=(c_1, \dots, c_r)$. Let
\[
h:=\left(y_0^{(1)} \right)^{c_1} \cdot \left(y_0^{(2)} \right)^{c_2} \cdots \left(y_0^{(r)} \right)^{c_r}
=\mathbf{y}^{\left(\begin{smallmatrix}c_1& \dots& c_r\\ 0&\dots &0\end{smallmatrix}\right)},
\]
so that $h$ is the unique monomial of multidegree $(0,c)$ that
involves only the $y_0^{(i)}$-variables.
For $I=\{i_1<\dots <i_{d_j-d_0}\}$ a subset of either $\{1, \dots, \ell(d)+r\}$ or $\{1,
\dots, \ell(d')+r\}$, we use the notation $\epsilon_I:=\epsilon_{i_1}\wedge \dots
\wedge \epsilon_{i_{d_j-d_0}}$ to write $S$-bases for
$S(-d_j)^{\binom{\ell(d)+r}{d_j-d_0}}$ and for $S(-d_j)^{\binom{\ell(d')+r}{d_j-d_0}}$.
Choose the natural monomial bases for the
cohomology groups appearing in the tensor product expression for $F_j$ and $F_j'$, and write these monomials in matrix
multi-index notation, as in Example~\ref{ex:degseqs}.
For each $i$ corresponding to an $\mathrm{H}^1$-term (i.e., $i\in \{1, \dots, s\}$), let $u_i:=(a'_i-d_j+d_0)+1$. For each $i$ corresponding to an $\mathrm{H}^0$-term (i.e., $i\in \{s+1, \dots, r\}$), let $w_i:=a'_i-d_j+d_0$. Observe that
\[
\epsilon_{\{1, \dots, d_j-d_0\}}\otimes \mathbf{y}^{\left(\begin{smallmatrix}u_1& \dots& u_s&w_{s+1}&\dots &w_r\\ -1&\dots&-1&0&\dots&0\end{smallmatrix}\right)}
\]
is a basis element of $F_j'$. We then have that
\begin{align*}
\nu_j
\left(
\epsilon_{\{1, \dots, d_j-d_0\}}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}u_1& \dots& u_s&w_{s+1}&\dots &w_r\\ -1&\dots&-1&0&\dots&0\end{smallmatrix}\right)}
\right)
& = \epsilon_{\{1, \dots, d_j-d_0\}}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}u_1& \dots& u_s&w_{s+1}&\dots &w_r\\ -1&\dots&-1&0&\dots&0\end{smallmatrix}\right)}
\cdot h\\
& = \epsilon_{\{1, \dots, d_j-d_0\}}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}u_1& \dots& u_s&w_{s+1}&\dots &w_r\\ -1&\dots&-1&0&\dots&0\end{smallmatrix}\right)}
\cdot \mathbf{y}^{\left(\begin{smallmatrix}c_1& \dots& c_r\\ 0&\dots &0\end{smallmatrix}\right)}\\
&=
\epsilon_{\{1, \dots, d_j-d_0\}}\otimes
\mathbf{y}^{\left(\begin{smallmatrix}u_1+c_1& \dots& u_s+c_s&w_{s+1}+c_{s+1}&\dots &w_r+c_r\\ -1&\dots&-1&0&\dots&0\end{smallmatrix}\right)}.
\end{align*}
One may check that this is a basis element of $F_j$, and hence the
map $\nu_j$ is nonzero. Therefore $\nu_\bullet$ is not null-homotopic,
as desired.
\end{proof}
\section{Equivariant construction of morphisms between
modules with pure resolutions}
\label{sec:equiv:deg}
Throughout this section, we assume that $\Bbbk$ is a field of
characteristic 0 and that all degree sequences have length $n$. Let $V$ be
an $n$-dimensional $\Bbbk$-vector space, and let $S = \operatorname{Sym}(V)$. We
use $\mathbf{S}_\lambda$ to denote a Schur functor, as in
Section~\ref{sec:equiv:root}. As in Section~\ref{sec:construct deg},
a shift of $d'$ reduces the remaining direction of
Theorem~\ref{thm:equivariant:deg} to the following result.
\begin{theorem} \label{theorem:eqvtmodules} Let $d \preceq d'$ be two
degree sequences such that $d_k = d'_k$ for some $k$. Then there
exist finite length ${\bf GL}(V)$-equivariant $S$-modules $M$ and $M'$
with pure resolutions of types $d$ and $d'$, respectively, with
$\operatorname{Hom}_{{\bf GL}(V)}(M',M)_0 \ne 0$.
\end{theorem}
Our proof of Theorem~\ref{theorem:eqvtmodules} relies on
Lemma~\ref{lemma:eqvtmodules}, which handles the special case when the
degree sequences $d$ and $d'$ differ by $1$ in a single position. This
proof will repeatedly appeal to Pieri's rule for decomposing the
tensor product of a Schur functor by a symmetric power. We refer the
reader to \cite[\S1.1 and Theorem 1.3]{sam} for a statement of this
rule, as our main use of it will be through \cite[Lemma 1.6]{sam}.
Given a degree sequence $d$, let $M(d)$ be the ${\bf GL}(V)$-equivariant
graded $S$-module constructed in~\cite{efw}*{\S3} (see also
\cite{sam}*{\S2.1}), and let ${\bf F}(d)_\bullet$ be its
${\bf GL}(V)$-equivariant free resolution. By construction, the generators
for each $S$-module ${\bf F}(d)_j$ form an irreducible ${\bf GL}(V)$-module
whose highest weight we call $\lambda(d)_j$. For instance, if
$d=(0,2,5,7,8)$, then $\lambda(d)_0=(3,1,0,0)$ and
$\lambda(d)_1=(5,1,0,0)$~\cite{efw}*{Example~3.3}. Note that
$M(d)\otimes V$ is also an equivariant module with a pure resolution
of type $d$.
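The weights $\lambda(d)_0$ and $\lambda(d)_1$ can be computed directly from $d$ using the formulas for $\lambda$ and $\mu$ that appear in the proof of Lemma~\ref{lemma:eqvtmodules} below; the following sketch (ours, for illustration) reproduces the values quoted above:

```python
def weight0(d):
    # lambda(d)_0: parts lambda_l = sum_{j=l}^{n-1} (d_{j+1} - d_j - 1), lambda_n = 0
    n = len(d) - 1
    parts = [sum(d[j + 1] - d[j] - 1 for j in range(l, n)) for l in range(1, n)]
    return tuple(parts + [0])

def weight1(d):
    # lambda(d)_1: replace the first part of lambda by lambda_1 + d_1 - d_0
    parts = list(weight0(d))
    parts[0] += d[1] - d[0]
    return tuple(parts)
```

For $d=(0,2,5,7,8)$ this gives $\lambda(d)_0=(3,1,0,0)$ and $\lambda(d)_1=(5,1,0,0)$, and for $d=(0,2,4)$ it gives $(1,0)$ and $(3,0)$, matching the example in Remark~\ref{remark:eqvtefw}.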
\begin{lemma}
\label{lemma:eqvtmodules}
Let $d = (d_0, \dots, d_n) \in \mathbb{Z}^{n+1}$ be a degree sequence, and
let $d'$ be the degree sequence obtained from $d$ by replacing $d_i$
by $d_i+1$ for some $i$. Then there exists an equivariant nonzero morphism
$\phi \colon M(d')\otimes V\to M(d)$.
Further, if $F_\bullet$ and $F_\bullet'$ are the minimal free resolutions of $M(d)$ and $M(d')\otimes V$ respectively,
then we may choose $\phi$ so that the induced map $F_j'\to F_j$ is surjective for all $j\ne i$.
\end{lemma}
\begin{remark} \label{remark:eqvtefw} Let $d$ and $d'$ be degree sequences
as in the statement of Lemma~\ref{lemma:eqvtmodules}. We observe that
\begin{enumerate}[(i)]
\item $\lambda(d')_i = \lambda(d)_i$.
\item If $j < i$, then $\lambda(d')_j$ is obtained from
$\lambda(d)_j$ by removing a box from the $i$th part.
\item If $j > i$, then $\lambda(d')_j$ is obtained from
$\lambda(d)_j$ by removing a box from the $(i+1)$st part.
\end{enumerate}
For instance, if $d=(0,2,4)$ and $d'=(0,3,4)$, then we have
\[
\lambda(d)_j=\begin{cases}
(1,0) & \text{ if } j=0\\
(3,0) & \text{ if } j=1\\
(3,2) & \text{ if } j=2
\end{cases}
\quad \text{ and } \quad
\lambda(d')_j=\begin{cases}
(0,0) & \text{ if } j=0\\
(3,0) & \text{ if } j=1\\
(3,1) & \text{ if } j=2.
\end{cases} \qedhere
\]
\end{remark}
\begin{remark} In the proof of Lemma~\ref{lemma:eqvtmodules}, we
repeatedly use~\cite{sam}*{Lemma 1.6}. The statement of the lemma is
for factorizations of Pieri maps into simple Pieri maps $\mathbf{S}_\nu V
\to \mathbf{S}_\eta V \otimes V$, but we need to factor into simple Pieri
maps as well as simple co-Pieri maps $\mathbf{S}_\eta V \otimes V \to
\mathbf{S}_\nu V$. No modification of the proof is needed: we simply use
the fact that the composition of a co-Pieri map and a Pieri map of
the same type is an isomorphism, and that in each case where we
apply~\cite{sam}*{Lemma 1.6}, the Pieri maps may be factored so that
the simple Pieri maps and simple co-Pieri maps of the same type
appear consecutively.
\end{remark}
\begin{proof}[Proof of Lemma~\ref{lemma:eqvtmodules}]
Set $\lambda_\ell = \sum_{j=\ell}^{n-1} (d_{j+1} - d_j - 1)$ for $1
\le \ell \le n-1$, $\lambda_n = 0$, $\mu_1 = \lambda_1 + d_1 - d_0$,
and $\mu_\ell = \lambda_\ell$ for $2 \le \ell \le n$. If $i = n$, we
modify $\lambda$ and $\mu$ by adding 1 to all of their parts (so in
particular, $\lambda_n = \mu_n = 1$). As in~\cite{efw}*{\S3},
define $M$ to be the cokernel of the Pieri map
\[
\psi_{\mu / \lambda} \colon S(-d_1) \otimes \mathbf{S}_\mu V \to S(-d_0)
\otimes \mathbf{S}_\lambda V.
\]
We will choose partitions $\lambda'$ and $\mu'$ so that
$M(d')$ is the cokernel of the Pieri map
\[
\psi_{\mu' / \lambda'} \colon S(-d'_1) \otimes \mathbf{S}_{\mu'} V \to
S(-d'_0) \otimes \mathbf{S}_{\lambda'} V.
\]
%
To do this, we separately consider the three cases $i=0$, $i=1$, and $i \ge
2$. In each case, we specify $\lambda'$ and $\mu'$ (these
descriptions are special cases of Remark~\ref{remark:eqvtefw}) and
construct a commutative diagram of equivariant degree 0 maps
\begin{equation} \label{eqn:square} \xymatrix{ S(-d_1) \otimes
\mathbf{S}_\mu V \ar[rr]^-{\psi_{\mu/\lambda}}
& & S(-d_0) \otimes \mathbf{S}_\lambda V \\
S(-d'_1) \otimes \mathbf{S}_{\mu'} V \otimes V \ar[rr]^-{ \psi_{\mu' /
\lambda'} \otimes 1_V} \ar[u]^-{\phi_\mu} & & S(-d'_0)
\otimes \mathbf{S}_{\lambda'} V \otimes V \ar[u]^-{\phi_\lambda} }
\end{equation}
that induces an equivariant degree 0 map of the cokernels $\phi
\colon M' := M(d') \otimes V \to M$. Since the Pieri maps are only
well-defined up to a choice of nonzero scalar, we only prove
that the square commutes up to a choice of nonzero scalar.
One may scale appropriately to obtain strict commutativity.
Finally, after handling the three separate cases, we prove that the induced maps
$F_j'\to F_j$ are surjective whenever $j\ne i$. Since $F_\bullet'$ is a minimal
free resolution, this implies that the map $F_\bullet'\to F_\bullet$ is not null-homotopic,
and hence $\phi \colon M'\to M$ is nonzero.
~
\noindent \emph{Case $i=1$.} Set $\lambda'_1 = \lambda_1 - 1$,
$\lambda'_j = \lambda_j$ for $2 \le j \le n$, and $\mu' =
\mu$. Also, let $d'_0 = d_0$ and $d'_1 = d_1 + 1$. Using the
notation of \eqref{eqn:square}, we define $\phi_\mu$ by identifying
$\mathbf{S}_{\mu'} V \otimes V$ with $\operatorname{Sym}^1 V \otimes \mathbf{S}_\mu V$ and then
extending it to an $S$-linear map. Let $\phi_\lambda$ be
the projection of $\mathbf{S}_{\lambda'} V \otimes V \to \mathbf{S}_\lambda V$
tensored with the identity of $S(-d_0)$. From the degree $d_1 + 1$
part of \eqref{eqn:square}, we obtain
\[
\xymatrix{ \operatorname{Sym}^1 V \otimes \mathbf{S}_\mu V \ar[r]^-\alpha & \operatorname{Sym}^{d_1 -
d_0 + 1} V \otimes \mathbf{S}_\lambda V \\
\mathbf{S}_\mu V \otimes V \ar[r]^-\delta \ar[u]^-\beta & \operatorname{Sym}^{d_1 - d_0
+ 1} V \otimes \mathbf{S}_{\lambda'} V \otimes V \ar[u]^-\gamma. }
\]
Note that $\alpha$ is the linear part of $F_1 \to F_0$
and is hence injective because $d_2 - d_1 > 1$.
Since $\beta$ is an isomorphism, $\alpha \beta$ is injective.
Also, we have $\lambda_1 >
\lambda_2$ because $d_2 - d_1 > 1$, so by the Pieri rule, every summand
of $\mathbf{S}_\mu V \otimes V$ is also a summand of $\operatorname{Sym}^{d_1 - d_0 + 1}
V \otimes \mathbf{S}_\lambda V$. Using~\cite{sam}*{Lemma~1.6}, one can show
that $\gamma \delta$ is also injective. Since the tensor product
$\operatorname{Sym}^{d_1 - d_0 + 1} V \otimes \mathbf{S}_\lambda V$ is multiplicity-free
by the Pieri rule, the compositions $\alpha \beta$ and $\gamma \delta$ agree after
rescaling the image of each direct summand of $\mathbf{S}_\mu V \otimes V$
by some nonzero scalar. Hence this diagram is commutative, and the
same is true for \eqref{eqn:square}.
~
\noindent \emph{Case $i \ge 2$.} Set $\lambda'_i =
\lambda_i - 1$ and $\lambda'_j = \lambda_j$ for $j \ne i$.
Similarly, set $\mu'_i = \mu_i - 1$ and $\mu'_j = \mu_j$ for $j \ne i$.
Using the notation of \eqref{eqn:square}, let $\phi_\mu$
be a nonzero projection of $\mathbf{S}_{\mu'} V \otimes V$ onto $\mathbf{S}_\mu V$
tensored with the identity on $S(-d_1)$. Similar to the previous case,
choose a nonzero projection $\mathbf{S}_{\lambda'} V \otimes V \to \mathbf{S}_\lambda V$
and tensor it with the identity map on $S(-d_0)$ to get
$\phi_\lambda$. From the degree $d_1$ part of
\eqref{eqn:square}, we obtain
\[
\xymatrix{ \mathbf{S}_\mu V \ar[r]^-\alpha & \operatorname{Sym}^{d_1 - d_0} V \otimes
\mathbf{S}_\lambda V \\
\mathbf{S}_{\mu'} V \otimes V \ar[r]^-\delta \ar[u]^-\beta & \operatorname{Sym}^{d_1 -
d_0} V \otimes \mathbf{S}_{\lambda'} V \otimes V \ar[u]^-\gamma. }
\]
Let $\mathbf{S}_\nu V$ be a direct summand of $\mathbf{S}_{\mu'} V \otimes V$. If
$\nu \ne \mu$, then $\nu_i = \mu'_i = \lambda_i - 1$, so by the Pieri rule
$\mathbf{S}_\nu V$ is not a summand of $\operatorname{Sym}^{d_1 -
d_0} V \otimes \mathbf{S}_\lambda V$; both of the compositions $\alpha \beta$ and $\gamma
\delta$ are therefore 0 on such a summand. If $\nu = \mu$, then the
composition $\alpha \beta$ is nonzero, so it is enough to check that
the same is true for $\gamma \delta$; this holds
by~\cite{sam}*{Lemma~1.6}, and hence this diagram and
\eqref{eqn:square} are commutative.
~
\noindent \emph{Case $i=0$.} Set $d^\vee:= (-d_n, -d_{n-1}, \dots,
-d_0)$ and $d'^{\vee}:=(-d'_n, -d'_{n-1}, \dots, -d'_0)$. Since
$d_j=d_j'$ for all $j\ne i=0$, we see that $d^\vee$ and $d'^\vee$
only differ in position $n$. Hence, by the case $i\geq 2$ above (we
assume that $n \ge 2$ since the $n=1$ case is easily done directly),
we have finite length modules $M(d^\vee)$ and $M(d'^{\vee})$ with
pure resolutions of types $d^\vee$ and $d'^\vee$, respectively,
along with a nonzero morphism $\psi \colon M(d^\vee)\otimes V \to
M(d'^\vee)$. If we define $N^\vee:= \Ext^n(N,S)$ for a finite length
$S$-module $N$, then
$M(d'^\vee)^\vee \cong M(d')$ and $(M(d^\vee)\otimes V)^\vee \cong
M(d) \otimes V^*$ (both isomorphisms hold up to some power of
$\bigwedge^n V$, which we cancel off). In addition, since
$\Ext^n(-,S)$ is a duality functor on the category of finite length
$S$-modules, we obtain a nonzero map
\[
\psi^\vee \colon M(d')\to M(d)\otimes V^*.
\]
By adjunction, we then obtain a nonzero map $M(d')\otimes V\to
M(d)$.
~
Fixing some $j\ne i$, we now prove the surjectivity of the maps $F_j'\to F_j$, which implies that $\phi$ is a nonzero morphism, as observed above. The key observation is that,
in each of the above three cases, $F_j$ is generated by an irreducible Schur module. Since $d_j=d_j'$, the map
\[
F_j'=S(-d'_j)\otimes \mathbf{S}_{\lambda(d')_j}V\otimes V\to F_j=S(-d_j)\otimes \mathbf{S}_{\lambda(d)_j}V
\]
is induced by a nonzero equivariant map $\mathbf{S}_{\lambda(d')_j}V\otimes V\to \mathbf{S}_{\lambda(d)_j}V$. Since the target
is an irreducible representation, this morphism, and hence the map $F_j'\to F_j$, is surjective. More specifically, the map $\mathbf{S}_{\lambda(d')_j}V\otimes V\to \mathbf{S}_{\lambda(d)_j}V$ is a projection onto one of the factors in the Pieri rule decomposition of $\mathbf{S}_{\lambda(d')_j}V\otimes V$.
\end{proof}
\begin{example} \label{example:eqvtres1}
This example illustrates the construction of
Lemma~\ref{lemma:eqvtmodules} when $d=(0,2,4)$ and $d'=(0,3,4)$.
When writing the free resolutions, we simply write the Young
diagram of $\lambda$ in place of the corresponding graded
equivariant free module. Also, we follow the conventions in
\cite{efw} and \cite{sam} and draw the Young diagram of $\lambda$ by
placing $\lambda_i$ boxes in the $i$th {\it column}, rather than the
usual convention of using rows. The morphism from
Lemma~\ref{lemma:eqvtmodules} yields a map of complexes, which we write as
\[
\begin{CD}
@.M@<<< {\tiny \tableau[scY]{|}} @<<< {\tiny \tableau[scY]{|||}}
@<<< {\tiny
\tableau[scY]{,|,||}} @<<< 0 \\
@. _{\psi} @AAA@AAA @AAA @AAA \\
@. M'@<<< {\tiny \tableau[scY]{|}}\otimes \varnothing @<<< {\tiny
\tableau[scY]{|}} \otimes {\tiny \tableau[scY]{|||}} @<<< {\tiny
\tableau[scY]{|}} \otimes {\tiny \tableau[scY]{,|||}} @<<< 0.\\
\end{CD}
\]
Observe that $d_2=4=d_2'$ and that the vertical arrow in homological position $2$ is surjective, as it corresponds to a Pieri rule projection. A similar statement holds in position $0$.
\end{example}
\begin{proof}[Proof of Theorem~\ref{theorem:eqvtmodules}]
Set $r:=\sum_{j=0}^n (d_j'-d_j)$. We may construct a sequence of degree sequences
$d=:d^0<d^1<\dots <d^r:=d'$ such that $d^j$ and $d^{j+1}$ satisfy the hypotheses of
Lemma~\ref{lemma:eqvtmodules} for each $j$.
Lemma~\ref{lemma:eqvtmodules} yields a nonzero morphism
\[
\phi^{(j+1)} \colon M(d^{j+1})\otimes V\to M(d^j)
\]
for each $j=0, \dots, r-1$.
If we set $M^{(j)}:=M(d^j)\otimes V^{\otimes j}$, and we set $\psi^{(j+1)}$ to be the natural map
\[
\psi^{(j+1)} \colon M^{(j+1)}\to M^{(j)}
\]
given by $\phi^{(j+1)}\otimes \text{id}_V^{\otimes j}$, then we may compose the map $\psi^{(j+1)}$ with
the map $\psi^{(j)}$.
Let $M:=M^{(0)}=M(d),$ and let $M':=M^{(r)}=M(d')\otimes V^{\otimes r}$.
We then have an equivariant map $\psi:=\psi^{(1)}\circ \dots \circ
\psi^{(r)} \colon M'\to M$, and we must finally show that $\psi$ is
nonzero. Let $F^{(j)}_\bullet$ be the minimal free resolution of
$M^{(j)}$. Since $d_k=d_k'$, it follows that $d^j_k=d^{j+1}_k$
for all $j$. Lemma~\ref{lemma:eqvtmodules} then implies that we can
choose each $\phi^{(j+1)}$ such that the map $\psi^{(j+1)}$ induces a
surjection $F^{(j+1)}_k\to F^{(j)}_k$. Since the composition of
surjective maps is surjective, it follows that the map $F^{(r)}_k\to
F^{(0)}_k$ induced by $\psi$ is surjective. Since $F^{(0)}_\bullet$
is a minimal free resolution, we conclude that the map of
complexes $F^{(r)}_\bullet\to F^{(0)}_\bullet$ is not
null-homotopic, and hence $\psi \colon M'\to M$ is a nonzero morphism.
\end{proof}
\begin{remark}\label{rmk:simpler}
By introducing a variant of Lemma~\ref{lemma:eqvtmodules}, we may
simplify the construction used in the proof of
Theorem~\ref{theorem:eqvtmodules}. Let $d$ and $d'$ be two degree
sequences such that $d_i'=d_i+N$ for some index $i$ and some integer $N>0$, and $d_j'=d_j$ for all $j\ne i$.
Iteratively applying Lemma~\ref{lemma:eqvtmodules} yields a morphism
$\phi \colon M(d')\otimes V^{\otimes N}\to M(d)$. Since
$\operatorname{char}(\Bbbk)=0$, we have an inclusion $\iota \colon \operatorname{Sym}^N V\to
V^{\otimes N}$, and we let $\psi$ be the morphism induced by
composing $\phi$ and $\text{id}_{M(d')}\otimes \iota$. Let
$F_\bullet'$ and $F_\bullet$ be the minimal free resolutions of
$M(d')\otimes \operatorname{Sym}^N V$ and $M(d)$ respectively. The map $F_j'\to
F_j$ induced by $\psi$ is induced by the equivariant map of vector spaces
\[
\mathbf{S}_{\lambda(d')}V\otimes \operatorname{Sym}^NV\to \mathbf{S}_{\lambda(d)}V.
\]
This map is surjective because it is a projection onto one of the factors in the Pieri rule decomposition of
$\mathbf{S}_{\lambda(d')}V\otimes \operatorname{Sym}^NV$.
This simplifies the proof of Theorem~\ref{theorem:eqvtmodules} as follows. Let $i_1 > \cdots > i_\ell$ be the indices at which $d$ and $d'$ differ. By iteratively applying the construction outlined in this remark, we may construct the desired modules and nonzero morphism in $\ell$ steps. Since $\ell$ can be far smaller than $r:=\sum_{j=0}^n (d_j'-d_j)$, this variant is useful for computing examples such as Example~\ref{ex:big eqvt}.
\end{remark}
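The iteration described in this remark can be sketched numerically; the helper \verb|chain| is ours, and the only property it checks is that every intermediate sequence remains strictly increasing. Applied to the data of Example~\ref{ex:big eqvt} below, it reproduces the intermediate sequences $d^{(1)}$ and $d^{(2)}$ listed there.

```python
# Sketch (ours) of the l-step chain from this remark: process the indices
# where d and d' differ in decreasing order, raising each coordinate all the
# way to its target value in a single step.

def chain(d, d_prime):
    assert len(d) == len(d_prime)
    steps, cur = [tuple(d)], list(d)
    for i in reversed(range(len(d))):
        if cur[i] != d_prime[i]:
            cur[i] = d_prime[i]
            # each intermediate must remain a strictly increasing sequence
            assert all(a < b for a, b in zip(cur, cur[1:]))
            steps.append(tuple(cur))
    return steps

# For d = (0,2,3,6,7) and d' = (1,2,5,6,10), the intermediate sequences are
# (0,2,3,6,10) and (0,2,5,6,10), i.e. d^(1) and d^(2) of Example "ex:big eqvt".
```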
\begin{example}\label{ex:big eqvt}
We illustrate Theorem~\ref{theorem:eqvtmodules}
with $n=4$, $d = (0,2,3,6,7)$, and $d' = (1,2,5,6,10)$. Using the notation
of Remark~\ref{rmk:simpler}, $d^{(1)} = (0,2,3,6,10)$, $d^{(2)} = (0,2,5,6,10)$.
Following the same conventions as in Example~\ref{example:eqvtres1},
the corresponding resolutions are given in Figure~\ref{fig:big diagram}.
%
Notice that $d_3=6=d_3'$. Focusing on the third terms of the
resolutions, we see that the maps are simply projections from
Pieri's rule. In particular, these maps are surjective and therefore
nonzero.
\qedhere
\begin{landscape}
\begin{figure}
\vspace*{1cm}
\[
\begin{CD}
d \quad \quad @. @. {\tiny \tableau[scY]{,,,|,,,|,,,|,|,}} @<<<
{\tiny \tableau[scY]{,,,|,,,|,,,|,|,|||}} @<<< {\tiny
\tableau[scY]{,,,|,,,|,,,|,|,|,||}} @<<< {\tiny
\tableau[scY]{,,,|,,,|,,,|,,|,,|,,||}} @<<< {\tiny
\tableau[scY]{,,,|,,,|,,,|,,,|,,|,,||}} @<<< 0 \\
@. @. @AAA @AAA @AAA @AAA @AAA \\
d^{(1)} \quad \quad @. {\tiny \tableau[scY]{|||}} \ \otimes \bigg(
@. {\tiny \tableau[scY]{,,|,,|,,|,|,}} @<<<{\tiny
\tableau[scY]{,,|,,|,,|,|,|||}} @<<< {\tiny
\tableau[scY]{,,|,,|,,|,|,|,||}} @<<< {\tiny
\tableau[scY]{,,|,,|,,|,,|,,|,,||}} @<<< {\tiny
\tableau[scY]{,,,|,,,|,,,|,,,|,,|,,||}} @<<< 0 @. \bigg) \\
@. @. @AAA @AAA @AAA @AAA @AAA \\
d^{(2)} \quad \quad @. {\tiny \tableau[scY]{|||}} \ \otimes \
{\tiny \tableau[scY]{||}} \ \otimes \bigg( @. {\tiny
\tableau[scY]{,,|,,|,,|||}} @<<<{\tiny
\tableau[scY]{,,|,,|,,|||||}} @<<< {\tiny
\tableau[scY]{,,|,,|,,|,|,|,||}} @<<< {\tiny
\tableau[scY]{,,|,,|,,|,,|,|,||}} @<<< {\tiny
\tableau[scY]{,,,|,,,|,,,|,,,|,|,||}} @<<< 0 @. \bigg) \\
@. @. @AAA @AAA @AAA @AAA @AAA \\
d' \quad \quad @. {\tiny \tableau[scY]{|||}} \ \otimes \ {\tiny
\tableau[scY]{||}} \ \otimes \ {\tiny \tableau[scY]{|}} \
\otimes \bigg( @. {\tiny \tableau[scY]{,,|,,|,,|||}} @<<<{\tiny
\tableau[scY]{,,|,,|,,||||}} @<<< {\tiny
\tableau[scY]{,,|,,|,,|,|,|,|}} @<<< {\tiny
\tableau[scY]{,,|,,|,,|,,|,|,|}} @<<< {\tiny
\tableau[scY]{,,,|,,,|,,,|,,,|,|,|}} @<<< 0 @. \bigg) \\
\end{CD}
\]
\smallskip
\caption{The Young diagram depictions of the resolutions in
Example~\ref{ex:big eqvt}.}
\label{fig:big diagram}
\end{figure}
\end{landscape}
\end{example}
\section{The poset of root sequences}%
\label{sec:prelim root}
Let $\mathcal{E}$ be a coherent sheaf on $\mathbb{P}^{n-1}$.
The \defi{cohomology table} of $\mathcal{E}$ is a table
with rows indexed by $\{0, \ldots, n-1\}$ and columns indexed by $\mathbb{Z}$,
such that the entry in row $i$ and column $j$ is
$\dim_\Bbbk \mathrm{H}^i(\mathbb{P}^{n-1}, \mathcal{E}(j-i))$.
A sequence
$f=(f_1, \dots, f_{n-1})\in \left(\mathbb{Z}\cup\{-\infty\}\right)^{n-1}$
is called a \defi{root sequence} for $\mathbb{P}^{n-1}$
if $f_i<f_{i-1}$ for all $i$ (with the convention that $-\infty<-\infty$).
The \defi{length} of $f$, denoted $\ell(f)$, is the largest integer $t$
such that $f_t$ is finite.
\begin{definition}\label{def:supernatural}
Let $f$ be a root sequence for $\mathbb{P}^{n-1}$. A sheaf $\mathcal{E}$ on $\mathbb{P}^{n-1}$ is
\defi{supernatural of type} $f=(f_1, \dots, f_{n-1})$ if the following are satisfied:
\begin{asparaenum}
\item The dimension of $\operatorname{Supp} \mathcal{E}$ is $\ell(f)$.
\item For all $j\in \mathbb Z$, there exists at most one $i$
such that $\dim_\Bbbk \mathrm{H}^i(\mathbb{P}^{n-1}, \mathcal{E}(j))\ne 0$.
\item The Hilbert polynomial of $\mathcal{E}$ has roots $f_1, \dots, f_{\ell(f)}$.
\end{asparaenum}
Dropping the reference to its root sequence,
we also say that $\mathcal{E}$ is a \defi{supernatural sheaf}
(or a \defi{supernatural vector bundle} if it is locally free).
\end{definition}
For every root sequence $f$, there exists a supernatural sheaf of type
$f$~\cite{EiScConjOfBS07}*{Theorem~0.4}.
Moreover, the cohomology table of any coherent sheaf
can be written as a positive real combination of cohomology tables
of supernatural sheaves~\cite{EiScSupNat09}*{Theorem~0.1}.
The \defi{cone of cohomology tables} for $\mathbb{P}^{n-1}$
is the convex cone inside $\prod_{j \in \mathbb{Z}}\mathbb{R}^n$
generated by cohomology tables of coherent sheaves on $\mathbb{P}^{n-1}$.
Each root sequence $f$ corresponds to a unique extremal ray of this cone,
which we denote by $\rho_f$, and
every extremal ray is of the form $\rho_f$ for some root sequence $f$.
\begin{definition}\label{defn:partial:root}
For two root sequences $f$ and $f'$, we say that $f\preceq f'$ and that
$\rho_f \preceq \rho_{f'}$ if $f_i\leq f_i'$ for all $i$.
\end{definition}
This partial order induces a simplicial fan structure on the cone of cohomology
tables, where simplices correspond to chains of root sequences under the partial
order $\preceq$.
We now
show that the existence of a nonzero homomorphism between two
supernatural sheaves implies the comparability of their corresponding
root sequences, which provides the reverse implications for
Theorems~\ref{thm:poset:root:main} and~\ref{thm:equivariant:root}.
\begin{prop}\label{prop:half:poset:root}
Let $\mathcal{E}$ and $\mathcal{E}'$ be supernatural sheaves of types $f$ and $f'$
respectively. If $\operatorname{Hom}(\mathcal{E}',\mathcal{E})\ne 0$, then $f\preceq f'$.
\end{prop}
\begin{proof}
Let $\tate(\mathcal{E})$ and $\tate(\mathcal{E}')$ denote the Tate resolutions of
$\mathcal{E}$ and $\mathcal{E}'$~\cite{EiFlScExterior03}*{\S4}. These are doubly
infinite acyclic complexes over the exterior algebra $\Lambda$,
which is Koszul dual to $S$ and has generators in degree $-1$.
Since $\operatorname{Hom}(\mathcal{E}',\mathcal{E})\ne 0$, there is a map $\phi\colon \tate(\mathcal{E}')
\to \tate(\mathcal{E})$ that is not null-homotopic. Observe that for every
cohomological degree $j$, $\phi^j \colon \tate(\mathcal{E}')^j \to
\tate(\mathcal{E})^j$ is nonzero. Indeed, suppose $\phi^j=0$ for some $j$. First,
we may then take $\phi^k=0$ for all $k < j$. Second, if $k > j$, then
after applying $\operatorname{Hom}_{\Lambda}(-, \Lambda)$ (which is exact because
$\Lambda$ is self-injective), we can take $\phi^k$ to be zero. But then
$\phi$ would be null-homotopic, a contradiction.
By~\cite{EiScConjOfBS07}*{Theorem~6.4}, we see that all the minimal
generators of $\tate(\mathcal{E})^j$ (respectively, $\tate(\mathcal{E}')^j$) are of a single degree
$i$ (respectively, $i'$). (This is equivalent to stating that every
column of the cohomology table of $\mathcal{E}$ and $\mathcal{E}'$ contains precisely one
nonzero entry.) Since $\phi^j$ is nonzero and $\Lambda$ is generated
in degree $-1$, we see that $i' \leq i$. Now, again
by~\cite{EiScConjOfBS07}*{Theorem~6.4}, we conclude that $f \preceq f'$.
\end{proof}
\section{Construction of morphisms between supernatural sheaves}%
\label{sec:construct root}
The goal of this section is to prove Theorem~\ref{thm:main:root}, which provides
the forward direction of Theorem~\ref{thm:poset:root:main}.
\begin{thm}
\label{thm:main:root}
Let $f\preceq f'$ be two root sequences.
Then there exist supernatural sheaves
$\mathcal{E}$ and $\mathcal{E}'$ of types $f$ and $f'$, respectively,
with $\operatorname{Hom}(\mathcal{E}',\mathcal{E})\ne 0$.
\end{thm}
For the purposes of exposition, we separate the proof of
Theorem~\ref{thm:main:root} into two cases (with $\ell(f) = \ell(f')$
and with $\ell(f) < \ell(f')$), and handle these cases in
Propositions~\ref{propn:rootSameLength} and~\ref{propn:rootDiffLength}
respectively. Examples~\ref{ex:1} and~\ref{ex:root:diff:lengths}
illustrate the essential ideas behind the proof in each case.
If $\ell(f) < n-1$, then we call $(f_1, \ldots, f_{\ell(f)})$
the \defi{truncation} of $f$, and write $\tau(f)$. Let $f=(f_1, \dots,
f_{n-1})$ be a root sequence with $\ell(f)=s$. Denote the $s$-fold product
of $\mathbb{P}^1$ by $\mathbb{P}^{1\times s}$. Fix homogeneous
coordinates
\begin{equation}
\label{equation:homogCoordProdPP}
\left([y_0^{(1)}:y_1^{(1)}], \dots, [y_0^{(s)}:y_1^{(s)}]\right)
\quad\text{on}\quad \mathbb{P}^{1\times s}.
\end{equation}
In order to produce a supernatural sheaf of type $f$ on $\mathbb{P}^{n-1}$,
we first construct a supernatural vector bundle of type $\tau(f)$ on $\mathbb{P}^s$.
Its image under an embedding of $\mathbb{P}^s$ as a linear subvariety of $\mathbb{P}^{n-1}$
will give the desired supernatural sheaf.
We now outline our approach to construct a nonzero map between supernatural sheaves on $\mathbb{P}^s$ of types $f\preceq f'$ in the case that $\ell(f) = \ell(f') = s$.
This uses the proof of~\cite{EiScConjOfBS07}*{Theorem~6.1}.
\begin{enumerate}
\item\label{enum:snvbConstEqLenFinMap}
Construct a finite map $\pi\colon \mathbb{P}^{1\times s}\to \mathbb{P}^{s}$.
\item\label{enum:snvbConstEqLenPushFwd}
Choose appropriate line bundles $\mathcal L$ and $\mathcal L'$ on $\mathbb{P}^{1\times s}$
so that $\pi_* \mathcal L$ and $\pi_*\mathcal L'$ are supernatural vector bundles
of the desired types.
\item\label{enum:snvbConstEqLenMaps}
When $\ell(f) = \ell(f') = s$, construct a morphism $\mathcal L' \stackrel{\phi}{\longrightarrow}
\mathcal L$ such that $\pi_* \phi$ is nonzero.
\end{enumerate}
For \eqref{enum:snvbConstEqLenFinMap},
we use the multilinear $(1,\dots,1)$-forms
\begin{equation}\label{eqn:fp}
g_p:= \sum_{i_1+\dots+i_{s}=p} \left( \prod_{j=1}^{s} y_{i_j}^{(j)}
\right) \qquad \text{for}\quad p=0, \dots, s
\end{equation}
on $\mathbb{P}^{1\times s}$ to define the map
$\pi\colon \mathbb{P}^{1\times s}\to \mathbb{P}^s$ via $[g_0:\cdots:g_s]$.
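For concreteness, the monomials appearing in each form $g_p$ can be enumerated directly from the displayed formula; the helper \verb|g| and the index-tuple representation of a monomial are ours.

```python
from itertools import product

# The multilinear (1,...,1)-forms g_0, ..., g_s from the displayed formula:
# g_p is the sum, over tuples (i_1,...,i_s) in {0,1}^s with i_1+...+i_s = p,
# of the monomials y^(1)_{i_1} * ... * y^(s)_{i_s}.  We represent each
# monomial by its index tuple (i_1, ..., i_s).

def g(p, s):
    return [idx for idx in product((0, 1), repeat=s) if sum(idx) == p]

# g_p has binomial(s, p) monomials; e.g. for s = 2 the three forms are
# y0*y0', y0*y1' + y1*y0', and y1*y1'.
```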
For \eqref{enum:snvbConstEqLenPushFwd},
with $\mathbf 1 :=(1,\dots, 1)\in\mathbb{Z}^s$,
\[
\mathcal{E}_f:=\pi_*\left({\mathcal O}_{\mathbb{P}^{1\times s}}(-f-\mathbf 1)\right)
\]
is a supernatural vector bundle of type $\tau(f)$ on $\mathbb{P}^s$ of rank
$s!$ (the degree of $\pi$). The next example illustrates
\eqref{enum:snvbConstEqLenMaps}.
\begin{example}
\label{ex:1}
Here we find a nonzero morphism $\mathcal{E}_{f'}\to \mathcal{E}_{f}$ that
is the direct image of a morphism of line bundles on $\mathbb{P}^{1\times (n-1)}$.
Let $n=5$ and $f:=(-2,-3,-4,-5) \preceq f':=(-1,-2,-3,-4)$.
The map $\pi\colon \mathbb{P}^{1\times 4}\to \mathbb{P}^4$ is finite of degree $4!=24$.
Following steps \eqref{enum:snvbConstEqLenFinMap} and \eqref{enum:snvbConstEqLenPushFwd} as outlined above, we set
$\mathcal{E}:=\mathcal{E}_{f}=\pi_* {\mathcal O}_{\mathbb{P}^{1\times 4}}(1,2,3,4)$ and
$\mathcal{E}':=\mathcal{E}_{f'}=\pi_* {\mathcal O}_{\mathbb{P}^{1\times 4}}(0,1,2,3)$.
There is a natural inclusion
\begin{equation}
\label{equation:inclGlobalSec}
\pi_* \mathcal{H}om_{\mathbb{P}^{1\times 4}}\left( {\mathcal O}_{\mathbb{P}^{1\times 4}}(0,1,2,3),
{\mathcal O}_{\mathbb{P}^{1\times 4}}(1,2,3,4)\right)\subseteq
\mathcal{H}om_{\mathbb{P}^4}\left( \mathcal{E}', \mathcal{E} \right),
\end{equation}
which induces an inclusion of global sections
(see Remark~\ref{rmk:inclusion}). Therefore
\begin{align*}
\operatorname{Hom}(\mathcal{E}', \mathcal{E})&\supseteq \mathrm{H}^0\left(\mathbb{P}^4, \pi_* \mathcal{H}om_{\mathbb{P}^{1\times 4}}\left( {\mathcal O}_{\mathbb{P}^{1\times 4}}(0,1,2,3), {\mathcal O}_{\mathbb{P}^{1\times 4}}(1,2,3,4)\right) \right)\\
&= \mathrm{H}^0(\mathbb{P}^{1\times 4}, {\mathcal O}_{\mathbb{P}^{1\times 4}}(1,1,1,1))\\
&\simeq\Bbbk^{16}.
\end{align*}
We thus conclude that $\operatorname{Hom}(\mathcal{E}', \mathcal{E})\ne 0$.
The inclusion \eqref{equation:inclGlobalSec} is strict.
Note that, by definition, neither $\mathcal{E}'$ nor $\mathcal{E}$ has intermediate cohomology,
and hence, by Horrocks' Splitting Criterion, both $\mathcal{E}$ and $\mathcal{E}'$ must split as the sum of line bundles.
Thus $\mathcal{E}'={\mathcal O}_{\mathbb{P}^4}^{24}$ and $\mathcal{E}={\mathcal O}_{\mathbb{P}^4}(1)^{24}$, and it follows
that $\operatorname{Hom}(\mathcal{E}',\mathcal{E})=\mathrm{H}^0(\mathbb{P}^4,{\mathcal O}(1)^{576}) \simeq \Bbbk^{2880}$.
\end{example}
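The dimension counts in this example follow from the K\"unneth formula, since $h^0(\mathbb{P}^1,{\mathcal O}(a)) = a+1$ for $a \ge 0$ and $h^0(\mathbb{P}^n,{\mathcal O}(d)) = \binom{n+d}{d}$; the helper names below are ours. The same count gives $h^0({\mathcal O}(1,1,1)) = 8$, which reappears in Example~\ref{ex:root:diff:lengths}.

```python
from math import comb, prod

# Sketch (ours) of the dimension counts in Example "ex:1", via Kunneth:
# on a product of projective lines, h^0(O(a_1,...,a_s)) = prod (a_i + 1)
# when all a_i >= 0 (and 0 if some a_i < 0); on P^n, h^0(O(d)) = C(n+d, d).

def h0_product_of_lines(a):
    return prod(ai + 1 for ai in a) if all(ai >= 0 for ai in a) else 0

def h0_projective_space(n, d):
    return comb(n + d, d) if d >= 0 else 0

# h^0(P^{1x4}, O(1,1,1,1)) = 2^4 = 16, the lower bound for Hom(E', E);
# Hom(O^24, O(1)^24) on P^4 has dimension 24 * 24 * h^0(P^4, O(1)) = 2880.
```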
\begin{remark}\label{rmk:inclusion}
Let $\pi\colon\mathbb{P}^{1\times s} \to \mathbb{P}^s$ be as in \eqref{enum:snvbConstEqLenFinMap}.
For coherent sheaves $\mathcal{F}$ and $\mathcal{G}$ on $\mathbb{P}^{1\times s}$, we have
\[
\pi_* \mathcal{H}om_{{\mathcal O}_{\mathbb{P}^{1\times s}}}(\mathcal{F},\mathcal{G})
\subseteq
\mathcal{H}om_{{\mathcal O}_{\mathbb{P}^s}}(\pi_*\mathcal{F},\pi_*\mathcal{G}).
\]
Indeed, this can be checked locally.
Let $U \subseteq \mathbb{P}^s$ be an affine open subset,
and write $A = \mathrm{H}^0(U, {\mathcal O}_{\mathbb{P}^s})$ and
$B = \mathrm{H}^0(U, \pi_*{\mathcal O}_{\mathbb{P}^{1\times s}})$.
For all $B$-modules $M$ and $N$, every nonzero $B$-module
homomorphism $M \to N$ is also a nonzero $A$-module homomorphism
via the map $A \rightarrow B$. Injectivity of the inclusion is immediate.
\end{remark}
\begin{remark}
\label{remark:pushFwdSupNat}
Suppose that $\beta \colon \mathbb{P}^s \to \mathbb{P}^{n-1}$ is a closed immersion
as a linear subvariety.
Let $\mathcal{E}$ be a coherent sheaf on $\mathbb{P}^s$.
It follows from the projection formula and from the finiteness of $\beta$ that
$\mathcal{E}$ is a supernatural sheaf
on $\mathbb{P}^s$ of type $(f_1, \ldots, f_s)$ if and only if $\beta_*\mathcal{E}$ is a
supernatural sheaf on $\mathbb{P}^{n-1}$ of type
$(f_1, \ldots, f_s, -\infty, \ldots, -\infty)$.
\end{remark}
\begin{prop}
\label{propn:rootSameLength}
If $\ell(f) = \ell(f')$, then Theorem~\ref{thm:main:root} holds.
\end{prop}
\begin{proof}
We first reduce to the case $\ell(f')=n-1$.
Set $s:=\ell(f')$, and let $\beta\colon\mathbb{P}^{s} \to \mathbb{P}^{n-1}$ be a closed
immersion as a linear subvariety.
Write $f=(f_1, \dots, f_s, -\infty, \dots, -\infty)$ and
$f'=(f_1', \dots, f_s', -\infty, \dots, -\infty)$.
Assume that $\mathcal{E}$ and $\mathcal{E}'$ are supernatural sheaves
of types $(f_1, \dots, f_s)$ and $(f'_1, \dots, f_s')$ on $\mathbb{P}^{s}$ and that
$\operatorname{Hom}(\mathcal{E}', \mathcal{E})\ne 0$. Then, by Remark~\ref{remark:pushFwdSupNat},
$\beta_*\mathcal{E}$ and $\beta_*\mathcal{E}'$ are supernatural sheaves of types $f$ and $f'$,
and $\operatorname{Hom}(\beta_*\mathcal{E}', \beta_*\mathcal{E}) \neq 0$.
We may thus assume that $\ell(f')=n-1$.
Let $\mathbf{1} := (1,\dots,1) \in\mathbb{Z}^{n-1}$.
Let $\pi\colon \mathbb{P}^{1 \times (n-1)} \to \mathbb{P}^{n-1}$ be the morphism given by
the forms $g_p$ defined in~\eqref{eqn:fp} (with $s=n-1$).
Let $\mathcal{E} := \mathcal{E}_f= \pi_* {\mathcal O}(-f-\mathbf 1)$ and
$\mathcal{E}' := \mathcal{E}_{f'}= \pi_*{\mathcal O}(-f'-\mathbf{1})$.
Remark~\ref{rmk:inclusion} shows that
\[
\mathrm{H}^0\left(\mathbb{P}^{n-1}, \pi_* \mathcal{H}om_{\mathbb{P}^{1\times (n-1)}}\left(
{\mathcal O}(-f'-\mathbf{1}), {\mathcal O}(-f-\mathbf{1}) \right) \right) \subseteq
\operatorname{Hom}_{\mathbb{P}^{n-1}}(\mathcal{E}',\mathcal{E}).
\]
Note that
$\mathcal{H}om_{\mathbb{P}^{1\times (n-1)}}\left(
{\mathcal O}(-f'-\mathbf{1}), {\mathcal O}(-f-\mathbf{1})
\right) = {\mathcal O}( f' - f )$.
Since $f \preceq f'$, we have that
$\mathrm{H}^0(\mathbb{P}^{1\times (n-1)}, {\mathcal O}( f'- f ))\ne 0$,
and thus $\operatorname{Hom}_{\mathbb{P}^{n-1}}(\mathcal{E}',\mathcal{E}) \neq 0$.
\end{proof}
When $\ell(f) < \ell(f')$, the supernatural sheaves
constructed using~\eqref{enum:snvbConstEqLenFinMap}
and~\eqref{enum:snvbConstEqLenPushFwd} above have supports
of different dimensions.
Before addressing this general case, we provide an example.
\begin{example}
\label{ex:root:diff:lengths}
Let $n=5$ and $f=(-2,-3,-4,-\infty) \preceq f'=(-1,-2,-3,-4)$,
so that $\ell(f)=3 < \ell(f') = 4 = n-1$.
We proceed by modifying steps
\eqref{enum:snvbConstEqLenFinMap}-\eqref{enum:snvbConstEqLenMaps}
above.
{
\makeatletter
\def\theenumi{\@roman\c@enumi$'$}
\makeatother
\begin{enumerate}
\item\label{enum:snvbConstNonEqLenFinMap}
We extend the construction of \eqref{enum:snvbConstEqLenFinMap}
to the commutative diagram
\[
\xymatrix{
\mathbb{P}^{1\times 3} \ar[r]^{\alpha} \ar[d]^{\pi^{(3)}} & \mathbb{P}^{1 \times 4} \ar[d]^{\pi^{(4)}}\\
\mathbb{P}^3 \ar[r]^{\beta}&\mathbb{P}^{4}.
}
\]
\item\label{enum:snvbConstNonEqLenPushFwd}
Choose appropriate line bundles $\mathcal L$ on $\mathbb{P}^{1\times 3}$
and $\mathcal L'$ on $\mathbb{P}^{1\times 4}$,
so that $\pi^{(3)}_* \mathcal L$ and $\pi^{(4)}_* \mathcal L'$
are supernatural sheaves of the desired types.
\item\label{enum:snvbConstNonEqLenMaps}
Construct a morphism
$\mathcal L' \stackrel{\phi}{\longrightarrow}\alpha_*\mathcal L$
such that $\pi^{(4)}_* \phi$ is nonzero.
\end{enumerate}
}
For \eqref{enum:snvbConstNonEqLenFinMap}, we use the
homogeneous coordinates from \eqref{equation:homogCoordProdPP}.
The maps $\pi^{(3)}$ and $\pi^{(4)}$ are instances of the map $\pi$
from \eqref{enum:snvbConstEqLenFinMap} for $\mathbb{P}^{1 \times 3}$
and $\mathbb{P}^{1 \times 4}$, respectively.
Define a closed immersion
$\alpha\colon\mathbb{P}^{1\times 3}\rightarrow \mathbb{P}^{1\times 4}$
by the vanishing of the coordinate $y_1^{(4)}$.
Fix coordinates $x_0, \ldots, x_4$ for $\mathbb{P}^4$, and let
$\beta\colon \mathbb{P}^3 \rightarrow \mathbb{P}^4$ be the closed immersion given
by the vanishing of $x_4$.
We now have that the diagram in
\eqref{enum:snvbConstNonEqLenFinMap} is indeed commutative.
In \eqref{enum:snvbConstNonEqLenPushFwd},
we take $\mathcal L ={\mathcal O}_{\mathbb P^{1\times 3}}(1,2,3)$ and
$\mathcal L' = {\mathcal O}_{\mathbb{P}^{1\times 4}}(0,1,2,3)$ and set
$\mathcal{E}_f = \pi^{(3)}_* \mathcal{L}$ and $\mathcal{E}_{f'} = \pi^{(4)}_* \mathcal{L}'$.
Set $\mathcal{E}:=\beta_* \mathcal{E}_f$ and $\mathcal{E}':=\mathcal{E}_{f'}$.
Then $\mathcal{E}$ is a supernatural sheaf on $\mathbb{P}^4$
(see Remark~\ref{remark:pushFwdSupNat}), and
\begin{align*}
\operatorname{Hom}_{\mathbb{P}^{4}}(\mathcal{E}', \mathcal{E})
&= \mathrm{H}^0\left(\mathbb{P}^{4}, \mathcal{H}om\left( \pi^{(4)}_* \left( {\mathcal O}_{\mathbb{P}^{1\times
4}}(0,1,2,3)\right), \pi^{(4)}_*\left( \alpha_*{\mathcal O}_{\mathbb{P}^{1\times 3}}(1,2,3)
\right) \right) \right).&
\intertext{By Remarks~\ref{rmk:inclusion} and~\ref{rmk:0s}, we obtain the containment}
\operatorname{Hom}_{\mathbb{P}^{4}}(\mathcal{E}', \mathcal{E})&\supseteq \mathrm{H}^0\left(\mathbb{P}^{4}, \pi^{(4)}_* \mathcal{H}om\left( {\mathcal O}_{\mathbb{P}^{1\times 4}}(0,1,2,3), \alpha_*{\mathcal O}_{\mathbb P^{1\times 3}}(1,2,3) \right)\right) \\
&\cong \mathrm{H}^0\left(\mathbb{P}^{1\times 4}, \mathcal{H}om\left( {\mathcal O}_{\mathbb
P^{1\times 4}}(0,1,2,3),\alpha_*{\mathcal O}_{\mathbb P^{1\times
3}}(1,2,3) \right) \right)\\
&\cong \mathrm{H}^0\left(\mathbb{P}^{1\times 4}, \left( \alpha_*{\mathcal O}_{\mathbb P^{1\times 3}}(1,1,1)\right)(0,0,0,-3) \right)&\\
&\cong \mathrm{H}^0\left(\mathbb{P}^{1\times 4}, \alpha_*{\mathcal O}_{\mathbb P^{1\times 3}}(1,1,1)\right)\cong \Bbbk^{8}.
\end{align*}
In particular, $\operatorname{Hom}_{\mathbb{P}^{4}}(\mathcal{E}', \mathcal{E})\ne 0$, as desired.
\end{example}
\begin{remark}\label{rmk:0s}
Let $1 \leq s < t$, and let $\alpha\colon\mathbb{P}^{1\times s} \rightarrow \mathbb{P}^{1
\times t}$ be the embedding given by the vanishing of $y^{(s+1)}_1, \ldots,
y^{(t)}_1$.
Let $\mathcal F$ be a coherent sheaf on $\mathbb{P}^{1 \times s}$ and
$b\in \mathbb{Z}^{t-s}$. Write $\mathbf{0}_s$ for the
$0$-vector in $\mathbb{Z}^s$. Then
\begin{equation}
\mathrm{H}^i\left( \mathbb{P}^{1\times t}, \left( \alpha_* \mathcal{F}\right) (\mathbf{0}_s,b)
\right)
\cong
\mathrm{H}^i\left( \mathbb{P}^{1\times t}, \alpha_* \mathcal{F}\right)
\cong
\mathrm{H}^i\left( \mathbb{P}^{1\times s}, \mathcal{F}\right).
\end{equation}
The first isomorphism
follows from the projection formula, taken along with
the fact that, by the definition of $\alpha$, the line bundle
${\mathcal O}_{\mathbb{P}^{1\times t}}(\mathbf{0}_s,b)$ is trivial when restricted to the
support of $\alpha_* \mathcal{F}$ (which is contained in $\mathbb{P}^{1 \times s}$).
The second isomorphism holds because $\alpha$ is a finite morphism.
\end{remark}
\begin{prop}
\label{propn:rootDiffLength}
If $\ell(f) < \ell(f')$, then Theorem~\ref{thm:main:root} holds.
\end{prop}
\begin{proof}
We may reduce to the case $\ell(f')=n-1$ by the same argument as in the beginning of the proof of Proposition~\ref{propn:rootSameLength}.
Let $s = \ell(f)$ and consider the line bundles
$\mathcal L = {\mathcal O}_{\mathbb{P}^{1 \times s}}(-\tau(f) - \mathbf 1)$
on $\mathbb{P}^{1 \times s}$ and
$\mathcal L' = {\mathcal O}_{\mathbb{P}^{1 \times(n-1)}}(-f' - \mathbf 1)$
on $\mathbb{P}^{1 \times (n-1)}$.
Let $\pi\colon\mathbb{P}^{1\times s} \to \mathbb{P}^s$ and
$\pi'\colon \mathbb{P}^{1 \times (n-1)} \to \mathbb{P}^{n-1}$ be the maps
defined by the forms in~\eqref{eqn:fp}.
Let $\mathcal{E}_f = \pi_* \mathcal L$ and
$\mathcal{E}_{f'} = (\pi')_* \mathcal L'$, and define the closed immersion
$\alpha\colon\mathbb{P}^{1\times s}\rightarrow \mathbb{P}^{1\times(n-1)}$
by the vanishing of the coordinates $y_1^{(s+1)}, \ldots,y_1^{(n-1)}$.
Fix coordinates $x_0, \ldots, x_{n-1}$ for $\mathbb{P}^{n-1}$, and
let $\beta\colon \mathbb{P}^s \rightarrow \mathbb{P}^{n-1}$ be the closed immersion
given by the vanishing of $x_{s+1}, \ldots, x_{n-1}$.
This yields the commutative diagram
\[
\xymatrix{
\mathbb{P}^{1\times s} \ar[r]^-{\alpha} \ar[d]^{\pi} &
\mathbb{P}^{1 \times (n-1)} \ar[d]^{\pi'} \\
\mathbb{P}^s \ar[r]^{\beta} & \mathbb{P}^{n-1}.
}
\]
By Remark~\ref{remark:pushFwdSupNat},
$\mathcal{E} := \beta_* \mathcal{E}_f$ is a supernatural sheaf of type $f$.
Also, $\mathcal{E}' := \mathcal{E}_{f'}$ is a supernatural sheaf of type $f'$.
We must show that $\operatorname{Hom}_{\mathbb{P}^{n-1}}(\mathcal{E}', \mathcal{E}) \neq 0$.
It suffices to show that
$\operatorname{Hom}_{\mathbb{P}^{1 \times
(n-1)}}(\mathcal L', \alpha_*\mathcal L) \neq 0$
by Remark~\ref{rmk:inclusion}.
To see this, let $c:=(f_1', \dots, f'_s)$ and $b:=(-f'_{s+1}-1, \dots, -f'_{s'}-1)$,
and note that
\begin{align*}
\mathcal{H}om (\mathcal L', \alpha_*\mathcal L) & =
\mathcal{H}om({\mathcal O}_{\mathbb{P}^{1\times (n-1)}}(-f'-\mathbf 1),
\alpha_* {\mathcal O}_{\mathbb{P}^{1\times s}}(-\tau(f)-\mathbf 1)) \\
& \cong (\alpha_* {\mathcal O}_{\mathbb{P}^{1\times s}}(c-\tau(f)))(\mathbf 0_s, -b).
\end{align*}
By Remark~\ref{rmk:0s},
$\operatorname{Hom} (\mathcal L', \alpha_*\mathcal L) = \mathrm{H}^0(\mathbb{P}^{1 \times s},
{\mathcal O}(c-\tau(f)))$, which is nonzero as $\tau(f) \preceq c$.
\end{proof}
\section{Equivariant construction of morphisms between supernatural sheaves}%
\label{sec:equiv:root}
Throughout this section, we assume that $\Bbbk$ is a field of
characteristic 0 and that all root sequences have length $n-1$. Let $V$ be
an $n$-dimensional $\Bbbk$-vector space, identify $\mathbb{P}^{n-1}$ with
$\mathbb{P}(V)$, and let $\mathcal{Q}$ denote the tautological quotient bundle of
rank $n-1$ on $\mathbb{P}(V)$. We have a short exact sequence
\[
0 \to {\mathcal O}(-1) \to V \otimes {\mathcal O}_{\mathbb{P}(V)} \to \mathcal{Q} \to 0.
\]
We will use the fact that $\det \mathcal{Q} \cong {\mathcal O}(1) \otimes \bigwedge^n
V$ is a ${\bf GL}(V)$-equivariant isomorphism. For a weakly decreasing
sequence $\lambda$ of non-negative integers, we let $\mathbf{S}_\lambda$
denote the corresponding Schur functor. See \cite{weyman}*{Chapter 2}
for more details (since we are working in characteristic 0, the
functors $K_\lambda$ and $L_{\lambda^t}$ are isomorphic, where
$\lambda^t$ is the transpose partition of $\lambda$, and we call this
$\mathbf{S}_\lambda$). We extend this definition to weakly decreasing
sequences $\lambda$ with possibly negative entries as follows. Set
$\mathbf{1}=(1,\dots, 1)\in \mathbb Z^{n-1}$ and define
$\mathbf{S}_{\lambda}\mathcal{Q}:=\mathbf{S}_{\lambda - \lambda_{n-1}\mathbf{1}}\mathcal{Q} \otimes
(\det \mathcal{Q})^{\lambda_{n-1}}$.
\begin{proof}[Proof of Theorem~\ref{thm:equivariant:root}]
The reverse implication has been shown in
Proposition~\ref{prop:half:poset:root}. For the forward
implication, we proceed in two steps. First, we construct
equivariant supernatural bundles $\mathcal{E}'$ and $\mathcal{E}$ with $\operatorname{Hom}(\mathcal{E}',
\mathcal{E}) \ne 0$ using the construction in the proof of \cite[Theorem
6.2]{EiScConjOfBS07}. Second, we use this fact to construct a new
supernatural bundle $\mathcal{E}''$ of type $f'$ such that
$\operatorname{Hom}_{{\bf GL}(V)}(\mathcal{E}'',\mathcal{E})\ne 0$. Thus we will ignore powers of the
trivial bundle $\bigwedge^n V$ that appear in the first step.
Write $N_i = f'_i - f_i$ and let $\lambda\in \mathbb Z^{n-1}$ be the
partition defined by
\[
\lambda_i:= f_1 - f_{n-i} - n+1 + i \quad \text{for } 1 \le i \le n-1.
\]
Let $\lambda'$ be the sequence of weakly decreasing integers defined by
$\lambda'_{n-i} := \lambda_{n-i} - N_i$ and set
\[
\mathcal{E} := \mathbf{S}_\lambda \mathcal{Q} \otimes {\mathcal O}(-f_1 - 1)\quad \text{ and } \quad
\mathcal{E}' := \mathbf{S}_{\lambda'} \mathcal{Q} \otimes {\mathcal O}(-f_1 - 1).
\]
Observe that $\mathbf{S}_{\lambda'} \mathcal{Q} \otimes {\mathcal O}(-f_1 - 1) \cong
\mathbf{S}_{\lambda' + N_1\cdot \mathbf{1}} \mathcal{Q} \otimes {\mathcal O}(-f'_1 - 1)$.
Hence by the Borel--Weil--Bott theorem~\cite{weyman}*{Corollary
4.1.9}, $\mathcal{E}$ and $\mathcal{E}'$ are supernatural vector bundles of types
$f$ and $f'$, respectively.
To compute $\operatorname{Hom}(\mathcal{E}',\mathcal{E})$, let $\lambda'' := \lambda' + N_1\cdot
\mathbf{1}$. Define $\lambda^c$ to be the complement of $\lambda$
inside of the $(n-1) \times \lambda_1$ rectangle, so $\lambda^c_j =
\lambda_1 - \lambda_{n-j}$ for $1 \le j \le n-1$. Then
$\mathbf{S}_{\lambda}\mathcal{Q}\cong \mathbf{S}_{\lambda^c}\mathcal{Q}^* \otimes {\mathcal O}(\lambda_1)$ by
\cite{weyman}*{Exercise 2.18}. We then obtain
\[
\mathcal{H}om(\mathcal{E}', \mathcal{E}) \cong \mathbf{S}_{\lambda'} \mathcal{Q}^* \otimes \mathbf{S}_\lambda
\mathcal{Q} \cong \mathbf{S}_{\lambda''} \mathcal{Q}^* \otimes \mathbf{S}_{\lambda^c} \mathcal{Q}^* \otimes
{\mathcal O}(\lambda_1 + N_1)
\]
and seek to show that this bundle has a nonzero global section.
Fix $\mu$ so that $\mathbf{S}_\mu \mathcal{Q}^*$ is a direct summand of
$\mathbf{S}_{\lambda''} \mathcal{Q}^* \otimes \mathbf{S}_{\lambda^c} \mathcal{Q}^*$. The
Borel--Weil--Bott Theorem~\cite{weyman}*{Corollary 4.1.9} shows that
$\mathbf{S}_\mu \mathcal{Q}^* \otimes {\mathcal O}(\lambda_1 + N_1)$ has nonzero sections if and
only if $\lambda_1 + N_1 \ge \mu_1$. This is equivalent to $\mu$ being
inside of a $(n-1) \times (\lambda_1+N_1)$
rectangle. By~\cite{fulton}*{\S9.4}, the existence of such a $\mu$ is
equivalent to the condition
\begin{equation}\label{eqn:lambda:ineq}
\lambda''_i + \lambda^c_{n-i} \le \lambda_1+N_1 \quad \text{for }
i=1,\dots,n-1.
\end{equation}
Since $\lambda''_i + \lambda^c_{n-i} = \lambda_1 + N_1 - N_{n-i}$, we
see that \eqref{eqn:lambda:ineq} holds for all $i$, and thus
$\operatorname{Hom}(\mathcal{E}',\mathcal{E})\ne 0$.
For the second step, replace $\mathcal{E}'$ by $\mathcal{E}'' : = \mathcal{E}' \otimes
\operatorname{Hom}(\mathcal{E}', \mathcal{E})$, where we view $\operatorname{Hom}(\mathcal{E}', \mathcal{E})$ as a trivial bundle
over $\mathbb{P}(V)$. Note that
\[
\mathrm{H}^i(\mathbb{P}(V),\mathcal{E}''(j))\cong \mathrm{H}^i(\mathbb{P}(V),\mathcal{E}'(j))\otimes \operatorname{Hom}(\mathcal{E}',
\mathcal{E})
\]
for all $i,j$, and hence $\mathcal{E}''$ is also supernatural of type $f'$.
The space of sections $\operatorname{Hom}(\mathcal{E}'', \mathcal{E})$ is $\operatorname{Hom}(\mathcal{E}', \mathcal{E})^* \otimes
\operatorname{Hom}(\mathcal{E}', \mathcal{E})$, which contains the ${\bf GL}(V)$-invariant section
corresponding to the evaluation map. This gives a nonzero
${\bf GL}(V)$-equivariant map $\mathcal{E}'' \to \mathcal{E}$.
\end{proof}
\begin{example}
We reconsider Example~\ref{ex:1} in the equivariant context. Here we
will not ignore powers of $\bigwedge^n V$. Let $n=4$ and
$f=(-2,-3,-4,-5) \preceq f'=(-1,-2,-3,-4)$. With notation as in the
proof of Theorem~\ref{thm:equivariant:root}, we have $N=(1,1,1,1)$,
$\lambda=(0,0,0,0)$, $\lambda'=(-1,-1,-1,-1)$,
\begin{align*}
\mathcal{E}&=\mathbf{S}_{(0,0,0,0)}\mathcal{Q} \otimes {\mathcal O}(2-1)={\mathcal O}(1), \quad \text{and}\\
\mathcal{E}'&=\mathbf{S}_{(-1,-1,-1,-1)}\mathcal{Q} \otimes {\mathcal O}(2-1) = \left({\mathcal O}(-1)
\otimes \left(\bigwedge^n V\right)^{-1} \right) \otimes {\mathcal O}(1)
=\left(\bigwedge^n V\right)^{-1} \otimes {\mathcal O}.
\end{align*}
Since $\lambda^c=(0,0,0,0)=\lambda''$, we see that
\[
\mathcal{H}om(\mathcal{E}',\mathcal{E})\cong {\mathcal O}(1) \otimes \bigwedge^n V,
\]
which certainly has nonzero global sections. In fact,
$\operatorname{Hom}(\mathcal{E}',\mathcal{E})\cong V \otimes \bigwedge^n V$. Note, however, that
this implies that there is no nonzero equivariant morphism from
$\mathcal{E}'$ to $\mathcal{E}$. We thus set $\mathcal{E}'':=\mathcal{E}'\otimes \operatorname{Hom}(\mathcal{E}',\mathcal{E})$.
Then $\operatorname{Hom}(\mathcal{E}'',\mathcal{E})\cong V^*\otimes V$, and our desired nonzero
equivariant morphism is given by the trace element.
\end{example}
\section{Remarks on other graded rings}%
\label{sec:extensions}
Given any graded ring $R$, one could try to use an analog of
Theorem~\ref{thm:poset:deg:main} to induce a partial order on
the extremal rays of the cone of Betti diagrams over $R$. This
application has already proven useful in a couple of the other cases
where Boij--S\"oderberg has been studied. In this section, we provide
a sketch of some of these applications.
\begin{figure}
\begin{tikzpicture}[xscale=1.4,yscale=1.0]
\draw(0,9) node {$(0,1,2,3,\dots)$};
\draw(1,10) node {\dots};
\draw(1,8) node {$(0,1,\infty,\infty,\dots)$};
\draw(2,9) node {\dots};
\draw(3,8) node {\dots};
\draw(2,7) node {$(0,2,3,4,\dots)$};
\draw(3,6) node {$(0,2,\infty,\infty,\dots)$};
\draw(1,6) node {$(1,2,3,4,\dots)$};
\draw(4,7) node {\dots};
\draw(2,5) node {$(1,2,\infty,\infty,\dots)$};
\draw(3,4) node {$(1,3,4,5,\dots)$};
\draw(4,5) node {$(0,3,4,5,\dots)$};
\draw(5,6) node {\dots};
\draw(4,3) node {$(1,3,\infty,\infty,\dots)$};
\draw(5,4) node {$(0,3,\infty,\infty,\dots)$};
\draw(6,5) node {\dots};
\draw(4,1) node {$(2,3,4,5,\dots)$};
\draw(5,2) node {$(1,3,4,5,\dots)$};
\draw(6,3) node {$(0,4,5,6,\dots)$};
\draw(7,4) node {\dots};
\draw(5,0) node {\dots};
\draw(6,1) node {\dots};
\draw(7,2) node {\dots};
\draw[-] (1.2,6.2)--(1.8,6.8);
\draw[-] (1.2,5.8)--(1.8,5.2);
\draw[-] (0.2,9.2)--(.8,9.8);
\draw[-] (1.2,8.2)--(1.8,8.8);
\draw[-] (2.2,7.2)--(2.8,7.8);
\draw[-] (3.2,6.2)--(3.8,6.8);
\draw[-] (4.2,5.2)--(4.8,5.8);
\draw[-] (5.2,4.2)--(5.8,4.8);
\draw[-] (6.2,3.2)--(6.8,3.8);
\draw[-] (0.2,8.8)--(0.8,8.2);
\draw[-] (1.2,7.8)--(1.8,7.2);
\draw[-] (2.2,6.8)--(2.8,6.2);
\draw[-] (3.2,5.8)--(3.8,5.2);
\draw[-] (4.2,4.8)--(4.8,4.2);
\draw[-] (5.2,3.8)--(5.8,3.2);
\draw[-] (2.2,5.2)--(2.8,5.8);
\draw[-] (3.2,4.2)--(3.8,4.8);
\draw[-] (4.2,3.2)--(4.8,3.8);
\draw[-] (5.2,2.2)--(5.8,2.8);
\draw[-] (2.2,4.8)--(2.8,4.2);
\draw[-] (3.2,3.8)--(3.8,3.2);
\draw[-] (4.2,2.8)--(4.8,2.2);
\draw[-] (4.2,1.2)--(4.8,1.8);
\draw[-] (4.2,0.8)--(4.8,0.2);
\draw[-] (5.2,1.8)--(5.8,1.2);
\draw[-] (6.2,2.8)--(6.8,2.2);
\end{tikzpicture}
\caption{For
the hypersurface ring $R$, this partial order provides a simplicial fan structure,
as illustrated in~\cite{bbeg} and discussed in Example~\ref{ex:hypersurface}.
The partial order is determined by an analog of
Theorem~\ref{thm:poset:deg:main}.}
\label{fig:hypersurface}
\end{figure}
\begin{example}\label{ex:hypersurface}
We first consider an example involving hypersurface rings over
$\Bbbk[x,y]$. Let $f\in \Bbbk[x,y]$ be a quadric polynomial, and set
$R:=\Bbbk[x,y]/\<f\>$. The cone of Betti diagrams over $R$ is
described in detail in \cite{bbeg}. The extremal rays still
correspond to Cohen--Macaulay modules with pure resolutions, though
some of the degree sequences contain infinite entries.
\begin{enumerate}[(i)]
\item \emph{Finite pure resolutions.} For example, if $h$ is a
degree $7$ polynomial that is not divisible by $f$, then the free
resolution of $R/\<h\>$ is \[ R\leftarrow R(-7) \leftarrow 0.\]
Following the notation of Section~\ref{sec:prelim deg}, we denote
such a resolution by its corresponding degree sequence, i.e.,
$(0,7,\infty,\infty,\dots)$.
\item
\emph{Infinite pure resolutions.}
For example, the free resolution of the $R$-module $R/\<x,y\>$ is
\[
R\leftarrow R^2(-1)\leftarrow R^2(-2)\leftarrow R^2(-3)\leftarrow
\cdots.
\]
We denote this by its corresponding degree sequence, i.e.,
$(0,1,2,3,\dots)$.
\end{enumerate}
There are two possible partial orders for these extremal rays:
\begin{itemize}
\item $\rho_d\preceq \rho_{d'}$ if $d_i\leq d_i'$ for all $i$.
\item $\rho_d\preceq\rho_{d'}$ if there exist Cohen--Macaulay
$R$-modules $M$ and $M'$ with pure resolutions of types $d$ and
$d'$, respectively, with $\operatorname{Hom}_R(M',M)_{\leq 0}\ne 0$.
\end{itemize}
In contrast with the case of the polynomial ring, these partial orders are genuinely different. Only the second partial order leads to a greedy algorithm for decomposing Betti diagrams over $R$, in parallel to~\cite{EiScConjOfBS07}*{Decomposition Algorithm}. This also provides an analog of the Multiplicity Conjecture for $R$.
\end{example}
\begin{example}\label{ex:bigraded}
We now consider $S=\Bbbk[x,y]$ with the $\mathbb{Z}^2$-grading $\deg(x):=(1,0)$
and $\deg(y):=(0,1)$. In general, the cone of bigraded Betti diagrams
over $S$ remains poorly understood. However,
portions of this cone have been worked out by the first
three authors, and we now provide a brief sketch of these unpublished results.
We restrict attention to the cone of Betti diagrams of finite length
$S$-modules $M$, where all of the Betti numbers of $M$ are
concentrated in bidegrees $(a,b)$ with $0\leq a,b\leq 2$. The
extremal rays of this cone may be realized by quotients of monomial
ideals of the form $m_1/m_2$, where each $m_i$ is a monomial ideal
generated by monomials of the form $x^\ell y^k$ with $0\leq
\ell,k\leq 2$.
The natural analog of Theorem~\ref{thm:poset:deg:main} induces
a partial order on these rays, which also induces a simplicial structure
on this cone of bigraded Betti diagrams.
\end{example}
\def\cfudot#1{\ifmmode\setbox7\hbox{$\accent"5E#1$}\else
\setbox7\hbox{\accent"5E#1}\penalty 10000\relax\fi\raise 1\ht7
\hbox{\raise.1ex\hbox to 1\wd7{\hss.\hss}}\penalty 10000 \hskip-1\wd7\penalty
10000\box7}
\begin{bibdiv}
\begin{biblist}
\bib{bbeg}{article}{
author={Berkesch, Christine},
author={Burke, Jesse},
author={Erman, Dan},
author={Gibbons, Courtney},
title={The cone of {B}etti diagrams over a hypersurface ring of low
embedding dimension},
note={\tt arXiv:1109.5198v1},
date={2011},
}
\bib{explicit}{article}{
author={Berkesch, Christine},
author={Erman, Dan},
author={Kummini, Manoj},
author={Sam, Steven~V},
title={Tensor complexes: Multilinear free resolutions constructed from higher tensors},
note={\tt arXiv:1101.4604},
date={2011},
}
\bib{boij-sod1}{article}{
author={Boij, Mats},
author={S{\"o}derberg, Jonas},
title={Graded {B}etti numbers of {C}ohen--{M}acaulay modules and the
multiplicity conjecture},
journal={J. Lond. Math. Soc. (2)},
volume={78},
date={2008},
number={1},
pages={85--106},
issn={0024-6107},
}
\bib{BoijSoderbergNonCM08}{misc}{
author={Boij, Mats},
author={S{\"o}derberg, Jonas},
title={Betti numbers of graded modules and the multiplicity conjecture
in the non-{C}ohen--{M}acaulay case},
date={2008},
note={\tt arXiv:0803.1645},
}
\bib{BrHe:CM}{book}{
author={Bruns, Winfried},
author={Herzog, J{\"u}rgen},
title={Cohen--{M}acaulay rings},
series={Cambridge Studies in Advanced Mathematics},
publisher={Cambridge University Press},
address={Cambridge},
date={1993},
volume={39},
ISBN={0-521-41068-1},
}
\bib{EiFlScExterior03}{article}{
author={Eisenbud, David},
author={Fl{\o}ystad, Gunnar},
author={Schreyer, Frank-Olaf},
title={Sheaf cohomology and free resolutions over exterior algebras},
date={2003},
ISSN={0002-9947},
journal={Trans. Amer. Math. Soc.},
volume={355},
number={11},
pages={4397\ndash 4426 (electronic)},
url={http://dx.doi.org/10.1090/S0002-9947-03-03291-4},
}
\bib{efw}{article}{
author={Eisenbud, David},
author={Fl\o ystad, Gunnar},
author={Weyman, Jerzy},
title={The existence of pure free resolutions},
journal={Ann. Inst. Fourier (Grenoble)},
date={2011},
volume={61},
number={3},
pages={905\ndash 926},
}
\bib{EiScConjOfBS07}{article}{
author={Eisenbud, David},
author={Schreyer, Frank-Olaf},
title={Betti numbers of graded modules and cohomology of vector
bundles},
date={2009},
ISSN={0894-0347},
journal={J. Amer. Math. Soc.},
volume={22},
number={3},
pages={859\ndash 888},
}
\bib{EiScSupNat09}{article}{
author={Eisenbud, David},
author={Schreyer, Frank-Olaf},
title={Cohomology of coherent sheaves and series of supernatural bundles},
journal={J. Eur. Math. Soc. (JEMS)},
volume={12},
date={2010},
number={3},
pages={703--722},
issn={1435-9855},
}
\bib{ES:ICMsurvey}{inproceedings}{
author={Eisenbud, David},
author={Schreyer, Frank-Olaf},
title={Betti numbers of syzygies and cohomology of coherent sheaves},
date={2010},
booktitle={Proceedings of the {I}nternational {C}ongress of
{M}athematicians},
note={Hyderabad, India},
}
\bib{fulton}{book}{
author={Fulton, William},
title={Young tableaux, with applications to representation theory
and geometry},
series={London Mathematical Society Student Texts},
volume={35},
publisher={Cambridge University Press},
place={Cambridge},
date={1997},
pages={x+260},
isbn={0-521-56144-2},
isbn={0-521-56724-6},
}
\bib{M2}{misc}{
label={M2},
author={Grayson, Daniel~R.},
author={Stillman, Michael~E.},
title = {Macaulay 2, a software system for research
in algebraic geometry},
note = {Available at \url{http://www.math.uiuc.edu/Macaulay2/}},
}
\bib{sam}{article}{
author={Sam, Steven V},
author={Weyman, Jerzy},
title={Pieri resolutions for classical groups},
journal={J. Algebra},
volume={329},
date={2011},
pages={222--259},
}
\bib{weyman}{book}{
author={Weyman, Jerzy},
title={Cohomology of vector bundles and syzygies},
series={Cambridge Tracts in Mathematics},
volume={149},
publisher={Cambridge University Press},
place={Cambridge},
date={2003},
pages={xiv+371},
isbn={0-521-62197-6},
}
\end{biblist}
\end{bibdiv}
\bigskip
\end{document}
\section{Introduction}
To sum, our contributions are:
\begin{enumerate}
\item We implement several simple heuristics to generate plausible and natural-sounding code-switched utterances, based on monolingual data from an NLU benchmark;
\item We showcase that monolingual models fail to process code-switched utterances, while cross-lingual models cope much better with such texts;
\item We show that fine-tuning the language model on code-switched utterances improves the performance by ... .
\end{enumerate}
\section{Related work}
\paragraph{Generation of code-switched text}
\paragraph{Joint intent detection and slot-filling}
\paragraph{Other related research directions} include {\bf code-switching detection},
\section{Our approach}
\subsection{Dataset}
\subsection{Joint intent recognition and slot-filling}
\subsection{Code-switching generation}
We propose a two-step approach to generating an adversarial code-switched sentence from a source sentence. First, we generate candidates by replacing randomly chosen words with their translations into the target language. Second, we check whether the candidate sentence increases the loss of the NLU model.
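The two steps can be sketched as follows. This is a minimal illustration only: the bilingual dictionary, the replacement rate \texttt{p\_replace}, and the loss function passed to the selector are hypothetical placeholders, not the actual resources and models used in our experiments.

```python
import random

def generate_candidates(tokens, bilingual_dict, p_replace=0.3, n_candidates=5, seed=0):
    """Step 1: build candidates by replacing randomly chosen words
    with their target-language translations."""
    rng = random.Random(seed)
    candidates = []
    for _ in range(n_candidates):
        cand = [bilingual_dict[t] if t in bilingual_dict and rng.random() < p_replace else t
                for t in tokens]
        if cand != tokens and cand not in candidates:
            candidates.append(cand)
    return candidates

def select_adversarial(source, candidates, loss_fn):
    """Step 2: keep only candidates that increase the NLU model's loss."""
    base_loss = loss_fn(source)
    return [c for c in candidates if loss_fn(c) > base_loss]
```

In practice, \texttt{loss\_fn} would wrap a forward pass of the NLU model; here any callable scoring a token list suffices.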
\paragraph{Candidate generation}
{\bf Word-level adversaries} include...
{\bf Phrase-level adversaries} include...
\paragraph{Candidate selection}
We filter candidates according to the following rules...
\section{Experimental results}
\section{Discussion}
\section{Conclusion}
\bibliographystyle{acl_natbib}
\section{Introduction}
Training dialogue systems used by virtual assistants in task-oriented applications requires large annotated datasets. The core machine learning task in every dialogue system is {\it intent detection}, which aims to detect the user's intention. New intents emerge when new applications, supported by the dialogue systems, are launched. However, an extension to new intents may require annotating additional data, which may be time-consuming and costly. What is more, when developing a new dialogue system, one may face the cold start problem if little training data is available. Open sources provide general-domain annotated datasets, primarily collected via crowd-sourcing or released from commercial systems, such as the Snips NLU benchmark \cite{coucke2018snips}. However, it is usually problematic to gather more specific data from any source, including user logs, which are protected by privacy policies in real-life settings.
For all these reasons, we suggest a learnable approach to create training data for intent detection. We simulate a real-life situation in which no annotated data but rather only a short description of a new intent is available. To this end, we propose to use methods for zero-shot conditional text generation to generate plausible utterances from intent descriptions. The generated utterances should be in line with the intent's meaning.
Our contributions are:
\begin{enumerate}[topsep=0pt,itemsep=-1ex,partopsep=1ex,parsep=1ex]
\item We propose a zero-shot generation method to generate a task-oriented utterance from an intent description;
\item We evaluate the generated utterances and compare them to the original crowd-sourced datasets. The proposed zero-shot method achieves high scores in fluency and diversity as per our human evaluation;
\item We provide experimental evidence of a semantic shift when generating utterances for unseen classes using the zero-shot approach;
\item We apply reinforcement learning for the one-shot generation to eliminate the semantic shift problem. The one-shot approach retains semantic accuracy without sacrificing fluency and diversity.
\end{enumerate}
\section{Related work}
\paragraph{Conditional language modelling} generalizes the task of language modelling. Given some conditioning context $z$, it assigns probabilities to a sequence of tokens \cite{mikolov2012context}. Machine translation \cite{sutskever2014sequence,cho2014learning} and image captioning \cite{you2016image} are seen as typical conditional language modelling tasks. More sophisticated tasks include abstractive text summarization \cite{nallapati2017summarunner,narayan2019article} and simplification \cite{zhang2017sentence}, generating textual comments for source code \cite{richardson2017code2text}, and dialogue modelling \cite{lowe2017training}. Structured data may act as a conditioning context as well: knowledge base (KB) entries \cite{vougiouklis2018neural} or DBPedia triples \cite{colin2016webnlg} serve as conditions for generating plausible factual sentences. Neural models for conditional language modelling rely on encoder-decoder architectures and can be learned either jointly from scratch \cite{vaswani2017attention} or by fine-tuning pre-trained encoder and decoder models \cite{budzianowski2019hello,lewis2019bart}.
\paragraph{Zero-shot learning (ZSL)} has formed as a recognized training paradigm with neural models becoming more potent in the majority of downstream tasks. In the NLP domain, the ZSL scenario aims at assigning a label to a piece of text based on the label description. The learned classifier becomes able to assign class labels, which were unseen during the training time. The classification task is then reformulated in the form of question answering \cite{levy2017zero} or textual entailment \cite{yin2019benchmarking}.
Other techniques for ZSL leverage metric learning and make use of capsule networks \cite{du2019investigating} and prototyping networks \cite{yu2019episodebased}.
\paragraph{Zero-shot conditional text generation} implies that the model is trained in such a way that it can generalize to an unseen condition, for which only a description is provided. A few recent works in this direction showcase dialogue generation for unseen domains \cite{zhao2018zero} and question generation from KBs for unseen predicates and entity types \cite{elsahar2018zero}. CTRL \cite{keskar2019ctrl}, pre-trained on so-called control codes, which can be combined to govern style, content, and surface form, provides for zero-shot generation from unseen code combinations. PPLM \cite{dathathri2019plug} uses signals representing the class, e.g., bags-of-words, during inference, and can generate examples with given semantic attributes without pre-training.
\paragraph{Training data generation} can be treated as a form of data augmentation, a research direction increasingly in demand. It enlarges datasets for training neural models and helps avoid labor-intensive and costly manual annotation. Common techniques for textual data augmentation include back-translation \cite{sennrich2016improving}, sampling from latent distributions \cite{xia2021pseudo}, and simple heuristics, such as synonym replacement \cite{wei2019eda} and oversampling \cite{chawla2002smote}. Few-shot text generation has been applied to natural language generation from structured data, such as tables \cite{chen2020few}, and to intent detection data augmentation \cite{xia2021pseudo}. However, these methods are incompatible with ZSL, since they require at least a few labeled examples for the class being augmented.
An alternative approach suggests to use a model to generate data for the target class based on task-specific world knowledge \cite{chen2017automatically} and linguistic features \cite{iyyer2018adversarial}.
\paragraph{Deep reinforcement learning (RL)} methods prove to be effective in a variety of NLP tasks. Early works approach the tasks of machine translation \cite{grissom2014don}, image captioning \cite{rennie2017self}, and abstractive summarization \cite{paulus2017deep}, which are assessed with non-differentiable metrics. TextGAIL \cite{wu2020textgail} improves the generation quality of pre-trained transformer models by leveraging proximal policy optimization. Other applications of deep RL include dialogue modeling
\cite{li2016deep} and open-domain question answering \cite{wang2018r}.
\section{Methods}
Our main goal is to generate plausible and coherent utterances, which relate to unseen intents, leveraging the description of the intent only. These utterances should clearly express the desired intent. For example, if conditioned on the intent {\it ``delivery from the grocery store''} the model should generate an utterance close to {\it ``Hi! Please bring me milk and eggs from the nearest convenience store''} or similar.
Two scenarios can be used to achieve this goal. In the {\bf zero-shot scenario}, we train the model on a set of seen intents $\mathcal{S}$ to generate utterances. If the generation model generalizes well, the utterances generated for unseen intents $\mathcal{U}$ are diverse and fluent and retain intents' semantics. In the {\bf one-shot scenario}, we utilize one utterance per unseen intent $\mathcal{U}$ to train the generation model and learn the semantics of this particular intent.
\subsection{Zero-shot generation}
Our model, depicted in Figure~\ref{fig:gpttuning}, aims to generate plausible utterances conditioned on the intent description. We fine-tune the GPT-2 medium model \cite{radford2019language} on task-oriented utterances, collected from several NLU benchmarks (see Section~\ref{section:data} for more details on the dataset).
\begin{figure}[!htp]
\includegraphics[width=\linewidth]{pictures/tuned_gpt.pdf}
\caption{Training setup. The input is an intent description and an utterance, concatenated; the output is the utterance.}
\label{fig:gpttuning}
\end{figure}
Our approach to fine-tuning the GPT-2 model follows \cite{budzianowski2019hello}. Two pieces of information, the intent description and the utterance are concatenated to form the input. More precisely, the input has the following format: {\it [intent description]} {\it utterance}. During the training phase, the model is presented with the output obtained from the input by masking the intent description. The output has the following format: \texttt{<MASK>}, $\ldots$, \texttt{<MASK>} {\it utterance}. The full list of intents is provided in Table 4 in Appendix.
Such input allows the model to pay attention to intent tokens while generating. The standard language modeling objective, negative log-likelihood loss, is used to train the model:
\begin{align*}
\mathcal{L}\left(\theta\right)=-\sum_{i}\sum_{t=1}^{\left|\mathbf{x}^{(i)}\right|} \log p_{\theta}\left(x_{t}^{(i)} |\text{intent}, x_{<t}^{(i)}\right).
\end{align*}
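In implementation terms, a training pair for this setup can be sketched as follows. The sketch operates on plain token-id lists; using an ignore-index of $-100$ for the masked positions (so that they do not contribute to the loss, mirroring the \texttt{<MASK>} tokens above) is a common convention and an assumption here, not a detail fixed by our setup.

```python
def build_training_pair(intent_ids, utterance_ids, ignore_index=-100):
    """Concatenate [intent description] + utterance to form the input;
    mask the intent positions in the target so the negative log-likelihood
    is computed over the utterance tokens only."""
    input_ids = list(intent_ids) + list(utterance_ids)
    labels = [ignore_index] * len(intent_ids) + list(utterance_ids)
    return input_ids, labels
```

In a real pipeline, \texttt{intent\_ids} and \texttt{utterance\_ids} would come from the tokenizer of the fine-tuned model.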
We fine-tuned the model for one epoch to avoid overfitting; with longer training, the model tends to repeat redundant semantic constructions from the input utterances and develops a bias towards words from the training set. Training used a batch size of $32$, a learning rate of $5$e-$5$, and the Adam optimizer \cite{kingma2015adam} with default parameters.
\subsection{One-shot Generation}
{\bf Motivation.} The zero-shot approach to conditional generation may degrade or even fail if (i) the intent description is too short to properly reflect the semantics of the intent, or (ii) the intent description is ambiguous or contains ambiguous words. Produced utterances may distort the initial meaning of the intent or be meaningless altogether. The model may generate an utterance {\it ``Count the number of people in the United States''} for the intent ``calculator'', or {\it ``Add a book by Shakespeare to the calendar''} for a ``book reading'' service. Although such examples can be treated not as outliers but rather as real-life whimsical utterances, this is not the desired behavior for the generation model. We address this phenomenon as \emph{Semantic Shift} and provide experimental evidence of it in Section~\ref{sec:SemanticShift}.
Based on these observations, we hypothesize that the problem could be solved by providing a single training example to improve the model's generalization abilities. A single example can give the model a clue, through better world knowledge, about what the virtual assistant can do with books and which entities our calculator is designed to handle. For this purpose, we move from the zero-shot to the one-shot setting and propose a method for improving zero-shot generation by leveraging just one example.
Our approach is inspired by the recent TextGAIL \cite{wu2020textgail} approach. It addresses the problem of exposure bias in pre-trained language models and proposes a GAN-like style scheme for fine-tuning GPT-2 to produce appropriate story endings using a reinforcement algorithm. As a reward, TextGAIL uses a discriminator output trained to distinguish real samples from generated samples.
As we are limited in using learnable discriminators because of the lack of training data, we propose an objective function based on a similarity score. Our objective function produces utterances, which are close to the reference example. At the same time, it forces the model to generate more diverse and plausible utterances. Table 5 in Appendix provides reference examples used for the one-shot generation method.
\noindent{\bf Method.} After zero-shot fine-tuning, we perform a one-shot model update for each intent separately. We perform several steps of the Proximal Policy Optimization algorithm \cite{schulman2017proximal} with the objective function described further.
\noindent{\bf Reward.} Our reward function is based on BERTScore \cite{zhang2019bertscore}, which serves as the measure of contextual similarity between generated sentences and the reference example. BERTScore correlates better with human judgments than other existing metrics, used to control semantics of generated texts and detect paraphrases. Given a reference and a candidate sentence, we embed them using RoBERTa model \cite{liu2019roberta}. The BERTScore F1 calculated on top of these embeddings is used as a part of the final reward.
It is not enough to reward the model only for the similarity of the generated utterance to the reference one; if so, the model tends to repeat the reference example and receives the maximal reward. We therefore add the negative sum of frequencies of all $n$-grams in the utterance to the reward function, forcing the model to generate less frequent sequences.
Given an intent $I$ and a reference example $x_{\text{ref}}^I$, the reward for the sentence $x$ is calculated by the formula:
\begin{align*}
R_I(x) &= R_{sim}(x_{\text{ref}}^I, x) + R_{div}(x) && \\
R_{sim}(x_{\text{ref}}^I, x) &= \text{BERTScore}(x_{\text{ref}}^I, x)&& \\
R_{div}(x) &= \sum\limits_{s \in \text{n-grams}(x)}(-\nu_s)&&
\end{align*}
where $\nu_s$ is the $n$-gram frequency, calculated from all the generated utterances inside one batch.
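A sketch of this reward computation is given below. The similarity scores are taken as given, since computing BERTScore requires an external RoBERTa model; normalizing the in-batch $n$-gram counts to frequencies is our assumption about how $\nu_s$ is obtained.

```python
from collections import Counter

def ngrams(tokens, n=2):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def diversity_rewards(batch, n=2):
    """R_div(x): negative sum of in-batch n-gram frequencies nu_s over x."""
    counts = Counter(g for utt in batch for g in ngrams(utt, n))
    total = sum(counts.values())
    freq = {g: c / total for g, c in counts.items()}
    return [-sum(freq[g] for g in ngrams(utt, n)) for utt in batch]

def rewards(batch, sim_scores, n=2):
    """R_I(x) = R_sim(x_ref, x) + R_div(x), with R_sim precomputed."""
    return [s + d for s, d in zip(sim_scores, diversity_rewards(batch, n))]
```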
\noindent{\bf Objective function.} First, we plug this reward into the standard PPO objective function, obtaining the intent-specific term $L^{\text{policy}}_{I}(\theta)$. Following the TextGAIL approach, we add the $\mathbf{KL}$ divergence with the model without zero-shot fine-tuning to prevent forgetting the information from the pre-trained model. We also add an entropy regularizer, making the distribution smoother, which leads to more diverse and fluent sentences. According to our experiments, this term helps avoid similar prefixes across all generated sentences, since the $n$-gram reward alone does not cope with this issue. The final generator objective for maximization in the one-shot scenario for the intent $I$ can be written as follows:
\begin{align*}
L(I;\theta) = L^{\text{policy}}_I(\theta) + \hat{\mathbb{E}}_t\big[ &\, \beta\mathbf{H}(p_{\theta;I}(\cdot \mid s_t)) \\ &- \alpha\,\mathbf{KL}[p_{\theta;I}(\cdot \mid s_t),\, q(\cdot \mid s_t)]\big],
\end{align*}
\noindent where $s_t$ is the intent description, $p_{\theta;I}$ is the conditional distribution $p_{\theta}(\cdot \mid I)$ (the distribution derived from the model with updates from the PPO policy), and $q$ is the unconditional LM distribution computed by the GPT-2 language model without fine-tuning. The entropy and $\mathbf{KL}$ terms are calculated per token, while the $L^{\text{policy}}$ term is calculated for the whole sentence.
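The added per-token regularizers can be sketched over explicit probability vectors; the coefficient values below are hypothetical, and a real implementation would operate on model logits rather than Python lists:

```python
import math

def entropy(p):
    # H(p) = -sum_i p_i * log(p_i)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    # KL(p || q) = sum_i p_i * log(p_i / q_i)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def regularizer(p_cond, q_uncond, alpha=0.1, beta=0.01):
    # per-token term added to the PPO objective:
    # beta * H(p_theta;I) - alpha * KL(p_theta;I || q)
    # alpha and beta are illustrative, not the paper's tuned values
    return beta * entropy(p_cond) - alpha * kl(p_cond, q_uncond)
```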
\subsection{Decoding strategies}
Recent studies show that a properly chosen decoding strategy significantly improves consistency and diversity metrics and human scores of generated samples for multiple generation tasks, such as story generation \cite{holtzman2019curious}, open-domain dialogues, and image captioning \cite{Ippolito2019comparison}. However, to the best of our knowledge, no method has proved to be a one-size-fits-all solution. We experiment with several decoding strategies that improve diversity while preserving the desired meaning, and evaluate different decoding parameters experimentally.
\noindent {\bf Beam Search}, a standard decoding mechanism,
keeps the top $b$ partial hypotheses
at every time step and eventually chooses the hypothesis that has the overall highest probability.
\noindent {\bf Random Sampling (top-$k$)} \cite{fan2018hierarchical} samples at each time step from the top-$k$ most likely tokens in the distribution.
\noindent {\bf Nucleus Sampling (top-$p$)} \cite{holtzman2019curious} samples from the smallest set of most likely tokens whose cumulative probability reaches $p$.
\noindent {\bf Post Decoding Clustering} \cite{Ippolito2019comparison} (i) clusters generated samples using BERT-based similarity and (ii) selects samples with the highest probability from each cluster. It can be combined with any decoding strategy.
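As an illustration of the sampling strategies above, a minimal nucleus (top-$p$) filter over an explicit probability vector might look like this; real implementations operate on logits over the full vocabulary:

```python
def top_p_filter(probs, p=0.9):
    # keep the most likely tokens until their cumulative probability
    # first reaches p, then renormalize the kept mass (nucleus sampling)
    order = sorted(range(len(probs)), key=lambda i: -probs[i])
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= p:
            break
    z = sum(probs[i] for i in kept)
    return {i: probs[i] / z for i in kept}
```

Sampling then proceeds from the renormalized distribution; top-$k$ is the analogous filter with a fixed set size instead of a probability-mass threshold.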
\section{Performance evaluation}
We use several quality metrics to assess the generated data: (i) we use multiple fluency and diversity metrics, (ii) we account for the performance of the classifiers trained on the generated data.
\noindent{\bf Fluency.} We measure fluency by the number of spelling and grammar mistakes: an utterance is treated as fluent if it contains no misspellings and no grammar mistakes. We utilize LanguageTool \cite{milkowski2010developing}, a free and open-source grammar checker, to check spelling and correct grammar mistakes.
\noindent{\bf Diversity.} Following \cite{Ippolito2019comparison}, we consider two types of diversity metrics:
$Dist\mbox{-}k$ \cite{li2016diversity} is the total number of distinct $k$-grams divided by the total number of produced tokens in all of the utterances for an intent;
$Ent\mbox{-}k$ \cite{zhang2018generating} is an entropy of $k$-grams distribution. This metric takes into consideration that infrequent $k$-grams contribute more to
diversity than frequent ones.
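Both diversity metrics follow directly from their definitions; the whitespace tokenization below is an assumption for illustration:

```python
import math
from collections import Counter

def dist_k(utterances, k=4):
    # Dist-k: distinct k-grams divided by the total number of
    # produced tokens in all utterances for an intent
    grams, tokens = set(), 0
    for u in utterances:
        toks = u.split()
        tokens += len(toks)
        grams.update(tuple(toks[i:i + k]) for i in range(len(toks) - k + 1))
    return len(grams) / tokens if tokens else 0.0

def ent_k(utterances, k=4):
    # Ent-k: entropy of the k-gram frequency distribution, so that
    # infrequent k-grams contribute more to diversity than frequent ones
    counts = Counter(g for u in utterances
                     for g in (tuple(u.split()[i:i + k])
                               for i in range(len(u.split()) - k + 1)))
    total = sum(counts.values())
    return -sum(c / total * math.log(c / total) for c in counts.values())
```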
\noindent{\bf Accuracy.} After we obtain a large amount of generated data, we train a RoBERTa-based classifier \cite{liu2019roberta} to distinguish between different intents, based on the generated utterances. As usual, we split the generated data into two parts so that the first part is used for training, and the second part serves as the held-out validation set to compute the classification accuracy $acc_{clsf}$. High $acc_{clsf}$ values mean that the intents are well
distinguishable, and the utterances that belong to the same intent are semantically consistent.
\noindent{\bf Human evaluation.} We perform two crowd-sourcing studies to evaluate the semantic correctness and fluency of the generated utterances.
First, we asked crowd workers to evaluate semantic correctness. We gave crowd workers an utterance and asked them to assign one of the four provided intent descriptions; a correct option was among them (i.e., the one used to generate this very utterance). For the sake of completeness, we added a fifth option, ``none of above''. We assess the results of this study by two metrics, accuracy and $recall@4$. Accuracy $acc_{crowd}$ measures the number of correct answers, while $recall@4$ measures the number of answers which are different from the last ``none of above'' option.
Second, we asked crowd workers to evaluate the fluency of generated utterances. Crowd workers were provided with an utterance and were asked to score it on a Likert-type scale from 1 to 5, where (5) means that the utterance sounds natural, (3) means that the utterance contains some errors, (1) means that it is hard or even impossible to understand the utterance. We assess the results of this study by computing the average score.
\section{Zero-shot generation experiments}
\subsection{Data preparation}
\label{section:data}
{\bf Data for fine-tuning.} We combined two NLU datasets, namely The Schema-Guided Dialogue Dataset (SGD) \cite{rastogi2019towards} and Natural Language Understanding Benchmark (NLU-bench) \cite{coucke2018snips} for the fine-tuning stage.
Both datasets have a two-level hierarchical structure: they are organized according to services (in SGD) or scenarios (in NLU-Bench). Each service/scenario contains several intents, typically 2-5 intents per high-level class. For example, the service \emph{Buses\_1} is divided into two intents \emph{FindBus} and \emph{BuyBusTickets}.
SGD dataset consists of multi-turn task-oriented dialogues between user and system; each user utterance is labeled by service and intent. We adopted only those utterances from each dialog in which a new intent arose, which means the user clearly announced a new intention. This is a common technique to remove sentences that do not express any intents. As a result, we got three utterances per dialog on average.
As NLU-Bench consists of user utterances, each marked up with a scenario and an intent label, we used it without filtering. Summary statistics of the datasets used are provided in Table~\ref{tab:finetune_data}.
\begin{table}[h]
\centering
\begin{tabular}{p{2.59cm}p{1.2cm}p{1.2cm}p{1.2cm}}
\toprule
& SGD & NLU-bench & Total \\ \midrule
No. of utterances & 49986 & 25607& 75593 \\
No. of services & 32& 18& 50 \\
No. of intents & 67 & 68& 135 \\
Total tokens & $\sim$550k & $\sim$170k & $\sim$720k \\
Unique tokens & $\sim$10.8k & $\sim$8.3k & $\sim$17.4k \\ \bottomrule
\end{tabular}
\caption{The total number of utterances, intents, services and words across datasets and final statistics of our fine-tuning data.}
\label{tab:finetune_data}
\end{table}
\begin{table*}[!htp]
\centering
\begin{tabu} to \textwidth { l | c c c | c c c }
\toprule
\multicolumn{7}{c}{Zero-shot generation} \\\toprule
\multirow{2}{*}{Decoding strategy} & \multicolumn{3}{ c |}{Automated metrics} & \multicolumn{3}{ c }{Human evaluation} \\
\cline{2-7}
& $acc_{clsf}$ & $Dist\mbox{-}4$ & $Ent\mbox{-}4$ & $acc_{crowd}$ & $recall@4$ & Fluency score \\
\midrule
Random Sampling ($b=4$) & 0.82 & \bf{0.50} & \bf{6.20} & 0.63 & 0.87 & 4.77 \\
Nucleus Sampling ($p=0.6$) + PDC & 0.82 & 0.40 & 5.77 & 0.68 & 0.85 & \bf{4.95} \\
Beam Search ($b=3$) + PDC & 0.85 & 0.22 & 4.92 & 0.67 & 0.85 & 4.88 \\
Beam Search ($b=3$) & 0.88 & 0.15 & 4.76 & 0.60 & 0.80 & 4.76 \\
Nucleus Sampling ($p=0.4$) & 0.89 & 0.25 & 4.95 & 0.72 & 0.90 & 4.81 \\
\bottomrule
\multicolumn{7}{c}{One-shot generation} \\\toprule
Nucleus Sampling ($p=0.4$) & \bf{0.94} & 0.39 & 5.88 & \bf{0.78} & \bf{0.91} & 4.86 \\\bottomrule
\end{tabu}
\caption{Decoding strategies for zero-shot and one-shot generation. PDC stands for Post Decoding Clustering.}
\label{tab:genComparison}
\end{table*}
\begin{table*}
\centering
\begin{tabu}
to \textwidth { l | c c c c }
\toprule
& $acc_{crowd}$ & $recall@4$ & $Dist\mbox{-}4$ & $Ent\mbox{-}4$ \\\midrule
SGD+NLU-bench & 0.83 & 0.95 & 0.53 & 5.92 \\\bottomrule
\end{tabu}
\caption{Evaluation of the test dataset, created by merging and re-splitting two datasets under consideration.}
\label{tab:trueData}
\end{table*}
\noindent{\bf Intent set for generation.} For the evaluation of our generation methods, we created a set of 38 services and 105 intents\footnote{The full list of services and intents in both sets is presented in the Appendix.} covering the most common requirements of a typical user of a modern dialogue system. The set includes services dedicated to browsing the Internet, adjusting mobile device settings, searching for vehicles, and others.
To adopt a zero-shot setup, we split the data into train and test sets in the following way. Some of the services are unseen ($s \in \mathcal{U}$), i.e., present in the test set only, and no service in the train set is related to them. The rest of the services are seen ($s \in \mathcal{S}$), i.e., present in both the train and test sets, but with different intents placed in the train and test sets. For example, \emph{Flight} services are present in the train data while the \emph{Plane} service is used in the test set; from the \emph{Music} services, the intents \emph{Lookup song} and \emph{Play song} were used for training, and \emph{Create playlist} and \emph{Turn on music} for testing. To form the intent description for fine-tuning and generation, we join the service and intent labels.
\subsection{Evaluation}
We generated 100 examples per intent using different decoding strategies and their parameters. For closer examination, we picked the generation methods of different decoding strategies that achieved good scores ($acc_{clsf} >80\%$ and $Ent\text{-}4>4$). For these utterances, we performed a human evaluation of semantic correctness and diversity; Table~\ref{tab:genComparison} compares the decoding strategies according to various quality metrics. For a more detailed evaluation of decoding strategies, see Table 2 in the Appendix.
To compare the diversity of human-generated utterances to our generated utterances, we evaluate the fine-tuning dataset with $Ent\mbox{-}4$ and $Dist\mbox{-}4$ metrics. The semantics of generated data is assessed by $acc_{crowd}$ and $recall@4$. We present metrics for this dataset in Table~\ref{tab:trueData}.
\begin{table*}[!htp]
\centering
\begin{tabular}{p{4.8cm}p{4.8cm}p{5.4cm}}
\toprule
Beam Search (3) & Random Sampling (3) &
Nucleus Sampling (0.98)\\
$Ent\mbox{-}4 = 4.26$ &$Ent\mbox{-}4 = 5.93$ & $Ent\mbox{-}4 = 6.86$\\
\midrule
i need to know what's going on with my phone &i want to see my messages in the phone book & show me a message from jean lee for my favorite apple company \\
i want you show me the message from my phone & show me my most recent messages from my phone number & how can you tell me mike with the message \\
i want you show me my messages on my phone & show me the messages from the device i was using & could you check to see if my friends are in a group that is gossiping \\
i want you to show my messages on my smart phone & show me the message from my friend jane that i sent to her & list all messages in my bbq menu from ausy \\
i want to read a new message from my friend & can you please show me the messages from my phone & just turn on the smart mute this monday night \\ \bottomrule
\end{tabular}
\caption{Utterances, generated by different decoding strategies and the diversity scores of the decoding strategies.}
\label{tab:diversityExamples}
\end{table*}
\subsection{Analysis and model comparison}
\paragraph{Fluency.}
Spell checking results reveal the following issues in the generated utterances. The major issues are related to casing: an utterance may start in lower case, and the first-person singular pronoun ``I'' is frequently generated in lower case, too. Punctuation issues include missing quotes, question marks, periods, or repeated punctuation marks. Common mistakes are omitting the hyphen in the words {\it ``Wi-Fi''} and {\it ``e-mail''} and confusing definite and indefinite articles, as well as confusing {\it ``a''}/{\it ``an''}. These issues are more or less natural to humans and thus do not prevent further use of the generated utterances. The only unnatural issue found by LanguageTool is phrase repetition, which occurs in small numbers ($4$ errors of this type per $10000$ utterances). For examples of fluency issues in generated data, see Table 1 in the Appendix.
\paragraph{Diversity.} Table~\ref{tab:diversityExamples} shows examples of the phrases generated by means of different decoding strategies, conditioning on the intent \emph{Show message}, along with diversity metrics, $Dist$ and $Ent$. Higher $Ent$ and $Dist$ scores indeed correspond to a more diverse decoding strategy. At the same time, extremely high diversity may generate utterances unrelated to the intent, expressing non-clear meaning and lack of common sense.
\paragraph{Diversity / Accuracy trade-off.} Figure~\ref{fig:accuracyDiversity} shows the trade-off between the diversity ($Ent\mbox{-}4$) and the accuracy ($acc_{clsf}$) of the generated data.
\begin{figure}[!htp]
\includegraphics[width=\linewidth]{pictures/ent_acc.pdf}
\caption{The trade-off between diversity ($Ent\mbox{-}4$) and accuracy.}
\label{fig:accuracyDiversity}
\end{figure}
Every point corresponds to sentences generated using different zero-shot strategies. The human level stands for the diversity and accuracy metrics computed for the test set as is. The beam search scores are mainly in the top-left corner of the plane, leading to high accuracy and low diversity values. Top-$k$ Random Sampling strategy does not achieve the highest levels of accuracy. Nucleus Sampling can generate datasets with a large range of diversity and accuracy scores, depending on the chosen parameter. Post-decoding clustering increases diversity for low-diverse decoding strategies and decreases it for high-diverse ones, moving the generator closer to the human level.
\paragraph{Two ways to assess accuracy.} Table~\ref{tab:genComparison} shows that there is no clear correspondence between automated accuracy $acc_{clsf}$ and human accuracy $acc_{crowd}$. Therefore $acc_{clsf}$ cannot serve as the final measure for the semantic consistency of the generator.
The \emph{Semantic shift} problem cannot be captured by the automated accuracy $acc_{clsf}$: the model generates examples which are consistent inside each class, and classes are well-separated, but the generated examples do not correspond well to the intent descriptions.
\begin{table*}[!htp]
\centering
\begin{tabular}
{p{8cm}p{5cm}p{0.75cm}p{0.75cm}}
\toprule
Intent description and reference examples & Undesirable meaning & Zero-shot & One-shot \\\midrule
{\bf Intent description} Train Buy train ticket \newline {\bf Reference} Make a purchase of the train ticket, not bus. Buy a train ticket for a specific date to some location &
{\bf Meaning} Get bus ticket \newline
{\bf Example} I need a bus to go there. I need to leave on the 3rd of this month. & 97 & 23 \\\midrule
{\bf Intent description} Wallpapers Put default wallpaper \newline {\bf Reference} Change the background picture of the device display to the default one. Replace current background on the device with the default one &
{\bf Meaning} Put new wall cover in a house \newline
{\bf Example} I want to put the wallpaper for my bedroom on the wall. & 74 & 1 \\\midrule
{\bf Intent description} Calculator Find sum \newline {\bf Reference} Compute, calculate the sum of the given numbers. Open the calculator and compute the sum of the following numbers &
{\bf Meaning} Find some amount of money \newline {\bf Example} I need to find the average price of a house. & 57 & 0
\\\bottomrule
\end{tabular}
\centering
\caption{Evaluation of semantic shift reduction by one-shot generation. The first column contains intent description and reference utterances used for one-shot generation. The second column shows examples of typical undesirable meaning. The last two columns show the percentage of examples with given incorrect meaning among 100 generated utterances by zero-shot and one-shot generation. Nucleus sampling ($p=0.4$) is used for both methods.}
\label{tab:ZHvsOS}
\end{table*}
\subsection{Semantic shift problem}
\label{sec:SemanticShift}
The semantic consistency is crucial: how well do the generated utterances correspond to the intent description? In most cases, zero-shot generation is quite reliable: $acc_{crowd}>0.8$ for $57\%$ of intents, and $recall@4 > 0.9$ for $72\%$ of intents. However, for some intents the generated utterances are distinguishable from other classes but do not completely correspond to the intent description. Several generated utterances below illustrate this issue.
{\bf Intent:} {\it Buy train tickets} \\
\noindent {\bf Utterance:} I want to buy a bus ticket. I want to leave on the 12th of this month. \\
{\bf Intent:} {\it Put default wallpapers} \\
\noindent {\bf Utterance:} Put the default wallpaper for the bedroom. I want to see it on the wall.\\
{\bf Intent:} {\it Calculator Find sum }\\
\noindent {\bf Utterance:} I need to find a calculator. I need to know the value of one dollar.
The bias in the fine-tuning data causes this issue: for example, travel-related intents mainly correspond to bus travel, so the model confuses buses and trains. In other cases, the model misinterprets the intent description due to the lack of world knowledge. E.g., the generated phrases for {\it Wallpaper} may be related to wallpapers in a house; utterances for {\it Calculator} may be related to finding some numbers, like the average price of houses in the area.
\section{One-shot generation experiments}
Based on human evaluation of zero-shot generated data, we select Nucleus Sampling ($p=0.4$) as the best decoding strategy and apply it further in the one-shot scenario. Indeed, Table~\ref{tab:genComparison} confirms that the one-shot generation improves all evaluation metrics, both human and automated. The resulting one-shot utterances are more fluent than zero-shot utterances. The classifier trained on one-shot utterances has higher accuracy values when compared to the one trained on zero-shot utterances.
At the same time, one-shot generation restricts the semantics of the generated utterances and reduces the semantic shift. To illustrate how the problem of semantic shift diminishes, we study several cases where the zero-shot model tends to generate utterances with undesirable meaning (see Section~\ref{sec:SemanticShift}): {\bf bus} instead of {\bf train}; {\bf wallpaper} as a {\it wall cover} instead of a {\it background picture}; {\bf sum} as an {\it amount of money} instead of a {\it number}.
Table~\ref{tab:ZHvsOS} shows that after one-shot fine-tuning, the number of utterances with undesirable meaning becomes drastically lower; for more examples, see Table 3 in Appendix.
\section{Conclusion}
In this paper, we have introduced zero-shot and one-shot methods for generating utterances from intent descriptions. We ensure the high quality of the generated dataset by a range of different measures for diversity, fluency, and semantic correctness, including a crowd-sourcing study. We show that the one-shot generation outperforms the zero-shot one based on all metrics considered. Using only a single utterance for an unseen intent to fine-tune the model increases diversity and fluency. Moreover, fine-tuning on a single utterance diminishes the semantic shift problem and helps the model gain better world knowledge.
Virtual assistants in a real-life setup should be highly adaptive. In some tasks, we need much more data than is currently available: exploring model robustness to distribution change, finding the best architecture, dealing with a fast-growing set of intents (the number of intents could reach thousands). If the intents to support come from different providers, they exhibit diverse semantics, styles, and noise. Adaptation to different user groups and individual users, who have different intent usage distributions, is another crucial problem. We need large-scale and flexible datasets to approach these tasks, which can hardly be collected via crowd-sourcing from external sources.
Zero- or one-shot generation is an appealing technique. The model obtains the background knowledge about the world and the domain during pre-training. Next, only small amounts of data are needed to fine-tune the model. State-of-the-art pre-trained language models, fine-tuned in a zero- or one-shot fashion, generate fluent and diverse phrases close to real-life utterances. The meaning of the intent and essential details, such as book titles, movie genres, expression of speech acts, or emoticons, are preserved. What is more, manipulating a decoding strategy makes it possible to balance the generated utterances' diversity, semantic consistency, and correctness.
Our future work directions include assessing the downstream performance of proposed generation methods for an end-user application and evaluating slot-filling performance. The proposed approach can be tested to generate utterances specific to interest groups.
\section*{Acknowledgements}
Ekaterina Artemova is partially supported by the framework of the HSE University Basic Research Program.
\section{Introduction}
The substantial samples of $W$ and $Z$ bosons currently being collected
by the CDF and D\O\ experiments accommodate a wide variety of precision
electroweak measurements. The two general purpose experiments observe
$p\bar{p}$ collisions at a center-of-mass energy of 1.96 TeV generated
by the Fermilab Tevatron Collider. In its current operating mode, the
Tevatron operates as a $W$ and $Z$ boson factory. In a normal week of
operation the Tevatron produces roughly 50,000 $W$ boson and 5,000 $Z$
boson events in each lepton decay channel for each experiment. Currently,
each of the experiments has recorded approximately 1.5 fb$^{-1}$ of data,
which corresponds to about a quarter of the total expected Run~II luminosity.
$Z$ boson parameters have been measured to very high precision at the
large electron-positron collider (LEP) at CERN and the linear collider
at SLAC. For example, the $Z$ boson mass has been measured with an
accuracy of 2 parts in 10$^{5}$~\cite{mass}. However, current
measurements of the $W$ boson parameters are less precise (the present
uncertainty on the $W$ boson mass is about 4 parts in 10$^{4}$~\cite{
mass}). Based on expected Run II integrated luminosities, the
two Tevatron experiments will collect a sample of $W$ bosons events
on the same order as the 17 million $Z$ boson events collected by the
four LEP experiments. Using these event samples, CDF and D\O\ will
significantly reduce the current experimental uncertainties on the
electroweak parameters associated with the $W$ boson.
In addition, the large $W$ and $Z$ boson samples allow for precision
tests of the QCD production mechanisms for bosons. In particular, the
cross section for boson production depends on both the calculable hard
scattering parton cross sections and the Parton Distribution Functions
(PDFs), which describe the momentum fractions carried by the quarks
and gluons within the proton. The PDFs are determined experimentally,
and studies of boson production at the Tevatron can be used to place
constraints on these distributions. These constraints are important
because PDF uncertainties significantly impact the level of precision
of Tevatron measurements of electroweak parameters.
\section{Detectors}
The CDF and D\O\ detectors are designed to trigger on and accurately
reconstruct charged particles, electrons, photons, muons, hadronic
jets, and the transverse energy imbalance associated with neutrinos.
The $z$-axes of the CDF and D\O\ coordinate systems are defined to be
along the direction of the incoming protons. Particle trajectories
are described by $\theta$, the polar angle relative to the incoming
proton beam, and $\phi$, the azimuthal angle about the beam axis.
Pseudorapidity, $\eta = -\ln(\tan(\theta/2))$, is also used
to describe locations within the detectors.
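For illustration, the definition of pseudorapidity translates directly into code:

```python
import math

def pseudorapidity(theta):
    # eta = -ln(tan(theta / 2)) for polar angle theta in radians,
    # measured relative to the incoming proton beam
    return -math.log(math.tan(theta / 2.0))
```

A particle perpendicular to the beam ($\theta = \pi/2$) has $\eta = 0$, and $|\eta|$ grows rapidly as the trajectory approaches the beam axis.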
One particular strength of the CDF detector is its beam-constrained
central tracking resolution,
\begin{equation}
\delta(p_{T})/p_{T} \sim 0.0005 \times p_{T}~(\mathrm{GeV}/c)~[|\eta| < 1],
\end{equation}
based on hit information from the outer open-cell drift chamber. The
calorimeters of both detectors allow for high-resolution reconstruction
of the energies of electrons, photons, and jets. For example, the energy
resolution for clusters in the CDF central electromagnetic calorimeter is
\begin{equation}
\delta(E_{T})/E_{T} \sim 13.5\% \oplus 1.5\%~(\mathrm{GeV})~[|\eta| < 1.1],
\end{equation}
which allows for high precision electron energy measurements. A main
strength of the D\O\ detector is the forward coverage provided by its
calorimeters and muon detector systems. The D\O\ calorimeter provides
hermetic coverage up to $|\eta| < 4.2$ (compared to $|\eta| < 3.6$ for
CDF) and muon coverage up to $|\eta| < 2.0$ (compared to $|\eta| < 1.5$
for CDF). This additional forward coverage results in a significantly
better acceptance for leptons from boson decays, particularly for muons.
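For illustration, the quoted calorimeter resolution can be evaluated assuming the conventional parametrization in which a stochastic term scaling as $1/\sqrt{E_T}$ is combined in quadrature (the $\oplus$ symbol) with a constant term; the functional form is an assumption here, since the text quotes only the coefficients:

```python
import math

def em_resolution(et, stochastic=0.135, constant=0.015):
    # fractional resolution: (a / sqrt(ET)) (+) b, added in quadrature,
    # with a = 13.5% and b = 1.5% as quoted for the CDF central EM
    # calorimeter; ET in GeV (assumed parametrization)
    return math.hypot(stochastic / math.sqrt(et), constant)
```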
\begin{table}[t]
\begin{center}
\caption{Preliminary uncertainty estimates for CDF $W$
boson mass measurement using 200~pb$^{-1}$ of data.}
\begin{tabular}{|l|c|c|c|}
\hline \textbf{Uncertainty [MeV]} & \textbf{Electrons} &
\textbf{Muons} & \textbf{Common}
\\
\hline Lepton Energy Scale & 27 & 17 & 17 \\
and Resolution & & & \\
\hline Recoil Scale and & 14 & 12 & 12 \\
Resolution & & & \\
\hline Backgrounds & 7 & 9 & - \\
\hline Production and & 16 & 17 & 16 \\
Decay Model & & & \\
\hline Statistics & 48 & 53 & - \\
\hline Total & 60 & 60 & 26 \\
\hline
\end{tabular}
\label{tab:wmass}
\end{center}
\end{table}
\section{Measurements of Electroweak Parameters}
\subsection{$W$ Boson Mass Measurement}
A precision measurement of the $W$ boson mass is among the highest
priorities for the Tevatron experiments. Self-energy corrections
to the $W$ boson depend on the masses of the top quark ($\propto
M^{2}_{top}$) and the Higgs boson ($\propto \mathrm{ln} M_{H}$),
as well as potential contributions from non-Standard Model (SM)
physics. Because of these dependencies, the $W$ boson mass is a
critical input to SM fits that constrain the mass of an unobserved
Higgs boson or, subsequent to a potential Higgs discovery, test the
consistency of the SM.
The current level of uncertainty on top quark mass measurements
from the Tevatron experiments~\cite{topmass} is at the level
of 2.1~GeV/$c^{2}$ which corresponds to roughly a 1.2~$\!\%$
measurement of $M_{top}$. To obtain equivalent constraining
power on $M_{H}$, the $W$ boson mass would need to be measured
to about 0.015~$\!\%$ corresponding to a total uncertainty of
about 12~MeV/$c^{2}$. Due to the needed level of precision,
the $W$ boson mass measurement is extremely challenging.
In order to make a measurement substantially better than
0.1~$\!\%$, all aspects of $W$ boson production and
detection need to be understood at the 10~MeV level. In
particular, this precision must be achieved for $W$ boson
production and decay, lepton momentum/energy scales and
resolutions, and additional energies within the event
associated with hadronic recoil against the boson $p_{T}$
and underlying interactions of the remnant quarks and gluons.
Once this detailed event model has been constructed, the $W$
boson mass can be determined by generating events for many
different mass values and picking the set that provides
the best match with data, in particular by fitting to the
transverse mass, $M_{T} = \sqrt{2\,(E_{T}^{\ell} E_{T}^{\nu} -
E_{x}^{\ell} E_{x}^{\nu} - E_{y}^{\ell} E_{y}^{\nu})}$,
distribution for the $W \rightarrow \ell \nu$ candidate
events in data.
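As a sketch, the transverse mass used in such fits can be computed from the lepton and neutrino (missing-$E_T$) transverse energies; the azimuthal-angle form below is algebraically equivalent to the component form:

```python
import math

def transverse_mass(et_lep, phi_lep, et_nu, phi_nu):
    # M_T^2 = 2 * ET_lep * ET_nu * (1 - cos(dphi)), equivalent to
    # 2 * (ET_l ET_nu - Ex_l Ex_nu - Ey_l Ey_nu); energies in GeV
    dphi = phi_lep - phi_nu
    return math.sqrt(2.0 * et_lep * et_nu * (1.0 - math.cos(dphi)))
```

The distribution exhibits a Jacobian edge near the $W$ boson mass, which is why the peak region drives the mass fit while the high tail is sensitive to the width.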
The CDF experiment is close to completing a $W$ mass measurement
using 200~pb$^{-1}$ of data collected at the beginning of Run~II.
The expected uncertainties associated with this measurement are
shown in Table~\ref{tab:wmass}. The total uncertainty for the
combined measurement based on events collected in both the electron
and muon channels is expected to be 48~MeV/$c^{2}$ which would make
this measurement the single most precise to date. More importantly,
the largest component of the total uncertainty is statistical,
indicating that the result will be further improved simply by
incorporating more data. In fact, with the exception of the
uncertainty associated with the production and decay model,
each of the uncertainty categories improves with additional
statistics. Larger samples of $J/\Psi$, $\Upsilon$, and $Z$
boson events, for example, further improve the measurement of
the track momentum and calorimeter energy scales for leptons.
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{wmass.eps}
\caption{Projection for the expected precision of a single
experiment $W$ mass measurement as a function of integrated
luminosity based on Run~I Tevatron measurements.}
\label{fig:wmass}
\end{figure*}
\begin{figure}
\includegraphics[width=65mm]{higgscon.eps}
\caption{Projection for Tevatron constraints on $M_{H}$
based on the expected precision of combined top quark
and $W$ boson mass measurements assuming 8~fb$^{-1}$ of
data collected by each experiment.}
\label{fig:higgscon}
\end{figure}
Figure~\ref{fig:wmass} shows a projection for the expected
precision of the $W$ boson mass measurement as a function of
integrated luminosity for a single experiment based on Tevatron
Run~I measurements. The combined preliminary uncertainty for
the 200~pb$^{-1}$ CDF Run~II analysis lies significantly below
the expectation based on the Run~I results, indicating improved
understanding of the $W$ boson event characteristics. With
enough additional luminosity, the precision of the measurement
will become limited by the uncertainties associated with the
boson production and decay model (currently on the order of
20~MeV) which do not scale with statistics. Reducing these
uncertainties requires additional measurements that can
constrain components of the production model, such as PDFs
and the boson $p_{T}$ spectrum.
A projection for the potential Tevatron constraints on the
Higgs boson mass based on 8~fb$^{-1}$ of data delivered to
each experiment is shown in Figure~\ref{fig:higgscon}. At
the expected level of precision, significant constraints
will be placed on non-SM particles such as those predicted
by supersymmetry (SUSY).
\subsection{$W$ Boson Width Measurements}
The width of the $W$ boson is a less constraining observable
in global electroweak fits than the mass, but measuring
its value confirms a basic prediction of the SM and could
provide indications of new physics beyond the SM. The
Tevatron experiments make both direct and indirect $W$ boson
width measurements. The direct measurements have no built-in
SM assumptions and are therefore sensitive to potential
contributions from new physics such as a heavy $W^{\prime}$.
Indirect measurements are based on SM assumptions and provide
high precision results that can also be used to place
constraints on other SM parameters such as CKM matrix elements.
\begin{figure}
\includegraphics[width=65mm]{mcwidth.eps}
\caption{D\O\ simulation of the $M_{T}$ distribution for
$W \rightarrow e \nu$ events as a function of $W$ boson
width.}
\label{fig:mcwidth}
\end{figure}
Tevatron direct measurements of the $W$ boson width are
extracted from the shape of the high $M_{T}$ region in
$W \rightarrow \ell \nu$ events. The procedure is similar
to that used for measuring the $W$ mass. The $W$ boson
production and detector resolution effects that distort
the observed lineshape must be carefully modeled within
a fast event simulation. Using the tuned simulation,
event samples are generated based on a range of input
values for the $W$ boson width. The change in the shape
of the high $M_{T}$ tail as a function of the $W$ width
is illustrated in Figure~\ref{fig:mcwidth}. D\O\ has
made a preliminary direct measurement of the $W$ width
based on 177~pb$^{-1}$ of Run~II data. The measurement
uses the peak region in the $M_{T}$ distribution for $W
\rightarrow e \nu$ candidate events to normalize signal
and background contributions to the sample, and then
fits the shape in the tail region to determine the
most likely value for the $W$ width. The final result
for the $W$ width obtained from the fit shown in
Figure~\ref{fig:fitwidth} is
\begin{equation}\label{eq:dwidth}
\Gamma_{W} = 2.011 \pm 0.093 (\mathrm{stat}) \pm 0.107 (\mathrm{syst})~\mathrm{GeV} .
\end{equation}
Indirect determinations of the $W$ boson width are
obtained from a measured ratio of production cross
sections times branching fractions,
\begin{equation}
R = {\sigma_{W} \times \mathrm{Br} (W \rightarrow \ell \nu) \over
\sigma_{Z} \times \mathrm{Br} (Z \rightarrow \ell \ell)} .
\end{equation}
The value of $R$ can be measured very precisely since
many of the uncertainties associated with the individual
cross section measurements, in particular the significant
uncertainty on the measured luminosity, cancel in the
ratio. Within the context of the SM, this ratio can
also be expressed as
\begin{equation}
R = {\sigma(W) \over \sigma(Z)} \times {\Gamma(W \rightarrow \ell \nu)
\over \Gamma(W)} \times {\Gamma(Z) \over \Gamma(Z \rightarrow \ell \ell)} .
\end{equation}
Using this equation, a precise value for $\Gamma(W)$
can be extracted from $R$ using a next-to-next-to-leading
order (NNLO) theoretical prediction for $\sigma(W)/\sigma(Z)$,
precision LEP measurements of $\Gamma(Z \rightarrow \ell \ell)$
and $\Gamma(Z)$, and a SM calculation for $\Gamma(W \rightarrow
\ell \nu)$.
\begin{figure}
\includegraphics[width=65mm]{dwidth.eps}
\caption{D\O\ fit to the $M_{T}$ distribution of
$W \rightarrow e \nu$ events used to measure the
$W$ width.}
\label{fig:fitwidth}
\end{figure}
CDF has made an indirect measurement of the $W$ boson
width based on the first 72~pb$^{-1}$ of data collected
in Run~II. The ratio $R$ was measured independently in
the electron and muon channels, resulting in a combined
value of
\begin{equation}
R = 10.84 \pm 0.15 (\mathrm{stat}) \pm 0.14 (\mathrm{syst}),
\end{equation}
which has an overall relative precision of 1.9~$\!\%$.
Since the most significant contribution to the systematic
uncertainty on this measurement originates from the lepton
selection efficiency measurement made from the $Z \rightarrow
\ell \ell$ data samples, it is expected that a measurement
with a precision of better than 1~$\!\%$ will be possible
using additional data statistics.
The indirect value for the $W$ boson width extracted from
the measured value of $R$ is
\begin{equation}
\Gamma(W) = 2092 \pm 42~\mathrm{MeV},
\end{equation}
which is in good agreement with the SM prediction and
the previously described direct measurement of the $W$
boson width. A comparison of the measured indirect width
with previous results and the SM expectation is shown in
Figure~\ref{fig:inwidth}. Since in the SM the total $W$
boson width is a sum over partial widths for leptons and
quarks, which in the case of the quarks depend on certain
CKM matrix elements, the measured value of $\Gamma(W)$ can
also be used to indirectly measure the value of the least
constrained element, $V_{cs}$. Based on world-averaged
measurements of the other CKM matrix elements that
contribute to the partial widths, CDF obtains a value of
\begin{equation}
|V_{cs}| = 0.976 \pm 0.030.
\end{equation}
\begin{figure}
\includegraphics[width=65mm]{inwidth.eps}
\caption{Comparison of the CDF indirect width measurement
with previous results and the SM prediction.}
\label{fig:inwidth}
\end{figure}
\subsection{Quark Couplings}
\begin{figure}
\includegraphics[width=65mm]{afbpict2.eps}
\caption{Illustration of $\gamma^{\ast}/Z$ decay in the
parton-parton center of mass frame. Forward (backward)
events are defined as those with positive (negative)
$\cos(\theta^{\ast})$.}
\label{fig:afbpict}
\end{figure}
The Tevatron experiments can extract the axial and vector
neutral current light quark couplings from measurements of
the Drell-Yan forward-backward asymmetry. This asymmetry
is defined as
\begin{equation}
A_{FB} = {\sigma_{F} - \sigma_{B} \over \sigma_{F} + \sigma_{B}}
\end{equation}
where $\sigma_{F(B)}$ is defined as the cross section for
Drell-Yan events in which the positively charged lepton is
produced along (opposite) the proton's direction of motion
in the parton-parton center of mass frame. The decay of
the $\gamma^{\ast}/Z$ in this frame is illustrated in
Figure~\ref{fig:afbpict}. The sign of $\cos(\theta^{\ast})$
determines whether a given event is forward or backward
(forward if $\cos(\theta^{\ast}) > 0$).
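In a given invariant mass bin, the asymmetry reduces to counting forward and backward events. A minimal sketch, with hypothetical event counts and the standard binomial statistical uncertainty:

```python
import math

def afb(n_forward, n_backward):
    """Forward-backward asymmetry and its binomial statistical error."""
    n = n_forward + n_backward
    a = (n_forward - n_backward) / n
    err = math.sqrt((1.0 - a * a) / n)  # binomial error on the asymmetry
    return a, err

# Hypothetical counts in one invariant mass bin
a, err = afb(660, 340)
print(f"A_FB = {a:.3f} +/- {err:.3f}")
```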
CDF and D\O\ have both made preliminary measurements
of the forward-backward asymmetry for $\gamma^{\ast}/Z
\rightarrow e e$ events as a function of dielectron
invariant mass. The CDF result based on a 364~pb$^{-1}$
data sample is shown in Figure~\ref{fig:afbres1} and
the D\O\ result based on a 177~pb$^{-1}$ data sample is
shown in Figure~\ref{fig:afbres2}. As illustrated in
Figure~\ref{fig:afbcons}, the quark couplings to the $Z$
boson can be extracted from these measurements. Although
the coupling measurement is less precise than that of the
LEP experiments, it breaks a two-fold degeneracy in the
LEP results, providing an important confirmation of the
SM. The coupling values have also been determined from
analysis of deep inelastic scattering data at HERA~\cite{zeus}.
\begin{figure}[t]
\includegraphics[width=65mm]{afbres1.eps}
\caption{CDF measurement of the forward-backward
asymmetry in $\gamma^{\ast}/Z \rightarrow e e$
events as a function of the di-electron invariant
mass.}
\label{fig:afbres1}
\end{figure}
\begin{figure}[t]
\includegraphics[width=65mm]{afbres2.eps}
\caption{D\O\ measurement of the forward-backward
asymmetry in $\gamma^{\ast}/Z \rightarrow e e$
events as a function of the di-electron invariant
mass.}
\label{fig:afbres2}
\end{figure}
\begin{figure}
\includegraphics[width=65mm]{afbcons2.eps}
\caption{Comparison of the limits on the allowed
range of values for the up quark axial and vector
neutral current couplings obtained from Tevatron
(72~pb$^{-1}$), HERA, and LEP measurements.}
\label{fig:afbcons}
\end{figure}
More importantly, the Tevatron experiments measure
$A_{FB}$ over a wide range of invariant masses (both
below and above the $Z$-pole). The high mass region
is of particular interest since the effects of new
bosons interfering with the SM bosons could result
in measured values of $A_{FB}$ inconsistent with SM
expectations. The potential effect of a $Z^{\prime}$
on the predicted $A_{FB}$ in the high mass region
is shown in Figure~\ref{fig:afbres2}, along with
the measured D\O\ values. With additional data it
should be possible to distinguish between the new
physics and SM scenarios.
\subsection{Trilinear Gauge Couplings}
The analysis of diboson final states at the Tevatron
provides an opportunity for studying self-interactions
of the gauge bosons. These interactions are a direct
result of the electroweak SU(2) structure, and the SM
makes specific predictions on the expected production
cross sections for each diboson final state. Non-SM
particles that couple to the electroweak bosons can
modify the expected cross sections, particularly at high
$E_{T}$, and looking for potential indications of these
anomalous couplings provides a route to uncovering new
physics.
\begin{table}[t]
\begin{center}
\caption{Diboson final states available at the Tevatron
and the trilinear couplings involved in their production.
The couplings shown in parentheses are absent in the SM.}
\begin{tabular}{|l|c|}
\hline \textbf{Diboson Final State} & \textbf{Trilinear Couplings} \\
\hline $q\bar{q}^{\prime} \rightarrow W \rightarrow W\gamma$ & $WW\gamma$ only \\
\hline $q\bar{q}^{\prime} \rightarrow W \rightarrow WZ$ & $WWZ$ only \\
\hline $q\bar{q} \rightarrow \gamma^{\ast}/Z \rightarrow WW$ & $WW\gamma$ , $WWZ$ \\
\hline $q\bar{q} \rightarrow \gamma^{\ast}/Z \rightarrow Z\gamma$ & $(ZZ\gamma)$ , $(Z\gamma\gamma)$ \\
\hline $q\bar{q} \rightarrow \gamma^{\ast}/Z \rightarrow ZZ$ & $(ZZ\gamma)$ , $(ZZZ)$ \\
\hline
\end{tabular}
\label{tab:diboson}
\end{center}
\end{table}
Table~\ref{tab:diboson} gives a summary of the diboson
final states available at the Tevatron and the trilinear
gauge couplings that contribute to the production of each
state. The Tevatron experiments are sensitive to different
combinations of couplings than LEP and explore a higher
$\sqrt{s}$. The couplings in the table that are enclosed
within parentheses are absent in the SM. Due to the absence
of these couplings, the associated final states are ideal
channels for observing effects from new physics.
The CDF and D\O\ experiments have produced a wide variety
of new Run~II results based on the study of diboson final
states~\cite{cdfpub,d0pub}. A few of these are
highlighted in detail here. The cross section for $WW$
production, which involves both the $WW\gamma$ and $WWZ$
trilinear gauge couplings, has recently been measured by
CDF using an 825~pb$^{-1}$ data sample. The analysis
focuses on the dilepton final state produced when both
$W$ bosons decay into a lepton and neutrino. Events are
selected with two opposite-sign leptons (electrons or
muons) that satisfy the standard CDF selection criteria.
The missing $E_{T}$ in the event, expected from the
two neutrinos, is required to be above 25~GeV, greatly
reducing the main expected background contributions from
Drell-Yan, $W\gamma$, and $W$ plus jet production. Before
looking at the signal region, events in the low missing
$E_{T}$ region are utilized to cross-check the background
estimation. In the signal region, the final background
estimate is $38 \pm 5$ events on top of an expected $WW$
signal contribution of $52 \pm 4$ events. Based on 95
observed events, CDF measures a cross section of
\begin{equation}
\sigma(p\bar{p} \rightarrow WW) = 13.6 \pm 3.0 (\mathrm{stat+syst+lum})~\mathrm{pb},
\end{equation}
consistent with the next-to-leading order (NLO)
calculation~\cite{ditheory} of $12.4 \pm 0.8$~pb. The
final candidate events plotted as a function of event
missing $E_{T}$, along with the expected signal and
background contributions, are shown in Figure~\ref{fig:WWres}.
\begin{figure}
\includegraphics[width=65mm]{WWres.eps}
\caption{Comparison of missing $E_{T}$ distribution for
observed data events to the combined expectation from
signal and background in the CDF $WW$ analysis.}
\label{fig:WWres}
\end{figure}
Both CDF and D\O\ have also recently completed measurements
of $WZ$ production. Production of this final state is of
particular interest because the $WWZ$ coupling can be studied
independent of the $WW\gamma$ coupling, which also contributes
to $WW$ production. D\O\ has performed a search based on a
data sample corresponding to roughly 800~pb$^{-1}$ of
integrated luminosity. This analysis uses the trilepton
final state in which both bosons decay leptonically. A total
of three leptons (electrons or muons) passing the standard
D\O\ selection criteria are required. Of the three leptons,
two are required to be of the same flavor and form an
opposite-sign pair with an invariant mass consistent
with the $Z$ boson mass. The event missing $E_T$ is also
required to be greater than 20~GeV, consistent with that
from the neutrino produced in the decay of the $W$ boson.
Taking advantage of its wider acceptance for leptons,
D\O\ expects to see $7.5 \pm 1.2$ signal events on top
of a background of $3.6 \pm 0.2$ events, and observes 12
events in the data. Based on the calculated probability
for the background to fluctuate into the observed number
of events, D\O\ obtains 3.3~$\sigma$ evidence for $WZ$
production and measures
\begin{equation}
\sigma(p\bar{p} \rightarrow WZ) = 4.0^{+1.9}_{-1.5} (\mathrm{stat+syst+lum})~\mathrm{pb},
\end{equation}
consistent with the NLO calculation~\cite{ditheory} of
$3.68 \pm 0.25$~pb. Figure~\ref{fig:WZres1} shows the
transverse mass distribution for the neutrino (missing
$E_{T}$) and lepton coming from the $W$ boson decay
for the D\O\ candidate events compared to the combined
expectation from signal and background.
\begin{figure}
\includegraphics[width=65mm]{WZres1.eps}
\caption{Comparison of $M_{T}$ distribution determined from
the $W$ boson decay products for observed data events with
the combined expectation from signal and background in the
D\O\ $WZ$ analysis.}
\label{fig:WZres1}
\end{figure}
CDF completed a similar search using roughly the same
amount of data and observed only 2 events compared to
an expectation of $3.7 \pm 0.3$ signal and $0.9 \pm 0.3$
background events. The observation of two events was
found to be consistent with both the background-only
and background plus signal hypotheses. The smaller
number of expected events as compared with the D\O\
analysis is directly related to the reduced acceptance
for leptons in the CDF detector. In order to improve
the CDF analysis, new lepton categories were created
to take advantage of additional tracking and calorimeter
cluster information in the events to increase lepton
acceptance. In order to increase electron coverage
out to $|\eta| < 2.8$, a category for forward electron
candidates in the calorimeter with no track match was
added. Similarly, an increase in muon coverage out to
$|\eta| < 1.6$ was obtained using forward track candidates
fiducial to the calorimeter with energy deposits consistent
with the expectation from a minimum-ionizing particle. In
addition, tracks pointing at calorimeter cracks were
placed into a flavor-neutral category of leptons which
could be assigned as either electrons or muons. With the
additional lepton categories in place, CDF performed a new
search for $WZ$ production using 1.1~fb$^{-1}$ of data.
Including the improved lepton acceptance, CDF observes 16
events with signal and background expectations of $12.5
\pm 0.9$ events and $2.7 \pm 0.4$ events, respectively.
Based on the probability of the background fluctuating
into the observed signal, CDF obtains a 5.9~$\sigma$
observation of $WZ$ production and measures
\begin{equation}
\sigma(p\bar{p} \rightarrow WZ) = 5.0^{+1.8}_{-1.6} (\mathrm{stat+syst+lum})~\mathrm{pb},
\end{equation}
which is also consistent with the NLO calculation.
The final candidate events plotted as a function
of event missing $E_{T}$, along with the expected
signal and background contributions, are shown in
Figure~\ref{fig:WZres2}.
\begin{figure}
\includegraphics[width=65mm]{WZres2.eps}
\caption{Comparison of missing $E_{T}$ distribution for
observed data events with the combined expectation from
signal and background in the CDF $WZ$ analysis. The
arrow on the figure indicates the signal region for this
search (missing $E_{T} > 25$~GeV).}
\label{fig:WZres2}
\end{figure}
As mentioned previously, diboson production is sensitive
to new physics appearing in the trilinear gauge couplings.
Potential new physics contributions can be incorporated in
the Lagrangian using a standard methodology that introduces
two parameters, $\lambda$ and $\Delta\kappa$, which are zero
in the SM and non-zero in the case of additional new physics
contributions. Generally, the effect of anomalous couplings
on diboson production is a net increase in the cross section
at high $E_{T}$. Figure~\ref{fig:acpict} illustrates how the
shape of the diboson cross section as a function of $E_{T}$
varies for different values of $\lambda$ and $\Delta \kappa$.
The added terms in the Lagrangian violate unitarity unless
an upper limit ($\Lambda$) on the scale for the new physics
is imposed. A common approach is to use the parameterization
$\alpha(s) = \alpha_{0}/(1 + s/\Lambda^{2})^{2}$ which causes
the effect of the anomalous couplings to ``turn off'' as the
upper limit on the energy scale is approached.
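The suppression from this dipole form factor is easy to evaluate directly; for example, with $\Lambda = 2$~TeV an anomalous coupling is reduced to 64\% of its low-energy value at $\sqrt{s} = 1$~TeV:

```python
def form_factor(s, alpha0, lam):
    """Dipole form factor alpha(s) = alpha0 / (1 + s / lam^2)^2."""
    return alpha0 / (1.0 + s / lam ** 2) ** 2

# Suppression at sqrt(s) = 1 TeV for Lambda = 2 TeV (energies in GeV)
print(form_factor(1000.0 ** 2, 1.0, 2000.0))  # -> 0.64
```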
\begin{figure}
\includegraphics[width=65mm]{acpict.eps}
\caption{The predicted shape of a generic diboson cross
section as a function of $E_{T}$ for different values of
$\lambda$ and $\Delta \kappa$.}
\label{fig:acpict}
\end{figure}
\begin{figure}
\includegraphics[width=65mm]{acdata.eps}
\caption{Comparison of leading lepton $p_{T}$ distribution
for D\O\ $WW$ candidate events observed in the dilepton
final state with SM and non-SM expectations.}
\label{fig:acdata}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{bosonprd.eps}
\caption{Illustration of $W$ boson production at the Tevatron.
A $u$ quark in the proton annihilates with a $\bar{d}$ quark
in the anti-proton at a squared center of mass $s = Q^{2}$ to
produce a $W^{+}$. The energies of the $u$ and $\bar{d}$ are
$x_{p}E_{p}$ and $x_{\bar{p}}E_{\bar{p}}$, respectively.}
\label{fig:bosonprd}
\end{figure*}
D\O\ has performed a preliminary analysis to set anomalous
couplings limits based on a measurement of the $WW$ cross
section using dilepton final states. The analysis sets
limits on anomalous $WW\gamma$ and $WWZ$ trilinear gauge
couplings under the assumption that the two couplings
are equal and $\Lambda = 2$~TeV. Figure~\ref{fig:acdata}
shows the D\O\ data and both SM and non-SM expectations
plotted as a function of the $p_{T}$ of the highest $p_{T}$
lepton. Based on the observed agreement between data and
the SM prediction, D\O\ obtains the following limits:
\begin{equation}
-0.32 < \Delta \kappa < 0.45 , \quad -0.29 < \lambda < 0.45 .
\end{equation}
These preliminary limits can be improved significantly with
larger data samples and incorporating information from other
final states.
\section{Studies of Boson Production}
\subsection{Boson Production at the Tevatron}
A typical example of boson production at the Tevatron is
shown in Figure~\ref{fig:bosonprd}. At leading order (LO),
a quark and anti-quark pair annihilate to create a $W$ or
$Z$ boson, which subsequently decays into a quark or lepton
pair. The production cross section is calculated as a sum
of partial cross sections ($d\sigma_{q\bar{q}}$), convoluted
with the PDFs that describe the distributions of the proton
momentum fraction ($x_{p}$) carried by each of the constituent
quarks and gluons. The cross section can be written as
\begin{equation}
\sigma = \int \sum_{i,j}[f_{i}^{q}(x_{p})f_{j}^{\bar{q}}(x_{\bar{p}}) +
f_{i}^{\bar{q}}(x_{p})f_{j}^{q}(x_{\bar{p}})] \times d\sigma_{q\bar{q}}
dx_{p} dx_{\bar{p}}
\end{equation}
where $i$ and $j$ denote the different possible quark
flavor combinations. The longitudinal momentum of
the produced boson is directly related to the PDFs.
In particular, if one of the two annihilating quarks
carries a significantly larger fraction of proton
momentum, the boson will be produced with momentum
in the same direction as the incident proton.
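The structure of this convolution can be sketched numerically. The parton densities below are toy, hypothetical shapes (not CTEQ or MRST fits) and the partonic cross section is taken as a constant, so only the flavor-symmetrized PDF structure of the integrand is illustrated:

```python
def f_q(x):     # toy valence-like quark density (hypothetical shape)
    return x ** -0.5 * (1.0 - x) ** 3

def f_qbar(x):  # toy sea-like antiquark density (hypothetical shape)
    return 0.2 * (1.0 - x) ** 7 / x

def parton_luminosity(n=2000):
    """Midpoint-rule estimate of the double integral of
    f_q(x_p) f_qbar(x_pbar) + f_qbar(x_p) f_q(x_pbar).
    The integrand factorizes, so the double sum is a product of 1-D sums."""
    lo, hi = 1e-3, 1.0 - 1e-3
    dx = (hi - lo) / n
    xs = [lo + (i + 0.5) * dx for i in range(n)]
    s_q = sum(f_q(x) for x in xs) * dx
    s_qbar = sum(f_qbar(x) for x in xs) * dx
    return 2.0 * s_q * s_qbar

print(f"toy parton luminosity = {parton_luminosity():.3f}")
```

A realistic calculation would instead sum over the flavor pairs $i,j$ with fitted PDF sets and fold in the $x_{p}$-dependent partonic cross section.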
The effects of QCD and QED NLO corrections are also
important. QCD corrections give rise to final states
that contain multiple partons, sometimes with high $p_{T}$,
and modify the overall boson production kinematics, including
the boson $p_{T}$ spectrum. The most important effect
originating from NLO QED corrections is photon radiation
from final state charged leptons, which has a significant
effect on lepton identification and kinematics. QED
radiation from the initial state quarks and from the boson
itself (in the case of $W$ bosons) also contributes to the
overall event kinematics.
\subsection{Parton Distribution Functions}
The functional forms of the PDFs originate from
non-perturbative QCD interactions and are therefore
incalculable. Instead, they are parameterized using
data from deep inelastic scattering, fixed target,
and hadron collider experiments. Two standard
parameterizations come from the CTEQ~\cite{cteq} and
MRST~\cite{mrst} groups. In the case of the CTEQ
group, the parton momentum fraction distributions
are parameterized as
\begin{equation}
xf_{a}(x,Q_{0}) = A_{0}x^{A_{1}}(1-x)^{A_{2}}e^{A_{3}x}(1+A_{4}x)^{A_{5}}
\end{equation}
for five categories of quark/gluon proton constituents
(valence $u$ and $d$ quarks, sea $\bar{u}$ and $\bar{d}$
quark combinations, and gluons). This configuration
gives a total of thirty free parameters in the fit to
the experimental data, although the CTEQ group chooses
to leave ten of these at fixed values. The remaining free
parameters are determined for a low energy scale, $Q_{0} =
1.3$~GeV, and the $Q^{2}$ dependence is obtained from QCD
evolution equations.
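The parameterization above is straightforward to evaluate; the coefficients used here are hypothetical, chosen only to give a valence-like shape, and are not actual CTEQ fit values:

```python
import math

def xf(x, A):
    """CTEQ-style form: xf(x, Q0) = A0 x^A1 (1-x)^A2 exp(A3 x) (1 + A4 x)^A5."""
    A0, A1, A2, A3, A4, A5 = A
    return A0 * x ** A1 * (1.0 - x) ** A2 * math.exp(A3 * x) * (1.0 + A4 * x) ** A5

# Hypothetical valence-like coefficients, for illustration only
A_valence = (1.0, 0.5, 3.0, 0.0, 0.0, 0.0)
print(f"xf(0.1) = {xf(0.1, A_valence):.4f}")
```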
\begin{figure*}[t]
\centering
\includegraphics[width=135mm]{wpdf.eps}
\caption{An example of how CTEQ ``error'' PDF sets are
used to determine an overall PDF uncertainty. The shift
in the measured $W$ boson mass from its central value is
obtained for Monte Carlo templates generated with each
of the 40 error PDF sets. The observed shifts associated
with each of the twenty orthogonal eigenvectors are added
in quadrature to determine the total uncertainty.}
\label{fig:wpdf}
\end{figure*}
A recent development is that each group also provides a set
of ``error'' PDFs that are intended to map out the allowable
parameter space for the PDFs within the experimental data
uncertainties. The twenty free parameters used in the fit
are found to be correlated with one another. To facilitate
uncertainty calculations, these correlations are removed by
forming eigenvectors within the $A_{i}$-space. For each of
the twenty eigenvectors, two complete PDF sets are generated
corresponding to a given increase in $\chi^{2}$ of the overall
fit ($\Delta \chi^{2} = 100$ for the CTEQ group). The MRST
group follows a similar procedure using a slightly different
parameterization that results in only fifteen free parameters
for their fit. The MRST group also uses a smaller $\Delta
\chi^{2} = 50$ to construct its version of the error PDFs.
An example of how error PDFs are used to determine an
overall PDF uncertainty for a specific analysis is shown
in Figure~\ref{fig:wpdf} for the case of the $W$ boson mass
measurement. The shift in the measured mass from its central
value is obtained using Monte Carlo templates generated with
each of the forty error PDF sets. Since the twenty eigenvectors
are orthogonal to each other by design, the observed shifts
associated with each can be added in quadrature to determine a
total PDF model uncertainty. Although each eigenvector typically
contains information about multiple fit parameters, there is a
strong correlation in some cases between a given fit parameter
and an eigenvector. For example, the eigenvector corresponding
to error PDFs 1 and 2 in Figure~\ref{fig:wpdf} has a significant
correlation with the $A_{1}$ (low-$x_{p}$) parameter associated
with valence $u$ quarks. These correlations give an indication
of the experimental inputs to the fits which need to be improved
to reduce the overall PDF uncertainty for a specific analysis.
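A sketch of the quadrature combination, using one common convention in which each eigenvector contributes half the difference between its two error-set shifts (the exact prescription varies between analyses, and the shift values here are hypothetical):

```python
import math

def pdf_uncertainty(shifts):
    """Symmetric PDF uncertainty from per-eigenvector (plus, minus) shifts,
    e.g. shifts of the fitted W mass for each pair of error PDF sets."""
    return 0.5 * math.sqrt(sum((dp - dm) ** 2 for dp, dm in shifts))

# Hypothetical shifts (in MeV) for three of the twenty eigenvectors
print(pdf_uncertainty([(4.0, -4.0), (2.0, -2.0), (4.0, -4.0)]))  # -> 6.0
```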
\subsection{Inclusive Production Cross Sections}
Because many electroweak measurements at the Tevatron are
sensitive to uncertainties in the PDF model, both CDF and
D\O\ perform studies of boson production to constrain the
PDF model. The simplest of these studies are measurements
of the inclusive boson production cross sections. The
Tevatron experiments measure inclusive $W$ and $Z$ cross
sections using each of the lepton ($e$, $\mu$, and $\tau$)
decay channels. The dominant uncertainty in these results
is associated with the integrated luminosity measurements
made by each experiment ($\sim 6\%$). Within this uncertainty,
the measured cross sections are found to be in good agreement
with the NNLO theoretical calculations~\cite{inclusive}.
The agreement between the CDF and D\O\ measured values and
the theoretical predictions is shown in Figures~\ref{fig:wxsec}
and~\ref{fig:zxsec}. Since the theoretical uncertainties are
significantly smaller than the measurement uncertainties, no
additional constraints on the boson production model can be
obtained from these measurements.
\subsection{Forward $W$ Boson Cross Section}
Differential cross section measurements contain additional
information that can be used to constrain PDFs. CDF
performs a simple differential measurement by independently
evaluating the $W$ boson cross section using $W \rightarrow
e \nu$ events with electrons observed in the central and
forward regions of the detector. Figure~\ref{fig:bosrap}
shows the $W$ boson acceptance as a function of the boson
rapidity, defined as
\begin{equation}
y_{W} = {1 \over 2} \ln {E + p_{z} \over E - p_{z}},
\end{equation}
for the CDF $W \rightarrow e \nu$ cross section measurements
using events with electrons reconstructed in the central
and forward calorimeter modules. Since $W$ bosons produced
at different rapidities probe different regions of $x_{p}$,
the ratio of central to forward cross section measurements
can be a useful tool for placing constraints on PDFs.
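For reference, the rapidity follows directly from the boson energy and longitudinal momentum; a minimal sketch:

```python
import math

def rapidity(E, pz):
    """Boson rapidity y = (1/2) ln[(E + pz) / (E - pz)]."""
    return 0.5 * math.log((E + pz) / (E - pz))

print(rapidity(100.0, 0.0))   # boson at rest in z: y = 0
print(rapidity(100.0, 60.0))  # forward-going boson: y = ln(2) ~ 0.693
```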
\begin{figure}
\includegraphics[width=65mm]{wxsec.eps}
\caption{Summary of Tevatron inclusive $W$ boson cross section
measurements as a function of $E_{CM}$ compared to a NNLO
theoretical calculation (solid black line).}
\label{fig:wxsec}
\end{figure}
\begin{figure}
\includegraphics[width=65mm]{zxsec.eps}
\caption{Summary of Tevatron inclusive $Z$ boson cross section
measurements as a function of $E_{CM}$ compared to a NNLO
theoretical calculation (solid black line).}
\label{fig:zxsec}
\end{figure}
\begin{figure}
\includegraphics[width=65mm]{bosrap.eps}
\caption{$W$ boson acceptance as a function of rapidity for
the CDF $W \rightarrow e \nu$ cross section measurements
using events with reconstructed electrons in the central
and forward parts of the detector.}
\label{fig:bosrap}
\end{figure}
The selection of forward electron candidates is based
on electromagnetic clusters in the calorimeter matched
with tracks reconstructed primarily from silicon detector
hits~\cite{ionim}. Given the selection criteria, CDF
observes 48,165 candidate events in a 223~pb$^{-1}$ data
sample. The $M_{T}$ spectrum of the candidate events is
shown in Figure~\ref{fig:fxsec}, along with the combined
expectation for signal and background. The observed
agreement indicates a good understanding of the forward
detector systems.
\begin{figure}
\includegraphics[width=65mm]{fxsec.eps}
\caption{$M_{T}$ distribution for candidate events in the
CDF cross section measurement based on $W \rightarrow e
\nu$ events with electrons in the forward detector region.}
\label{fig:fxsec}
\end{figure}
The measured forward ($1.2~<~|\eta^{det}_{e}|~<~2.8$) cross
section is
\begin{equation}
\sigma^{for} = 2796 \pm 13 (\mathrm{stat}) ^{+95}_{-90} (\mathrm{syst})~\mathrm{pb},
\end{equation}
neglecting the luminosity uncertainty, which cancels in
the cross section ratio. The previously measured central
($|\eta^{det}_{e}|~<~0.9$) cross section~\cite{ourprl} has
a value of
\begin{equation}
\sigma^{cen} = 2771 \pm 14 (\mathrm{stat}) ^{+62}_{-56} (\mathrm{syst})~\mathrm{pb},
\end{equation}
also neglecting the luminosity uncertainty. The remaining
systematic uncertainties on the measurements are dominated
by those associated with electron identification and the
PDF model. In order to separate these, CDF uses visible
cross sections, defined as
\begin{equation}
\sigma_{vis} = \sigma_{tot} \times A ,
\end{equation}
where $A$ is the kinematic and geometric acceptance; $A^{cen}$,
for example, is the acceptance for $W \rightarrow e \nu$ events
in the central cross section measurement. With this definition
the PDF
model uncertainties are removed from the measured ratio of
cross sections,
\begin{equation}
R_{exp} = \sigma^{cen}_{vis} / \sigma^{for}_{vis} = 0.925 \pm 0.033 .
\end{equation}
CDF then compares the measured ratio with the equivalent
theoretical ratio of acceptances
\begin{equation}
R_{th} = A^{cen} / A^{for}
\end{equation}
determined from simulated event samples generated using both
the CTEQ ($R_{th} = 0.924 \pm 0.037$) and MRST ($R_{th} =
0.941 \pm 0.012$) PDF distributions. The uncertainties on
the acceptance ratios are obtained from the error PDF sets
using the previously described method. The uncertainty
on the measured ratio is of the same order as the PDF
uncertainties on the theoretical ratio, suggesting that a
similar measurement with additional statistics would help
to constrain the PDF models.
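The uncertainty quoted on $R_{exp}$ follows from standard error propagation for a ratio. A sketch with hypothetical visible cross sections treated as fully uncorrelated (in the actual analysis, common identification uncertainties partially cancel between the two channels):

```python
import math

def ratio_with_error(a, da, b, db):
    """Ratio a/b with uncorrelated relative uncertainties added in quadrature."""
    r = a / b
    return r, r * math.sqrt((da / a) ** 2 + (db / b) ** 2)

# Hypothetical visible cross sections (pb) and uncertainties
r, dr = ratio_with_error(2600.0, 65.0, 2810.0, 70.0)
print(f"R = {r:.3f} +/- {dr:.3f}")
```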
\subsection{Differential $Z$ Boson Cross Section}
Measuring the differential boson production cross section
over the full rapidity range can further improve PDF model
constraints. The dilepton decay modes of the $Z$ boson
allow for precise measurements, since the backgrounds in
these final states are small and the full event kinematics
can be precisely reconstructed. The rapidity of the $Z$
boson is closely related to the proton momentum fractions
carried by the two colliding quarks. As shown in
Figure~\ref{fig:yrap}, $W$ or $Z$ bosons are produced at
high rapidity when the proton momentum fraction of one
quark is significantly larger than that of the other.
Therefore, the measured differential cross section at high
rapidity is a good probe of the PDF distributions at high
$x_{p}$.
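At leading order, with the boson $p_{T}$ neglected, the two momentum fractions are fixed by the boson mass and rapidity through $x_{p,\bar{p}} = (M/\sqrt{s})\,e^{\pm y}$. A sketch for $Z$ production at the Run~II center of mass energy:

```python
import math

def momentum_fractions(y, M=91.19, sqrt_s=1960.0):
    """LO momentum fractions x_p, x_pbar for a boson of mass M (GeV)
    produced at rapidity y, neglecting the boson pT."""
    tau = M / sqrt_s
    return tau * math.exp(y), tau * math.exp(-y)

x_p, x_pbar = momentum_fractions(2.0)  # a Z boson at y = 2
print(f"x_p = {x_p:.3f}, x_pbar = {x_pbar:.4f}")
```

A $Z$ at $y = 2$ already probes $x_{p} \sim 0.34$ against $x_{\bar{p}} \sim 0.006$, which is why the high-rapidity bins constrain the high-$x_{p}$ PDFs.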
\begin{figure}
\includegraphics[width=65mm]{yrap.eps}
\caption{The interacting partons' momentum fractions
required to produce a $W$ boson ($Q = 80$~GeV). The
larger the difference between $x_{p}$ and $x_{\bar{p}}$,
the greater the rapidity of the produced boson.}
\label{fig:yrap}
\end{figure}
D\O\ has made a preliminary measurement of the differential
$Z$ boson cross section based on a 337~pb$^{-1}$ data sample.
Using $Z \rightarrow e e$ candidate events, D\O\ reconstructs
the differential cross section shown in Figure~\ref{fig:yres}.
The measured cross section is observed to agree well with the
NNLO prediction. The measurement is currently statistics-limited
but can be used to constrain PDF models using additional data.
\begin{figure}
\includegraphics[width=65mm]{yres.eps}
\caption{Differential $Z$ boson cross section measured by
D\O\ as a function of boson rapidity. The measured cross
section is in good agreement with a NNLO theoretical
prediction based on MRST PDFs (solid line).}
\label{fig:yres}
\end{figure}
\subsection{$W$ Boson Charge Asymmetry}
A final measurement useful for constraining PDFs is the
$W$ boson charge asymmetry measurement. On average the
$u$ quarks inside the proton carry a higher fraction
of the proton's momentum than the $d$ quarks. Due to
this imbalance, $W^{+}$ ($W^{-}$) bosons produced at the
Tevatron have a net positive (negative) rapidity, as
shown in Figure~\ref{fig:wasym}. The V-A structure of the
electroweak couplings dictates the angular distribution
of the leptons in $W$ boson decays, which preferentially
opposes the production asymmetry. As
shown in Figure~\ref{fig:wasym}, the net effect of the
decay asymmetry is to partially reduce the observable
production asymmetry extracted from the lepton rapidity
distributions. Because the production asymmetry originates
from the imbalance of the momentum fractions carried by
$u$ and $d$ quarks within the proton, charge asymmetry
measurements provide constraints on the $d/u$ ratio in
the proton as a function of $x_{p}$.
Measurements are typically performed using the charged
leptons from the $W$ boson decays. The lepton asymmetry
is defined as
\begin{equation}
A(\eta_{\ell}) = {d\sigma_{+}/d\eta_{\ell} - d\sigma_{-}/d\eta_{\ell} \over
d\sigma_{+}/d\eta_{\ell} + d\sigma_{-}/d\eta_{\ell}} = A(y_{W}) \otimes (V-A) .
\end{equation}
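The binned asymmetry is a simple ratio of the charge-separated differential cross sections; a minimal sketch with hypothetical values for $d\sigma_{\pm}/d\eta_{\ell}$ in three pseudorapidity bins:

```python
def lepton_asymmetry(dsig_plus, dsig_minus):
    """A(eta) = (dsig+ - dsig-) / (dsig+ + dsig-), computed per eta bin."""
    return [(p - m) / (p + m) for p, m in zip(dsig_plus, dsig_minus)]

# Hypothetical differential cross sections (pb per unit eta)
print(lepton_asymmetry([100.0, 120.0, 90.0], [100.0, 80.0, 30.0]))  # -> [0.0, 0.2, 0.5]
```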
Both CDF and D\O\ have performed preliminary measurements
of the lepton charge asymmetry. The key experimental
issues are understanding forward lepton identification
and charge misidentification rates, which are needed to
correct the observed asymmetry. A D\O\ measurement of
the lepton charge asymmetry using $W \rightarrow \mu
\nu$ events selected from a 230~pb$^{-1}$ data sample is
shown in Figure~\ref{fig:d0asym}. The measured charge
misidentification rates for this analysis are found to
be below $10^{-4}$ out to muon pseudorapidities of $2$.
The measured asymmetry is compared to a theoretical
prediction based on the CTEQ PDF model. The measurement
is observed to have some sensitivity to PDFs even at
the current level of statistical sensitivity. The CDF
measurement, based on $W \rightarrow e \nu$ events
selected from a 170~pb$^{-1}$ data sample, is shown in
Figure~\ref{fig:cdfasym}. Here the data are separated
into two categories based on the $E_{T}$ of the electron.
Comparisons with theoretical predictions using the CTEQ
PDF model illustrate the increased sensitivity of the high
$E_{T}$ events to PDF variations.
\begin{figure}
\includegraphics[width=65mm]{wasym.eps}
\caption{Rapidity distributions of positively and
negatively charged $W$ bosons produced at the
Tevatron, and the pseudorapidity distributions
of the positively and negatively charged leptons
produced in their decays.}
\label{fig:wasym}
\end{figure}
\begin{figure}
\includegraphics[width=65mm]{d0asym.eps}
\caption{D\O\ lepton charge asymmetry measurement based
on $W \rightarrow \mu \nu$ events. The measurement is
compared to a theoretical calculation based on the
CTEQ and MRST PDF models.}
\label{fig:d0asym}
\end{figure}
\begin{figure}
\includegraphics[width=65mm]{cdfasym.eps}
\caption{CDF lepton charge asymmetry measurement based
on $W \rightarrow e \nu$ events. The measurement is
compared to a theoretical calculation based on the
CTEQ PDF model.}
\label{fig:cdfasym}
\end{figure}
A new generation of Tevatron charge asymmetry analyses
is currently under development, with the goal of fully
exploiting the kinematic information in $W$ events to
directly reconstruct the underlying $W$ boson production
asymmetry. Applying a $W$ mass constraint leads to two
kinematic solutions that can be weighted by taking into
account information about the production and decay of the
$W$ bosons. Potential dependencies on the input model are
resolved through an iterative procedure. Preliminary CDF
studies of this approach indicate significantly increased
sensitivity to PDFs. The potential increase in sensitivity
is illustrated in Figure~\ref{fig:newasym}, which shows
a comparison of hypothetical lepton charge asymmetry and
direct $W$ boson charge asymmetry measurements based on a
common set of candidate events obtained from a 400~pb$^{-1}$
dataset.
\begin{figure}
\includegraphics[width=65mm]{newasym.eps}
\caption{Comparison of the potential PDF sensitivities for
lepton charge asymmetry and $W$ boson production asymmetry
measurements made with a common set of simulated candidate
events corresponding to a luminosity of 400~pb$^{-1}$.}
\label{fig:newasym}
\end{figure}
\section{Conclusions}
The large samples of $W$ and $Z$ bosons being collected
at the Tevatron enable a wide variety of electroweak
measurements. In particular, the properties of the $W$
boson can be measured with very high precision by the CDF
and D\O\ experiments. In addition, detailed studies of
boson production at the Tevatron can be used to constrain
PDF models and provide important information about the
boson production mechanisms. The analyses reported here
are based on only a small fraction of the expected data,
so there is significant room for improving the precision
of the current measurements. It is important to note that
obtaining similar precision results from the Large Hadron
Collider (LHC) will be challenging and will certainly
require input (such as PDF constraints) from the Tevatron
experiments.
\bigskip
\section{Introduction}\label{sec:intro}
Selection of significant features with control over the false discovery rate (FDR) is one of the most important problems in the application of machine learning methods \citep{jovic2015review, rietschel2018feature} to problems such as untargeted metabolomics \citep{heinemann2019machine} and genome-wide association studies (GWAS) \citep{frommlet2012modified}. Due to the large number of features involved, it is of utmost importance to provide provable guarantees when selecting the true underlying features that can explain certain phenotypic conditions.
Classical methods of FDR control depend on the assumptions on how the features and the responses are related \citep{benjamini1995controlling, gavrilov2009adaptive}. Barber and Candes in their seminal paper \citep{candes2016panning}, proposed a novel FDR control approach, called the Model-X knockoff that can be used as a statistical wrapper around any machine learning method that can select features. Model-X knockoff does not rely on the nature of the relationship between the features and responses and therefore is \textit{model-free}. In order to control the FDR, the Model-X framework generates a synthetic set of features called knockoffs, which mimic the original features but are conditionally independent of the responses given the original features.
\paragraph{Related works.} Existing methods of knockoff generation either (i) assume the distribution of the features, or (ii) incorporate a generative model to learn the feature distribution from data. The second-order knockoff \citep{candes2016panning} assumes that the distribution of the features is jointly Gaussian. Another knockoff generation approach, based on the Hidden Markov Model (HMM) \citep{sesia2019gene}, characterizes the feature distribution with the help of a Markov chain. The conditional independence knockoff (CIK) \citep{liu2019power} can produce valid knockoffs only if the Gaussian graphical model associated with the features is a tree. A more flexible approach that samples knockoffs from a Bayesian network is discussed in \citep{gimenez2019knockoffs}, where the covariates are modeled as the observed variables of the network.
Methods such as KnockoffGAN \citep{salimans2016improved}, Deep Knockoffs \citep{romano2020deep}, and Auto-Encoding Knockoffs \citep{liu2018auto} focus on learning a deep generative model to produce valid knockoffs. To create valid knockoffs, KnockoffGAN employs a generative adversarial network (GAN) that requires simultaneous training of four different but interconnected neural networks and is therefore computationally expensive. Auto-Encoding Knockoffs uses latent variables to reconstruct the covariate distribution using a variational autoencoder \citep{kingma2019introduction}. The performance of this method depends on the dimension of the latent space: a higher-dimensional latent space can improve the fit, but power may diminish if the covariates violate the low-dimensional approximation. Deep Knockoffs uses the celebrated two-sample goodness-of-fit statistic called maximum mean discrepancy (MMD) \citep{gretton2012kernel} as the loss function in a generative model to produce knockoffs. Though training a generative model with MMD is comparatively inexpensive \citep{li2015generative}, in high dimensions MMD has reduced power \citep{ramdas2015decreasing}. Another method, called DDLK \citep{sudarshan2020deep}, generates knockoffs by first maximizing the likelihood of the features and then minimizing the KL (Kullback--Leibler) divergence between the joint distribution of the features and the knockoffs and that of any possible swap between them. KL divergence cannot provide a useful gradient when the supports of the two distributions are disjoint \citep{binkowski2018demystifying}, which is a long-recognized problem for KL divergence-based discrepancy measures.
\paragraph{Summary of main contributions:}
We introduce a new statistic called soft Rank Energy {\small$(\text {sRE})$}, which is heavily inspired by the recent development in \textit{multivariate distribution-free goodness-of-fit tests} based on Rank Energy {\small$(\text {RE})$} \citep{deb2019multivariate}, that in turn is based on the fundamental results in optimal transportation theory \citep{mccann1995existence}. {\small$\text {sRE}$} is the extension of {\small $\text {RE}$} that is obtained by entropic regularization of the optimal transport problem.
Similar to \citep{cuturi2019differentiable}, this makes {\small $\text {sRE}$}, when used as a loss function, a differentiable function of the generative model parameters. We highlight the properties of {\small$\text{sRE}$} that make it a desirable candidate for measuring two-sample goodness-of-fit. We also introduce the kernel variant of {\small $\text{sRE}$}, called soft Rank Maximum Mean Discrepancy {\small $(\text{sRMMD})$}. We inspect the behaviour of {\small$\text {sRE}$} and {\small$\text {sRMMD}$} w.r.t. the sample size, the dimension, and the regularization parameter. We show that, with an appropriate entropy regularizer and sample size, {\small$\text {sRMMD}$}-based generative models do not suffer from mode collapse. We use {\small$\text{sRMMD}$} as a loss function in a generative model to produce valid knockoffs. We demonstrate that knockoffs generated by our proposed method keep the FDR under control, with comparable or increased detection power relative to existing baselines in the cases considered.
\paragraph{Notations:} We use bold-math capital letters $\bm X$ for multivariate random variables, bold-face capital letters $\mathbf X$ for matrices and maps, and lower-case bold-math letters $\bm x$ for vectors. We denote by $\mathcal P(\mathbb R^d)$ the set of absolutely continuous measures on $\mathbb R^d$. $\overset{d}{=}$ refers to equality in distribution. $\mathcal N(\mu_d, \mathbb I_d)$ denotes the Gaussian distribution with mean $\mu_d$ and covariance $\mathbb I_d$, where $\mathbb I_d$ is the $d$-dimensional identity matrix. The rest of the notation is standard and should be clear from the context.
\section{Overview of the knockoff filter}
Let $\bm X = (X_1, \dots, X_d)\in\mathbb R^d$ and $y\in \mathbb R$ denote the multivariate random variable and the response variable, respectively. Assume that the distribution of $\bm X$, denoted $\mathsf F_{\bm X}$, is known, whereas nothing is assumed about the conditional distribution $\mathsf F_{y|\bm X}$. A variable $X_j$ is said to be ``null'' (or unimportant) \textit{if and only if} $y$ is conditionally independent of $X_j$ given the other variables, i.e., $y \perp \!\!\! \perp X_j| X_{-j}$, where $X_{-j} = \{X_1, \dots, X_d\}\setminus X_j$; otherwise, it is considered a relevant (important) variable. Under these assumptions, the primary goal of the Model-X knockoff filter \citep{candes2016panning} is to discover \textit{as many relevant variables} as possible while keeping the FDR under control. The FDR is defined as follows,
{\small
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{3pt}
\begin{equation*}
\text {FDR} = \mathbb E\left[ \frac{|\hat{\mathcal S} \cap \mathcal H_0|} {\text{max}(1, \;\; |\hat{\mathcal S}|)} \right],
\end{equation*}
}%
where $\mathcal H_0$ denotes the true set of null or unimportant variables and $\hat {\mathcal S}\subseteq\{1, 2, \dots, d\}$ is a subset of variables that is selected by any variable selection method.
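As a minimal numerical sketch (our illustration; the function name and index sets are hypothetical), the quantity inside the expectation above, the false discovery proportion, can be computed for one realization as:

```python
def false_discovery_proportion(selected, nulls):
    """FDP = |S ∩ H0| / max(1, |S|); the FDR is its expectation over repetitions."""
    selected, nulls = set(selected), set(nulls)
    return len(selected & nulls) / max(1, len(selected))

# Hypothetical selection: indices 4 and 9 are null but were selected.
fdp = false_discovery_proportion(selected={1, 4, 7, 9}, nulls={4, 9, 10})
```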
To control the FDR, Model-X knockoff \citep{candes2016panning} generates a knockoff $\tilde{\bm X}$, a random vector that satisfies the following properties:
\begin{itemize}
\setlength \itemsep{0pt}
\item[(a)] \textbf{Exchangeability:} $( \bm X, \tilde{\bm X})_{\text{swap}(B)} \overset{d}{=} ( \bm X, \tilde {\bm X})$,
\item[(b)] \textbf{Conditional independence:} $ y \perp \!\!\! \perp \tilde{\bm X} \;|\; \bm X$.
\end{itemize}
The exchangeability property ensures that for any subset {\small $B\subset\{1, 2, \dots, d\}$}, the joint distribution remains unchanged when the variables and their corresponding knockoffs exchange positions. That is, for a random vector {\small $\bm X = (X_1, X_2, X_3)$} and a set {\small $B=\{2, 3\}$}, the exchangeability condition requires {\small $(X_1, \tilde{X}_2, \tilde{X}_3, \tilde X_1, X_2, X_3)\overset{d}{=}(X_1, X_2, X_3, \tilde X_1, \tilde X_2, \tilde X_3)$}. The second property ensures that knockoffs are generated without knowledge of the response variable.
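The swap operation itself is a simple column exchange; a minimal sketch (ours, assuming NumPy, with hypothetical toy values) mirroring the three-variable example above:

```python
import numpy as np

def swap(X, Xk, B):
    """Exchange columns j in B between the feature matrix X and its knockoffs Xk."""
    Xs, Xks = X.copy(), Xk.copy()
    # Fancy indexing on the right-hand side returns copies, so no aliasing occurs.
    Xs[:, B], Xks[:, B] = Xk[:, B], X[:, B]
    return Xs, Xks

X = np.array([[1., 2., 3.]])      # (X1, X2, X3)
Xk = np.array([[10., 20., 30.]])  # (X~1, X~2, X~3)
Xs, Xks = swap(X, Xk, [1, 2])     # B = {2, 3} in 1-based notation
```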
To use knockoffs in a controlled variable selection procedure, it is important to construct knockoff statistics $W_j$ that satisfy the \textit{flip-sign} property. That is, $W_j = w_j([\bm X, \tilde{\bm X}], y)$, for some function $w_j$, must satisfy the following for any subset $B$,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{knockoff-stat}
w_j([\bm X, \tilde{\bm X}]_{\text{swap}(B)}, y)=\begin{cases}
w_j([\bm X, \tilde{\bm X}], y)\; \text{if}\; j\not \in B,\\
- w_j([\bm X, \tilde{\bm X}], y) \;\text{if}\; j\in B.
\end{cases}
\end{align}
}
Equation \eqref{knockoff-stat} says that swapping the $j$-th variable with its knockoff flips the sign of $W_j$ when $j\in B$ and leaves it unchanged otherwise.
Given the knockoff statistics $W_j$, $j= 1, \dots, d$, the knockoff filter achieves guaranteed FDR control at level $q\in (0, 1)$ by selecting the variables $j\in \{1, \dots, d\}$ with $W_j\geq \tau$, where $\tau$ is obtained via the following,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{equation}\label{knockoff}
\tau = \min_{t>0}\Big \{t: \frac{1+ |\{j: W_j \leq -t\}|}{|\{j: W_j \geq t\}|}\leq q\Big \}.
\end{equation}
}
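A minimal sketch of the threshold rule \eqref{knockoff} (our illustration, assuming NumPy; the statistics below are hypothetical):

```python
import numpy as np

def knockoff_threshold(W, q):
    """Smallest t > 0 with (1 + #{j: W_j <= -t}) / #{j: W_j >= t} <= q."""
    W = np.asarray(W, dtype=float)
    # Only |W_j| values can change the two counts, so they are the only candidates.
    for t in np.sort(np.abs(W[W != 0])):
        n_neg = np.sum(W <= -t)
        n_pos = np.sum(W >= t)
        if n_pos > 0 and (1 + n_neg) / n_pos <= q:
            return t
    return np.inf  # nothing can be selected at level q

W = np.array([3.1, -0.2, 2.4, 1.7, -0.5, 2.9, 0.8, 2.2])
tau = knockoff_threshold(W, q=0.5)
selected = np.where(W >= tau)[0]
```

The returned `tau` corresponds to the ``knockoff+'' variant, whose numerator includes the $+1$ offset shown in the equation above.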
The power and FDR control of a knockoff filter depend on how the \textbf{exchangeability} condition is satisfied. A simple approach, the second-order knockoff \citep{candes2016panning}, matches only the first two moments to satisfy the exchangeability condition, assuming that the covariates follow a multivariate Gaussian distribution. However, second-order knockoffs provide insufficient guarantees for FDR control if this assumption does not hold. On the other hand, Deep Knockoffs \citep{romano2020deep} uses a generative model based on the maximum mean discrepancy (MMD) \citep{gretton2012kernel} statistic to produce higher-order knockoffs that achieve provable FDR guarantees under general conditions.
We now briefly describe the knockoff generation procedure used in Deep Knockoffs \citep{romano2020deep}.
\subsection{Deep knockoffs}
Deep Knockoffs \citep{romano2020deep} employs a deep neural network {\small $f_\theta(\bm X, \bm V)$} that generates a knockoff {\small $\tilde {\bm X}\in \mathbb R^d$} from the original input variable $\bm X\in \mathbb R^d$ and a noise vector {\small $\bm V\sim \mathcal N(0, \mathbb I_d)\in \mathbb R^d$}, where $\theta$ denotes the parameters of the network. In order to satisfy the exchangeability condition without any assumption on the covariate distribution, Deep Knockoffs uses an unbiased MMD estimate \citep{li2015generative} as a loss function, which matches higher-order moments in addition to the first two. In particular, for a design matrix {\small $\mathbf X\in \mathbb R^{n\times d}$} with $n$ observations and the corresponding knockoff matrix {\small $\mathbf {\tilde X}\in \mathbb R^{n\times d}$}, Deep Knockoffs computes the loss in the following manner:
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{mmd}
\ell_{\text{MMD}}(\mathbf X, \tilde {\mathbf X})
&= {\text{MMD}} \Big [ (\mathbf X', \tilde{\mathbf X'}), (\tilde{\mathbf X''}, \mathbf X'')\Big]+{\text{MMD}} \Big [ (\mathbf X', \tilde{\mathbf X'}), (\mathbf X'', \tilde{\mathbf X''})_{\text{swap}(B)}\Big],
\end{align}
}
where {\small $\mathbf X', \mathbf X'', \mathbf {\tilde X}', \mathbf {\tilde X}''\in\mathbb R^{n/2 \times d} $} are obtained by randomly splitting the design matrix in half and $B$ is a uniformly chosen random subset of {\small $\{1, \dots, d\}$}, such that $j\in B$ with probability $1/2$. To increase the power of the knockoff filter, a regularization term is added to the loss function that penalizes large pairwise correlations,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\text{D}_{\text{corr}}(\mathbf X, \tilde{\mathbf X}) = \|\text{diag}(\hat {\mathbf G}_{\bm X \tilde {\bm X}}) - 1 + s^*_{\text{SDP}}(\hat {\mathbf G}_{\bm X\bm X})\|^2,
\end{align}
}%
where {\small $\mathbf {\hat G}_{\bm X \bm {\tilde X}}, \mathbf {\hat G}_{\bm X \bm X} \in \mathbb R^{d\times d}$} are the empirical covariance matrices and $s^*_{\text{SDP}}$ is the solution to the following semidefinite program,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align*}
s^*_{\text{SDP}}(\Sigma_{\bm X \bm X}) &= \arg \min_{s\in [0, 1]^d} \sum_{j=1}^d|1- s_j|, \,\,\, \text{s.t.} \;\;2 \Sigma_{\bm X \bm X} \succeq \text{diag}(s).
\end{align*}
}
Deep Knockoffs also adds a second-order loss term (Equation 7 in \citep{romano2020deep}), which is claimed to be effective in reducing the training time. The total loss of the generative model is given as,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\label{loss}
\ell(\mathbf X, \mathbf{\tilde X})\!\! =\!\!\ell_{\text{MMD}}(\mathbf X, \tilde {\mathbf X}) + \lambda \ell_{\text{s-o}}(\mathbf X, \mathbf {\tilde X}) +\delta \text{D}_{\text{corr}}(\mathbf X, \tilde{\mathbf X}).
\end{align}
}%
For further insights into the generative model, we refer the reader to \citep{romano2020deep} and references therein.
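As an illustrative sketch of the central ingredient of the loss above (ours; a simplified biased V-statistic with a single Gaussian kernel, whereas \citep{romano2020deep} uses an unbiased, mixed-kernel estimator):

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased (V-statistic) squared MMD estimate with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
same = mmd2(X[:100], X[100:])        # two halves of the same sample: near zero
diff = mmd2(X[:100], X[100:] + 3.0)  # shifted sample: clearly larger
```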
\section{Background: OT based multivariate rank energy}
Rank-based goodness-of-fit tests have been studied comprehensively in 1-D, e.g., the Kolmogorov-Smirnov test \citep{smirnov1939estimation}, the Wilcoxon signed-rank test \citep{wilcoxon1947probability}, and the Wald-Wolfowitz runs test \citep{wald1940test}. Unlike in 1-D, due to the lack of a canonical ordering in $d$-dimensional space for $d\geq 2$, the notion of rank cannot be defined in a straightforward way. Recently, several authors \citep{hallin2017distribution,hallin2021distribution, chernozhukov2017monge,deb2019multivariate,hallin2020fully, shi2020rate, shi2020distribution} have studied notions of multivariate rank based on OT theory. In this paper, we consider the setting in \citep{deb2019multivariate} explicitly and build upon the ideas therein.
\paragraph{Ranks and Quantiles for univariate distributions.} Let $X$ be a univariate random variable with c.d.f. $\mathsf F: \mathbb{R} \rightarrow [0,1]$. It is a standard result that when $\mathsf F$ is continuous, the random variable $\mathsf{F}(X) \sim \mathsf{U} [0, 1]$ - the uniform distribution on $[0,1]$. For any $x \in \mathbb{R}$, $\mathsf{F}(x)$ is referred to as the \textit{rank-function}. For any $0<p<1$, the \emph{quantile function} is defined by $\mathsf{Q}(p) = \inf \{x \in \mathbb R: p \leq \mathsf F(x)\}$. When $\mathsf F$ is continuous, the quantile function $\mathsf Q = \mathsf F^{-1}$.
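A quick numerical check of the probability integral transform (our illustration, assuming NumPy, with $X\sim\mathrm{Exp}(1)$ so that both $\mathsf F$ and $\mathsf Q$ are available in closed form):

```python
import numpy as np

# If X ~ F with F continuous, then F(X) ~ U[0, 1].
# For Exp(1): F(x) = 1 - exp(-x) and Q(p) = -ln(1 - p).
rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=20000)
u = 1.0 - np.exp(-x)                  # rank function applied to the sample
mean_u, var_u = u.mean(), u.var()     # should approach 1/2 and 1/12
sample_median = np.quantile(x, 0.5)   # should approach Q(1/2) = ln 2
```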
\paragraph{Ranks and Quantiles for multivariate distributions.} Since there exists no natural ordering in $\mathbb R^d$, defining ranks and quantiles in high dimension is not straightforward. To extend the notion of rank to $\mathbb R^d$, the theory of Optimal Transport (OT) has been used to propose meaningful and useful notions of multivariate rank and quantile functions \citep{hallin2017distribution,chernozhukov2017monge,deb2019multivariate,hallin2020fully}. In its most standard setting, given a source distribution $\mu\in \mathcal P(\mathbb R^d)$ and a target distribution $\nu \in \mathcal P(\mathbb R^d)$, OT aims to find a map $\mathbf T: \mathbb R^d \rightarrow \mathbb R^d$ that pushes $\mu$ to $\nu$ with minimal cost. That is, given {\small $\bm X\sim \mu$} and {\small $\bm{Y} \sim \nu $}, OT finds a map $\mathbf T$ solving
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{monge}
\inf_{\mathbf T} \int \|\bm{x} - \mathbf T(\bm{x})\|^2 d\mu(\bm{x}) \;\; \text{subject to}\;\; \bm{Y} = \mathbf T(\bm{X}) \sim \nu.
\end{align}
}
Note that if $\mathbf T(\bm{X}) \sim \nu$ when $\bm{X} \sim \mu$, we write $\nu = \mathbf T_{\#}\mu$. The \textit{key insight} in using the theory of OT to multivariate ranks and quantiles comes from noticing that in case of $d = 1$, the optimal transport map is given by $\mathbf T = \mathsf F_\nu^{-1} \circ \mathsf F_\mu$, where $\mathsf F_\mu$ and $\mathsf F_\nu$ are the distribution functions for $\mu$ and $\nu$, respectively. When $\nu = \mathsf U[0, 1]$, this gives the rank function $\mathsf F_\mu$. Following \citep{deb2019multivariate}, McCann's theorem \citep{mccann1995existence} stated below, is used to extend the notion of rank to the multivariate setting.
\begin{theorem}[\citep{mccann1995existence}] \label{Theorem 1}
Assume $\mu, \nu \in \mathcal P(\mathbb R^d)$ are absolutely continuous measures. Then there exist transport maps $\mathbf R(\cdot)$ and $\mathbf Q(\cdot)$ that are gradients of real-valued $d$-variate convex functions, such that $\mathbf R_\# \mu =\nu$ and $\mathbf Q_\#\nu = \mu$; moreover, $\mathbf R$ and $\mathbf Q$ are unique, and $\mathbf Q\circ \mathbf R(\bm{X}) = \bm{X}$, $\mathbf R\circ \mathbf Q(\bm{Y}) = \bm{Y}$.
\end{theorem}
Based on this result, the authors in \citep{deb2019multivariate} give the following definitions for the rank and quantile functions in high dimensions.
\begin{definition}[\citep{deb2019multivariate}]
Given an absolutely continuous measure $\mu \in \mathcal P(\mathbb R^d)$ and $\nu =\mathsf U[0,1]^d$ - the uniform measure on the unit cube in $\mathbb{R}^d$, the ranks and quantile \textit{maps} for $\mu$ are defined as the maps $\mathbf R(\cdot)$ and $\mathbf Q(\cdot)$, respectively as defined in Theorem \ref{Theorem 1}.
\end{definition}
\subsection{Rank Energy}
To state the definition of Rank Energy \citep{deb2019multivariate}, we begin with a brief introduction to the energy distance \citep{baringhaus2004new,szekely2013energy}, a multivariate two-sample goodness-of-fit measure. Given two independent multivariate random variables {\small $\bm X \in \mathbb R^d\sim \mu_{\bm X}$} and {\small $\bm Y\in \mathbb R^d\sim \mu_{\bm Y}$}, where {\small $\mu_{\bm X}, \mu_{\bm Y}\in \mathcal P(\mathbb R^d)$}, the energy distance is defined via:
{\small
\setlength{\abovedisplayskip}{10pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{10pt}
\begin{align}\label{eq:energy_dist}
\text E(\bm X, \!\bm Y)\! = \!\gamma_d\! \int_{\mathbb R}\!\int_{\mathcal S^{d-1}}\!\Big(\! \mathbb{P}(\bm a^\top \!\bm X\!\leq \!t) - \mathbb{P}(\bm a^\top\! \bm Y\!\leq\! t)\Big)^2\!\!\! d\kappa(\bm a) dt,
\end{align}
}
where $\gamma_d =(2\Gamma(d/2))^{-1}\sqrt{\pi}(d-1)\Gamma\big((d-1)/2\big)$ for $d>1$, $\mathcal S^{d-1} \overset{.}{=} \{\bm x\in \mathbb R^d: \|\bm x\|= 1\}$, and $\kappa(\cdot)$ denotes the uniform measure on $\mathcal S^{d-1}$. Note that $\text E(\bm X, \bm Y)=0$ \textit{if and only if} {\small $\mathbb{P}(\bm a^\top \bm X\leq t) = \mathbb{P}(\bm a^\top \bm Y\leq t)$} for all $\bm a\in \mathcal S^{d-1}$ and $t\in \mathbb R$, i.e., if and only if the two distributions are equal.
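A minimal sketch of the sample analogue of \eqref{eq:energy_dist} in its equivalent pairwise-distance (V-statistic) form, $2\,\mathbb E\|\bm X-\bm Y\| - \mathbb E\|\bm X-\bm X'\| - \mathbb E\|\bm Y-\bm Y'\|$ (our illustration, assuming NumPy; the constant $\gamma_d$ is absorbed):

```python
import numpy as np

def energy_distance(X, Y):
    """Sample energy distance: 2 E||X-Y|| - E||X-X'|| - E||Y-Y'|| (V-statistic form)."""
    def pair_mean(A, B):
        return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1).mean()
    return 2.0 * pair_mean(X, Y) - pair_mean(X, X) - pair_mean(Y, Y)

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
Y = rng.normal(size=(300, 3))
e_same = energy_distance(X, Y)        # near zero for equal distributions
e_diff = energy_distance(X, Y + 2.0)  # large for a shifted distribution
```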
The authors of \citep{deb2019multivariate} proposed a rank-based version of the energy measure, defined as the rank energy.
\begin{definition}[\citep{deb2019multivariate}]
Suppose that $\bm X\sim \mu_{\bm X}$ and $\bm Y\sim \mu_{\bm Y}$. Let $\mathbf R_\lambda$ denote the population rank map corresponding to the mixture distribution $\lambda \mu_{\bm X} +(1-\lambda) \mu_{\bm Y}$, for some $\lambda\in (0, 1)$. Then the population rank energy {\small $\mathop{}\!\mathrm{RE}_\lambda$} is defined as:
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{population_RE}
\mathop{}\!\mathrm{RE}_\lambda(\bm X, \bm Y) =\gamma_d \int_{\mathbb R}\int_{\mathcal S^{d-1}} \Big(\mathbb P(\bm a^\top\mathbf R_\lambda(\bm X)\leq t) -& \mathbb{P}(\bm a^\top \mathbf R_\lambda(\bm Y)\leq t)\Big)^2 d\kappa(\bm a) dt.
\end{align}
}%
In other words, rank energy between $\bm X$ and $\bm Y$ is the energy measure between the corresponding ranks, $\mathbf R_\lambda(\bm X)$ and $\mathbf R_\lambda(\bm Y)$.
\end{definition}
{\small $\mathop{}\!\mathrm{RE}_\lambda(\cdot,\cdot)$} is a distribution-free multivariate two-sample goodness-of-fit measure. A nice feature of {\small $\text{RE}_\lambda$} is that, in 1-D, it is equivalent to the widely used Cram\'er--von Mises statistic \citep{anderson1962distribution} for two-sample distribution testing \citep{deb2019multivariate}:
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\frac{1}{2}\text{RE}_\lambda(X, Y) = \int (\mathsf F_X(t)-\mathsf F_Y(t))^2 \text d\mathsf H_{\lambda}(t),
\end{align}
}%
where $\mathsf F_X, \mathsf F_Y$ and $\mathsf H_{\lambda}$ are absolutely continuous distribution functions corresponding to the probability measures $\mu_X, \mu_Y$ and $\lambda \mu_X+(1-\lambda)\mu_Y$, respectively. Based on this result, we note the following lemma.
\begin{lemma}
Let $\mu_X$ and $\mu_Y$ be supported on the intervals $[a, b]$ and $[a+s, b+s]$, respectively, and let their mixture measure be denoted $\mu_H=\lambda \mu_X +(1-\lambda)\mu_Y$, for any $\lambda \in(0,1)$. Also, let $\mathsf F_X, \mathsf F_Y$, and $\mathsf H_\lambda$ denote the cumulative distribution functions of $\mu_X, \mu_Y$, and $\mu_H$, respectively. Then {\small $\mathop{}\!\mathrm{RE}_\lambda$}, as a function of $s$, is a constant independent of $s$ for $s \geq b-a$ and $s\leq a-b$.
\end{lemma}
\begin{proof}
Please see Supplementary \ref{supp:lemma1}.
\end{proof}
We also verify this phenomenon empirically for higher dimensions in Section \ref{sec:4.1}.
\paragraph{Sample rank energy.}\label{sample_RE} To define the sample rank energy (hereafter referred to as RE), we begin with the definition of a sample rank map. Given a set of i.i.d. samples {\small $\{\bm X_1, \dots, \bm X_m\}\in \mathbb R^d$} with empirical measure {\small $\mu_m^{\bm X} = m^{-1}\sum_{i=1}^m \delta_{\bm X_i}$} and a Halton sequence {\small $\mathcal H_{m}^d:=\{\bm h_1, \dots, \bm h_{m}\}\subset [0, 1]^d$} \citep{deb2019multivariate} with empirical measure {\small $\nu_{m}^{\mathbf H}= m^{-1}\sum_{i=1}^{m}\delta_{\bm h_i}$} that converges weakly to {\small $\mathsf U[0,1]^d$}, the sample rank map {\small $\mathbf{\widehat R}_{m}$} is computed by solving the following discrete optimal transport problem,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\label{eq:OT_linear}
\mathbf{\hat P} = \arg \min_{\mathbf{P} \in \Pi} \sum_{i,j = 1}^{m} \mathbf{C}_{i,j} \mathbf{P}_{i,j},
\end{align}
}
where $\mathbf C_{i,j}= \|\bm X_i-\bm h_j\|^2, \Pi = \{ \mathbf{P}: \mathbf{P} \bm{1} = \frac{1}{m} \bm{1}, \bm{1}^\top \mathbf{P} = \frac{1}{m} \bm{1}^\top\}$. It is well known that the solution to this problem, under the given set-up, is one of the scaled permutation matrices. Consequently, one obtains a map $\mathbf {\hat R}_m(\bm X_i) = \bm h_{\sigma(i)}$, where $\sigma(i)$ is the non-zero index in the $i$-th row of $\mathbf {\hat P}$.
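A minimal numerical sketch of this assignment step (ours, assuming SciPy; `linear_sum_assignment` stands in for a generic discrete OT solver, which is valid here because the optimal plan is a scaled permutation):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import qmc

rng = np.random.default_rng(0)
m, d = 64, 2
X = rng.normal(size=(m, d))
H = qmc.Halton(d=d, seed=0).random(m)  # Halton points in [0, 1)^d

# Squared-Euclidean cost; the optimal plan is a (scaled) permutation sigma.
C = ((X[:, None, :] - H[None, :, :]) ** 2).sum(-1)
_, sigma = linear_sum_assignment(C)
ranks = H[sigma]  # sample rank map: R_hat(X_i) = h_{sigma(i)}
```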
Now to compute RE between two sets of i.i.d. samples {\small $\{\bm X_1, \dots, \bm X_m\}\sim \mu_m^{\bm X}$} and {\small $\{\bm Y_1, \dots, \bm Y_n\}\sim \mu_n^{\bm Y}$}, a joint sample rank map $\mathbf{\widehat R}_{m,n}$ is computed via solving \eqref{eq:OT_linear} between the joint empirical measure {\small $\mu_{m,n}^{\bm X, \bm Y} = (m+n)^{-1}(m\mu_m^{\bm X} + n\mu_n^{\bm Y})$} and the target empirical measure {\small $\nu_{m,n}^{\mathbf H}=(m+n)^{-1}\sum_{i=1}^{m+n}\delta_{\bm h_i}$}.
Given the sample ranks corresponding to $\bm X_i$'s and $\bm Y_j$'s, RE is defined as,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{sre_sample}
\text{{RE}} \doteq & \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \| \widehat{\mathbf{R}}_{m,n}(\bm{X}_i) - \widehat{\mathbf{R}}_{m,n}(\bm{Y}_j) \| - \frac{1}{m^2} \sum_{i,j=1}^{m} \| \widehat{\mathbf{R}}_{m,n}(\bm{X}_i) - \widehat{\mathbf{R}}_{m,n}(\bm{X}_j)\| \notag \\ & - \frac{1}{n^2} \sum_{i,j=1}^{n} \| \widehat{\mathbf{R}}_{m,n}(\bm{Y}_i) - \widehat{\mathbf{R}}_{m,n}(\bm{Y}_j)\|.
\end{align}
}
RE is distribution-free under the null for a fixed sample size. However, the discrete OT problem \eqref{eq:OT_linear} converges at the slow statistical rate $\mathcal O(n^{-1/d})$ \citep{genevay2019sample} and is computationally expensive, requiring $\mathcal O(n^3\log n)$ operations for sample size $n$. Moreover, RE is constant in the region where the supports of the two distributions are completely disjoint (see Supplementary).
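Putting the two preceding steps together, a minimal end-to-end sketch of the sample RE (ours, assuming SciPy; the joint rank map is computed on the pooled sample, then the energy distance is evaluated on the resulting ranks):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import qmc

def sample_rank_energy(X, Y, seed=0):
    """Sketch of sample RE: joint rank map to Halton points, then energy distance."""
    m, n = len(X), len(Y)
    Z = np.vstack([X, Y])
    H = qmc.Halton(d=Z.shape[1], seed=seed).random(m + n)
    C = ((Z[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    _, sigma = linear_sum_assignment(C)
    R = H[sigma]                          # R[i] = R_hat_{m,n}(Z_i)
    RX, RY = R[:m], R[m:]
    pm = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1).mean()
    return 2.0 * pm(RX, RY) - pm(RX, RX) - pm(RY, RY)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
re_null = sample_rank_energy(X[:50], X[50:])        # same distribution: small
re_alt = sample_rank_energy(X[:50], X[50:] + 3.0)   # shifted: large
```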
\section{Proposed soft rank energy}
Towards developing the notion of soft rank energy, we first state the Kantorovich relaxation of the Monge problem, in which, instead of a map, one seeks an optimal coupling $\bm{\pi}$ between a source distribution $\mu\in \mathcal P(\mathbb R^d)$ and a target distribution $\nu\in \mathcal P(\mathbb R^d)$. That is, given $\bm X\sim \mu$ and $\bm Y\sim \nu$, the Kantorovich relaxation \citep{OTAM} solves
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{eq:kantoro_OT}
\argmin_{\bm \pi\in \Pi(\mu, \nu)}\int c(\bm x, \bm y) \text d\bm\pi(\bm x,\bm y),
\end{align}
}
where $\Pi(\mu, \nu)$ is the set of joint probability measures over the product space $\mathbb R^d \times \mathbb R^d$ with marginals $\mu$ and $\nu$, and typically $c(\bm x, \bm y)$ is a symmetric positive cost function satisfying $c(\bm x, \bm x)=0$. It is well known that the Kantorovich solution coincides with the Monge solution whenever the latter exists \citep{OTAM}.
Adding an entropic regularization term to \eqref{eq:kantoro_OT} allows one to get the transport plan in closed form \citep{feydy2019interpolating} and makes the OT problem differentiable everywhere w.r.t. the weights of the input measures \citep{blondel2018smooth}. For any regularizer $\varepsilon>0$, the primal formulation of the entropy regularized OT is given by:{\small
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{0pt}
\begin{align}\label{entropic_OT}
\argmin_{\bm \pi\in \Pi(\mu, \nu)} \int c(\bm x, \bm y) \text d\bm\pi(\bm x, \bm y) + \varepsilon \text{KL}(\bm \pi(\bm x, \bm y)| \mu(\bm x)\otimes \nu(\bm y)),
\end{align}
}%
{\small
\setlength{\abovedisplayskip}{0pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{0pt}
\begin{align}
\text{where}\;\text{KL}(\bm\pi(\bm x, \bm y)|\mu(\bm x) \otimes \nu(\bm y)) \doteq \int \text{ln} \Big(\frac{\text d\bm\pi(\bm x,\bm y)}{\text d\mu(\bm x) \text d\nu(\bm y)}\Big)\text d\bm\pi(\bm x,\bm y).\notag
\end{align}
}
The unconstrained dual of \eqref{entropic_OT} is achieved by using the Fenchel-Rockafellar's duality theorem \citep{clason2021entropic},
{\small
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{3pt}
\begin{align}\label{dual}
&\argmax_{f, g} \int f(\bm x)\text d\mu(\bm x) + \int g(\bm y)\text d\nu(\bm y) -\!\! \varepsilon\!\! \int\!\! \int\!\text{exp}\big( \frac{1}{\varepsilon}(f(\bm x)\!\!+ \!\!g(\bm y)\!\!-\!\! c(\bm x, \bm y))\big)\text d\mu(\bm x)\text d\nu(\bm y)\!+\! \varepsilon ,
\end{align}
}%
where $(f, g)$ is a pair of continuous functions. Given the optimal dual solutions $f,g$, the optimal plan $\bm \pi^{\varepsilon}$ is given as \citep{seguy2017large},
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\text d \bm \pi^\varepsilon (\bm{x}, \bm{y}) = \text{exp} \Big(\!\frac{1}{\varepsilon}(f(\bm x)+g(\bm y)-c(\bm x,\bm y))\!\Big) \text d\mu(\bm{x}) \text d \nu(\bm{y}). \notag
\end{align}
}%
Unlike $\mathbf T$, $\bm \pi^\varepsilon$ is not a map, but a \textit{diffused coupling}, where the degree of diffusion is directly proportional to $\varepsilon$. Given $\bm \pi^\varepsilon$, we now define the soft rank.
\begin{definition}\label{sre_def}
Given an absolutely continuous measure $\mu \in \mathcal P(\mathbb R^d)$ and $\nu =\mathsf U[0,1]^d$, the uniform measure on the unit cube in $\mathbb{R}^d$, the soft rank is defined as:
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\mathbf R^\varepsilon(\bm x) \overset{.}{=}\int_{\bm y} \frac{\bm{y}\, \text d\bm \pi^\varepsilon(\bm x, \bm y)}{\int_{\bm y} \text d\bm\pi^\varepsilon(\bm x, \bm y)}.
\end{align}
}%
In other words, the soft rank $\mathbf R^\varepsilon(\cdot)$ is the conditional expectation of $\bm Y$ under $\bm \pi^{\varepsilon}$, given $\bm X = \bm{x}$.
\end{definition}
We note that the soft rank as defined above corresponds to what is referred to as the barycentric projection of optimal transport plan \citep{seguy2017large, deb2021rates}. Based on Definition \ref{sre_def}, we now define the population version of soft rank energy.
\begin{definition}\label{sre_Def}
Let {\small $\bm X \sim \mu_{\bm X}$} and {\small $\bm Y\sim \mu_{\bm Y}$} be two independent multivariate random variables, where {\small $\mu_{\bm X}, \mu_{\bm Y}\in \mathcal P(\mathbb R^d)$}, and let {\small $\mathbf R^\varepsilon_\lambda$} denote the population soft rank corresponding to the mixture distribution {\small $\lambda \mu_{\bm X} +(1-\lambda) \mu_{\bm Y}$}, for some $\lambda\in (0, 1)$. Then the population soft rank energy is defined as:
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{population_sRE}
\mathop{}\!\mathrm{sRE}^\varepsilon_\lambda(\bm X, \bm Y) =\gamma_d & \int_{\mathbb R} \int_{\mathcal S^{d-1}} \Big( \mathbb{P}(\bm a^\top \mathbf R_\lambda^\varepsilon(\bm X)\leq t)- \mathbb{P}(\bm a^\top \mathbf R_\lambda^\varepsilon(\bm Y)\leq t)\Big)^2 \text d\kappa(\bm a)\, \text dt.
\end{align}}
In other words, the soft rank energy is the energy distance between the soft ranks corresponding to $\bm X$ and $\bm Y$.
\end{definition}
Based on this definition, we state the following lemma.
\begin{lemma}\label{lemma:sRE}
Under the assumptions of Definition \ref{sre_Def}, we state the following properties of {\small$\mathop{}\!\mathrm{sRE}^\varepsilon_\lambda$}.
\begin{itemize}
\setlength \itemsep{0pt}
\item[(a)] Given $\bm X_1, \bm X_2$ i.i.d. $\sim \mu_{\bm X}$ and $\bm Y_1, \bm Y_2$ i.i.d. $\sim \mu_{\bm Y}$, and following \citep{szekely2013energy}, the soft rank energy defined in \eqref{population_sRE} is equivalent to
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{properties}
\mathop{}\!\mathrm{sRE}_\lambda^\varepsilon(\bm X_1, & \bm Y_1)\!\! = \!\!2\mathbb E\|\mathbf R^\varepsilon_\lambda(\bm X_1)- \mathbf R^\varepsilon_\lambda(\bm Y_1)\|\!\! -\!\! \mathbb E\|\mathbf R^\varepsilon_\lambda(\bm X_1) - \mathbf R^\varepsilon_\lambda(\bm X_2)\|\!\!- \!\!\mathbb E\|\mathbf R^\varepsilon_\lambda(\bm Y_1)\!\!-\!\! \mathbf R^\varepsilon_\lambda(\bm Y_2)\|.
\end{align}}
\item[(b)] (Symmetric) {\small $\mathop{}\!\mathrm{sRE}^\varepsilon_\lambda(\bm X_1, \bm Y_1) = \mathop{}\!\mathrm{sRE}^\varepsilon_\lambda(\bm Y_1, \bm X_1)$},
\item[(c)] {\small$\mathop{}\!\mathrm{sRE}^\varepsilon_\lambda(\bm X_1, \bm Y_1) = 0$ if $\bm X_1\overset{d}{=}\bm Y_1$},
\item [(d)] {\small $\mathop{}\!\mathrm{sRE}^\varepsilon_\lambda$} converges to {\small $ \mathop{}\!\mathrm{RE}_\lambda$} as $\varepsilon\rightarrow 0$.
\end{itemize}
\end{lemma}
\begin{proof}
Please see Supplementary \ref{supp:lemma2}.
\end{proof}
Note that one has the flexibility to employ any multivariate \textit{two-sample test} to measure the goodness-of-fit between distributions. That is, in a similar fashion, one can define {\small $\text{RMMD}_\lambda$} or {\small $\text{sRMMD}^\varepsilon_\lambda$} by measuring the maximum mean discrepancy (MMD) \citep{gretton2012kernel} between the rank-transformed random variables $\mathbf R_\lambda(\bm{X})$ and $\mathbf R_\lambda(\bm{Y})$, or $\mathbf{R}_\lambda^{\varepsilon}(\bm{X})$ and $\mathbf{R}_\lambda^\varepsilon(\bm{Y})$.
We now state the sample soft rank energy {\small $(\text{sRE}^\varepsilon)$} and sample soft rank maximum mean discrepancy {\small$(\text{sRMMD}^\varepsilon)$.} For the sake of brevity, hereafter, we refer to them as {\small $\text{sRE}$} and {\small$\text{sRMMD}$}, respectively.
\begin{figure*}[ht]
\centering
\includegraphics[width =\textwidth]{boundedness_srmmd_02.pdf}
\caption{$\text{RMMD}\,(\varepsilon =0)$ and $\text {sRMMD}$ (y-axis) between two uniform distributions with bounded supports; $n$ and $d$ denote the number of samples and the dimension. $\text{RMMD}$ attains a constant maximum in the region where the two supports are completely disjoint, for all $n, d$. $\text{sRMMD}$ does not saturate for increasing $d$ if $\varepsilon$ and $n$ are large enough.}
\label{fig:boundedness}
\end{figure*}
\subsection{sRE and sRMMD}
Using a setting similar to that described in Section \ref{sample_RE}, we now define the sample soft rank map {\small $\mathbf{\widehat R}_{m}^\varepsilon$}. To compute {\small $\mathbf{\widehat R}_{m}^\varepsilon$}, an entropy regularized OT problem is first solved via the Sinkhorn algorithm \citep{peyre2019computational}, \textit{an iterative fixed-point} method, for an entropy regularizer $\varepsilon>0$,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\label{eq:OT_reg}
\mathbf{P}^\varepsilon = \arg \min_{\mathbf{P} \in \Pi} \sum_{i,j = 1}^{m} \mathbf{C}_{i,j} \mathbf{P}_{i,j} - \varepsilon H(\mathbf{P}),
\end{align}
}
where {\small $\mathbf C_{i,j}= \|\bm X_i-\bm h_j\|^2, \Pi = \{ \mathbf{P}: \mathbf{P} \bm{1} = \frac{1}{m} \bm{1}, \bm{1}^\top \mathbf{P} = \frac{1}{m} \bm{1}^\top\}$}, and {\small $H(\mathbf{P}) = - \sum_{i,j} \mathbf{P}_{i,j} \log \mathbf{P}_{i,j}$} is the entropy functional. Given $\mathbf P^\varepsilon$, the sample soft rank map is defined via,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{softrank}
\widehat{\mathbf{R}}_m^{\varepsilon} (\bm{X}_i) = \sum_{j = 1}^{m} \frac{\mathbf{P}_{i,j}^\varepsilon}{\sum_{j=1}^{m} \mathbf{P}_{i,j}^\varepsilon} \bm{h}_j.
\end{align}
}%
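The two steps above, solving \eqref{eq:OT_reg} and applying the barycentric projection \eqref{softrank}, can be sketched as follows (a simplified Sinkhorn in the standard matrix-scaling form; names are ours, and very small $\varepsilon$ would require log-domain updates in practice):

```python
import numpy as np

def soft_rank(X, H, eps=1.0, n_iter=200):
    """Entropic OT (Sinkhorn) between samples X and reference points H,
    followed by the barycentric projection of the soft rank map.

    X: (m, d) data, H: (m, d) fixed reference points in [0, 1]^d.
    Returns the (m, d) soft ranks.
    """
    m = len(X)
    C = ((X[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)                       # Gibbs kernel
    u = np.ones(m)
    a = np.full(m, 1.0 / m)                    # uniform marginals
    for _ in range(n_iter):                    # alternate marginal scalings
        v = a / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]            # entropic plan P^eps
    return (P / P.sum(axis=1, keepdims=True)) @ H
```

In one dimension the returned soft ranks are monotone in the inputs, since the Gaussian Gibbs kernel is totally positive of order 2, so each row's conditional mean increases with $X_i$.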
Now, to define sRE and sRMMD, we first compute a joint sample soft rank map $\widehat{\mathbf R}_{m,n}^\varepsilon$ via solving \eqref{eq:OT_reg} between $\mu_{m,n}^{\bm X, \bm Y}$ and $\nu_{m,n }^{\mathbf H}$. Given the sample soft ranks corresponding to $\bm X_i$'s and $\bm Y_j$'s, we define sRE as,
{\small
\setlength{\abovedisplayskip}{0pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{0pt}
\begin{align}\label{sre}
\text{{sRE}} \doteq & \frac{2}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \| \widehat{\mathbf{R}}^\varepsilon_{m,n}(\bm{X}_i) - \widehat{\mathbf{R}}_{m,n}^\varepsilon(\bm{Y}_j) \| - \frac{1}{m^2} \sum_{i,j=1}^{m} \| \widehat{\mathbf{R}}^\varepsilon_{m,n}(\bm{X}_i) - \widehat{\mathbf{R}}^\varepsilon_{m,n}(\bm{X}_j)\| \notag \\ & - \frac{1}{n^2} \sum_{i,j=1}^{n} \| \widehat{\mathbf{R}}^\varepsilon_{m,n}(\bm{Y}_i) - \widehat{\mathbf{R}}^\varepsilon_{m,n}(\bm{Y}_j)\|
\end{align}
}%
and sRMMD, with a characteristic kernel function $k(\cdot,\cdot)$, as,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{srmmd}
\text{{sRMMD}} \doteq & \frac{1}{m(m-1)}\sum_{i, j \neq i}^{m} k(\widehat{\mathbf{R}}_{m,n}^{\varepsilon}(\bm{X}_i), \widehat{\mathbf{R}}_{m,n}^{\varepsilon}(\bm{X}_j)) - \frac{2}{mn}\sum_{i, j = 1}^{m, n} k(\widehat{\mathbf{R}}_{m,n}^{\varepsilon}(\bm{X}_i), \widehat{\mathbf{R}}_{m,n}^{\varepsilon}(\bm{Y}_j)) \notag \\ & + \frac{1}{n(n-1)}\!\!\sum_{i, j \neq i}^{n} \!\! k(\widehat{\mathbf{R}}_{m,n}^{\varepsilon}(\bm{Y}_i), \widehat{\mathbf{R}}_{m,n}^{\varepsilon}(\bm{Y}_j)).
\end{align}
}
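Given precomputed soft ranks for the $\bm X_i$ and $\bm Y_j$, \eqref{sre} and \eqref{srmmd} are direct pairwise sums. A minimal transcription (our own; a single Gaussian bandwidth is used here instead of the kernel mixture employed later) is:

```python
import numpy as np

def sample_sre(RX, RY):
    """Sample soft rank energy from precomputed soft ranks.

    The 1/m^2 and 1/n^2 within-sample sums include i = j, whose
    contribution is zero, so plain means over full matrices match.
    """
    d = lambda A, B: np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return 2 * d(RX, RY).mean() - d(RX, RX).mean() - d(RY, RY).mean()

def sample_srmmd(RX, RY, sigma=1.0):
    """Sample soft rank MMD with one Gaussian kernel (U-statistic form,
    i.e. the i != j sums with 1/(m(m-1)) and 1/(n(n-1)) weights)."""
    k = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                            / (2 * sigma ** 2))
    m, n = len(RX), len(RY)
    Kxx, Kyy, Kxy = k(RX, RX), k(RY, RY), k(RX, RY)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            - 2 * Kxy.mean()
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1)))
```

Both statistics vanish (in expectation for sRMMD, exactly for sRE) when the two sets of soft ranks coincide and grow as the rank clouds separate.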
Employing entropic OT to define the rank maps makes sRE and sRMMD differentiable everywhere \citep{seguy2017large}, and the Sinkhorn iterations \citep{peyre2019computational} are readily GPU-parallelizable. The computational complexity of entropic OT is {\small $\mathcal O(\varepsilon^{-2} n^2 \log n\, \|\mathbf C\|_\infty^2)$} \citep{peyre2019computational}, so the complexity decreases as $\varepsilon$ increases. Moreover, under some mild assumptions, namely sub-Gaussianity of the measures, the estimation of entropic OT does not suffer from the curse of dimensionality for sufficiently large $\varepsilon$ \citep{mena2019statistical}.
\subsection{Proposed Knockoff generation}
We now propose to use sRMMD as the loss function in a generative model to produce knockoffs. Given the design matrix $\mathbf X\in \mathbb R^{n\times d}$ and its knockoff matrix $\mathbf{\tilde X}\in \mathbb R^{n\times d}$, the loss function is defined via,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
\label{eq:loss_srmmd}
\ell(\mathbf X, \mathbf{\tilde X}) \!\!=\!\! \ell_{\text{sRMMD}}(\mathbf X, \tilde {\mathbf X}) \!\! + \!\!\lambda \ell_{\text{s-o}}(\mathbf X, \mathbf {\tilde X}) \!\!+\!\!\delta \text{D}_{\text{corr}}(\mathbf X, \tilde{\mathbf X}).
\end{align}}
$\ell_{\text{sRMMD}}(\mathbf X, \tilde {\mathbf X})$ is computed analogously to \eqref{mmd},
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}\label{smmd}
\ell_{\text{sRMMD}}(\mathbf X, \tilde {\mathbf X})
&= \text{{sRMMD}} \Big [ (\mathbf X', \tilde{\mathbf X'}), (\tilde{\mathbf X''}, \mathbf X'')\Big] + \text{{sRMMD}} \Big [ (\mathbf X', \tilde{\mathbf X'}), (\mathbf X'', \tilde{\mathbf X''})_{\text{swap}(S)}\Big]. \notag
\end{align}}%
For optimal performance, the hyperparameters should be tuned to the specific data distribution at hand.
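The $\text{swap}(S)$ operation appearing in the second term of $\ell_{\text{sRMMD}}$ exchanges the columns indexed by a subset $S$ between the data and its knockoff copy; for valid knockoffs the swapped pair is distributed like the original pair for any $S$. A minimal helper (our notation, not from the paper's code) is:

```python
import numpy as np

def swap(X, X_tilde, S):
    """Exchange the columns indexed by S between X and its knockoffs.

    X, X_tilde: (n, d) arrays; S: list of column indices.
    Returns the swapped pair (copies; the inputs are left untouched).
    """
    Xs, Xts = X.copy(), X_tilde.copy()
    Xs[:, S], Xts[:, S] = X_tilde[:, S], X[:, S]
    return Xs, Xts
```

Swapping the same subset twice recovers the original pair, which is a convenient sanity check for the loss implementation.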
\section{Experiments}\label{experiment}
In this section, we empirically observe the properties of {\small $\text{sRMMD}$} as a loss function and evaluate the performance of the proposed knockoff generation method on synthetic and real datasets.
\paragraph{{\small \textbf{sRMMD}} as a loss function.}\label{sec:4.1}
To investigate the quality of {\small $\text{sRMMD}$} as a loss function in a generative model, we consider two uniform distributions with bounded supports, $\mathsf U[0, 1]^d$ and $\mathsf U[s, s+1]^d$, and measure the {\small $\text{sRMMD}$} between them for $s\in [-10, 10]$.
Figure \ref{fig:boundedness} empirically shows the saturation behavior of {\small $\text{sRMMD}$}.
This saturation limits the use of {\small $\text{sRMMD}$} for learning a generative model, as saturation in those regimes implies vanishing gradients. As the plots show, either higher values of $\varepsilon$ or a larger sample size is needed with increasing dimension in order to prevent this saturation behavior of $\text{sRMMD}$. Plots showing similar behavior for {\small $\text{sRE}$} can be found in Supplementary \ref{supp:sre}.
\begin{figure}[ht]
\centering
\includegraphics[width = 6cm]{mode_collapse.pdf}
\caption{Mode reconstruction for a Gaussian mixture model (upper left) with sRMMD for various $\varepsilon$.}
\label{fig:mode}
\end{figure}
\begin{figure*}[ht]
\centering
\subfloat[][]{\includegraphics[width=.45\linewidth, height = 3.5cm]{gaussian.pdf}}\quad
\subfloat[][]{\includegraphics[width=.45\linewidth, height = 3.5cm]{gmm.pdf}}\\
\subfloat[][]{\includegraphics[width=.45\linewidth, height = 3.5cm]{mstudent.pdf}}\quad
\subfloat[][]{\includegraphics[width=.45\linewidth, height = 3.5cm]{sparse.pdf}}
\caption{FDR vs amplitude and power vs. amplitude. FDR level is set to 0.1 (black dotted line).}
\label{fig:result}
\end{figure*}
\paragraph{{\small \textbf{sRMMD}} for a generative model.}
To check this, we sample {\small $\bm X \sim \sum_{k=1}^3m_k \mathcal N(\mu_k, \Sigma_k)$}, a Gaussian mixture model with three modes, where $\Sigma_k$ is a $d$-dimensional covariance matrix whose $(i, j)$ entry is $\rho_k^{|i-j|}$. We use {\small $(\rho_1, \rho_2, \rho_3)=(0.6, 0.4, 0.2)$}, cluster means {\small $(\mu_1, \mu_2, \mu_3) = (0, 20, 40)$} and mixture proportions {\small $(m_1, m_2, m_3)=(0.4, 0.2, 0.4)$} for $d= 16$. We use \eqref{eq:loss_srmmd} with $(\lambda, \delta)=(0,0)$ in the generative model described in \citep{romano2020deep}.
Figure \ref{fig:mode} shows that the generative model suffers from mode collapse for $\varepsilon =1$, which is expected (Figure \ref{fig:boundedness}). On the other hand, $\varepsilon=5$ and $10$ perfectly capture the modes, covariances, and proportions of the Gaussian mixture model, establishing sRMMD as a suitable loss function for knockoff generation.
\paragraph{Synthetic dataset.}
We evaluate the performance of $\text{sRMMD}$ based knockoffs on four different distributional settings adapted from \citep{romano2020deep}. Below, we briefly describe each setting with the optimal hyperparameters used in \eqref{eq:loss_srmmd} to train the model.
\begin{enumerate}
\item[(a)] \emph{\textit{\underline{Multivariate Gaussian}}}: An AR(1) model with $\bm X =(X_1, X_2,\dots, X_d)\sim \mathcal N(0, \Sigma)$, where $\Sigma$ is a $d$-dimensional covariance matrix whose $(i, j)$ entry is $\rho^{|i-j|}$, with $\rho$ set to $0.5$. The hyperparameters $(\lambda, \delta)$ were set to $(1, 1)$ in the loss function \eqref{eq:loss_srmmd}.
\item[(b)] \emph{\textit{\underline{Gaussian Mixture Model}}}: A mixture of three multivariate Gaussian distributions, $\mathcal N(0, \Sigma_1)$, $\mathcal N(0, \Sigma_2)$ and $\mathcal N(0, \Sigma_3)$, each with equal probability. $\Sigma_1, \Sigma_2, \Sigma_3$ have the same first-order autoregressive structure, with entries $(\Sigma_1)_{i,j} = 0.3^{|i-j|}, (\Sigma_2)_{i,j} = 0.5^{|i-j|}$, and $(\Sigma_3)_{i,j} = 0.7^{|i-j|}$. $(\lambda, \delta) = (1,1)$.
\item[(c)] \emph{\textit{\underline{Multivariate Student's t distribution}}}: {\small $\bm X = \sqrt{\frac{(\nu -2)}{\nu}} \frac{\bm Z}{ \sqrt{\Gamma}}$}, where $\nu = 3$ is the degrees of freedom, {\small $\bm Z\sim \mathcal N(0, \Sigma)$} and $\Gamma$ is independently drawn from a Gamma distribution with shape and rate parameters both equal to $\nu/2$. $(\lambda, \delta)=(1, 0.1)$.
\item[(d)] \emph{\textit{\underline{Sparse Gaussian Variables}}}: For a scalar random variable $\eta \sim \mathcal N(0, 1)$ and a random subset $A \subseteq \{1, \dots, p\}$ of size $|A| = L$, the sparse Gaussian variable is defined as,
{\small
\setlength{\abovedisplayskip}{0pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{0pt}
\begin{align*}
X_j = \sqrt{\frac{\binom{p}{L}}{\binom{p-1}{L-1}}}\cdot \begin{cases}\eta,\;\; \text{if}\;\; j\in A, \\ 0, \;\; \text{otherwise,}\end{cases}\;\;\\ \text{and}\;\; \Sigma_{i, j} = \begin{cases} 1, \;\; \text{if}\;\; i = j,\\ \frac{L-1}{p-1},\;\; \text{otherwise,}\end{cases}
\end{align*}
}%
with $L = 30$, and $(\lambda, \delta) =(1, 5)$.
\end{enumerate}
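Settings (a), (b) and (d) can be sampled in a few lines. The sketch below (our own illustration under the parameters stated above) builds the AR(1) covariance and the sparse Gaussian design, using the normalization $\sqrt{\binom{p}{L}/\binom{p-1}{L-1}} = \sqrt{p/L}$, which yields unit variances and off-diagonal covariance $(L-1)/(p-1)$ as stated:

```python
import numpy as np

def ar1_cov(d, rho):
    """AR(1) covariance with (i, j) entry rho^{|i-j|} (settings (a)-(b))."""
    idx = np.arange(d)
    return rho ** np.abs(idx[:, None] - idx[None, :])

def sparse_gaussian(n, p, L, rng):
    """Setting (d): a single eta on a random size-L support, scaled by
    sqrt(p/L) so that E[X_j^2] = 1 and E[X_i X_j] = (L-1)/(p-1)."""
    X = np.zeros((n, p))
    for i in range(n):
        A = rng.choice(p, size=L, replace=False)
        X[i, A] = np.sqrt(p / L) * rng.standard_normal()
    return X
```

Gaussian draws for (a) and (b) then follow from \texttt{rng.multivariate\_normal} with the appropriate \texttt{ar1\_cov} matrix.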
\textbf{Experimental setup:} We consider a fully connected neural network with 6 hidden layers, each consisting of 5$d$ nodes, as the generative architecture. This architecture is similar to the structure used in \citep{romano2020deep}. The network takes $\bm X \in \mathbb R^d$ and a noise vector {\small $\bm V\in \mathbb R^d$, $\bm V \sim \mathcal N(\bm 0, \mathbb I_d)$,} as inputs and produces the knockoff vector {\small $\tilde {\bm X}\in \mathbb R^d$}. For further details, we refer the reader to \citep{romano2020deep}.
We train the model with stochastic gradient descent on $n=2500$ training samples of dimension $d=100$. The minibatch size, learning rate, and number of epochs were set to 500, $0.01$, and 100, respectively. We use $\varepsilon = 10$ to compute the soft ranks. To compute $\text{sRMMD}$, we employ a mixture-of-Gaussians kernel {\small $k(\bm x, \bm y) = \frac{1}{8}\sum_{i=1}^8 \text{exp}\big(-\|\bm x - \bm y\|_2^2/(2 \sigma_i^2)\big)$} with bandwidths $(\sigma_1,\dots,\sigma_8) = (1, 2, 4, 8, 16, 32, 64, 128)$.
We form a matrix $\mathbf X\in \mathbb R^{m\times d}$ of $m=100$ random samples not seen during training and generate knockoffs $\mathbf {\tilde X}\in \mathbb R^{m\times d}$. For each sample {\small $i\in \{1,2 , \dots, m\}$}, we simulate the response according to a Gaussian linear model, {\small $y_i = \bm X_i^\top \bm \beta+ z$}, where $z\sim \mathcal N(0, 1)$ and $\bm \beta\in \mathbb R^d$ denotes the coefficient vector, which has 30 non-zero entries, each with amplitude $a/\sqrt{m}$. To find the knockoff statistics, we fit a LASSO regression \citep{friedman2010regularization} model on the augmented matrix $[\mathbf X, \mathbf {\tilde X}] \in \mathbb R^{m\times 2d}$ with the response vector $\bm y\in \mathbb R^m$ and compute the coefficients $[\hat {\bm \beta}, \hat {\bm \beta}^K]\in \mathbb R^{2d}$ by solving,
{\small
\setlength{\abovedisplayskip}{6pt}
\setlength{\belowdisplayskip}{\abovedisplayskip}
\setlength{\abovedisplayshortskip}{0pt}
\setlength{\belowdisplayshortskip}{6pt}
\begin{align}
(\hat {\bm \beta}, \hat {\bm \beta}^K) = \arg \min_{(\bm \beta, \bm \beta^K)}& \frac{1}{2m} \|\bm y - \mathbf X \bm \beta - \mathbf {\tilde X} \bm \beta^K\|_2^2 + \alpha (\|\bm \beta\|_1 + \|\bm \beta^K\|_1),
\end{align}
}
where $\hat{\bm \beta}$ and $\hat {\bm \beta}^K$ denote the coefficient vectors corresponding to the original variables and the knockoffs, respectively, and $\alpha$ is a hyperparameter chosen carefully for each distributional setting. The absolute LASSO coefficient difference is taken as the knockoff statistic, {\small $W_j = |\hat {\bm \beta}_j|-|\hat {\bm \beta}_j^K|$}, for each $j\in\{1, \dots, d\}$. We apply \eqref{knockoff} to the statistics $W_j$ at the FDR level $q = 0.1$. We repeat the experiment 500 times for different values of $a$ and report the average FDR and power.
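The selection rule \eqref{knockoff} applied to the $W_j$ is, we assume, the standard knockoff+ threshold of Barber and Cand\`es (the equation itself is stated elsewhere in the paper, so this sketch is our own reconstruction):

```python
import numpy as np

def knockoff_select(W, q=0.1):
    """Knockoff+ selection: pick the smallest t > 0 with
    (1 + #{j: W_j <= -t}) / max(1, #{j: W_j >= t}) <= q,
    then select {j: W_j >= t}."""
    for t in np.sort(np.abs(W[W != 0])):
        fdp_hat = (1 + np.sum(W <= -t)) / max(1, np.sum(W >= t))
        if fdp_hat <= q:
            return np.flatnonzero(W >= t)
    return np.array([], dtype=int)  # no threshold controls the FDR estimate
```

Large positive statistics (original variable beats its knockoff) are selected once the estimated false discovery proportion drops below $q$; if the statistics are all negative, nothing is selected.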
We compare our method to other knockoff generation techniques, namely second-order knockoffs \citep{candes2016panning}, Deep Knockoffs \citep{romano2020deep}, KnockoffGAN \citep{jordon2018knockoffgan} and DDLK \citep{sudarshan2020deep}. For all comparisons, we use publicly available implementations (where available) with their recommended configurations and hyperparameters. For each method, we follow the exact procedure described above to compute the knockoff statistics.
\paragraph{Performance on synthetic data.}
Figure \ref{fig:result} shows the average FDR vs. amplitude and power vs. amplitude curves for the different knockoff generation methods. For the multivariate Gaussian distribution, all methods showcase very similar power and FDR control over the entire amplitude region. For the Gaussian mixture model, KnockoffGAN shows higher power but cannot control the FDR, whereas {\small $\text{sRMMD}$} and the other methods demonstrate comparable power while keeping the FDR below the nominal level. For the multivariate Student's t distribution, second-order and DDLK knockoffs achieve comparably higher power in the low-amplitude region but fail to control the FDR. We also observe that, although KnockoffGAN and Deep Knockoffs control the FDR, they have low power in the small-amplitude region; {\small $\text{sRMMD}$} gains higher power than both in this region while still controlling the FDR. For the sparse Gaussian setting, DDLK gains very high power in the small-amplitude region but has poor control over the FDR, and second-order knockoffs also fail to control the FDR in this case. KnockoffGAN and Deep Knockoffs control the FDR but have less power in the small-amplitude region, whereas {\small $\text{sRMMD}$} achieves higher power with better FDR control in this region.
\paragraph{Evaluation on real data benchmark.}
We apply the proposed knockoff filter to a publicly available metabolomics dataset in order to discover important biomarkers. In the absence of ground truth, we qualitatively analyze the performance by cross-referencing the selected metabolites with published literature. We use a study titled \emph{Longitudinal Metabolomics of the Human Microbiome in Inflammatory Bowel Disease} \citep{lloyd2019multi}, available at the NIH Common Fund's National Metabolomics Data Repository (NMDR) website, the Metabolomics Workbench, \url{https://www.metabolomicsworkbench.org/}, under the project DOI: 10.21228/M82T15. The study concerns Inflammatory Bowel Disease (IBD), which comprises conditions such as ulcerative colitis (UC) and Crohn's disease (CD). We use the \emph{C18 Reverse-Phase negative mode} dataset collected under this study. It contains $546$ samples, each having $91$ metabolites on average. Each sample belongs to one of three classes, CD, UC, or non-IBD, and we assign the response $y = 0$, $1$, or $2$, respectively. Before applying the knockoff filter, we preprocess the dataset in multiple steps: (i) keeping the metabolites that have at least $80\%$ filled values, (ii) imputing missing values using K-nearest neighbours (KNN) \citep{gromski2014influence}, and (iii) standardization. After generating the knockoffs, we apply a Random Forest classifier \citep{trainor2017evaluation} to the augmented matrix and take the difference between the out-of-bag (OOB) \citep{trainor2017evaluation} scores corresponding to the original variables and the knockoffs as the knockoff statistics. Since the generated knockoffs are random, we repeat the whole procedure $100$ times and select those metabolites that appear at least 70 times out of the $100$ instances, setting the FDR level at 0.05. 22 metabolites are found to be significant; among them, 19 have previously been linked to IBD in published literature.
Additional experiments and the list of selected metabolites can be found in Supplementary \ref{supp:additional} and \ref{supp:metalist}.
\section{Conclusion and Future work}
In this paper, we introduced a new statistic called soft rank maximum mean discrepancy (sRMMD) and used it to generate valid knockoffs. We demonstrated through a series of experiments that sRMMD is a valid loss function for generative models. We also showed that knockoffs generated using the proposed method have better power and FDR control on synthetic datasets sampled from Gaussian and non-Gaussian distributional settings. While the proposed approach outperforms several existing knockoff generation methods, it is still computationally expensive compared to MMD. An important direction for future work is to characterize precisely the behaviour of sRMMD w.r.t. the sample size, dimension, and entropy regularizer.
\section{Acknowledgement}
This research was sponsored by the U.S. Army DEVCOM Soldier Center, and was accomplished under Cooperative Agreement Number W911QY-19-2-0003. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Army DEVCOM Soldier Center, or the U.S. Government. The U. S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
We also acknowledge support from the U.S. National Science Foundation under award HDR-1934553 for the Tufts T-TRIPODS Institute. Shuchin Aeron is also supported in part by NSF CCF:1553075, NSF RAISE 1931978, NSF ERC planning 1937057, and AFOSR FA9550-18-1-0465.
\bibliographystyle{abbrvnat}
Bottom-up models of functionally relevant patterns of neural activity provide an explicit link between neuronal dynamics and computation. A prime example of functional activity patterns are propagating bursts of place-cell activities called hippocampal replay, which is critical for memory consolidation.
The sudden and repeated occurrences of these burst states during ongoing neural activity suggest metastable neural circuit dynamics. As metastability has been attributed to noise and/or slow fatigue mechanisms, we propose a concise mesoscopic model which accounts for both. Crucially, our model is bottom-up: it is analytically derived from the dynamics of finite-size networks of Linear-Nonlinear Poisson neurons with short-term synaptic depression. As such, noise is explicitly linked to spiking noise and network size, and fatigue is explicitly linked to synaptic dynamics.
To derive the mesoscopic model, we first consider a homogeneous spiking neural network and follow the temporal coarse-graining approach of Gillespie to obtain a ``chemical Langevin equation'', which can be naturally interpreted as a stochastic neural mass model. The Langevin equation is computationally inexpensive to simulate and enables a thorough study of metastable dynamics in classical setups (population spikes and Up-Down-states dynamics) by means of phase-plane analysis. An extension of the Langevin equation for small network sizes is also presented. The stochastic neural mass model constitutes the basic component of our mesoscopic model for replay. We show that the mesoscopic model faithfully captures the statistical structure of individual replayed trajectories in microscopic simulations and in previously reported experimental data. Moreover,
compared to the deterministic Romani-Tsodyks model of place-cell dynamics, it exhibits a higher level of variability regarding order, direction and timing of replayed trajectories, which seems biologically more plausible and could be functionally desirable.
This variability is the product of a new dynamical regime where metastability emerges from a complex interplay between finite-size fluctuations and local fatigue.
\section*{Author summary}
Cortical and hippocampal areas of rodents and monkeys often exhibit neural activities that are best described by sequences of re-occurring firing-rate patterns, so-called metastable states. Metastable neural population dynamics has been implicated in important sensory and cognitive functions such as neural coding, attention, expectation and decision-making. An intriguing example is hippocampal replay, i.e. short activity waves across place cells during sleep or rest which represent previous animal trajectories and are thought to be critical for memory consolidation. However, a mechanistic understanding of metastable dynamics in terms of neural circuit parameters such as network size and synaptic properties is largely missing. We derive a simple stochastic population model at the mesoscopic scale from an underlying biological neural network with dynamic synapses at the microscopic scale. This "bottom-up" derivation provides a unique link between emergent population dynamics and neural circuit parameters, thus enabling a systematic analysis of how metastability depends on neuron numbers as well as neuronal and synaptic parameters. Using the mesoscopic model, we discover a novel dynamical regime, where replay events are triggered by fluctuations in finite-size neural networks. This fluctuation-driven regime predicts a high level of variability in the occurrence of replay events that could be tested experimentally.
\section*{Introduction}
Metastable dynamics of neural populations is an important concept in computational neuroscience with increasing experimental evidence \cite{LaCFon19,BriYan22}. It is loosely defined as a sequence of recurring, discrete ``states'' of population activity that last much longer than the rapid, jump-like transitions between states (typically hundreds of milliseconds to several seconds). Sequences of metastable states have been frequently observed in cortical and hippocampal areas during task engagement as well as during spontaneous, ongoing activity and have been linked to various sensory and cognitive functions \cite{RabHue08,DurDec08}. These functions include the encoding of sensory stimuli \cite{AbeBer95,MazFon15} and internal representations of expectation \cite{MazLaC19} and attention \cite{EngSte16}. In these studies, the statistical properties of metastable neural activity can often be explained by hidden Markov models with a few latent states \cite{EngSte16,MazFon15}. However, more complex spatio-temporal activity patterns such as sequences of burst activity across hippocampal place cells during periods of both exploration and immobility (``replay of trajectories'') of an animal can also be regarded as metastable activity.
In in-silico studies, metastable dynamics also emerges in networks of excitatory and inhibitory spiking neurons. This is the case for finite-size networks with clustered connectivity \cite{LitDoi12,MazFon15}, spatially-structured networks with slow fatigue processes for hippocampal replay \cite{EckBag22} and even for unstructured random connectivity in the inhibition-dominated regime \cite{TarBru17,KulKno22}. Network models exhibiting metastable dynamics have also been used to explain the stimulus-dependence of cortical variability \cite{LitDoi12}.
The mechanisms of metastable dynamics are often explained using heuristic population, or firing-rate, models.
These mechanisms can be roughly divided into two types: one in which transitions between metastable states are induced by fluctuations and another one in which transitions are induced by the deterministic part of the dynamics. In the first case, noise is essential for metastability because the noiseless dynamics would not exhibit spontaneous transitions. In contrast, in the second case, transitions also occur in the noiseless dynamics, while noise can still be useful to model variability of state durations. An important instance for the first type are multi-attractor models in the presence of noise, such as noisy bistable models for perceptual rivalry \cite{MorRin07,ShpMor09,CaoPas16} and alternating Up-Down states \cite{HolTso06,ErmTer10,JerRox17}. Transitions correspond to noise-induced escapes from the basins of attraction. A popular instance of the second type are transiently stable states governed by a slow fatigue variable such as adaptation or synaptic depression. In these fast-slow systems, rapid transitions occur when quasi-stationary states of the fast subsystem (e.g. population activity) destabilize or vanish as the slow subsystem (fatigue process) evolves on a longer time scale. A prototypical example are relaxation oscillations, i.e. a (noisy) limit-cycle with a strong time-scale separation, used e.g. to model regular alternations between Up and Down states in spontaneous cortical activity \cite{MatSan12,LevBuz19}. A complex example of fatigue-induced metastability is the Romani-Tsodyks ring model for nonlocal events in place cells resembling the hippocampal ``replay'' dynamics \cite{RomTso15}. This deterministic ring model resides in a traveling-wave state (``non-local events'') or in a quiescent state depending on the spatial profile of a slow synaptic depression variable, which leads to a complex spatio-temporal activity pattern. 
We mention that there are also other mechanisms of metastability including noisy excitable dynamics \cite{LevBuz19} and deterministic motions between saddle points \cite{RabHue08}, also referred to as heteroclinic cycles or winnerless competition \cite{SelTsi03}. Looking at empirical data, however, it can be hard to distinguish different mechanisms, especially at high noise levels.
There has been much effort to infer the mechanism underlying metastable dynamics by studying the consistency of experimental data with heuristic population models. For example, in the case of cortical and hippocampal Up and Down states \cite{MatSan12,JerRox17,LevBuz19} and for perceptual bistability \cite{MorRin07,ShpMor09,CaoPas16}, it has been suggested that populations models, where noise-induced transitions are modulated by a slow fatigue variable, are most consistent with the data. An important question that has received relatively little attention is whether such conclusions are also consistent with the underlying circuit properties at the microscopic scale, modeled as networks of spiking neurons with biologically realistic neuronal, synaptic and network properties.
Unfortunately, a clear link between the employed population models and microscopic circuit models is largely missing, and it thus remains unclear how the mechanisms of metastability depend on physiological parameters.
While neuronal and synaptic properties can be accounted for by mean-field models of integrate-and-fire networks \cite{TsoPaw98, MazFon15}, the dependence of metastable dynamics on the number of neurons in the network is poorly understood. This latter aspect is particularly crucial in the context of metastability because fluctuations due to a finite number of neurons have been found to be essential for fluctuation-induced metastability by several detailed simulation studies \cite{LitDoi12,MazFon15} as well as theoretical considerations \cite{Bre10}. The description of these internally generated fluctuations requires population models at the \emph{mesoscopic} scale, where the finite network size is explicitly taken into account \cite{MatGiu02,SchDeg17,SchChi19}.
Previous models for determining the mechanisms of metastability cannot describe this dependence:
In heuristic population models, fluctuations were introduced ad hoc by adding a phenomenological noise term without a link to the network size. In the case of mean-field models, fluctuations are usually not described at all because they vanish in the mean-field limit of infinitely many neurons.
In this contribution, we develop a theoretical framework for mesoscopic population dynamics with slow fatigue that can describe metastable dynamics and links to an underlying microscopic description. To this end, we use a bottom-up approach starting from a finite-size network of linear-nonlinear Poisson (LNP) spiking neurons connected via dynamic synapses undergoing short-term plasticity (STP). From this microscopic model we derive stochastic differential equations for a few mesoscopic variables describing the coarse-grained population dynamics. We focus on STP in the form of short-term synaptic depression as a slow fatigue mechanism because it is a ubiquitous feature of neural networks in the brain \cite{AbbVar97,ZucReg02,CooSch03,HigCon06,OswUrb12} and has been implicated in important functions such as temporal filtering \cite{MerLin10,DroSch13}, multistability \cite{MonHan2012} and working memory \cite{MonBar08}.
Mean-field models of STP \cite{TsoPaw98} have recently gained renewed attention \cite{TahTor20,GasKno21} in the context of the Montbri{\'o}-Paz{\'o}-Roxin theory for quadratic integrate-and-fire neurons \cite{MonPaz15,PieDev19}.
However, in these models the mean-field description of STP is heuristic -- it is not derived from a microscopic model but introduced ad hoc at the population level. More importantly, the models are deterministic corresponding to the limit of infinitely many neurons, and thus cannot explain fluctuation-induced transitions among metastable states in finite-size networks. Recently, we have developed a mesoscopic bottom-up model for finite-size networks with STP, and have demonstrated that the mesoscopic model accurately reproduces the metastable Up-and-Down-states dynamics of the microscopic model \cite{SchGer20}. The mathematical structure of that model has the intricate form of a state-dependent doubly-stochastic point-process driving a system of stochastic differential equations. As such, it is difficult to analyze and it lacks a straightforward, efficient simulation algorithm. However, the mesoscopic theory of \cite{SchGer20} can be used as a starting point to derive a temporally coarse-grained stochastic dynamics in the form of a simple jump-diffusion process. For the case of synaptic depression and large network size, we also present a short direct derivation of the diffusion limit that yields a mesoscopic model in the form of a simple diffusion process.
As we shall show below, our bottom-up modeling framework for mesoscopic population dynamics permits a re-evaluation of existing heuristic models for metastability in terms of an underlying microscopic network model. As a first example, we consider a single population of excitatory neurons with synaptic depression that generates population spikes and can transition between Up and Down states. The corresponding mesoscopic population model is similar to the model by Holcman, Tsodyks and co-workers \cite{BarBao05,HolTso06,DaoDuc15}, which successfully reproduced experimental observations \cite{DaoDuc15}. The important difference is that in our mesoscopic model all parameters are fixed by the microscopic parameters. Thanks to the low-dimensional character of the reduced mesoscopic system, we can apply phase-plane analysis to study the emergence of multiple stable states that become metastable as the network size decreases.
As a second, more complex example, we revisit the Romani-Tsodyks (RT) model for hippocampal replay activity in circular networks of place cells with synaptic depression \cite{RomTso15}. We propose a spiking-neural-network implementation of the original firing-rate model. The corresponding mesoscopic population model with finite-size noise
enables us to shed new light on the mechanisms underlying hippocampal replay in place cells of area CA3 in the hippocampus. In the deterministic (and heuristic) RT model, irregular switching between metastable traveling waves of sequential neural activity and a quiescent state is solely controlled by local synaptic depression as a slow fatigue mechanism \cite{RomTso15}. Yet, it is unclear whether such metastable replay dynamics also occurs in finite-size networks of spiking neurons and whether, in this case, replay sequences are fatigue-induced (like in the RT model) or may also be driven by finite-size fluctuations. Because our model is reduced from a microscopic network with a finite number of neurons, we can interpolate
between a fatigue-induced regime and a novel regime of fluctuation-induced hippocampal replay. We show that these two regimes lead to very distinct statistical predictions, which can be tested experimentally.
The present paper is organized in three main parts. In the first part, we present the mesoscopic bottom-up model
in two variants, a diffusion and a jump-diffusion model. In the second part, we use a single population model to demonstrate the performance of the two variants with respect to network size. In the third part, we turn to the more complex scenario of metastable nonlocal replay events in hippocampal place cells. We compare a novel dynamical regime of fluctuation-induced replay with the deterministic replay dynamics of \cite{RomTso15}. In the Discussion, we explicate biological limitations of the model and possible extensions to address these limitations. We also discuss potential advantages of the novel fluctuation-induced replay dynamics. Finally, in the Methods section, we provide the derivation of the mesoscopic model as well as the details on numerical simulations and statistical procedures.
\section*{Results}
\subsection*{Mesoscopic description of microscopic network dynamics}
\label{sec:micro}
We study the dynamics of a network of $N$ spiking neurons that, on the microscopic level, are modeled as linear-nonlinear-Poisson (LNP) neurons \cite{Chi01,SimPan04,OstBru11,GerKis14} with dynamic synapses \cite{SchGer20,GalLoe20}.
The synaptic dynamics is given by the Tsodyks-Markram model of short-term plasticity (STP) \cite{TsoPaw98}. For simplicity, we focus in the Results part on the special case where the synaptic dynamics corresponds to pure depression \cite{PfiDay09} and the linear filter of the LNP neurons has an exponential impulse response function \cite{OstBru11}. The general theory for the full Tsodyks-Markram model with depression and facilitation as well as the straightforward extension to general linear filters, enabling biologically more realistic neuronal dynamics \cite{OstBru11}, is provided in the Methods part.
For the case of synaptic depression and exponential impulse response function, the LNP-STP model is given by the stochastic differential equations
\begin{subequations}\label{eq:micro}
\begin{align}
\od{h_i}{t}&=\frac{\mu(t)-h_i}{\tau}+\frac{JU_0}{N}\sum_{j=1}^Nx_j(t^-)s_j(t),\label{eq:micro_h}\\
\od{x_i}{t}&=\frac{1-x_i}{\tau_D}-U_0x_i(t^-)s_i(t), \label{eq:micro_x}\\
s_i(t)&=\frac{dn_i(t)}{dt}=\sum_{k}\delta(t-t_k^i),\qquad dn_i(t)\sim\text{Pois}\lreckig{f(h_i(t^-))dt},
\end{align}
\end{subequations}
for $i=1, \dots, N$. Here, $s_i(t)$ is the spike train of neuron $i$ with conditional intensities $\{f(h_i(t^-))\}_{i=1}^N$, $\mu(t)$ represents a common external current mimicking, e.g., feedforward input from other areas, and $\tau$ can be interpreted as the membrane time constant. The synaptic parameters are given by the overall synaptic weight factor $J$, the relative depletion of neurotransmitter by a single transmitted spike $U_0$ and the time scale of synaptic depression $\tau_D$. The variables $h_i$ and $x_j$ can be interpreted as the input potential of neuron $i$ and the availability of synaptic resources at the outgoing synapses of neuron $j$, respectively. The trajectories $h_i(t)$ are càdlàg and $h_i(t^-)$ denotes the left limit (the same holds for $x_i(t)$). It can also be noted that $h_i(t) = h_j(t)$ for all $t\geq 0$ and for all $i$ and $j$ if at time $0$ all the $h_i$ share the same initial condition.
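For concreteness, the microscopic dynamics Eq.~\eqref{eq:micro} can be integrated with a simple Euler scheme in which the per-neuron spike counts in each bin are drawn from a Poisson distribution with rate $f(h_i)\,dt$, and the recurrent input is weighted by the pre-spike resources $x_j(t^-)$. The following Python sketch illustrates this; all parameter values and the default transfer function are illustrative placeholders, not the values used in our simulations:

```python
import numpy as np

def simulate_micro(N=100, T=1.0, dt=1e-3, mu=1.0, tau=0.02,
                   J=1.0, U0=0.2, tau_D=0.25,
                   f=lambda h: 10.0 * np.log1p(np.exp(h - 1.0)), seed=0):
    """Euler scheme for the microscopic LNP-STP network, Eq. (1).
    Per-neuron spike counts per bin: dn_i ~ Pois(f(h_i) dt); the recurrent
    drive uses the pre-spike resources x_j(t^-)."""
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    h = np.zeros(N)            # input potentials h_i
    x = np.ones(N)             # synaptic resources x_j, fully recovered
    h_mean = np.empty(n_steps)
    for k in range(n_steps):
        dn = rng.poisson(f(h) * dt)              # spikes in [t, t+dt)
        rec = (J * U0 / N) * np.sum(x * dn)      # common recurrent input
        h += dt * (mu - h) / tau + rec
        x += dt * (1.0 - x) / tau_D - U0 * x * dn
        np.clip(x, 0.0, 1.0, out=x)              # numerical guard only
        h_mean[k] = h.mean()
    return h_mean
```

Because the recurrent input and the external current are common to all neurons, identical initial conditions keep all $h_i$ identical for all times, consistent with the remark above; heterogeneous initial conditions break this degeneracy.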
\begin{figure}[t]
\centering{
\includegraphics[width=0.9\columnwidth]{figures/fig0new2.pdf}
\caption{{\bf From microscopic to mesoscopic population dynamics.}
(A) Network with microscopic short-term plasticity. Dashed region shows a zoom into a pair of interconnected neurons: presynaptic neuron $1$ sends out an unmodulated spike train to postsynaptic neuron $2$ that receives the spike-train modulated by short-term depression.
(B) Mesoscopic mean-field model with one effective synapse undergoing short-term depression.}
\label{fig0}}
\end{figure}
\subsubsection*{Diffusion model of the mesoscopic dynamics (Gaussian noise)}
Our goal is to derive a mean-field model for the microscopic dynamics Eq.~\eqref{eq:micro} that accounts both for the finite number of neurons as well as for the dynamic synapses undergoing Tsodyks-Markram STP, see Fig.~\ref{fig0}. The mean-field description will be based on the dynamics of the following mesoscopic variables defined as the empirical averages
\begin{equation} \label{eq:empirical}
h(t):=\frac{1}{N}\sum_{i=1}^N h_i(t), \quad x(t):=\frac{1}{N}\sum_{i=1}^N x_i(t), \quad \text{and} \quad Q(t) := \frac{1}{N}\sum_{i=1}^N x_i^2(t).
\end{equation}
The desired dynamics of $h(t), x(t)$ and $Q(t)$ are supposed to no longer depend on (the index $i$ of) individual neurons, so we will approximate terms such as, e.g., the sum $\frac{1}{N}\sum_{i=1}^N x_i(t^-)s_i(t)$, by a diffusion term which only involves the mesoscopic variables.
To this end, we follow the temporal coarse-graining approach by Gillespie \cite{Gil00} for the derivation of a ``chemical Langevin equation'', see the Methods section for a detailed derivation.
In brief, we first use a \textit{macroscopically infinitesimal} time step $\Delta t$ \cite{Gil00} and approximate the coarse-grained sum $\int_t^{t+\Delta t}\frac{1}{N}\sum_{i=1}^N x_i(t^-)s_i(t)\,dt$ by a Gaussian random variable with variance proportional to $Q(t^-)$. In a second step, we derive the dynamics of $Q(t)$, discarding the fluctuations whose effect on $h(t)$ and $x(t)$ is of order $N^{-3/2}$.
The resulting mesoscopic mean-field dynamics is given by the \textit{diffusion model}
\begin{subequations}\label{eq:meso}
\begin{align}
\od{h}{t}&=\frac{\mu(t)-h}{\tau}+JU_0xf(h) + JU_0 \sqrt{\frac{Qf(h)}{N}}\xi(t), \label{eq:diff-model-h}\\
\od{x}{t}&=\frac{1-x}{\tau_D}-U_0xf(h) - U_0\sqrt{\frac{Qf(h)}{N}}\xi(t),\\
\od{Q}{t} &= 2\frac{x - Q}{\tau_D} - U_0(2-U_0)Qf(h),
\end{align}
\end{subequations}
where $\xi(t)$ is a Gaussian white noise with auto-correlation function $\langle\xi(t)\xi(s)\rangle=\delta(t-s)$.
Although it is possible to deduce Eq.~\eqref{eq:meso} from the detailed doubly-stochastic mesoscopic dynamics derived in \cite{SchGer20} (see Methods ``Mesoscopic Tsodyks-Markram model [...]''), the derivation summarized above and presented in Methods ``Diffusion approximation ...'' is much simpler as it relies on a direct application of the diffusion approximation (avoiding the detour via the model presented in \cite{SchGer20}).
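Numerically, the diffusion model Eq.~\eqref{eq:meso} can be integrated with a standard Euler-Maruyama step. The key point, reflected in the sketch below, is that $h$ and $x$ are driven by the \emph{same} white noise $\xi(t)$, so their finite-size fluctuations are perfectly anticorrelated. The Python sketch is minimal and the parameter values in any call are illustrative:

```python
import numpy as np

def step_diffusion(h, x, Q, dt, mu, tau, J, U0, tau_D, N, f, rng):
    """One Euler-Maruyama step of the mesoscopic diffusion model, Eq. (3).
    h and x share the SAME Wiener increment dW, implementing the single
    white noise xi(t); Q evolves deterministically."""
    fh = f(h)
    sigma = np.sqrt(max(Q * fh / N, 0.0))    # noise amplitude sqrt(Q f(h)/N)
    dW = rng.normal(0.0, np.sqrt(dt))        # shared Wiener increment
    h_new = h + dt * ((mu - h) / tau + J * U0 * x * fh) + J * U0 * sigma * dW
    x_new = x + dt * ((1.0 - x) / tau_D - U0 * x * fh) - U0 * sigma * dW
    Q_new = Q + dt * (2.0 * (x - Q) / tau_D - U0 * (2.0 - U0) * Q * fh)
    return h_new, x_new, Q_new
```

Note that the $1/\sqrt{N}$ scaling of $\sigma$ makes the scheme deterministic in the limit $N\to\infty$, recovering the macroscopic dynamics.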
\subsubsection*{Jump-diffusion model of the mesoscopic dynamics (Hybrid noise)}
In large networks, it is plausible to assume that the spike input through a large number of recurrent connections can be approximated by a Gaussian process, and the diffusion model Eq.~\eqref{eq:meso} is valid for sufficiently large $N$.
In smaller networks, by contrast, we may no longer rely on the diffusion approximation since we need to take into account the shot noise character of the spike input. To this end, we start from the mesoscopic model of \cite{SchGer20} and derive a mesoscopic jump-diffusion model with facilitation and depression (see Methods). In this model, the noise takes on a hybrid form combining Poisson shot noise and Gaussian white noise. In the special case of short-term synaptic depression only, the resulting \textit{jump-diffusion model} of the mesoscopic dynamics reads
\begin{subequations}
\label{eq:model-depress-3}
\begin{align}
\od{h}{t}&=\frac{\mu(t)-h}{\tau}+JU_0\Big[x(t^-)A(t)+\sqrt{\frac{\tilde Q f(h)}{N}}\xi_x(t)\Big],\label{eq:jump-diff-depress-h}\\
\od{x}{t}&=\frac{1-x}{\tau_D}-U_0\Big[x(t^-)A(t)+\sqrt{\frac{\tilde Q f(h)}{N}}\xi_x(t)\Big],\label{eq:tilo_x}\\
\od{\tilde Q}{t}&= - \Big[ \frac{2}{\tau_D} + U_0(2-U_0) f(h) \Big] \tilde Q + U_0^2 x^2 f(h).
\end{align}
Here, $\xi_x(t)$ is a Gaussian white noise with auto-correlation function $\langle\xi_x(t)\xi_x(s)\rangle=\delta(t-s)$ and
\begin{equation}
\label{eq:popact}
A(t)=\frac{1}{N}\frac{dn(t)}{dt}=\frac{1}{N}\sum_k\delta(t-t_k),\qquad dn(t)\sim\text{Pois}\left[Nf(h(t^-))dt\right],
\end{equation}
\end{subequations}
is a shot noise. The shot noise $A(t)$ is defined by the counting process $n(t)$ with jump times $t_k$ that occur with conditional intensity $Nf(h(t^-))$. The increment of the counting process $dn(t)$ represents the total number of spikes generated by all neurons in the small time interval $[t,t+dt)$, and $A(t)$ is therefore the empirical population activity. The presence of two different sources of noise in Eq.~\eqref{eq:model-depress-3} can be interpreted as the effect of two components that make up the synaptic input $N^{-1}\sum_ix_i(t^-)s_i(t)$ on the mesoscopic scale: First, a term $\lrrund{N^{-1}\sum_ix_i(t^-)}\cdot\lrrund{N^{-1}\sum_is_i(t)}=x(t^-)A(t)$ that arises if the variability of the weighting factors $x_i$ across synapses is neglected. This term represents the common spiking noise caused by shared spike inputs. Second, a correction term that accounts for the variability of $x_i$, approximated by a Gaussian distribution with variance $\tilde{Q}(t)$ as shown previously \cite{SchGer20}.
Mathematically, the mesoscopic model, Eq.~\eqref{eq:model-depress-3}, is a jump-diffusion process because the shot noise leads to small jumps of order $1/N$ in addition to the diffusive dynamics caused by the Gaussian white noise. The jumps, however, occur at a high rate $Nf(h(t))$ so that in simulations with a coarse-grained time step $\Delta t$, unitary jumps will not be resolved. Instead, the increment of the spike count $\Delta n(t)=n(t+\Delta t)-n(t)$ can be drawn from a Poisson distribution with mean $Nf(h(t))\Delta t$ provided a sufficiently small simulation time step $\Delta t\ll 1/f(h),\tau,\tau_D$.
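The simulation scheme just described can be written down directly: in each bin of size $\Delta t$, the total spike count $\Delta n$ is drawn from $\mathrm{Pois}[Nf(h)\Delta t]$, which gives the integrated population activity $A(t)\Delta t = \Delta n/N$, while the Gaussian correction enters as an ordinary Euler-Maruyama increment. A minimal Python sketch of one such step (parameter values are illustrative):

```python
import numpy as np

def step_jump_diffusion(h, x, Qt, dt, mu, tau, J, U0, tau_D, N, f, rng):
    """One coarse-grained step of the jump-diffusion model, Eq. (4).
    The population spike count dn ~ Pois(N f(h) dt) realizes the shot
    noise A(t); x and h use the pre-jump value x(t^-)."""
    fh = f(h)
    dn = rng.poisson(N * fh * dt)            # total spikes in [t, t+dt)
    A_dt = dn / N                            # integrated population activity
    xi_dt = np.sqrt(max(Qt * fh / N, 0.0)) * rng.normal(0.0, np.sqrt(dt))
    drive = x * A_dt + xi_dt                 # shared hybrid-noise term
    h_new = h + dt * (mu - h) / tau + J * U0 * drive
    x_new = x + dt * (1.0 - x) / tau_D - U0 * drive
    Qt_new = Qt + dt * (-(2.0 / tau_D + U0 * (2.0 - U0) * fh) * Qt
                        + U0**2 * x**2 * fh)
    return h_new, x_new, Qt_new
```

As stated above, this scheme is valid as long as $\Delta t\ll 1/f(h),\tau,\tau_D$, i.e. individual jumps of size $1/N$ need not be resolved.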
We expect that the jump-diffusion model Eq.~\eqref{eq:model-depress-3} remains valid for small network sizes, for which the diffusion model Eq.~\eqref{eq:meso} ceases to provide an accurate description of the microscopic network dynamics Eq.~\eqref{eq:micro}.
In the large $N$-limit, the jump-diffusion model, Eq.~\eqref{eq:model-depress-3}, converges to the diffusion model, Eq.~\eqref{eq:meso}. In fact, by invoking the diffusion approximation, we can replace the shot noise by $A(t) \approx f(h) + \sqrt{f(h)/N}\xi_A(t)$, where $\xi_A(t)$ is an independent Gaussian white noise with auto-correlation function $\langle\xi_A(t)\xi_A(s)\rangle=\delta(t-s)$. As detailed in the Methods section, by combining the two independent noise terms $\xi_x(t)$ and $\xi_A(t)$ into a single noise process and identifying $\tilde Q$ with $Q - x^2$, we recover the diffusion model Eq.~\eqref{eq:meso} for large $N$.
\subsection*{Microscopic vs.\ mesoscopic dynamics of a single population exhibiting metastability}
The mesoscopic descriptions Eqs.~\eqref{eq:meso} and \eqref{eq:model-depress-3} of the full network of $N$ interacting spiking neurons with short-term depression (STD) effectively reduce the high-dimensional microscopic dynamics Eq.~\eqref{eq:micro} to a system of three stochastic differential equations in $h,x$, and $Q$.
In the limit $N\to \infty$, finite-size fluctuations in the mesoscopic dynamics vanish and the variable $Q$ becomes superfluous. The resulting two-dimensional macroscopic dynamics
readily allows for a comprehensive phase-plane analysis (see, e.g., \cite{HolTso06,DaoDuc15}) that reveals the deterministic backbone of the full network dynamics and can therefore yield important insights about it.
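The phase-plane analysis can be automated: in the macroscopic limit, setting the right-hand side of the $x$-equation to zero gives the $x$-nullcline $x^*(h)=1/(1+U_0\tau_D f(h))$, and substituting this into the $h$-nullcline reduces the computation of fixed points to a scalar root search in $h$. A Python sketch using SciPy follows; the grid-based bracketing and the threshold-linear test function in the usage example are illustrative choices:

```python
import numpy as np
from scipy.optimize import brentq

def macroscopic_fixed_points(mu, tau, J, U0, tau_D, f, h_grid):
    """Fixed points of the macroscopic (N -> infinity) limit of Eq. (3).
    On the x-nullcline, x*(h) = 1/(1 + U0 tau_D f(h)); inserting this into
    the h-nullcline leaves a one-dimensional root-finding problem in h."""
    def x_null(h):
        return 1.0 / (1.0 + U0 * tau_D * f(h))
    def g(h):  # residual of the h-nullcline evaluated on the x-nullcline
        return (mu - h) / tau + J * U0 * x_null(h) * f(h)
    vals = np.array([g(h) for h in h_grid])
    roots = []
    for a, b, va, vb in zip(h_grid[:-1], h_grid[1:], vals[:-1], vals[1:]):
        if va * vb < 0.0:                   # sign change brackets a root
            h_star = brentq(g, a, b)
            roots.append((h_star, x_null(h_star)))
    return roots
```

Classifying each fixed point as node, focus, or saddle then follows from the eigenvalues of the $2\times 2$ Jacobian at $(h^*,x^*)$.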
To demonstrate the high accuracy of our mesoscopic description and also its usefulness for studying the effect of finite-size fluctuations on metastable dynamics, we will focus in this Section on two traditional examples of metastability in a single excitatory population ($J>0$) of LNP spiking neurons with STD: population spikes and spontaneous transitions between Up and Down states.
For simplicity, we will assume in the following that the transfer function $f(h)$ has the form
\begin{equation}\label{eq:transfer}
f(h) = r a \ln \big\{ 1 + \exp[ (h-h_0) / a ]\big\}
\end{equation}
with slope parameter $r$, smoothness $a$, and threshold $h_0$.
The transfer function $f(h)$ has an exponential sub-threshold tail and a linear supra-threshold part.
In the limit $a\to 0$, $f(h) = r[h-h_0]^+$ becomes a threshold linear function with slope $r$ and threshold $h_0$. The larger $a > 0$, the smoother the transition at the threshold.
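Numerically, Eq.~\eqref{eq:transfer} is best evaluated in a form that avoids overflow of the exponential for large $h$, e.g. via a numerically stable log-sum-exp. A minimal Python sketch (the default parameter values are placeholders):

```python
import numpy as np

def transfer(h, r=1.0, a=0.5, h0=1.0):
    """Soft-threshold transfer function, Eq. (5): r*a*ln(1 + exp((h-h0)/a)).
    np.logaddexp(0, z) computes ln(1 + e^z) without overflow for large z."""
    return r * a * np.logaddexp(0.0, (h - h0) / a)
```

For $a\to 0$ the function approaches the threshold-linear limit $r[h-h_0]^+$, while larger $a$ smooths the transition at $h_0$.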
\subsubsection*{Population spikes in an excitatory population}
\begin{figure}[!t]
\includegraphics[width=.66\columnwidth]{figures/meta_A}\\
\includegraphics[width=\columnwidth]{figures/meta_B}
\caption{{\bf Population spikes in excitatory populations of finite size.}
(A) Phase-plane analysis of the macroscopic model (Eq.~\eqref{eq:meso} for $N\to\infty$) reveals the backbone of the metastable dynamics due to the proximity of a separatrix (red-dashed) near the unique stable fixed point (red dot = cross-section of the black-dashed nullclines).
Trajectories (blue) of the mesoscopic model reproduce population spikes by following the unstable manifold (orange dotted line) of the saddle fixed point (orange diamond).
Population spikes have variable amplitude and inter-population spike intervals (ISI), see also (B,C).
(D) The mesoscopic models with hybrid noise (jump-diffusion model; blue) and Gaussian noise (diffusion model; orange) accurately capture finite-size fluctuations in the input potential $h$ -- note the logarithmic y-scale -- and population spikes of the microscopic network dynamics (black) of $N=30$ neurons.
(E) Power spectra of the input potential $h$ and (F) ISI distributions coincide for all three models.
(G-I) Same as (D-F) for $N=200$. Statistics are for simulations of length $T_\text{sim} = 100'000$s.}
\label{fig:meta}
\end{figure}
As a first example, we study the emergence of spontaneous bursts of synchronized activity due to finite-size fluctuations and short-term depression (Fig.~\ref{fig:meta}). To this end, we tune the parameters of our model such that the macroscopic dynamics for $N\to\infty$ exhibits a unique stable fixed point (red dot in Fig.~\ref{fig:meta}A) together with a pair of unstable fixed points (orange, green). In the absence of external inputs or internal finite-size fluctuations, the system will remain in the stable, low-activity state forever. This state, however, is excitable: Fluctuations can lead to rapid, transient excursions of the neural trajectory, when the system is kicked across a separatrix (red-dashed curve = stable manifold emanating from the unstable saddle point (orange diamond)), see the blue traces in Fig.~\ref{fig:meta}A.
During an excursion along the unstable manifold of the saddle point (orange-dotted), the input potential $h(t)$, and with it the population firing rate $f(h)$, rapidly increases, which corresponds to a short synchronized burst of activity. The increased firing of spikes leads to a strong suppression of the depression variable $x$, which in turn pulls the firing rate down. Once the depression variable $x(t)$ has recovered sufficiently, finite-size fluctuations can again trigger a synchronized burst of activity (Fig.~\ref{fig:meta}B,C).
These bursts of activity, called population spikes, have been studied theoretically in the context of STP \cite{TsoUzi00,LoeTso02,GigDec15,SchGer20} and have also been observed experimentally \cite{DewZad06,GigDec15}. Here, we complement the existing literature by pinpointing the (finite) network size as a possible mechanism for endogenously generated population spikes without the need for external (noisy) inputs. The $N\to\infty$ limit allowed us to draw important insights from the phase-plane analysis of the underlying deterministic structure, which profoundly shapes the mesoscopic network dynamics when considering smaller network sizes $N < \infty$. In principle, the smaller $N$, the larger the fluctuations and the more frequent the excursions across the separatrix, leading to more population spikes. As can be appreciated in Fig.~\ref{fig:meta}(D,G), our mesoscopic mean-field models -- the diffusion model Eq.~\eqref{eq:meso} with Gaussian noise (orange, dashed traces) and the jump-diffusion model Eq.~\eqref{eq:model-depress-3} with hybrid noise (both Gaussian and Poisson; blue traces) -- accurately capture the microscopic network (black traces) both qualitatively and quantitatively. There is a perfect match between the power spectra (Fig.~\ref{fig:meta}E,H) and between the distributions of the inter-population spike intervals (Fig.~\ref{fig:meta}F,I). The slight deviations of the diffusion model for a network of $N=30$ neurons disappear for a network of $N=200$ neurons: as expected, the diffusion approximation becomes better with increased network size. Remarkably, the jump-diffusion model perfectly matches the microscopic network dynamics even when $N=30$.
\subsubsection*{Up-Down dynamics in an excitatory population}
In a second example, we change the model parameters slightly so that our system now exhibits two co-existing stable fixed points: a high-activity ``Up'' state and a low-activity ``Down'' state. In the macroscopic model, only one of the two states can be realized depending on the initial conditions.
In the mesoscopic models Eq.~\eqref{eq:meso} and \eqref{eq:model-depress-3}, however, finite-size fluctuations lead to irregular transitions between Up and Down states.
An exemplary stochastic trajectory in Fig.~\ref{fig:updown}A starts close to the Down state, but soon gets kicked across the separatrix (red-dashed stable manifold of the (orange) saddle fixed point), from where it follows the (orange-dotted) unstable manifold and undergoes a sharp excursion in phase-space, resembling a population spike as described in the foregoing section.
On its way back to the stable Down state, the trajectory approaches the unstable limit cycle (green dashed) that acts as the boundary of the basin of attraction of the Up state.
Finite-size fluctuations can induce attractor hopping: from the low-activity node (Down state), the trajectory can cross the basin boundary and starts spiraling into the high-activity focus (Up state), until it crosses the basin boundary again and converges towards the low-activity node (Down state), see also Fig.~\ref{fig:updown}(B,C).
The seemingly ongoing oscillations in the Up state are a pure finite-size effect, which will be damped out in the macroscopic model. As an aside, the frequency of the oscillations in the Up state coincides with the imaginary part of the eigenvalue of the high-activity focus, cf.~\cite{DaoDuc15}.
To assess the accuracy of our mesoscopic description of this finite-size induced metastable regime, we performed extensive simulations and compared them to the microscopic network Eq.~\eqref{eq:micro}. In Fig.~\ref{fig:updown}D, we show exemplary time series of the network dynamics for $N=100$ neurons of the jump-diffusion model Eq.~\eqref{eq:model-depress-3} with hybrid noise (blue), of the diffusion model Eq.~\eqref{eq:meso} with Gaussian noise (orange) and of the microscopic model (black). Qualitatively, there is an excellent agreement between micro- and mesoscopic simulations. However, closer inspection of the time series reveals that the Up states in the diffusion model are, on average, of shorter duration than in the microscopic and the jump-diffusion model. This slight shortcoming of the diffusion model also becomes evident when looking at the power spectrum and the bimodal distribution of the input potential $h(t)$ computed over a long simulation of $T_\text{sim} = 100'000$s (Fig.~\ref{fig:updown}E and F, respectively). The jump-diffusion model perfectly captures the full statistics of the microscopic network, but the diffusion model slightly underestimates the time spent in the Up states (see the zoom in Fig.~\ref{fig:updown}F), which also manifests in small deviations of the power spectrum.
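The spectral comparison underlying these figures can be reproduced from any of the three models with a standard estimator. A sketch using Welch's method follows; the segment length \texttt{nperseg} is an arbitrary choice for illustration, not a value taken from our analysis:

```python
import numpy as np
from scipy.signal import welch

def input_potential_spectrum(h_trace, dt, nperseg=4096):
    """Power spectral density of the input potential h(t) via Welch's
    method; the mean is removed so the spectrum reflects fluctuations."""
    freqs, psd = welch(h_trace - np.mean(h_trace), fs=1.0 / dt,
                       nperseg=min(nperseg, len(h_trace)))
    return freqs, psd
```

Applied to long stationary traces of the microscopic and the two mesoscopic models, such estimates make deviations like those discussed above directly visible.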
To recap, the finite-size induced Up-Down dynamics is very well captured in our mesoscopic description, with excellent agreement between the jump-diffusion model and the microscopic network and only slight deviations of the diffusion model from the true network dynamics for reasonably small network sizes of $N=100$ neurons. As the diffusion approximation requires $N$ to be large, one could expect that the performance of the diffusion model will increase for larger networks. At the same time, however, finite-size fluctuations will become smaller in amplitude, making attractor hopping between Up and Down states more difficult and less frequent. Moreover, as can be seen in Fig.~\ref{fig:updown}A, the unstable manifold of the saddle fixed point (orange dotted line) leads the neural trajectory close to the basin boundary (green dashed) of the Up state only in a small region of the phase space. In this region, fluctuations need to perturb the neural trajectory in a particular direction so that it can enter (and then also remain within) the basin of attraction of the Up state. In our case, only the jump-diffusion model recapitulates the correct fluctuations, whereas noise in the diffusion model is too diffusive. This discrepancy between Eqs.~\eqref{eq:meso} and \eqref{eq:model-depress-3} can already be anticipated from their $Q$- and $\tilde Q$-dynamics, respectively, which directly influence the finite-size fluctuations. In fact, the steady state profiles of $Q$ and $\tilde Q$ predict that the fluctuations are strongest for intermediate firing rates $f(h)$ and depression levels $0 < x < 1$, that is, right in the aforementioned critical region of the phase space, where a delicate balance between Poisson and Gaussian noise is important to recover the microscopic network dynamics. 
Consequently, the accuracy of the arguably simpler diffusion model Eq.~\eqref{eq:meso}, and hence its choice over the more complex jump-diffusion model Eq.~\eqref{eq:model-depress-3}, to describe the microscopic network dynamics depends not only on the network size $N$, but also on the dynamical regime under investigation.
\begin{figure}[!t]
\includegraphics[width=.66\columnwidth]{figures/updown_A}\\
\includegraphics[width=\columnwidth]{figures/updown_B_logy.png}\\[1em]
\caption{{\bf Up-down dynamics due to finite-size fluctuations.} The mesoscopic model reproduces noisy bistable population dynamics. (A) Phase-plane analysis of the macroscopic dynamics (Eq.~\eqref{eq:meso} for $N\to\infty$) reveals two stable fixed points (red): a high-activity focus representing the Up state and a low-activity node representing the Down state of the system. From the saddle fixed point (orange diamond), an unstable (orange dotted line) and a stable manifold (red dashed line) emerge. The latter acts as a separatrix -- trajectories (blue curve) starting from above make an excursion around the unstable limit cycle (green dashed) and converge towards the Down state. Finite-size fluctuations can make the trajectory cross the limit cycle into the basin of attraction of the Up state. (B,C) Stochastic trajectory of the mesoscopic dynamics \eqref{eq:meso} with $N=100$ transitioning between Down and Up states.
(D) The mesoscopic models with hybrid noise (jump-diffusion model; blue) and Gaussian noise (diffusion model; orange) qualitatively capture Up-Down-dynamics of the microscopic network (black).
(E) Power spectrum and (F) histogram of input potential $h$ over simulation of length $T_\text{sim} = 100'000$s.}
\label{fig:updown}
\end{figure}
Our study of the Up-Down-dynamics has been guided by the work of Holcman, Tsodyks and co-workers \cite{BarBao05,HolTso06}, who considered a firing rate model with external stochastic input as a necessary ingredient to realize the metastable dynamics and irregular transitions between Up and Down states in a network with STD, which was supported by experimental observations in \cite{DaoDuc15}.
The Up and Down transitions described by our mesoscopic model in Fig.~\ref{fig:updown}(A-C) solely stem from internally generated finite-size fluctuations and, importantly, no external noise is needed.
In addition, by changing the network size, the mesoscopic model can explain certain dynamical features that Holcman and Tsodyks ascribed to long-term synaptic plasticity.
Specifically, in \cite{HolTso06} Holcman and Tsodyks attributed deviations from typical Up-Down dynamics to external stimulation (by changing the input parameter $\mu$) or to long-term synaptic plasticity (by changing the recurrent coupling strength $J$).
A depolarization injection current (larger $\mu$) had a similar effect as long-term potentiation (LTP; stronger recurrent coupling $J$), which led to longer Up states.
On the other hand, a hyperpolarization injection current (smaller $\mu$), or similarly long-term depression (LTD; smaller $J$), led to shorter Up states and more frequent population spikes.
While our mesoscopic description predicts analogous behavior of the microscopic network when changing $\mu$ and/or $J$, it also predicts that similar effects can be realized by varying the network size $N$. Performing simulations with various network sizes between $N=50$ and $N=150$ while keeping the other parameters unchanged, we found that for $N=100$, metastable activity featuring population spikes is interspersed with Up states that last on average $10.6$~s (mean taken over all Up states that last at least $1$~s), see Fig.~\ref{fig:updown}D. For smaller networks with $N=50$ (simulations not shown), population spikes become more frequent and Up states become shorter (mean duration $3.1$~s): Over a $10'000$~s simulation, Up states followed population spikes in $16.9\%$ of all $2249$ cases for $N=50$, whereas for $N=100$ this only occurred in $15.6\%$ of all $1218$ cases. Larger networks with $N=150$, by contrast, exhibit significantly longer Up states (mean duration $41.9$~s) and even less frequent population spikes (see Fig.~\ref{fig:supp1} in the Supporting Information). Computing the corresponding histograms of the input potential revealed a second peak at around $5.5$~mV only for sufficiently large network sizes (as in Fig.~\ref{fig:updown}F); this peak coincides with the Up state in Fig.~\ref{fig:updown}A. Furthermore, the fluctuation-driven oscillations in the long-lasting Up states (Fig.~\ref{fig:updown}B,C for $N=100$) become visible in the power spectrum for $N=150$ as another peak at around $1.5$~Hz emerges. This frequency corresponds to the imaginary part of the eigenvalues of the stable focus ($\lambda \approx (-1.54 + 9.24i)$~Hz hence $\mathrm{Im}(\lambda)/(2\pi) \approx 1.5$~Hz).
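The Up-state statistics quoted above (mean durations, counting only states lasting at least $1$~s) can be extracted from a simulated trace of $h(t)$ by simple threshold crossing. The following Python sketch is a hypothetical detector; the threshold value and the minimum duration are analysis choices, not model parameters:

```python
import numpy as np

def up_state_durations(h_trace, dt, threshold, min_duration=1.0):
    """Durations (in seconds) of contiguous runs with h(t) > threshold,
    keeping only runs that last at least `min_duration` seconds."""
    # pad with False so every Up state has both a start and an end
    above = np.concatenate(([False], h_trace > threshold, [False]))
    d = np.diff(above.astype(int))
    starts = np.flatnonzero(d == 1)     # Down -> Up crossings
    ends = np.flatnonzero(d == -1)      # Up -> Down crossings
    durations = (ends - starts) * dt
    return durations[durations >= min_duration]
```

In practice, the threshold would be placed between the two peaks of the bimodal histogram of $h$, e.g. between the Down-state peak and the Up-state peak near $5.5$~mV.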
We can conclude that the network size critically affects the Up-Down dynamics. One may wonder whether dynamically changing the network size by recruiting more or fewer neurons could be used to control Up and Down states in biology. One needs to keep in mind, however, that when varying the number $N$ of neurons in our model, we have simultaneously rescaled the synaptic weights in proportion to $1/N$. Although this specific change of parameters is useful for the theoretical analysis, a corresponding biological implementation is difficult to conceive.
In any case, for any given network size and synaptic weights, our mesoscopic mean-field models Eq.~\eqref{eq:meso} and \eqref{eq:model-depress-3} provide accurate descriptions of the microscopic network with short-term synaptic depression and therefore allow for a systematic analysis of how finite-size fluctuations contribute to and shape Up- and Down-states dynamics. Thus, in principle, analyzing the dependence on network size with fixed synaptic weights also seems to be feasible.
\subsection*{Mesoscopic model for hippocampal replays}
We now turn to a more complex biological example for metastability in neural circuits: the spontaneous replay of activity sequences across hippocampal place cells \cite{Buz15,Fos17}. Sequential activation patterns of place cells have been widely observed in experiments when an animal explores its environment \cite{OkeDos71,Oke76,HarCol09,MabAck18} and have been related to neural representations of animal trajectories. Distinct sequences of place cell activation show up when entering novel environments. Hence, sequential activation patterns during exploration provide unique signatures for each environment and may subserve navigation and spatial learning \cite{KniKud95,MosKro08,HarLev14,TheRov18}. Presumably, the animal forms an internal representation, or map, of the corresponding environment during exploration through such sequential activation \cite{WuFos14} that can later be replayed spontaneously, i.e. in the absence of sensory input---a feature that is believed to contribute to memory consolidation and retrieval \cite{DerMos10,DupOne10,Pfe20} as well as to route planning \cite{PfeFos13,OlaBus18}.
The spontaneous replay occurs within burst-like sharp-wave/ripples (SWRs) during quiet wakefulness \cite{FosWil06,KarFra09,CarJad11} and sleep \cite{WilMcN94,LeeWil02}, typically has a much faster, compressed time scale \cite{DavKlo09,FosWil06,LeeWil02} and the replayed trajectories can either be in the originally experienced order or backwards \cite{DraTon11}. Spontaneous replay events appear and disappear abruptly and repeatedly, and can therefore be regarded as metastable states separated by states of low activity.
In previous approaches to model metastable sequential activation patterns, such as in hippocampal replay, it has been difficult to accommodate both the capability to endogenously generate sequential activity \cite{SomKan86,Kle86,PerBru20} and trial-to-trial temporal variability \cite{LitDoi12,MazFon15}. Successful candidate mechanisms to account for both characteristics have been implemented with the help of firing rate models. Recanatesi et al. \cite{RecPer22} proposed a two-area mesoscale attractor network very much in the spirit of the winnerless competition model by Seliger, Tsimring and Rabinovich \cite{SelTsi03}, in which the combination of asymmetric synaptic connectivity, arising from reciprocal coupling between fast and slow systems, with stochastic synaptic efficacy is crucial for the generation of sequences.
Alternatively, Romani and Tsodyks \cite{RomTso15} proposed a fully deterministic mechanism for hippocampal replay based on short-term depression (STD) and without the need for asymmetric connectivity, see also \cite{TheRov18}: A ring-attractor network model with symmetric synaptic connectivity with local excitation and long-range inhibition exhibits multistability between a global quiescent state and various spatially localized bump states \cite{Ama77,BenBar95,BurFie12}. STD, as a slow fatigue mechanism, destabilizes these bumps and gives rise to traveling wave states~\cite{YorRos09}. In combination with the high-dimensional character of the network, the resulting dynamics appears effectively stochastic and spontaneously switches between quiescence and metastable traveling waves.
Both the approach by Recanatesi et al. \cite{RecPer22} and that by Romani and Tsodyks \cite{RomTso15} rely on heuristic firing-rate models, and as such both suffer from the limitations already discussed in the Introduction. In particular, neither allows for a systematic investigation of finite-size effects on metastable activity.
In addition, it is unclear to what extent the deterministic Romani-Tsodyks model describes neural variability. For these reasons, we use our mesoscopic theory to construct a stochastic ring-attractor network model from a microscale model, and thereby provide an alternative mechanistic description of hippocampal replay with a direct link to microscopic networks of spiking neurons.
\subsubsection*{Microscopic and mesoscopic multi-population model of place cells}
We aim for a mesoscopic description of place cells in area CA3 of the hippocampus.
Following Romani and Tsodyks \cite{RomTso15}, we consider a network of neuronal populations, where each population is a group of neurons with highly overlapping place fields. We assume that the full map of the environment is covered by in total $M$ populations each containing $N$ neurons.
The activity of an individual neuron $j \in \{1,\dots,N\}$ of a given population $\alpha \in \{1,\dots,M\}$ is described by its spike train $s_j^\alpha(t) = \sum_k \delta(t-t_{j,k}^\alpha)$ associated with the spike times $\{t_{j,k}^\alpha\}$. Spike trains are modeled as stochastic point processes with intensity $r^\alpha(t) = f\big( h^\alpha(t^-) \big)$, where the input potentials $h^\alpha$ (identical for all neurons in population $\alpha$) are given by the following neuronal dynamics with STD:
\begin{subequations}\label{eq:ring_micro}
\begin{align}
\od{h^\alpha}{t}&=\frac{\mu^\alpha-h^\alpha}{\tau}+\frac{1}{M}\sum_{\beta=1}^M \frac{J_{\alpha\beta}}{N}\sum_{j=1}^N U_0x_j^\beta(t^-)s_j^\beta(t),\label{eq:ring_micro_a}\\
\od{x_j^\alpha}{t}&=\frac{1-x_j^\alpha}{\tau_D}-U_0x_j^\alpha(t^-)s_j^\alpha(t).
\end{align}
\end{subequations}
Here, the input potential $h^\alpha(t)$ integrates the external input $\mu^\alpha$ (common to all neurons in population $\alpha$) and the recurrent input. The latter consists of contributions from the neurons in the same population but also from all the other populations $\beta\neq \alpha$, weighted with a synaptic strength $J_{\alpha\beta}$ that depends on the distance between the place fields of the corresponding populations (as detailed below).
The resulting recurrent connectivity of the network with weights $J_{\alpha\beta}$ is assumed to encode the internal representation of one (or multiple) environment(s) that the animal has explored recently.
It should be noted that the recurrent weights of an internal map can also be ``learnt'' via spike-timing-dependent synaptic plasticity (STDP) during active exploration of the environment, see, e.g., \cite{TheRov18,EckBag22}. Here, however, we assume for simplicity that the animal has already internalized the relevant environments and that the corresponding internal maps are hardwired (at least on the relevant time scale) within the synaptic connectivity matrix $\{J_{\alpha\beta}\}_{\alpha,\beta}$ in a Hopfield-like manner \cite{Hop82}.
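For illustration, one Euler time step of the microscopic dynamics Eq.~\eqref{eq:ring_micro} can be sketched in a few lines of Python. This is a minimal sketch under our own simplifying assumptions: spikes are drawn as Bernoulli events with probability $f(h^\alpha)\,\mathrm{d}t$, the exponential transfer function and all parameter values are illustrative, and the function name is ours.

```python
import numpy as np

def micro_step(h, x, mu, J, dt, tau=0.01, tau_D=0.5, U0=0.2,
               f=lambda h: 10.0 * np.exp(h), rng=None):
    """One Euler step of the microscopic network Eq. (ring_micro).

    h : (M,) input potentials;  x : (M, N) per-neuron depression variables.
    Spikes are sampled as Bernoulli events with probability f(h^alpha) * dt.
    """
    rng = rng or np.random.default_rng()
    M, N = x.shape
    spikes = rng.random((M, N)) < f(h)[:, None] * dt   # s_j^alpha(t) dt
    # population-averaged synaptic drive (U0/N) * sum_j x_j s_j
    rec = U0 * np.sum(x * spikes, axis=1) / N
    h_new = h + dt * (mu - h) / tau + (J @ rec) / M
    x_new = x + dt * (1.0 - x) / tau_D - U0 * x * spikes
    return h_new, x_new, spikes
```

Iterating this step over many time steps reproduces the microscopic simulations; for small `dt` the Bernoulli sampling approximates the Poisson intensity $f(h^\alpha(t^-))$.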
Analogously to the Results subsection ``Diffusion model for mesoscopic dynamics'', we can reduce the microscopic dynamics Eq.~\eqref{eq:ring_micro} to a mesoscopic mean-field model.
As before, we introduce the mesoscopic quantities $x^\alpha(t)$ and $Q^\alpha(t)$ that correspond to the first and second moment, respectively, of the depression variables $x_i^\alpha(t)$ for population $\alpha \in \{1,\dots,M\}$.
We then obtain the mesoscopic dynamics
\begin{subequations}\label{eq:ring_meso}
\begin{align}
\od{h^\alpha}{t}&=\frac{\mu^\alpha-h^\alpha}{\tau}+\frac{1}{M}\sum_{\beta=1}^M J_{\alpha\beta}U_0 \Big[ x^\beta f\big(h^\beta\big) + \sqrt{\frac{Q^\beta f(h^\beta)}{N}} \xi^\beta(t) \Big],\label{eq:ring_meso_h}\\
\od{x^\alpha}{t}&=\frac{1-x^\alpha}{\tau_D}-U_0\Big[ x^\alpha f\big(h^\alpha\big) + \sqrt{\frac{Q^\alpha f(h^\alpha)}{N}} \xi^\alpha(t) \Big],\\
\od{Q^\alpha}{t}&=2\frac{x^\alpha-Q^\alpha}{\tau_D}-U_0(2-U_0)Q^\alpha f(h^\alpha),
\end{align}
\end{subequations}
with Gaussian white noises $\xi^\alpha(t)$ obeying $\langle \xi^\alpha(t) \rangle = 0$ and $\langle \xi^\alpha(t)\xi^\beta(s) \rangle = \delta_{\alpha,\beta}\delta(t-s)$ for all $\alpha,\beta \in \{1,\dots,M\}$.
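For concreteness, the mesoscopic SDEs Eq.~\eqref{eq:ring_meso} can be integrated with a standard Euler--Maruyama scheme. The sketch below is illustrative only: the exponential transfer function, the parameter values, the initial conditions, and the clipping of $x$ to $[0,1]$ are our assumptions for this sketch, not the settings of the simulations reported here.

```python
import numpy as np

def simulate_meso_ring(J, mu, T, dt, tau=0.01, tau_D=0.5, U0=0.2, N=50,
                       f=lambda h: 10.0 * np.exp(h), seed=0):
    """Euler-Maruyama integration of the mesoscopic model Eq. (ring_meso).

    J  : (M, M) synaptic weight matrix J_{alpha beta}
    mu : (M,) external inputs mu^alpha
    Returns trajectories h(t) and x(t), each of shape (steps, M).
    """
    rng = np.random.default_rng(seed)
    M = len(mu)
    steps = int(T / dt)
    h = np.array(mu, dtype=float)   # start at resting input
    x = np.ones(M)                  # fully recovered synapses
    Q = np.ones(M)                  # second moment <x_i^2> = 1 initially
    H, X = np.empty((steps, M)), np.empty((steps, M))
    for t in range(steps):
        rate = f(h)
        xi = rng.standard_normal(M) / np.sqrt(dt)   # white noise, so dt*xi ~ sqrt(dt)
        # noisy synaptic drive U0 * [x f(h) + sqrt(Q f(h)/N) xi], shared by h and x
        drive = U0 * (x * rate + np.sqrt(np.maximum(Q * rate, 0.0) / N) * xi)
        dh = (mu - h) / tau + (J @ drive) / M
        dx = (1.0 - x) / tau_D - drive
        dQ = 2.0 * (x - Q) / tau_D - U0 * (2.0 - U0) * Q * rate
        h, x, Q = h + dt * dh, x + dt * dx, Q + dt * dQ
        x = np.clip(x, 0.0, 1.0)    # numerical safeguard for the Euler scheme
        Q = np.maximum(Q, 0.0)
        H[t], X[t] = h, x
    return H, X
```

Note that the same noise realization $\xi^\alpha$ enters both the $h$ and $x$ updates, mirroring the shared spike-train origin of the fluctuations in Eq.~\eqref{eq:ring_meso}.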
We only present the mesoscopic ring-attractor network Eq.~\eqref{eq:ring_meso} under the diffusion approximation, but remark that it can readily be extended to a jump-diffusion model as in Eq.~\eqref{eq:model-depress-3}.
In the following, we use the simpler diffusion model because it already faithfully reproduces microscopic simulations and is sufficient to study the mechanisms of some recent experimental observations. For one, there is a need for stochastic network models of hippocampal replay as replay patterns resemble Brownian diffusion \cite{SteBar19} or even super-diffusion \cite{KraDru22}, and interburst intervals exhibit significant variability. Moreover, replay episodes do not always draw a smooth continuous path, but often follow a jumpy, discontinuous trajectory \cite{PfeFos15,DenGil21}.
\subsubsection*{A circular environment}
\begin{figure}[p]
\begin{adjustwidth}{-1.0in}{-1in}
\centering{
\includegraphics[width=7.5in]{figures/1env_A.pdf}
}
\end{adjustwidth}
\captionsetup{width=7.5in}
\caption{{\bf Hippocampal replay in micro- and mesoscopic ring-attractor network model.}
(A) Ring-attractor model of $M$ population units of $N$ LNP spiking neurons with STD. Synaptic weights $J_{\alpha\beta}$ are excitatory for units with nearby place field positions $\theta_\alpha$ and inhibitory at longer distances, see the coupling function on the right.
(B) Mesoscopic and (C) microscopic network simulations reveal (i) spontaneous bursts of the averaged activity, resembling SWRs, during which replay patterns evolve as (ii) metastable traveling waves, or nonlocal replay events (NLE), along the circular environment---the expected activity $r_j = f(h_j)$ at location $\theta_j=2\pi j/M$, $j=1,\dots,M$ is color-coded.
Statistics of the (D) mesoscopic and (E) microscopic simulations perfectly match each other with respect to: (i) the distribution of event duration, (ii) the correlation between the number of peaks per burst and its duration, (iii) the correlation between the length of the traveled path during an event and its duration, as well as (iv) the distribution of average bump speed during an event, computed from events with more than one peak; see Methods for more details.
}
\label{fig:1Env}
\end{figure}
To begin with, we consider a single circular environment and assume that, after exploration, the animal has learnt an internal representation (map) of the corresponding environment, which is encoded in the synaptic connectivity of the place cells in hippocampal area CA3. Accordingly, we assign to all neurons in population $\alpha\in\{1,\dots,M\}$
a place field at angle $\theta_\alpha = 2\pi \alpha/M$, so that the place field locations are equally spaced on a ring, see Fig.~\ref{fig:1Env}A.
The synaptic strength $J_{\alpha\beta}$ depends on the distance between locations according to
\begin{equation}
J_{\alpha\beta} = J_1 \cos( \theta_\alpha-\theta_\beta ) - J_0 = J_1 \cos( 2\pi (\alpha-\beta)/M ) - J_0,
\end{equation}
where $J_1$ scales the strength of map-specific interactions and $J_0$ corresponds to uniform feedback inhibition.
For $J_1 > J_0 \ge 0$, populations with adjacent place fields excite each other, whereas populations with place fields far apart from each other are inhibitory.
This form of symmetric interaction is known to generate spatially coherent activity, leading to so-called bump attractors \cite{Ama77,BenBar95,TsoSej95,HanSom98}.
STD has been shown to destabilize stationary bumps, which then move around the environment \cite{YorRos09}, creating burst-like nonlocal traveling wave events (NLEs) that resemble hippocampal replay patterns on a population level.
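As a quick illustration, the cosine connectivity defined above can be constructed and sanity-checked in a few lines; the values of $M$, $J_0$ and $J_1$ below are illustrative, not those used in our simulations.

```python
import numpy as np

# Ring connectivity J_{ab} = J1 * cos(2*pi*(a-b)/M) - J0: local excitation,
# long-range inhibition for J1 > J0 >= 0 (illustrative parameter values).
M, J1, J0 = 100, 5.0, 1.0
alpha = np.arange(M)
J = J1 * np.cos(2 * np.pi * (alpha[:, None] - alpha[None, :]) / M) - J0

assert np.allclose(J, J.T)                # symmetric interactions
assert J[0, 1] > 0 and J[0, M // 2] < 0   # neighbors excite, opposite side inhibits
```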
In our network simulations, we consider $M=100$ populations around the ring with $N=50$ neurons each. The external input is homogeneous, i.e. $\mu^\alpha=\mu$ for all $\alpha=1,\dotsc,M$. As shown in Fig.~\ref{fig:1Env}Bi, the network spontaneously generates a burst of elevated activity with one or more peaks that lasts up to a few hundred milliseconds and is then terminated due to STD. Another burst is generated only after a recovery period---the so-called interburst interval (IBI).
The intermittently elevated activity on the ring is strongly localized because of the synaptic connectivity. STD makes the localized bump move to neighboring place field locations and, thus, gives rise to an NLE, Fig.~\ref{fig:1Env}Bii; we define an NLE (=nonlocal replay event) as a burst of the averaged activity with more than one peak, see also Methods.
As the elevated activity locally alters the spatial profile of the slow synaptic depression variable, highly irregular activity patterns emerge with bursts that start at seemingly random locations, travel in either backward or forward direction, and vary both in duration and distance (note, however, that a single wave of activity never travels further than once around the circle).
This type of metastable dynamics strongly resembles the behavior observed by Romani and Tsodyks in their deterministic firing rate model \cite{RomTso15}, hereafter referred to as the RT model, but with the important difference that here the emergence of metastability is a finite-size effect---increasing the population size from $N=50$ to $N=5000$ renders any initiation of burst-like activity impossible (see the orange curve in Fig.~\ref{fig:1Env}Bi). Put differently, our model reveals a novel dynamical regime, in which metastable burst states are fluctuation-driven, in contrast to the RT model, where bursts are induced deterministically when depression slowly abates.
Comparing the simulations of our mesoscopic model with the microscopic network, we find an excellent agreement both from a qualitative (Fig.~\ref{fig:1Env}B,C) and from a quantitative perspective (Fig.~\ref{fig:1Env}D,E), where we follow the statistical analysis of \cite{RomTso15}, see also Methods.
For comparison, we considered two deterministic models with fatigue-induced bursts: (i) the original RT model \cite{RomTso15}, and (ii) our model with $N\to\infty$ and slightly increased external drive ($\mu=-0.9$ instead of $\mu=-1$). This second model, which will be referred to as the \emph{macroscopic} model in the following, essentially reproduces the dynamics of the original RT model \cite{RomTso15}. We included this second model in the comparison (see also Fig.~\ref{fig:supp2} in the Supporting Information) because the macroscopic limit $N\to\infty$ of our mesoscopic model yields slightly different model equations compared to the heuristic RT model.
A closer inspection of the statistical properties of NLEs and IBIs reveals that mesoscopic and microscopic simulations not only match almost perfectly, but the fluctuation-induced bursts may also have more biologically realistic properties than the fatigue-induced regime.
For example, experiments in rodents exposed to long linear tracks \cite{DavKlo09,GupvdM10} reported an estimated rate of approximately $10$ SWRs/s. Assuming that each peak in the average activity corresponds to a SWR, the micro- and mesoscopic models closely match the experimental observations with $9.3$ SWRs/s, in contrast to less than $8$ SWRs/s in the macroscopic and original RT models, see Methods for more details.
Furthermore, our fluctuation-driven model reveals larger temporal variability with a unimodal IBI distribution (Fig.~\ref{fig:1EnvB}A) and low serial correlations of the event speeds (Fig.~\ref{fig:1EnvB}B).
In marked contrast, in the macroscopic and RT models of fatigue-induced bursts, the IBI distributions are bimodal, and thus clearly deviate from the exponential distribution observed experimentally \cite{AxmElg08,SchKal14,Buz15} and expected for a Poisson process. Our fluctuation-driven model, by contrast, shows a tendency towards exponential IBI distributions with longer tails, in particular a larger mean ($0.65$~s vs. $0.29$~s) and a higher coefficient of variation (CV $= 0.85$ vs. $0.79$).
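The IBI statistics reported above reduce to a simple computation on the burst onset times; the sketch below (the function name and example data are ours) shows how mean and coefficient of variation are obtained, with CV $=1$ expected for a Poisson process.

```python
import numpy as np

def ibi_stats(burst_onsets):
    """Mean and coefficient of variation (CV = std/mean) of interburst
    intervals, computed from an array of burst onset times in seconds."""
    ibi = np.diff(np.sort(np.asarray(burst_onsets, dtype=float)))
    mean = ibi.mean()
    cv = ibi.std() / mean   # CV = 1 for exponentially distributed IBIs
    return mean, cv

# Illustrative example: a perfectly regular burst train has CV = 0.
mean, cv = ibi_stats([0.0, 0.5, 1.0, 1.5, 2.0])
assert np.isclose(mean, 0.5) and np.isclose(cv, 0.0)
```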
Moreover, the macroscopic and RT models exhibit strong correlations between forward and backward replay events as seen in the serial correlations of the event speed (Fig.~\ref{fig:1EnvB}Bii). The alternating structure of the serial correlation coefficient with strong anti-correlations at lag $1$ means that forward and backward motion alternate almost perfectly. In contrast, the motion directions in the sequence of NLEs in our fluctuation-driven model are almost uncorrelated (Fig.~\ref{fig:1EnvB}Bi).
Another difference to the deterministic models is that the onset location of the fluctuation-induced NLEs is independent of the offset location of the preceding event. By contrast, in the deterministic fatigue-induced (macroscopic and RT) models, the activity bursts start at the location where the slow depression variable has had most time to recover, leading to more regular event patterns (see also \cite{RomTso15} and Fig.~\ref{fig:supp2}).
On shorter time scales, all models (micro-, meso-, macroscopic and RT) exhibit the experimentally observed discontinuous nature of replay events \cite{PfeFos15}. In Fig.~\ref{fig:1EnvB}Ci, we zoom into one exemplary NLE of around $350$~ms, during which a metastable wave travels around the ring in backward direction. The place field activity (color coded) varies markedly in location and intensity. Binning activity per location in moving $50$~ms windows, we estimate the animal's (hypothetical) position along the ring by applying the population vector average (PVA; \cite{KimRou17}), see Fig.~\ref{fig:1EnvB}Cii. The decoded trajectory (black dots) does not follow a smooth path; rather, the step sizes vary irregularly in length (Fig.~\ref{fig:1EnvB}Ciii). In Fig.~\ref{fig:1EnvB}D, we show the step-size distributions for the four different models, featuring a large number of very short steps and a long tail of larger steps. These broad distributions differ significantly from the narrow step-size distribution that would be expected if the movement trajectory were uniform (thin lines in Fig.~\ref{fig:1EnvB}D). More precisely, the latter distribution describes the variation of the average step sizes (computed for each NLE) across different NLEs, which corresponds to approximating each replay trajectory by a straight line (red-dashed curves in Fig.~\ref{fig:1EnvB}C); see also \cite{PfeFos15} for more details on computing the step-size distributions.
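The PVA decoder \cite{KimRou17} used here amounts to the rate-weighted circular mean of the place field locations. A minimal sketch (the function name and the bump shape are our illustrative choices):

```python
import numpy as np

def pva_decode(rates, thetas):
    """Population vector average (PVA): decode a ring position as the angle
    of the rate-weighted circular mean of the place field locations."""
    z = np.sum(rates * np.exp(1j * thetas))
    return np.angle(z) % (2 * np.pi)

# Illustrative check: a symmetric bump centered at theta0 decodes to theta0.
M = 100
thetas = 2 * np.pi * np.arange(M) / M
theta0 = 2 * np.pi * 0.3
rates = np.exp(np.cos(thetas - theta0))   # von-Mises-shaped bump (arbitrary units)
assert abs(pva_decode(rates, thetas) - theta0) < 1e-6
```

Applying this decoder to activity binned in $50$~ms windows yields the discrete trajectory whose increments define the step-size distribution.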
In conclusion, the mesoscopic model Eq.~\eqref{eq:ring_meso} perfectly recovers the metastable replay dynamics of the microscopic network model Eq.~\eqref{eq:ring_micro}. While our fluctuation-driven model is similar to the deterministic models with respect to the structure of single replay trajectories (Fig.~\ref{fig:1EnvB}C,D), the fluctuation-driven model features unimodal IBI distributions and low serial correlations of motion directions, in marked contrast to the bimodal IBI distribution and the strong serial correlations of the deterministic fatigue-induced replay model. Thus, the IBI distribution and the sequence of motion directions provide experimentally testable predictions that may be useful to disentangle the contributions of deterministic and stochastic sources of metastable hippocampal dynamics.
\begin{figure}[t]
\begin{adjustwidth}{-1.0in}{-1in}
\centering{
\includegraphics[width=7.5in]{figures/1env_Bnew}
}
\end{adjustwidth}
\captionsetup{width=7.5in}
\caption{{\bf Comparison of fluctuation-induced and depression induced hippocampal replay dynamics.}
(A) Interburst-interval distributions and (B) serial correlations of the event speeds of consecutive NLEs for (i) the meso- and microscopic models and (ii) the deterministic models (green: original Romani-Tsodyks model \cite{RomTso15}, purple: macroscopic model in the fatigue-driven regime obtained by setting $N^\alpha\to\infty$ and $\mu=-0.9$ (instead of $\mu=-1$)).
(C) On shorter time scales, all models capture the discontinuous nature of the replayed trajectory: (i) Single NLE of the mesoscopic model. (ii) Place field positions decoded using PVA in bins of $50$~ms (black dots). This ``replayed trajectory'' deviates from a straight line (red-dashed) corresponding to a hypothetical uniform motion. (iii) Increments of the movement trajectory exhibit strongly irregular movement features (black dots) in contrast to the constant increments expected for a straight line (red-dashed).
(D) (i): The distributions of reconstructed step-sizes of the meso- (blue) and microscopic models (gray histogram) coincide and strongly deviate from the narrow distributions of average step sizes (computed for each NLE by fitting straight lines to individual movement trajectories) for the meso- (red) and microscopic models (gray thin line). The average step sizes vary for different NLE's, causing the non-zero width of their distribution. (ii) Similar behavior is observed for the macroscopic model (purple; with increased external input $\mu = -1.0 \mapsto -0.9$) and the Romani-Tsodyks (RT) model (green histogram); red/gray thin lines correspond to average step sizes in the macro/RT-models, respectively. See main text and Methods for more details.
}
\label{fig:1EnvB}
\end{figure}
\subsubsection*{Multiple circular environments}
In a next step, we assume that an animal has internalized multiple environments. The ability to code for spatial locations in multiple environments is considered one of the hallmarks of place cell activity in the hippocampus.
Experiments have shown that, when rodents are exposed to two distinct environments of similar shape, most place cells are active in only one environment. A few place cells, however, are active in both environments; these typically exhibit place fields at different spatial locations, a phenomenon referred to as global remapping \cite{MulKub87,BosMul91}. Replay events can then be observed in both neural maps corresponding to each of the two environments \cite{KarFra09}, see also \cite{DraTon13,GriSch20}.
Following \cite{RomTso15}, we consider $K$ circular environments and store their respective maps within the synaptic connectivity $J_{\alpha,\beta}$ of the network model Eq.~\eqref{eq:ring_micro}. To this end, we endow each population $\alpha$ with a binary vector of selectivities for these $K$ environments, $\zeta_\alpha = (\zeta_\alpha^1,\dots,\zeta_\alpha^K) \in \{0,1\}^K$, where $\zeta_\alpha^k = 1$ indicates that the neurons in population $\alpha$ are selective for environment $k \in \{1,\dots,K\}$ (i.e., the neurons contribute to the encoding of this environment) \cite{RomTso15,SolYou14}.
Otherwise, if $\zeta_\alpha^k=0$, population $\alpha$ is not selective for environment $k$.
Selectivity to particular environments is assigned randomly, but with the constraint that $\sum_{\alpha=1}^M \zeta_\alpha^k = f M$ for each $k$, with $f \in [0,1]$, i.e. exactly $f M$ populations are selective for each environment $k$.
Furthermore, we introduce place field locations $\theta_{\tilde\alpha}^k = 2\pi \tilde\alpha/(fM)$ for each environment $k$ and randomly assign a unique place field angle to each of the $fM$ populations $\tilde\alpha \in \{1,\dots,fM\}$ selective for environment $k$.
We then define the synaptic weights as
\begin{equation}\label{eq:J_3env}
J_{\alpha\beta} = \frac{1}{f} \sum_{k=1}^K J_1 \zeta_\alpha^k \zeta_\beta^k \cos( \theta_\alpha^k - \theta_\beta^k) - J_0,
\end{equation}
where map-specific interactions of strength $J_1$ only occur within environments, and $J_0$ represents uniform feedback inhibition as before \cite{RomTso15}.
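The construction of the selectivity vectors and of the weights in Eq.~\eqref{eq:J_3env} can be sketched as follows; the random seed and the values of $J_0$, $J_1$ are illustrative, and variable names are ours (we write `frac` for the sparseness $f$ to avoid clashing with the transfer function).

```python
import numpy as np

rng = np.random.default_rng(1)
M, K, frac = 300, 3, 0.3        # populations, environments, sparseness f
n_sel = int(frac * M)           # number of selective populations per environment

# Binary selectivities with exactly f*M selective populations per environment,
# and a random, unique place field angle for each selective population.
zeta = np.zeros((M, K))
theta = np.zeros((M, K))
for k in range(K):
    sel = rng.choice(M, size=n_sel, replace=False)
    zeta[sel, k] = 1.0
    theta[sel, k] = 2 * np.pi * rng.permutation(n_sel) / n_sel

# Weights: J_{ab} = (J1/f) * sum_k zeta_a^k zeta_b^k cos(theta_a^k - theta_b^k) - J0
J1, J0 = 5.0, 1.0
J = -J0 * np.ones((M, M))
for k in range(K):
    J += (J1 / frac) * np.outer(zeta[:, k], zeta[:, k]) * np.cos(
        theta[:, k][:, None] - theta[:, k][None, :])
```

Because the cosine is even, the resulting weight matrix is symmetric, so any sequential preferences in the dynamics cannot stem from connectivity asymmetries.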
In our simulations, we use $K=3$, $f=0.3$ and $M=300$, and find parameters of synaptic strength $J_0$ and $J_1$ as well as the homogeneous external input $\mu$ such that replay events are again a pure finite-size effect: When increasing the population size from $N=50$ to $N=500$, the network remains in a quiescent state and bursts of elevated activity no longer occur in our simulation.
As can be seen in Fig.~\ref{fig:3env}, bursts of elevated activity occur spontaneously and with high temporal variability. During these nonlocal replay events (NLEs) a metastable traveling wave state is generated randomly in one of the three stored environments; activity in the other environments is suppressed due to global inhibition. As expected, our meso- and microscopic network simulations show qualitatively very similar behavior (cf.~Fig.~\ref{fig:3env}A and B). The metastable dynamics exhibit high variability with respect to the duration of NLEs and the interburst intervals (Fig.~\ref{fig:3env}E), the length of the traveled path during an NLE within the active environment, as well as the order of environment activation.
In more detail, we statistically analyzed the patterns of sequential activations. For instance, in Fig.~\ref{fig:3env}A the order of environment activation reads 313123122312321. To quantify whether replay of distinct environments, i.e.\ their order of activation, is random or correlated, we first computed the transition probabilities between environments. While the stochastic (meso- and microscopic) models did not show strong preference for any environment transition, in the deterministic (macroscopic and RT) models, preferred transitions were clearly visible (Fig.~\ref{fig:3env}C).
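The transition probabilities between subsequently active environments can be estimated directly from such a label sequence; the sketch below (the function name is ours) uses the example sequence quoted above.

```python
import numpy as np

def transition_matrix(sequence, n_env=3):
    """Empirical transition probabilities P(k -> j) between subsequently
    replayed environments, from a string of environment labels 1..n_env."""
    counts = np.zeros((n_env, n_env))
    for a, b in zip(sequence[:-1], sequence[1:]):
        counts[int(a) - 1, int(b) - 1] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows > 0, rows, 1)   # row-normalize, guard empty rows

# Activation sequence quoted in the text (Fig. 3env A):
P = transition_matrix("313123122312321")
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution
```

For this short example sequence, transitions out of environment 3 go to environment 1 in four of five cases, illustrating how a deterministic preference would show up as strongly non-uniform rows.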
Moreover, we checked whether particular sequences of recalled environments are more probable than others, which may point at some (hidden) deterministic origin of sequence generation and recall. In order to avoid spurious deterministic effects that may be inherited from asymmetries in the synaptic weights, we constructed the selectivity vectors $\zeta_\alpha$ symmetrically and guaranteed that bursts within the environments were equally distributed, see Table~\ref{table:3env} in the Methods section. The deterministic (macroscopic and RT) models, nonetheless, exhibited a strong preference for specific order of environment activation ($3\to2\to1\to3$ in the example on Fig.~\ref{fig:3env}). By contrast, the stochastic models did not show any preference for a particular order, Fig.~\ref{fig:3env}D, which underlines once more the strong variability in the finite-size induced metastable dynamical regime of hippocampal replay.
\begin{figure}[p]
\begin{adjustwidth}{-1.0in}{-1in}
\centering
\includegraphics[width=7.25in]{figures/3env.pdf}
\end{adjustwidth}
\captionsetup{width=7.5in}
\caption{{\bf Spontaneous replay switches between multiple environments.}
In the (A) mesoscopic and (B) microscopic ring-attractor network storing multiple environments, metastable replay dynamics spontaneously emerge due to finite-size fluctuations when decreasing the population size from $N=500$ (orange/red in panels i) to $N=50$ (blue/black) per unit.
Nonlocal replay events (NLEs) occur randomly in exclusively one of three environments, while activity in the respective other two is suppressed.
The resulting activation sequences of replayed environments---in (A) the activation sequence reads 313123122312321---are analyzed with respect to (C) the transition probabilities between subsequently active environments and (D) sequential activation patterns.
In the meso- and microscopic models, transitions from environment $k$ to $j$ are equally likely for all pairs $(k,j) \in \{1,2,3\}^2$. But the deterministic (macro and RT) models show a clear preference for transitions $1\to3\to2\to1$, which is also apparent in the high probability of the corresponding subsequences of three distinct, subsequently active environments.
(E) Larger heterogeneity with respect to NLE duration and interburst intervals (IBI), as assessed by the respective means $\mu$ and coefficients of variation $CV$, further distinguishes the more variable metastable regime of the micro-/mesoscopic vis-\`a-vis the macroscopic/deterministic models.
}
\label{fig:3env}
\end{figure}
\section*{Discussion}
To better understand the mechanisms of emerging collective dynamics and metastability in neural networks, low-dimensional mean-field models have become indispensable in theoretical, computational and systems neuroscience.
In this paper, we have proposed a novel mesoscopic mean-field model for networks of spiking neurons with short-term synaptic plasticity. This mesoscopic model readily allows for systematically analyzing the effect of finite-size fluctuations on metastable dynamics.
Following a bottom-up approach, we have derived simple stochastic differential equations for networks consisting of a finite number of Linear-Nonlinear Poisson (LNP) spiking neurons with dynamic synapses undergoing short-term synaptic plasticity (STP). The mesoscopic model comes in two variants: First, a jump-diffusion model captures the network dynamics of only a few neurons with high accuracy thanks to a hybrid formulation of the finite-size-induced fluctuations, which takes the shot-noise properties of the spike-train inputs into account.
Second, using a diffusion approximation, we obtained an even simpler diffusion model for pure short-term depression, whose accuracy naturally improves with increasing network size. Notably, its accuracy also depends on the dynamical regime under investigation, e.g., when the skewness of the shot noise critically affects the transitions to the Up states. Nonetheless, as we showed above, the mesoscopic diffusion model captures the microscopic network dynamics remarkably well, allowing us to uncover finite-size induced population spikes, spontaneous transitions between Up and Down states, and a novel dynamical regime of quasi-traveling waves as a putative mechanism for fluctuation-driven hippocampal replay.
\subsubsection*{Modeling population spikes and Up-Down dynamics}
In the modeling literature, theoretical models of metastability are typically based on an interplay between the network's tendency to position itself in a self-excitable dynamical regime and a fatigue mechanism that generates activity-dependent self-inhibition in response to elevated network activity \cite{MilMih10,LevHer07}. Such a fatigue mechanism can be implemented in neural networks via neural spike-frequency adaptation (SFA) or via synaptic short-term depression (STD). Here, we focused on STD, but acknowledge that similar behavior can, in principle, also be achieved with SFA.
One possibility for self-excitability is that a stable low-activity state of asynchronous activity is close to a Hopf bifurcation, at which it becomes destabilized in favor of stable global oscillations. In the subcritical regime, noise can promote transient departures from the fixed point, resembling population spikes \cite{GigDec15}. Another possibility is that the system exhibits two stable fixed points, a high-activity (Up) and a low-activity (Down) state. Switching between the states can be induced by internal or external noise. In addition, the Up state can be destabilized in a saddle-node bifurcation by an adaptive fatigue mechanism as described above \cite{GigMat07,MejKap10,JerRox17,LevBuz19}. These scenarios are effectively low-dimensional and, in consequence, firing rate models describing the mean population activity
can successfully explain metastable dynamics. As noted in the Introduction, however, firing rate models largely miss a clear link to microscopic circuit models. While neuronal and synaptic properties can be partly accounted for by mean-field models, incorporating biologically realistic fluctuations in such models often lacks a rigorous footing. Models that neglect fluctuations can at best explain purely deterministic, fatigue-induced metastable activity patterns with low variability. Experimentally observed metastability in the brain, however, shows larger variability, suggesting fluctuation-induced metastable dynamics. Fluctuations can have manifold origins that range from external noisy (cortical or thalamic) inputs or background noise \cite{RomAmi06}, via specific network connectivity topologies (random, sparse, or clustered) \cite{EckJac08,SheVol08,MarTso12,LucBen14,PirRic15}, heterogeneity \cite{diSVil18}, and (loose or strong) balance between excitation and inhibition, up to finite-size effects \cite{SouCho07,BenCow10,GigDec15}, or even a combination thereof \cite{TouHer12,diVRom19}. On the mesoscopic scale, such fluctuations are often modeled heuristically by adding \emph{ad hoc} noise terms to the mean-field equations.
With the mesoscopic mean-field model that we have proposed here, we restricted ourselves to explaining the fluctuations observed on a network level that are due to a finite number of neurons. To minimize confounding factors, we considered just one excitatory population of Poisson spiking neurons, all-to-all coupling, and no external noise.
Indeed, it has been shown that inhibition is not necessarily needed to generate population spikes and Up-Down transitions, nor is short-term synaptic facilitation; see \cite{BarBao05,HolTso06,DaoDuc15}, who used a mean-field model with STD and Gaussian white noise in the voltage dynamics. Building on their previous work and complementing that mean-field model by an additional facilitation dynamics, Holcman and co-workers more recently provided an improved, and analytically tractable, description of network bursts, Up-Down dynamics and slow oscillations that is compatible with experimental recordings on various scales \cite{DaoLee15,ZonHol21} and allows for detailed stochastic analysis \cite{ZonHol21Com,ZonHol21PRR,ZonHol22}.
In the Methods, we extend our mesoscopic description to also include short-term facilitation. Our mesoscopic mean-field model Eq.~\eqref{eq:meso} yields exactly the same deterministic dynamics as their depression(-facilitation) mean-field model in the limit $N\to\infty$; see also \cite{BarTso07}.
The important difference, however, is how to deal with noise. While Holcman, Tsodyks and co-workers rather vaguely motivated an additive Gaussian white noise term that is meant to represent the fluctuations from independent vesicular release events and/or closings and openings of voltage gated channels, we here provide an explicit and rigorous derivation of the multiplicative noise terms in our mesoscopic mean-field model. Our model can thus accurately account for the finite-size fluctuations of the microscopic network including the thereby induced heterogeneity of synaptic depression across the neurons.
Previous approaches to model finite-size effects either included a multiplicative noise term ad hoc in the firing rate equation \cite{BruHak99,SpiGer99} or dwelled on a master equation formalism leading to coarse-grained phenomenological models of collective activity dynamics \cite{BuiCow07,ElBDes09,Bre10,diSVil18}. By contrast, we here derived Langevin equations for the mesoscopic population dynamics directly from the underlying microscopic network of finitely many Poisson spiking neurons. In our bottom-up approach, we explicitly take into account fluctuations due to the variability of the individual depression variables across synapses, which are typically neglected in other approaches. Our resulting nonlinear mesoscopic model remains nonetheless simple enough---thanks to the minimal microscopic network model considered here, but see below for possible extensions towards more biological realism---to allow for a systematic analysis of finite-size induced metastable dynamics in the presence of a slow fatigue mechanism in the form of STD.
\subsubsection*{Predictions and possible functional roles of variable hippocampal dynamics}
Given the recent success in explaining a wide range of experimental observations on hippocampal dynamics by means of firing-rate models with STD \cite{RomTso15,TheRov18}, we extended our mesoscopic description for a single population to a ring-attractor model of spiking neurons. Aiming at a minimal spiking neuron network model that can offer unique insights into the generative mechanisms of hippocampal dynamics in area CA3---in particular those underlying sharp waves and bidirectional activity replay---we ignored various degrees of biological plausibility on purpose, but see below for possible extensions. Previous models that incorporated more biological details \cite{JahTim15,MisKim16,CheSpre17,HagFuk18,NicClo19,MalBaz19,EckBag22} are limited in their capacity to provide clear and concise mechanistic explanations.
By contrast, our framework allowed us to systematically analyze the neuronal, synaptic and network mechanisms at work. We first corroborated the findings of \cite{RomTso15,TheRov18} as we recovered, in the limit of infinitely many neurons per population, an analogous deterministic regime of spontaneously emerging hippocampal replay patterns as in the Romani-Tsodyks (RT) model \cite{RomTso15}, see Fig.~\ref{fig:supp2}. Thanks to the similarity between our macroscopic model and the RT model, we are confident that our mesoscopic description readily allows for capturing realistic hippocampal dynamics not only on periodic tracks---which we considered here for simplicity and as a proof of concept---but also on linear tracks, in T-maze environments, and planar (and higher dimensional) fields; we leave these extensions for future work as well as the formation of theta sequences and phase precession \cite{TsoSka96,RomTso15,TheRov18}, see also \cite{BuzMos13,Col16}.
Our multiscale modeling framework directly links observed mesoscopic dynamics with the microscopic dynamics at the single neuron and single synapse level. This link could be a useful step towards a mechanistic understanding of the neuronal, synaptic and network constituents for generating spontaneous hippocampal activity critical for memory consolidation, recall and spatial working memory, navigational planning, as well as reward-based learning \cite{CarJad11,Fos17,OlaBus18,Pfe20}.
Importantly, our mesoscopic description opens a new perspective on the variability of hippocampal dynamics. In fact, we uncovered a novel, finite-size-induced metastable regime of hippocampal replay. At first glance, these fluctuation-induced quasi-traveling waves are similar in nature to those in \cite{RomTso15}. On time-scales of a few hundred milliseconds, both the deterministic RT model and our mesoscopic model exhibit spontaneously emerging bursts of activity that resemble a nonlocal replay event. As a single replay event unfolds, we showed that both models feature discontinuous replay trajectories---consistent with experimental observations \cite{PfeFos15,DenGil21,KraDru22}, which also underlines the biological plausibility of the ring-attractor assumption. On longer time-scales, however, our mesoscopic model allows for a much richer repertoire of replay dynamics. We found that the mesoscopic ring-attractor network can exhibit significantly larger variability for finite-size-induced replay dynamics than for deterministic, fatigue-induced dynamical regimes. This variability manifests in the spatio-temporally irregular succession of replay events (Figs.~\ref{fig:1Env}, \ref{fig:1EnvB} and \ref{fig:supp2}) and could be tested experimentally, following, e.g., \cite{AxmElg08,SchKal14,Buz15}.
Consequently, our results suggest that different replay dynamics can have distinct dynamical origins. In particular, replay dynamics in rodents are reportedly different during awake rest versus sleep \cite{McNSta21,KraDru22}, possibly relating to fluctuation- versus fatigue-induced metastability.
\subsubsection*{Replay in brains and machines}
Our findings may also be relevant for human neuroscience, where a paradigm shift toward decoding cognition from off-task rather than task-based neural activity seems imminent \cite{LiuNou22}. Hippocampal replay in rodents is a prime example of a ``representation-rich'' approach to spontaneous neural activity by uncovering the temporal structure of task-related representations. In understanding how the temporal dynamics of a particular neural activity pattern (e.g., a nonlocal traveling wave) unfolds, researchers could shed light on various cognitive functions that are subserved by spontaneous neural activity, including memory, learning, and decision-making \cite{JadKem12,OlaBus18}.
Recent technical advances in human neuroimaging have inspired ``human replay'' studies that investigate spontaneous task-related neural reactivations \cite{SchNiv19,LiuDol19,HigLiu21}, which bear a strong resemblance to rodent replay. Instead of the spontaneous recall of an environment map in rodent hippocampal replay, the focus now lies on the reactivation of a more abstract ``cognitive map'' of task space.
As the associated cognitive processes include memory retrieval, planning and inference, and thus lie at the heart of sophisticated model-based reasoning, our results can also be regarded as a proof-of-concept for the model-based representation-rich approach advocated in \cite{LiuNou22} when re-interpreting the environmental maps stored in synaptic connectivities as cognitive maps.
Intriguingly, also in human replay studies, replay can occur in forward and backward direction, with putative functional roles for spatial and non-spatial learning \cite{KurEco16,LiuMat21}.
In particular, Liu and co-workers suggest that nonlocal backward replay may serve as a neural mechanism for model-based reinforcement learning \cite{LiuMat21}. A comprehensive view about the different roles the wide variety of replay dynamics may subserve, however, remains elusive.
Insights from machine learning, where replay is commonly implemented in artificial agents, may help to find answers about the putative computational functions \cite{WitChi21,HayKri21}.
``Experience replay'' was already introduced as a reinforcement learning technique in the early 90s \cite{Lin91} and is nowadays a crucial ingredient in building human-level intelligence in deep neural networks \cite{MniKav15,KumHas16}.
Note also that in reinforcement learning, a `model' has a similar meaning to the notion of a `cognitive map', which thus naturally bridges the gap from (human) cognition to artificial intelligence \cite{BehMul18}.
Nonetheless, research on replay in neuroscience and machine learning has progressed largely in parallel, so that insights from the latter can also inform future neuroscientific studies.
An outstanding problem in the field of deep learning is the catastrophic forgetting problem in online learning \cite{MccCoh89,AbrRob05,HayKri21}. This problem is due to the fact that during online learning, data is not guaranteed to be independent and identically distributed (\textit{i.i.d.}), which is a challenge for standard optimization methods. Replay-like methods are used to overcome this problem, but current replay implementations are computationally expensive to deploy. By contrast, uniform sampling of past experiences has proven to be remarkably efficient both in supervised learning \cite{ChaDok18,WuChe19,HayKaf20} and in reinforcement learning tasks \cite{MniKav13,MniKav15}. These insights provide some indirect, yet important evidence for the computational benefit of the stochastic replay that we have uncovered in this work and hint at an important role of the novel fluctuation-induced replay regime.
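To make the uniform-sampling idea concrete, the following is a minimal sketch of an experience replay buffer with uniform sampling. It is generic illustrative Python, not the implementation of any study cited above; the class and variable names are ours.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity experience buffer with uniform sampling.

    Uniform draws from past experience break the temporal correlations
    of online data, approximating i.i.d. training batches.
    """
    def __init__(self, capacity, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest entries are evicted first
        self.rng = random.Random(seed)

    def push(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # Uniform sampling without replacement from the stored transitions
        return self.rng.sample(list(self.buffer), batch_size)

buf = ReplayBuffer(capacity=50)
for step in range(200):            # more pushes than capacity
    buf.push((step, 0.0))          # (state index, reward) placeholder
batch = buf.sample(8)
```

Only the most recent transitions survive once capacity is exceeded, while sampling treats all stored experiences equally.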
\subsubsection*{Biological limitations and extensions}
In this paper, we aimed at a minimal bottom-up population model that accounts for spiking noise, short-term synaptic plasticity and basic neuronal properties. The result can be regarded as a proof of concept that a simple nonlinear mesoscopic model, which enables the analysis of metastable dynamics in terms of network size and the aforementioned properties, can be derived from an underlying microscopic model. However, our model has several biological limitations and lacks some important features. First, neurons exhibit post-spike \emph{refractoriness}, which is not captured by the LNP model. While the response of the instantaneous firing rate can be well reproduced by choosing the linear filter function $\kappa(t)$ and the nonlinearity $f(h)$ of the LNP model corresponding to realistic dynamical transfer functions of neurons with refractoriness \cite{OstBru11}, the temporal spike-train correlations caused by refractoriness violate the Poisson assumption of our derivation. Although the strict Poisson assumption can be relaxed to some degree \cite{SchGer20}, strong spike-train auto-correlations influence the noise properties of the mesoscopic model and hence the fluctuations of the population activities. This effect has already been described for the mesoscopic model of \cite{SchGer20} in the case of leaky integrate-and-fire neurons with pronounced refractoriness. Because our theory is based on the previous mesoscopic model, we expect that these effects carry over to the present model. How to account for non-Poisson statistics due to refractoriness in a mesoscopic theory with STP is a challenging theoretical problem that is left for future research. We also mention that a related neuronal property is spike-triggered \emph{adaptation}, which -- similar to synaptic depression -- is a slow negative feedback mechanism. 
Incorporating adaptation into a mesoscopic theory, either instead of or in addition to depression, is interesting for two reasons: first, it represents an alternative slow fatigue mechanism driving metastable dynamics, and second, adaptation is an important biological feature found in many cell types \cite{PozNau13}. A promising generalization of the present theory to adaptation could be based on the quasi-renewal approximation \cite{NauGer12} and its extension to mesoscopic theories \cite{DegSch14,SchDeg17}.
Apart from a more realistic description of neuronal properties, the synaptic dynamics can also be extended toward important biological features. First, the synaptic conductances exhibit temporal filtering and, in a conductance-based model, they enter the voltage dynamics in a multiplicative way. Both effects are neglected in our model. At least the synaptic filtering of the conductance dynamics is straightforward to include in our theory, as shown in \cite{SchGer20}. For an extension of the mesoscopic model to a conductance-based description of synaptic input, the interested reader is referred to the discussion of \cite{SchDeg17}. Second, the Tsodyks-Markram model considered here is a phenomenological and deterministic model of the synaptic dynamics. However, synaptic transmission is stochastic, and thus a stochastic STP model \cite{RosRub12} would be biologically more realistic (see also the discussion in \cite{SchGer20}), but how to treat such stochasticity within a mesoscopic theory remains unclear.
A difficult biological problem is how to account for realistic network topology. Our theory applies to networks of multiple interacting homogeneous populations. In turn, each population is a fully-connected network of neurons. However, the connectivity of biological neural networks is not fully connected and often exhibits a large degree of heterogeneity. Regarding the first issue, we have shown previously that the full connectivity model represents an effective model that faithfully reproduces the dynamics of a non-fully-connected, random network with fixed in-degree if the synaptic weights are rescaled correspondingly \cite{SchGer20} (see also \cite{SchDeg17} for the case of static synapses). The second issue is a principal challenge for mean-field theories, as they rely on averages over many neurons and hence cannot describe heterogeneous networks that are strongly affected by single neurons. However, in many cases it might be valid to subdivide the network into many small subpopulations that can be regarded as roughly homogeneous. Following this strategy, it is crucial to have a mesoscopic description of the subpopulations because the grouping of neurons with similar properties may result in small population sizes. For example, in our hippocampal network model, we have grouped place cells with highly overlapping place fields into one homogeneous subpopulation. If the number of these similarly tuned place cells is small, the mesoscopic framework can show its full strength because in this case, the jump-diffusion model still provides an accurate description (see Fig.~\ref{fig:updown} for $N=30$ neurons). Finally, we note that a basic type of network heterogeneity in biology, the separation of excitatory and inhibitory neurons (Dale's law), is not realized in our hippocampal network with ``Mexican-hat''-type connectivity.
However, it has been shown that such connectivity can be re-implemented in accordance with Dale's law by two layers of neurons, one excitatory and one inhibitory layer \cite{OzeFin09}.
\subsubsection*{Theoretical challenges}
The diffusion model Eq.~\eqref{eq:meso} and its derivation based on temporal coarse-graining \cite{Gil00} greatly simplify our previous theoretical work \cite{SchGer20}; both results are consistent, as shown below in the Methods. The simplicity of our new derivation and the remarkable agreement of the diffusion model with microscopic simulations raise the question of whether the diffusion model is the \textit{exact} diffusion approximation \cite{Kur78}. The only way to give a positive answer to this question would be to propose a rigorous diffusion approximation proof (as in \cite{DitLoe17} for the case of LNP neurons without STP), which we leave as an open mathematical problem. A second open theoretical question is the convergence of the multi-population model Eq.~\eqref{eq:ring_meso} to a stochastic neural field equation. The circular environment we study can be regarded as a space-discretized stochastic neural field, but the exact expression of the continuous equation and its physical interpretation are unclear and will be the subject of future work. Ideally, one would want to be able to prove the convergence to such an equation, as in \cite{CheOst20} for the case without STP.
The low-dimensional mesoscopic model could also be interesting from a data-analytical perspective. Recently, Bayesian state-space models have been developed to infer replay events from spiking data \cite{DenGil21}. Such inference does not exploit any knowledge about the dynamical mechanisms underlying the data, and the likelihood function is assumed ad hoc. Our low-dimensional mesoscopic model could provide an analytical likelihood function so as to enable improved data assimilation methods to infer replay events.
\paragraph*{}
In conclusion, we have put forward a multiscale framework for systematically investigating metastable network dynamics in finite-sized networks of LNP-STP neurons using a bottom-up mesoscopic model. This model is efficient to analyze and simulate and is also versatile for incorporating more biological realism. Thanks to a unique link between the underlying microscopic network and its mesoscopic description, it becomes possible to disentangle the differential roles of neuronal, synaptic and network properties---in particular the network size---for emerging metastable brain dynamics. The mesoscopic model may also be instrumental for distinguishing between fatigue-driven and fluctuation-driven metastability because of their distinct statistical predictions---as in the case of hippocampal replay. Such predictions could be tested experimentally and reveal the dynamical origin of spontaneous neural activity.
\section*{Methods}
\subsection*{Diffusion approximation for the mesoscopic dynamics with short-term depression}
The microscopic dynamics of a network of $N$ LNP spiking neurons with short-term synaptic depression and exponential linear filter dynamics are given by Eq.~\eqref{eq:micro}. To derive the diffusion model of the mesoscopic dynamics Eq.~\eqref{eq:meso}, we focus on the mesoscopic variables $h(t), x(t)$ and $Q(t)$ defined as in Eq.~\eqref{eq:empirical}:
\begin{equation*}
h(t):=\frac{1}{N}\sum_{i=1}^N h_i(t), \quad x(t):=\frac{1}{N}\sum_{i=1}^N x_i(t) \quad \text{and} \quad Q(t) := \frac{1}{N}\sum_{i=1}^N x_i^2(t).
\end{equation*}
From Eq.~\eqref{eq:micro} and using the definition of $x(t)$, we get
\begin{subequations}
\begin{align}
\od{h}{t}&=\frac{\mu(t)-h}{\tau}+JU_0\frac{1}{N}\sum_{i=1}^Nx_i(t^-)s_i(t), \\
\od{x}{t}&=\frac{1-x}{\tau_D}-U_0\frac{1}{N}\sum_{i=1}^N x_i(t^-)s_i(t).
\end{align}
\end{subequations}
To approximate the sum $\frac{1}{N}\sum_{i=1}^Nx_i(t^-)s_i(t)$ by a diffusion term which only involves mesoscopic variables, we follow the coarse-graining approach by Gillespie \cite{Gil00} for the derivation of a ``chemical Langevin equation''.
To this end, we study the stochastic increments $\int_t^{t+\Delta t} \frac{1}{N}\sum_{i=1}^Nx_i(\hat{t}^-)s_i(\hat{t})d\hat{t}$, where $\Delta t > 0$ is assumed to be a \textit{macroscopically infinitesimal} time step \cite{Gil00}: $\Delta t$ is small enough such that (i) the $x_i$'s can be assumed to jump at most once in the time interval $[t,t+\Delta t]$, (ii) $x_i(\tau_i^-) \approx x_i(t^-)$ if neuron $i$ has a spike at time $\tau_i\in[t,t+\Delta t]$, and (iii) $h(\hat{t})\approx h(t^-)$ for all $\hat{t}\in[t,t+\Delta t]$; and $\Delta t$ is large enough such that many neurons spike in the time interval $]t,t+\Delta t]$. These assumptions are expected to hold if $\Delta t\ll \tau,\tau_D$ and $1\ll Nf(h(t))\Delta t\ll N$ for all $t$. By the smallness assumption, we have
\begin{equation}\label{eq:increments}
\int_t^{t+\Delta t} \frac{1}{N}\sum_{i=1}^Nx_i(\hat{t}^-)s_i(\hat{t})d\hat{t} \approx \frac{1}{N}\sum_{i=1}^N x_i(t^-)z_i(t),
\end{equation}
where $\{z_i(t)\}_{i=1}^N$ are \textit{i.i.d.} Bernoulli random variables with mean $f(h(t^-))\Delta t$. Conditioned on $h(t^-)$, all the variables $x_1(t^-), \dots, x_N(t^-)$, $z_1(t), \dots, z_N(t)$ are independent, and the $\{x_i(t^-)\}_{i=1}^N$ are \textit{i.i.d.}. Hence, by the Central Limit Theorem,
\begin{equation*}
\frac{\sum_{i=1}^N x_i(t^-)z_i(t) - N\mathbb{E}[x_1(t^-)z_1(t)\,|\,h(t^-)]}{\sqrt{N}} \xrightarrow[N\to\infty]{\mathcal{L}} \mathcal{N}\left(0,\text{Var}[x_1(t^-)z_1(t)\,|\,h(t^-)]\right).
\end{equation*}
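This convergence can be checked numerically. The sketch below assumes, for illustration only, Beta-distributed $x_i$ and a Bernoulli parameter $p$ standing in for $f(h(t^-))\Delta t$; all parameter values are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
N, trials = 2000, 1000
p = 0.05  # stands in for f(h(t^-)) * dt (assumed value)

# x_i drawn i.i.d. from an arbitrary distribution on [0, 1] (Beta here)
x = rng.beta(2.0, 2.0, size=(trials, N))
z = rng.random((trials, N)) < p          # i.i.d. Bernoulli(p) variables

prod = x * z
m = np.mean(x) * p                       # E[x_1 z_1] (x and z independent)
v = np.mean(x**2) * p - m**2             # Var[x_1 z_1] to leading order in p

# Normalized sums should be approximately N(0, v) for large N
s = (prod.sum(axis=1) - N * m) / np.sqrt(N)
emp_var = s.var()
```

The empirical variance of the normalized sums matches the theoretical variance up to sampling error, as the Central Limit Theorem predicts.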
We now use the empirical averages Eq.~\eqref{eq:empirical} to approximate the conditional expectation and variance:
\begin{align*}
\mathbb{E}[x_1(t^-)z_1(t)\,|\,h(t^-)] &= \mathbb{E}[x_1(t^-)\,|\,h(t^-)]f(h(t^-))\Delta t \approx x(t^-) f(h(t^-))\Delta t,\\
\text{Var}[x_1(t^-)z_1(t)\,|\,h(t^-)] &= \mathbb{E}[x_1^2(t^-)\,|\,h(t^-)]\,\mathbb{E}[z_1^2(t)\,|\,h(t^-)]\\
&\qquad - \mathbb{E}[x_1(t^-)\,|\,h(t^-)]^2\,\mathbb{E}[z_1(t)\,|\,h(t^-)]^2\\
&\approx Q(t^-)f(h(t^-))\Delta t + O(\Delta t^2).
\end{align*}
We can now approximate the increment Eq.~\eqref{eq:increments} by a Gaussian:
\begin{equation*}
\int_t^{t+\Delta t} \frac{1}{N}\sum_{i=1}^Nx_i(\hat{t}^-)s_i(\hat{t})d\hat{t} \,\sim\, \mathcal{N}\left(x(t^-)f(h(t^-))\Delta t,\, \frac{Q(t^-)f(h(t^-))\Delta t}{N}\right).
\end{equation*}
Taking the limit $\Delta t \to 0$, we obtain the diffusion approximation
\begin{subequations}\label{eq:diffusion}
\begin{align}
\od{h}{t}&=\frac{\mu(t)-h}{\tau}+JU_0xf(h) + JU_0 \sqrt{\frac{Q(t)f(h)}{N}}\xi(t), \\
\od{x}{t}&=\frac{1-x}{\tau_D}-U_0xf(h) - U_0\sqrt{\frac{Q(t)f(h)}{N}}\xi(t),
\end{align}
\end{subequations}
where $\xi$ is a Gaussian white noise with auto-correlation function $\langle\xi(t)\xi(t')\rangle=\delta(t-t')$.
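The derivation of Eq.~\eqref{eq:diffusion} rests on the scale separation $\Delta t\ll \tau,\tau_D$ and $1\ll Nf(h)\Delta t\ll N$. A quick sanity check of these conditions for illustrative parameter values; the function, the `margin` factor quantifying ``much smaller,'' and all numbers below are assumptions of ours:

```python
def macroscopically_infinitesimal(dt, tau, tau_D, N, rate, margin=10.0):
    """Check the two separation-of-scales conditions of the coarse-graining:
    dt << tau, tau_D (slow variables barely change within one step) and
    1 << N*rate*dt << N (many population spikes, but at most one per neuron).
    `margin` encodes what 'much smaller than' means here (an assumption)."""
    slow = margin * dt <= min(tau, tau_D)
    n_spikes = N * rate * dt            # expected population spike count per step
    hybrid = (n_spikes >= margin) and (margin * n_spikes <= N)
    return slow and hybrid

# Assumed values: tau = 10 ms, tau_D = 200 ms, N = 10^4 neurons, f(h) = 5 Hz
ok = macroscopically_infinitesimal(dt=1e-3, tau=1e-2, tau_D=0.2, N=10_000, rate=5.0)
```

For these values, $\Delta t = 1\,$ms satisfies both conditions ($Nf\Delta t = 50$), whereas a much larger or much smaller step would violate one of them.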
To close the system Eq.~\eqref{eq:diffusion}, we have to derive the dynamics of $Q(t)$.
Going back to Eq.~\eqref{eq:micro_x}, by Itô's formula for jump processes, we have
\begin{equation*}
\od{x_i^2}{t} = 2\frac{x_i - x_i^2}{\tau_D} - U_0(2-U_0)x_i^2(t^-)s_i(t).
\end{equation*}
Taking the empirical average, we get
\begin{equation} \label{eq:Q_emp}
\od{Q}{t} = 2\frac{x - Q}{\tau_D} - U_0(2-U_0)\frac{1}{N}\sum_{i=1}^N x_i(t^-)^2s_i(t).
\end{equation}
We could follow the same steps as before and try to obtain a diffusion approximation for Eq.~\eqref{eq:Q_emp}. However, the fluctuations of such a diffusion approximation would be of order $1/\sqrt{N}$, and since $Q(t)$ affects the dynamics of Eq.~\eqref{eq:diffusion} only through the term $\sqrt{Q(t)f(h)/N}\xi(t)$, the effects of the fluctuations of $Q(t)$ on $h(t)$ and $x(t)$ are of order $N^{-3/2}$ and can therefore be neglected when $N$ is large. Hence, we approximate the increments by their (approximate) expectation: for the time step $\Delta t > 0$,
\begin{equation*}
\int_t^{t+\Delta t}\frac{1}{N}\sum_{i=1}^N x_i(\hat{t}^-)^2s_i(\hat{t})d\hat{t} \,\approx\, Q(t^-)f(h(t^-))\Delta t,
\end{equation*}
whence,
\begin{equation} \label{eq:Q}
\od{Q}{t} = 2\frac{x - Q}{\tau_D} - U_0(2-U_0)Qf(h).
\end{equation}
Finally, gathering Eqs.~\eqref{eq:diffusion} and \eqref{eq:Q}, we obtain the mesoscopic dynamics in form of the diffusion model Eq.~\eqref{eq:meso}.
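The resulting three-dimensional diffusion model can be integrated with a standard Euler--Maruyama scheme. The sketch below uses an assumed exponential transfer function $f$ and illustrative parameter values of ours; note that the same noise increment drives $h$ and $x$, with opposite signs.

```python
import numpy as np

def simulate_meso(n_steps=10_000, dt=1e-4, N=1000, tau=0.01, tau_D=0.2,
                  J=1.0, U0=0.2, mu=0.5, f=lambda h: 10.0 * np.exp(h), seed=0):
    """Euler-Maruyama integration of the mesoscopic diffusion model for
    (h, x, Q); a single white-noise increment is shared between h and x."""
    rng = np.random.default_rng(seed)
    h, x, Q = 0.0, 1.0, 1.0
    traj = np.empty((n_steps, 3))
    for k in range(n_steps):
        rate = f(h)
        dW = rng.normal(0.0, np.sqrt(dt))
        noise = np.sqrt(max(Q * rate, 0.0) / N) * dW   # shared finite-size noise
        h += ((mu - h) / tau + J * U0 * x * rate) * dt + J * U0 * noise
        x += ((1.0 - x) / tau_D - U0 * x * rate) * dt - U0 * noise
        Q += (2.0 * (x - Q) / tau_D - U0 * (2.0 - U0) * Q * rate) * dt
        traj[k] = (h, x, Q)
    return traj

traj = simulate_meso()
```

For these parameters the dynamics relaxes to a noisy fixed point; the shared noise term makes the anti-correlation between $h$- and $x$-fluctuations explicit.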
The fact that we are here considering the diffusion approximation (or Langevin dynamics) allows us to significantly shorten the original derivation of the mesoscopic model presented in \cite{SchGer20}. In particular, in the present derivation, we do not need to approximate the distribution of the $x_i(t^-)$'s in Eq.~\eqref{eq:increments} by a Gaussian, since we only need to approximate the sum in Eq.~\eqref{eq:increments} by a Gaussian. Note that the arguments enabling the present derivation were already hinted at in the Section ``Remarks on the approximation'' and Appendix B of \cite{SchGer20} but not put together. Both derivations lead to the same mesoscopic model, except that here, for simplicity, we neglect fluctuations of order $N^{-3/2}$ and we consider the diffusion limit; see also the subsection ``Reduction to a pure diffusion process'' below for an explicit derivation of the diffusion model from the original mesoscopic model presented in \cite{SchGer20}.
\subsection*{Jump-diffusion model with synaptic depression and facilitation}
Starting from our previous work \cite{SchGer20}, we can derive an improved mesoscopic model that also accounts for synaptic facilitation and the shot-noise character of the finite-size spiking noise.
When allowing only for short-term depression while keeping the facilitation variable constant, the resulting mesoscopic dynamics boils down to the jump-diffusion model Eq.~\eqref{eq:model-depress-3} with hybrid noise. For large $N\gg 1$, the Poisson shot noise can be simplified under a diffusion approximation, which recovers the diffusion model Eq.~\eqref{eq:meso} derived in the previous section.
\subsubsection*{Microscopic model with synaptic depression and facilitation}
\label{sec:micro-full}
We consider a network of LNP neurons with dynamic synapses similar to Eq.~\eqref{eq:micro} but now complemented with a facilitation variable $\hat{u}_i$ for each neuron $i = 1,\dots,N$. The full synaptic dynamics corresponds to the STP model by Tsodyks and Markram \cite{TsoPaw98,MonBar08} and results in the following microscopic network model:
\begin{subequations}
\label{eq:micro-full-stp}
\begin{align}
r_i(t)&=f\lrrund{\hat{h}_i(t^-)}\\
\frac{d\hat{h}_i}{dt}&=\frac{\mu(t)-\hat{h}_i}{\tau}+\frac{J}{N}\sum_{j=1}^N\hat{u}_j(t^-)\hat{x}_j(t^-)s_j(t) \label{eq:micro-h}\\
\frac{d\hat{u}_j}{dt}&=\frac{U_0-\hat{u}_j}{\tau_F}+U(1-\hat{u}_j(t^-))s_j(t)\\
\frac{d\hat{x}_j}{dt}&=\frac{1-\hat{x}_j}{\tau_D}-\hat{u}_j(t^-)\hat{x}_j(t^-)s_j(t),
\end{align}
\end{subequations}
where $s_i(t)=\sum_k\delta(t-t_k^i)$ is a point process with conditional intensity $r_i(t)$.
In Eq.~\eqref{eq:micro-full-stp}, $\tau_F$ and $\tau_D$ are the facilitation and depression time constants, respectively, and $U_0$ is the
baseline utilization of synaptic resources, whereas $U$ determines the increase in the utilization of
synaptic resources by a spike.
As before, $\hat{u}_j(t^-)$ is a shorthand for the left limit at time $t$.
We note that, for simplicity, we only consider the case of full connectivity. However, as shown in our previous work \cite{SchGer20}, the mean-field theory also works well for random connectivity with fixed in-degree.
We remark that the model Eq.~\eqref{eq:micro-h} corresponds to LNP neurons with an exponential linear filter, which we have chosen in the Results section for simplicity. The mesoscopic theory developed here can readily be extended to LNP neurons described by a general linear filter $\kappa(t)$. In this case and assuming $\hat{h}_i(0)=0$, the $h$-dynamics will be given by
\begin{equation}
\hat{h}_i(t)=\int_0^{t^+} \kappa(t-t')\lrrund{\frac{\mu(t')}{\tau}+\frac{J}{N}\sum_{j=1}^N \hat{u}_j(t'^-)\hat{x}_j(t'^-)s_j(t')}\,dt'.
\end{equation}
With this extension, more realistic neuronal dynamics can be modeled \cite{OstBru11}. The simple dynamics Eq.~\eqref{eq:micro-h} is recovered if the linear filter is chosen as $\kappa(t)=e^{-t/\tau}\theta(t)$, where $\theta(t)$ is the Heaviside step function.
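This equivalence between the exponential-filter convolution and the ODE form Eq.~\eqref{eq:micro-h} can be verified numerically. The sketch below uses a deterministic sinusoidal drive only (no spikes) and illustrative parameter values of ours:

```python
import numpy as np

tau, dt, n = 0.02, 1e-4, 5000
t = np.arange(n) * dt
mu = 1.0 + 0.5 * np.sin(2 * np.pi * 10 * t)   # assumed external drive, no spikes
g = mu / tau                                   # total input to the linear filter

# (1) Euler integration of dh/dt = (mu - h)/tau with h(0) = 0
h_ode = np.zeros(n)
for k in range(1, n):
    h_ode[k] = h_ode[k-1] + ((mu[k-1] - h_ode[k-1]) / tau) * dt

# (2) Discrete convolution with the exponential kernel kappa(t) = exp(-t/tau)
kappa = np.exp(-t / tau)
h_conv = np.convolve(g, kappa)[:n] * dt        # causal part of the convolution

max_err = np.max(np.abs(h_ode - h_conv))
```

Both trajectories agree up to the $O(\Delta t/\tau)$ discretization error, confirming that the convolution with $\kappa(t)=e^{-t/\tau}\theta(t)$ solves the exponential-filter ODE.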
\subsubsection*{Mesoscopic model}
\label{sec:meso-full}
In Appendix B of \cite{SchGer20}, it has been shown that the mesoscopic dynamics of the empirical variables
\begin{equation} \label{eq:empirical2}
\begin{gathered}
h(t):= \frac{1}{N}\sum_{i=1}^N \hat{h}_i(t), \quad u(t):= \frac{1}{N}\sum_{i=1}^N \hat{u}_i(t), \quad x(t) := \frac{1}{N}\sum_{i=1}^N \hat{x}_i(t),\\
P(t):= \frac{1}{N}\sum_{i=1}^N \hat{u}_i^2(t), \quad Q(t):= \frac{1}{N}\sum_{i=1}^N \hat{x}_i^2(t), \quad R(t):= \frac{1}{N}\sum_{i=1}^N \hat{u}_i(t) \hat{x}_i(t)
\end{gathered}
\end{equation}
can be approximated in discrete time with a macroscopically infinitesimal time step $\Delta t$ (as defined above) by the moment-closure equations
\begin{subequations}
\label{eq:update}
\begin{align}
h_{k+1}&=h_k+\frac{\mu_k-h_k}{\tau}\Delta t+\frac{J}{N}\left[R_k\Delta n_k +(u_k\varepsilon_k^x + x_k\varepsilon_k^u)\sqrt{\Delta n_k}\right], \\
u_{k+1}&=u_k+\frac{U_0-u_k}{\tau_F}\Delta t+\frac{U}{N}\left[(1-u_k)\Delta n_k - \varepsilon_k^u\sqrt{\Delta n_k}\right], \\
x_{k+1}&=x_k+\frac{1-x_k}{\tau_D}\Delta t-\frac{1}{N}\left[R_k\Delta n_k +(u_k\varepsilon_k^x + x_k\varepsilon_k^u)\sqrt{\Delta n_k}\right], \\
P_{k+1}&=P_k+2 \frac{U_0u_k-P_k}{\tau_F}\Delta t+\frac{1}{N}\lreckig{\mu^P(u_k)\Delta n_k +\varepsilon_k^P\sqrt{\Delta n_k}},\\
Q_{k+1}&=Q_k+2 \frac{x_k-Q_k}{\tau_D}\Delta t+\frac{1}{N}\lreckig{\mu^Q(u_k,x_k,P_k,Q_k,R_k)\Delta n_k +\varepsilon_k^Q\sqrt{\Delta n_k}},\\
R_{k+1}&=R_k+\frac{U_0x_k-R_k}{\tau_F}\Delta t+\frac{u_k-R_k}{\tau_D}\Delta t+\frac{1}{N}\lreckig{\mu^R(u_k,x_k,P_k,R_k)\Delta n_k +\varepsilon_k^R\sqrt{\Delta n_k}},
\end{align}
\end{subequations}
where $u_k = u(k \Delta t)$ and analogous expressions hold for $h_k,x_k, P_k, Q_k,R_k$ and the external input $\mu_k$. In Eq.~\eqref{eq:update}, we use the abbreviations
\begin{align*}
\mu^P(u) &= U \big(P (U - 2) - 2 u (U - 1) + U\big),\\
\mu^Q(u,x,P,Q,R) &= P Q - 2 Q u + 2 \big(R + (u - 2) x\big) (R - u x),\\
\mu^R(u,x,P,R) &= (U (1 - u)^2 - u^2) x + (U - 1) x (P - u^2) + 2 (U (u - 1) - u) (R - u x),
\end{align*}
and
\begin{align*}
\varepsilon_k^P &= 2U \big(1 + u_k (U - 2) - U\big)\varepsilon_k^u, \\
\varepsilon_k^Q &= 2(u_k - 1)x_k^2\varepsilon_k^u + 2u_k(u_k-2)x_k\varepsilon_k^x,\\
\varepsilon_k^R &= 2\big(U(u_k - 1) - u_k\big)x_k\varepsilon_k^u + \big(U(1 - u_k)^2 - u_k^2\big)\varepsilon_k^x.
\end{align*}
Importantly, the dynamics is driven by two sources of noise: First, $\varepsilon_k^u$ and $\varepsilon_k^x$ are correlated Gaussian random numbers with means $\langle\varepsilon_k^u\rangle=\langle\varepsilon_k^x\rangle=0$ and (co)variances
\begin{equation}
\langle\varepsilon_k^u\varepsilon_l^u\rangle=(P_k-u_k^2)\delta_{k,l},\quad \langle\varepsilon_k^x\varepsilon_l^x\rangle=(Q_k-x_k^2)\delta_{k,l},\quad \langle\varepsilon_k^u\varepsilon_l^x\rangle=(R_k-u_kx_k)\delta_{k,l},
\end{equation}
where $\delta_{k,l}$ is the Kronecker delta.
The random numbers $\varepsilon_k^u$ and $\varepsilon_k^x$ reflect the heterogeneity of $\hat{u}_i$ and $\hat{x}_i$ across synapses $i=1,\dotsc,N$, respectively. Second, $\Delta n_k$ represents the total spike count in the time step $\Delta t$ which is drawn independently from a Poisson distribution with mean $Nf(h_k)\Delta t$:
\begin{equation}
\Delta n_k\sim\text{Pois}[Nf(h_k)\Delta t].
\end{equation}
This equation closes the discrete mesoscopic dynamics with STP derived in \cite{SchGer20}.
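For a macroscopically infinitesimal time step, the mean spike count $Nf(h_k)\Delta t$ is large, so the Poisson count $\Delta n_k$ is well approximated by a Gaussian with matching mean and variance. A quick numerical illustration; the mean count $\lambda$ below is an assumed value:

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 200.0            # assumed mean spike count N * f(h_k) * dt >> 1
n_samples = 100_000

dn_poisson = rng.poisson(lam, n_samples)
# Gaussian surrogate with matching mean and variance: lam + sqrt(lam) * W
dn_gauss = lam + np.sqrt(lam) * rng.standard_normal(n_samples)

mean_gap = abs(dn_poisson.mean() - dn_gauss.mean())
std_gap = abs(dn_poisson.std() - dn_gauss.std())
```

For $\lambda = 200$, the empirical means and standard deviations of the two count distributions agree up to sampling error, which justifies the Gaussian replacement used below.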
We will now use the discrete dynamics to derive a Langevin equation in continuous time.
Because of our assumption that $\Delta t$ is large enough such that it contains many spikes, i.e. $\langle\Delta n_k\rangle\equiv Nf(h_k)\Delta t\gg 1$, we can again use a Gaussian approximation. Thus, we write $\Delta n_k\approx Nf(h_k)\Delta t+\sqrt{Nf(h_k)}\Delta W_k$, where $\Delta W_k$ is an independent, normally distributed random number with $\langle\Delta W_k\rangle=0$ and $\langle{\Delta W_k^2}\rangle=\Delta t$. Furthermore, the noise terms appearing in Eq.~\eqref{eq:update} can be written within a Gaussian approximation as
\begin{equation}
\varepsilon_k^u\sqrt{\Delta n_k}\approx\sqrt{N(P_k-u_k^2)f(h_k)}\Delta W_k^u,\qquad \varepsilon_k^x\sqrt{\Delta n_k}\approx\sqrt{N(Q_k-x_k^2)f(h_k)}\Delta W_k^x,
\end{equation}
where we neglected terms of order $\mathcal{O}(N^{\frac{1}{4}})$ and where $\Delta W_k^u$ and $\Delta W_k^x$ are mean-zero Gaussian random numbers with covariance
\begin{equation}
\langle\Delta W_k^u\Delta W_l^u\rangle=\langle\Delta W_k^x\Delta W_l^x\rangle=\delta_{k,l}\Delta t,\qquad \langle \Delta W_k^u\Delta W_l^x\rangle=\delta_{k,l}\rho_k\Delta t.
\end{equation}
Here, we introduced the correlation coefficient
\begin{equation}
\rho_k=\frac{R_k-u_kx_k}{\sqrt{(P_k-u_k^2)(Q_k-x_k^2)}}.
\end{equation}
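For illustration, increments with exactly this correlation structure can be generated by a Cholesky-type construction; the numerical values of $u$, $x$, $P$, $Q$, $R$ below are assumptions:

```python
import numpy as np

# Sketch: generate increments (dW^u, dW^x) with exactly this correlation
# coefficient via a Cholesky-type construction; the values of u, x, P, Q, R
# are assumptions for illustration.
rng = np.random.default_rng(1)

u, x = 0.3, 0.7
P, Q, R = 0.12, 0.55, 0.24                # assumed second moments
rho = (R - u * x) / np.sqrt((P - u**2) * (Q - x**2))

dt, n = 1e-3, 200_000
dWu = np.sqrt(dt) * rng.standard_normal(n)
# dW^x = rho*dW^u + sqrt(1-rho^2)*(independent part) keeps variance dt
# and gives <dW^u dW^x> = rho*dt
dWx = rho * dWu + np.sqrt(1.0 - rho**2) * np.sqrt(dt) * rng.standard_normal(n)

print(rho, np.mean(dWu * dWx) / dt)       # empirical correlation ~ rho
```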
It follows that, within the Gaussian approximation, the other noise terms are given by
\begin{align*}
\varepsilon_k^P\sqrt{\Delta n_k} &\approx 2U \big(1 + u_k (U - 2) - U\big)\sqrt{N(P_k-u_k^2)f(h_k)}\Delta W_k^u, \\
\varepsilon_k^Q\sqrt{\Delta n_k} &\approx 2(u_k - 1)x_k^2\sqrt{N(P_k-u_k^2)f(h_k)}\Delta W_k^u\\
&\quad+ 2u_k(u_k-2)x_k\sqrt{N(Q_k-x_k^2)f(h_k)}\Delta W_k^x,\\
\varepsilon_k^R\sqrt{\Delta n_k} &\approx 2\big(U(u_k - 1) - u_k\big)x_k\sqrt{N(P_k-u_k^2)f(h_k)}\Delta W_k^u\\
&\quad + \big(U(1 - u_k)^2 - u_k^2\big)\sqrt{N(Q_k-x_k^2)f(h_k)}\Delta W_k^x.
\end{align*}
Taking the continuum limit $\Delta t\rightarrow 0$ yields the Itô stochastic differential equation
\begin{subequations}
\begin{align}
dh_t&=\frac{\mu_t-h_t}{\tau}dt+\frac{J}{N}\left[R_tdn_t +u_t\sqrt{N(Q_t-x_t^2)f(h_t)}dW_t^x + x_t\sqrt{N(P_t-u_t^2)f(h_t)}dW_t^u\right], \\
du_t&=\frac{U_0-u_t}{\tau_F}dt+\frac{U}{N}\left[(1-u_t)dn_t - \sqrt{N(P_t-u_t^2)f(h_t)}dW_t^u\right],\\
dx_t&=\frac{1-x_t}{\tau_D}dt-\frac{1}{N}\left[R_tdn_t +u_t\sqrt{N(Q_t-x_t^2)f(h_t)}dW_t^x + x_t\sqrt{N(P_t-u_t^2)f(h_t)}dW_t^u\right], \\
dP_t&=2 \frac{U_0u_t-P_t}{\tau_F}dt+\frac{1}{N}\lreckig{\mu^P(u_t)dn_t +2U \big(1 + u_t (U - 2) - U\big)\sqrt{N(P_t-u_t^2)f(h_t)}dW_t^u},\\
dQ_t&=2 \frac{x_t-Q_t}{\tau_D}dt+\frac{1}{N}\bigg[\mu^Q(u_t,x_t,P_t,Q_t,R_t)dn_t \nonumber\\
&\quad+2(u_t - 1)x_t^2\sqrt{N(P_t-u_t^2)f(h_t)}dW_t^u+ 2u_t(u_t-2)x_t\sqrt{N(Q_t-x_t^2)f(h_t)}dW_t^x\bigg],\\
dR_t&=\frac{U_0x_t-R_t}{\tau_F}dt+\frac{u_t-R_t}{\tau_D}dt+\frac{1}{N}\bigg[\mu^R(u_t,x_t,P_t,R_t)dn_t\nonumber\\
&\quad+2\big(U(u_t - 1) - u_t\big)x_t\sqrt{N(P_t-u_t^2)f(h_t)}dW_t^u\nonumber\\
&\quad+ \big(U(1 - u_t)^2 - u_t^2\big)\sqrt{N(Q_t-x_t^2)f(h_t)}dW_t^x\bigg],
\end{align}
with Poisson noise
\begin{equation}
dn_t=\pi(dt,[0,Nf(h_{t^-})]),
\end{equation}
\end{subequations}
where $\pi$ is a two-dimensional Poisson random measure with mean $\langle\pi(ds,dt)\rangle=dsdt$ (i.e. $n_t$ is a counting process with stochastic intensity $Nf(h_{t^-})$ and $dn_t/dt$ is the associated Dirac delta spike train).
Furthermore, $W_t^u$ and $W_t^x$ are Wiener processes, where $W_t^u$ and $W_t^x$ have correlated increments
\begin{subequations}
\begin{gather}
\langle dW_t^udW_s^u\rangle=\langle dW_t^xdW_s^x\rangle=\delta(t-s)dtds,\\
\langle dW_t^udW_s^x\rangle=\frac{R_t-u_tx_t}{\sqrt{(P_t-u_t^2)(Q_t-x_t^2)}}\delta(t-s)dtds.
\end{gather}
\end{subequations}
Introducing the Gaussian white-noise processes
\begin{subequations}
\begin{align}
\xi_x(t)&=\sqrt{\frac{(Q_t-x_t^2)f(h_t)}{N}}\frac{dW_t^x}{dt},\\
\xi_u(t)&=\sqrt{\frac{(P_t-u_t^2)f(h_t)}{N}}\frac{dW_t^u}{dt},
\end{align}
\end{subequations}
the full stochastic dynamics of the mesoscopic neural-mass model can be rewritten in the form of a Langevin equation
\begin{subequations}
\label{eq:langevin}
\begin{align}
\frac{dh}{dt}&=\frac{\mu(t)-h}{\tau}+J\lreckig{RA(t) +u\xi_x(t) + x\xi_u(t)}, \label{eq:h-meso-full-exp}\\
\frac{du}{dt}&=\frac{U_0-u}{\tau_F}+U(1-u)A(t) - U\xi_u(t),\\
\frac{dx}{dt}&=\frac{1-x}{\tau_D}-RA(t) -u\xi_x(t) - x\xi_u(t), \\
\frac{dP}{dt}&=2 \frac{U_0u-P}{\tau_F}+\mu^P(u)A(t) +2U \big(1 + u(U - 2) - U\big)\xi_u(t),\\
\frac{dQ}{dt}&=2 \frac{x-Q}{\tau_D}+\mu^Q(u,x,P,Q,R)A(t) + 2(u-1)x^2\xi_u(t) + 2u(u-2)x\xi_x(t),\\
\frac{dR}{dt}&=\frac{U_0x-R}{\tau_F}+\frac{u-R}{\tau_D}+\mu^R(u,x,P,R)A(t)\nonumber\\
&\quad+2\big(U(u - 1) - u\big)x\xi_u(t)+ \big(U(1 - u)^2 - u^2\big)\xi_x(t),
\end{align}
with
\begin{equation}
A(t)=\frac{1}{N}\frac{dn_t}{dt}.
\end{equation}
The Gaussian white-noise processes are characterized by their covariance functions
\begin{align}
\langle \xi_u(t)\xi_u(s)\rangle&=\frac{(P_t-u_t^2)f(h_t)}{N}\delta(t-s),\\
\langle \xi_x(t)\xi_x(s)\rangle&=\frac{(Q_t-x_t^2)f(h_t)}{N}\delta(t-s),\label{eq:langevin_xix}\\
\langle \xi_u(t)\xi_x(s)\rangle&=\frac{(R_t-u_tx_t)f(h_t)}{N}\delta(t-s).
\end{align}
\end{subequations}
Equation \eqref{eq:langevin} constitutes the jump-diffusion model for the full Tsodyks-Markram STP model with depression and facilitation.
In the case of a general impulse-response function $\kappa$, the derivation of the mesoscopic STP dynamics does not change. The only difference is that the dynamics for $h$ above needs to be changed to corresponding convolution equations. Therefore, for a general impulse-response function one only needs to replace Eq.~\eqref{eq:h-meso-full-exp} by
\begin{equation}
\label{eq:h-meso-full-kappa}
h(t)=\int_0^{t^{+}}\kappa(t-\hat{t})\left\{\frac{\mu(\hat{t})}{\tau}+J\lreckig{R(\hat{t}^{-})A(\hat{t})+u(\hat{t}^{-})\xi_x(\hat{t})+x(\hat{t}^{-})\xi_u(\hat{t})}\right\}\,d\hat{t}.
\end{equation}
\subsection*{Mesoscopic model with pure synaptic depression}
In order to obtain the jump-diffusion model Eq.~\eqref{eq:model-depress-3} only with short-term synaptic depression but without facilitation as considered in the Results section, we set $u\equiv U_0, P \equiv U_0^2$ and $R \equiv U_0x$. Then, $\mu^Q(u,x,P,Q,R) = -U_0(2-U_0)Q$ and $\xi_u(t) \equiv 0$, and the dynamics Eq.~\eqref{eq:langevin} reduce to
\begin{subequations}
\label{eq:langevin_depress}
\begin{align}
\frac{dx}{dt}&=\frac{1-x}{\tau_D}-U_0xA(t) -U_0\xi_x(t) , \\
\frac{dQ}{dt}&=2 \frac{x-Q}{\tau_D}-U_0(2-U_0)Qf(h(t)) \label{eq:Q-full} ,
\end{align}
\end{subequations}
with the Gaussian white noise process $\xi_x(t)$ defined as in Eq.~\eqref{eq:langevin_xix}.
In Eq.~\eqref{eq:Q-full}, we have neglected the noise terms and replaced the population activity $A(t)$ by its mean $f(h(t))$ because they enter the dynamics of $x$ only to order $N^{-3/2}$.
Finally, introducing the new mesoscopic variable $\tilde Q = Q-x^2$, we can combine Eqs.~\eqref{eq:langevin_depress} and \eqref{eq:langevin_xix} to arrive at the jump-diffusion model Eq.~\eqref{eq:model-depress-3}.
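The reduced depression-only dynamics can be sketched with a simple Euler--Maruyama scheme. This is only an illustration: the transfer function $f$, all parameter values, and the assumed $h$-equation $dh/dt=(\mu-h)/\tau + JU_0(xA+\xi_x)$ (i.e., an exponential impulse response) are placeholders, not the implementation used for the figures.

```python
import numpy as np

# Euler-Maruyama sketch of the depression-only jump-diffusion dynamics above.
# Everything numerical here is an illustrative assumption: the sigmoidal
# transfer function f, all parameter values, and the h-equation
# dh/dt = (mu - h)/tau + J*U0*(x*A + xi_x) (exponential impulse response).
rng = np.random.default_rng(2)

N = 500                                   # population size
dt, steps = 1e-4, 20_000                  # step [s], 2 s of dynamics
tau, tau_D = 0.05, 0.8                    # synaptic/depression time constants [s]
U0, J, mu = 0.4, 70.0, 1.4                # utilization, coupling, external input
f = lambda h: 20.0 / (1.0 + np.exp(-(h - 2.0)))   # assumed rate function [Hz]

h, x, Q = 0.0, 1.0, 1.0
for _ in range(steps):
    rate = f(h)
    dn = rng.poisson(N * rate * dt)       # jump part: population spike count
    A = dn / (N * dt)                     # empirical population activity
    # white noise xi_x with variance (Q - x^2) f(h) / N per unit time
    xi = np.sqrt(max(Q - x**2, 0.0) * rate / N) * rng.standard_normal() / np.sqrt(dt)
    dh = dt * ((mu - h) / tau + J * U0 * (x * A + xi))
    dx = dt * ((1.0 - x) / tau_D - U0 * x * A - U0 * xi)
    dQ = dt * (2.0 * (x - Q) / tau_D - U0 * (2.0 - U0) * Q * rate)
    h, x, Q = h + dh, x + dx, Q + dQ

print(h, x, Q)                            # state after 2 s of simulated time
```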
Again, in the case of a general impulse-response function $\kappa$, the stochastic differential equation \eqref{eq:jump-diff-depress-h} for $h(t)$ needs to be replaced by the corresponding integral expression
\begin{equation}
h(t)=\int_0^{t^{+}}\kappa(t-\hat{t})\left\{\frac{\mu(\hat{t})}{\tau}+JU_0\lreckig{x(\hat{t}^-)A(\hat{t})+\sqrt{\frac{\tilde Q f(h)}{N}}\xi_x(\hat{t})}\right\}\,d\hat{t}.
\end{equation}
\subsubsection*{Reduction to a pure diffusion process}
We can further reduce the jump-diffusion model Eq.~\eqref{eq:model-depress-3} in the large $N\gg 1$ limit by exploiting the Gaussian approximation of the Poisson shot noise Eq.~\eqref{eq:popact} representing the empirical population activity
\begin{equation}
A(t) = \frac{1}{N} \frac{dn_t}{dt} \approx f\big(h(t)\big) + \xi_A(t),
\end{equation}
where the Gaussian white noise process $\xi_A(t)$ has the auto-correlation function
\begin{equation}
\langle \xi_A(t)\xi_A(s)\rangle = \frac{f(h(t))}N \delta(t-s).
\end{equation}
Consequently, we find that
\begin{align*}
U_0xA(t) + U_0\xi_x(t) = U_0xf(h) + U_0 \big[ x\xi_A(t) + \xi_x(t)\big].
\end{align*}
We can simplify the term in brackets by capitalizing on the fact that $x\xi_A$ and $\xi_x$ are independent Gaussian white noises, whose sum is itself a Gaussian random variable with variance
\begin{align*}
\frac{x^2 f(h)}{N} + \frac{(Q-x^2) f(h)}{N} = \frac{Q f(h)}{N}.
\end{align*}
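This variance addition is easily verified numerically; all values below are placeholder assumptions:

```python
import numpy as np

# Numerical check of the variance addition above (placeholder values):
# x*xi_A and xi_x are independent, so their sum has variance
# x^2 f/N + (Q - x^2) f/N = Q f/N.
rng = np.random.default_rng(3)

N, fh = 1000, 15.0                        # population size, rate f(h)
x, Q = 0.6, 0.45                          # mesoscopic state with Q > x^2

n = 400_000
xi_A = np.sqrt(fh / N) * rng.standard_normal(n)
xi_x = np.sqrt((Q - x**2) * fh / N) * rng.standard_normal(n)
combined = x * xi_A + xi_x

print(np.var(combined), Q * fh / N)       # both ~ Q f(h) / N
```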
Finally, we can replace the term $U_0xA(t) + U_0\xi_x(t) $ in the $h$- and $x$-dynamics of the jump-diffusion model by
\begin{equation}
U_0xA(t) + U_0\xi_x(t) = U_0x f\big( h(t)\big) + U_0\sqrt{\frac{Q f\big( h(t)\big)}{N}}\,\xi(t),
\end{equation}
where $\xi(t)$ is a Gaussian white noise with auto-correlation function $\langle \xi(t)\xi(s) \rangle = \delta(t-s)$, and we retrieve, in an alternative way, the mesoscopic diffusion model Eq.~\eqref{eq:meso} with short-term synaptic depression.
As before, for a general impulse-response function $\kappa$, the stochastic differential equation \eqref{eq:diff-model-h} for $h(t)$ needs to be replaced by the corresponding integral expression
\begin{equation}
h(t)=\int_0^{t^{+}}\kappa(t-\hat{t})\left\{\frac{\mu(\hat{t})}{\tau}+JU_0\lreckig{x(\hat{t})f(h(\hat{t})) + \sqrt{\frac{Q(\hat{t}^{-})f(h(\hat{t}^{-}))}{N}}\xi(\hat{t})}\right\}\,d\hat{t}.
\end{equation}
\subsection*{Recurrent network parameters and numerical simulations}
In the Results section, we presented and analyzed the network dynamics of a single excitatory population consisting of $N$ LNP-STD spiking neurons following the microscopic dynamics Eq.~\eqref{eq:micro} or the mesoscopic dynamics Eq.~\eqref{eq:meso}/\eqref{eq:model-depress-3}. For the ring-attractor network model to study hippocampal replay dynamics, we used the microscopic dynamics Eq.~\eqref{eq:ring_micro} and the mesoscopic dynamics Eq.~\eqref{eq:ring_meso}.
The parameters used to obtain Figures~\ref{fig:meta}--\ref{fig:3env} are detailed in Table~\ref{table:parameters}.
The number $N$ of neurons per population $\alpha=1,\dots,M$ is indicated inside the Figures.
The model specification and parameters of the Romani-Tsodyks (RT) model for fatigue-induced hippocampal replay, with which we compared our results for the mesoscopic ring-attractor network model in the single and in the multiple environment case in Figs.~\ref{fig:1Env}--\ref{fig:3env}, respectively, can be found in \cite{RomTso15}.
\begin{table}[t]
\begin{adjustwidth}{-0.in}{0in}
\centering
\caption{
{\bf Parameters used in Figs.~\ref{fig:meta}--\ref{fig:3env}}}
\begin{tabular}{|l c c+c|c|c|c|}
\hline
{\bf Parameters} & &{\bf Fig.} & {\bf \ref{fig:meta}} & {\bf \ref{fig:updown}} & {\bf \ref{fig:1Env}/\ref{fig:1EnvB}} & {\bf\ref{fig:3env}} \\ \thickhline
Synaptic time constant & $\tau$ & [s] & \hspace{.1cm}0.05\hspace{.1cm} & \hspace{.1cm}0.05\hspace{.1cm} & \hspace{.1cm}0.01\hspace{.1cm} & \hspace{.1cm}0.01\hspace{.1cm} \\
Depression time constant & $\tau_D$ & [s] & 0.8 & 0.6 & 0.8 & 0.8\\
Utilization of synaptic resources & $U_0$ & & 0.4 & 0.4 & 0.8 & 0.8\\
Transfer function $f(h)$ & & & & & &\\
---Suprathreshold slope & $r$ & & 3.15 & 3.15 & 1.0 & 1.0\\
---Smoothness at threshold & $\alpha$& & 0.25 & 0.2 & 1.0 & 1.0\\
---Threshold & $h_0$ & [mV] & 2.0 & 2.0 & 0.0 & 0.0\\
Coupling constant & $J \cdot \tau$ & [mV] & 3.5 & 3.5 & -- & --\\
Uniform feedback inhibition & $J_0 \cdot \tau$ & [mV] & -- & -- & 13 & 16\\
Map-specific interaction & $J_1 \cdot \tau$ & [mV] & -- & -- & 30 & 25\\
External input (meso-,microscopic models) & $\mu$ & [mV] & 1.4 & 1.4 & -1.4 & -1.5\\
External input (only macroscopic model) & $\mu_\text{macro}$ & [mV] & -- & -- & -0.9 & -0.35\\
Number of populations & $M$ & & 1 & 1 & 100 & 300\\\hline
\hline
\end{tabular}\label{table:parameters}
\end{adjustwidth}
\end{table}
We performed numerical simulations of the microscopic, mesoscopic and macroscopic dynamics using an Euler–Maruyama scheme with time step $dt=10^{-4}\,\mathrm{s}$.
In the single-population scenario considered in Figs.~\ref{fig:meta} and \ref{fig:updown}, we ran the simulations for $T_\text{sim} = 100'000s$ to obtain significant statistics.
\subsubsection*{A circular environment}
In the ring-attractor network with a single environment stored in the synaptic connectivity, we ran the simulations long enough to have at least $5'000$ burst events, which allows for a meaningful comparison of the different models.
In Table~\ref{table:1env}, we list the simulation length $T_\text{sim}$ together with an overview over the simulation results.
\begin{table}[!ht]
\begin{adjustwidth}{-0.in}{0in}
\centering
\caption{
{\bf Simulation results complementing Fig.~\ref{fig:1Env}.}}
\begin{tabular}{|l+l|l|l|l|}
\hline
& {\bf Mesoscopic} & {\bf Microscopic} & {\bf RT model} & {\bf Macro$_{\mu=-0.9}$} \\ \thickhline
$T_\text{sim}$ [s] & 4000 & 4000 & 2350 & 2350\\ \hline
\# bursts & 5030 & 5040 & 5096 & 5167 \\
slope(\# peaks/duration) & 9.28 & 9.26 & 7.80 & 6.85 \\
slope(distance/duration) & 17.27 & 17.17 & 17.56 & 16.81 \\
mean(IBI) & 0.652 & 0.651 & 0.293 & 0.316 \\
CV(IBI) & 0.846 & 0.842 & 0.794 & 0.847 \\
skewness $\gamma_s$(IBI) & 0.917 & 0.940 & 1.230 & 1.224 \\
resc.\ skewness $\alpha_s$(IBI) & 0.361 & 0.372 & 0.516 & 0.481 \\
kurtosis $\gamma_e$(IBI) & -0.115 & -0.047 & 0.499 & 0.454 \\
resc.\ kurtosis $\alpha_e$(IBI) & -0.011 & -0.004 & 0.053 & 0.042 \\\hline
\# NLE ($>1$ peak) & 1019 & 967 & 939 & 1140 \\
fraction(NLE/bursts) & 20.3\% & 19.2\% & 18.4\% & 22.1\% \\
fraction(forward/NLE) & 51.03\% & 47.88\% & 51.54\% & 49.39\% \\
mean(abs(NLE speed)) & 12.41 & 12.54 & 12.79 & 12.64 \\\hdashline
Serial correlations & & & & \\
Lag 1 (event speed) & 0.048 & 0.054 & -0.563 & -0.344 \\
Lag 1 (forward/backward) & 0.062 & 0.062 & -0.525 & -0.331 \\
Lag 2 (event speed) & -0.043 & -0.010 & 0.434 & 0.229 \\
Lag 2 (forward/backward) & -0.041 & -0.001 & 0.401 & 0.226 \\
Lag 3 (event speed) & -0.013 & 0.053 & -0.320 & -0.074 \\
Lag 3 (forward/backward) & -0.018 & 0.056 & -0.312 & -0.082 \\
Lag 4 (event speed) & -0.010 & 0.011 & 0.227 & 0.021 \\
Lag 4 (forward/backward) & -0.015 & 0.007 & 0.215 & 0.024 \\
Lag 5 (event speed) & 0.008 & -0.032 & -0.139 & 0.058 \\
Lag 5 (forward/backward) & 0.007 & -0.033 & -0.135 & 0.052 \\\hline
\hline
\end{tabular}
\begin{flushleft} We computed the statistics for the sequence of interburst intervals (IBI) $T_i, i=1,2,\dots,$ as follows: the mean is the first cumulant $\kappa_1 = \langle T_i \rangle$; the coefficient of variation (CV) is CV$=\sqrt{\kappa_2}/\kappa_1$ with second cumulant $\kappa_2 = \langle T_i^2\rangle - \kappa_1^2$; the skewness is $\gamma_s = \kappa_3 \kappa_2^{-3/2}$ with third cumulant $\kappa_3 = \langle T_i^3\rangle - 3\kappa_1\kappa_2 - \kappa_1^3$; the rescaled skewness is $\alpha_s = \gamma_s / (3CV) = \kappa_1\kappa_3/(3\kappa_2^2)$; the kurtosis is $\gamma_e = \kappa_4 \kappa_2^{-2}$ with fourth cumulant $\kappa_4 = \langle T_i^4\rangle - 4\kappa_1\kappa_3 - 3\kappa_2^2 - 6\kappa_1^2\kappa_2 - \kappa_1^4$; the rescaled kurtosis is $\alpha_e = \gamma_e / (15CV^2) = \kappa_1^2\kappa_4/(15\kappa_2^3)$. Similar to the definition of the CV, for which the Poisson process serves as a reference for the IBI variability with CV$=1$, the rescaled skewness $\alpha_s$ and rescaled kurtosis $\alpha_e$ use the inverse Gaussian distribution as a reference. Values of $\alpha_s$ larger (smaller) than $1$ and $\alpha_e$ larger (smaller) than $1$ indicate that the IBI distribution is more (less) skewed and more (less) peaked, respectively, than an inverse Gaussian, see \cite{SchFis10}.
\end{flushleft}
\label{table:1env}
\end{adjustwidth}
\end{table}
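The cumulant-based IBI statistics defined in the table note can be sketched as follows; the synthetic inverse-Gaussian sample at the end is only a sanity check of the rescaled measures, not the simulation data.

```python
import numpy as np

# Sketch of the IBI statistics defined in the table note: cumulants, CV,
# skewness/kurtosis, and the rescaled measures alpha_s = gamma_s/(3 CV) and
# alpha_e = gamma_e/(15 CV^2), for which the inverse Gaussian distribution
# gives the reference value 1.
def ibi_statistics(T):
    T = np.asarray(T, dtype=float)
    k1 = np.mean(T)                                   # mean
    k2 = np.mean(T**2) - k1**2                        # variance
    k3 = np.mean(T**3) - 3 * k1 * k2 - k1**3
    k4 = np.mean(T**4) - 4 * k1 * k3 - 3 * k2**2 - 6 * k1**2 * k2 - k1**4
    cv = np.sqrt(k2) / k1
    gamma_s = k3 * k2**-1.5                           # skewness
    gamma_e = k4 * k2**-2.0                           # (excess) kurtosis
    return k1, cv, gamma_s, gamma_s / (3 * cv), gamma_e, gamma_e / (15 * cv**2)

# sanity check on synthetic inverse Gaussian IBIs (not the simulation data)
rng = np.random.default_rng(4)
k1, cv, g_s, a_s, g_e, a_e = ibi_statistics(rng.wald(1.0, 5.0, size=500_000))
print(a_s, a_e)                                       # both close to 1
```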
In more detail, we defined burst events as contiguous epochs of the averaged population activity above a certain threshold (taken as the activity averaged across neurons and the whole simulation period).
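A minimal sketch of this burst-event definition (array and function names are illustrative, not taken from the simulation code):

```python
import numpy as np

# Sketch of the burst-event definition above: contiguous epochs where the
# activity exceeds its time-averaged value.
def detect_bursts(activity):
    threshold = activity.mean()           # threshold = time-averaged activity
    above = (activity > threshold).astype(int)
    edges = np.diff(above, prepend=0, append=0)
    starts = np.flatnonzero(edges == 1)   # rising edges: burst starts
    ends = np.flatnonzero(edges == -1)    # falling edges: exclusive ends
    return list(zip(starts, ends))

activity = np.array([0., 0., 3., 4., 0., 0., 5., 6., 7., 0.])
print(detect_bursts(activity))            # two bursts: (2, 4) and (6, 9)
```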
There is an almost perfect agreement between the microscopic network and our mesoscopic description: First, the number of bursts ($5'040$ vs.\ $5'030$) coincides up to an error of less than $0.2\%$. Second, the slopes of the linear regression between the number of peaks and the duration per burst are almost the same, see also Fig.~\ref{fig:1Env}Dii and Eii, and our model prediction ($\sim9.3$) is closer to the value observed experimentally ($\sim10.0$, \cite{DavKlo09,GupvdM10}) than the macroscopic model predictions.
Third, linear regressions between the distance traveled during an event and its duration yield almost identical slopes for the micro- and the mesoscopic models (Fig.~\ref{fig:1Env}Diii and Eiii).
Fourth, the means and the coefficients of variation (CV) of the interburst intervals, i.e.\ the times from the end of the $k$th burst until the start of the $(k+1)$st burst, also coincide.
Next, we defined nonlocal replay events (NLEs) as bursts of the average activity with more than one peak so that a transient traveling wave is visible in the density plots in Fig.~\ref{fig:1Env}Bii and Cii, resembling a hippocampal replay pattern.
Among all burst events, around $20\%$ are NLEs, which is consistent across all four models. Around half of all the NLEs travel in anti-clockwise/negative direction (``forward replay'') and the other half in clockwise/positive direction (``backward replay'' or ``preplay''); this feature is also consistent across the different models. To determine the absolute event speed per NLE, see Fig.~\ref{fig:1Env}Div and Eiv, we divided the distance of the traveled path (irrespective of whether in forward or backward direction) by the duration of the NLE. The means of the absolute NLE speeds in all four models were again close to each other, which stresses the robustness of our findings across parameters and models.
Finally, we also investigated serial correlations of the NLEs. Positive serial correlations of the event speed (now taking into account also the travel direction by considering negative speed for backward replays) at lag $n$ indicate that the $k$th and the $k+n$th NLE are more likely to travel in the same direction, whereas negative correlations indicate these NLEs to travel in opposite directions.
While the macroscopic and the RT model showed strong negative correlations at lag $1$ and positive correlations at lag $2$, correlations in the mesoscopic and microscopic models were negligible, see also Fig.~\ref{fig:1EnvB}B. Besides, computing the serial correlations not for the (directional) event speed, but for a binary vector of forward/backward replay directions (by taking the sign of the directional event speed) yielded comparable results (see Table~\ref{table:1env}).
\subsubsection*{Multiple circular environments}
In the ring-attractor network with three circular environments stored in the synaptic connectivity, we ran micro- and mesoscopic simulations long enough to match the number of bursts in the deterministic (macroscopic and RT) model simulations of length $T_\text{sim} = 1'000s$, see Table~\ref{table:3env}.
To reduce confounding asymmetries in the underlying synaptic connectivity structure, we constructed the selectivity vectors $\zeta_\alpha$ in Eq.~\eqref{eq:J_3env} pseudo-randomly while guaranteeing that the number of bursts was equally distributed across the three environments, see Table~\ref{table:3env}.
More precisely, we created the binary selectivity vectors $\zeta_\alpha$ under the constraint that exactly $fM = 90$ of the $M=300$ units $\alpha = 1,\dots,M$ are selective for each environment $k \in \{1,2,3\}$.
Out of these $90$ selective units for environment $k$, units $1,\dots,7$ were selective for all three environments, units $8,\dots,17$ were selective for environments $k$ and $j$ and units $18,\dots,27$ for environments $k$ and $l$ with $j,l \in \{1,2,3\}, k \neq j \neq l\neq k$. The remaining $63$ units were exclusively selective for environment $k$. After this selection process, we randomly shuffled these $90$ units and drew unique, evenly distributed place field angles $\theta_\alpha^k \in \{2 \pi/90, 4\pi/90,\dots, 2\pi\} $ for each unit $\alpha = 1,\dots,90$ selective for environment $k$.
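The constrained construction described above can be sketched as follows (before the random shuffling and place-field-angle assignment):

```python
import numpy as np

# Sketch of the constrained construction above: 7 units selective for all
# three environments, 10 for each environment pair, and 63 exclusively per
# map, giving exactly 90 selective units per environment (shuffling and
# place-field angle assignment would follow as in the text).
M, n_env = 300, 3
zeta = np.zeros((M, n_env), dtype=int)

idx = 0
zeta[idx:idx + 7, :] = 1                  # selective for all three maps
idx += 7
for k, j in [(0, 1), (0, 2), (1, 2)]:     # 10 units per environment pair
    zeta[idx:idx + 10, [k, j]] = 1
    idx += 10
for k in range(n_env):                    # 63 exclusive units per map
    zeta[idx:idx + 63, k] = 1
    idx += 63                             # remaining units stay non-selective

print(zeta.sum(axis=0))                   # [90 90 90]
```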
To quantify which subsequences occurred more frequently than others, we computed the probabilities for subsequences with three distinct environments $(k,j,l)\in\{1,2,3\}^3$ with $k\neq j \neq l \neq k$ by dividing the number of occurrences of a particular subsequence by the number of all possible 3-sequences (= number of all bursts $- 2$). The results are shown in Fig.~\ref{fig:3env}.
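A sketch of this subsequence count (the example burst sequence is made up for illustration):

```python
from itertools import permutations

# Sketch of the subsequence statistics above: probability of each ordered
# triple of three distinct environments among consecutive bursts, normalized
# by the number of all possible 3-sequences (= number of bursts - 2).
def triple_probabilities(env_sequence):
    n_triples = len(env_sequence) - 2
    counts = {p: 0 for p in permutations((1, 2, 3), 3)}
    for triple in zip(env_sequence, env_sequence[1:], env_sequence[2:]):
        if triple in counts:              # keep only triples of distinct maps
            counts[triple] += 1
    return {p: c / n_triples for p, c in counts.items()}

probs = triple_probabilities([1, 2, 3, 1, 1, 2, 3, 2, 1, 3])
print(probs[(1, 2, 3)])                   # 2 of 8 possible 3-sequences: 0.25
```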
\begin{table}[!ht]
\begin{adjustwidth}{-0.in}{0in}
\centering
\caption{
{\bf Simulation results complementing Fig.~\ref{fig:3env}.}}
\begin{tabular}{|l+l|l|l|l|}
\hline
& {\bf Mesoscopic} & {\bf Microscopic} & {\bf RT model} & {\bf Macro$_{\mu=-0.35}$} \\ \thickhline
$T_\text{sim}$ [s] & 1350 & 1350 & 1000 & 1000\\ \hline
\# bursts & 2291 & 2296 & 2354 & 2259\\
bursts in env.\ 1 & 34.57\% & 33.71\% & 32.71\% & 33.24\% \\
bursts in env.\ 2 & 31.12\% & 31.66\% & 34.45\% & 33.95\% \\
bursts in env.\ 3 & 34.31\% & 34.63\% & 32.84\% & 32.80\% \\\hline
\hline
\end{tabular}
\label{table:3env}
\end{adjustwidth}
\end{table}
\section*{Acknowledgments}
This research has received funding from the European Union’s Horizon 2020 research and innovation
programme under the Marie Sk{\l}odowska-Curie grant agreement no. 101032806 and the Swiss National Science Foundation (grant no. 200020\_184615).
\nolinenumbers
\section{Introduction}
Ultrasound (US) and Magnetic Resonance Imaging (MRI) signals are highly complementary. MRI is based on magnetic and RF fields and can achieve diversified soft-tissue contrasts, while US imaging is based on longitudinal pressure waves and offers a convenient, relatively low-cost approach to diagnostic imaging with high temporal resolution. Efforts have been made to combine these two very different modalities, for US-MRI image fusion \cite{Petrusca_2013}, as well as prospective motion compensation in MRI \cite{Feinberg_2010}, using brightness mode (B-mode) ultrasound. A potentially useful idea in the context of image-guided intervention would be to learn the appearance of free-breathing MRI images during a training stage, then estimate them later on when MRI scanning may not be available anymore, for example after the patient has left the MRI suite. Whether on the same day or a different day, the ability to generate MRI contrast based solely on US signals would be helpful as the patient proceeds to other diagnostic and/or therapy device(s), to continue generating MRI-like images even as the patient lies in a positron-emission tomography (PET) scanner or a radiotherapy device, for example. To this end,
the approach introduced in \cite{Preiswerk_2016} and the publicly-available software\footnote{https://github.com/fpreiswerk/OCMDemo} were considerably expanded here to allow the rapid synthesizing of MRI contrast using a long-term recurrent convolutional network inspired from the video-recognition work in \cite{Donahue_2014}.
An MR-compatible single-element ultrasound transducer \cite{Schwartz_2013} and a 3D-printed capsule, collectively referred to here as an `organ-configuration motion' (OCM) sensor, acquired amplitude mode (A-mode) US signals of respiratory organ motion. In contrast to the conventional 2D spatial interpretation of US signals through delay-and-sum beamforming, the OCM's A-mode signals were not spatially encoded but provided a high temporal resolution signature of abdominal configuration, sensitive over a region in the area of sensor placement. Fast OCM signals (100 fps) can be correlated with slower-rate MRI acquisitions (1 fps), to estimate fast synthetic MR images of respiratory organ motion at the rate of the OCM signals (100 fps).
This could be done using kernel density estimation (KDE) \cite{Nadaraya_1964,Watson_1964} to model this relationship in a non-parametric way, as data is acquired during online learning, as proposed in \cite{Preiswerk_2015,Preiswerk_2016}. KDE is well suited for online learning, because there is no separation into training and inference stage. However, this comes at a computational cost, as the time complexity at inference depends on the size of the dataset. In \cite{Preiswerk_2016}, an image reconstruction time of 45 ms for a single 2D MR image was reported using such a KDE approach, on a relatively-small database accumulated over \SI{2}{\minute} of hybrid OCM-MRI data. Furthermore, the inter-fraction variability of OCM signals was reported to be significant, which would presumably prevent any removal/re-attachment of an OCM probe, and confuse the KDE-based processing. As a result, any scenario involving the use of MRI+OCM data acquired on a given day to supplement, for example, radiotherapy treatments performed on a different day could not be considered, as the removal and re-attachment of the sensor days later would destroy the ability to generate accurate MR images from OCM signals. Lastly, due to the curse of dimensionality being a limiting factor in kernel methods, a small subset of depth values had to be pre-selected in the OCM traces in \cite{Preiswerk_2016}, as a trade-off between information vs. dimensionality of the data.
Recently, artificial neural networks have become state-of-the-art models for computer vision (CV) and natural language processing (NLP) \cite{LeCun_Bengio_Hinton_2015}. Feed-forward architectures, most notably convolutional neural networks (CNNs) \cite{Lecun_1998} are used to automatically extract hierarchical features from (labeled) data, while recurrent networks (RNNs), typically based on long-short term memory (LSTM) units \cite{Hochreiter_1997}, allow temporal structures to be learned from data.
We propose to use a combined CNN-LSTM model, called a long-term recurrent convolutional network (LRCN) \cite{Donahue_2014}, to learn the relationship between OCM sensor data and fully reconstructed MR images end-to-end. Our method addresses all the aforementioned challenges associated with KDE: by directly learning a mapping between OCM signals and MR images, the computational cost of image reconstruction is shifted from inference time to the training stage. Hence, the cost of reconstructing an image becomes independent of the training set size. Our approach can therefore, in principle, be scaled to estimating several planes at once, i.e., 4D-MRI, at a high temporal rate. Our pre-processing step, closely related to Doppler processing, makes OCM signals more robust against signal changes that have little to do with physiological motion, and more
to do with inconsequential details on exact sensor placement and/or anatomy. As a consequence, the Doppler-like pre-processing may help avoid registration steps when removing and re-attaching OCM sensors. Lastly, the curse of dimensionality is defeated since, unlike kernel methods, the proposed method does not require evaluating a high-dimensional similarity measure between any new OCM signal and all signals from the training set.
\section{Materials and Methods}
Hybrid OCM-MRI data were acquired on \nSub subjects following informed consent using an IRB-approved protocol. Scanning was performed on a Siemens Verio 3T system, using a T1-weighted spoiled gradient echo MRI sequence with two-fold parallel imaging acceleration and 5/8 partial-Fourier acceleration. The US transducer at the heart of the OCM sensor was either a 5MHz (subjects 1-4) or 1MHz (subjects 5 and 6) MR-compatible transducer (Imasonics). The transducer was enclosed in a custom 3D-printed capsule that allowed for quick and easy attachment to the skin, regulation of pressure through a screwable lid (see Figure \ref{fig:hardware_setup}), and retention of water-based US gel for acoustic coupling. The 1MHz transducer employed in later subjects achieved greater signal penetration; nevertheless, both 5MHz and 1MHz OCM signals appeared equally appropriate for our purpose.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.42\textwidth}
\includegraphics[width=\textwidth]{figure1a.png}
\caption{}
\label{fig:setup_transducer}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.34\textwidth}
\includegraphics[width=\textwidth]{figure1b.png}
\caption{}
\label{fig:OCM_data}
\end{subfigure}
\caption{a) 3D model rendering (1) and individual parts (2) of an OCM sensor. The US transducer (a 1 MHz version is depicted in (a)) was fitted into a 3D-printed capsule of our own design (green parts), which housed water-based gel for acoustic coupling and allowed for the pressure onto the skin to be adjusted by twisting the screw-like lid (3). Two-sided tape on the bottom was used for adhesion on the skin. b) Visualization of unprocessed signals $u$ and phase-processed versions $v$ over a 3 s window. Respiratory motion is more pronounced in phase-processed signals (bright/dark pixels correspond to traces acquired during inspiration/expiration, respectively).}\label{fig:hardware_setup}
\end{figure}
The OCM data acquisition was synchronized with the scanner's repetition rate, TR = \SI{10}{\milli\second}, using dedicated hardware and minor modifications to the MRI pulse sequence: At the beginning of each TR interval, the scanner was programmed to generate an optical synchronization pulse, which was then converted to a TTL voltage pulse using dedicated hardware. These pulses were used to trigger the OCM acquisition, at the rate of exactly one OCM trace acquisition per TR interval, thus precisely synchronizing the MRI and OCM streams of data. The purpose of such synchronization was two-fold: to allow MRI and OCM data to be unambiguously located on a common time axis, and to avoid the OCM sensor being fired during an MRI acquisition window, which would have caused artifacts in the MRI images. A total of 60 k-space lines and corresponding OCM signals were acquired per image. Individual OCM traces $u$ were sampled at $f_s = \SI{100}{\mega\sample\per\second}$ for $t_s = \SI{200}{\micro\second}$, yielding $D = f_s \cdot t_s = 20\,000$ samples per trace. The window from index 1000 to 8000 was retained for further processing, and downscaled to $d=560$ samples. MR images of the breathing liver in the sagittal plane were acquired at a rate of \SI{0.85}{fps}. Figure \ref{fig:hardware_setup} gives an overview of the OCM sensor and data.
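The per-trace numbers quoted above can be sketched as follows; linear interpolation for the downscaling is an assumption, since the exact resampling method is not specified in the text.

```python
import numpy as np

# Sketch of the per-trace numbers above: 20,000 raw samples per trace, the
# retained window [1000, 8000), and downscaling to d = 560 samples (linear
# interpolation is an assumption; the resampling method is not specified).
fs, ts = 100e6, 200e-6
D = int(fs * ts)                          # 20000 samples per raw OCM trace

rng = np.random.default_rng(5)
u_raw = rng.standard_normal(D)            # stand-in for one raw trace

window = u_raw[1000:8000]                 # 7000 retained samples
d = 560
u_down = np.interp(np.linspace(0, len(window) - 1, d),
                   np.arange(len(window)), window)
print(D, window.shape, u_down.shape)
```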
\subsubsection{Preprocessing of OCM signals and MR images:}
Raw (magnitude) OCM signals $u(s,t)$ are highly sensitive to physiological motion along $t$ (the repeat index, as OCM traces are repeatedly acquired every TR = \SI{10}{\milli\second}), but unfortunately, they tend to also prove highly sensitive to mostly unimportant details along $s$ (the sampling index) relating to sensor placement and underlying anatomy. In the process to separate the former from the latter, OCM signals were first transformed into a complex entity:
\begin{equation}
\hat{u}(s,t) = \mathcal{F}_s^{-1}(\Omega(\mathcal{F}_s(u(s,t)))) = |u(s,t)| (\cos \theta(s,t) + i \sin \theta(s,t)),
\end{equation}
where $\mathcal{F}_s$ is the discrete Fourier transform along $s$, and $\Omega$ is a Fermi filter that cancels negative as well as very high frequencies ($>10 \cdot f_0$, where $f_0$ is the transducer center frequency). In analogy with Doppler ultrasound, we shall now consider $\theta(s,t)$, the complex angle of $\hat{u}(s,t)$ for further analysis. Variations along $s$ have more to do with the object itself than with how the object moves; for this reason the signal evolution along $t$, i.e., from trace to trace, was more closely linked to internal organ motion than variations along $s$. In particular, from $\theta(s,t)$, speed can be computed according to
\begin{eqnarray}
v(s,t) &=& \alpha \cdot \frac{d\theta(s,t)}{dt} = \alpha \cdot \frac{\theta(s,t) - \theta(s,t-1)}{2},
\end{eqnarray}
with $\alpha = \frac{0.5 \cdot \lambda}{360}$, where $\lambda$ is the wavelength in mm. Figure \ref{fig:OCM_data} visualizes $u$ and $v$. We denote the vector of signals of a single timestep $t$, over the whole signal depth $s = \{1, \ldots, d\}$, as $\mathbf{v}(t) \vcentcolon= [v(1,t), \ldots, v(d,t)]^T$.
For further processing, OCM signals were rearranged as $X_t \vcentcolon= [\mathbf{v}(t{-}n{+}1), \ldots, \mathbf{v}(t)]$, combining the most recent signal history of length $n=300$ (\SI{3}{\second}) in the form of a 2d image patch. This format proved well suited as input to the neural network described in the next section.
Instead of explicitly modeling all pixels of the MR image domain (i.e., the model output dimension), we exploit correlations between pixels by compressing the images first, using Principal Component Analysis (PCA); 10 principal components are retained and used as target variables $\mathbf{y}_t$ for the neural network. This compression from size 192\,px $\times$ 192\,px into a vector of 10 principal components for each image allows us to significantly reduce the number of parameters in our model, at the cost of an acceptable loss of high-frequency image content.
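The target compression can be sketched with a plain SVD-based PCA (random data stands in for the MR images; scikit-learn's PCA would give an equivalent result):

```python
import numpy as np

# Sketch of the PCA target compression: flatten each 192x192 image and
# project it onto the top 10 principal components, which serve as the
# regression targets y_t (random data stands in for the MR images).
rng = np.random.default_rng(7)
n_images, height, width = 120, 192, 192
images = rng.standard_normal((n_images, height * width))

mean = images.mean(axis=0)
X = images - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)  # principal axes in Vt
components = Vt[:10]                              # top 10 components

y = X @ components.T                              # network targets y_t
reconstruction = y @ components + mean            # lossy image estimate
print(y.shape, reconstruction.shape)
```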
\subsection{Network architecture}
In \cite{Preiswerk_2016}, KDE is used to compute the expectation of unknown MR images $I_t$, given new OCM signals $X_t$ and a database of previously seen data $D_t = \{I_\tau, U_\tau | \tau < t\}$,
\begin{equation}
\mathbb{E}_{I \sim p(I | X)} [ I_t | X_t, D_t ]
\label{eq:expectation}
\end{equation}
From a learning theory perspective, our motivation to replace KDE with a neural network to solve Equation \ref{eq:expectation} is guided by the following result from the calculus of variations. We can view a neural network as any function $f$, provided that the network is sufficiently powerful. Learning then becomes equivalent to choosing the best function according to the variational problem
\begin{equation}
f^* = \underset{f}{\operatorname{argmin}} \,\, \mathbb{E}_{I, X \sim p_{data}} \, ||I_t - f(X)||^2,
\label{eq:argmin}
\end{equation}
which has a solution at
\begin{equation}
f^*(X) = \mathbb{E}_{I \sim p_{data}(I|X)} [I].
\end{equation}
In the hypothetical case where infinitely many samples are available, Equation \ref{eq:argmin} implies that the mean squared error loss leads to an optimal estimate of Equation \ref{eq:expectation}, so long as $f^*$ is part of the class of functions we optimize over. In practice, of course, a limited amount of data is available, and regularization techniques are typically applied.
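This variational fact is easy to verify empirically on a toy problem: for a discrete input, the empirical squared-loss minimizer of Equation \ref{eq:argmin} over unconstrained functions is exactly the per-input mean of the targets. A minimal sketch, with all names and numbers purely illustrative:

```python
import numpy as np

def conditional_mean_predictor(x, y):
    """For discrete inputs x, return the lookup table f*(v) = mean of y
    over samples with x = v, i.e., the empirical conditional expectation
    that minimizes the squared loss."""
    values = np.unique(x)
    return values, np.array([y[x == v].mean() for v in values])
```

Perturbing this predictor in any direction can only increase the empirical mean squared error, mirroring the argument that the loss is minimized by the conditional expectation.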
The major difference to non-parametric approaches, including KDE, is that a fixed set of model parameters is obtained. If the number of neurons is treated as a constant, the time complexity of a single prediction is $\mathcal{O}(1)$, while a single prediction using KDE has complexity $\mathcal{O}(Nd)$, where $N$ is the number of OCM training samples in $D$, and $d$ is their dimensionality.
Inspired by recent work in image captioning and related tasks in video analysis, a long-term recurrent convolutional network (LRCN) \cite{Donahue_2014} architecture is used to learn the mapping $f(\cdot)$ from signals $X_t$ to MR images, $f(X_t) = \mathbf{y}_t$. The network consists of convolutional layers, $\phi_\tau(\cdot)$, followed by recurrent layers $\psi_\upsilon(\cdot)$, $f_{\tau,\upsilon}(X_t) = \psi_\upsilon({\phi_\tau(X_t))}$, both with their respective set of parameters $(\tau, \upsilon)$. For brevity, we omit these parameters from here on. Figure \ref{fig:LRCN_overview_overall} shows an overall picture of the network.
The purpose of the convolutional layers is to extract features over the spatial dimension, $s$ (i.e., columns), from input signals $X_t$. To this end, 1-d convolutions are applied along $s$; each output feature map corresponds to one 1-d filter applied along $s$ to all columns of the input. Thus, the output of convolutional layer $\phi_i(\cdot)$ is a set of $k_i$ row vectors $V_i = [\{\mathbf{\tilde{v}}_i^1\}^T, \ldots,\{\mathbf{\tilde{v}}_i^{k_i}\}^T]$ (another image), each row being a convolved version of all columns in $V_{i-1}$ (see Fig. \ref{fig:LRCN_overview_cnn}). At the last convolutional layer $l$, $k_l=1$, so its output $V_l = \{\mathbf{\tilde{v}}_l\}^T$ represents a one-dimensional encoding of the signal evolution over the $n$ time steps contained in $X_t$. It is now the task of the following recurrent layers to learn how this encoding evolves over time. Recurrent layer $i$ transforms its input according to $\psi_i(V_i,h_{t-1})$, where $h_{t-1}$ is its internal state from the previous time step. Through this recurrence, coupled with an internal memory state, recurrent units are able to learn from the arbitrarily distant past, if necessary. Long-short term memory (LSTM) units \cite{Hochreiter_1997} are used here in all recurrent layers. Finally, a densely-connected output layer at depth $L$ maps $V_{L-1}$ to final outputs $\mathbf{y} = g(W V_{L-1})$, with weight matrix $W$ and linear activation $g(\cdot)$. For all experiments, the network structure was set to 4 convolutional layers with 64, 32, 16 and 1 output channels, respectively, followed by 2 recurrent layers with 10 output channels each. Both convolutional and recurrent units use $\tanh$ activations. The network architecture is depicted in Figure \ref{fig:LRCN_overview_cnn}. Not shown in the figure are average pooling operations (pool size 2) between all convolutional layers, as well as dropout layers (rate 0.2), active during training, on all convolutional layers.
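The flow of tensor shapes through this architecture can be mimicked with a minimal numpy forward pass. This is a schematic sketch only: untrained random weights stand in for learned parameters, a plain $\tanh$ recurrence replaces the LSTM units, and the kernel sizes, the way the last layer collapses the depth axis, and the toy input size are our assumptions.

```python
import numpy as np

def conv1d_cols(X, kernels):
    """Valid 1-d convolution along the depth axis s, applied to every
    column (time step); X: (c_in, d, n), kernels: (c_out, c_in, k)."""
    c_out, c_in, k = kernels.shape
    _, d, n = X.shape
    out = np.zeros((c_out, d - k + 1, n))
    for o in range(c_out):
        for i in range(c_in):
            for j in range(d - k + 1):
                out[o, j] += kernels[o, i] @ X[i, j:j + k]
    return np.tanh(out)

def avg_pool(X, p=2):
    """Average pooling of size p along the depth axis."""
    c, d, n = X.shape
    d2 = d // p
    return X[:, :d2 * p].reshape(c, d2, p, n).mean(axis=2)

def simple_rnn(seq, Wx, Wh):
    """Plain tanh recurrence over time (stand-in for an LSTM)."""
    h = np.zeros(Wh.shape[0])
    for x_t in seq.T:                 # iterate over the n time steps
        h = np.tanh(Wx @ x_t + Wh @ h)
    return h

def lrcn_forward(X, rng):
    """Schematic forward pass for one patch X of shape (d, n)."""
    maps = X[None]                               # (1, d, n)
    for c_out in (64, 32, 16):                   # first three conv layers
        k = 0.1 * rng.normal(size=(c_out, maps.shape[0], 3))
        maps = avg_pool(conv1d_cols(maps, k))    # pool size 2 along s
    k = 0.1 * rng.normal(size=(1, maps.shape[0], maps.shape[1]))
    maps = conv1d_cols(maps, k)                  # last layer: one channel
    seq = maps[:, 0, :]                          # (1, n) encoding over time
    h = simple_rnn(seq, rng.normal(size=(10, 1)),
                   0.1 * rng.normal(size=(10, 10)))
    return rng.normal(size=(10, 10)) @ h         # dense map to 10 coefficients
```

Running this on a random patch produces a length-10 output vector, matching the 10 PCA coefficients predicted by the trained network.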
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{figure2a.png}
\caption{}
\label{fig:LRCN_overview_overall}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.49\textwidth}
\includegraphics[width=\textwidth]{figure2b.png}
\vspace{0.23cm}
\caption{}
\label{fig:LRCN_overview_cnn}
\end{subfigure}
\caption{a) Overview of unrolled long-term recurrent convolutional network (LRCN) structure. 1-d convolutional layers extract image features from the input, and recurrent layers learn the temporal evolution of transformed features, using LSTM units. A densely-connected layer maps to 10 PCA coefficient outputs, $\mathbf{y}_t$, used to reconstruct the final image $\hat{I}_t$. b) Detailed view of a 1-d convolutional layer. Convolutions are applied along the spatial dimension $s$ only, transforming all columns to a vector $\mathbf{v}$. Colors green, blue and yellow represent different feature maps, i.e., learned kernels. A dense output layer transforms the LSTM outputs into a vector of principal components, and the inverse PCA transform (PCA$^{-1}$) restores the final image.}\label{fig:LRCN_overview}
\end{figure}
\section{Results and Discussion}
Each of the \nSub datasets was separated into a training set of $\SI{60}{\second}$ (100 MR images of size 192 $\times$ 192\,px with 100 corresponding OCM signal histories of 300 $\times$ 560\,px) and a test set of $\SI{30}{\second}$ (50 MR images with OCM signals). For each subject, a separate LRCN model was trained on the training set and evaluated on the test set. The mean squared error loss function was employed to optimize the $28,471$ trainable parameters of each of the 7 networks, using the Adam optimizer (learning rate 0.001, $\beta_1$=0.9, $\beta_2$=0.999) over 1000 epochs. Training time was below \SI{5}{\minute} per dataset on an NVIDIA Titan X GPU. Code and sample data are available online\footnotemark[1]. Figure \ref{fig:MRI_comparison} compares MRI reconstructions from the test set with their ground-truth. High-speed MRI reconstructions at the rate of OCM signals are best appreciated in video format\footnotemark[1]. \footnotetext[1]{\url{https://github.com/fpreiswerk/OCM-LRCN}}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.56\textwidth}
\includegraphics[width=\textwidth]{figure3a.png}
\caption{}
\label{fig:MRI_comparison}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.40\textwidth}
\includegraphics[width=\textwidth]{figure3b.png}
\caption{}
\label{fig:KDE_comparison}
\end{subfigure}
\caption{a) Top row: M-mode display of all 50 test images of subjects 1-3. Reconstruction, ground-truth (in PCA space) and difference images side-by-side. Bottom row: Random image from subject 1 from test set, ground-truth (in PCA space) and difference image. b) Comparison of LRCN vs. KDE approach\cite{Preiswerk_2016}, where an average error of \SI{1}{pixel} was reported through manual validation by a radiologist. LRCN-based reconstructions are smoother but comparable.\label{fig:KDE_comparison_all}}
\end{figure}
We used publicly available code and data from \cite{Preiswerk_2016} for the KDE approach to compare the two methods, as shown by the M-mode image in Figure \ref{fig:KDE_comparison}.
In \cite{Preiswerk_2016}, a CPU reconstruction time of \SI{45}{\milli\second} per frame for a single plane was reported using KDE, on \SI{2}{\minute} of data. Using LRCN on the CPU, with the same hardware used in \cite{Preiswerk_2016}, one reconstruction took only \SI{4}{\milli\second} for the LRCN forward pass and PCA reconstruction combined. This amounts to a more than 10-fold speedup compared to KDE. On the GPU (NVIDIA Titan X), an additional factor of two was gained, with a reconstruction time of \SI{2}{\milli\second} (20 times faster than KDE on the CPU). Moreover, the reconstruction cost of the proposed LRCN method is constant, while KDE becomes even slower with increasing size of the training set. This speedup might enable multi-plane real-time image synthesis in the future. We performed a pixel-wise sum of squared error (SSE) analysis between KDE, LRCN and ground-truth images, to link our LRCN results to the quantitative validation for KDE in \cite{Preiswerk_2016}. For the dataset presented in Fig. \ref{fig:KDE_comparison}, the average SSE per image was slightly higher with LRCN, but comparable ($39.0 \pm 12$ for LRCN vs. $33.9 \pm 7$ for KDE), which can be explained by the loss of information resulting from working in the PCA subspace of the original MR images.
In conclusion, the intriguing possibility of compressing the imaging capabilities of an MRI machine into small OCM sensors using machine learning could lead to promising image-guided therapy applications, such as real-time motion imaging for radiotherapy and biopsy needle guidance, even outside the MR bore.
\textbf{Acknowledgment.} Support from grants NIH P41EB015898, R03EB025546, R01CA149342, and R21EB019500
is duly acknowledged. GPU hardware was generously donated by NVIDIA Corporation.
\section{Introduction}
For consistent thermodynamics of self-gravitating systems, a finite size is
a key point to be taken into account. This was demonstrated first by York,
Jr. \cite{york86} who showed that account for the boundary resolves the
problems that seem to prevent the construction of the canonical ensemble for
Schwarzschild black holes and leads to new interesting features. In
particular, it turned out that for a given temperature $T$ on the boundary
and the radius $R$ of the cavity, there exist two branches of black hole
metrics, if $T$ is high enough, one of two branches being locally or even
globally stable. This also showed that in discussing thermal nucleation of
black holes from hot empty space-time \cite{gross}, one cannot neglect the
presence of the boundary without which the thermal ensemble does not exist
at all.
Further, some general formulas describing thermal properties of the
gravitating ensembles were derived \cite{marty} - \cite{6}. Unfortunately,
application of the general formalism in concrete analysis was done for
spherically symmetric metrics only \cite{brad}, \cite{jose} since the
explicit expression for distorted black holes is, as a rule, absent or too
complicated. This somewhat slowed down further progress in this area.
The aim of the present note is to show that there are some simple but useful
properties of distorted black holes in the canonical ensemble that can be
inferred with minimum information, almost "from nothing". It turned out that
this step can be performed due to using a simple and elegant coordinate
frame used by W. Israel \cite{isr} in proving his famous uniqueness theorems
for black holes. Now, however, one should bear in mind that in contrast to
\cite{isr}, our space-time is not asymptotically flat due to the presence of
the boundary and this is the crucial point.
\section{Basic equations}
In the coordinate frame of Ref. \cite{isr}, the metric can be written in the
form
\begin{equation}
ds^{2}=-V^{2}dt^{2}+\rho ^{2}dV^{2}+\gamma _{ab}dx^{a}dx^{b}\text{.}
\label{met}
\end{equation}
Here, $a,b=1,2$. It is supposed that the metric coefficients do not depend
on $t$. It is seen from (\ref{met}) that
\begin{equation}
\rho ^{-2}=(\nabla V)^{2}\text{.}
\end{equation}
In the horizon limit,
\begin{equation}
\rho \rightarrow \rho _{H}=\frac{1}{\kappa }\text{,} \label{kappa}
\end{equation}
where $\kappa $ is the surface gravity (see, e.g., Eqs. 105--107 in \cite{vis}).
Then, the Hawking temperature is
\begin{equation}
T_{H}=\frac{\kappa }{2\pi }=\frac{1}{2\pi \rho _{H}}\text{.} \label{surf}
\end{equation}
Eq. (32) of \cite{isr} gives us
\begin{equation}
\frac{\partial }{\partial V}\left(\frac{\sqrt{\gamma }}{\rho }\right)=0\text{,}
\label{gv}
\end{equation}
whence
\begin{equation}
\rho =\sqrt{\gamma }C(x^{1},x^{2})\text{.} \label{roc}
\end{equation}
Eq. (39) of \cite{isr} tells us that the Kretschmann scalar $Kr$ in the
vacuum space-time satisfies
\begin{equation}
\frac{Kr}{8}=\frac{1}{8}R_{ABCD}R^{ABCD}=(V\rho )^{-2}\left[ K_{ab}K^{ab}+2\rho
^{-2}\rho _{;a;b}\rho ^{;a;b}+\rho ^{-4}\left( \frac{\partial \rho }{\partial
V}\right) ^{2}\right] \text{.} \label{kr}
\end{equation}
\section{Properties of the canonical ensemble}
Usually, when dealing with the thermodynamic description of some space-time, one
first finds its metric (or takes the already known one) and only afterwards
ascribes thermodynamic parameters to the system. Meanwhile, a coherent
approach to finite-size thermodynamics implies something quite different.
What is given in the problem are the boundary data specified on some
surface that is not necessarily spherical. These include the boundary
metric $\gamma _{ab}$ and the local temperature. Also, for specifying a
solution of the field equations, we need $\rho $ and $K_{ab}$. Integrating
the equations of motion in the inward direction, one can in principle (but not
in practice) find these solutions inside a cavity. Meanwhile, for our
purpose we need much less information (see below), which requires only one
equation plus regularity conditions.
Statement 1. If inside a cavity the vacuum space-time is not flat, it cannot
be horizonless.
Proof. If the space-time is flat, $V\equiv 1$ and cannot be taken as an
independent variable. We assume that it is not flat, so the metric in the
form (\ref{met}) can be used. Let us suppose that there is no horizon, so
the metric has a regular centre. This means that for some $V=V_{1}>0$ the
quantity $\sqrt{\gamma }=0$. Then, according to (\ref{roc}), $\rho
\rightarrow 0$ as well when $V\rightarrow V_{1}$. In (\ref{kr}) this entails
that the finiteness of $Kr$ requires
\begin{equation}
\frac{\partial \rho }{\partial V}\sim \rho ^{3}\text{, }\frac{1}{\rho ^{2}}
\sim V+const\text{, }\rho (V_{1})\neq 0\text{.}
\end{equation}
This is in contradiction with (\ref{roc}).
Thus only two phases take part in the thermodynamic competition: the
flat space-time and a black hole.
Statement 2. A vacuum black hole \ with a finite horizon area cannot be
extremal ($\kappa \neq 0$).
Proof. If $\kappa =0$, it follows from (\ref{kappa}) that $\rho _{H}=\infty $.
Then, it follows from (\ref{roc}) that $\gamma _{H}=\infty $ as well, in
contradiction with the assumption.
Statement 3. If the horizon area shrinks, the Hawking temperature in this
limit grows unbounded:
\begin{equation}
\lim_{A_{H}\rightarrow 0}T_{H}=\infty \text{.} \label{th}
\end{equation}
Proof. The area $A$ of any equipotential surface $V=V_{1}=const$ is equal to
\begin{equation}
A=\int dx^{1}dx^{2}\sqrt{\gamma }\text{.}
\end{equation}
Here, $\gamma =\gamma (V,x^{1},x^{2})$, integration is performed in some
\textit{fixed} intervals $a_{1}\leq x^{1}\leq b_{1}$, $a_{2}\leq x^{2}\leq
b_{2}$ (for instance, one can choose the analogue of angular variables).
Therefore, the condition that $A\rightarrow 0$ entails that $\sqrt{\gamma }
\rightarrow 0$ as well. Then, it follows from (\ref{roc}) that $\rho
_{H}\rightarrow 0$, so that $T_{H}\rightarrow \infty $ according to (\ref{surf}).
Eq. (\ref{th}) generalizes the corresponding property of the
Schwarzschild metric, where $T_{H}=(4\pi r_{+})^{-1}$, $r_{+}$ being the
horizon radius.
It is essential that we deal with equipotential surfaces. For comparison, let
us consider, say, the Schwarzschild metric. We can take an arbitrary point
and encircle it by a small sphere with minimum and maximum values of the
standard Schwarzschild coordinate $r_{1}$ and $r_{2}$. Obviously, such a
sphere is not an equipotential surface, and the above reasonings do not apply.
When $r_{2}\rightarrow r_{1}$, the area vanishes although $\sqrt{\gamma }
=r^{2}\sin \theta $ remains separated from zero.
Statement 4. Black hole solutions are possible only in the high temperature
phase, $T>T_{m}$, where the concrete value $T_{m}$ is determined by the
boundary conditions (and thus cannot be found in a general form).
Proof. Let $\beta \equiv T^{-1}$ be the inverse temperature on the boundary.
It follows from the Tolman formula that
\begin{equation}
\beta =\beta _{0}V_{B}\text{,} \label{bv}
\end{equation}
where $\beta _{0}=T_{H}^{-1}$ is a constant, and $V_{B}$ is the value of $V$ on
the boundary.
For a black hole, the horizon area $A_{H}$ lies in the interval
$0<A_{H}<A_{B}$, where $A_{B}$ corresponds to the boundary. It follows from
(\ref{th}) that in the limit when the horizon shrinks to a point, so that
$A_{H}\rightarrow 0$, the quantity $\beta \rightarrow 0$ due to the factor
$\beta _{0}$. On the other hand, if a black hole occupies almost the whole
cavity, the boundary almost coincides with the horizon, so $V_{B}$
approaches $V$ on the horizon, where it is equal to zero. Now, $\beta
\rightarrow 0$ due to the second factor in (\ref{bv}). Thus the quantity
$\beta $ vanishes in both limits: for the minimum possible area (equal to
zero) and the maximum one (corresponding to the boundary). Therefore, at
some intermediate point $A_{H}=A_{m}$ the inverse temperature $\beta $
should pass through a maximum equal to some $\beta _{m}=\beta
(A_{m})$. This proves the statement. For extremal black holes it would be
possible for $\beta $ to diverge, but we deal now with nonextremal ones.
Statement 5. For a given $T$, there exist at least two branches of black
holes, one of which (at least in some interval of temperatures) is locally
stable.
Proof. As $\beta (A_{H})$ has two zeros at $A_{H}=0$ and $A_{H}=A_{B}$, the
branch $A_{m}<A_{H}<A_{B}$ has $\frac{d\beta }{dA_{H}}<0$, if it is
monotonic. Then, $\frac{dT}{dA_{H}}>0$. Meanwhile, the heat capacity
$C=\frac{dE}{dT}=T\frac{dS}{dT}=\frac{T}{4}\frac{dA_{H}}{dT}$, where $E$ is
the energy, $S=\frac{A_{H}}{4}$ is the Bekenstein-Hawking entropy, and the
first law $dE=TdS$ was used. We see that $C>0$, so the solution is locally
stable. Even if $\beta $ as a function of $A_{H}$ is not monotonic, the part
of $\beta (A_{H})$ in the vicinity of $A_{B}$ is monotonically decreasing, and
the above reasonings apply.
Statement 6. For sufficiently high temperature, a black hole phase is
favorable both locally and globally.
Proof. It is quite obvious that for very high temperatures the configuration
with a black hole will dominate. Indeed, in the Euclidean action approach,
the free energy is $F=TI$, where the Euclidean action for the black hole topology is
$I=\beta E-S$ \cite{can}. When $\beta \rightarrow 0$, the first term in $I$
is negligible, so $I<0$. Meanwhile, for a hot empty space $I=0$. As a
black hole exists for sufficiently high temperature $T>T_{m}$ and becomes
thermodynamically favorable in the limit $T\rightarrow \infty $, there
exists some $T_{1}>T_{m}$ such that for $T>T_{1}$ a black hole is not only
stable locally (see above) but also globally. Statements 4--6 generalize
the corresponding properties for Schwarzschild black holes \cite{york86}.
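For the Schwarzschild case of \cite{york86} these statements can be checked explicitly: with boundary radius $R$ one has $\beta =\beta _{0}V_{B}=4\pi r_{+}\sqrt{1-r_{+}/R}$, which vanishes at both $r_{+}=0$ and $r_{+}=R$ and peaks at $r_{+}=2R/3$. A small numeric sketch (units with $G=c=\hbar =1$; the grid resolution is arbitrary):

```python
import numpy as np

def beta_cavity(rp, R=1.0):
    """Inverse boundary temperature for Schwarzschild in a cavity of
    radius R: beta = T_H^{-1} V_B = 4 pi r_+ sqrt(1 - r_+/R)."""
    return 4.0 * np.pi * rp * np.sqrt(1.0 - rp / R)

rp = np.linspace(1e-6, 1.0 - 1e-6, 100001)   # horizon radii inside the cavity
beta = beta_cavity(rp)
i_max = int(np.argmax(beta))                 # maximum beta_m, i.e., T_m = 1/beta_m
```

One can verify numerically that $\beta$ vanishes at both endpoints, that the maximum sits at $r_{+}=2R/3$, that the large-horizon branch has $d\beta /dA_{H}<0$ (positive heat capacity), and that any $\beta <\beta _{m}$ is attained by exactly two branches, in line with Statements 3--5.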
\section{Conclusions}
Thus we showed that, for given boundary data, the canonical ensemble does
exist if the temperature is high enough. Moreover, inside a cavity a black
hole is thermodynamically favorable for sufficiently high temperature. This
is done in a quite general approach without appealing to the explicit form
of solutions which, generically, cannot be found at all. In doing so, the
full system of Einstein equations was not used directly. We relied only on
one relation (\ref{gv}) and the expression for the Kretschmann scalar
(\ref{kr}) for vacuum space-times (in which, though, the validity of the Einstein
equations was already taken into account indirectly). We also proved
rigorously that for vacuum black holes the Hawking temperature diverges when
the horizon area shrinks. It would be of interest to check the discussed
properties for nonvacuum backgrounds and apply them to the nucleation of black
holes in cosmological problems.
\begin{acknowledgments}
This work was funded by the subsidy allocated to Kazan Federal University
for the state assignment in the sphere of scientific activities. O. Z. also
thanks SFFR, Ukraine, Project No. 32367, for support.
\end{acknowledgments}
\section{Introduction}
It is one of the central features of quantum theory that only some pairs of quantum observables can be measured simultaneously.
There are various ways how two quantum observables may permit a simultaneous measurement.
Joint measurability, or compatibility, is the general concept related to simultaneous measurements.
Compatibility of two observables does not say anything about how those observables can be implemented jointly, just that there is some measurement set-up giving the correct marginal probability distributions.
In contrast, broadcasting of observables is a modification of broadcasting of states, and it is a very specific way to implement simultaneous measurement.
It requires the existence of a broadcasting channel that gives two approximate copies of an arbitrary input state and, even if the copies are not identical to the original state, there is no difference with respect to the target observables.
A broadcastable pair of observables is compatible, but it is compatible in a very strong sense.
These two scenarios raise some immediate questions.
What is exactly the additional feature that makes some compatible pairs broadcastable, especially from the point of view of implementation of their simultaneous measurement?
How different are these two relations on observables, and are there any intermediate steps between them?
In this paper we tackle these questions.
We will define three relations on observables that are between broadcastability and compatibility; they are weaker than broadcastability but stronger than compatibility.
These relations are \emph{one-side broadcastability}, \emph{mutual nondisturbance} and \emph{nondisturbance}.
Altogether, we then obtain a hierarchy of five relations on quantum observables; see Fig. \ref{fig:hierarchy}.
The hierarchy of relations is useful in several different ways.
Firstly, it reveals that there are different levels of joint measurability, and in this sense, different layers of classicality.
Secondly, if we can show that some pair of observables is, e.g., not compatible, then we know that all the stronger relations fail as well.
We will demonstrate the use of this kind of argument, and we will completely characterize all four relations stronger than compatibility in the case of qubit observables.
To understand the differences of the five relations, we will formulate them in a unifying way.
We show that the five relations can be understood in the differences of the needed devices in the implementation of a simultaneous measurement.
Using the presented framework we can also demonstrate that a natural generalization of the compatibility relation is, in fact, equivalent to the compatibility.
\section{Compatibility}
In the following $\hi$ is a fixed Hilbert space, either finite or countably infinite dimensional.
We denote by $\sh$ the set of all states, i.e., positive trace class operators of trace 1.
A quantum observable is mathematically defined as a positive operator valued measure (POVM) \cite{PSAQT82}, \cite{OQP97}.
We will restrict our investigation to observables with finite number of outcomes, hence we will understand an observable as a map $\A$ from a finite set of measurement outcomes $\Omega_\A$ to the set of bounded linear operators $\lh$ on $\hi$ such that $\A(x)\geq 0$ and $\sum_x \A(x) = \id$.
For a subset $X\subseteq\Omega_\A$, we denote $\A(X) = \sum_{x\in X} \A(x)$.
The probability of getting an outcome $x$ in a measurement of $\A$ in an initial state $\varrho$ is given by the formula $\tr{\varrho \A(x)}$.
We denote by $\mathcal{O}(\hi)$ the set of all observables $\A$ on $\hi$ with $\Omega_\A \subset \integer$.
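For concreteness, the POVM conditions and the outcome probability formula are easy to state numerically. The following numpy sketch uses an unsharp qubit observable as an example; the helper names and the sharpness value are ours:

```python
import numpy as np

def is_povm(effects, tol=1e-10):
    """Check positivity of every effect A(x) and normalization sum_x A(x) = I."""
    dim = effects[0].shape[0]
    positive = all(np.linalg.eigvalsh(E).min() > -tol for E in effects)
    normalized = np.allclose(sum(effects), np.eye(dim), atol=tol)
    return positive and normalized

def probabilities(rho, effects):
    """Outcome distribution tr(rho A(x)) in the state rho."""
    return np.array([np.trace(rho @ E).real for E in effects])
```

For instance, the two effects $\frac{1}{2}(\id \pm \lambda \sigma_z)$ form a valid two-outcome qubit observable for any $0\leq\lambda\leq 1$, and in the state $\kb{0}{0}$ the outcome probabilities are $(1\pm\lambda)/2$.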
A binary relation on $\mathcal{O}(\hi)$ is a subset $\mathcal{R}$ of the Cartesian product $\mathcal{O}(\hi) \times \mathcal{O}(\hi)$.
Hence, a binary relation can be thought of as a property that a pair of observables may or may not possess.
The relations that we will study are all symmetric:
if $(\A,\B) \in \mathcal{R}$, then also $(\B,\A) \in \mathcal{R}$.
For this reason, we can talk about properties of $\A$ and $\B$ rather than $(\A,\B)$.
If $\mathcal{R}$ and $\mathcal{R}'$ are two binary relations on $\mathcal{O}(\hi)$ such that $\mathcal{R} \subset \mathcal{R}'$, then we say that $\mathcal{R}$ is \emph{stronger} than $\mathcal{R}'$, and that $\mathcal{R}'$ is \emph{weaker} than $\mathcal{R}$.
The complement of a binary relation $\mathcal{R}$ is the subset of those pairs $(\A,\B) \in \mathcal{O}(\hi) \times \mathcal{O}(\hi)$ that do not belong to $\mathcal{R}$.
The complement relation of a symmetric relation is also symmetric, and the inclusion of two relations is reversed in their complement relations.
\begin{figure}
\begin{center}
\includegraphics[width=6.5cm]{h2.png}
\end{center}
\caption{\label{fig:hierarchy} The whole area depicts the set of all pairs of quantum observables. The strictest condition for a pair of observables is broadcastability, and the loosest is compatibility. The other three properties are between these two.}
\end{figure}
The most general formulation of simultaneous measurability is based on the concept of a joint observable \cite{Busch87}, \cite{LaPu97}.
A \emph{joint observable} of two observables $\A$ and $\B$ is an observable $\mathsf{J}:\Omega_\A \times \Omega_\B \to \lh$ such that
\begin{align}\label{eq:marginals}
& \mathsf{J}(x,\Omega_\B) = \A(x) \, ,
& \mathsf{J}(\Omega_\A,y) = \B(y)
\end{align}
for all $x\in\Omega_\A,y\in\Omega_\B$, where we have used the shorthand notation $\mathsf{J}(x,\Omega_\B)=\sum_{y\in\Omega_\B} \mathsf{J}(x,y)$ and $\mathsf{J}(\Omega_\A,y)=\sum_{x\in\Omega_\A} \mathsf{J}(x,y)$.
The existence of a joint observable determines the following symmetric relation on the set of observables.
\begin{definition}
Two observables $\A$ and $\B$ are called \emph{compatible} or \emph{jointly measurable} if they have a joint observable; otherwise they are \emph{incompatible}.
\end{definition}
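As a worked example, the smeared qubit pair $\A(x)=\frac{1}{2}(\id + x\lambda\sigma_z)$, $\B(y)=\frac{1}{2}(\id + y\lambda\sigma_x)$, $x,y=\pm 1$, admits the well-known candidate joint observable $\mathsf{J}(x,y)=\frac{1}{4}(\id + x\lambda\sigma_z + y\lambda\sigma_x)$, whose marginals reproduce $\A$ and $\B$ by construction and which is positive exactly when $\lambda\leq 1/\sqrt{2}$. A numeric check (the function name is ours):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def joint_unsharp(lam):
    """Candidate joint observable J(x,y) = (I + x lam sz + y lam sx)/4 for
    the smeared pair A(x) = (I + x lam sz)/2, B(y) = (I + y lam sx)/2."""
    return {(x, y): (I2 + x * lam * sz + y * lam * sx) / 4
            for x in (1, -1) for y in (1, -1)}
```

The eigenvalues of $\mathsf{J}(x,y)$ are $(1\pm\lambda\sqrt{2})/4$, so positivity, and hence compatibility via this construction, fails for $\lambda>1/\sqrt{2}$.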
In the following sections we formulate and study four symmetric relations on the set of observables that are related to simultaneous measurability of two observables, and which are all stronger than compatibility; see Fig. \ref{fig:hierarchy}.
Hence, they correspond to stronger and weaker levels for two observables to be simultaneously measurable.
Their complement relations refer to the impossibility of simultaneous measurement using specified resources.
\section{Broadcasting and one-side broadcasting}
A \emph{quantum channel} $\Lambda$ is a completely positive linear map from an input state space $\mathcal{S}(\hi)$ to an output state space $\mathcal{S}(\hi')$.
In the following we will consider quantum channels that take a single system as an input and give two similar systems as outputs, so that $\hi'= \hi_{\mathcal{A}}\otimes\hi_{\mathcal{B}}$ and $\hi_{\mathcal{A}}=\hi_{\mathcal{B}}=\hi$.
These kind of channels are called \emph{broadcasting channels}.
A broadcasting channel $\Lambda:\mathcal{S}(\hi) \to \mathcal{S}(\hi_\mathcal{A} \otimes \hi_\mathcal{B})$ \emph{broadcasts a state $\varrho$} if the reduced states of the output state $\Lambda(\varrho)$ coincide with the input state, i.e.,
\begin{align}
& \ptr{\mathcal{B}}{\Lambda(\varrho)} = \varrho \, , \quad \ptr{\mathcal{A}}{\Lambda(\varrho)} = \varrho \, . \label{eq:broad}
\end{align}
A subset $\mathcal{T}$ of states is \emph{broadcastable} if there is a channel $\Lambda$ that broadcasts each state $\varrho$ belonging to $\mathcal{T}$.
It is known that a subset $\mathcal{T}$ is broadcastable if and only if all the states in $\mathcal{T}$ commute with each other \cite{Barnumetal96}, \cite{Fanetal14}.
The broadcasting conditions in \eqref{eq:broad} for a state $\varrho$ are equivalent to the requirement that the equations
\begin{align}
\tr{\varrho \A(x) }= \tr{\Lambda(\varrho) \A(x)\otimes\id} = \tr{\Lambda(\varrho) \id \otimes\A(x)} \label{eq:broad-tr}
\end{align}
hold for all observables $\A$ and outcomes $x\in\Omega_\A$.
This formulation allows us to change the aim of the broadcasting procedure; we may want to satisfy these equations for all states but only for some chosen observables.
Hence, we arrive to the following definition.
\begin{definition}\label{def:broadcast}
A channel $\Lambda$ \emph{broadcasts an observable $\A$} if the condition \eqref{eq:broad-tr} holds for all states $\varrho\in\mathcal{S}(\hi)$.
A subset $\mathcal{A}$ of observables is \emph{broadcastable} if there is a channel $\Lambda$ that broadcasts every observable $\A\in\mathcal{A}$.
\end{definition}
The requirement that the equations in \eqref{eq:broad-tr} hold for all states $\varrho\in\mathcal{S}(\hi)$ is equivalent to the condition
\begin{align}
\A(x) = \Lambda^*( \A(x)\otimes\id )= \Lambda^*( \id \otimes\A(x) ) \, ,
\end{align}
where $\Lambda^*$ is the dual channel of $\Lambda$.
We will mostly use the Schr\"odinger picture of $\Lambda$ and the condition \eqref{eq:broad-tr} to make the physical content more visible, but the Heisenberg picture $\Lambda^*$ is useful when we write joint observables.
The idea of concentrating on observables rather than states was presented in \cite{FeGaPa06} and further investigated in \cite{FePa07}, \cite{AlSaLaSo14}.
In these works the cloning of an observable was identified with the cloning of its mean value, so our definition is slightly different from that.
However, the essential fact that cloning of observables is more related to joint measurement than is cloning of states was observed already in \cite{FeGaPa06}.
Let us then focus on the broadcastability of two observables.
By Def. \ref{def:broadcast}, a channel $\Lambda$ broadcasts two observables $\A$ and $\B$ if
\begin{align}
& \tr{\varrho \A(x) }= \tr{\Lambda(\varrho) \A(x)\otimes\id} = \tr{\Lambda(\varrho) \id \otimes\A(x)} \label{eq:broad-A} \\
& \tr{\varrho \B(y) }= \tr{\Lambda(\varrho) \B(y)\otimes\id} = \tr{\Lambda(\varrho) \id \otimes\B(y)} \label{eq:broad-B}
\end{align}
for all states $\varrho\in\mathcal{S}(\hi)$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
We can think of the broadcastability of two observables in the following way.
Two approximate copies are made of an unknown initial state $\varrho$.
One copy is sent to Alice and another one to Bob.
Both Alice and Bob can choose whether they want to measure $\A$ or $\B$ on their respective copies.
The conditions \eqref{eq:broad-A}--\eqref{eq:broad-B} guarantee that the measurement outcome probabilities are the same as in separate measurements of $\A$ and $\B$ on the initial state $\varrho$.
To provide an example of broadcastable pairs of observables, we consider the following special class of observables.
\begin{definition}
Let $\{\varphi_j\}_{j=1}^d$ be an orthonormal basis.
An observable $\A$ is \emph{diagonal in $\{\varphi_j\}_{j=1}^d$} if
\begin{equation}\label{eq:commutative}
\A(x) = \sum_{j =1}^d \alpha_j(x) \kb{\varphi_j}{\varphi_j} \, ,
\end{equation}
where $0\leq \alpha_j(x) \leq 1$ and $\sum_x \alpha_j(x)=1$ for all $j=1,\ldots,d$.
\end{definition}
The observable $\A$ defined in \eqref{eq:commutative} is \emph{commutative}, i.e., $\A(x)\A(y)=\A(y)\A(x)$ for all $x,y\in\Omega_\A$.
If the dimension of $\hi$ is finite, then a commutative observable is diagonal in some orthonormal basis.
However, if the dimension of $\hi$ is infinite, then not all commutative observables are of the form \eqref{eq:commutative} since a positive operator need not have a pure point spectrum.
We also observe that two observables $\A$ and $\B$ that are diagonal in the same basis are \emph{mutually commuting}, i.e., $\A(x)\B(y)=\B(y)\A(x)$ for all $x\in\Omega_\A,y\in\Omega_\B$.
The following observation is analogous to the fact that a set containing two commuting states is broadcastable.
\begin{proposition}\label{prop:diagonal}
Let $\mathcal{A}$ be a set of observables that are diagonal in the same orthonormal basis $\{\varphi_j\}_{j=1}^d$.
Then $\mathcal{A}$ is broadcastable.
\end{proposition}
\begin{proof}
We define a channel $\Lambda$ as
\begin{equation}
\Lambda(\varrho) = \sum_{j=1}^d \ip{\varphi_j}{\varrho \varphi_j} \kb{\varphi_j \otimes \varphi_j}{\varphi_j \otimes \varphi_j} \, .
\end{equation}
If $\A$ has the form \eqref{eq:commutative}, then
\begin{equation*}
\tr{\varrho \A(x) }= \tr{\Lambda(\varrho) \A(x)\otimes\id} = \tr{\Lambda(\varrho) \id \otimes\A(x)} \, ,
\end{equation*}
hence $\Lambda$ broadcasts $\A$.
\end{proof}
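The channel used in the proof can be tested numerically. The following sketch (our own illustration, not part of the argument) draws a random density matrix and a random observable diagonal in the computational basis, and checks the broadcasting conditions \eqref{eq:broad-A}--\eqref{eq:broad-B}.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_out = 3, 4

# Observable diagonal in the computational basis:
# A(x) = sum_j alpha_j(x) |j><j|, with sum_x alpha_j(x) = 1 for every j.
alpha = rng.random((n_out, d))
alpha /= alpha.sum(axis=0)
A = [np.diag(alpha[x]) for x in range(n_out)]

# Random input density matrix rho.
G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# Broadcasting channel Lambda(rho) = sum_j <j|rho|j> |jj><jj| (diagonal output).
Lam = np.zeros((d * d, d * d), dtype=complex)
for j in range(d):
    Lam[j * d + j, j * d + j] = rho[j, j]

I = np.eye(d)
# Deviation between tr(rho A(x)) and both marginal probabilities of Lambda(rho).
dev = max(
    max(abs(np.trace(rho @ A[x]) - np.trace(Lam @ np.kron(A[x], I))) for x in range(n_out)),
    max(abs(np.trace(rho @ A[x]) - np.trace(Lam @ np.kron(I, A[x]))) for x in range(n_out)),
)
```

The deviation is numerically zero for any input state, as the proof predicts.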
In a finite dimensional Hilbert space a commutative set of selfadjoint operators can be diagonalized in the same orthonormal basis.
The following statement is hence a direct consequence of Prop. \ref{prop:diagonal}.
\begin{proposition}\label{prop:commu}
Let $\dim\hi < \infty$.
A mutually commuting pair of commutative observables is broadcastable.
\end{proposition}
A relaxation of the broadcasting conditions \eqref{eq:broad-A}--\eqref{eq:broad-B} is that we require only
\begin{align}
& \tr{\varrho \A(x) } = \tr{\Lambda(\varrho) \A(x)\otimes\id} \label{eq:a-ub}\\
& \tr{\varrho \B(y) } = \tr{\Lambda(\varrho) \id \otimes\B(y)} \label{eq:b-ub}
\end{align}
for all states $\varrho\in\mathcal{S}(\hi)$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
This still refers to a process where we first make approximate copies of $\varrho$ by using the channel $\Lambda$ and then measure $\A$ and $\B$ on those copies; see Fig. \ref{fig:boxes}a.
The difference from the earlier broadcasting set-up is that now the sides of the measurements are relevant: Alice must measure $\A$ and Bob must measure $\B$ on their respective subsystems.
We are led to the following definition.
\begin{definition}\label{def:one-sided}
Two observables $\A$ and $\B$ are \emph{one-side broadcastable} if there exists a channel $\Lambda:\mathcal{S}(\hi) \to \mathcal{S}(\hi \otimes \hi)$ such that \eqref{eq:a-ub}--\eqref{eq:b-ub} hold for all states $\varrho\in\mathcal{S}(\hi)$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
\end{definition}
It is clear that if two observables are broadcastable, then they are one-side broadcastable.
Further, a one-side broadcastable pair is compatible; if $\A$ and $\B$ are one-side broadcastable with a channel $\Lambda$, then they have a joint observable $\mathsf{J}$ defined as
\begin{align}
\mathsf{J}(x,y) = \Lambda^* (\A(x) \otimes \B(y) ) \, .
\end{align}
The conditions \eqref{eq:a-ub}--\eqref{eq:b-ub} guarantee that $\mathsf{J}$ is indeed a joint observable.
\begin{figure}
\centering
\subfigure[]
{
\includegraphics[width=6cm]{one-side.png}
}
\subfigure[]
{
\includegraphics[width=6cm]{nondist.png}
}
\subfigure[]
{
\includegraphics[width=6cm]{global.png}
}
\caption{ \label{fig:boxes}(a) In the one-side broadcasting scenario two approximate copies of the input state are produced and the target observables are measured on these copies. (b) In a nondisturbing sequential measurement one of the measured observables is allowed to differ from the corresponding target observable. The auxiliary observable can operate on a different Hilbert space than the target observable.
(c) In the most general set-up a global measurement is allowed. Two observables can be obtained in this way exactly when they are compatible.}
\end{figure}
\section{Nondisturbing measurements}
An observable $\A$ can be measured without disturbing another observable $\B$ if the measurement outcome distributions of $\B$ are the same if we measure $\A$ before $\B$ or not measure $\A$ at all.
To formulate this relation in the standard mathematical formalism, we recall the concept of an instrument \cite{QTOS76}.
An instrument which implements a measurement of $\A$ is a map $x\mapsto \mathcal{I}_x$ such that each $\mathcal{I}_x$ is a completely positive linear map and
\begin{equation}\label{eq:i-repro}
\tr{\mathcal{I}_x(\varrho)} = \tr{\varrho \A(x)}
\end{equation}
for all states $\varrho$ and outcomes $x\in\Omega_\A$.
We will again use the notation $\mathcal{I}_X \equiv \sum_{x\in X} \mathcal{I}_x$ for all subsets $X\subseteq\Omega_\A$.
The nondisturbance condition for an observable $\B$ then reads
\begin{equation}\label{eq:i-nond}
\tr{\varrho \B(y) } = \tr{\mathcal{I}_\Omega(\varrho) \B(y)}\, ,
\end{equation}
required to hold for all states $\varrho\in\sh$ and outcomes $y\in\Omega_\B$.
We say that an observable \emph{$\A$ can be measured without disturbing $\B$} if there exists an instrument $\mathcal{I}$ such that \eqref{eq:i-repro}--\eqref{eq:i-nond} hold for all states $\varrho\in\sh$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
If $\A$ can be measured without disturbing $\B$, then $\A$ and $\B$ are compatible.
This is clear since a sequential measurement of $\A$ followed by $\B$ is a joint measurement of $\A$ and $\B$ if the first measurement does not disturb $\B$.
A joint observable $\mathsf{J}$ is defined as $\mathsf{J}(x,y) = \mathcal{I}_x^*(\B(y))$ and the marginal conditions \eqref{eq:marginals} follow from \eqref{eq:i-repro}--\eqref{eq:i-nond}.
To see a connection to the one-side broadcasting, we recall that every instrument can be written in the measurement model form
\begin{equation*}
\mathcal{I}_x(\varrho) = \ptr{\hik}{U \eta \otimes \varrho U^* \A'(x) \otimes \id} \, ,
\end{equation*}
where $\eta$ is a fixed initial state of an ancillary system $\hik$, $\A'$ is a probe observable on $\hik$ and $U:\hik\otimes\hi \to \hik\otimes\hi$ is a unitary operator describing a measurement interaction \cite{Ozawa84}.
The condition \eqref{eq:i-repro} can then be written as
\begin{equation}
\tr{\varrho \A(x)} = \tr{U \eta \otimes \varrho U^* \A'(x) \otimes \id} \, ,
\end{equation}
and the nondisturbance condition \eqref{eq:i-nond} takes the form
\begin{align}
\tr{\varrho \B(y) } = \tr{ U \eta \otimes \varrho U^* \id \otimes \B(y) } \, .
\end{align}
By denoting $\Lambda(\varrho) = U \eta \otimes \varrho U^*$ we can write these equations as
\begin{align}
& \tr{\varrho \A(x) } = \tr{\Lambda(\varrho) \A'(x)\otimes\id} \label{eq:repro}\\
& \tr{\varrho \B(y) } = \tr{\Lambda(\varrho) \id \otimes\B(y)} \label{eq:nond}
\end{align}
These are exactly the same equations as in one-side broadcasting, except that in the latter case it is required that $\hik=\hi$ and $\A'=\A$.
This difference is illustrated in Fig. \ref{fig:boxes}b.
To see that the previous conditions are equivalent to the existence of a nondisturbing measurement, assume there is a channel $\Lambda:\mathcal{S}(\hi) \to \mathcal{S}(\hik \otimes \hi)$ and an observable $\A'$ on $\hik$ such that \eqref{eq:repro}--\eqref{eq:nond} hold for all states $\varrho$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
For each $x\in\Omega_\A$, we then define a map $\mathcal{I}_x$ as
\begin{align}
\mathcal{I}_x(\varrho) = \ptr{\hik}{\sqrt{\A'(x)}\otimes\id \Lambda(\varrho) \sqrt{\A'(x)}\otimes\id} \, .
\end{align}
As $\mathcal{I}_x$ is a composition of completely positive maps, it is completely positive.
A direct calculation shows that $\mathcal{I}$ satisfies \eqref{eq:i-repro}--\eqref{eq:i-nond}.
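This direct calculation can also be carried out numerically. The sketch below uses one specific choice (our own, not from the text): $\hik=\hi=\mathbb{C}^d$, the classical copying channel from the proof of Prop.~\ref{prop:diagonal}, a sharp probe $\A'=\A$ in the computational basis, and a $\B$ diagonal in the same basis; it checks that the instrument defined above satisfies \eqref{eq:i-repro}--\eqref{eq:i-nond}.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

# Sharp probe observable A' = A: projections onto the computational basis.
A = [np.diag(np.eye(d)[x]) for x in range(d)]
# B: any observable diagonal in the same basis (here with 2 outcomes).
beta = rng.random((2, d))
beta /= beta.sum(axis=0)
B = [np.diag(beta[y]) for y in range(2)]

def channel(rho):
    """Lambda(rho) = sum_j <j|rho|j> |jj><jj| on H_K (x) H."""
    out = np.zeros((d * d, d * d), dtype=complex)
    for j in range(d):
        out[j * d + j, j * d + j] = rho[j, j]
    return out

def instrument(x, rho):
    """I_x(rho) = Tr_K[(sqrt(A'(x)) (x) 1) Lambda(rho) (sqrt(A'(x)) (x) 1)]."""
    S = np.kron(np.sqrt(A[x]), np.eye(d))  # A[x] diagonal, so entrywise sqrt is the operator sqrt
    M = (S @ channel(rho) @ S).reshape(d, d, d, d)
    return np.einsum('ijil->jl', M)        # partial trace over the ancilla K

G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real

post = sum(instrument(x, rho) for x in range(d))   # I_Omega(rho)
err_repro = max(abs(np.trace(instrument(x, rho)) - np.trace(rho @ A[x])) for x in range(d))
err_nond = max(abs(np.trace(post @ B[y]) - np.trace(rho @ B[y])) for y in range(2))
```

Both errors vanish, confirming that this instrument measures $\A$ without disturbing $\B$ for this choice of $\Lambda$ and $\A'$.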
We summarize the previous discussion in the following proposition.
\begin{proposition}\label{prop:nondist}
An observable $\A$ can be measured without disturbing an observable $\B$ if and only if there exists an ancillary system $\hik$, a probe observable $\A'$ on $\hik$, and a channel $\Lambda:\mathcal{S}(\hi) \to \mathcal{S}(\hik \otimes \hi)$ such that
\begin{align}
& \tr{\varrho \A(x) } = \tr{\Lambda(\varrho) \A'(x)\otimes\id} \\
& \tr{\varrho \B(y) } = \tr{\Lambda(\varrho) \id \otimes\B(y)}
\end{align}
hold for all states $\varrho$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
\end{proposition}
We are interested in symmetric relations on the set of observables; hence we make the following definitions.
\begin{definition}
Two observables $\A$ and $\B$ are
\begin{itemize}
\item \emph{mutually nondisturbing} if $\A$ can be measured without disturbing $\B$ and $\B$ can be measured without disturbing $\A$.
\item \emph{nondisturbing} if $\A$ can be measured without disturbing $\B$ or $\B$ can be measured without disturbing $\A$.
\end{itemize}
\end{definition}
If $\A$ can be measured without disturbing $\B$, it does \emph{not} imply that $\B$ can be measured without disturbing $\A$.
An example demonstrating this fact was given in \cite{HeWo10}.
We thus conclude that mutual nondisturbance is a strictly stronger relation than nondisturbance.
As we noted earlier, nondisturbing observables are compatible.
Further, a comparison of Prop. \ref{prop:nondist} with Def. \ref{def:one-sided} shows that one-side broadcastable observables are mutually nondisturbing.
We have thus reached the hierarchy depicted in Fig. \ref{fig:hierarchy}.
As a demonstration, let us recall a class of mutually nondisturbing pairs of observables: \emph{two mutually commuting observables $\A$ and $\B$ are mutually nondisturbing} \cite{BuSi98}.
This can be seen by using the L\"uders instruments of $\A$ and $\B$.
The L\"uders instrument of $\A$ is defined as
\begin{equation}
\mathcal{I}_x(\varrho) = \sqrt{\A(x)} \varrho \sqrt{\A(x)} \, .
\end{equation}
It follows from $\A(x)\B(y)=\B(y)\A(x)$ that $\sqrt{\A(x)}\B(y)=\B(y)\sqrt{\A(x)}$.
Hence,
\begin{align*}
\tr{\mathcal{I}_{\Omega}(\varrho) \B(y)} &= \sum_x \tr{\sqrt{\A(x)} \varrho \sqrt{\A(x)}\B(y)} \\
&= \sum_x \tr{\varrho \B(y) \A(x)} = \tr{\varrho \B(y)} \, ,
\end{align*}
so that $\A$ can be measured without disturbing $\B$.
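The calculation above is easy to verify numerically. The sketch below (our illustration) builds two mutually commuting, non-projective observables that are diagonal in a common random orthonormal basis, applies the L\"uders instrument of $\A$, and checks that the statistics of $\B$ are untouched.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

# Random orthonormal basis V; A and B diagonal in it, hence mutually commuting.
V, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
alpha = rng.random((3, d))
alpha /= alpha.sum(axis=0)
beta = rng.random((2, d))
beta /= beta.sum(axis=0)
A = [V @ np.diag(alpha[x]) @ V.conj().T for x in range(3)]
B = [V @ np.diag(beta[y]) @ V.conj().T for y in range(2)]

G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# Luders instrument of A: I_x(rho) = sqrt(A(x)) rho sqrt(A(x)).
rootA = [V @ np.diag(np.sqrt(alpha[x])) @ V.conj().T for x in range(3)]
post = sum(rootA[x] @ rho @ rootA[x] for x in range(3))   # I_Omega(rho)

# Nondisturbance: B-statistics before and after the A-measurement agree.
err = max(abs(np.trace(post @ B[y]) - np.trace(rho @ B[y])) for y in range(2))
```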
\section{Reformulation of compatibility}
All the four relations stronger than compatibility have been formulated as certain requirements on a broadcasting channel and auxiliary observables.
We will now put the compatibility relation into this same framework.
Let us look at a relaxation of nondisturbance as it was formulated in Prop. \ref{prop:nondist}.
We can ask for the existence of two ancillary systems $\hik_1$, $\hik_2$, a channel $\Lambda:\mathcal{S}(\hi) \to \mathcal{S}(\hik_1 \otimes \hik_2)$, and observables $\A'$ and $\B'$ on systems $\hik_1$ and $\hik_2$, respectively, such that
\begin{align}
& \tr{\varrho \A(x) } = \tr{\Lambda(\varrho) \A'(x)\otimes\id} \label{eq:gen-a}\\
& \tr{\varrho \B(y) } = \tr{\Lambda(\varrho) \id \otimes\B'(y)} \label{eq:gen-b}
\end{align}
for all states $\varrho$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
This is a relaxation of the nondisturbance relation as now auxiliary observables are allowed on both sides of the output.
We can go one step further and ask for the existence of a channel $\Lambda:\mathcal{S}(\hi)\to\mathcal{S}(\hout)$ and an observable $\G$ on an arbitrary output space $\hout$ such that
\begin{align}
& \tr{\varrho \A(x) } = \tr{\Lambda(\varrho) \G(x,\Omega_\B)} \label{eq:global-a}\\
& \tr{\varrho \B(y) } = \tr{\Lambda(\varrho) \G(\Omega_\A,y) } \label{eq:global-b}
\end{align}
for all states $\varrho$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
This includes the case when $\hout=\hik_1 \otimes\hik_2$ and $\G$ is a global observable; see Fig. \ref{fig:boxes}c.
Both of the above generalizations are equivalent to the compatibility; this is the content of the following result.
\begin{proposition}\label{prop:comp}
For two observables $\A$ and $\B$, the following are equivalent:
\begin{itemize}
\item[(i)] $\A$ and $\B$ are compatible.
\item[(ii)] There exist ancillary systems $\hik_1$ and $\hik_2$, probe observables $\A'$ on $\hik_1$ and $\B'$ on $\hik_2$, and a channel $\Lambda:\mathcal{S}(\hi) \to \mathcal{S}(\hik_1 \otimes \hik_2)$ such that \eqref{eq:gen-a}--\eqref{eq:gen-b} hold for all states $\varrho$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
\item[(iii)] There exist an output space $\hout$, a channel $\Lambda:\mathcal{S}(\hi) \to \mathcal{S}(\hout)$, and an observable $\G$ on $\hout$ such that \eqref{eq:global-a}--\eqref{eq:global-b} hold for all states $\varrho$ and outcomes $x\in\Omega_\A$, $y\in\Omega_\B$.
\end{itemize}
\end{proposition}
\begin{proof}
We have (iii)$\Rightarrow$(i) as $\mathsf{J}(x,y) = \Lambda^*(\G(x,y))$ defines a joint observable of $\A$ and $\B$.
It is clear that (ii)$\Rightarrow$(iii) since there are fewer constraints in (iii) than in (ii).
To see that (i)$\Rightarrow$(ii), assume that $\A$ and $\B$ are compatible, so that there exists a joint observable $\mathsf{J}$.
We fix Hilbert spaces $\hik_1$ and $\hik_2$ with the dimensions $\# \Omega_\A$ and $\# \Omega_\B$, respectively.
On both of these Hilbert spaces we fix orthonormal bases $\{\varphi_x\}$ and $\{\eta_y\}$, labeled with the elements of $\Omega_\A$ and $\Omega_\B$.
We then define a channel $\Lambda$ as
\begin{equation}
\Lambda(\varrho) = \sum_{x,y} \tr{\varrho \mathsf{J}(x,y)} \kb{\varphi_x \otimes \eta_y}{\varphi_x \otimes \eta_y} \, ,
\end{equation}
and we define the observables $\A'$ and $\B'$ as
\begin{equation}
\A'(x) = \kb{\varphi_x}{\varphi_x} \, , \quad \B'(y) = \kb{\eta_y}{\eta_y} \, .
\end{equation}
With these choices the requirements \eqref{eq:gen-a}--\eqref{eq:gen-b} are satisfied.
\end{proof}
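The construction in the proof is easy to exercise numerically. The following sketch (our illustration) generates a random joint observable $\mathsf{J}$ on a qubit, takes $\A$ and $\B$ as its marginals, and verifies that the classical output state of the channel reproduces the conditions \eqref{eq:gen-a}--\eqref{eq:gen-b}.

```python
import numpy as np

rng = np.random.default_rng(3)
d, nx, ny = 2, 2, 3

def rand_pos():
    M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return M @ M.conj().T

# Random joint observable J(x,y): positive operators normalized to the identity
# via S^{-1/2} G_{xy} S^{-1/2}, where S = sum of the raw positive operators.
raw = [[rand_pos() for _ in range(ny)] for _ in range(nx)]
S = sum(raw[x][y] for x in range(nx) for y in range(ny))
w, U = np.linalg.eigh(S)
Sinv = U @ np.diag(w ** -0.5) @ U.conj().T
J = [[Sinv @ raw[x][y] @ Sinv for y in range(ny)] for x in range(nx)]

# Target observables = marginals of J.
A = [sum(J[x][y] for y in range(ny)) for x in range(nx)]
B = [sum(J[x][y] for x in range(nx)) for y in range(ny)]

G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real

# Classical channel: Lambda(rho) = sum_{x,y} tr(rho J(x,y)) |phi_x eta_y><phi_x eta_y|,
# with A'(x) = |phi_x><phi_x| and B'(y) = |eta_y><eta_y|, so the probe-observable
# probabilities are simply the row/column sums of the probability table p.
p = np.array([[np.trace(rho @ J[x][y]).real for y in range(ny)] for x in range(nx)])
err_a = max(abs(p[x].sum() - np.trace(rho @ A[x]).real) for x in range(nx))
err_b = max(abs(p[:, y].sum() - np.trace(rho @ B[y]).real) for y in range(ny))
```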
\section{Qualitative differences}\label{sec:qualitative}
The qualitative differences of the two extreme relations, broadcasting and compatibility, to the other relations link to the fundamental theorems of no-broadcasting \cite{Barnumetal96} and no-information-without-disturbance \cite{Busch09}.
In the following we explain these connections, which are both based on the concept of an informationally complete observable.
By definition, a collection $\mathcal{A}$ of observables is \emph{informationally complete} if the measurement data $\{ \tr{\varrho \A(x)} : \A\in\mathcal{A},x\in\Omega_\A\}$ is unique for every state $\varrho\in\mathcal{S}(\hi)$ \cite{BuLa89}.
Even a single observable can be informationally complete \cite{SiSt92}; a standard example of such an observable is a covariant phase space observable (on either a finite or an infinite phase space) satisfying a certain criterion \cite{KiLaScWe12}.
\subsection{Broadcastability versus other relations}\label{sec:versus}
The no-broadcasting theorem for states implies some immediate limitations on the broadcastability of subsets of observables.
Namely, let $\mathcal{A}$ be an informationally complete set of observables.
The broadcastability of $\mathcal{A}$ would then imply that the reduced states of the bipartite output state $\Lambda(\varrho)$ coincide with the input state $\varrho$.
This cannot hold for all states by the no-broadcasting theorem, so we conclude that \emph{an informationally complete set of observables is not broadcastable}.
In particular, a single informationally complete observable is not broadcastable.
The formulation of the broadcastability relation implies a trivial but significant feature: if two observables $\A$ and $\B$ are broadcastable, then $\A$ is broadcastable with itself.
Therefore, an informationally complete observable is not broadcastable with any other observable.
In the language of binary relations, this means that informationally complete observables are isolated elements in the broadcasting relation.
The existence of isolated elements, i.e., observables that are not related to any other observable, is a distinctive feature of the broadcasting relation.
To see this, we observe that \emph{every observable is one-side broadcastable with any trivial observable}.
By a trivial observable we mean an observable for which the measurement outcome probabilities do not depend on the input state at all.
Mathematically, this kind of observable can be written as $\mathsf{T}(x) = t(x) \id$, where $t$ is a probability distribution and $\id$ is the identity operator.
Hence, to prove the claim, fix a state $\eta\in\sh$ and define a channel $\Lambda$ as $\Lambda(\varrho) = \varrho \otimes \eta$.
Let $\A$ be any observable and $\mathsf{T}$ a trivial observable.
Then
\begin{align}
& \tr{\Lambda(\varrho) \A(x)\otimes\id} = \tr{\varrho \A(x) } \\
& \tr{\Lambda(\varrho) \id \otimes\mathsf{T}(y)} = \tr{\eta \mathsf{T}(y) } = \tr{\varrho \mathsf{T}(y) }
\end{align}
so $\A$ and $\mathsf{T}$ are one-side broadcastable.
Due to the hierarchy of the relations, we conclude that a trivial observable is related to any other observable in all the relations except broadcasting.
\subsection{Compatibility versus other relations}
A specific feature of the compatibility relation is that \emph{every observable is compatible with itself}.
To see this, let $\A$ be an observable. We define an observable $\mathsf{J}$ on $\Omega_\A\times\Omega_\A$ as $\mathsf{J}(x,y) = \delta_{xy} \A(x)$.
Then
\begin{align}
\mathsf{J}(x,\Omega_\A) = \mathsf{J}(\Omega_\A,x) = \A(x)
\end{align}
for all $x \in \Omega_\A$, hence $\mathsf{J}$ is a joint observable of $\A$ and $\A$.
The physical explanation of this feature is that the measurement outcomes of $\A$ are distinguishable classical states and can therefore be duplicated.
This reflexivity of the compatibility relation is a qualitative difference from the other four relations: in each of them we can find an observable $\A$ which is not in the given relation with itself.
Due to the hierarchy of the relations, it is enough to find an observable $\A$ such that $\A$ cannot be measured without disturbing $\A$ itself.
A whole class of such observables is formed by the informationally complete observables.
The no-information-without-disturbance theorem states that a measurement that gives information causes necessarily some disturbance.
Since, by definition, an informationally complete observable $\A$ yields a unique outcome probability distribution for every state, we conclude that a measurement of $\A$ necessarily disturbs a subsequent measurement of $\A$.
Another distinctive feature, more important but not as sharply formulated, is the fact that \emph{addition of a sufficient amount of noise makes any pair of observables compatible} \cite{BuHeScSt13,HeMiZi16}.
By the addition of noise we mean mixing an observable with a trivial observable.
For instance, let us consider two incompatible observables $\A$ and $\B$ and their deformations $\tilde{\A}$ and $\tilde{\B}$, where
\begin{equation}\label{eq:deformed}
\tilde{\A}(x) = \half \A(x) + \half t_1 (x) \id \, , \quad \tilde{\B}(y) = \half \B(y) + \half t_2 (y) \id
\end{equation}
and $t_1,t_2$ are some probability distributions on $\Omega_\A,\Omega_\B$, respectively.
Then $\tilde{\A}$ and $\tilde{\B}$ are compatible as they have a joint observable
\begin{equation}
\mathsf{J}(x,y) = \half t_2(y) \A(x) + \half t_1(x) \B(y) \, .
\end{equation}
Using the normalizations $\sum_x \A(x)=\sum_y \B(y)=\id$ and $\sum_x t_1(x) = \sum_y t_2(y) =1$ it is straightforward to verify that $\mathsf{J}$ gives $\tilde{\A}$ and $\tilde{\B}$ as its marginals.
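This verification can also be done numerically. The sketch below (our illustration) takes the sharp, incompatible qubit observables built from the $\sigma_x$ and $\sigma_z$ eigenprojections, forms the joint observable $\mathsf{J}$ of the text with uniform noise distributions, and checks positivity, normalization, and the marginal conditions.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Sharp (incompatible) qubit observables: eigenprojections of sigma_x and sigma_z.
A = [(I2 + s * sx) / 2 for s in (+1, -1)]
B = [(I2 + s * sz) / 2 for s in (+1, -1)]
t1 = t2 = np.array([0.5, 0.5])          # uniform trivial-noise distributions

# Joint observable of the deformed pair: J(x,y) = (1/2) t2(y) A(x) + (1/2) t1(x) B(y).
J = [[0.5 * t2[y] * A[x] + 0.5 * t1[x] * B[y] for y in range(2)] for x in range(2)]

# Positivity and normalization of J.
min_eig = min(np.linalg.eigvalsh(J[x][y]).min() for x in range(2) for y in range(2))
total = sum(J[x][y] for x in range(2) for y in range(2))

# Marginals of J should equal the deformed observables (1/2)A + (1/2)t1*Id, etc.
margA = [sum(J[x][y] for y in range(2)) for x in range(2)]
margB = [sum(J[x][y] for x in range(2)) for y in range(2)]
tilA = [0.5 * A[x] + 0.5 * t1[x] * I2 for x in range(2)]
tilB = [0.5 * B[y] + 0.5 * t2[y] * I2 for y in range(2)]
err = max(
    max(np.abs(margA[x] - tilA[x]).max() for x in range(2)),
    max(np.abs(margB[y] - tilB[y]).max() for y in range(2)),
)
```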
In contrast, \emph{addition of white noise does not make an arbitrary pair of observables nondisturbing}.
To see this, suppose that $\A$ is informationally complete.
Then a deformed observable of the form
\begin{equation}\label{eq:deformed-2}
\widetilde{\A}(x) = \lambda \A(x) + (1-\lambda) t(x) \id
\end{equation}
with $0<\lambda \leq 1$ is still informationally complete.
This follows from the fact that an observable is informationally complete if and only if its range spans $\lh$ \cite{SiSt92}, and the deformation in \eqref{eq:deformed-2} does not change the span of the range when $\lambda \neq 0$.
Therefore, our earlier discussion implies that $\widetilde{\A}$ cannot be measured without disturbing itself.
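The span-of-range criterion can be checked concretely. The sketch below (our illustration) takes the qubit tetrahedron (SIC) POVM, which is informationally complete, and confirms that a deformation of the form \eqref{eq:deformed-2} with $\lambda\neq 0$ leaves the span of the range four-dimensional.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# Tetrahedron (SIC) POVM: informationally complete on a qubit.
ns = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
A = [(I2 + n[0] * sx + n[1] * sy + n[2] * sz) / 4 for n in ns]

def range_rank(povm):
    """Dimension of the span of the POVM elements in L(H); 4 means informationally complete."""
    return np.linalg.matrix_rank(np.array([E.flatten() for E in povm]))

# Deformation A~(x) = lam A(x) + (1-lam) t(x) Id with uniform t and lam != 0.
lam, t = 0.3, np.full(4, 0.25)
A_noisy = [lam * A[x] + (1 - lam) * t[x] * I2 for x in range(4)]

print(range_rank(A), range_rank(A_noisy))   # both 4
```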
\section{Qubit observables}
As we noted earlier, a nondisturbing pair of observables need not be mutually nondisturbing.
However, if the dimension of the Hilbert space is $2$, then these relations are the same and equivalent to the mutual commutativity.
Namely, the result \cite[Prop. 6]{HeWo10} implies the following:
\begin{proposition}\label{prop:qubit-1}
For two qubit observables $\A$ and $\B$, the following are equivalent:
\begin{itemize}
\item[(i)] $\A$ and $\B$ are mutually commuting.
\item[(ii)] $\A$ and $\B$ are mutually nondisturbing.
\item[(iii)] $\A$ and $\B$ are nondisturbing.
\end{itemize}
\end{proposition}
Using Prop. \ref{prop:commu} and Prop. \ref{prop:qubit-1} we get a complete characterization of broadcastable pairs of qubit observables.
\begin{proposition} \label{prop:qubit-2}
Two qubit observables $\A$ and $\B$ are broadcastable if and only if $\A$ and $\B$ are commutative and mutually commuting.
\end{proposition}
\begin{proof}
The 'if' part is a direct consequence of Prop. \ref{prop:commu}.
To show the 'only if' part, we assume that $\A$ and $\B$ are broadcastable.
Then $\A$ and $\B$ are also mutually nondisturbing, hence by Prop. \ref{prop:qubit-1} mutually commuting.
Further, since $\A$ and $\B$ are broadcastable, $\A$ is broadcastable with itself.
By the hierarchy of the relations this implies that $\A$ is nondisturbing with itself, hence using again Prop. \ref{prop:qubit-1} we conclude that $\A$ is commutative.
In a similar way we conclude that $\B$ is commutative.
\end{proof}
Further, utilizing the hierarchy of relations and the previous results, we can also characterize the one-side broadcastability of qubit observables.
The following statement extends Prop. \ref{prop:qubit-1}.
\begin{proposition}\label{prop:qubit-final}
For two qubit observables $\A$ and $\B$, the following are equivalent:
\begin{itemize}
\item[(i)] $\A$ and $\B$ are one-side broadcastable.
\item[(ii)] $\A$ and $\B$ are mutually nondisturbing.
\item[(iii)] $\A$ and $\B$ are nondisturbing.
\item[(iv)] $\A$ and $\B$ are mutually commuting.
\end{itemize}
If one (and hence all) of these relations holds and neither $\A$ nor $\B$ is trivial, then $\A$ and $\B$ are broadcastable.
\end{proposition}
\begin{proof}
By the general hierarchy of the relations we have (i)$\Rightarrow$(ii)$\Rightarrow$(iii), and by Prop. \ref{prop:qubit-1} we have (iii)$\Leftrightarrow$(iv). It is thus enough to show that (iv)$\Rightarrow$(i).
Let $\A$ and $\B$ be mutually commuting qubit observables.
Then at least one of the following holds:
\begin{itemize}
\item[(a)] $\A$ and $\B$ are both commutative.
\item[(b)] $\A$ is a trivial observable.
\item[(c)] $\B$ is a trivial observable.
\end{itemize}
To see this, let us first note that a selfadjoint operator on a two-dimensional Hilbert space either has nondegenerate spectrum or is a multiple of the identity operator.
Now, assume that the observable $\A$ is not commutative, and let $\A(x)$ and $\A(x')$ be two noncommuting operators.
Since $\B(y)$ commutes with both $\A(x)$ and $\A(x')$, it is diagonal in the eigenbases of $\A(x)$ and $\A(x')$.
It follows that $\B(y)$ is a multiple of the identity operator.
Therefore, if $\A$ is not commutative, then $\B$ is trivial, and vice versa.
The one-side broadcastability of $\A$ and $\B$ follows in all cases (a)--(c).
If (a) holds, then by Prop. \ref{prop:qubit-2} the pair is broadcastable, hence one-side broadcastable.
If (b) or (c) holds, then the pair is one-side broadcastable since, as we saw in Sec. \ref{sec:versus}, every observable is one-side broadcastable with any trivial observable.
The last claim follows from the division into the cases (a)--(c) and Prop. \ref{prop:qubit-2}.
\end{proof}
We recall that two qubit observables can be compatible even if they are not mutually commuting.
For instance, the compatibility relation for pairs of two-outcome qubit observables has been characterized in \cite{StReHe08,BuSc10,YuLiLiOH10}, and it is easy to see that most compatible pairs are not mutually commuting.
\section{Discussion}
The set of bipartite states divides into separable states and entangled states.
Among all separable states, some states are more classical than others.
Especially, the set of zero discord states is a proper subset of separable states, and separable states with nonzero discord yield advantage over zero discord states in certain tasks like phase estimation \cite{Girolamietal14}.
A comparable partitioning on pairs of observables is the division into compatible pairs and incompatible pairs, and then compatible pairs further into subsets of broadcastable, one-side broadcastable, nondisturbing and mutually nondisturbing pairs.
It would be interesting to see if their complement relations have a similar kind of task oriented characterizations as incompatibility, in which case a pair is incompatible if and only if it enables steering \cite{UoBuGuPe15}.
\section{Introduction}
Charged fluids are abundant in man-made or natural systems, in which thermalized mobile ions interact via
Coulomb forces collectively, and also with more macroscopic charged bodies such as colloids, proteins, or DNA.
The first theoretical attempt for describing inhomogeneous Coulomb fluids dates back about a century ago, to pioneering works
of Gouy in Lyon \cite{Gouy10} and Chapman in Oxford \cite{Chap13}.
These predate the Debye and H\"uckel approach which aimed at accounting for
the unusual thermodynamic properties of electrolytes like NaCl, where dissociation leads to a fluid of
Na$^+$ and Cl$^-$ ions in water \cite{DeHu23}. These early treatments are all mean-field in spirit.
It was realized in the 1980s that by discarding electrostatic correlations, mean-field theory precludes some counter-intuitive
effects such as the electrostatic attraction of like-charged surfaces, revealed
by experiments, simulations, and theoretical approaches,
see \cite{LiLo99,WBBP99,GBPP00,SodlC01,GrNS02,Levi02,BAO04,NJMN05,Mess09} and references
therein.
It is now recognized that the validity of mean-field treatments, epitomized by the Poisson-Boltzmann theory
of extensive use in colloid science \cite{Ande06},
requires the necessary condition of sufficiently small electrostatic coupling; in
the language of the coupling parameter $\Xi$ to be defined below and which pits electrostatic against thermal energies,
this means $\Xi \ll 1$ up to $\Xi \simeq 1$.
On the other hand, systems with moderate to strong coupling are profuse,
starting with nucleic acids and cell membranes in aqueous solutions.
Charges are pivotal to their stability {\it in vivo}.
The study of these biological objects from a physics perspective has rekindled interest in Coulomb fluids,
with particular emphasis on the strong-coupling regime.
\emma{Yet, analytical progress for moderately to strongly coupled charged fluids has proven elusive,
as will be illustrated below. Our goal here is to fill this gap, with a theoretical treatment
that is both physically transparent, and remarkably accurate. It takes advantage of the existence
of a correlation hole around individual ions in the system, a well-known feature that has nevertheless not
been turned into an explicit analytical treatment so far. It is also relevant to emphasize from
the outset that our approach deals with salt-free systems,
where only counterions are present in the solution. This situation, with no added buffer electrolyte,
applies to deionized suspensions (see e.g. the experiments reported in \cite{PMGE04}).}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.37\textwidth,clip]{geometry.eps}
\caption{Schematic side view of the system, without (panel a) and with (panel b) dielectric mismatch.
The mobile counter-ions, point-like, are drawn
as spheres for the sake of illustration.
In a), the dielectric constant of the solvent ($\varepsilon$) and that of the
interface ($\varepsilon'$) are equal. We will also consider in b) the case where both constants differ,
for which the dielectric mismatch is quantified by $\Delta=(\varepsilon-\varepsilon')/(\varepsilon+\varepsilon')$.
Panels a) and b) depict regimes of large Coulomb coupling ($\Xi\gg1 $). Then, the characteristic distance $a$
between the counter-ions is set by electro-neutrality: $\sigma a^2 \propto q$, where $\sigma e$ is the plate surface charge
density at $z=0$ and $-qe$ is the ion's charge, with $e$ the elementary charge.
The typical extension $\mu$ follows by balancing thermal energy $kT$ with the energy of an ion
$-q e$ at position $z$ in the potential $-2\pi \sigma e z/\varepsilon$ created by the bare plate:
$\mu = \varepsilon \,kT / (2\pi q \sigma e^2)$, the so-called Gouy length.
The coupling parameter is defined as $\Xi = 2 \pi \sigma q^3 e^4 /(\varepsilon k T)^2$.
Thus, $\Xi \propto a^2/\mu^2$ and $\Xi \gg 1 \Rightarrow \mu \ll a$.
In panel b), repulsive dielectric images should be considered
($\varepsilon' < \varepsilon$)
and a depletion zone of size $z^*$ appears.
The typical extension of the profile, $\mu'$, is no longer given by $\mu$ \cite{rque20}.
}
\label{fig:model}
\end{center}
\end{figure}
\section{Length Scale Separation}
The limit of asymptotically large couplings admits a simple description,
in elementary settings such as that sketched in Fig. \ref{fig:model}-a.
It can be understood by a length scale analysis, which we now illustrate on the emblematic primitive
counter-ion only model.
\emma{For strongly charged plates, most counterions remain in a close vicinity of
the surface. The characteristic distance $a$
between the condensed counter-ions is ruled by electro-neutrality: $\sigma a^2 \propto q$,
where $\sigma e$ is the plate surface charge
density at $z=0$ and $-qe$ is the ion's charge, with $e$ the elementary charge.
The typical extension, or excursion of the counter-ions from the surface, is denoted $\mu$.
This quantity, named the Gouy length, follows by the balance of
thermal energy $kT$ with the energy of an ion
$-q e$ at position $z$,
$\mu = \varepsilon \,kT / (2\pi q \sigma e^2)$.
The dimensionless coupling parameter, defined as $\Xi = 2 \pi \sigma q^3 e^4 /(\varepsilon k T)^2$, is proportional
to $a^2/\mu^2$.
When $\Xi \gg 1$, the Coulomb interaction between
counter-ions exceeds
thermal energy, so that the mobile counter-ions in the vicinity of a plate
are strongly attracted to the surface, and at the same time repelled from the
adjacent counter-ions: $\Xi \gg 1 \Rightarrow \mu \ll a$. This results in a
correlation hole of size $a$
\cite{RoBl96,Shkl99}, exceeding the typical transverse excursion of a counter-ion from the surface, characterized by
the Gouy length $\mu$, see Figure \ref{fig:model}-a, where the key length scales are depicted. For colloidal particles with
bare charge $Z=10^4e$ and radius of $R=10^3$ \AA, in aqueous solution,
the coupling parameter is $\Xi \approx 0.26$ for monovalent counterions ($q=1$),
$2.1$ for divalent counter-ions, and $7.0$ for trivalent counter-ions. However, since $\Xi$ is inversely proportional to
the square of the dielectric constant, for solvents of lower dielectric constants such as mixtures containing
water and alcohol, $\Xi$ can easily reach $50$ for
moderately charged surfaces with trivalent counterions.
It is also relevant to provide reasonable bounds for the possible values of $\Xi$, as a function
of valence $q$. In water at room temperature, highly charged interfaces
have $\sigma e$ on the order of one $e$ per nanometer squared,
and therefore $\Xi$ is on the order of $q^3$. With trivalent ions,
this means $\Xi\simeq 30$, which is already way into the regime covered by our treatment.}
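The numerical estimates quoted above are easy to reproduce. The sketch below (our illustration) assumes a Bjerrum length $\ell_B=e^2/(\varepsilon kT)\simeq 0.72$ nm for water at room temperature, and uses the equivalent expressions $\Xi = 2\pi q^3 \ell_B^2 \sigma$ and $\mu = 1/(2\pi q \ell_B \sigma)$, with $\sigma$ in elementary charges per nm$^2$.

```python
import math

l_B = 0.72   # Bjerrum length in water at room temperature, in nm (assumed value)

def coupling(sigma, q):
    """Coupling parameter Xi = 2*pi*q^3*l_B^2*sigma, with sigma in e/nm^2."""
    return 2 * math.pi * q**3 * l_B**2 * sigma

def gouy_length(sigma, q):
    """Gouy length mu = 1/(2*pi*q*l_B*sigma), in nm."""
    return 1.0 / (2 * math.pi * q * l_B * sigma)

# Colloid of the text: bare charge Z = 1e4 e, radius R = 1e3 Angstrom = 100 nm.
Z, R = 1.0e4, 100.0
sigma = Z / (4 * math.pi * R**2)    # about 0.08 e/nm^2
for q in (1, 2, 3):
    print(q, coupling(sigma, q), gouy_length(sigma, q))
```

With these inputs one recovers $\Xi \approx 0.26$, $2.1$, and $7.0$ for $q=1,2,3$, as quoted in the text.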
The length scale separation provides the grounds for a surprisingly simple picture of a strongly correlated Coulomb system
where the ions react mostly to the bare plate potential, while ion-ion interactions become insignificant as
$\Xi\to\infty$ \cite{Shkl99,NJMN05,Mess09}.
\emma{Thus, the ionic density profile
takes an exponential form $\rho(z) \propto \exp(-z/\mu)$ characteristic
of a particle in a constant field. The proportionality factor can be
determined by the contact value
theorem \cite{HeBl78}. }
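Explicitly, writing $\ell_B = e^2/(\varepsilon kT)$ for the Bjerrum length, the normalized single-particle profile reads (we quote the standard result for completeness)
\begin{equation}
\rho(z) \,=\, 2\pi \ell_B \sigma^2 \, e^{-z/\mu}\,, \qquad \int_0^\infty \rho(z)\,\mathrm{d}z \,=\, \frac{\sigma}{q}\,,
\end{equation}
where the prefactor is the contact density imposed by the contact value theorem and the integral expresses electro-neutrality.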
This ``ideal gas'' barometric law has been fully validated by numerical simulations \cite{MoNe02,Varenna}.
Corrections beyond the ideal gas regime can be computed in a $1/\sqrt{\Xi}$ expansion
by a perturbation around the Wigner crystal \cite{SaTr11}, that forms when $\Xi$ exceeds some (very large)
crystallization value $\Xi_c \simeq 3\times 10^4$ \cite{BaHa80}.
It is generally believed that single particle ideas fail in situations where scale separation
no longer holds: for instance if $\Xi$ is in some crossover regime of
moderate coupling, or in the situation of Fig. \ref{fig:model}-b
with a dielectric mismatch.
We shall see that although the ideal gas view indeed severely breaks down in these generic cases
-- which as a matter of fact
significantly limits its practical interest -- a ``correlation hole modified'' single-particle treatment
can be effectively applied. It is our purpose to present this fully analytical, self-consistent approach.
The theory developed here allows one to accurately determine the counter-ion density distribution
$\rho$, which is in striking agreement with computer simulation results. This leads to an
unexpected conclusion that somewhat beyond the usual mean-field regime of weakly coupled fluids,
an even simpler mean-field provides a quantitative description.
In the limiting cases where the ideal gas formulation is relevant, our analysis recovers it.
\section{Correlation Hole: Treatment and Consequences}
We now address the simplest geometry where lack of scale separation forestalls the ideal gas single particle physics:
the planar interface alluded to above, with a dielectric jump between
the solvent (dielectric constant $\varepsilon$) and the confining charged body (dielectric constant $\varepsilon'$)
occupying the lower half space as shown in Figure \ref{fig:model}-b.
Although simplified, such a geometry provides a paradigmatic testbed to shape intuition and theoretical ideas.
The situation $\Delta = (\varepsilon-\varepsilon')/(\varepsilon+\varepsilon') >0 $ is the most relevant one,
since the dielectric constant of materials like glass, proteins, or polarizable colloids is much smaller than that of water:
each charge admits an image of the same sign \cite{Jackson}, with a resulting repulsive interaction.
\emma{It also encompasses the air-liquid interfaces, for which $\varepsilon/\varepsilon' \simeq 80$.
The case $\Delta <0$ leads to attractive images \cite{SaTr12CPP},
and to the disappearance of the depletion zone in
Fig. \ref{fig:model}-b. The extreme limit corresponds to a grounded electrode with $\varepsilon' \rightarrow \infty$
for which $\Delta = -1$. In this case the ions can no longer be modeled as point particles and a hard core must be introduced.
In this paper we will restrict our attention to systems with $\Delta>0$.}
The mobile ions are attracted to the oppositely charged interface at $z=0$, but concomitantly
each charge $-qe$ at position $z$ has a dielectric image of charge $-qe \Delta$ at $-z$ \cite{Jackson},
which strongly repels it. A depletion zone ensues \cite{OnSa33};
it is quite straightforward to estimate its size $z^*$, which turns out to be of the same order as $a$.
Thus, one can no longer consider that ions are far from each other compared to their distance to the plate:
the intrusion of a new length scale, $z^*$, explains the failure of the single particle ideal gas picture.
Nevertheless, the ionic profile's extension, $\mu'$, remains the smallest length scale of the problem \cite{rque20}.
Hence, we are led to neglect the correlations between the ion's fluctuations, while taking due account
of their interactions in an effective way, at variance with the ideal gas formulation.
The problem we face reduces to computing the effective potential $u$ that a given ion experiences, when at a distance
$z$ away from the interface. When known, $u$ directly leads, through a Boltzmann weight, to the main quantity
of interest, the density profile: $\rho(z) \propto \exp(-\beta u) $, $\beta=1/(kT)$ being the inverse temperature.
We emphasize that when explicit analytic expressions are sought, the state
of the art lies in the single particle ideal gas view, in which case the potential of mean-force $u$ stems from the force due to the plate
at $z=0$ and the test particle image charge \cite{NJMN05,Mess09,MoNe02,MoNeEPL02}.
We shall see that this treatment is inappropriate for $\Delta \neq 0$, so that there
is no analytical treatment available in the literature to study this general case.
We attempt here to fill the gap.
In other words, while the idea of correlation holes in more or less correlated Coulombic fluids is not novel
\cite{Nord84,RoBl96,Shkl99,BaDH00,GrNS02,DoRu03,BAO04,HaLu10}, transforming the corresponding insight into a fully analytical theory is
new; it is the subject of our paper.
Since practically relevant values of the coupling parameter are orders of magnitude smaller than
the crystallization threshold, we envision the ions as forming a liquid, essentially two dimensional
since we do not aim at covering the limit of too small $\Xi$ (we will address the range
$\Xi>10$ here \cite{rque90}). The key structural features of this liquid are embodied in the
pair correlation function $g(r)$ \cite{HaMD86,rque30}, a function
of inter-ion distance providing the density of neighbors. This $g(r)$ is more or less
structured depending on the value of $\Xi$ \cite{MoNe02}, but is always strongly depleted at small
distances $r$ due to the strong Coulomb repulsion \cite{Nord84,RoBl96,Chen06,Sant06,HaLu10,BdSL11}:
we recover the correlation
hole depicted in Fig. \ref{fig:model}. A second characteristic is that the size of
this hole is essentially $\Xi$-independent: being set by electro-neutrality,
it is always given by the length scale $a$ introduced in the caption of
Fig. \ref{fig:model} \cite{MoNe02}; besides, each particle has a coordination number of six
\cite{rque50}. We claim that these gross features are sufficient for a proper account of the ionic profile,
without inclusion of further details.
Two levels of simplification will be provided, having in common the existence of a
correlation hole around the test particle, in the form of a concentric disk.
1) Apart from the test particle, the fluid of counter-ions is assumed structureless beyond $R_0$ (meaning $g(r)=1$ for $r>R_0$).
The size of the hole is set by balancing the hole and ion charges: $\pi R_0^2 \sigma e = q e$.
This leads to a system of a moving ion
in the field of a plate at $z=0$, a punctured plate at $z^*$ having a circular hole of size $R_0$,
plus the dielectric images of all charges, of the same sign but weighted with a prefactor $\Delta$, and
located at the symmetric position with respect to the mid plane at $z=0$.
We call this route the correlation hole + strong coupling with zero neighbor (ch$_0$).
2) In a refined approach, we set $g(r)=1$ beyond the first neighbors.
Then, each particle with its 6 neighbors is in the center of a hole with radius $R_6$,
now such that $\pi \sigma R_6^2 = q + 6q = 7q$.
Due account of image charges leads to the model represented in Figure \ref{fig:viewch6},
referred to as ch$_6$. For both ch$_0$ and ch$_6$ routes, the process of smearing out
an infinite number of counter-ions leads to a punctured charged plate, with a hole concentric with the
test ion. Its interaction with the test particle is essential for a good account of the density profile.
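The two hole radii follow from the elementary charge balances just stated, $\pi \sigma R_0^2 = q$ and $\pi \sigma R_6^2 = 7q$ (charges in units of $e$). The short sketch below, purely illustrative, encodes both prescriptions and checks that $R_6 = \sqrt{7}\,R_0$.

```python
import math

def hole_radius(q, sigma, n_neighbors=0):
    # Charge balance pi*sigma*R^2 = (1 + n_neighbors)*q:
    # n_neighbors = 0 gives the ch0 radius R0, n_neighbors = 6 gives R6.
    return math.sqrt((1 + n_neighbors) * q / (math.pi * sigma))

R0 = hole_radius(q=3, sigma=1.0)                  # ch0 hole
R6 = hole_radius(q=3, sigma=1.0, n_neighbors=6)   # ch6 hole
```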
\begin{figure}[htb]
\psfrag{AA}{$z^*$}
\psfrag{BB}{$(1+\Delta) \sigma\,e$}
\psfrag{CC}{$-\sigma\,e$}
\psfrag{DD}{$-\Delta\,\sigma\,e$}
\psfrag{EE}{$2 R_6$}
\psfrag{FF}{$z$}
\psfrag{GG}{$z=0$}
\begin{center}
\includegraphics[width=0.37\textwidth,clip]{model_ch6.eps}
\caption{Schematics of the ch$_6$ approach. A test particle (filled disc) is singled out at elevation $z$.
Other counter-ions are assumed to be at their typical location $z^*$.
Upon smearing out the counter-ions beyond
a cutoff distance $R_6$, one obtains a {\em punctured} plate with charge density $-\sigma e$. The empty circles
stand for the 6 nearest neighbors of the test particle. The symmetrically located dielectric images
-- discrete (displayed in gray) or continuous -- are also shown. The simplified
ch$_0$ view leads to a very similar setup, with the difference that there are no discrete neighbors:
these ions are also smeared out, so that the hole becomes smaller,
of radius $R_0 = R_6/\sqrt{7}$.
}
\label{fig:viewch6}
\end{center}
\end{figure}
\section{Results}
\emma{To explore the range of validity of the theory all the results will be compared with the Monte Carlo simulations
performed using the 3D Ewald summation with a correction for slab geometry and for surface polarization.
More details regarding the simulations can be found in Refs.~\cite{DoLe14} and \cite{DoLe15}. The interested reader
can also consult the efficient implementation of slab-geometry simulations
for charged interfaces recently developed in Ref.~\cite{DoGi16}.}
The analysis now proceeds in two steps \cite{rque97}. First, the optimal
distance $z^*$ is derived, which yields the maximum of the ionic profile $\rho(z)$.
Second, the effective one-particle potential $u$ is computed.
\emma{For the sake of simplicity, we start by presenting the ch$_0$ approach. We fix all ions at $z=z^*$
(including the test particle), and calculate $E_0$,
the energy per particle of the system, made up of 3 charged planes, two of which are punctured and located
at $\pm z^*$, and 2 discrete charges (image included). It proves convenient to add and subtract to the image
plane at $z=-z^*$, the potential of a charged disc with same density as the plate, $-\Delta \sigma e$.
In doing so, one obtains a non-punctuated plate at $z=-z^*$, and a disc of charge density $\Delta\sigma e$,
with radius $R_0$. The resulting energy per particle is}
\begin{eqnarray}
E_0(z^*) &=& \frac{2\pi}{\varepsilon}\,(1+\Delta)\,\sigma q \,e^2 z^* \, - \frac{1}{2}\,\Delta\,\frac{2\pi}{\varepsilon}\,\sigma q \,e^2 (2z^*)
\,-\,\frac{1}{2} \Delta q \sigma\frac{e^2}{\varepsilon}\,\int_0^{R_0} dr \,\frac{2\pi r}{\sqrt{r^2+(2z^*)^2}}
+\frac{q^2 e^2}{2\varepsilon} \Delta \frac{1}{2z^*}
\nonumber\\
&=&
\frac{2\pi}{\varepsilon}\,\sigma q \,e^2 z^* + \frac{q^2 e^2}{2\varepsilon} \Delta \frac{1}{2z^*}\,
-\, \pi \Delta \,q \,\sigma \frac{e^2}{\varepsilon}\left[\sqrt{R_0^2+(2z^*)^2}-2z^* \right] .
\end{eqnarray}
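As a sanity check of the algebra above, the sketch below evaluates both lines of the ch$_0$ energy numerically (in units $e=\varepsilon=1$), computing the disc integral by quadrature rather than in closed form, and verifies that the two expressions coincide.

```python
import math

def E0_raw(zs, sigma, q, Delta, R0, n=20000):
    # First line of the ch0 energy (units e = epsilon = 1); the disc
    # contribution is integrated numerically with the midpoint rule.
    h = R0 / n
    disc = sum(2.0 * math.pi * ((i + 0.5) * h)
               / math.hypot((i + 0.5) * h, 2.0 * zs)
               for i in range(n)) * h
    return (2.0 * math.pi * (1.0 + Delta) * sigma * q * zs
            - 0.5 * Delta * 2.0 * math.pi * sigma * q * (2.0 * zs)
            - 0.5 * Delta * q * sigma * disc
            + 0.5 * q * q * Delta / (2.0 * zs))

def E0_closed(zs, sigma, q, Delta, R0):
    # Second, closed-form line of the same expression.
    return (2.0 * math.pi * sigma * q * zs
            + 0.5 * q * q * Delta / (2.0 * zs)
            - math.pi * Delta * q * sigma * (math.hypot(R0, 2.0 * zs) - 2.0 * zs))
```

The parameter values used below are arbitrary; the agreement holds term by term for any of them.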
Turning to the ch$_6$ case, we have to consider 3 charged planes, two of which are punctured and located
at $\pm z^*$, and 14 discrete charges. \emma{Proceeding along similar lines as above,} the energy per particle now reads:
\begin{eqnarray}
E_0(z^*) &=&\frac{2\pi}{\varepsilon}\,\sigma q \,e^2 z^* +\frac{q^2 e^2}{2\varepsilon} \Delta
\left[\frac{1}{2z^*} + \frac{6}{\sqrt{a^2 + 4 z^{*2}}}
\right]
\nonumber\\
&&- \pi \Delta \,q \,\sigma \frac{e^2}{\varepsilon}\left[\sqrt{R_6^2+(2z^*)^2}-2z^* \right] .
\end{eqnarray}
Introducing the dimensionless variable $t=2z^*/a$ where $a=3^{-1/4} \sqrt{2q/\sigma}$
\cite{rque60} and minimizing $E_0$ with respect to $t$, we have to solve
$$
1-\Delta\left[\frac{t}{\sqrt{(R_6/a)^2+t^2}} -1
\right] = \frac{\sqrt{3}}{4\pi} \left[\frac{\Delta}{t^2} \,+ \,\frac{6\,\Delta \,t}{\left(1+t^2\right)^{3/2}}
\right].
$$
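The transcendental equation above has no closed-form root, but it is easily solved numerically; the sketch below uses plain bisection. From the definitions of $R_6$ and $a$ one gets $(R_6/a)^2 = 7\sqrt{3}/(2\pi)$, independent of the coupling; this rederivation is ours, and the bracketing window is chosen for illustration.

```python
import math

def depletion_root(Delta, lo=1e-3, hi=50.0, iters=200):
    # Solve the minimization condition for t = 2 z*/a (ch6 route) by
    # bisection.  With R6 and a as defined in the text,
    # (R6/a)^2 = 7*sqrt(3)/(2*pi) -- rederived here.
    c2 = 7.0 * math.sqrt(3.0) / (2.0 * math.pi)
    def f(t):
        lhs = 1.0 - Delta * (t / math.sqrt(c2 + t * t) - 1.0)
        rhs = (math.sqrt(3.0) / (4.0 * math.pi)) * (
            Delta / t**2 + 6.0 * Delta * t / (1.0 + t * t)**1.5)
        return lhs - rhs
    assert f(lo) < 0 < f(hi)      # root bracketed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_star = depletion_root(Delta=1.0)  # reduced depletion distance 2 z*/a
```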
Once $t$ and thus the depletion zone extension $z^*$ is found, we have to dissociate the test
particle from the ionic layer, move it along the $z$ axis as depicted in Fig. \ref{fig:viewch6},
and compute the resulting
potential $u(z)$. This is another elementary electrostatics exercise \cite{rque70}, with the result:
\begin{eqnarray}
&&\beta u(z) \,= \,(1+\Delta) \, \widetilde z \,+\, \frac{\Xi \Delta}{4 \, \widetilde z}
\nonumber\\
&& - \,\sqrt{ \left(\widetilde R_6\right)^2 + \left(\widetilde z - \widetilde z^*\right)^2 } \,-\,
\Delta \sqrt{ \left(\widetilde R_6\right)^2 + \left(\widetilde z + \widetilde z^*\right)^2 }
\nonumber\\
&& + \frac{6 \,\Xi}{\sqrt{ \widetilde a^{\,2} + \left(\widetilde z - \widetilde z^*\right)^2 }}
\,+\, \frac{6 \,\Xi \,\Delta}{\sqrt{ \widetilde a^{\,2} + \left(\widetilde z + \widetilde z^*\right)^2 }}
\label{eq:uch6}
\end{eqnarray}
where tilde distances are rescaled by the Gouy length, e.g. $\tilde z = z/\mu$.
The ch$_0$ counterpart of Eq. (\ref{eq:uch6}) is again very similar, without the last two terms
in $6\,\Xi$ and with the substitution $\widetilde R_6\to \widetilde R_0$ for the hole size. Since $\widetilde R_6^2=14\,\Xi$, we have
$\widetilde R_0^2=2\,\Xi$. Finally, the suitably normalized Boltzmann weight is the density
profile sought for:
\begin{equation}
\rho(z) \,=\, \frac{\sigma}{q}\, \frac{e^{-\beta u(z)}}{\int e^{-\beta u(z')} dz'} .
\label{eq:rho_u}
\end{equation}
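Equations (\ref{eq:uch6}) and (\ref{eq:rho_u}) translate directly into a few lines of code. In the sketch below, $\widetilde R_6^{\,2} = 14\,\Xi$ is taken from the text, while $\widetilde a^{\,2} = (4\pi/\sqrt{3})\,\Xi$ is rederived from the definition of $a$ (an assumption on our part); the normalization is performed over a finite window, and the value $\widetilde z^* = 1.2$ used in the check is a representative placeholder, the actual value coming from the minimization described above.

```python
import math

def beta_u_ch6(zt, Xi, Delta, zt_star):
    # One-body potential of Eq. (uch6) in Gouy-rescaled units.
    # R6t^2 = 14*Xi as stated in the text; at^2 = (4*pi/sqrt(3))*Xi is
    # rederived from a = 3**(-1/4)*sqrt(2q/sigma) -- treat as an assumption.
    R6t2 = 14.0 * Xi
    at2 = (4.0 * math.pi / math.sqrt(3.0)) * Xi
    return ((1.0 + Delta) * zt + Xi * Delta / (4.0 * zt)
            - math.sqrt(R6t2 + (zt - zt_star)**2)
            - Delta * math.sqrt(R6t2 + (zt + zt_star)**2)
            + 6.0 * Xi / math.sqrt(at2 + (zt - zt_star)**2)
            + 6.0 * Xi * Delta / math.sqrt(at2 + (zt + zt_star)**2))

def density_profile(Xi, Delta, zt_star, zmax=30.0, n=6000):
    # Normalized Boltzmann weight, Eq. (rho_u), computed on a finite
    # window [0, zmax] with the midpoint rule.
    h = zmax / n
    zs = [(i + 0.5) * h for i in range(n)]
    w = [math.exp(-beta_u_ch6(z, Xi, Delta, zt_star)) for z in zs]
    norm = sum(w) * h
    return zs, [wi / norm for wi in w]
```

For $\Delta > 0$ the image term $\Xi\Delta/(4\widetilde z)$ suppresses the density at contact, so the profile peaks at a finite distance: the depletion zone.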
By accounting solely for the interaction with the plate at $z=0$ and with the test particle image,
one has $\beta u = (1+\Delta) \, \widetilde z \,+\, \Xi \Delta/(4 \, \widetilde z)$, which,
when inserted into Eq. (\ref{eq:rho_u}), leads to the ideal gas profile proposed in
\cite{MoNeEPL02,JKNP08}. Such an approach is expected to fail as soon as the afore discussed scale separation is violated,
that is whenever $\Delta \neq 0$ \cite{rque110}. This is confirmed in Fig. \ref{fig:smallXi}.
On the other hand, the rather rough ch$_0$ picture significantly improves the agreement with Monte Carlo
data,
while the extended ch$_6$ description fares remarkably well (see Fig. \ref{fig:smallXi}). Extensive simulations
have also been performed for larger $\Xi$ values, confirming the accuracy of the ch$_6$ route
for all values of the dielectric jump $\Delta$, while the simple ch$_0$ description is also shown to be quite accurate.
In view of the underlying physical hypotheses (such as the two-dimensional assumption for the
fluid of counter-ions), better justified for strongly coupled systems, the very good
agreement at $\Xi=10$ rather comes as a surprise. A similar remark holds for ch$_0$, a crude,
but nonetheless trustworthy approximation.
It is interesting to compare and contrast our theory with the approach of Reference \cite{BAO04}, which
also relies on the idea of singling out a test particle. However, at variance with our treatment,
a) the remaining
ions are treated at the Poisson-Boltzmann level; b) the approach is restricted to $\Delta=0$,
and thus to a regime where many-body effects are less pronounced; c) the numerical resolution of a highly non-linear
partial differential equation
is required, with subsequent numerical integration of some auxiliary potential.
In contrast, our treatment is fully analytical, and reduces to three simple
equations presented above.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.35\textwidth,clip]{rho_Xi10_del0.95_simp.eps}
\includegraphics[width=0.36\textwidth,clip]{rho_Xi25_del1_q5.eps}
\caption{Density profile of counter-ions for $\Delta=0.95$ (meaning $\varepsilon/\varepsilon' \simeq 40$),
$\Xi=10$ (upper graph) and for $\Delta=1$, $\Xi=25$ (lower graph). The ch$_0$ and ch$_6$ predictions
are compared to the ideal gas profile proposed in \cite{MoNeEPL02}, and to the results of
Monte Carlo simulations (taken from \cite{MoNeEPL02} for the upper graph). Here, $l_B=\beta e^2/\varepsilon$
is the Bjerrum length.}
\label{fig:smallXi}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.42\textwidth,clip]{rho_Xi51_del0_q5.eps}
\caption{Same as Fig. \ref{fig:smallXi}, without dielectric mismatch ($\Delta=0$),
and $\Xi=51$. The density profile is maximum for $z=0$, at contact with the plate:
there is no depletion zone ($z^*=0$).}
\label{fig:Delta0}
\end{center}
\end{figure}
It is of particular interest to analyze the well documented $\Delta=0$ situation, where
$\varepsilon=\varepsilon'$. There, the ideal gas view provides the dominant large coupling profile
\cite{MoNe02,NJMN05,SaTr11}. As seen in Fig. \ref{fig:Delta0}, both ch$_0$ and ch$_6$
perform significantly better, and account correctly for the deviations from the exponential behavior:
the overpopulated tail with respect to exponential behavior
is a fingerprint of the repulsive effect of the fellow counter-ions
forming a layer at $z\simeq 0$, that becomes more pronounced as the test particle moves away
from this plane. We have found a similar agreement at $\Delta=0$ for larger $\Xi$ values.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.42\textwidth,clip]{rho_Xi1220_del1_q10.eps}
\caption{Counter-ion profile at large coupling for $\Delta=1$, symbols are the results of MC simulations.
The ``Wigner strong coupling''
prediction (ch$_\infty$) is also shown: it is almost indistinguishable from the ch$_6$ treatment.}
\label{fig:largeXi}
\end{center}
\end{figure}
Finally, we have tested our approach at very large couplings ($\Xi>10^3$), see Fig.
\ref{fig:largeXi}. While the ideal gas picture of Refs. \cite{MoNeEPL02,JKNP08}
is inoperative, the ch$_6$ theory agrees well with the simulation data, in spite
of the fact that the fluid of counter-ions is strongly modulated. We thus have considered extensions
of ch$_6$, of the ch$_n$ type, including a growing number of neighbors in the approach
($n=6, 12, 18, 30\ldots$), that we locate at their ground state position, in order to reach
gradually the $\Xi\to\infty$ hexagonal arrangement. Pushing this logic, we show in Fig.
\ref{fig:largeXi} the ch$_\infty$ prediction, where all ions are in their ground state position,
except the test particle. It is still possible to compute analytically the resulting one body potential $u$
making use of the lattice summation techniques developed in Ref. \cite{SaTr12}.
There is barely any difference between the ch$_6$ and the ch$_\infty$
predictions. Incidentally, all ch$_n$ formulations, for $n$ between 6 and $\infty$,
remain extremely close for all
couplings we have investigated, which emphasizes the robustness of the approach \cite{rque64}.
Furthermore, the depletion zone extension,
$z^*$, hardly depends on the level $n$ in a ch$_n$ treatment, from $n=0$ up to $n\to\infty$!
\section{Conclusion}
In conclusion, we have presented a theory that accounts very accurately for the ionic density profiles of salt-free
systems at moderate and strong couplings.
Extensive comparisons with Monte Carlo simulations have been carried out.
Our approach is accurate
for $\Xi>10$, and thus covers a wealth of experimentally relevant
situations; for instance, DNA with trivalent counter-ions ($q=3$) has $\Xi$ around $100$.
The couplings that both evade mean-field and our analysis, namely $\Xi$ in the range $[1,10]$,
must be addressed by computer simulation.
Our formulation relies on basic electrostatics considerations, at variance with other
more complex treatments such as the splitting field-theory \cite{Chen06,Sant06,HaLu10,LuLi15},
and invokes transparent physical hypotheses pertaining to ionic correlations.
The latter are accounted for at a one body level, which qualifies the approach as mean-field.
Furthermore, besides accuracy, our treatment has been shown to be very robust.
More complex geometries such as a slit, explored for small
separations in Ref. \cite{JKNP08}
provide possible applications for the theory presented in this paper.
Another important perspective includes the addition of co-ions \cite{persp}, which brings
an extra coupling parameter and hard-core effects. This leads to significant complications,
but can elaborate on the no-salt treatment presented here, in the spirit of previous
approaches \cite{KNFP11,PaTr11}. On general grounds, salt ions ``dress'' the interactions
between multivalent counter-ions \cite{KNFP11}, in a way that may be complex, but that may admit rather
simple limiting laws. For instance, with highly asymmetric electrolytes,
counter-ions may be in a strong coupling regime while coions are not. This leads to
a picture where the counterions interact
through a screened potential, which allows further progress \cite{KNFP11}.
Alternatively, if coions themselves are strongly coupled,
they will form Bjerrum pairs with the counter-ions, leading to a system with excess
counter-ions and a number of dipoles, see e.g. \cite{Yan2011}. In a first approximation, neglecting the pairs \cite{PaTr11},
the formalism presented here is directly applicable.
The support received from the Grant VEGA No. 2/0015/15 and from CNPq, INCT-FCx, and US-AFOSR under the grant
FA9550-16-1-0280 is acknowledged.
\section{Introduction}
Random walks, of various types, have been comprehensively studied
and widely applied in physics, engineering, and especially many fields
of mathematics. The connection between random walk models and special
polynomials, especially Bernoulli and Euler polynomials, was not obvious,
since those polynomials, though with numerous applications in different
areas, mainly appear in number theory and combinatorics. However,
the recent work by the first author and Vignat \cite{JiuVignat}, on
the $1$-dimensional reflected Brownian motion and $3$-dimensional
Bessel process, discovered and proved some non-trivial identities involving Bernoulli
and Euler polynomials of order $p$, denoted by $B_{n}^{(p)}(x)$
and $E_{n}^{(p)}(x)$ and defined via their exponential generating
functions
\begin{equation}
\left(\frac{t}{e^{t}-1}\right)^{p}e^{xt}=\sum_{n=0}^{\infty}B_{n}^{(p)}(x)\frac{t^{n}}{n!}\quad\text{and}\quad\left(\frac{2}{e^{t}+1}\right)^{p}e^{xt}=\sum_{n=0}^{\infty}E_{n}^{(p)}(x)\frac{t^{n}}{n!}.\label{eq:GF}
\end{equation}
In particular, $B_{n}(x)=B_{n}^{(1)}(x)$ and $E_{n}(x)=E_{n}^{(1)}(x)$
are the ordinary Bernoulli and Euler polynomials; and Bernoulli numbers
$B_{n}=B_{n}(1)$ and Euler numbers $E_{n}=2^{n}E_{n}(1/2)$ are special
evaluations. See, e.g., \cite[Chap.~24]{NIST} for details.
This study originally arises from early work \cite[Eq.~(3.8)]{Euler},
where the first author, Moll, and Vignat expressed the usual Euler
polynomials as a linear combination of higher-order Euler polynomials:
for any positive integer $N$,
\[
E_{n}(x)=\frac{1}{N^{n}}\sum_{\ell=N}^{\infty}p_{\ell}^{(N)}E_{n}^{(\ell)}\left(\frac{\ell-N}{2}+Nx\right).
\]
It is surprising that the positive coefficients $p_{\ell}^{(N)}$
also appear as transition probabilities in the context of a random
walk over a finite number of sites \cite[Note 4.8]{Euler}, which
reveals the possibility to connect random walks and $E_{n}^{(p)}(x)$,
as well as $B_{n}^{(p)}(x)$.
Results in \cite{JiuVignat} are obtained by decomposing the successive
hitting times only of \emph{two}, \emph{three}, and \emph{four} fixed
levels, i.e., walks with \emph{one} or \emph{two} loops. Therefore,
it is the purpose of this paper to generalize this work to general
$n$ loops,
\begin{itemize}
\item by both inclusion-exclusion principle and induction;
\item in order to obtain the general hitting time decomposition for $n$ loops;
\item and to derive the corresponding identities involving $B_{n}^{(p)}(x)$ and $E_{n}^{(p)}(x)$.
\end{itemize}
Hence, this paper is organized as follows. In Section \ref{sec:Loop}, we introduce basic notation for random walks,
especially the generating function of the hitting time; the models of the $1$-loop and $2$-loop cases are recalled as examples. In Section \ref{sec:NLoop}, we generalize the model to general $n$-loops, with two proofs: one combinatorial, by the inclusion-exclusion principle, and one inductive. In Section \ref{sec:umbral}, we recall the three umbral symbols: the Bernoulli, Euler, and uniform symbols, together with their important properties and connections. These formulas are crucial for deriving the identities in Section \ref{sec:1loop}. In this last section, we first consider the $1$-dimensional reflected Brownian motion model; as an analogue, the loop decomposition and identities in the $3$-dimensional Bessel process model are also derived.
\section{Preliminaries: loops}\label{sec:Loop}
One can find similar summary in \cite{JiuVignat}, except for a slight
change in the notation: see Def.~\ref{def:phi} below. We still include
them to make this paper self-contained.
\subsection{Notation for paths and loops}
We begin with some notation on the moment generating functions of
random walks among loops.
\begin{defn}
\label{def:phi}Consider sites $a<b$ and a third site $c$, different
from $a$ and $b$.
\begin{itemize}
\item We let $\phi_{a\rightarrow b}$ be the \emph{moment generating function
of the hitting time of site $b$ starting from site $a$}; also let
$\phi_{b\rightarrow a}$ be the \emph{counterpart from $b$ to $a$}.
Therefore,
\[
L_{a,b}:=\phi_{a\rightarrow b}\phi_{b\rightarrow a}.
\]
is the moment generating function of the hitting time
\begin{itemize}
\item starting from $a$;
\item hitting $b$ first;
\item and finally returning to $a$.
\end{itemize}
\item[] Note the symmetry that $L_{b,a}=\phi_{b\rightarrow a}\phi_{a\rightarrow b}=L_{a,b}$.
Thus, this is the \emph{loop} between sites $a$ and $b$.
\item Also, we let $\phi_{a\rightarrow b|\cancel{c}}$ be the moment generating
function of the hitting time of site $b$ starting from site $a$
\emph{before hitting site $c$}; and similarly for $\phi_{b\rightarrow a|\cancel{c}}$.
It is easy to see, $\phi_{a\rightarrow b|\cancel{c}}=\phi_{a\rightarrow b}$
if $c>b$ and $\phi_{a\rightarrow b|\cancel{c}}=0$ if $a<c<b$.
\item If $a$ is the $m$th site, denoted by $a_{m}$; and $b$ is the $n$th
site as $a_{n}$, we shall use $\phi_{m\rightarrow n}$, for simplicity,
instead of $\phi_{a_{m}\rightarrow a_{n}}$.
\item Finally, let $\phi_{m\rightarrow n|\cancel{k}}$ be the moment generating
function of the hitting time from the $m$th site to the $n$th site
before hitting the $k$th site.
\item We can similarly use $t_{a\rightarrow b}$ etc.~for the hitting times, rather than
their moment generating functions. For instance, $t_{m\rightarrow n|\cancel{k}}$ is the
hitting time from the $m$th site to the $n$th site before hitting the $k$th site.
\item
Here, it is important and also convenient for us to let
\[
L_{n}=\phi_{(n-1)\rightarrow n|\cancel{(n-2)}}\cdot\phi_{n\rightarrow(n-1)|\cancel{(n+1)}}
\]
denote the moment generating function of the hitting time of the
loop between the (consecutive) $(n-1)$th site and the $n$th site.
\end{itemize}
\end{defn}
\begin{example}
We first recall the $1$-loop and $2$-loop cases, already studied
in \cite{JiuVignat}.
\begin{figure}
\begin{centering}
\includegraphics[scale=0.5]{1.png}
\par\end{centering}
\caption{$1$-loop case\label{fig:1loop}}
\end{figure}
The $1$-loop case can be viewed in Fig.~\ref{fig:1loop}, in which
we assume the initial site is $a_{0}$, namely there is no other site
to the left of $a_{0}$. It is not hard to see the hitting time decomposition as follows:
\[
t_{0\rightarrow2}=t_{0\rightarrow1}+\underset{k\text{ copies}}{\underbrace{t_{1\rightarrow0|\cancel{2}}+t_{0\rightarrow1}+\cdots+t_{1\rightarrow0|\cancel{2}}+t_{0\rightarrow1}}}+t_{1\rightarrow2|\cancel{0}},
\]
namely, there can be $k$ copies of $L_{1}$ in the moment generating
functions, for $k=0,1,2,\ldots$. When considering the moment generating
functions of both sides, independence turns the summation into products.
Therefore, we have
\begin{equation}
\phi_{0\rightarrow2}=\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\sum_{k=0}^{\infty}\left(\phi_{1\rightarrow0|\cancel{2}}\phi_{0\rightarrow1}\right)^{k}=\frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}}{1-L_{1}}.\label{eq:1loop}
\end{equation}
(See also \cite[Eq.~(2.5)]{JiuVignat}.) In addition, the $2$-loop case is
\begin{equation}
\phi_{0\rightarrow3}=\frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}}{1-(L_{1}+L_{2})},\label{eq:2loop}
\end{equation}
by a combinatorial enumeration \cite[Eq.~(2.6)]{JiuVignat}.
\end{example}
We shall give the loop decomposition for $n$-loops in the following section.
\section{\label{sec:NLoop}General $n$-loop Decomposition}
In this section, we shall give the expression of the general $n$-loop
formula (see Fig.~\ref{fig:Mloop}, the black paths), as the generalization
of (\ref{eq:1loop}) and (\ref{eq:2loop}). Again, we assume the walk
begins at the site $a_{0}$ and only moves to its right along the consecutive
sites $a_{1}<a_{2}<\cdots<a_{n+1}$. Namely, the walk considers $a_0$ as its starting point, and there are no
sites to the left of $a_0$.
\begin{thm}\label{thm:LOOP}
\begin{eqnarray*}
& & \phi_{0\rightarrow (n+1)} = \phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\, \left(
\sum_{k\geq 0} \sum_{**}\prod_{t=1}^k L_{i_t}\right)\\[0.2cm]
& & = \phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\, \sum_{k\geq 0} \left((L_1+L_2+\cdots + L_n)+\sum_{*'}' (-1)^{l+1} (L_{j_1}L_{j_2}\cdots L_{j_l}) \right)^k\\[0.2cm]
& & = \phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\,
\frac{1}{1-(L_1+L_2+\cdots + L_n)+\sum\limits_{*'}'(-1)^{l} (L_{j_1}L_{j_2}\cdots L_{j_l})},
\end{eqnarray*}
where
\[
**=\{(i_1,i_2,\cdots, i_k): 1 \leq i_t \leq n, \text{and } i_t = i_{t+1}, i_t =i_{t+1}+1, \text{or } i_t < i_{t+1}\};
\]
and
\[
*'=\{n\geq j_{1} > \cdots >j_{l}\ge 1,\; l \geq 2, \; j_{m}-j_{m+1}\ge2\}.
\]
\end{thm}
\begin{rem*}
As one can tell, the terms in $*'$ are loops without consecutive indices; they are listed in a {\emph{descending}} order, for the combinatorial interpretation given later in the proof. Before the proof of Thm.~\ref{thm:LOOP}, we would like to present several examples for small numbers of loops.
\begin{example}
\label{exa:2-5loops}The formulas for $n=2,3,4,5$, given by Theorem \ref{thm:LOOP}, are listed as follows.
\begin{align*}
\phi_{0\rightarrow3} & =\frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}}{1-(L_{1}+L_{2})},\\
\phi_{0\rightarrow4} & =\frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}\phi_{3\rightarrow4|\cancel{2}}}{1-(L_{1}+L_{2}+L_{3}-L_{3}L_{1})},\\
\phi_{0\rightarrow5} & =\frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}\phi_{3\rightarrow4|\cancel{2}}\phi_{4\rightarrow5|\cancel{3}}}{1-(L_{1}+L_{2}+L_{3}+L_{4}-L_{4}L_{2}-L_{4}L_{1}-L_{3}L_{1})},\allowdisplaybreaks\\
\phi_{0\rightarrow6} & =\frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}\phi_{3\rightarrow4|\cancel{2}}\phi_{4\rightarrow5|\cancel{3}}\phi_{5\rightarrow6|\cancel{4}}}{1-(L_{1}+\cdots+L_{5}-L_{5}L_{3}-L_{5}L_{2}-L_{5}L_{1}-L_{4}L_{2}-L_{4}L_{1}-L_{3}L_{1}+L_{5}L_{3}L_{1})}.
\end{align*}
Apparently, if there are only two loops, $L_1$ and $L_2$, then the set $*'$ is empty. So we recover the case of two loops, i.e., \eqref{eq:2loop}.
\end{example}
\end{rem*}
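The set $*'$ in Theorem \ref{thm:LOOP} can be enumerated mechanically: one lists the subsets $\{j_1 > \cdots > j_l\}$ of $\{1,\ldots,n\}$ with all gaps at least $2$, each carrying the sign $(-1)^l$. The sketch below does exactly this and reproduces, for $n=5$, the twelve terms of the $\phi_{0\rightarrow6}$ denominator displayed above.

```python
from itertools import combinations

def denominator_terms(n):
    # Signed products in the denominator of the n-loop formula:
    # subsets {j_1 > ... > j_l} of {1, ..., n} with no two consecutive
    # loop indices (j_m - j_{m+1} >= 2), carrying sign (-1)^l; the
    # single loops (l = 1) enter with sign -1 as well.
    terms = []
    for l in range(1, n + 1):
        for subset in combinations(range(1, n + 1), l):
            js = tuple(sorted(subset, reverse=True))
            if all(js[m] - js[m + 1] >= 2 for m in range(l - 1)):
                terms.append(((-1) ** l, js))
    return terms
```

For $n=5$ this yields the five single loops with sign $-1$, the six non-adjacent pairs $L_5L_3,\ldots,L_3L_1$ with sign $+1$, and the single triple $L_5L_3L_1$ with sign $-1$, in agreement with the $\phi_{0\rightarrow6}$ example.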
\begin{proof}[Proof of Thm.~\ref{thm:LOOP}, by inclusion-exclusion principle]
The proof will give a combinatorial interpretation to the terms appearing in Theorem \ref{thm:LOOP}.
\vspace{0.2cm}
Let $n\geq 1$. An arbitrary decomposition of $t_{0\rightarrow (n+1)}$ as a sum of hitting times between consecutive sites, which generalizes the one given in Example 2.2, can be written as
\[
(A)\qquad t_{0\rightarrow (n+1)}= \sum_{j=0}^{2k+n+1} t_{i_j \rightarrow i_{j+1}|\cancel{i_j^*}}
\]
where
\vspace{0.2cm}
\begin{itemize}
\item[(i)] $i_0=0,\, i_1 =1,\, i_{2k+n+1}=n+1$;\\[0.2cm]
\item[(ii)] if $\; 0 \leq j < 2k+n$, $\quad 0 \leq i_j , i_{j+1} \leq n$, $\quad $ and $\quad \; |i_j-i_{j+1}|=1$;\\[0.2cm]
\item[(iii)] if $\; i_j \not = 0, $ $\; |i_j^* - i_j|=1 \; $ and $\; |i_j^* - i_{j+1}|=2$;\\[0.2cm]
\item[(iv)] if $\; i_j = 0,$ $\; t_{i_j \rightarrow i_{j+1}|\cancel{i_j^*}} = t_{i_j \rightarrow i_{j+1}} = t_{0\rightarrow 1}.$
\end{itemize}
\vspace{0.3cm}
It is not hard to see that as we move through the sum in $(A)$, in the direction of increasing $j$, every time we encounter a hitting time which corresponds to a step to the left, i.e., from a site $i_j$ to site $i_{j+1}=i_j-1$, a loop is formed. These loops can only occur between consecutive sites, and the order in which they appear is important: different walks with the same collection of loops will have different orderings. The ordering of the loops has an additional constraint, given by the assumption that the walk only moves between consecutive sites. That is, if $i_{j+1} < i_j$, then $i_{j+1}=i_j -1$. No such constraint is needed if $i_{j+1} > i_j$. Hence every decomposition of $t_{0\rightarrow (n+1)}$ in (A) can be uniquely written as
\[
(B) \qquad t_{0\rightarrow (n+1)}=
t_{0\rightarrow 1} + \sum_{j=1}^{n}t_{j\rightarrow(j+1)|\cancel{j-1}} + \sum_{t=1}^{k} l_{i_t}
\]
where
\begin{itemize}
\item[(a)] the order in which we sum the $l_{i_t}$'s is important;\\[0.2cm]
\item[(b)] $1 \leq i_t \leq n$, and $i_t = i_{t+1}$, or $\; i_t =i_{t+1} +1$, or $i_t < i_{t+1}$.
\end{itemize}
\vspace{0.2cm}
\noindent
Clearly this correspondence can be reversed, and (B) can be used to express the moment generating function of $t_{0\rightarrow (n+1)}$ in terms of the moment generating functions of loops. That is,
\begin{equation} \label{new form 0}
\phi_{0\rightarrow n+1}=\phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\, \left(
\sum_{k\geq 0} \sum_{**}\prod_{t=1}^k L_{i_t}\right),
\end{equation}
where
\[ **=\{(i_1,i_2,\cdots, i_k):\; \mbox{the $ i_t$'s satisfy $(b).$}\}
\]
\vspace{0.3cm}
Let $k$ be arbitrary but fixed. We are going to apply the method of inclusion-exclusion to recover $ \sum\limits_{**}\prod\limits_{t=1}^k L_{i_t}$ from $(L_1+L_2+\cdots + L_n)^k$.
Note that, if $n=1, 2$, (b) necessarily holds, and from (\ref{new form 0}) we immediately obtain
\[
\phi_{0\rightarrow 2} = \phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}} \sum_{k=0}^{\infty} L_1^k = \frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}}{1-L_1},
\]
and
\[
\phi_{0\rightarrow 3} = \phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}
\sum_{k=0}^{\infty} (L_1+L_2)^k = \frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}
}{1-(L_1+L_2)},
\]
which coincide with (\ref{eq:1loop}) and (\ref{eq:2loop}), respectively (see also Example (\ref{exa:2-5loops})).
Let $n \geq 3$. Before applying the method of inclusion-exclusion it is convenient to introduce some additional notation. Since property (b) can be expressed in terms of the order in which multiplication of the $L_j$'s is performed, we write
$$(L_1+L_2+\cdots +L_n)^k = \prod_{t=1}^k \, (L_1+L_2+\cdots + L_n)_{(t)}$$
and use these additional subscripts to label the forbidden products of pairs of moment generating functions of loops (m.g.f. of loops) in the expansion of $(L_1+L_2+\cdots + L_n)^k$. We write $(L_iL_j)_{(s)}$ when $i-j>1$, $L_i$ comes from the factor $(L_1+L_2+\cdots + L_n)_{(s)}$, and $L_j$ comes from $(L_1+L_2+\cdots + L_n)_{(s+1)}$. Moreover, when we write $ (L_iL_j)_{(s)}\, (L_r L_u)_{(v)},$ we assume that
$$ v \geq s+1,\quad \mbox{ and } v=s+1 \quad \mbox{if and only if } \quad L_j=L_r.$$
In this latter case,
\begin{equation} \label{convention 1}
(L_iL_j)_{(s)}\, (L_j L_u)_{(s+1)} \quad \mbox{ reduces to } \quad (L_i L_j L_u)_{(s)}.
\end{equation}
Convention (\ref{convention 1}) naturally extends to products of several forbidden pairs of m.g.f. of loops, thus producing forbidden tuples $(L_{i_1} L_{i_2}\cdots L_{i_t})$, where $i_j \geq i_{j+1} + 2$, $1\leq j \leq t-1$.
\vspace{0.2cm}
The method of inclusion-exclusion now gives that $\sum\limits_{k\geq 0} \sum\limits_{**}\prod\limits_{t=1}^k L_{i_t} $ can be written as
\begin{equation} \label{new form 1}
\sum_{k\geq 0} \left[ (L_1 +\cdots + L_n)^k +
\sum_{1 \leq l \leq k-1} (-1)^{l} \sum_{(l)}
\big(\prod_{j=1}^l (L_{i_j}L_{t_j})_{(s_j)} (L_1 + \cdots + L_n)^{k-\#_l} \big)\right],
\end{equation}
where $\#_l$ denotes the number of distinct $L_r$ in $\prod\limits_{j=1}^l (L_{i_j}L_{t_j})_{(s_j)} $, and $\sum\limits_{(l)}$ runs over all possible $\{(i_j,t_j)_{(s_j)}, 1\leq j \leq l, s_j < s_{j+1}, \, i_j > t_{j}+1\}.$ Note that this set could be empty.
Next we rewrite (\ref{new form 1}) in a much simpler form. First, whenever possible we apply (\ref{convention 1}) and its extensions; then we drop the subscripts (at this point they no longer provide any additional information), but we keep parentheses around each forbidden quantity. Finally, we combine like terms. We view each forbidden tuple, i.e., $ (L_{i_1}L_{i_2}), (L_{i_3}L_{i_4}L_{i_5}), \cdots $ as distinct variables, and, for every $k$, we collect all monomials of degree $k$ in these variables. We claim that this procedure reduces (\ref{new form 1}) to
\begin{equation} \label{new form 2}
\sum_{k\geq 0} \left((L_1+L_2+\cdots + L_n)+\sum_{*'} (-1)^{l+1} (L_{j_1}L_{j_2}\cdots L_{j_l}) \right)^k,
\end{equation}
where $*'=\{n\geq j_{1} > \cdots >j_{l}\ge 1,\; l \geq 2, \; j_{m}-j_{m+1}\ge2\}$.
We show the equivalence of (\ref{new form 1}) and (\ref{new form 2}) by showing that, for every $k \geq 0$, there is a one-to-one correspondence between the terms of degree $k$ of these two quantities. Let $k=m$ be arbitrary but fixed. An arbitrary term in the expansion of
$$\left((L_1+L_2+\cdots + L_n)+\sum_{*'} (-1)^{l+1} (L_{j_1}L_{j_2}\cdots L_{j_l}) \right)^m$$
in (\ref{new form 2}) will have the form
\[
(-1)^s{m \choose t_1,\cdots,t_{r+1}}
\left(\prod_{i=1}^{s_1}L_ {j^{(1)}_{i}}\right)^{t_1}\, \left(\prod_{i=1}^{s_2}L_ {j^{(2)}_{i}}\right)^{t_2}\cdots \left(\prod_{i=1}^{s_r}L_ {j^{(r)}_{i}}\right)^{t_r}\, (L_1+\cdots + L_n)^{t_{r+1}}
\]
for arbitrary positive integers $t_1, t_2, \cdots ,t_{r+1}$, $t_1 + t_2 +\cdots + t_{r+1}=m$, arbitrary forbidden tuples $(j^{(1)}_{1},\cdots , j^{(1)}_{s_1}), \cdots , (j^{(r)}_{1},\cdots , j^{(r)}_{s_r})$, and $s=(s_1+1)t_1 + \cdots + (s_r+1) t_r$.
In (\ref{new form 1}) the terms
\begin{equation} \label{shift}
(-1)^s\, \left(\prod_{i=1}^{s_1}L_ {j^{(1)}_{i}}\right)^{t_1}\, \left(\prod_{i=1}^{s_2}L_ {j^{(2)}_{i}}\right)^{t_2}\cdots \left(\prod_{i=1}^{s_r}L_ {j^{(r)}_{i}}\right)^{t_r}\, (L_1+\cdots + L_n)^{t_{r+1}}
\end{equation}
are introduced when the method of inclusion-exclusion is applied
to remove the forbidden pairs of m.g.f. of loops from
\[\left(L_1+\cdots + L_n\right)^{t_1s_1+t_2s_2+\cdots + t_r s_r + t_{r+1}}.
\]
The number of these terms can be obtained by viewing each pair of parentheses $(\cdots)$ in (\ref{shift}) as a distinct object, with $(\cdots)^{t_i}$ meaning that $(\cdots)$ is repeated $t_i$ times, and counting the number of ways these objects can be arranged on a line, where order is important. It is easy to see that this number is
\[\frac{(t_1+t_2+\cdots + t_r + t_{r+1})!}{t_1!t_2!\cdots t_r!\, t_{r+1}!} = {m \choose t_1,t_2,\cdots ,t_r,t_{r+1}}.
\]
This shows that all the terms of degree $m$ in (\ref{new form 2}) can be found in (\ref{new form 1}). Moreover it is easy to see that these are the only terms of degree $m$ in (\ref{new form 1}), thus proving the equivalence between (\ref{new form 1}) and (\ref{new form 2}).
Hence,
\begin{eqnarray*}
& & \phi_{0\rightarrow (n+1)} = \phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\, \left(
\sum_{k\geq 0} \sum_{**}\prod_{t=1}^k L_{i_t}\right)\\[0.2cm]
& & = \phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\, \sum_{k\geq 0} \left((L_1+L_2+\cdots + L_n)+\sum_{*'} (-1)^{l+1} (L_{j_1}L_{j_2}\cdots L_{j_l}) \right)^k\\[0.2cm]
& & = \phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\,
\frac{1}{1-(L_1+L_2+\cdots + L_n)+\sum_{*'} (-1)^{l} (L_{j_1}L_{j_2}\cdots L_{j_l})}.
\end{eqnarray*}
The proof is now complete.
\end{proof}
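As an independent sanity check on the loop identity (not part of the original argument), the case $n=3$ can be verified by brute force with a computer algebra system: condition (b) amounts to $i_{t+1}\geq i_{t}-1$, and the only forbidden tuple with gaps at least $2$ is $(L_{3}L_{1})$. The following illustrative Python/SymPy sketch compares the truncated sum over admissible index sequences with the truncated geometric expansion of the closed form; commuting formal variables suffice here, since the identity only counts admissible sequences.

```python
# Brute-force check of the loop identity for n = 3 (illustrative sketch):
# the sum over index sequences obeying condition (b), i.e. i_{t+1} >= i_t - 1,
# should match the geometric expansion of 1/(1 - (L1+L2+L3) + L1*L3),
# since (L3, L1) is the only nonadjacent tuple with gap >= 2.
import itertools
import math
import sympy as sp

n, D = 3, 5                                  # loops L1..L3, truncation degree
L = sp.symbols(f'L1:{n + 1}')                # commuting formal variables

def admissible(seq):
    # condition (b): consecutive loop indices satisfy i_{t+1} >= i_t - 1
    return all(b >= a - 1 for a, b in zip(seq, seq[1:]))

lhs = sum(math.prod(L[i - 1] for i in seq)
          for k in range(D + 1)
          for seq in itertools.product(range(1, n + 1), repeat=k)
          if admissible(seq))

S = sum(L) - L[0] * L[2]                     # (L1+L2+L3) - L1*L3
rhs = sum(S**k for k in range(D + 1))        # expansion of 1/(1 - S)

# all terms of total degree <= D must agree
poly = sp.Poly(sp.expand(lhs - rhs), *L)
mismatch = [m for m, c in poly.terms() if sum(m) <= D and c != 0]
assert not mismatch
```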
\newpage
Next we provide an alternative proof of Theorem \ref{thm:LOOP}, by induction on $n$. To simplify the notation in the proof, we shall use
\begin{equation}
\sum\limits _{(k,l,n)}=\sum\limits _{*}(-1)^{l+1}L_{j_{1}}\cdots L_{j_{l}},\label{eq:SummationNotation}
\end{equation}
where $*=\{k=j_{1}<\cdots<j_{l}\le n,1\le l\le n-k+1,j_{m+1}-j_{m}\ge2\}$, in which the loops are reordered in ascending order.\\
\begin{rem*}
Notice that we have reversed the order of the subscripts in the newly defined summation notation. Indeed, $\sum\limits_{*}$ and $\sum\limits_{*'}$ are mathematically equivalent. Although we believe $\sum\limits_{*'}$ does a better job of conveying the combinatorial idea behind the loop identity, for simplicity of expression we choose to use the reversed order in the following proof.
\end{rem*}
\noindent It is easy to see that
\begin{equation}
\sum\limits _{(k,l,n)}=L_{k}-L_{k}\sum\limits _{j=k+2}^{n}\sum\limits _{(j,l,n)}=L_{k}\left(1-\sum\limits _{j=k+2}^{n}\sum\limits _{(j,l,n)}\right).\label{eq:LoopExpansion}
\end{equation}
Hence, we can further rewrite Thm.~\ref{thm:LOOP} as
\begin{equation}
\phi_{0\rightarrow n+1}=\phi_{0\rightarrow1}\prod_{j=1}^{n}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\frac{1}{1-\sum\limits _{k=1}^{n}\sum\limits _{(k,l,n)}}.\label{eq:Main}
\end{equation}
\begin{figure}
\includegraphics[scale=0.3]{2.png}
\caption{\label{fig:Mloop}$m$-loop}
\end{figure}
\begin{proof}[Proof of Thm.~\ref{thm:LOOP}, by induction]
For the case $n=1$, \eqref{eq:Main}
reduces to
\begin{equation}
\phi_{0\rightarrow2}=\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\frac{1}{1-L_{1}},\label{eq:n2}
\end{equation}
the same as \eqref{eq:1loop}. Suppose \eqref{eq:Main} holds for
$n=m-1$. We must then show that
\begin{equation}
\phi_{0\rightarrow(m+1)}=\phi_{0\rightarrow1}\prod_{j=1}^{m}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\frac{1}{1-\sum\limits _{k=1}^{m}\sum\limits _{(k,l,m)}}.\label{eq:Inductive}
\end{equation}
We combine the first two loops together, which reduces the system to $m-1$
new loops, labeled $L_{2}',L_{3}',L_{4}',\ldots$; see Fig.~\ref{fig:Mloop}.
Note that $\phi_{0\rightarrow2}$ is given by (\ref{eq:n2}); and
similarly
\[
\phi_{2\rightarrow0|\cancel{3}}=\frac{\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow1|\cancel{3}}}{1-\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow1|\cancel{3}}}.
\]
Hence,
\[
L'_{2}=\phi_{0\rightarrow2}\phi_{2\rightarrow0|\cancel{3}}=\frac{L_{1}L_{2}}{(1-L_{1})(1-L_{2})}.
\]
In addition,
\begin{align*}
L'_{3} & =\phi_{2\rightarrow3|\cancel{0}}\phi_{3\rightarrow2|\cancel{4}}\\
& =\phi_{2\rightarrow3|\cancel{1}}\sum_{k=0}^{\infty}(\phi_{2\rightarrow1|\cancel{3}}\phi_{1\rightarrow2|\cancel{0}})^{k}\phi_{3\rightarrow2|\cancel{4}}\\
& =(\phi_{2\rightarrow3|\cancel{1}}\phi_{3\rightarrow2|\cancel{4}})\frac{1}{1-\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow1|\cancel{3}}}\\
& =\frac{L_{3}}{1-L_{2}},
\end{align*}
and $L'_{k}=L_{k}\text{, for all }4\le k\le m$. To further simplify
expressions, we need a new summation symbol:
\[
\sum'\limits _{(k,l,n)}=\sum\limits _{*}(-1)^{l+1}L'_{j_{1}}\cdots L'_{j_{l}},
\]
where $*=\{k=j_{1}<\cdots<j_{l}\le n,1\le l\le n-k+1,j_{m+1}-j_{m}\ge2\}$,
which is exactly the same as (\ref{eq:SummationNotation}), with all
$L$'s replaced by $L'$.
Now apply (\ref{eq:Main}) to the sites $0,2,3,\ldots,m+1$ (i.e., with
$m-1$ loops) to get
\begin{align*}
\phi_{0\rightarrow(m+1)} & =\phi_{0\rightarrow2}\phi_{2\rightarrow3|\cancel{0}}\prod_{j=3}^{m}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\frac{1}{1-\sum\limits _{k=2}^{m}\sum'\limits _{(k,l,m)}}\\
& =\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\frac{1}{1-L_{1}}\phi_{2\rightarrow3|\cancel{1}}\frac{1}{1-L_{2}}\prod_{j=3}^{m}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\frac{1}{1-\sum\limits _{k=2}^{m}\sum'\limits _{(k,l,m)}}\\
& =\phi_{0\rightarrow1}\prod_{j=1}^{m}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\frac{1}{(1-L_{1})(1-L_{2})\left(1-\sum\limits _{k=2}^{m}\sum'\limits _{(k,l,m)}\right)}.
\end{align*}
Therefore, (\ref{eq:Inductive}) is equivalent to
\begin{equation}
(1-L_{1})(1-L_{2})\left(1-\sum\limits _{k=2}^{m}\sum'\limits _{(k,l,m)}\right)=1-\sum\limits _{k=1}^{m}\sum\limits _{(k,l,m)}.\label{eq:ProofExpansion}
\end{equation}
By applying (\ref{eq:LoopExpansion}), we have the left-hand side
\begin{align*}
& \quad(1-L_{1})(1-L_{2})\left(1-\sum'\limits _{(2,l,m)}-\sum'\limits _{(3,l,m)}-\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}\right)\allowdisplaybreaks\\
& =1-L_{1}-L_{2}+L_{1}L_{2}-(1-L_{1})(1-L_{2})\left[L'_{2}\left(1-\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}\right)\right.\\
& \quad\left.+L'_{3}\left(1-\sum\limits _{k=5}^{m}\sum'\limits _{(k,l,m)}\right)\right]-\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}+L_{1}\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}+L_{2}\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}\\
& \quad-L_{1}L_{2}\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}\\
& =1-L_{1}-L_{2}+\underline{L_{1}L_{2}-L_{1}L_{2}\left(1-\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}\right)}-L_{3}\left(1-\sum\limits _{k=5}^{m}\sum'\limits _{(k,l,m)}\right)\\
& \quad+L_{1}L_{3}\left(1-\sum\limits _{k=5}^{m}\sum'\limits _{(k,l,m)}\right)-\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}+L_{1}\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}+L_{2}\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}\\
& \quad\underline{-L_{1}L_{2}\sum\limits _{k=4}^{m}\sum'\limits _{(k,l,m)}}.
\end{align*}
Note that the two underlined terms cancel; and since $L'_{k}=L_{k}$
for $k\geq4$, all the primed sums $\sum\limits'$ above are actually $\sum$.
Therefore, the left-hand side of (\ref{eq:ProofExpansion}) is
\begin{align*}
& =1-L_{1}\left[1-L_{3}\left(1-\sum\limits _{k=5}^{m}\sum\limits _{(k,l,m)}\right)-\sum\limits _{k=4}^{m}\sum\limits _{(k,l,m)}\right]-L_{2}\left(1-\sum\limits _{k=4}^{m}\sum\limits _{(k,l,m)}\right)\allowdisplaybreaks\\
& \quad-L_{3}\left(1-\sum\limits _{k=5}^{m}\sum\limits _{(k,l,m)}\right)-\sum\limits _{k=4}^{m}\sum\limits _{(k,l,m)}\\
& =1-\sum\limits _{k=1}^{m}\sum\limits _{(k,l,m)},
\end{align*}
which is the right-hand side of (\ref{eq:ProofExpansion}).
\end{proof}
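The key algebraic identity (\ref{eq:ProofExpansion}) can likewise be confirmed symbolically for a fixed small $m$. The sketch below (illustrative only; it implements the recursion (\ref{eq:LoopExpansion}) together with the expressions for $L'_{2}$ and $L'_{3}$ obtained above) checks the case $m=5$.

```python
# Symbolic check of eq:ProofExpansion for m = 5 (illustrative sketch).
# S(k, n, loop) implements eq:LoopExpansion: S(k) = L_k*(1 - sum_{j=k+2}^n S(j)).
import sympy as sp

m = 5
Lsym = sp.symbols(f'L1:{m + 1}')
L = dict(enumerate(Lsym, start=1))           # unprimed loops L_1..L_m

def S(k, n, loop):
    return loop[k] * (1 - sum(S(j, n, loop) for j in range(k + 2, n + 1)))

rhs = 1 - sum(S(k, m, L) for k in range(1, m + 1))

# primed loops of the merged system: L'_2, L'_3 as above, and L'_k = L_k, k >= 4
Lp = dict(L)
Lp[2] = L[1] * L[2] / ((1 - L[1]) * (1 - L[2]))
Lp[3] = L[3] / (1 - L[2])

lhs = (1 - L[1]) * (1 - L[2]) * (1 - sum(S(k, m, Lp) for k in range(2, m + 1)))
diff = sp.cancel(lhs - rhs)                  # rational identity, must vanish
assert diff == 0
```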
\vspace{0.2cm}
\section{Preliminaries: umbral random symbols}\label{sec:umbral}
A similar summary of this section can be found in \cite{JiuVignat}; we restate it here for self-containedness. We will let $\mathcal{B}$, $\mathcal{E}$, and $\mathcal{U}$ be
the \emph{Bernoulli}, \emph{Euler}, and \emph{uniform (umbral) symbols},
respectively. They are defined as follows.
\subsection{Bernoulli $\mathcal{B}$}
The Bernoulli symbol $\mathcal{B}$ satisfies the evaluation rule
\begin{equation}
(x+\mathcal{B})^{n}=B_{n}(x).\label{eq:BernoulliBEval}
\end{equation}
In fact, $\mathcal{B}$ can be viewed as a random variable \cite[Thm.~2.3]{Zagier},
i.e., $\mathcal{B}=iL_{B}-1/2$, where $i^{2}=-1$ and $L_{B}$ is the
random variable on $\mathbb{R}$ with density $p_{B}(t)=\pi\sech^{2}(\pi t)/2$.
Hence, the evaluation rule is equivalent to taking the expectation.
Moreover, for any suitable function $f$ (i.e., one making the integrals absolutely convergent),
\[
f(x+\mathcal{B})=\mathbb{E}\left[f\left(x+iL_{B}-\frac{1}{2}\right)\right]=\frac{\pi}{2}\int_{\mathbb{R}}f\left(x+it-\frac{1}{2}\right)\sech^{2}(\pi t)dt.
\]
In particular, $f(x)=x^{n}$ yields (\ref{eq:BernoulliBEval}). In addition,
we have
\[
B_{n}^{(p)}(x)=\left(x+\mathcal{B}^{(p)}\right)^{n}=\left(x+\mathcal{B}_{1}+\cdots+\mathcal{B}_{p}\right)^{n}
\]
for a set of $p$ {\emph{independent}} umbral symbols (or random variables)
$\left(\mathcal{B}_{i}\right)_{i=1}^{p}$, satisfying:
\begin{itemize}
\item if $i\neq j$, so that $\mathcal{B}_{i}$ and $\mathcal{B}_{j}$ are
independent, then we evaluate
\[
\mathcal{B}_{i}^{n}\mathcal{B}_{j}^{m}=B_{n}B_{m};
\]
\item and if $i=j$, then
\[
\mathcal{B}_{i}^{n}\mathcal{B}_{j}^{m}=\mathcal{B}_{i}^{n+m}=B_{n+m}.
\]
\end{itemize}
Now, with (\ref{eq:GF}), we deduce that
\begin{equation}
e^{\mathcal{B}t}=\frac{t}{e^{t}-1},\quad e^{t(2\mathcal{B}+1)}=\frac{t}{\sinh t},\quad\text{and}\quad e^{t(2\mathcal{B}^{(p)}+p)}=\left(\frac{t}{\sinh t}\right)^{p}.\label{eq:Bp}
\end{equation}
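As a quick illustrative check of the first evaluation in (\ref{eq:Bp}) (not part of the original text), one can compare the Taylor coefficients of $t/(e^{t}-1)$ with $B_{n}/n!$:

```python
# Check that the coefficient of t^n in t/(e^t - 1) equals B_n / n!,
# i.e. the evaluation rule e^{Bt} = t/(e^t - 1).  Illustrative sketch only.
import sympy as sp

t = sp.symbols('t')
series = sp.series(t / (sp.exp(t) - 1), t, 0, 8).removeO()
for k in range(8):
    assert series.coeff(t, k) == sp.bernoulli(k) / sp.factorial(k)
```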
\subsection{Euler $\mathcal{E}$}
The Euler symbol $\mathcal{E}$ can be similarly defined via the random
variable interpretation that $\mathcal{E}=iL_{E}-1/2$, where $L_{E}$'s
density is given by $p_{E}(t)=\sech(\pi t)$. Then, $(\mathcal{E}+x)^n=E_n(x)$; and in particular,
for sum of independent symbols $\mathcal{E}^{(p)}=\mathcal{E}_{1}+\cdots+\mathcal{E}_{p}$,
\[
E_{n}^{(p)}(x)=\left(x+\mathcal{E}^{(p)}\right)^{n}.
\]
Therefore, (\ref{eq:GF}) yields
\begin{equation}
e^{t\mathcal{E}}=\frac{2}{e^{t}+1},\quad e^{t(2\mathcal{E}+1)}=\sech t,\quad\text{and}\quad e^{t(2\mathcal{E}^{(p)}+p)}=\sech^{p}t.\label{eq:Ep}
\end{equation}
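Similarly, the evaluation $e^{t(2\mathcal{E}+1)}=\sech t$ can be checked against the Euler numbers $E_{n}=2^{n}E_{n}(1/2)$, whose exponential generating function is $\sech t$; an illustrative sketch:

```python
# Check sech t = sum_n E_n t^n / n! for the Euler numbers E_n = 2^n E_n(1/2).
# Illustrative sketch only.
import sympy as sp

t = sp.symbols('t')
series = sp.series(1 / sp.cosh(t), t, 0, 8).removeO()
for k in range(8):
    assert series.coeff(t, k) == sp.euler(k) / sp.factorial(k)
    assert sp.euler(k) == 2**k * sp.euler(k, sp.Rational(1, 2))
```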
Moreover, from the generating functions, \eqref{eq:Bp} and \eqref{eq:Ep},
\[
e^{2\mathcal{B}t}=\frac{2t}{e^{2t}-1}=\frac{t}{e^{t}-1}\cdot\frac{2}{e^{t}+1}=e^{t(\mathcal{B}+\mathcal{E})},
\]
we may identify $2\mathcal{B}=\mathcal{B}+\mathcal{E}$, namely, for any
suitable function $f$, $f(x+2\mathcal{B})=f(x+\mathcal{B}+\mathcal{E})$.
\subsection{Uniform $\mathcal{U}$}
The uniform symbol $\mathcal{U}$ is the uniform random variable on
$[0,1]$, i.e., $\mathcal{U}\sim U[0,1]$, so that the evaluation
is
\begin{equation}
\mathcal{U}^{n}=\int_{0}^{1}t^{n}dt=\frac{1}{n+1}.\label{eq:RVU}
\end{equation}
One easily checks that
\begin{equation}
e^{t\mathcal{U}}=\sum_{n=0}^{\infty}\frac{t^{n}}{(n+1)!}=\frac{e^{t}-1}{t},\quad e^{t(2\mathcal{U}-1)}=\frac{\sinh t}{t},\quad\text{and}\quad e^{t(2\mathcal{U}^{(p)}-p)}=\left(\frac{\sinh t}{t}\right)^{p},\label{eq:Up}
\end{equation}
for the sum of independent symbols~$\mathcal{U}^{(p)}=\mathcal{U}_{1}+\cdots+\mathcal{U}_{p}$.
An important link between $\mathcal{B}$ and $\mathcal{U}$ is the
cancellation rule. Note that
\[
e^{t(\mathcal{U}+\mathcal{B})}=e^{t\mathcal{U}}e^{t\mathcal{B}}=\frac{e^{t}-1}{t}\cdot\frac{t}{e^{t}-1}=1.
\]
So for a suitable function $f$,
\[
f(x+\mathcal{B}+\mathcal{U})=f(x).
\]
In what follows, we will use independent copies of the three symbols.
In order to distinguish them, we shall denote independent uniform
symbols by $\mathcal{U},\mathcal{U}',\ldots$ and $\mathcal{U}^{(p)},\mathcal{U}'^{(p)},\ldots$;
and similarly for the other two symbols.
\section{Identities of Bernoulli and Euler polynomials}\label{sec:1loop}
Now, with the loop decomposition (\ref{eq:Main}) and the evaluation of
symbols (\ref{eq:Bp}), (\ref{eq:Ep}), and (\ref{eq:Up}), we can
derive certain identities. Note that, even in the case of two loops,
it does not seem possible to simplify the expressions completely
in terms of Bernoulli and Euler polynomials; one can only do so in terms
of the three symbols; see, e.g., \cite[Thms.~3.4 and 4.2]{JiuVignat}.
Throughout this section we set all the sites \emph{equally spaced},
namely $a_{j}=j$, for $j=0,1,2,\ldots,n$.
\subsection{$1$-dim reflected Brownian motion on $\mathbb{R}_{+}$}
In this case, for three consecutive sites $a<b<c$, the generating
functions of the corresponding hitting times can be found in \cite[p.~198 and p.~355]{Formulas}
with variable $w$:
\begin{align*}
\phi_{a\rightarrow b} & =\frac{\cosh(aw)}{\cosh(bw)},\\
\phi_{b\rightarrow a|\cancel{c}} & =\frac{\sinh\left((c-b)w\right)}{\sinh\left((c-a)w\right)},\\
\phi_{b\rightarrow c|\cancel{a}} & =\frac{\sinh\left((b-a)w\right)}{\sinh\left((c-a)w\right)}.
\end{align*}
In this case, we begin with $a_{0}=0$ as the initial site and then
apply the formulas above to obtain, for $n\geq1$,
\begin{equation}
\phi_{0\rightarrow n}=\frac{1}{\cosh(nw)}\quad\text{and}\quad\phi_{n\rightarrow n+1|\cancel{n-1}}=\phi_{n\rightarrow n-1|\cancel{n+1}}=\frac{1}{2\cosh(w)}.\label{eq:1dim}
\end{equation}
Before we state and prove the general formula, we first compute an
example of $3$-loop case, which is not included in \cite{JiuVignat}.
\begin{example}
As stated in Ex.~\ref{exa:2-5loops},
\[
\phi_{0\rightarrow4}=\frac{\phi_{0\rightarrow1}\phi_{1\rightarrow2|\cancel{0}}\phi_{2\rightarrow3|\cancel{1}}\phi_{3\rightarrow4|\cancel{2}}}{1-(L_{1}+L_{2}+L_{3}-L_{1}L_{3})}.
\]
Now, apply (\ref{eq:1dim}) to have
\begin{align*}
\frac{1}{\cosh(4w)} & =\frac{1}{8\cosh^{4}w}\sum_{k=0}^{\infty}\left(\frac{\sinh w}{\sinh(2w)\cosh w}+\frac{2\sinh^{2}w}{\sinh^{2}(2w)}\right.\\
& \quad\left.-\frac{\sinh w}{\sinh(2w)\cosh w}\cdot\frac{\sinh^{2}w}{\sinh^{2}(2w)}\right)^{k}.
\end{align*}
\begin{itemize}
\item The left-hand side is simply $\sech(4w)=\exp\left\{ 4w(2\mathcal{E}+1)\right\} .$
\item For the right-hand side, we first simplify that
\begin{align*}
\frac{\sinh w}{\sinh(2w)\cosh w}+\frac{2\sinh^{2}w}{\sinh^{2}(2w)} & =\frac{1}{\cosh^{2}w},\\
\frac{\sinh w}{\sinh(2w)\cosh w}\cdot\frac{\sinh^{2}w}{\sinh^{2}(2w)} & =\frac{1}{8\cosh^{4}w}.
\end{align*}
Hence, we have
\begin{align*}
& \quad\frac{1}{8\cosh^{4}w}\sum_{k=0}^{\infty}\left(\frac{1}{\cosh^{2}w}-\frac{1}{8\cosh^{4}w}\right)^{k}\\
& =\frac{1}{8\cosh^{4}w}\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\left(\frac{1}{\cosh^{2}w}\right)^{k-\ell}\left(\frac{1}{8\cosh^{4}w}\right)^{\ell}\\
& =\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{1}{8^{\ell+1}}\cdot\left(\frac{1}{\cosh w}\right)^{2k+2\ell+4}\\
& =\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{1}{8^{\ell+1}}\exp\left\{ w(2\mathcal{E}^{(2k+2\ell+4)}+2k+2\ell+4)\right\} .
\end{align*}
\end{itemize}
Namely,
\[
\exp\left\{ 8\mathcal{E}w+4w\right\} =\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{1}{8^{\ell+1}}\exp\left\{ w(2\mathcal{E}^{(2k+2\ell+4)}+2k+2\ell+4)\right\} ,
\]
i.e.,
\[
\exp\left\{ 8\mathcal{E}w\right\} =\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{1}{8^{\ell+1}}\exp\left\{ w(2\mathcal{E}^{(2k+2\ell+4)}+2k+2\ell)\right\} .
\]
Multiplying both sides by $\exp\{xw\}$ and comparing the coefficients
of $w^{n}$ , we see
\begin{itemize}
\item the left-hand side yields
\[
\exp\left\{ (8\mathcal{E}+x)w\right\} \Rightarrow\left(8\mathcal{E}+x\right)^{n}=8^{n}E_{n}\left(\frac{x}{8}\right);
\]
\item while the right-hand side gives
\begin{align*}
& \quad\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{1}{8^{\ell+1}}(2\mathcal{E}^{(2k+2\ell+4)}+2k+2\ell+x)^{n}\\
& =\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{2^{n}}{8^{\ell+1}}E_{n}^{(2k+2\ell+4)}\left(\frac{x}{2}+k+\ell\right).
\end{align*}
\end{itemize}
Therefore, substituting $x\mapsto 8x$, we obtain
\[
E_{n}(x)=\frac{1}{4^{n}}\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}(-1)^{\ell}\binom{k}{\ell}\frac{1}{8^{\ell+1}}E_{n}^{(2k+2\ell+4)}\left(4x+k+\ell\right).
\]
\end{example}
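After clearing denominators, the generating-function identity used in this example is equivalent to the multiple-angle formula $\cosh(4w)=8\cosh^{4}w-8\cosh^{2}w+1$; a quick numerical confirmation at arbitrarily chosen sample points (illustrative only):

```python
# Numerical check of the 3-loop generating-function identity
#   1/cosh(4w) = (1/(8 cosh^4 w)) / (1 - sech^2 w + sech^4 w / 8),
# equivalently cosh(4w) = 8 cosh^4 w - 8 cosh^2 w + 1.
# Sample points are arbitrary; this is an illustration, not a proof.
import math

for w in (0.1, 0.5, 1.3, 2.0):
    c = math.cosh(w)
    lhs = 1 / math.cosh(4 * w)
    rhs = (1 / (8 * c**4)) / (1 - 1 / c**2 + 1 / (8 * c**4))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```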
\begin{rem*}
It is easy to see that $\phi_{0\rightarrow1}$ is twice each of the other transition functions
$\phi_{n\rightarrow n+1|\cancel{n-1}}=\phi_{n\rightarrow n-1|\cancel{n+1}}$,
which also causes the difference between $L_{1}$ and the $L_{j}$'s, $j=2,3,\ldots$.
The following combinatorial enumeration is the key to finding the general
formulas.
\end{rem*}
\begin{defn}
Let $S=\{a_{1},a_{2},\dots,a_{n}\}$ be a sequence of
$n$ mathematical objects.
\begin{enumerate}
\item If the subindices of $a_{j_{1}}a_{j_{2}}\cdots a_{j_{m}}$ satisfy
$1\le j_{1}<\cdots<j_{m}\le n$ and $j_{k}-j_{k-1}\ge2$, we call it
a \emph{nonadjacent product of order $n$, length $m$, with initial
state $j_{1}$}.
\item We define $N(\ell,n)$ as \emph{the number of all nonadjacent products
of order $n$ and length $\ell$} (without a specified initial state).
\item And finally we let $n(a,\ell,n)$ be \emph{the number of different
nonadjacent products of order $n$, length $\ell$, with initial state
$a$}.
\end{enumerate}
\end{defn}
\begin{rem*}
By convention, $N(\ell,n)$ and $n(a,\ell,n)$ can both be zero, if
no such product exists.
\end{rem*}
\begin{thm}
Let $\ell$ be an integer such that $3\leq\ell\leq n$. Then
\begin{equation}
N(\ell,n)=N(\ell,n-1)+N(\ell-1,n-2).\label{eq:NlnRec}
\end{equation}
\end{thm}
\begin{proof}
All the nonadjacent products of order $n$ and length $\ell$ can
be divided into two parts: the first part consists of the products
that do not involve $a_{n}$, which are exactly the nonadjacent products
of order $n-1$ and length $\ell$; and the second part consists of
the products that do involve $a_{n}$, which correspond to the
nonadjacent products of order $n-2$ and length $\ell-1$.
\end{proof}
\begin{thm}
Suppose $2\le\ell\le n$, then
\[
n(1,\ell,n)=\sum_{k=3}^{n}n(k,\ell-1,n)=N(\ell-1,n-2).
\]
\end{thm}
\begin{proof}
A nonadjacent product of order $n$, length $\ell$, and initial state
$1$ starts with $a_{1}$ and continues with some $a_{j}$, $j\ge3$. So the
number of all nonadjacent products of order $n$, length $\ell$, and
initial state $1$ equals the number of all nonadjacent products of
order $n$ and length $\ell-1$ with initial state $j\ge3$, which in
turn equals the number of all nonadjacent products of order $n-2$
and length $\ell-1$.
\end{proof}
Note that $\binom{n}{k}=0$ if $k>n$ or $k<0$. So we can identify
$N(\ell,n)$ with a binomial coefficient.
\begin{thm}
Suppose $1\le\ell\le n$, then
\[
N(\ell,n)=\binom{n-\ell+1}{\ell}.
\]
\end{thm}
\begin{proof}
It suffices to show that $\binom{n-\ell+1}{\ell}$ satisfies the same initial
conditions and recurrence relation as $N(\ell,n)$, which is easy
to see, since
\[
\binom{n-\ell+1}{\ell}=\binom{n-\ell}{\ell}+\binom{n-\ell}{\ell-1}
\]
coincides with (\ref{eq:NlnRec}). For the initial conditions, notice
that when $\ell=1$,
\[
N(1,n)=n=\binom{n}{1},
\]
which counts exactly the loops $L_{1},\ldots,L_{n}$.
\end{proof}
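Since $N(\ell,n)$ counts the size-$\ell$ subsets of $\{1,\ldots,n\}$ with pairwise gaps at least $2$, the closed form is easily confirmed by brute-force enumeration (illustrative sketch, not part of the proof):

```python
# Brute-force check of N(l, n) = C(n - l + 1, l): count size-l subsets of
# {1, ..., n} whose elements are pairwise at distance >= 2.
# Illustrative sketch only.
import itertools
import math

def N(l, n):
    return sum(1 for c in itertools.combinations(range(1, n + 1), l)
               if all(b - a >= 2 for a, b in zip(c, c[1:])))

for n in range(1, 10):
    for l in range(1, n + 1):
        assert N(l, n) == math.comb(n - l + 1, l)
```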
Recall the \emph{multinomial coefficients}: for $k_{1},\ldots,k_{m}\in\mathbb{N}$,
$k_{1}+\cdots+k_{m}\leq n$,
\[
\binom{n}{k_{1},\ldots,k_{m}}=\frac{n!}{k_{1}!\cdots k_{m}!(n-k_{1}-\cdots-k_{m})!}.
\]
In particular, for $m=1$, we have the binomial coefficients $\binom{n}{k}$.
\begin{thm}
Let $M=\lceil m/2\rceil-1$, and let $M'$ be the largest odd number less
than or equal to $M$. Then
\begin{align}
E_{n}\left(\frac{x}{m+1}\right) & =\frac{1}{(m+1)^{n}}\sum_{k=0}^{\infty}\frac{(m+1)^{k}}{2^{2k+m}}\sum_{n_{1},\dots,n_{M}=0}^{k}\binom{k}{n_{1},\dots,n_{M}}\label{eq:1dimIdentity}\\
 & \ \ \ \times(-1)^{n_{1}+n_{3}+\cdots+n_{M'}}\frac{4^{n_{1}+\cdots+n_{M}}}{(m+1)^{n_{1}+\cdots+n_{M}}}\nonumber \\
 & \ \ \ \times\left(\frac{\binom{m-2}{1}}{2^{3}}+\frac{\binom{m-2}{2}}{2^{4}}\right)^{n_{1}}\cdots\left(\frac{\binom{m-M-1}{M}}{2^{2M+1}}+\frac{\binom{m-M-1}{M+1}}{2^{2M+2}}\right)^{n_{M}}\nonumber \\
 & \ \ \ \times E_{n}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m+1)}\left(k+n_{1}+2n_{2}+\cdots+Mn_{M}+x\right).\nonumber
\end{align}
\end{thm}
\begin{proof}
Let $n=m$ in (\ref{eq:Main}) and apply (\ref{eq:1dim}) to have
\begin{align*}
\sech((m+1)w) & =\sech w\frac{\sinh^{m}w}{\sinh^{m}(2w)}\sum_{k=0}^{\infty}\left(n(1,1,m)\frac{\sech^{2}w}{2}+\right.\\
& \quad N(1,m-1)\frac{\sech^{2}w}{2^{2}}-n(1,2,m)\frac{\sech^{4}w}{2^{3}}-\\
& \quad N(2,m-1)\frac{\sech^{4}w}{2^{4}}+\cdots+(-1)^{M}n(1,M+1,m)\times\\
& \quad\left.\frac{\sech^{(2M+2)}w}{2^{2M+1}}+(-1)^{M}N(M+1,m-1)\frac{\sech^{(2M+2)}w}{2^{2M+2}}\right)^{k}\\
& =\frac{\sech^{(m+1)}w}{2^{m}}\sum_{k=0}^{\infty}\left(\frac{\sech^{2}w}{2}+\frac{\binom{m-1}{1}\sech^{2}w}{2^{2}}-\right.\\
& \quad\frac{\binom{m-2}{1}\sech^{4}w}{2^{3}}-\frac{\binom{m-2}{2}\sech^{4}w}{2^{4}}+\cdots+(-1)^{M}\times\\
& \quad\left.\frac{\binom{m-M-1}{M}\sech^{(2M+2)}w}{2^{2M+1}}+(-1)^{M}\frac{\binom{m-M-1}{M+1}\sech^{(2M+2)}w}{2^{2M+2}}\right)^{k}.
\end{align*}
Applying (\ref{eq:Ep}), we get
\begin{align*}
e^{(m+1)w(2\mathcal{E}+1)} & =\sum_{k=0}^{\infty}\frac{(m+1)^{k}}{2^{2k+m}}\sum_{n_{1},\dots,n_{M}=0}^{k}\binom{k}{n_{1},\dots,n_{M}}\frac{(-1)^{n_{1}+n_{3}+\cdots+n_{M'}}4^{n_{1}+\cdots+n_{M}}}{(m+1)^{n_{1}+\cdots+n_{M}}}\\
& \quad\times\left(\frac{\binom{m-2}{1}}{2^{3}}+\frac{\binom{m-2}{2}}{2^{4}}\right)^{n_{1}}\cdots\left(\frac{\binom{m-M-1}{M}}{2^{2M+1}}+\frac{\binom{m-M-1}{M+1}}{2^{2M+2}}\right)^{n_{M}}\\
& \quad\times e^{w(2\mathcal{E}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m+1)}+2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m+1)}.
\end{align*}
Multiplying by $e^{wx}$ produces
\begin{align*}
e^{w(2m\mathcal{E}+2\mathcal{E}+x)} & =\sum_{k=0}^{\infty}\frac{(m+1)^{k}}{2^{2k+m}}\sum_{n_{1},\dots,n_{M}=0}^{k}\binom{k}{n_{1},\dots,n_{M}}\frac{(-1)^{n_{1}+n_{3}+\cdots+n_{M'}}4^{n_{1}+\cdots+n_{M}}}{(m+1)^{n_{1}+\cdots+n_{M}}}\\
& \quad\times\left(\frac{\binom{m-2}{1}}{2^{3}}+\frac{\binom{m-2}{2}}{2^{4}}\right)^{n_{1}}\cdots\left(\frac{\binom{m-M-1}{M}}{2^{2M+1}}+\frac{\binom{m-M-1}{M+1}}{2^{2M+2}}\right)^{n_{M}}\\
& \quad\times e^{w(2\mathcal{E}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m+1)}+2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+x)}.
\end{align*}
Now, we identify the coefficients of $w^{n}$ on both sides:
\begin{itemize}
\item The left-hand side is simply
\[
(2m\mathcal{E}+2\mathcal{E}+x)^{n}=(2m+2)^{n}\left(\mathcal{E}+\frac{x}{2m+2}\right)^{n}=(2m+2)^{n}E_{n}\left(\frac{x}{2m+2}\right);
\]
\item while for the right-hand side, we only need to evaluate the term
\begin{align*}
& \quad(2\mathcal{E}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m+1)}+2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+x)^{n}\\
& =2^{n}E_{n}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m+1)}\left(k+n_{1}+2n_{2}+\cdots+Mn_{M}+\frac{x}{2}\right).
\end{align*}
\end{itemize}
Therefore, simplification, with $x\mapsto2x$, gives the desired identity.
\end{proof}
\begin{example}
The formulas derived from $4$- and $5$-loop cases are as follows.
\begin{align*}
E_{n}\left(\frac{x}{5}\right) & =\frac{1}{5^{n}}\sum_{k=0}^{\infty}\sum_{\ell=0}^{k}\frac{5^{k}(-1)^{\ell}}{2^{2k+2\ell+4}}\binom{k}{\ell}E_{n}^{(2\ell+2k+5)}\left(x+\ell+k\right),\\
E_{n}\left(\frac{x}{6}\right) & =\frac{1}{6^{n}}\sum_{k=0}^{\infty}\sum_{n_{1},n_{2}=0}^{k}\frac{(-1)^{n_{1}}3^{k+n_{1}-n_{2}}}{2^{k+3n_{1}+4n_{2}+5}}\binom{k}{n_{1},n_{2}}\\
& \quad\times E_{n}^{(2k+2n_{1}+4n_{2}+6)}\left(x+k+n_{1}+2n_{2}\right).
\end{align*}
\end{example}
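For $m=4$, after collecting the closed form, the generating-function identity behind the first example amounts to the multiple-angle formula $\cosh(5w)=16\cosh^{5}w-20\cosh^{3}w+5\cosh w$; a numerical confirmation at arbitrarily chosen sample points (illustrative only):

```python
# Numerical check of the 4-loop generating-function identity
#   sech(5w) = (sech^5 w / 16) / (1 - (5/4) sech^2 w + (5/16) sech^4 w),
# i.e. cosh(5w) = 16 cosh^5 w - 20 cosh^3 w + 5 cosh w.
# Sample points are arbitrary; this is an illustration, not a proof.
import math

for w in (0.2, 0.9, 1.7):
    c = math.cosh(w)
    lhs = 1 / math.cosh(5 * w)
    rhs = (1 / (16 * c**5)) / (1 - 5 / (4 * c**2) + 5 / (16 * c**4))
    assert math.isclose(lhs, rhs, rel_tol=1e-12)
```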
\subsection{$3$-dim Bessel process on $\mathbb{R}^{3}$}
\begin{figure}
\includegraphics[scale=0.3]{3.png}
\caption{\label{fig:3dim}$3$-dim Bessel Process}
\end{figure}
We can also find the generating functions for three consecutive sites
$a<b<c$ in \cite[pp.~463--464]{Formulas}, with variable $w$:
\begin{align*}
\phi_{a\rightarrow b} & =\frac{b\sinh(aw)}{a\sinh(bw)},\\
\phi_{b\rightarrow a|\cancel{c}} & =\frac{a\sinh\left((c-b)w\right)}{b\sinh\left((c-a)w\right)},\\
\phi_{b\rightarrow c|\cancel{a}} & =\frac{c\sinh\left((b-a)w\right)}{b\sinh\left((c-a)w\right)}.
\end{align*}
As stated in \cite[Rem.~4.1]{JiuVignat}, $\phi_{a\rightarrow0|\cancel{b}}=0$
for $0<a<b$. In this case, the first loop occurs between sites $1$
and $2$ (instead of $0$ and $1$). We can still obtain (\ref{eq:Main})
by shifting all the indices as $a_{m}\mapsto m+1$, but \emph{only} for the loops:
\begin{thm}
For the $3$-dimensional Bessel process on sites $0,1,\ldots,m+2$,
we have
\begin{equation}
\phi_{0\rightarrow(m+2)}=\phi_{0\rightarrow1}\prod_{j=1}^{m+1}\phi_{j\rightarrow(j+1)|\cancel{j-1}}\frac{1}{1-\sum\limits _{k=1}^{m}\sum\limits _{(k,l,m)}}.\label{eq:3dim}
\end{equation}
Let $M=\lceil m/2\rceil-1$, and let $M'$ be the largest odd number less
than or equal to $M$. Then
\begin{align*}
& \quad B_{n+1}\left(\frac{2+x}{m+2}\right)-B_{n+1}\left(\frac{x}{m+2}\right)\\
& =\frac{n+1}{(m+2)^{n}}\sum_{k=0}^{\infty}\frac{m^{k}}{2^{2k+m}}\sum_{n_{1},\dots,n_{M}=0}^{k}\binom{k}{n_{1},\dots,n_{M}}(-1)^{n_{1}+n_{3}+\cdots+n_{M'}}\\
& \quad\times\frac{\binom{m-1}{2}^{n_{1}}\binom{m-2}{3}^{n_{2}}\cdots\binom{m-M}{M}^{n_{M}}}{4^{n_{1}+2n_{2}+\cdots+Mn_{M}}m^{n_{1}+\cdots+n_{M}}}\\
& \quad\times E_{n}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m)}(k+n_{1}+2n_{2}+\cdots+Mn_{M}+x).
\end{align*}
\end{thm}
\begin{proof}
The steps will be similar to that of (\ref{eq:1dimIdentity}), so
we shall skip some direct but tedious calculation steps. By (\ref{eq:3dim})
and the well-known formula $\sinh(2w)=2\sinh w\cosh w$, we have
\begin{align*}
\frac{(m+2)w}{\sinh((m+2)w)} & =\frac{(m+2)w}{\sinh(2w)}\cdot\frac{\sech^{m}w}{2^{m}}\sum_{k=0}^{\infty}\left(\frac{\binom{m}{1}\sech^{2}w}{4}-\frac{\binom{m-1}{2}\sech^{4}w}{4^{2}}\right.\\
 & \quad\left.+\cdots+(-1)^{M}\frac{\binom{m-M}{M+1}\sech^{2M+2}w}{4^{M+1}}\right)^{k}.
\end{align*}
By (\ref{eq:Bp}) and (\ref{eq:Ep}), we deduce that
\begin{align*}
e^{(m+2)w(2\mathcal{B}+1)} & =e^{2w(2\mathcal{B}'+1)}\sum_{k=0}^{\infty}\frac{(m+2)m^{k}}{2^{2k+m+1}}\sum_{n_{1},\dots,n_{M}=0}^{k}\binom{k}{n_{1},\dots,n_{M}}\\
& \quad\times(-1)^{n_{1}+n_{3}+\cdots+n_{M'}}\frac{\binom{m-1}{2}^{n_{1}}\binom{m-2}{3}^{n_{2}}\cdots\binom{m-M}{M}^{n_{M}}}{4^{n_{1}+2n_{2}+\cdots+Mn_{M}}m^{n_{1}+\cdots+n_{M}}}\\
& \quad\times e^{w\left(2\mathcal{E}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m)}+2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m\right)}.
\end{align*}
In order to cancel the $e^{4w\mathcal{B}'}$ on the right-hand side,
we multiply by $e^{4w\mathcal{U}+wx}$ to get
\begin{align*}
e^{w((2m+4)\mathcal{B}+4\mathcal{U}+x)} & =\sum_{k=0}^{\infty}\frac{(m+2)m^{k}}{2^{2k+m+1}}\sum_{n_{1},\dots,n_{M}=0}^{k}\binom{k}{n_{1},\dots,n_{M}}\times\\
& \quad(-1)^{n_{1}+n_{3}+\cdots+n_{M'}}\frac{\binom{m-1}{2}^{n_{1}}\binom{m-2}{3}^{n_{2}}\cdots\binom{m-M}{M}^{n_{M}}}{4^{n_{1}+2n_{2}+\cdots+Mn_{M}}m^{n_{1}+\cdots+n_{M}}}\times\\
& \quad e^{w(2\mathcal{E}^{(2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+m)}+2k+2n_{1}+4n_{2}+\cdots+2Mn_{M}+x)}.
\end{align*}
When identifying the coefficient of $w^{n}$, the right-hand side
directly gives the higher-order Euler polynomials, while the left-hand
side is, by (\ref{eq:RVU}),
\begin{align*}
((2m+4)\mathcal{B}+4\mathcal{U}+x)^{n} & =\int_{0}^{1}((2m+4)\mathcal{B}+4u+x)^{n}du\\
& =\frac{((2m+4)\mathcal{B}+4u+x)^{n+1}}{4(n+1)}\bigg|_{u=0}^{u=1}\\
& =\frac{(2m+4)^{n+1}}{4(n+1)}\left(B_{n+1}\left(\frac{x+4}{2m+4}\right)-B_{n+1}\left(\frac{x}{2m+4}\right)\right).
\end{align*}
Simplification and substitution $x\mapsto2x$ complete the proof.
\end{proof}
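The umbral evaluation used in the last display, $((2m+4)\mathcal{B}+y)^{n+1}\mapsto(2m+4)^{n+1}B_{n+1}(y/(2m+4))$, can be spot-checked for small $n$ and $m$. A minimal sympy sketch (an illustration, not part of the proof; the umbral symbol $\mathcal{B}$ is implemented by the substitution $\mathcal{B}^{j}\mapsto B_{j}$, using the standard convention $B_{1}=-1/2$):

```python
import sympy as sp

y = sp.symbols('y')

def umbral_eval(c, n):
    """Evaluate (c*B + y)^n under the umbral rule B^j -> bernoulli(j)."""
    return sp.expand(sum(sp.binomial(n, j) * sp.bernoulli(j) * c**j * y**(n - j)
                         for j in range(n + 1)))

# Check (c*B + y)^n == c^n * B_n(y/c) for a few small cases, c = 2m + 4.
for m in (1, 2, 3):
    c = 2*m + 4
    for n in range(1, 6):
        lhs = umbral_eval(c, n)
        rhs = sp.expand(c**n * sp.bernoulli(n, y/c))
        assert sp.simplify(lhs - rhs) == 0
```

The loop merely confirms the classical identity $B_{n}(y/c)=c^{-n}\sum_{j}\binom{n}{j}B_{j}c^{j}y^{n-j}$ on which the telescoping step relies.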
\begin{example}
The identities from \emph{three} and \emph{four} loops are given by
\begin{align*}
& \quad B_{n+1}\left(\frac{x+2}{5}\right)-B_{n+1}\left(\frac{x}{5}\right)\\
& =\frac{n+1}{5^{n}}\sum_{k=0}^{\infty}\frac{3^{k}}{2^{2k+3}}\sum_{\ell=0}^{k}\binom{k}{\ell}(-1)^{\ell}\frac{1}{12^{\ell}}E_{n}^{(2k+2\ell+3)}(k+\ell+x),
\end{align*}
and
\begin{align*}
& \quad B_{n+1}\left(\frac{x+2}{6}\right)-B_{n+1}\left(\frac{x}{6}\right)\\
& =\frac{n+1}{6^{n}}\sum_{k=0}^{\infty}\frac{4^{k}}{2^{2k+4}}\sum_{\ell=0}^{k}\binom{k}{\ell}(-1)^{\ell}\frac{3^{\ell}}{4^{2\ell}}E_{n}^{(2k+2\ell+4)}(k+\ell+x).
\end{align*}
\end{example}
For the reader's convenience, we briefly verify the first identity of the example by a direct calculation with the generating functions of the Bernoulli and Euler polynomials; the second identity follows analogously.
\begin{proof}
It suffices to show
\begin{align*}
& \quad \sum_{n=0}^{\infty}\left(B_{n+1}\left(\frac{x+2}{5}\right)-B_{n+1}\left(\frac{x}{5}\right)\right)\frac{t^{n+1}}{(n+1)!}\\
& =t\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\frac{3^{k}}{2^{2k+3}}\sum_{\ell=0}^{k}\binom{k}{\ell}(-1)^{\ell}\frac{1}{12^{\ell}}E_{n}^{(2k+2\ell+3)}(k+\ell+x)\frac{\left(\frac{t}{5}\right)^n}{n!},
\end{align*}
where the left-hand side is
\begin{align*}
& \quad \sum_{n=0}^{\infty}\left(B_{n+1}\left(\frac{x+2}{5}\right)-B_{n+1}\left(\frac{x}{5}\right)\right)\frac{t^{n+1}}{(n+1)!}\\
& =\sum_{j=0}^{\infty}\left(B_{j}\left(\frac{x+2}{5}\right)-B_{j}\left(\frac{x}{5}\right)\right)\frac{t^{j}}{j!}-B_0\left(\frac{x+2}{5}\right)+B_0\left(\frac{x}{5}\right)\\
& =\left(\frac{t}{e^t-1}\right)e^{\frac{x+2}{5}\cdot t}-\left(\frac{t}{e^t-1}\right)e^{\frac{x}{5}\cdot t}\\
& =\frac{t(e^{\frac{2t}{5}}-1)}{e^t-1}e^{\frac{tx}{5}},
\end{align*}
while the right-hand side is
\begin{align*}
& \quad t\sum_{n=0}^{\infty}\sum_{k=0}^{\infty}\frac{3^{k}}{2^{2k+3}}\sum_{\ell=0}^{k}\binom{k}{\ell}(-1)^{\ell}\frac{1}{12^{\ell}}E_{n}^{(2k+2\ell+3)}(k+\ell+x)\frac{\left(\frac{t}{5}\right)^n}{n!}\\
& =t\sum_{k=0}^{\infty}\frac{3^{k}}{2^{2k+3}}\sum_{\ell=0}^{k}\binom{k}{\ell}(-1)^{\ell}\frac{1}{12^{\ell}}\sum_{n=0}^{\infty}E_{n}^{(2k+2\ell+3)}(k+\ell+x)\frac{\left(\frac{t}{5}\right)^n}{n!}\\
& =t\sum_{k=0}^{\infty}\frac{3^{k}}{2^{2k+3}}\sum_{\ell=0}^{k}\binom{k}{\ell}(-1)^{\ell}\frac{1}{12^{\ell}}\left(\frac{2}{e^{\frac{t}{5}}+1}\right)^{2k+2\ell+3}e^{(k+\ell+x)\frac{t}{5}}\\
& =te^{\frac{tx}{5}}\left(\frac{1}{e^{\frac{t}{5}}+1}\right)^3\sum_{k=0}^{\infty}3^ke^{\frac{kt}{5}}\left(\frac{1}{e^{\frac{t}{5}}+1}\right)^{2k}\sum_{\ell=0}^{k}\binom{k}{\ell}(-1)^{\ell}\frac{1}{3^{\ell}}\left(\frac{1}{e^{\frac{t}{5}}+1}\right)^{2\ell}e^{\frac{\ell t}{5}}\\
& =te^{\frac{tx}{5}}\left(\frac{1}{e^{\frac{t}{5}}+1}\right)^3\sum_{k=0}^{\infty}3^ke^{\frac{kt}{5}}\left(\frac{1}{e^{\frac{t}{5}}+1}\right)^{2k}\left(1-\frac{1}{3}\left(\frac{1}{e^{\frac{t}{5}}+1}\right)^2e^{\frac{t}{5}}\right)^{k}\\
& =te^{\frac{tx}{5}}\left(\frac{1}{e^{\frac{t}{5}}+1}\right)^3\frac{1}{1-\frac{3e^{\frac{t}{5}}\left(e^{\frac{t}{5}}+1\right)^2-e^{\frac{2t}{5}}}{\left(e^{\frac{t}{5}}+1\right)^4}}\\
& =te^{\frac{tx}{5}}\frac{e^{\frac{t}{5}}+1}{e^{\frac{4t}{5}}+e^{\frac{3t}{5}}+e^{\frac{2t}{5}}+e^{\frac{t}{5}}+1}\\
& =\frac{t(e^{\frac{2t}{5}}-1)}{e^t-1}e^{\frac{tx}{5}}\qedhere
\end{align*}
\end{proof}
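The closing algebraic steps of this verification, namely the geometric $k$-sum denominator and the reduction $(q+1)/(q^{4}+q^{3}+q^{2}+q+1)=(q^{2}-1)/(q^{5}-1)$ with $q=e^{t/5}$, can be confirmed symbolically. A minimal sympy sketch (an auxiliary check, not part of the proof):

```python
import sympy as sp

q = sp.symbols('q', positive=True)

# Denominator produced by summing the geometric series in k:
# (q+1)^4 - (3q(q+1)^2 - q^2) = q^4 + q^3 + q^2 + q + 1
denom = sp.expand((q + 1)**4 - (3*q*(q + 1)**2 - q**2))
assert denom == q**4 + q**3 + q**2 + q + 1

# Final step: (q+1)/(q^4+q^3+q^2+q+1) == (q^2-1)/(q^5-1), checked as a
# polynomial identity after cross-multiplication.
assert sp.expand((q + 1)*(q**5 - 1) - (q**2 - 1)*denom) == 0
```

Both assertions are exact polynomial identities, so no simplification heuristics are involved.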
\section{Introduction}
The analysis of peaks in the single particle spectral function, measured, for
instance, by photoemission experiments in solids or radio frequency (RF)
spectroscopy for ultracold atoms, provides important information about
correlation effects in interacting quantum many-body systems.
In the limit of weak interactions the spectral function displays peaks close
to the energies of the free fermion energy-momentum distribution and as such
directly represents single-particle properties.
At finite temperature and energies away from the Fermi surface, peaks are
broadened and the width is indicative of interaction effects, which open up
decay channels. If spontaneous symmetry breaking occurs below a certain
temperature, such as in a superconductor below $T_c$, the single-particle
excitations become gapped out and shifted by an amount $\Delta$, the
superconducting gap.
A more peculiar behavior is that of excitations being gapped out
(or suppressed) even though no obvious symmetry breaking and thermodynamic ordering
transition occurs. This is often referred to as pseudogap (PG) physics. The
spectral gap can look very similar to a gap due to symmetry breaking at
finite temperature. Therefore, it can be difficult to clarify the origin of
PG physics and to distinguish whether it is due to some hidden order or
a different effect. A very prominent example of such physics is provided by
the experimental observations in the hole doped copper-oxide high temperature
superconductors,\cite{TS99c,Kor15} where a relatively
large part of the phase diagram is occupied by such a PG
behavior. This phenomenon has attracted an enormous amount of attention,
however, there is currently no consensus about the physical origin of this
PG for the cuprates, and different scenarios have been invoked as an
explanation. These include hidden order,\cite{CLMN01} spin fluctuations,\cite{Sca12}
phase fluctuations and preformed pairs,\cite{EK95,MCCN14}
and the interplay with charge fluctuations.\cite{EMP13}
Here we focus on a conceptually simpler situation where PG physics has
also been reported and that is for systems of fermions with locally attractive
interactions. In situations without nesting the dominant instability at low
temperature is superconductivity and, correspondingly, pairing processes are
expected to be most relevant. In particular, the crossover from weak coupling
\citet*{BCS57} (BCS) theory
of superconductivity to strong coupling Bose-Einstein condensation (BEC) of pairs
has been studied extensively and is a classical problem in condensed
matter physics.\cite{Eag69,Leg80,NS85,Ran95} In the last decade it has
attracted renewed interest due to experimental realizations with ultracold fermions.
Superfluidity has been reported in such
systems,\cite{GRJ03,ZSSRKK04,ZSSSK05,BDZ08} also in the case where the
fermions are confined to an optical lattice.\cite{CMLSSSXK06} Moreover, based
on RF spectroscopy PG signatures have been reported for two \cite{FFVKK11} and
three-dimensional systems without optical lattices.\cite{GSDJPPS10,Ran10}
Three non-exclusive concepts are usually invoked to discuss the origin of
PG physics for attractive fermions: (i) {\em Preformed pairs}; for
intermediate coupling strength, pair formation without condensation is expected
to occur at a certain temperature $T_{\rm p}$ which is larger than the
superfluid (SF) phase transition temperature $T_c$. These preformed pairs can lead to PG
formation as a certain binding energy is required to break the pair and resolve a single
fermion excitation.\cite{Ran95,Ran10,CLS06} This idea leads to a popular scenario for PG
physics and is illustrated in a schematic phase diagram in
Fig.~\ref{schemphase_diagram}. (ii) {\em Pairing fluctuations} above $T_c$ and
their
effect on single particle properties via a many-body self-energy can lead to
PG physics.\cite{CLS06}
(iii) {\em Phase fluctuations}; in a situation where fermions
are paired one can imagine that a finite magnitude of the order parameter
establishes locally, however, no macroscopic coherent SF phase
develops due to strong phase fluctuations.\cite{EK95} In this situation the presence of
the ordering tendency related to a gap can then lead to PG signatures in the
spectral functions.\cite{ESAH02}
This behavior, which coincides with a small SF
stiffness, is expected to be particularly pronounced in two-dimensional systems.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig1.pdf}
\end{center}
\vspace{-0.5cm}
\caption{(Color online) Schematic phase diagram for the preformed pair scenario in the $T-U$
plane with critical temperature $T_c$, pairing temperature $T_{\rm p}$,
Fermi liquid (FL) regime, and PG physics below $T_{\rm p}$ (after
Randeria \cite{Ran10}).
\label{schemphase_diagram}}
\end{figure}
Within one and the same calculation it is very difficult to obtain
non-perturbative results {\em and} to include all relevant fluctuation effects.
The purpose of this work is to contribute to a better understanding of the
importance of particular effects in the lattice situation based on
non-perturbative calculations.
It is important to distinguish different setups when comparing the
occurrence of PG physics for attractive fermions. First of all, dimensionality
plays an important role in determining the strength of fluctuations, and in
particular, the two-dimensional situation has more pronounced fluctuation effects.
Moreover, results can differ in calculations for a model defined in the continuum
and one on a lattice, such as the Hubbard model.
A well-known example is the $T_c$ curve, which drops with the coupling strength on the
lattice as $1/U$, whereas it approaches a constant in the continuum.
Here we will analyze the attractive Hubbard model in three spatial
dimensions. We will use the dynamical mean field theory (DMFT) approximation
\cite{GKKR96} to compute the self-energies and spectral function in the normal
and SF phase. This approximation is non-perturbative in the
interaction strength and therefore can describe very well the occurrence of
preformed pairs. However, it does not include the effect of phase fluctuations
(iii) and also does not include the effect of small momentum pairing
fluctuations. The PG physics observed in our work can therefore not be related
to such effects. Phase fluctuations above $T_c$ are usually argued to be of
minor importance for spectral properties in three dimensions.
\begin{figure}
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig2.pdf}
\end{center}
\caption{(Color online) Phase diagram at half filling. We distinguish four
different regimes: the superfluid phase (SF); a
non-Fermi-liquid regime (NFL), which is separated into a region with
PG in $\rho(\omega)$ and a region without PG (no PG); and a Fermi liquid regime (FL) below the
temperature $T_{\rm FL}$.
\label{phase_diagram}}
\end{figure}
There is a substantial literature of previous work on BCS-BEC crossover and PG
physics for attractive fermions, which however does not provide a clear and
complete picture about PG physics.
A popular approach is the diagrammatic T-matrix approximation,\cite{CSTL05,CLS06,CGHL10}
which captures well the effect of pairing fluctuations (ii). It was applied to the 2d
Hubbard model \cite{KMS99,RM01} and PG features have been found in the {\em
non}-selfconsistent version,\cite{CLS06} also in the continuum in two
\cite{MPPPS15} and three dimensions.\cite{PPSC02,TWO09,WTO10} Selfconsistent
T-matrix calculations for the
3d continuum model have found no PG in the spectrum.\cite{Hau92,HPZ09}
However, in the two-dimensional case PG behavior was recently found.\cite{BME14}
There are also non-perturbative calculations, such as DMFT and quantum Monte Carlo (QMC)
which found PG features in the continuum model\cite{MWBD09,HLDD10,WMDBR13} and for the Hubbard
model at different filling factors.\cite{RTMS92,TR95,VT97,MALKPVT00,KW11a,KW11b,RT14} The latter
results were found to be in good agreement with a diagrammatic technique.\cite{KAT01} It is worth
noting that QMC techniques usually need to perform
analytic continuation of imaginary axis data which can lead to uncertainties
in results for spectral functions.
DMFT studies, including cellular versions, for the
attractive Hubbard model have been carried out in the normal
phase,\cite{KMS01,CCG02,KGT06,KKS14} and in the broken symmetry
phase.\cite{GKR05,TBCC05,TCC05,BH09,BHD09,KW11a,KW11b}
Our major results are the following:
\begin{itemize}
\item
For large enough coupling strength we find PG physics at temperatures
$T>T_c$. At half filling the PG remains for {\em all} temperatures above
$T_c$ and therefore a pairing temperature $T_{\rm p}$
(Fig.~\ref{schemphase_diagram}) is not decisive to invoke the PG
in the spectral function (see Fig.~\ref{phase_diagram}). For different fillings the spectral function is
shifted due to the flattening of the Fermi function, such that the main
suppression of spectral weight does not occur at $\omega=0$.
\item
The occurrence of PG physics at high temperatures can be understood via split
local excitations on lattice sites visible for strong enough interactions.
\item
PG physics in the spectral function is related to non-Fermi-liquid (NFL) properties
of the self-energy (see Fig.~\ref{fig:sigma_schematic}, detailed definition below).
\item
We demonstrate in detail how the PG transforms smoothly into the
superconducting gap, when the temperature is lowered through $T_c$ (see
Fig.~\ref{TC}).
\end{itemize}
The paper is organized as follows: In Sec.~II we briefly describe our model
and method. Sec.~III discusses conceptual background about the occurrence of
PG physics in relation to the self-energy. In Sec.~IV and V we show results
for spectra and self-energies at and away from half filling before concluding
in Sec.~VI. In the appendix we compare the DMFT-NRG calculations to iterated
perturbation theory and to T-matrix calculations.
\section{Model definition and DMFT calculations}
Our study is based on the three-dimensional attractive Hubbard
model,\cite{MRR90,Ran95} which in the grand canonical formalism reads
\begin{equation}
H=\sum_{i,j,\sigma}(t_{ij}\elcre {i}{\sigma}\elann
{j}{\sigma}+\mathrm{h.c.})-\mu\sum_{i\sigma}n_{i\sigma}-U\sum_in_{i,\uparrow}n_{i,\downarrow},
\label{attHub}
\end{equation}
with the chemical potential $\mu$, the interaction strength $U>0$ and the
hopping parameters $t_{ij}$. $\elcre {i}{\sigma}$ creates a fermion at site
$i$ with spin $\sigma$, and $n_{i,\sigma}=\elcre {i}{\sigma}\elann
{i}{\sigma}$.
We take only nearest-neighbor hopping ($-t$), so that the non-interacting
dispersion relation of the three-dimensional cubic lattice reads, with
${\bm k}=(k_x,k_y,k_z)$,
\begin{equation}
\epsilon_{{\bm k}}=-2t[\cos(k_x)+\cos(k_y)+\cos(k_z)].
\label{eq:dispersion}
\end{equation}
The dispersion satisfies $\epsilon_{-{\bm k}}= \epsilon_{{\bm k}}$. The corresponding
density of states (DOS) is denoted by $\rho_0(\epsilon)$. In the calculations we
will use the hopping $t$ and the bandwidth $W=12t$ as energy scales.
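As a cross-check of the band-structure input, the non-interacting DOS $\rho_0(\epsilon)$ can be tabulated from Eq.~(\ref{eq:dispersion}) by a simple Brillouin-zone histogram. A minimal numpy sketch (the grid and bin sizes are arbitrary illustrative choices, not the parameters of our calculations):

```python
import numpy as np

def cubic_dos(t=1.0, nk=60, nbins=121):
    """Histogram estimate of the 3d simple-cubic DOS for
    eps_k = -2t[cos kx + cos ky + cos kz], bandwidth W = 12t."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    eps = -2*t*(np.cos(kx) + np.cos(ky) + np.cos(kz))
    hist, edges = np.histogram(eps, bins=nbins, range=(-6*t, 6*t), density=True)
    centers = 0.5*(edges[:-1] + edges[1:])
    return centers, hist

eps, rho = cubic_dos()
# rho_0 is normalized and symmetric about eps = 0 (particle-hole symmetry)
assert abs(rho.sum()*(eps[1] - eps[0]) - 1.0) < 1e-6
assert abs(rho[10] - rho[-11]) < 0.05
```

The particle-hole symmetry of $\rho_0$ at half filling is what fixes $\mu=-U/2$ in the atomic-limit discussion of Sec.~III.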
The main method used to study the Hamiltonian (\ref{attHub}) is the dynamical
mean field theory (DMFT).\cite{GKKR96}
Within DMFT, we have to self-consistently solve a quantum impurity model
describing a single lattice site in the environment of all other lattice
sites. In order to calculate the self energy for this quantum impurity model,
we mainly use the numerical renormalization group (NRG),\cite{BCP08} which is able to
calculate accurately expectation values, Green's functions, and self energies at
zero and finite temperatures\cite{PPA06,WD07} also in the superconducting
case.\cite{BOH07,HWDB08,BH09,BHD09}
Dynamical correlation functions are calculated within the NRG by broadening of
a large number of discrete excitations in the Lehman representation, and as
such do not require analytic continuation.
For our calculations, we choose a log-normal broadening function \cite{BCV01,BCP08} with
an unusually narrow, temperature-independent width, $b=0.3$. One of the main reasons
for this is to avoid a large transfer of spectral weight to high energies
which can be particularly important at higher temperatures.
Using this narrow broadening leads to artificial oscillations in the spectra,
which originate from the discretization of the bath in the
NRG-calculation. In order to produce physical spectra, we finally smooth these
oscillations by averaging over $\Delta\omega=0.01 W$. This averaging is
justified for the present purpose, because we do not expect very fine and sharp
structures in our spectra on these energy scales to determine the physics of
the PG. Furthermore, we carefully compared our NRG-calculated spectra with
iterated perturbation theory (see appendix). The latter technique does not
require broadening of discrete excitations and therefore provides a useful
test in a suitable parameter regime.
\section{Features of the spectral functions and self-energy}
Before presenting the results of our calculations it is useful to discuss some
basic features of the Green's functions and self-energy, which will help
us to better understand under which conditions PG physics occurs. In the literature
PG physics is considered quite generally either for the
integrated spectral function $\rho(\omega)=\frac{1}{N}\sum_{{\bm k}}\rho_{{\bm k}}(\omega)$,
which is equivalent to the local spectrum $\rho_{ii}(\omega)$, or for ${\bm k}$-resolved spectra
$\rho_{{\bm k}}(\omega)$ close to the Fermi surface. We will consider both
quantities in this paper. We note that a PG in one of
these does not necessarily imply one in the other quantity.
Let us first note that in the limit of high temperature $T\gg W,U$
correlation lengths become small and the physics is dominated by local
processes.\cite{Geo11}
This is seen, for instance, when we consider the bare single-particle propagator in
imaginary time
\begin{equation}
G_{ij}^0(\tau)=-\frac{1}{N}\sum_{{\bm k}}\mathrm e^{i{\bm k} \vct r_{ij}}\mathrm e^{-\xi_{{\bm k}}\tau}
\mathrm e^{\beta\xi_{{\bm k}}} n_{\rm F}(\xi_{{\bm k}}),
\label{eq:G0tau}
\end{equation}
where $\xi_{{\bm k}}=\epsilon_{{\bm k}}-\mu$ and $\tau\in[0,\beta)$. In the limit of
high temperature, $\beta=1/T\to 0$ and $n_{\rm F}(\xi_{{\bm k}})\to 1/2$. Then
$G_{ij}^0(\tau)$ in Eq.~(\ref{eq:G0tau}) becomes essentially local, $\sim
\delta_{ij}$, and spatial components with $i\neq j$ vanish exponentially with
length scale $\lambda\sim \frac{at}{T}$.\cite{fn1}
Quantum mechanical
hopping is largely incoherent in this situation.
This also means that DMFT based on a local self-consistent approximation
can become very accurate in this high temperature limit.
For instance, high temperature expansions for the three dimensional Hubbard
model agree well with DMFT calculations for thermodynamical quantities, and
this remains the case down to temperatures of the order $T\simeq
W/8$.\cite{Geo11,LBKGS11}
One should, however, note that the self-energy of the three dimensional
Hubbard model does not become completely ${\bm k}$-independent even in the limit
$T\to\infty$.\cite{KPRS14}
What are the implications from this for the spectral function and the self-energy?
The excitations in the limit where local physics dominates
are determined by the local part of the Hamiltonian, $H_{\rm
loc}=-\mu\sum_{i\sigma}n_{i\sigma}-U\sum_in_{i,\uparrow}n_{i,\downarrow}$. At
half filling the chemical potential is fixed to $\mu=-U/2$, and depending on
the occupation $n=0,1,2$ we have the energies $E_{\alpha}=0,U/2,0$,
respectively. Excitations in the spectral function have finite matrix
elements for states where the particle number differs by one. Hence, in the
spectral function excitations at energies $\Delta E=\pm U/2$ can be
expected. The corresponding
self-energy for the atomic problem reads,
$\Sigma_{ii}(\omega)=\frac{U^2}{4(\omega+i\Gamma)}$, where $\Gamma\to 0$. This
implies $\delta$-function peaks at $\pm U/2$ in the spectral function. In
Sec.~IV we will see that DMFT results at high temperature and large $U$ are
indeed of a similar form,
$\mathrm{Im}\Sigma_{ii}(\omega)=-\frac{U^2\Gamma}{4(\omega^2+\Gamma^2)}$.
Away from half filling the situation is more complicated, but similar
features remain visible.
If the peak in the self-energy is strong enough,
we find in the spectral function increased weight at $\omega=\pm U/2$ and a suppression of
spectral weight at the Fermi energy. These are the signatures of the PG in the
integrated spectral function.
For strong interactions this effect remains observable down to
intermediate temperatures. In other words, the PG in $\rho(\omega)$ is
related to the existence of Hubbard bands which are visible in the spectral
function at all temperatures.
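This mechanism can be illustrated in a few lines: inserting the atomic-limit form $\Sigma(\omega)=U^{2}/(4(\omega+i\Gamma))$ into the local propagator produces peaks near $\pm U/2$ and a suppression of spectral weight at $\omega=0$. A minimal numerical sketch (here $\Gamma$ is only a numerical broadening, and the parameter values are illustrative):

```python
import numpy as np

U, Gamma = 4.0, 0.05
w = np.linspace(-4.0, 4.0, 4001)

# Atomic-limit self-energy and local propagator at half filling
sigma = U**2 / (4*(w + 1j*Gamma))
g = 1.0 / (w + 1j*Gamma - sigma)
rho = -g.imag / np.pi

# Spectral weight is suppressed at w = 0 and piled up near +/- U/2
i0 = np.argmin(np.abs(w))
ipk = np.argmax(rho[w > 0]) + np.sum(w <= 0)
assert rho[i0] < 0.1 * rho[ipk]
assert abs(w[ipk] - U/2) < 0.1
```

The two peaks are the atomic-limit precursors of the Hubbard bands discussed above; lattice effects in the full DMFT solution broaden them but leave the $\pm U/2$ structure intact at high temperature.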
We now discuss the appearance of a gap and PG in the momentum resolved
spectral function $\rho_{{\bm k}}(\omega)$.
In the normal phase the Matsubara Green's function reads
\begin{equation}
G_{{\bm k}}(i\omega_n)=\frac{1}{i\omega_n-\xi_{{\bm k}}-\Sigma(i\omega_n)},
\end{equation}
where we have assumed a momentum
independent self-energy as appropriate for DMFT calculations.
The spectral function is obtained from analytic continuation, $i\omega_n\to
\omega+i\eta$, $\eta\to 0$, to yield
\begin{equation}
\rho_{{\bm k}}(\omega)=-\frac{1}{\pi}\frac{\Sigma^I(\omega)}{[\omega-\xi_{{\bm k}}-\Sigma^R(\omega)]^2+\Sigma^I(\omega)^2}.
\end{equation}
We have separated real (R) and imaginary (I) parts of the self-energy.
In the SF state we can include an explicit symmetry-breaking term
$\Delta_{\rm sc}^{0}$ (taking $\Delta_{\rm sc}^{0}\to 0$ describes spontaneous
symmetry breaking), and
the non-interacting Green's function matrix $\underline G_{{\bm k}}^0(i\omega_n)$ has
the form,
\begin{equation}
\underline G_{{\bm k}}^0(i\omega_n)^{-1}=
\left(\begin{array}{cc}
i\omega_n-\xi_{{\bm k}} & \Delta^0_{\rm sc} \\
\Delta^0_{\rm sc} & i\omega_n + \xi_{{\bm k}}
\end{array}\right).
\label{frGfctsc}
\end{equation}
For the interacting system we introduce the matrix self-energy $\underline\Sigma_{{\bm k}}(i\omega_n)$ such
that the inverse of the full Green's function matrix $\underline
G_{{\bm k}}(i\omega_n)$ is given by the Dyson equation
\begin{equation}
\underline G_{{\bm k}}(i\omega_n)^{-1}=
\underline G_{{\bm k}}^0(i\omega_n)^{-1}-\underline\Sigma_{{\bm k}}(i\omega_n).
\label{scdyson}
\end{equation}
The diagonal component of the ${\bm k}$-dependent Green's function reads
\begin{equation}
G_{{\bm k}}(i\omega_n)=\frac{\zeta_{2,{\bm k}}(i\omega_n)}{\zeta_{1,{\bm k}}(i\omega_n)\zeta_{2,{\bm k}}(i\omega_n)-\Sigma_{12}(i\omega_n)\Sigma_{21}(i\omega_n)} ,
\end{equation}
with $\zeta_{1,{\bm k}}(i\omega_n)=i\omega_n-\xi_{{\bm k}}-\Sigma_{11}(i\omega_n)$ and
$\zeta_{2,{\bm k}}(i\omega_n)=i\omega_n+\xi_{{\bm k}}-\Sigma_{22}(i\omega_n)$.
The off-diagonal self-energy $\Sigma_{12}(\omega)$, in particular its real
part, plays the role of a dynamic gap function,
$\mathrm{Re}\Sigma_{12}(\omega)\sim
\Delta$. Therefore, low energy spectral excitations which correspond to
$\omega=z(\xi_{{\bm k}}-\Sigma^{R}(0))$ in the normal phase are
shifted by the gap $\Delta$ to $\pm
E_{{\bm k}}\sim\pm z\sqrt{(\xi_{{\bm k}}-\Sigma^{R}(0))^2+\Delta^2}$, where
$z^{-1}=1-\partial_{\omega}\overline{\Sigma}^R_{11}(0)$ is the renormalization
factor. Usually we associate the gap with a
binding energy of pairs and hence we can interpret this energy shift as an energy
required to break a pair and see a single-particle excitation.
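For a constant gap and vanishing normal self-energy, Eq.~(\ref{scdyson}) already reproduces this shift: the diagonal spectral function is gapped at the Fermi level, with peaks at $\pm E_{\bm k}$. A BCS-like sketch (a constant $\Sigma_{12}=\Delta$ and a small numerical broadening $\Gamma$ are simplifying assumptions made here for illustration):

```python
import numpy as np

Delta, Gamma = 0.5, 0.01
w = np.linspace(-3.0, 3.0, 6001) + 1j*Gamma

def rho_diag(xi):
    """Diagonal spectral function from the 2x2 Nambu Green's function
    with constant off-diagonal self-energy Delta."""
    z1 = w - xi          # zeta_1 for Sigma_11 = 0
    z2 = w + xi          # zeta_2 for Sigma_22 = 0
    g11 = z2 / (z1*z2 - Delta**2)
    return -g11.imag / np.pi

xi = 0.8
rho = rho_diag(xi)
Ek = np.sqrt(xi**2 + Delta**2)
ipk = np.argmax(rho)
assert abs(w.real[ipk] - Ek) < 0.01    # main peak shifted to +E_k
i0 = np.argmin(np.abs(w.real))
assert rho[i0] < 1e-2 * rho[ipk]       # gapped at the Fermi level
```

The asymmetry of the two peak weights reflects the usual coherence factors $u_{\bm k}^2$, $v_{\bm k}^2$; for $\xi_{\bm k}>0$ most weight sits at $+E_{\bm k}$.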
We now discuss the occurrence of a PG for momenta close to the Fermi
surface in the situation where no off-diagonal self-energy is present. Thus
consider ${\bm k}=\vk_{{\scriptscriptstyle \mathrm{F}}}$ (interacting Fermi surface) such that
$\xi_{\vk_{{\scriptscriptstyle \mathrm{F}}}}-\Sigma^R(0)=0$.\cite{fn2}
Then we can write
\begin{equation}
\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)=-\frac{1}{\pi}\frac{\Sigma^I(\omega)}{[\omega-\overline{\Sigma}^R(\omega)]^2+\Sigma^I(\omega)^2},
\end{equation}
where $\overline{\Sigma}^R(\omega)=\Sigma^R(\omega)-\Sigma^R(0)$.
Provided that $\Sigma^I(\omega)$ does not vary rapidly, we expect $\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)$ to be peaked when
the implicit equation $\omega=\overline{\Sigma}^R(\omega)$ is
satisfied. According to our definitions there is always a solution to this
equation for $\omega=0$. In a weakly interacting system at low
temperature $|\Sigma^I(\omega)|$ usually has a local minimum at $\omega=0$,
\begin{equation}
\label{eq:FL}
\mathrm{Im} \Sigma(\omega)=-a(T)-b\omega^2,
\end{equation}
where $a(T)\to 0$ for $T\to 0$ and $a,b>0$.
By the Kramers-Kronig relation $\partial_{\omega}\overline{\Sigma}^R(0)<0$ [see
Fig.~\ref{fig:sigma_schematic} (left)]. Then the only solution
of $\omega=\overline{\Sigma}^R(\omega)$ is the one at $\omega=0$. This is the
Fermi liquid peak in the spectral function at $\omega=0$ with width $\sim z
|\Sigma^I(0)|$ and weight $z$, where $z^{-1}=1-\partial_{\omega}\overline{\Sigma}^R(0)$.
We define the low-energy behavior in Eq.~(\ref{eq:FL}) as the Fermi-liquid (FL) regime.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\columnwidth]{fig3a.pdf}
\hspace{0.1cm}
\includegraphics[width=0.48\columnwidth]{fig3b.pdf}
\vspace{-1cm}
\end{center}
\caption{(Color online) Schematic plot for real and imaginary part of the
self-energy $\Sigma(\omega)$ in the FL (left) and NFL (right) regime. We
also show the corresponding spectral function $\rho(\vk_{{\scriptscriptstyle \mathrm{F}}},\omega)$ which
shows PG behavior in the NFL regime. The dashed diagonal line, $\omega$,
helps to identify the solutions of the equation
$\omega=\mathrm{Re}\Sigma(\omega)$, and those positions roughly coincide with
the PG peaks.}
\label{fig:sigma_schematic}
\end{figure}
A PG is obtained for a different behavior of the self-energy.\cite{KS90a,KS90b}
If $|\Sigma^I(\omega)|$ possesses a local maximum at $\omega=0$,
\begin{equation}
\label{eq:NFL}
\mathrm{Im} \Sigma(\omega)=-a(T)+b\omega^2,
\end{equation}
then $\partial_{\omega}\overline{\Sigma}^R(0)>0$. If the slope is large enough we
will then encounter additional solutions of
$\omega=\overline{\Sigma}^R(\omega)$ as can be easily seen
graphically [see Fig.~\ref{fig:sigma_schematic} (right)]. Whether this is the
case depends on the interaction strength, filling fraction and
temperature. Since $|\Sigma^I(\omega)|$ is decreasing, we obtain a local
minimum at $\omega=0$ in the spectral function and broadened peaks at finite
energies.
This means that the original peak at $\omega=0$ is split and hence
we obtain a PG. Notice that a local maximum of $|\Sigma^I(\omega)|$
does not necessarily lead to a PG, if the self-energy is not large enough.
In the following we call the low-energy behavior of Eq.~(\ref{eq:NFL})
non-Fermi-liquid (NFL) behavior.
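The peak-versus-dip argument can be reproduced with simple causal model self-energies: a constant scattering rate (FL-like) gives a quasiparticle peak at $\omega=0$, while $\Sigma(\omega)=A/(\omega+i\Gamma)$, whose $|\Sigma^{I}|$ is maximal at $\omega=0$ (NFL-like), produces a PG. A sketch with illustrative parameters (these model forms are Kramers-Kronig consistent by construction):

```python
import numpy as np

w = np.linspace(-3.0, 3.0, 6001)

def rho_kf(sigma):
    """Spectral function at the Fermi momentum, xi_kF - Re Sigma(0) = 0."""
    sig0 = sigma[np.argmin(np.abs(w))].real
    return -sigma.imag / np.pi / ((w - (sigma.real - sig0))**2 + sigma.imag**2)

# FL-like: weak, flat scattering rate -> quasiparticle peak at w = 0
rho_fl = rho_kf(-0.1j * np.ones_like(w))
# NFL-like: |Im Sigma| peaked at w = 0 (causal model A/(w + i*Gamma)) -> PG
A, Gamma = 1.0, 0.2
rho_nfl = rho_kf(A / (w + 1j*Gamma))

i0 = np.argmin(np.abs(w))
assert rho_fl[i0] == rho_fl.max()        # FL: maximum at the Fermi level
assert rho_nfl[i0] < rho_nfl.max() / 5   # NFL: spectral weight dip at w = 0
```

In the NFL case the broadened side peaks sit near the finite-frequency solutions of $\omega=\overline{\Sigma}^R(\omega)$, exactly as in the graphical construction of Fig.~\ref{fig:sigma_schematic}.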
As we have discussed above $|\Sigma^I(\omega)|$ is typically maximal at
$\omega=0$ at high temperature when the physics becomes dominated by local
interactions. It is also directly visible in the phase space factor appearing
in the second order perturbation theory in $U$ (see appendix).
Therefore, at high temperature we expect NFL behavior, and at low
temperature we usually have FL behavior.
We define the crossover scale as $T_{\rm FL}$, i.e., where the behavior of
$\Sigma^I(\omega)$ changes from Eq.~(\ref{eq:NFL}) to (\ref{eq:FL}).
In this picture, PG behavior in $\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)$ therefore occurs as long
as (i) $U$ is large enough ($\sim W$) and (ii) $T> T_{\rm FL}(U)$. In
particular, the PG is always present above $T_c$ if $T_c>T_{\rm FL}(U)$.
\section{PG physics at half filling}
In this section we analyze results from the DMFT calculations for spectral
functions and self-energies and
focus on the situation at half filling.
An overview of the different regimes as function of $U$ and $T$ is shown
in Fig.~\ref{phase_diagram}.
The phase diagram includes the SF phase and the regimes where the self-energy shows FL and
NFL behavior as defined in Eq.~(\ref{eq:FL}) and Eq.~(\ref{eq:NFL}),
respectively. By performing calculations suppressing the SF phase below $T_c$,
we find that the boundary between FL and NFL regimes (not shown) is connected
to the bipolaron transition at $T=0$, which is equivalent to the Mott
transition for repulsive interactions.
The NFL regime in the phase diagram is separated into a region for stronger
interactions where we observe a PG in the integrated spectral function, and a region without PG (no PG) for weaker
couplings.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig4a.pdf}
\includegraphics[width=0.95\columnwidth]{fig4b.pdf}
\end{center}
\caption{(Color online) Integrated DOS, $\rho_{11}(\omega)$, and imaginary
part of the self-energy, $\Sigma_{11}(\omega)$, for different interaction
strengths and temperatures.\label{compare}}
\end{figure}
In the upper panels of Fig.~\ref{compare} we show the interacting local
DOS $\rho(\omega)$. At weak coupling and intermediate temperatures, $\rho(\omega)$ closely
resembles the non-interacting DOS, $\rho_0(\omega)$, and the small
self-energy does not have a pronounced effect.
Although $|\mathrm{Im}\Sigma|$ is peaked at the Fermi energy for $U=0.4W$ at high
temperatures, there is no PG structure in the DOS.
In contrast, for larger interactions, $U/W=0.6$, $U/W=1$, we find at high
temperatures a PG structure of two peaks at $\pm U/2$ and
a suppression of the density of states at $\omega=0$. The behavior
is more pronounced for larger interactions. In both cases the magnitude of the
PG is clearly related to $U$.
This structure is induced by the NFL peak in $|\mathrm{Im}\Sigma|$ (lower
panels). As discussed in Sec.~III, this result
can be understood in terms of the local excitations dominating the physics at
high temperature.
At weak and intermediate interaction strengths the system crosses
over to a FL regime before $T_c$ is reached when decreasing the
temperature. For $U/W=0.4$ and $U/W=0.6$, $|\mathrm{Im}\Sigma(\omega)|$ exhibits a dip at
the Fermi energy at low enough temperature,
$T/W<0.1$, which is accompanied by a peak structure in the DOS. Such a
change in the behavior of self-energy and DOS cannot be observed for strong
coupling, where the PG structure exists for all temperatures above $T_c$. At
very low temperatures, the system is in the SF
phase in all cases, which is characterized by a gap in the DOS, which
coincides with a dip in $\mathrm{Im}\Sigma(\omega)$.
So even though the two cases, $U/W=0.6$, $U/W=1$, in Fig.~\ref{compare} look similar at high
temperature (PG) and very low temperature (SF gap), they display
a striking difference for intermediate temperatures. For the larger
coupling strength the SF transition occurs from a PG state (see also
Fig.~\ref{TC}); in contrast for $U/W=0.6$, the SF instability happens in the
FL regime.
Further insights can be obtained by studying the behavior of the double
occupancy or local pair density, $\langle n_\uparrow n_\downarrow\rangle$,
which is shown in Fig.~\ref{double} for different temperatures and interaction
strengths.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig5.pdf}
\end{center}
\caption{(Color online) The local pair density $\langle n_\uparrow n_\downarrow\rangle$
for different temperatures and interaction strengths. The black arrow marks
the transition from non-Fermi-liquid to Fermi-liquid behavior. The red
arrow marks the transition into the SF phase. \label{double}}
\end{figure}
Independent of the interaction strength, in the high temperature limit, $T\gg
W,U$, the pair densities approach the non-interacting values $n_{\sigma}^2$, where
$n_{\sigma}$ is the density for one spin component, at half filling $\langle n_\uparrow
n_\downarrow\rangle=0.25$.
In the atomic limit, $t=0$, the double occupancy can be easily calculated.
At half filling, weighting the atomic states with their Boltzmann factors,
the double occupancy reads
\begin{equation}
\langle n_\uparrow n_\downarrow\rangle=\frac{1}{2+2\exp(-U/(2T))}.
\end{equation}
At high temperature, $T/W>0.5$, this formula agrees very well with the results in
Fig.~\ref{double}, demonstrating again that the physics at high temperature
is dominated by local processes.
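The atomic-limit formula follows from the Boltzmann weights of the four local states (energies $0$, $U/2$, $U/2$, $0$ at half filling, $\mu=-U/2$); a quick numerical check (parameter values are illustrative):

```python
import numpy as np

def docc_atomic(U, T):
    """Double occupancy of a single site at half filling (mu = -U/2):
    states |0>, |up>, |dn>, |updn> with energies 0, U/2, U/2, 0."""
    weights = np.exp(-np.array([0.0, U/2, U/2, 0.0]) / T)
    return weights[3] / weights.sum()

U, T = 6.0, 2.0
assert abs(docc_atomic(U, T) - 1/(2 + 2*np.exp(-U/(2*T)))) < 1e-12
# High-temperature limit: the uncorrelated value 1/4
assert abs(docc_atomic(U, 1e4) - 0.25) < 1e-3
```

At low $T$ the atomic value saturates at $1/2$, i.e., the site is either empty or doubly occupied, which is the local precursor of pair formation.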
Decreasing the temperature, $\langle n_\uparrow n_\downarrow\rangle$
increases due to the attractive interaction. This
effect is more pronounced for stronger interactions. For interaction strengths $U/W<0.8$, we
find a maximum {\em before} the system enters the
SF phase at $T_c$. This maximum appears to be correlated with the crossover
temperature $T_{\rm FL}$ (black arrows) between FL and NFL behavior in the
self-energy. The disappearance of the maximum in the pair density for
interaction strengths $U/W>0.8$ agrees with the vanishing of the FL regime
in the phase diagram. For $U/W<0.8$, the pair density
decreases when lowering the temperature below $T_{\rm FL}$, but then increases again when entering the SF
phase (arrow at $T_c$). For strong interactions ($U/W>0.8$) on the other hand,
the pair density increases with decreasing temperature until $T_c$ is
reached and then decreases. This agrees with the
known fact that the superfluidity is driven by interaction
energy gain for weak coupling, as opposed to kinetic energy gain for
strongly coupled systems.\cite{TCC05}
With these insights we can comment on how our results compare to the preformed
pair scenario in Fig.~\ref{schemphase_diagram}.
It is interesting to note that at very high temperatures the PG
behavior in Fig.~\ref{compare} no longer changes significantly. In other
words, the PG persists and no temperature $T_{\rm p}$ for its appearance can be identified.
This is the case even for temperatures where the pair density has decreased to values
close to the non-interacting result.
Furthermore, we found PG behavior for both $U/W=0.6$ and $U/W=1$ at high
temperature, but at intermediate temperatures ($T/W \sim 0.05$) the case
$U/W=0.6$ shows FL behavior. In both cases we observe a strongly enhanced
local pair density at such temperatures, which can be interpreted as a preformed pair
state; however, the manifestation in the spectral function is different.
Both of these observations are in clear contrast to the preformed pair
scenario, where the existence of the PG behavior is linked to the presence of an enhanced pair
density.\cite{Ran10,CLS06}
In Fig.~\ref{TC}, we take a closer look at dynamic response functions close to the
SF transition temperature $T_c$.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig6a.pdf}
\includegraphics[width=0.95\columnwidth]{fig6b.pdf}
\end{center}
\caption{(Color online) $\rho(\omega)$ and $\Sigma(\omega)$ close to the
SF phase transition. We use the same legend for the self-energies as
for the Green's functions. The transition temperatures are $T_c/W=0.032$ for
$U/W=0.6$ and $T_c/W=0.035$ for $U/W=1.0$.\label{TC}}
\end{figure}
\begin{figure*}[!ht]
\includegraphics[width=0.32\linewidth]{fig7a.pdf}
\includegraphics[width=0.32\linewidth]{fig7b.pdf}
\includegraphics[width=0.32\linewidth]{fig7c.pdf}
\includegraphics[width=0.32\linewidth]{fig7d.pdf}
\includegraphics[width=0.32\linewidth]{fig7e.pdf}
\includegraphics[width=0.32\linewidth]{fig7f.pdf}
\includegraphics[width=0.32\linewidth]{fig7g.pdf}
\includegraphics[width=0.32\linewidth]{fig7h.pdf}
\includegraphics[width=0.32\linewidth]{fig7i.pdf}
\caption{(Color online) Momentum resolved spectral
function for $U/W=0.4$ (upper panels), $U/W=0.6$ (middle panels) and $U/W=1$ (lower panels). The
temperatures are $T/W=0.2, 0.08, 0.01$ from left to right. The red line
corresponds to the non-interacting system, the green line corresponds to the
Fermi energy.\label{spec_momentum}}
\end{figure*}
The plots in the upper part of the figure show $\rho(\omega)$ for $U/W=0.6$
and $U/W=1$, which correspond to a transition into the SF phase from the FL
and NFL regime, respectively. The lower part of the figure
displays the corresponding diagonal and off-diagonal self-energies (real and
imaginary parts). For the weaker coupling case, $U/W=0.6$, at $T>T_c$
there is the usual FL dip in $\mathrm{Im}\Sigma_{11}(\omega)$ and the corresponding peak in $\rho(\omega)$.
When the temperature is lowered through $T_c$ the off-diagonal self-energy
becomes finite and a dip in $\rho(\omega)$ is induced.
Just below the transition temperature, this effect stems mainly
from $\mathrm{Re}\Sigma_{12}(\omega)$. Lowering the temperature further, the
amplitude of the diagonal part of the self-energies decreases without showing
new features.
As discussed in Sec.~III, the gapping out of excitations is dominated by
contributions from $\mathrm{Re}\Sigma_{12}(\omega)$.
In the case of stronger interaction, $U/W=1$, superfluidity sets in from
the NFL regime with a PG at the Fermi energy.
When lowering the temperature through $T_c$, the off-diagonal
self-energy becomes finite, but at first the diagonal part of the self-energy
remains nearly unchanged (the orange line, $T/T_c=1$, overlaps with the dark
green line, $T/T_c=1.1$).
On further reducing the temperature the off-diagonal self-energy increases
substantially and $\mathrm{Im}\Sigma_{11}(\omega)$ is strongly reduced developing a FL dip at
the Fermi energy.
The gap in $\rho(\omega)$ changes smoothly from the PG with broad peaks separated by $U$
to the sharper structures (coherence peaks) in the
SF phase. It is interesting to note that the gap, if defined as
the distance between the maxima, is larger above $T_c$ in the PG regime than in the SF
phase. One should also note that for low temperatures the gap becomes much
more pronounced with a suppression of spectral weight at $\omega=0$ and as such
is approaching a full gap in the limit $T\to 0$.
A remarkable observation is that the qualitative behavior of the off-diagonal
part of the self-energy can change within the SF phase.
Generally, $|\mathrm{Re}\Sigma_{12}(\omega)|$ approaches the mean field result $U\langle
c_{i,\uparrow} c_{i,\downarrow}\rangle$ for $|\omega|\to \infty$.\cite{BHD09}
At weaker coupling ($U/W=0.6$) and low temperature it is minimal for small $\omega$.
Decreasing the temperature, the anomalous expectation value increases and this
is reflected in the results for $\Sigma_{12}(\omega)$. The $\omega$-dependence
can be understood at weak coupling from the effective interaction inducing
superfluidity, which possesses a repulsive component that is peaked at small
$\omega$.\cite{fn3}
However, when entering the SF phase from the PG regime at
stronger coupling ($U/W=1$), $|\mathrm{Re} \Sigma_{12}(\omega)|$ first develops a
strong maximum at $\omega=0$. When the temperature is lowered further this
behavior continually reverts to the one of the weak coupling situation. The form of the
spectral function changes at $T_c$ and there is a shift from the gap feature
being induced by $\Sigma_{11}(\omega)$ (above $T_c$) to $\Sigma_{12}(\omega)$
(below $T_c$). The observed strong changes are related to this shift and
a more thorough understanding requires further investigation.
We now turn our attention to features in the momentum resolved spectral function
$\rho_{{\bm k}}(\omega)$. A good overview of the behavior for different
interactions and temperatures can be obtained in the intensity plots in
Fig.~\ref{spec_momentum}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig8.pdf}
\end{center}
\caption{(Color online) $\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)$ at $\vk_{{\scriptscriptstyle \mathrm{F}}}=(\pi/2,\pi/2,\pi/2)$ for $U/W=0.4$,
$U/W=0.6$, and $U/W=1$ at different temperatures.\label{compare2}}
\end{figure}
We show $\rho_{{\bm k}}(\omega)$ for three
interaction strengths [$U/W=0.4$ (upper panels), $U/W=0.6$ (middle panels),
and $U/W=1$ (lower panels)] for three different temperatures [$T/W=0.2$ (left), $T/W=0.08$ (middle), and
$T/W=0.01$ (right)]. We also show the Fermi level (dashed line) and the
non-interacting dispersion (full red line) for orientation.
At weak coupling, $U/W=0.4$, the spectral function displays only a weak
modification of the non-interacting result, with some broadening of the
peaks and a minor shift of spectral weight. At low temperature the system is
SF and excitations at $\vk_{{\scriptscriptstyle \mathrm{F}}}$ are gapped out. Notice that the width of the
Bogoliubov peaks at the gap edge is overestimated by our broadening
procedure.\cite{BHD09}
For $U/W=0.6$, we find similar features in $\rho_{{\bm k}}(\omega)$ to those
found in the integrated spectral function, $\rho(\omega)$, as far as the PG is concerned. At high
temperatures we see a broadened dispersion similar to, but shifted from, the
non-interacting one. Spectral weight is suppressed at the Fermi energy such
that PG features are realized at high temperatures. Curiously, this PG closes at
intermediate temperatures $\sim T_{\rm FL}$ where the behavior of the
self-energy changes. Below $T_c$ the spectrum is gapped again. Notice that
band renormalization features appear somewhat weaker than at high
temperatures.
For $U/W=1$ the NFL regime extends from high temperatures down to
$T_c$. The self-energy undergoes only very slight changes
when decreasing the temperature in the NFL regime. Accordingly, the
momentum-resolved spectral functions for $T/W=0.2$ and $T/W=0.08$
(lower left panel and lower middle
panel) are nearly the same. We observe a large PG around the
Fermi energy; the spectral weight at the Fermi energy is very small. When
entering the SF phase, gap features are visible and the dispersion changes in
the vicinity of $\omega=0$. For this interaction strength, we observe a clear
deviation between the non-interacting band structure and the interacting
spectral function.
In the SF phase we find a mirror or ``shadow''
band, appearing as if reflected at $\omega=0$.
These bands can be understood as arising from the particle-hole doubling
in the Nambu representation, an effect also observed in the
antiferromagnetically ordered phase with zone doubling.\cite{BH07c}
In Fig.~\ref{compare2} we show particular cuts for $\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)$ as a
function of $\omega$. Here the PG features can be seen even more
clearly. As in the integrated spectrum, $\rho(\omega)$, the PG is absent for the weak-coupling
case, $U/W=0.4$, but present at high temperature for stronger interactions, $U/W=0.6$ and
$U/W=1$. For $U/W=0.6$ the PG disappears in the FL regime,
whereas it remains for $U/W=1$. We also
show the real part of the diagonal self-energy. As discussed in Sec.~III,
the peak splitting in $\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)$ can be induced from non-trivial
solutions of $\omega=\mathrm{Re}\overline{\Sigma}_{11}(\omega)$, and we have
included a dashed line to see this graphically. As can be clearly seen, this
condition is not satisfied in the weak coupling case. In contrast, at strong
coupling, $U/W=1$, the intersection points characterize the peak positions
well. The spectral function changes in the SF phase (lowest temperatures), where the
coherence peaks at the gap edge become visible.
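The peak-splitting condition can be illustrated with a schematic pole-like self-energy, $\Sigma(\omega)=s^2/(\omega+i\Gamma)$ (a hypothetical model form used only for illustration; $s$ and $\Gamma$ are free parameters, not fitted to our data). Nontrivial solutions of $\omega=\mathrm{Re}\Sigma(\omega)$ then exist at $\omega=\pm\sqrt{s^2-\Gamma^2}$, i.e., only for a sufficiently strong pole:

```python
def re_sigma(omega, s, gamma):
    # Real part of the model self-energy s^2 / (omega + i*gamma).
    return s * s * omega / (omega * omega + gamma * gamma)

def nontrivial_intersections(s, gamma, wmax=5.0, n=100001):
    """Count solutions of omega = Re Sigma(omega) away from omega = 0
    by looking for sign changes of f(w) = w - Re Sigma(w)."""
    roots = 0
    prev = None
    for i in range(n):
        w = -wmax + 2.0 * wmax * i / (n - 1)
        if abs(w) < 1e-3:   # exclude the trivial solution at w = 0
            prev = None
            continue
        f = w - re_sigma(w, s, gamma)
        if prev is not None and f * prev < 0.0:
            roots += 1
        prev = f
    return roots

print(nontrivial_intersections(s=1.0, gamma=0.2))  # 2 intersections: split peaks
print(nontrivial_intersections(s=0.1, gamma=0.5))  # 0: single peak, no splitting
```

In this toy model the splitting disappears once the broadening $\Gamma$ exceeds the pole strength $s$, mirroring the absence of the splitting at weak coupling.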
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig9a.pdf}
\includegraphics[width=0.95\columnwidth]{fig9b.pdf}
\end{center}
\caption{(Color online) The DOS and imaginary part of the self-energy for
$U/W=0.6$ and $U/W=1$ for different temperatures. The filling of the system
is fixed to $n=0.5$.\label{compare_doped}}
\end{figure}
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig10.pdf}
\end{center}
\caption{(Color online) The DOS and imaginary part of the self-energy for
$U/W=0.6$, and $U/W=1$ for different fillings. The temperature of the system
is $T/W=0.05$. All data shown corresponds to the normal phase.\label{compare_diff_doped}}
\end{figure}
\begin{figure*}
\includegraphics[width=0.32\linewidth]{fig11a.pdf}
\includegraphics[width=0.32\linewidth]{fig11b.pdf}
\includegraphics[width=0.32\linewidth]{fig11c.pdf}
\includegraphics[width=0.32\linewidth]{fig11d.pdf}
\includegraphics[width=0.32\linewidth]{fig11e.pdf}
\includegraphics[width=0.32\linewidth]{fig11f.pdf}
\includegraphics[width=0.32\linewidth]{fig11g.pdf}
\includegraphics[width=0.32\linewidth]{fig11h.pdf}
\includegraphics[width=0.32\linewidth]{fig11i.pdf}
\caption{(Color online) Momentum resolved spectral function
$\rho_{{\bm k}}(\omega)$ for $n=0.5$ from top to bottom $U/W=0.4$, $U/W=0.6$, and $U/W=1$;
and from left to right $T/W=0.2$, $T/W=0.08$, and $T/W=0.01$. The red line
corresponds to the non-interacting dispersion $\epsilon_{{\bm k}}$, the dashed green line
corresponds to the Fermi energy.\label{spec_momentum_doped}}
\end{figure*}
\section{PG physics away from half filling}
So far we have focused on the situation at half filling where the discussion
is somewhat simplified due to the particle-hole symmetry. In this section we
show results for different filling factors ($n<1$) to see
how the PG behavior is affected. This is important for comparison with
experiments with ultracold atoms, where due to the
trapping potential no homogeneous filling fraction can be expected.
In Fig.~\ref{compare_doped} results for $\rho(\omega)$ and
$\mathrm{Im}\Sigma(\omega)$ analogous to the ones in Fig.~\ref{compare} for the
half-filled case are displayed for $n=0.5$ over a wide temperature range. The
lowest temperature corresponds to a gapped SF state.
Looking at the self-energies in the lower panel, we can clearly see
that the classification into FL and NFL regions is still applicable and
$|\mathrm{Im}\Sigma(\omega)|$ can either show a double peak with a dip in the vicinity of
$\omega=0$ (FL) or a strong single peak (NFL). It is useful here to distinguish
temperatures $T/W\lesssim 0.2$, where features are close to $\omega=0$, and
higher temperatures, where the NFL peak in $|\mathrm{Im}\Sigma(\omega)|$ moves
systematically to higher energies. In contrast to the half-filled situation,
the case $U/W=0.6$ does not show a clear PG in $\rho(\omega)$. However, for
$U/W=1$ the PG is clearly visible. For lower temperatures the minimum in
$\rho(\omega)$ is close to $\omega=0$ and for higher temperatures it moves to
higher energies together with the NFL peak in $|\mathrm{Im}\Sigma(\omega)|$. Notice,
however, that the minimum in $\rho(\omega)$ and the peak in
$|\mathrm{Im}\Sigma(\omega)|$ do not coincide as they do for $n=1$.
The shift of the PG with temperature can be understood by recalling that at
high temperature the Fermi distribution becomes flatter such that higher
energies contribute to the particle number, $n=\int d\omega \rho(\omega)n_{\rm
F}(\omega)$. To satisfy this relation at higher temperature the spectrum has
to be shifted.
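This argument can be made concrete with a crude numerical sketch (a schematic box-shaped spectrum with more weight above the Fermi energy stands in for $\rho(\omega)$; all numbers are illustrative): keeping the spectrum fixed while raising $T$ changes $n$, so an upward shift of the spectrum is needed to keep the filling constant.

```python
from math import exp

def n_F(w, T):
    # Fermi function with a clipped exponent for numerical safety.
    x = w / T
    if x > 50.0:
        return 0.0
    if x < -50.0:
        return 1.0
    return 1.0 / (1.0 + exp(x))

def filling(shift, T, wmin=-0.5, wmax=1.5, n=4000):
    """n = int dw rho(w) n_F(w + shift) for a box DOS rho = 0.5 on (wmin, wmax);
    shift > 0 moves the whole spectrum to higher energies."""
    dw = (wmax - wmin) / n
    return sum(0.5 * n_F(wmin + (i + 0.5) * dw + shift, T) * dw for i in range(n))

n_low = filling(0.0, T=0.02)    # reference filling at low temperature
n_high = filling(0.0, T=0.40)   # same spectrum, higher T: filling has drifted up
# bisect for the compensating upward shift of the spectrum at high T
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if filling(mid, 0.40) > n_low:
        lo = mid
    else:
        hi = mid
print(n_low, n_high, lo)  # n_high > n_low and the required shift lo is positive
```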
The PG structure in $\rho(\omega)$ at elevated temperature can still be
understood from the local picture. For $n<1$, we can write
$\mu=-U/2-\Delta\mu$ ($U>0$) assuming $\Delta\mu>0$,
and the atomic energies are $E_{\alpha}=0,U/2+\Delta\mu, 2\Delta\mu$.
The partition function reads
\begin{equation}
Z=1+2\mathrm e^{-\beta(U/2+\Delta\mu)}+\mathrm e^{-2\beta\Delta\mu} .
\end{equation}
There are now excitations at $\omega_+=U/2+\Delta\mu$ and
$\omega_-=-U/2+\Delta\mu$ with generally asymmetric weights
\begin{equation}
w_+=\frac{1}{Z}[1+\mathrm e^{-\beta(U/2+\Delta\mu)}],
\end{equation}
and
\begin{equation}
w_-=\frac{1}{Z}[\mathrm e^{-\beta 2 \Delta\mu}+\mathrm e^{-\beta(U/2+\Delta\mu)}],
\end{equation}
respectively.
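The weights above can be evaluated directly (a plain numerical check of the expressions, with illustrative parameter values): they sum to the full spectral weight of one, and become asymmetric as soon as $\Delta\mu>0$:

```python
from math import exp

def atomic_weights(U, dmu, T):
    """Weights w_plus, w_minus of the atomic-limit excitations at
    omega_pm = +/- U/2 + dmu for the doped case."""
    b = 1.0 / T
    Z = 1.0 + 2.0 * exp(-b * (U / 2.0 + dmu)) + exp(-b * 2.0 * dmu)
    w_plus = (1.0 + exp(-b * (U / 2.0 + dmu))) / Z
    w_minus = (exp(-b * 2.0 * dmu) + exp(-b * (U / 2.0 + dmu))) / Z
    return w_plus, w_minus

wp, wm = atomic_weights(U=1.0, dmu=0.2, T=0.5)
print(wp + wm)   # = 1: full spectral weight of the local propagator
print(wp > wm)   # True: asymmetric weights for dmu > 0
```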
Without showing explicit results we note that the pair density $\langle
n_\uparrow n_\downarrow\rangle$ displays a similar temperature dependence for
$n=0.5$ to what was shown in Fig.~\ref{double}, increasing from $n_{\sigma}^2$ at
large $T$ to larger values (maximal $n/2$). Therefore, similarly to the
half filled case PG behavior can coincide with an enhanced pair density for
large interactions and $T_c\lesssim T$. However, we also find cases,
e.g. $U/W=0.6$, $T/W=0.05$, with enhanced pair density ($\langle
n_\uparrow n_\downarrow\rangle \approx 0.17$) and no PG behavior, in
contrast to the expected relation in the preformed pair scenario.
In order to get an insight into the overall trends, we compare several different fillings
in Fig.~\ref{compare_diff_doped}. We show $\rho(\omega)$ and $\mathrm{Im}\Sigma(\omega)$
for $U/W=0.6$ and $U/W=1$ for low temperature, $T/W=0.05$, in the normal phase.
For $U/W=0.6$, we find a clear FL dip in $\mathrm{Im}\Sigma(\omega)$. It
is clearly visible, even for $n=0.1$, that the self-energy does not change its
structure when reducing the filling further. The frequency dependence in this
FL regime is relatively symmetric with respect to $\omega=0$.
The DOS, on the other hand, changes with $n$. While for $n =0.5$ a clear
peak close to the Fermi energy is visible in the DOS at low temperature, such
a peak is hardly noticeable for $n=0.2$, and it has disappeared for $n=0.1$. The
amplitude of the self-energy has become too weak to change the spectrum and we
essentially see a shifted non-interacting DOS.
For $U/W=1$, we find a NFL peak in $\mathrm{Im}\Sigma(\omega)$ for all fillings in
Fig.~\ref{compare_diff_doped}. We observe similar effects as for weaker
interactions when reducing the filling as far as the strength of the self-energy is concerned.
However, we find a clear PG structure in the DOS.
Whilst the PG structure at low temperature is pinned to
$\omega=0$, at high temperature the whole spectrum including the PG is shifted
to high frequencies (see Fig.~\ref{compare_doped}).
Note that at high temperature due to the flattening of $n_{\rm F}(\omega)$ the
Fermi energy ($\omega=0$) does not play such an important role as it does
for low temperatures.
In summary, when analyzing $\rho(\omega)$ and $\mathrm{Im}\Sigma(\omega)$ we can
find similar features to the ones of the half filled situation and a PG
appears for suitable parameters. However, depending on filling, temperature,
and interaction strength, the occurrence of the PG may be limited. At high
temperatures it can be shifted away from $\omega=0$, although it is still
clearly visible in the spectrum. Moreover, the impact of the local
Hubbard interaction becomes weaker for a system with smaller filling factor.
Momentum resolved spectra for $n=0.5$ and various values of $T$ and $U$ are displayed in
Fig.~\ref{spec_momentum_doped}. Generally, the features are similar to the
half filled case. For weak coupling ($U/W=0.4$) we find a shifted and
broadened spectrum which shows a SF gap at low temperature.
For intermediate coupling ($U/W=0.6$) interaction effects are more visible in
the spectrum, resulting in stronger band renormalization effects and shifts of
spectral weight. However, in contrast to $n=1$ no clear PG becomes visible in
$\rho_{{\bm k}}(\omega)$.
For $U/W=1$, we see strong interaction effects and PG features at all
temperatures above $T_c$. We also clearly observe an asymmetry in the
intensity, which is substantially lower for the $\omega<0$ part of the spectrum.
Particular cuts along $\omega$ for momenta which satisfy $\xi_{\vk_{{\scriptscriptstyle \mathrm{F}}}}+\mathrm{Re}\Sigma(0)=0$ are
shown in Fig.~\ref{Gk_doped}.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=0.95\columnwidth]{fig12.pdf}
\end{center}
\caption{(Color online) $\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)$ for $U/W=0.4$, $U/W=0.6$ and $U/W=1$ and different
temperatures. Notice that the lowest temperature $T/W=0.01$ is below $T_c$
in all cases.
The lower panels show the real part of the self-energy including the line
$y=\omega$. \label{Gk_doped}}
\end{figure}
At weak coupling ($U/W=0.4$) the FL peak is gapped out when the temperature is
lowered below $T_c$. For intermediate coupling ($U/W=0.6$) above $T_c$ we find
that the FL peak is shifted away from $\omega=0$ to higher energies. Also the
coherence peaks below $T_c$ show some asymmetry due to self-energy effects.
A clear PG is only visible for larger interactions, $U/W=1$. The lower panel
shows again the real part of the self-energy. In
contrast to the situation at half filling, the peaks in $\rho_{\vk_{{\scriptscriptstyle \mathrm{F}}}}(\omega)$
are not well explained by the intersection, $\omega=\mathrm{Re}\overline{\Sigma}(\omega)$. In
this situation the variation of $\mathrm{Im}\Sigma(\omega)$ is too strong, invalidating
the simple arguments of Sec.~III. Nevertheless, a NFL peak form of the
self-energy is clearly important for the PG behavior.
\section{Discussion and Conclusions}
We have analyzed the occurrence of PG features in the integrated and
${\bm k}$-resolved spectral function of the three-dimensional attractive Hubbard
model for different temperatures, interactions and filling factors. Properties
of the spectral functions have been traced back to the characteristic behavior
of the self-energy. We find PG behavior as long as the interaction $U$ is
large enough ($\sim W$) and the self-energy shows NFL behavior, i.e., $T>
T_{\rm FL}(U)$.
Our results show marked deviations from the popular preformed pair scenario,
where PG behavior is directly linked to the formation of pairs at a temperature
$T_{\rm p}>T_c$: (i) We find that PG behavior persists up to large temperatures
and is not bounded by some temperature scale $T_{\rm p}$. (ii) We find cases
with a substantially enhanced pair density where no PG behavior occurs.
The first effect is related to the fact that we are working with a lattice
model, such that local excitations are always well defined and related to
the chemical potential $\mu$ and $U$. This might be different in the continuum
where it is conceivable that the preformed pair scenario of Fig.~1 is
applicable. On the other hand we expect the PG to be present at large
temperatures as a non-perturbative local lattice effect also in
the two-dimensional lattice model.
Certainly, other effects like strong phase fluctuations and small momentum
pairing fluctuations, not contained in our calculations, can lead to an
extension of the regimes where PG behavior occurs.
A word of caution is in order concerning the large temperatures addressed
in this paper. We have dealt with a strict one-band model, where the kinetic energy is limited by
the bandwidth. In most real systems very high temperatures would activate
higher bands, and in solid state systems they can lead to the melting of the
crystal structure; such effects are obviously not captured in our setup.
Experiments with ultracold atoms in optical lattices provide an excellent
platform to test our predictions. Interactions can be tuned in a wide range by
Feshbach resonances, the lattices can be loaded with different filling factors
and a temperature range $T/W=0.1-0.2$ is routinely accessible
\cite{BDZ08}. Integrated and momentum resolved spectra can be measured such
that a direct comparison with our predictions is possible.
Thus, we hope that our work will stimulate further efforts in this field that contribute to a
better understanding of the intriguing PG physics.
\paragraph*{Acknowledgments -} We wish to thank M. Capone, N. Dupuis,
A. Georges, O. Gunnarsson, B. Halperin, A. Koga, W. Metzner, E. Perepelitsky, M. Punk,
P. Strack, and A. Toschi for very helpful discussions and suggestions during
different stages of this work. JB acknowledges financial support from the DFG
through grant number BA 4371/1-1. RP is supported by the FPR program of
RIKEN. Computer calculations have been done at the RICC supercomputer at RIKEN
and the Kashiwa supercomputer of the Institute of Solid State Physics in
Japan.
\begin{appendix}
\section*{Appendix}
\subsection{T-matrix approximation}
A popular approximation for the self-energy is the so-called $T$-matrix
approximation, which corresponds essentially to summing the scattering
processes in the particle-particle channel. One has,\cite{Hau92,KMS99}
\begin{equation}
\Sigma^{(1)}=TU\sum_{m,{\bm q}}\mathrm e^{i\omega_m\eta}
G({\bm q},i\omega_m) ,
\label{eq:sigma1iw}
\end{equation}
or equivalently,
\begin{equation}
\Sigma^{(1)}=U\sum_{{\bm q}}\integral{\omega}{}{}\rho({\bm q},\omega)n_{\rm
F}(\omega) ,
\label{eq:sigma1w}
\end{equation}
and
\begin{equation}
\Sigma_{{\bm k}}^{\rm T}(i\omega_n)=T\sum_{m,{\bm q}}\mathrm e^{i\omega_n\eta}\Gamma({\bm q},i\omega_m)
G({\bm q}-{\bm k},i\omega_m-i\omega_n) ,
\label{eq:sigmaTiw}
\end{equation}
with $\eta\to0$. Here we defined
\begin{equation}
\Gamma({\bm q},i\omega_m)=\frac{U^2K({\bm q},i\omega_m)}{1-U K({\bm q},i\omega_m)},
\end{equation}
with the particle-particle propagator
\begin{equation}
K({\bm q},i\omega_m)=-T\sum_{n,{\bm k}} G({\bm q}-{\bm k},i\omega_m-i\omega_n)G({\bm k},i\omega_n).
\end{equation}
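The vertex $\Gamma$ resums the ladder of particle-particle scattering events; as a small algebraic sketch (independent of any model details), the geometric series $U^2K+U^3K^2+\dots$ converges to the closed form $U^2K/(1-UK)$ whenever $|UK|<1$:

```python
def gamma_closed(U, K):
    # Closed form of the resummed particle-particle ladder.
    return U * U * K / (1.0 - U * K)

def gamma_series(U, K, nmax):
    # Partial sum U^2 K + U^3 K^2 + ... with nmax rungs of the ladder.
    total = 0.0
    term = U * U * K
    for _ in range(nmax):
        total += term
        term *= U * K
    return total

U, K = 0.8, -0.6                 # sample values with |U*K| < 1
print(gamma_closed(U, K))
print(gamma_series(U, K, 200))   # converges to the closed form
```

The same algebra holds for the complex $K({\bm q},i\omega_m)$ entering the full calculation, as long as $|UK|<1$.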
The self-energy is
$\Sigma_{{\bm k}}(i\omega_n)=\Sigma_{{\bm k}}^{(1)}(i\omega_n)+\Sigma_{{\bm k}}^{\rm T}(i\omega_n)$.
In the local approximation, the expressions simplify. We find the following
results after analytic continuation,
\begin{equation}
\Sigma^{(1)} =U\integral{\omega}{}{}\rho_G(\omega)n_{\rm F}(\omega) ,
\label{eq:sigmalociw}
\end{equation}
and
\begin{equation}
\Sigma^{\rm T}(\omega)=\integral{\omega_1}{}{}\integral{\omega_2}{}{}\frac{\rho_{\Gamma}(\omega_1)
\rho_G(\omega_2)}{\omega^+-\omega_1+\omega_2}[n_{\rm B}(\omega_1)+n_{\rm F}(\omega_2)] .
\end{equation}
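A quick numerical sanity check of the Hartree-like term (using a schematic particle-hole-symmetric box spectral function rather than our NRG data): since $n_{\rm F}(\omega)+n_{\rm F}(-\omega)=1$, a symmetric $\rho_G$ pins $\Sigma^{(1)}=U/2$ at every temperature.

```python
from math import exp

def n_F(w, T):
    # Fermi function with a clipped exponent for numerical safety.
    x = w / T
    if x > 50.0:
        return 0.0
    if x < -50.0:
        return 1.0
    return 1.0 / (1.0 + exp(x))

def sigma1(U, T, D=1.0, n=20001):
    """Sigma^(1) = U * int dw rho_G(w) n_F(w) for a box DOS on (-D, D)."""
    dw = 2.0 * D / n
    rho = 1.0 / (2.0 * D)
    return U * sum(rho * n_F(-D + (i + 0.5) * dw, T) * dw for i in range(n))

for T in (0.05, 0.5, 5.0):
    print(sigma1(U=1.0, T=T))   # = U/2 = 0.5 at each temperature
```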
We have
\begin{equation}
K(i\omega_m)=-T\sum_{n} G(i\omega_m-i\omega_n)G(i\omega_n),
\end{equation}
and $\rho_{\Gamma}=-\frac{1}{\pi}\mathrm{Im}\Gamma(\omega^+)$.
Introducing spectral functions we can also write,
\begin{equation}
K(\omega^+)=\integral{\omega_1}{}{}\integral{\omega_2}{}{}\frac{\rho_G(\omega_1)\rho_G(\omega_2)}
{\omega^+-\omega_1-\omega_2}[n_{\rm F}(\omega_1)-n_{\rm F}(-\omega_2)],
\end{equation}
and
\begin{equation}
\Gamma(i\omega_m)=\frac{U^2K(i\omega_m)}{1-U K(i\omega_m)}.
\end{equation}
The $T$-matrix calculations can be done non-self-consistently (Tnsc) and
self-consistently (Tsc).
\subsection{Comparison of NRG-DMFT with IPT and T-matrix}
We start with a comparison of the DMFT results obtained using NRG calculations
for the effective impurity model with DMFT calculations using second order
perturbation theory, usually termed iterated perturbation theory (IPT). IPT gives
qualitatively reliable results in the half filled Hubbard model.\cite{GKKR96}
Since IPT does not require a prescription for broadening discrete excitations,
this comparison helps to validate the finite temperature broadening procedure
described in Sec.~II. We focus on results at half filling in this section.
In Fig.~\ref{comp_ipt} we show a comparison of the imaginary part of the
self-energy, $\mathrm{Im}\Sigma (\omega)$, and the integrated spectral function,
$\rho(\omega)$, for $U/W=0.6$ (left) and $U/W=1$ (right) and different temperatures.
\begin{figure}[!ht]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=0.48\columnwidth]{fig13a.pdf}
\includegraphics[width=0.48\columnwidth]{fig13b.pdf}
\includegraphics[width=0.48\columnwidth]{fig13c.pdf}
\includegraphics[width=0.48\columnwidth]{fig13d.pdf}
\end{center}
\vspace{-0.5cm}
\caption{(Color online) Comparison of DMFT-NRG (full lines)
and IPT (dashed lines) results for $U/W=0.6$ (left) and $U/W=1$ (right) for
$\mathrm{Im}\Sigma (\omega)$ and $\rho(\omega)$.
\label{comp_ipt}}
\end{figure}
\noindent
Overall the agreement is good with minor deviations in the tails. There is
a particularly visible difference for $U/W=1$, where the IPT result for
$\mathrm{Im}\Sigma (\omega)$ shows a somewhat stronger peak. This leads to a more
pronounced PG in $\rho(\omega)$.
We conclude that the DMFT-NRG results at high temperatures have the
qualitatively correct form and that the PG persists there.
We also provide a comparison of the DMFT-NRG results with T-matrix
calculations. In particular, we use Eq.~(\ref{eq:sigmalociw}) and the
following equations, and include self-consistent (Tsc) and non-self-consistent
(Tnsc) results. Note that the T-matrix calculations are only sensible as long as
$1-U \mathrm{Re} K(\omega)$ does not become zero, which is particularly important for
the non-self-consistent case.
At weak coupling ($U/W= 0.2$, not shown) one can find reasonable agreement of
T-matrix calculations with the DMFT-NRG and all calculations give no PG
behavior. However, in this situation also second order perturbation theory
gives satisfactory agreement.
\begin{figure}[!ht]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=0.48\columnwidth]{fig14a.pdf}
\includegraphics[width=0.48\columnwidth]{fig14b.pdf}
\end{center}
\vspace{-0.5cm}
\caption{(Color online) Comparison of the DMFT-NRG result with self-consistent
(Tsc) and non-self-consistent (Tnsc) T-matrix calculations for $\mathrm{Im}\Sigma
(\omega)$ and $\rho(\omega)$ for $U/W=0.4$ and $T/W=0.1$.}
\label{comp_tmatU04}
\end{figure}
\noindent
As seen in Fig.~\ref{comp_tmatU04} for $U/W= 0.4$ and $T/W=0.1$, Tsc and DMFT still show
reasonable agreement, whereas Tnsc calculations can strongly
overestimate $\mathrm{Im}\Sigma (\omega)$. This can produce a PG feature in $\rho(\omega)$, even though
the DMFT-NRG calculations give no PG behavior.
For intermediate coupling, $U/W=0.6$, and $T/W=0.2$, we show a further comparison
in Fig.~\ref{comp_tmatU06}.
\begin{figure}[!ht]
\begin{center}
\vspace{0.2cm}
\includegraphics[width=0.48\columnwidth]{fig15a.pdf}
\includegraphics[width=0.48\columnwidth]{fig15b.pdf}
\end{center}
\vspace{-0.5cm}
\caption{(Color online) Comparison of the DMFT-NRG result with self-consistent
(Tsc) and non-self-consistent (Tnsc) T-matrix calculations for $\mathrm{Im}\Sigma
(\omega)$ and $\rho(\omega)$ for $U/W=0.6$ and $T/W=0.2$.
\label{comp_tmatU06}}
\end{figure}
\noindent
In this case both T-matrix calculations give unreliable results. The
self-energy of the self-consistent version is too small and $\rho(\omega)$
shows no PG. The non-self-consistent calculation shows a PG, but its magnitude
is largely overestimated.
For larger interactions, for instance $U/W=1$, the deviations get worse. We
therefore conclude that T-matrix calculations, both self-consistent and
non-self-consistent, within the local approximation do not give reliable
results for the PG physics of the three-dimensional Hubbard model at half
filling.
\subsection{Second order self-energy and phase space factor}
The result for the second order retarded self-energy reads,\cite{SC91}
\begin{equation}
\Sigma^r(\omega,{\bm k}) = U^2\integral{\epsilon}{}{}
\frac{F^r(\epsilon,{\bm k})}{\omega+i\eta-\epsilon}.
\end{equation}
The imaginary part of the retarded self-energy is then given by
\begin{equation}
\mathrm{Im}\Sigma^r_{{\bm k}}(\omega)=-\pi U^2 F^r(\omega,{\bm k}),
\end{equation}
where $F^r(\epsilon,{\bm k})=f_1(\epsilon,{\bm k})+f_2(\epsilon,{\bm k})$,
with the phase space factors,
\begin{widetext}
\begin{equation}
f_1(\epsilon,{\bm k})=\sum_{{\bm k}_1,{\bm k}_2,{\bm k}_3}\delta(\xi_{{\bm k}_2}+\xi_{{\bm k}_3}-\xi_{{\bm k}_1}-\epsilon)
\delta({\bm k}+{\bm k}_1,{\bm k}_2+{\bm k}_3)n_{{\bm k}_1}(1-n_{{\bm k}_2})(1-n_{{\bm k}_3}),
\end{equation}
and
\begin{equation}
f_2(\epsilon,{\bm k})=\sum_{{\bm k}_1,{\bm k}_2,{\bm k}_3}\delta(\xi_{{\bm k}_2}+\xi_{{\bm k}_3}-\xi_{{\bm k}_1}-\epsilon)
\delta({\bm k}+{\bm k}_1,{\bm k}_2+{\bm k}_3)(1-n_{{\bm k}_1})n_{{\bm k}_2}n_{{\bm k}_3}.
\end{equation}
The expressions can be simplified in the limit of large dimensions. The momentum
integrations can be replaced by integrals over the density of states, momentum
conservation becomes implicit so that we can omit the corresponding $\delta$-function,
and the ${\bm k}$-dependence disappears,
\begin{equation}
f_1(\epsilon)=\integral{\epsilon_1}{}{}\!\integral{\epsilon_2}{}{}\!\integral{\epsilon_3}{}{}
\rho_0(\epsilon_1)\rho_0(\epsilon_2)\rho_0(\epsilon_3)
\delta(\epsilon_2+\epsilon_3-\epsilon_1-\epsilon-\mu)n_{\rm
F}(\epsilon_1-\mu)n_{\rm F}(-\epsilon_2+\mu)n_{\rm F}(-\epsilon_3+\mu).
\end{equation}
We can do the integration over the $\delta$-function,
\begin{equation}
f_1(\epsilon)=\integral{\epsilon_2}{}{}\!\integral{\epsilon_3}{}{}
\rho_0(\epsilon_2+\epsilon_3-\epsilon-\mu)\rho_0(\epsilon_2)\rho_0(\epsilon_3)
n_{\rm F}(\epsilon_2+\epsilon_3-\epsilon-2\mu)n_{\rm F}(-\epsilon_2+\mu)n_{\rm F}(-\epsilon_3+\mu),
\end{equation}
\end{widetext}
and similarly for $f_2(\epsilon)$.
In the particle-hole symmetric case we have,
\begin{equation}
f_2(\epsilon)=f_1(-\epsilon).
\end{equation}
It is then sufficient to evaluate $f_1(\epsilon)$ and we can
write,
\begin{equation}
F^r(\epsilon,{\bm k})=F^r(\epsilon)=f_1(\epsilon)+f_1(-\epsilon).
\end{equation}
This can be evaluated as a double integral for a given temperature and
$\rho_0(\epsilon)$.
Assuming that $\rho_0(\epsilon)$ is only finite in an interval $(-D,D)$, we can
analyze the double integration as being determined by a certain region in the
$(\epsilon_2,\epsilon_3)$ plane.
At $T=0$ a geometric analysis of the integration region shows
$f_1(\epsilon)\sim \epsilon^2$, which gives the typical Fermi liquid behavior, Eq.~(\ref{eq:FL}),
at low temperature.
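As a sanity check of this geometric argument, the $T=0$ limit can be evaluated numerically. The flat density of states $\rho_0=1/(2D)$, the choice $\mu=0$ and the grid spacing below are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Sketch: at T = 0 the Fermi factors become step functions, and for a
# flat (illustrative) density of states rho_0 = 1/(2D) on (-D, D) with
# mu = 0, f_1(eps) reduces to rho_0^3 times the area of the triangle
# {eps_2 > 0, eps_3 > 0, eps_2 + eps_3 < eps}, i.e. ~ eps^2 / 2.
D = 1.0
rho0 = 1.0 / (2.0 * D)

def f1_T0(eps, h=2e-3):
    """Riemann-sum estimate of f_1(eps) in the T = 0 limit."""
    grid = np.arange(h / 2.0, D, h)      # the eps_2, eps_3 > 0 region
    E2, E3 = np.meshgrid(grid, grid)
    inside = (E2 + E3) < eps             # remaining step function
    return rho0 ** 3 * h * h * np.count_nonzero(inside)

ratio = f1_T0(0.2) / f1_T0(0.1)
print(ratio)   # close to 4, i.e. f_1(eps) grows quadratically
```

Doubling $\epsilon$ roughly quadruples $f_1$, confirming the $\epsilon^2$ phase-space suppression.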
In the opposite limit, $T\to \infty$, a similar analysis shows that
$F^r(\epsilon)$ is maximal at $\epsilon=0$ and it decays for small $\epsilon$
as $-\epsilon^2$, which yields the NFL form Eq.~(\ref{eq:NFL}). One can estimate the
crossover temperature $T_{\rm FL}$ by studying when the coefficient of the
$\epsilon^2$ term changes sign. Depending on the density of states and the
approximations made one finds a result of the order of a fraction of the
bandwidth, consistent with the result in Fig.~\ref{phase_diagram} for small
$U$.
\end{appendix}
\section{Motivation}
The literature on portfolio management starts with the Markowitz portfolio
and the CAPM
(\cite{Lintner65}, \cite{Markowitz52}, \cite{Sharp64}). It is a
one-period %
model, where the information on assets is minimal. Every asset is
characterized by two numbers, its expected return and its covariance with
respect to the market portfolio. With such poor information, one cannot hope
to distinguish between stocks and bonds, and indeed part of the beauty of the
CAPM lies in its generality: it applies to any type of financial assets.
On the other hand, as soon as one tries to make use of all the information
available on assets, important differences appear between stocks and bonds.
Bonds mature, that is they are eventually converted into cash, whereas stocks
do not. The price of bonds depends on interest rates, and the price of stocks,
at least in the academic literature, does not. The bond market is notoriously
incomplete, much more so than the stock market, as is observed in practice. %
As a result, the classical
results on portfolio management, such as Merton's (\cite{Mert69}, \cite{Mert71}),
concern stock portfolios.
This paper and the papers \cite{I.E.-E.T bond th} and \cite{E.T Bond Completeness} were
born from a desire to extend them to bond portfolios.
More generally, we aim to construct a general framework for portfolio
management in continuous time, encompassing both stocks and bonds.
The first difficulty to overcome (and, in our opinion, the main financial one) is the
fact that such a theory should encompass two very different kinds of financial
assets: bonds, which have a finite life, and stocks, which are permanent. We
do it by introducing a new type of financial asset, the \emph{rollovers}. A
rollover of time to maturity $x$ is a bank deposit which can be cashed at
any time, with accrued interest, provided notice is given a time $x$ in advance.
Rollovers have a constant time to maturity (as opposed to zero-coupon bonds,
for instance), and are similar to stocks, in the sense that their main
characteristics do not change with time. By decomposing bonds into rollovers,
instead of decomposing them into zero-coupons, we can hope to incorporate
bonds and stocks into a unified theory of portfolio management.
Rollovers were considered in \cite{Rutkowski99} under the name
``rolling-horizon bond''.
This implies that the time to maturity $x$, rather than the maturity date $T,$
becomes the relevant characteristic of bonds.
Thus, we shall describe bonds using a moving maturity-time frame, where at time $t,$
the origin is the time to maturity $x=0,$ corresponding to the maturity date $T=t.$
As we shall see very soon, there
will be a mathematical price to pay for that.
At any time $t,$ denote by $p_{t}( x) $ the price of a unit
zero-coupon with time to maturity $x.$ The function $x\mapsto p_{t}(
x) $ will be called the zero-coupon (price) curve at time $t$; note that the
actual time when that zero-coupon matures is $T=t+x$, and that $T$ is fixed
while $x$ changes with $t$. The zero-coupon curve $p_{t}$ will be understood
to move randomly, and the second difficulty we face is to describe its motion
in some reasonable way. One solution is to decide that $p_{t}$ belongs to a
fixed family of curves, depending on finitely many parameters, so that
\[
p_{t}( x) =f( t,x;r_{1},...,r_{d})
\]
and the random motion of $p_{t}$ is the image of a random motion of the
$r_{i}$, which could be modelled, as in spot-rate models, by diffusions for instance. This is the
\emph{parametric} approach, which exhibits the classical difficulty of all
parametric approaches, namely that there is no theoretical reason why the
$p_{t}$ should be written in that way, so that the choice of the function $f$
has to be dictated by observational fit. One then has to strike the right
balance between two evils:\ if the number of parameters is too small, the
model will be unrealistic, and if it is high, the model becomes very difficult to calibrate.
We will operate in a \emph{non-parametric} framework: we will make no
assumption on $p_{t}$, beyond some very rough ones, regarding smoothness and
behavior at infinity, nothing that would much constrain their shape.
Mathematically speaking, we will let the curve $p_{t}$ move freely in a linear
space $E$, which will typically be an infinite-dimensional Banach space, of
functions from $[0, \infty [ \,$ to $\mathbb{R}.$
In order to reflect adequately known financial facts, the correct definition
of $E$ must incorporate some basic constraints:
\begin{enumerate}
\item At any time $t$, the zero-coupon prices $p_{t}( x) \ $must
depend continuously on the time to maturity $x$. In order for forward %
interest rates to be well-defined, they must also have some degree of
differentiability with respect to $x$. So $E$ must consist of continuous
curves with some degree of differentiability.
\item The degree of differentiability of functions in $E$ will determine which basic
interest rates derivatives can be modelled. If $p_{t}$ is continuous, for
instance, then we can %
introduce bonds. The price of a unit zero-coupon bond with
time to maturity $x$ is $p_{t}( x) $; the bond itself, i.e. the value of a portfolio including exactly one bond is
represented by the linear form $p_{t}\mapsto p_{t}( x).$
Mathematically speaking, this is just the Dirac mass $\delta_{x}$ at $x.$
Other derivatives, such as calls and puts on zero-coupon bonds, can then be introduced,
since the pay-off of each of them is a continuous function of the zero-coupon bond
price $p_{T}(x),$ with a given time to maturity $x.$
If $p_{t}$ is continuously differentiable, then the forward interest rate
with time to maturity $x,$
$-\frac{\partial}{\partial x}p_{t}(x)/p_{t}(x)$ is well-defined, and further
contingent claims can be defined, such as caps, floors and swaps.
\item The curve $p_{t}$ %
will be understood to move randomly in
$E$, the randomness being driven by a Brownian motion. We will therefore need
to define Brownian motions in the infinite-dimensional space $E$, which for
all practical purposes will require $E$ to be a Hilbert space.
\item The accepted standard in mathematical modelling of zero-coupon
prices (the Heath-Jarrow-Morton model, henceforth HJM) is to decide that the
real-valued process %
$t \mapsto p_{t}( T-t), $ the price at time
$t$ of a unit zero-coupon maturing at a given time $T$, is an It\^o
process satisfying a stochastic ordinary differential equation (SODE).
As is well-known, for fixed $x,$ the real-valued process %
$t\mapsto p_{t}( x) , $ which is also an It\^o process,
then no longer satisfies an SODE. Indeed, if $f(t,T) \equiv p_{t}( T-t),$
then we have $p_{t}(x)= f(t,t+x)$ %
so that for fixed $x:$
\begin{equation}
d_{t} p_{t}( x) =[d_{t} f(t, T)+\frac{\partial f(t,T)}{\partial T} dT]_{T=t+x}
=[d_{t} f(t, T)]_{T=t+x}+\frac{\partial p_{t}( x)}{\partial x} dt.
\label{1}%
\end{equation}
Here the right-hand side (r.h.s) depends, not only on $p_{t}( x) ,$ but also on
its partial derivative with respect to $x$. So, equation (\ref{1}) for $p$ is an SPDE,
a stochastic partial differential equation, where the first term on the r.h.s depends
only on the unknown $p_{t}( x) ,$ since $f(\cdot,T)$ satisfies an SODE. This is the well-known difficulty
of the Musiela parametrization (see \cite{Musi93}), and the space $\E{}$ shall permit a simple mathematical
formulation of the SPDE (\ref{1}).
\item
At any time $t$, the zero-coupon prices $p_{t}( x) $ should
go to zero as the time to maturity $x$ goes to $ \infty.$ To include also the
trivial case, where all interest rates vanish, and also cases where the forward
rates converge rapidly to zero as $x \rightarrow \infty,$ we only require
that $\lim p_{t}(x) $ exists as $x \rightarrow \infty.$ N.B. We will choose $\E{}$
such that the elements $f \in E$ satisfying
$\lim_{x \rightarrow \infty} f(x)=0$ form a closed
sub-space of $\E{},$ in order to cover easily the case where $p_{t}( x) \rightarrow 0. $
\end{enumerate}
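As a small numerical companion to item 2 above, the following sketch recovers forward rates from a sampled zero-coupon curve by finite differences, and reconstructs the curve from the rates. The particular forward curve, grid and tolerances are invented for illustration only:

```python
import numpy as np

# Item 2 in practice: the forward rate is f(x) = -p'(x)/p(x), and
# conversely p(x) = exp(-int_0^x f(u) du).  The curve below is an
# invented smooth example, used only to exercise the two formulas.
x = np.linspace(0.0, 10.0, 1001)           # times to maturity
h = x[1] - x[0]
fwd = 0.03 + 0.01 * np.exp(-x)             # assumed forward-rate curve

# p(x) via the cumulative trapezoidal rule
integral = np.concatenate(([0.0], np.cumsum(h * (fwd[1:] + fwd[:-1]) / 2.0)))
p = np.exp(-integral)

# recover f(x) = -(d/dx) log p(x) by finite differences
fwd_back = -np.gradient(np.log(p), h)

print(np.max(np.abs(fwd_back - fwd)))      # small discretization error
```

The round trip is exact up to discretization error, which is all the continuity and differentiability requirements of items 1 and 2 ask for.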
Formula (\ref{1}) is really an infinite family of coupled
equations, one for each
$x \geq 0,$ describing the motion of the random variable $p_{t}(x),$
which we write
\begin{equation}
dp_{t}(x)=p_{t}(x)m_{t}(x)dt+p_{t}(x)\sigma_{t}(x)dW_{t}+\frac{\partial p_{t}( x)}{\partial x} dt,
\label{2'}%
\end{equation}
where for the moment $W$ is thought of as a high-dimensional Brownian motion.
Let us rewrite it as a single stochastic evolution equation for the motion of the random curve
$p_{t}$ in $E,$ i.e. as an SODE in $E:$
\begin{equation}
dp_{t}=p_{t}m_{t}dt+p_{t}\sigma_{t} dW_{t}+( \partial p_{t}) dt
\label{2}%
\end{equation}
where
$\partial$ is the differentiation operator with respect to \textit{time to maturity},
i.e. it is defined by $(\partial u)(x)=\frac{d u( x)}{d x},$
for differentiable $u \in \E{}.$
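To make equation (\ref{2}) concrete, here is a toy Euler--Maruyama step on a maturity grid, with $\partial$ replaced by a finite difference and $W$ truncated to two coordinates. The grid, the drift, the volatilities (taken constant in $x$) and the seed are illustrative choices only, not part of the model:

```python
import numpy as np

rng = np.random.default_rng(0)

n, dt = 200, 1.0 / 252.0                   # maturity grid, daily step
x = np.linspace(0.0, 20.0, n)
h = x[1] - x[0]

p = np.exp(-0.03 * x)                      # initial zero-coupon curve
m = 0.0                                    # toy drift
sig = np.array([0.002, 0.001])             # two noise coordinates (toy)

def step(p, dW):
    """One Euler step of dp = p m dt + p sig.dW + (d/dx p) dt."""
    dpdx = np.gradient(p, h)               # discretized unbounded term
    return p + p * m * dt + p * (sig @ dW) + dpdx * dt

dW = rng.normal(0.0, np.sqrt(dt), size=2)
p_new = step(p, dW)
```

The extra transport term $\partial p_{t}\,dt$ is exactly what distinguishes the moving maturity-time frame from the fixed-maturity HJM dynamics.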
Since the left-hand side ``belongs'' to $E,$ so must the right-hand side, and then
$\partial p_{t}$ must belong to $E$.
There are ways to achieve that. One
is to choose a framework where the operator $\partial $ is
continuous over all of $E$. Then so is its $n$-th iterate $\partial ^{n}$,
so that the space $E$ must consist of functions which have infinitely many
derivatives. Unfortunately, the natural topology of such spaces cannot be
defined by a single norm, except for very particular cases,
and the mathematics become more demanding. %
A second more standard way to proceed is to consider $\partial $ as an unbounded
operator in a Hilbert space $E$, so that $\partial $ is defined only on a
subspace $\mathcal{D}( \partial ) \subset E$, called the domain of the
operator. One would then hope to define the solution of equation (\ref{2}) in
such a way that, if the initial condition $p_{0}$ lies in $\mathcal{D}(
\partial ) ,$ then $p_{t}$ remains in $\mathcal{D}(
\partial ) $ for every $t,$ so that $t \mapsto p_{t}$ is a trajectory in
$\mathcal{D}(\partial).$ In other words, if
$p_{0}( x) \ $is differentiable with respect to $x$, so should
the functions $x \mapsto p_{t}(x)$ be for all $t>0.$
To summarize, the introduction of rollovers and a moving frame forces us to complicate the
equations for price dynamics, by incorporating an additional term,
$\partial p_{t}$. To be able to solve the relevant equations, we have to
treat $\partial $ as an unbounded operator in Hilbert space. The definition
of the relevant Hilbert space has to incorporate basic properties which we
expect of zero-coupon curves.
This suits our purpose well, for it enables us to work in a non-parametric
framework, where no particular shape is assigned to the zero-coupon
curves. On the other hand, we then have to use the theory of Brownian motion
in infinite-dimensional Hilbert spaces and the corresponding stochastic
integrals, which creates some additional difficulties. We do not limit the
number of sources of noise; indeed, in our paper there can be infinitely many.
This is natural, given the already mentioned empirical fact that,
even using a large number of bonds, not all interest rate derivatives can be hedged.
The third difficulty to overcome is the mathematically significant fact that
such a market cannot be complete in the usual sense, i.e. that every (sufficiently integrable)
contingent claim being hedgeable. This has important implications for the solution
of the portfolio optimization problem. The now classical two-step solution, so successfully
applied to the case of a
finite number of stocks (cf. \cite{Kr-Scha}, \cite{Pliska86}), consisting of first determining the optimal final wealth
by duality methods and then determining a hedging portfolio, does not (yet at least) apply
to the general infinite dimensional bond markets.
In this paper (see \cite{I.E.-E.T bond th} and \cite{E.T Bond Completeness})
we give, within the considered general It\^o process model, the optimal final wealth
whenever it exists (Proposition \ref{exist unique X}).
The existence of an optimal portfolio is then established by the construction
of a hedging portfolio in two cases.
The first
is for deterministic $\E{}$-valued drift $m$ and volatility operator $\sigma,$
where we give a necessary and sufficient condition for the existence of an optimal portfolio. Here
there can exist several equivalent martingale measures (e.m.m.), so the market
can clearly be incomplete in every sense of the word.
The second is for certain stochastic
$m$ and $\sigma,$ for which there is a unique market price of risk process $\gamma.$
There is then a unique e.m.m. $Q.$ Certain integrability conditions on the $\ell^{2}$-valued
Malliavin derivative of the Radon-Nikodym density $dQ/dP$ then lead to the construction of a
hedging portfolio.
We have attempted to make these notes self-contained, with the exception of the general hedging result
in Theorem \ref{th completeness l^2}. The notes first recall some basic facts concerning
linear operators and semi-groups in Hilbert spaces, Sobolev spaces and stochastic
integration in Hilbert spaces. The theory of bond portfolios and hedging of
interest rate derivatives are then introduced.
Once this theory is explained, the paper proceeds to a short solution of the
optimization problem, leading to the results %
of \cite{I.E.-E.T bond th} and \cite{E.T Bond Completeness}.
In particular, under the assumption that the market prices
of risk are deterministic, some explicit formulas are given,
very similar in spirit to those known in the case of stock portfolios,
and a mutual fund theorem is formulated.
We conclude by stating an alternative formulation of the optimization problem
within a Hamilton-Jacobi-Bellman approach.
\section{Mathematical preliminaries}
\subsection{Hilbert spaces and bounded maps}
We shall be working with separable infinite-dimensional real Hilbert spaces.
Let $E$ be a Hilbert space with scalar product $(\;,\;)_{E}$
and norm $\| \;\;\|_{E},$ simply denoted $(\;,\;)$ and $\| \;\;\|$ if there is no risk of confusion.
The topology and convergence in $E$ are w.r.t. this norm, if not otherwise stated, i.e.
the strong topology and convergence. By definition, $E$ is \emph{separable} if it has
a countable dense subset. One shows easily that $E$ is separable iff it has a
countable orthonormal basis $e_{n},n\in \mathbb{N},$ i.e. $( e_{i},e_{j})
=0$ for $i\neq j$ and $\Vert e_{i} \Vert =1,$ so that every $x \in E$
can be written:%
\[
x=\sum_{n=0}^{\infty}\left( x,e_{n}\right) e_{n},%
\]
where the right-hand side converges in $E.$ Since the $e_{n}$ are orthonormal,
we have Parseval's equality:%
\[
\left\Vert x\right\Vert^{2} =\sum_{n=0}^{\infty}\left\vert \left( x,e_{n}\right)
\right\vert ^{2}.
\]
A typical separable Hilbert space is $\ell^{2}$, which is the space of all
real sequences $a_{n},n\in \mathbb{N},$ such that $\sum\left\vert a_{n}\right\vert
^{2}<\infty$. The scalar product in $\ell^{2}$ is given by $\left(
a,b\right) =\sum a_{n}b_{n}$. In fact, every infinite dimensional separable Hilbert space $E$ is
isomorphic to $\ell^{2}$. The map
\begin{equation}
x\mapsto a_{n}=( x,e_{n})_{E} ,n\in \mathbb{N},
\label{3}%
\end{equation}
of $E$ into $\ell^{2}$ is a linear bijection which preserves norms.
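A finite-dimensional sketch of the coordinate map (\ref{3}) and Parseval's equality, with $\mathbb{R}^{n}$ and the columns of a random orthogonal matrix standing in for $E$ and the basis $e_{n}$ (the dimension and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 8
# columns of Q: an orthonormal basis e_0, ..., e_{n-1} of R^n
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
v = rng.normal(size=n)

a = Q.T @ v                           # coordinates a_k = (v, e_k)
print(np.sum(a ** 2), np.dot(v, v))   # Parseval: the two agree
```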
A linear map $ L:E_{1}\rightarrow E_{2}$ is continuous if and only if it is
\emph{bounded}, that is if there exists a constant $c$ such that $\left\Vert
Lx\right\Vert _{E_{2}}\leq c\left\Vert x\right\Vert _{E_{1}}$ for every $x\in E_{1}.$
The (operator) norm of $L$ is then defined to be the infimum of all such $c$:%
\[
\left\Vert L\right\Vert =\inf\left\{ c\ |\left\Vert Lx\right\Vert _{E_{2}}\leq
c\left\Vert x\right\Vert _{E_{1}}\ \forall x\right\}.
\]
For example, the linear map in (\ref{3}) of $E$ onto $\ell^{2}$, as well as
its inverse, has norm $1.$
The linear space of all continuous linear maps from $E_{1}$
to $E_{2},$ $L(E_{1},E_{2}),$ is a Banach space when given this norm. One writes $L(E)$
as a shorthand for $L(E,E).$
Linear maps
are also called linear operators or just operators.
A \emph{bounded operator} on $E$ is a bounded linear map from $E$ into itself.
The \emph{dual} space $E'$ of $E,$ i.e. the space of all linear continuous functionals on $E,$
is given by $E'=L(E,\mathbb{R}).$ By the F. Riesz representation theorem,
\begin{equation} \label{riesz}
F \in E' \; \text{iff} \; \exists f \in E \; \text{such that} \; F(x)=(f,x) \; \forall x \in E.
\end{equation}
Also $\|F\|_{E'}=\|f\|_{E},$ so $E'$ and $E$ are isomorphic. In this paper we will
often use, in the context of Sobolev spaces, other representations of the dual $E'.$
By duality, every operator in $L(E_{1},E_{2})$ corresponds to an operator in
$L(E_{2}',E_{1}').$ Using the representation of the dual space given by (\ref{riesz}), the adjoint
operator $A^{*}$ of $A \in L(E_{1},E_{2})$ is defined by $A^{*}y=y^{*},$ where for $y \in E_{2}$
the element $y^{*} \in E_{1}$ is defined by
\begin{equation} \label{adj}
(y^{*},x)_{E_{1}} =(y,Ax)_{E_{2}}\; \forall x \in E_{1}.
\end{equation}
This defines an operator $A^{*} \in L(E_{2},E_{1})$. One easily checks that $(A^{*})^{*}=A$ and
$\left\Vert A^{*} \right\Vert=\left\Vert A \right\Vert.$
Let us consider a simple example, which will be relevant in the sequel
of this paper:
\begin{example}[Left-translation in $L^{2}$] \label{translation} \text{} \\ \normalfont
i) Let $E=L^{2}(\mathbb{R})$ and let $a$ be a given real number. Define the operator
$A$ on $E$ by $(Af)(x)=f(x+a).$ Then $\Vert A \Vert=1$ and $(A^{*}f)(x)=f(x-a).$
We note that $A$ has a bounded inverse $A^{-1}$ given by $(A^{-1}f)(x)=f(x-a),$
so $A A^{*}=A^{*} A =I,$ where $I$ is the identity operator. \\
ii) Let $E=L^{2}([0,\infty[)$ and let $a>0$ be a given real number. Define the operator
$A$ on $E$ by $(Af)(x)=f(x+a).$ Here we find that $\Vert A \Vert=1,$ that a.e. $(A^{*}f)(x)=0$ if $0\leq x < a$
and that $(A^{*}f)(x)=f(x-a)$ if $a \leq x.$ In this case $A^{*}$ is one-to-one and
$A A^{*}=I.$ But $ A^{*} A$ is the orthogonal projection on the (non-trivial) closed subspace of
$E$ of functions with support in $[a,\infty[ \, .$ So $A^{*}A \neq I.$
\end{example}
An operator $S \in L(E_{1},E_{2})$ is called unitary if $S S^{*}=S^{*} S =I.$
This is the case of $A$ in (i) of Example \ref{translation}.
An operator $S \in L(E_{1},E_{2})$ is called isometric if $S^{*} S =I.$
This is the case of $A^{*}$ in (ii) of Example \ref{translation}.
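The two cases of Example \ref{translation} can be caricatured in finite dimensions by a shift matrix; only the truncation of the grid is new, and it is flagged in the comments:

```python
import numpy as np

# Finite caricature of Example (ii): left translation (Af)(x) = f(x+a)
# on n grid samples becomes the shift matrix A with A[i, i+1] = 1
# (samples beyond the grid are dropped, mimicking functions on [0, inf)).
n = 6
A = np.diag(np.ones(n - 1), k=1)

AAs = A @ A.T     # = I except the last diagonal entry (a truncation
                  #   artifact; in L^2([0, inf)) one has A A* = I exactly)
AsA = A.T @ A     # projection killing the first coordinate, the finite
                  #   analogue of projecting onto support in [a, inf)
print(np.diag(AAs), np.diag(AsA))
```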
We will be interested in a particular class of bounded operators on $E.$
We begin with an easy result:
\begin{lemma}
Suppose $L \in L(E_{1},E_{2})$ and that we have:
\[
\sum_{n=0}^{\infty}\left\Vert Le_{n}\right\Vert ^{2}<\infty
\]
for an orthonormal basis $e_{n},n\in \mathbb{N}$ in $E_{1}.$ Let $f_{n},n\in \mathbb{N}$ be another
orthonormal basis. Then:%
\[
\sum_{n=0}^{\infty}\left\Vert Le_{n}\right\Vert ^{2}=\sum_{n=0}^{\infty
}\left\Vert Lf_{n}\right\Vert ^{2}%
\]
\end{lemma}
\begin{definition} \label{H-S}
An operator $L$ on $E_{1}$ into $E_{2}$ is Hilbert-Schmidt if $\ \sum_{n=0}^{\infty
}\left\Vert Le_{n}\right\Vert ^{2}<\infty$ for some orthonormal basis
$e_{n},n\in \mathbb{N}$, in $E_{1}.$ Its Hilbert-Schmidt norm is defined to be:
\[
\left\Vert L\right\Vert _{\mathcal{HS}}=\left( \sum_{n=0}^{\infty}\left\Vert
Le_{n}\right\Vert ^{2}\right) ^{1/2}.%
\]
It does not depend on the choice of the orthonormal basis $e_{n},n\in \mathbb{N}$, in
$E$. The linear space of Hilbert-Schmidt operators from $E_{1}$ into $E_{2}$ is denoted
$\mathcal{HS}(E_{1},E_{2}).$
\end{definition}
Hilbert-Schmidt operators are bounded (in fact, $\left\Vert L\right\Vert
\leq\left\Vert L\right\Vert _{\mathcal{HS}}$) and even compact:\ they map
bounded subsets of $E_{1}$ into relatively compact subsets of $E_{2}.$ In other words,
if $L$ is Hilbert-Schmidt and $\left( x_{n}\right) _{n\in \mathbb{N}}$ is a bounded
sequence, then one can extract from $\left( Lx_{n}\right) _{n\in \mathbb{N}}$ a
norm-convergent subsequence.
This property of a Hilbert-Schmidt operator $L$
follows from the fact that $L$ is the limit in the operator norm of a sequence of finite-rank operators.
The space $\mathcal{HS}(E_{1},E_{2})$, endowed with the Hilbert-Schmidt norm, is a
Hilbert space.
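The basis independence asserted in Definition \ref{H-S} can be checked numerically for matrices, for which the Hilbert-Schmidt norm is the Frobenius norm; the random matrix and basis below are arbitrary test data:

```python
import numpy as np

rng = np.random.default_rng(2)

n = 7
L = rng.normal(size=(n, n))                    # finite-rank stand-in for L
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # another orthonormal basis

hs_std = np.sum(np.linalg.norm(L, axis=0) ** 2)      # sum ||L e_k||^2
hs_rot = np.sum(np.linalg.norm(L @ Q, axis=0) ** 2)  # sum ||L q_k||^2
print(hs_std, hs_rot)    # equal: the HS norm is basis independent
```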
Some general references for this subsection are: \cite{Kato66}, \cite{Lax 02}, \cite{Rudin 1}, \cite{Rudin 2}.
\subsection{Linear semi-groups and unbounded operators.}
Let $L$ be a bounded linear operator on $E$. For every $t\in \mathbb{R}$, define:%
\[
\Phi\left( t\right) =e^{tL}=\sum_{n=0}^{\infty}\frac{1}{n!}t^{n}L^{n},
\]
which converges in the operator norm.
Then $\Phi\left( t\right) $ is a bounded linear operator for every $t$, and
we have the relation:%
\begin{equation}
\Phi\left( t+s\right) =\Phi\left( t\right) \Phi\left( s\right)
\; \forall s,t \in \mathbb{R} \; \; \text{and} \;\; \Phi( 0) =I,
\label{43}%
\end{equation}
where $I$ is the identity operator on $E,$
from which it follows that $\Phi\left( t\right) $ and $\Phi\left( s\right)
$ commute %
and that $\Phi\left( t\right) $
is invertible for every $t$. Relation (\ref{43}) states that the map
$t\mapsto \Phi\left( t\right) $ is a group homomorphism. Note that it is
continuous in the norm topology for operators:%
\begin{equation}
\left\Vert \Phi\left( t\right) -I\right\Vert \rightarrow0 \;\text{ when}\; t\rightarrow0.
\label{43'}
\end{equation}
The solution of the Cauchy problem:%
\begin{align}
\frac{dx(t)}{dt} & =Lx(t),\label{31}\\
x\left( 0\right) & =x_{0} \label{32}%
\end{align}
is given by $x\left( t\right) =\Phi\left( t\right) x\left( 0\right) $.
In other words, $\Phi\left( t\right) $ is the flow associated with the
ordinary differential equation (\ref{31}). We can recover $L$ from
$\Phi\left( t\right) $ by writing:%
\begin{equation}
Lx=\lim_{h\rightarrow0}\frac{1}{h}\left[ \Phi\left( h\right) x-x\right], \:\: x \in E.
\label{34}%
\end{equation}
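For a bounded generator, both the group property (\ref{43}) and the recovery formula (\ref{34}) can be verified numerically with the matrix exponential; the matrix, vector and step size below are arbitrary test data:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)

n = 5
L = rng.normal(size=(n, n))        # a bounded generator (a matrix)
x = rng.normal(size=n)
t, s, h = 0.7, 0.4, 1e-6

lhs = expm((t + s) * L)            # group property Phi(t+s) = Phi(t)Phi(s)
rhs = expm(t * L) @ expm(s * L)

Lx_approx = (expm(h * L) @ x - x) / h   # recovery of L, formula (34)
print(np.max(np.abs(Lx_approx - L @ x)))
```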
The norm continuity of the mapping $t \mapsto \Phi( t)$
is exceptional and has to be replaced by a more useful weaker property
(cf. Definition 1, Sect. 1, Chap. IX of \cite{Yosida}):
\begin{definition}
A family $\Phi\left( t\right) ,t\geq0$, of bounded operators on $E$ is
called a one parameter semi-group if $\Phi\left( 0\right) =I$, and for all $t\geq0$
and $s\geq0$ we have:%
\begin{equation}
\Phi\left( t+s\right) =\Phi\left( t\right) \Phi\left( s\right)
=\Phi\left( s\right) \Phi\left( t\right). \label{33}%
\end{equation}
It is said to be strongly continuous or to be of class $(C_{0})$ if, for every $x\in E$, we have:%
\begin{equation}
\lim_{t\rightarrow0}\Phi\left( t\right) x=x. \label{35}
\end{equation}
It is said to be a contraction semi-group if $\|\Phi(t) \| \leq 1$
for all $t \geq 0.$
\end{definition}
Note that, since equality (\ref{33}) is supposed to hold only for positive $s$ and $t$,
the operators $\Phi\left( t\right) $ are no longer necessarily invertible, as in the case
of a group. It can be proved easily that, if the semi-group $\Phi\left( t\right)
$ is strongly continuous, then $\lim_{s\rightarrow t}\Phi\left( s\right)
x=\Phi\left( t\right) x$ and there are constants $c$ and $C$ such that
$\left\Vert \Phi\left( t\right) \right\Vert \leq C\exp\left( ct\right) .$
We also note that if $[0, \infty [ \; \ni t \mapsto \Phi(t)$ is a
one parameter semi-group, so is the family of adjoint operators $[0, \infty [ \; \ni t \mapsto \Phi^{*}(t),$
where we define $\Phi^{*}(t)=(\Phi(t))^{*}.$
\begin{example} \label{translation 1} \text{} \\ \normalfont
In the situation of (i) (resp. of (ii)) of Example \ref{translation}, for given $a,$
let $\Phi_{1} (a)=A$ (resp. $\Phi_{2} (a)=A$).
Then $\mathbb{R} \ni t \mapsto \Phi_{1} (t)$ is a strongly continuous contraction group.
However $[0, \infty [ \; \ni t \mapsto \Phi_{2}(t)$ is only a strongly continuous contraction
semi-group, which cannot be extended to a group. In fact, $\Phi_{2}(t)$ is not
invertible for $t>0.$
\end{example}
We now try to extend formula (\ref{34}). It turns out that when $\Phi$ is no
longer norm-continuous, but only strongly continuous, the
right-hand side need not converge for every $x$, and where the limit exists, it
need not depend continuously on $x.$
The set of $x$ for which the limit exists
is obviously a linear subspace of $E$, and on this subspace the limit is a linear
function of $x,$ say $Gx.$ More formally, let $\mathcal{D}(G)$ be the subset of $E$
\textit{of all elements} $x \in E$ for which the strong limit
\begin{equation}
Gx=\lim_{h\rightarrow0}\frac{1}{h}\left[ \Phi\left( h\right) x-x\right]
\label{40}%
\end{equation}
exists.
\begin{theorem} \label{sem-group gen}
Assume
$\Phi $ is a strongly continuous semi-group. The set $\mathcal{D}(G)$
is then a dense linear subspace of $E$ and $G$ given by (\ref{40}) defines a linear map
$G: \mathcal{D}(G) \rightarrow E.$ This map is closed, i.e. if
$x_{n}$ is a sequence in $\mathcal{D}(G)$ such that $x_{n} \rightarrow \bar{x} \in E$
and $Gx_{n}\rightarrow\bar{y} \in E$ then $\bar{x} \in \mathcal{D}(G)$ and $\bar{y}=G\bar{x}.$
For every $x\in \mathcal{D}(G)$ and $t\geq 0$ we have
$\Phi\left( t\right) x \in \mathcal{D}(G),$
\begin{equation}
G\Phi\left( t\right) x =\Phi\left( t\right) Gx\label{41}
\end{equation}
and
\begin{equation}
\frac{d}{dt}\Phi\left( t\right) x =G\Phi\left( t\right) x. \label{42}%
\end{equation}
\end{theorem}
\begin{proof}
By definition $\mathcal{D}(G) $ is the set of $x$ where the limit in
formula (\ref{40}) exists (note that this is a strong limit, meaning that we
should have norm-convergence), and $Gx$ then is the value of that limit.
Clearly $G:\mathcal{D}(G) \rightarrow E$ is a linear map.
Given any $x\in E$ and $t>0$, consider the integral:%
\[
X\left( t\right) =\int_{0}^{t}\Phi\left( s\right) xds.
\]
It is well-defined since the integrand is a continuous function from $\left[
0,t\right] $ into $E$. Using the semi-group property, we have:%
\begin{align*}
\frac{1}{h}\left[ \Phi\left( h\right) X\left( t\right) -X\left(
t\right) \right] & =\frac{1}{h}\left[ \Phi\left( h\right) \int_{0}%
^{t}\Phi\left( s\right) xds-\int_{0}^{t}\Phi\left( s\right) xds\right] \\
& =\frac{1}{h}\left[ \int_{0}^{t}\Phi\left( s+h\right) xds-\int_{0}%
^{t}\Phi\left( s\right) xds\right] \\
& =\frac{1}{h}\int_{t}^{t+h}\Phi\left( s\right) xds-\frac{1}{h}\int
_{0}^{h}\Phi\left( s\right) xds\\
& \rightarrow\Phi\left( t\right) x-x.
\end{align*}
This proves that $X\left( t\right) $ belongs to $\mathcal{D}(G) $. Then
so does $\frac{1}{t}X\left( t\right) $, and when $t\rightarrow0$, we have
$\frac{1}{t}X\left( t\right) \rightarrow x$, so $\mathcal{D}(G) $ is dense in $E$,
as announced.
Now write:%
\[
\frac{1}{h}\left[ \Phi\left( t+h\right) -\Phi\left( t\right) \right]
x=\Phi\left( t\right) \frac{\Phi\left( h\right) -I}{h}x=\frac{\Phi\left(
h\right) -I}{h}\Phi\left( t\right) x.
\]
If $x\in \mathcal{D}(G) $, the second term converges to $\Phi\left(
t\right) Gx$ and the third one to $G\Phi\left( t\right) x$.
Formulas (\ref{41}) and (\ref{42}) now follow, since these two
terms must be equal.
To prove that $G$ is closed, note that:%
\begin{equation}
\forall x\in \mathcal{D}(G) ,\ \ \ \Phi\left( t\right) x-x=\int_{0}%
^{t}\Phi\left( s\right) Gxds. \label{44}%
\end{equation}
Indeed, we have two functions of $t$, with values in $E\,$, which are zero for
$t=0$ and which have the same derivative, namely $\Phi\left( t\right) Gx$,
for every $t>0$. So they must be equal. Now take a sequence $x_{n}%
\rightarrow\bar{x}$, and assume that $Gx_{n}=y_{n}\rightarrow\bar{y}$ in $E$.
Writing $x=x_{n}$ in formula (\ref{44}), we get:%
\[
\Phi\left( t\right) \bar{x}-\bar{x}=\int_{0}^{t}\Phi\left( s\right)
\bar{y}ds.
\]
Dividing by $t$ and letting $t\rightarrow0$, we find that $\bar{x}\in \mathcal{D}(G)$
and that $\bar{y}=G\bar{x}$.
\end{proof}
\begin{definition}
In the situation of Theorem \ref{sem-group gen},
$G$ is called the \emph{infinitesimal generator} of the semi-group $\Phi.$
\end{definition}
A linear map $L:\mathcal{D}( L) \rightarrow E_{2}$, where $\mathcal{D}\left( L\right) $
is a subspace of $E_{1},$ is called an operator from $E_{1}$
to $E_{2}$ with domain $\mathcal{D}\left( L\right).$ That two operators are equal, $L_{1}=L_{2},$
means that they have the same domain $\mathcal{D}( L_{1}) =\mathcal{D}( L_{2})$
and that $L_{1}x=L_{2}x$ for all $x$ in the domain.
The operator $L$ is densely defined
if $\mathcal{D}( L)$ is dense in $E_{1}.$
It is called a bounded operator if there exists a finite constant
$C \geq 0$ such that for all $x \in \mathcal{D}( L)$ one has
$ \|Lx\| \leq C \|x\| $ and it is called an \emph{unbounded operator} if
such $C$ does not exist. It
is \emph{closed} if its graph $\{(x,Lx) \, | \, x \in \mathcal{D}\left( L\right) \}$
is a closed subset of $E_{1} \times E_{2},$ which extends the definition in the preceding theorem.
With these definitions, we can
rephrase part of the preceding theorem by saying that every strongly
continuous semi-group in $E$ has a unique infinitesimal generator, which is a densely defined closed operator in $E.$
The problem to determine if a given densely defined closed operator $L$ in $E$ is the
infinitesimal generator of a strongly continuous semi-group is more difficult and
we refer the interested reader to the references mentioned at the end of this subsection.
The definition of the \emph{adjoint} of an operator can be extended to unbounded operators.
Let $L$ be a densely defined operator from $E_{1}$ to $E_{2}.$ We introduce the adjoint operator
$L^{*}$ of $L.$ The domain $\mathcal{D}(L^{*})$ consists of all $y \in E_{2}$ for which the linear
functional
\begin{equation} \label{adj gen 1}
x \mapsto (y,Lx)
\end{equation}
is continuous on $\mathcal{D}(L),$ endowed with the strong topology of $E_{1}.$
For $y \in \mathcal{D}(L^{*})$ we define
$L^{*}y$ by
\begin{equation} \label{adj gen 2}
(L^{*}y,x)=(y,Lx) \; \forall x \in \mathcal{D}( L).
\end{equation}
This defines $L^{*}y$ uniquely, since $\mathcal{D}( L)$ is dense in $E_{1}.$ One proves
that $\mathcal{D}(L^{*})$ is dense in $E_{2}$ if $L$ is also closed.
An operator $L$ in $E$ is called \emph{self-adjoint} if $L^{*}=L$ and \emph{skew-adjoint} if $L^{*}=-L.$
We have the following clear-cut result (Stone's theorem): $L$ is the infinitesimal
generator of a group of unitary operators iff $L$ is skew-adjoint.
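A finite-dimensional shadow of Stone's theorem: a skew-symmetric matrix ($K^{T}=-K$, the real analogue of skew-adjoint) generates a group of orthogonal, i.e. real unitary, matrices. The matrix below is random test data:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)

n = 5
B = rng.normal(size=(n, n))
K = B - B.T                    # skew-adjoint: K^T = -K
U = expm(K)                    # one member of the generated group

print(np.allclose(U @ U.T, np.eye(n)))   # True: U is orthogonal
```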
\begin{example} \label{translation 2} \text{} \\ \normalfont
In the situation of Example \ref{translation 1}, let $L_{1}$ and $L_{2}$ be the infinitesimal
generators of $\Phi_{1}$ and $\Phi_{2}$ respectively. $L_{1}$ is given by
\[
\mathcal{D}( L_{1})=\{f \in L^{2}(\mathbb{R}) \; | \; f' \in L^{2}(\mathbb{R}) \},
\]
and $ (L_{1}f)(x)=f'(x),$ where $f'$ is the derivative of $f.$
$L_{2}$ is given by
\[
\mathcal{D}( L_{2})=\{f \in L^{2}([0,\infty[ \,) \; | \; f' \in L^{2}([0,\infty[ \,) \},
\]
and $ (L_{2}f)(x)=f'(x).$
Since $\Phi_{1}$ is a group of unitary operators, we have that $L_{1}^{*}=-L_{1}.$
$\Phi_{2}$ is not a semi-group of unitary operators, so $L_{2}^{*} \neq -L_{2}.$
A simple calculation shows that
\[
\mathcal{D}( L_{2}^{*})=\{f \in L^{2}([0,\infty[ \,) \; | \; f(0)=0 \; \text{and} \; f' \in L^{2}([0,\infty[ \,) \}
\]
and $(L_{2}^{*}f)(x)=-f'(x).$
So here $\mathcal{D}( L_{2}^{*}) \subset \mathcal{D}( L_{2}),$ with strict inclusion.
One checks that $ \Phi_{2}^{*}$ is a strongly continuous semi-group in $L^{2}([0,\infty[ \,).$
It represents right translations of functions. Its infinitesimal generator is $L_{2}^{*}.$
\end{example}
Some general references for this subsection are:
\cite{Kato66}, \cite{Lax 02}, \cite{Rudin 2}, \cite{Yosida}.
\subsection{Sobolev spaces}
For any integer $n\geq0$, the Sobolev space $H^{n}( \mathbb{R}) $ is
defined to be the set of functions $f$ which are square-integrable together
with all their derivatives of order up to $n$:
\[
f\in H^{n}(\mathbb{R}) \Longleftrightarrow\int_{-\infty}^{\infty}\left[
f^{2}+\sum_{k=1}^{n}\left( \frac{d^{k}f}{dx^{k}}\right)^{2}\right]
dx<\infty.
\]
This is a linear space, and in fact a Hilbert space with norm given by:
\[
\|f\|_{H^{n}}=\left(\int_{-\infty}^{\infty}\left[ f^{2}+\sum_{k=1}^{n} (
\frac{d^{k}f}{dx^{k}}) ^{2}\right] dx \right)^{1/2}.
\]
It is a standard fact that this norm of $f$ can be expressed in terms of
the Fourier transform $\hat{f}$ (appropriately normalized) of $f$ by:
\[
\left\Vert f\right\Vert _{H^{n}}^{2}=\int_{-\infty}^{\infty}\left(
1+y^{2}\right) ^{n}\left\vert \hat{f}\left( y\right) \right\vert ^{2}dy.
\]
The advantage of this new
definition is that it extends to non-integral and non-positive values of the exponent.
For any real number $s$, not necessarily an integer nor positive, we define
the Sobolev space $H^{s}(\mathbb{R}) $ to be the Hilbert space of
functions associated with the following norm:%
\begin{equation} \label{Hn}
\left\Vert f\right\Vert _{H^{s}}^{2}=\int_{-\infty}^{\infty}\left(
1+y^{2}\right) ^{s}\left\vert \hat{f}\left( y\right) \right\vert ^{2}dy.
\end{equation}
Clearly, $H^{0}(\mathbb{R})=L^{2}(\mathbb{R})$ and
$H^{s}(\mathbb{R})\subset H^{s^{\prime }}(\mathbb{R})$ for $s\geq
s^{\prime }$ and in particular
$H^{s}(\mathbb{R})\subset L^{2}(\mathbb{R})\subset H^{-s}(\mathbb{R}),$ for $s \geq 0.$
$H^{s}(\mathbb{R})$ is, for general $s \in \mathbb{R},$ a space of (tempered) distributions.
For example $\delta^{(k)},$ the $k$-th derivative of a delta Dirac distribution, is
in $H^{ -k-1/2-\epsilon}(\mathbb{R})$ for $\epsilon >0.$
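As a numerical illustration (a sketch, not part of the text), the Fourier-side formula for the $H^{s}$ norm is easy to evaluate with the FFT; the grid parameters and the Gaussian test function below are arbitrary choices. For $f(x)=e^{-x^{2}/2}$ one has $\|f\|_{H^{0}}^{2}=\sqrt{\pi}$ and $\|f\|_{H^{1}}^{2}=\tfrac{3}{2}\sqrt{\pi}$, which the sketch reproduces.

```python
import numpy as np

def hs_norm_sq(f_vals, dx, s):
    """Approximate ||f||_{H^s}^2 = int (1+y^2)^s |fhat(y)|^2 dy via the DFT.

    f_vals are samples of f on a uniform grid with spacing dx; the unitary
    continuous Fourier transform is approximated by the FFT (the phase
    factor coming from the grid offset drops out in |fhat|^2).
    """
    n = len(f_vals)
    y = 2 * np.pi * np.fft.fftfreq(n, d=dx)            # angular frequency grid
    fhat_abs2 = (dx / np.sqrt(2 * np.pi))**2 * np.abs(np.fft.fft(f_vals))**2
    dy = 2 * np.pi / (n * dx)
    return np.sum((1 + y**2)**s * fhat_abs2) * dy

x = np.arange(-40.0, 40.0, 80.0 / 2**14)               # 2^14 grid points
dx = x[1] - x[0]
f = np.exp(-x**2 / 2)

h0 = hs_norm_sq(f, dx, 0)   # should match Plancherel: the squared L^2 norm, sqrt(pi)
h1 = hs_norm_sq(f, dx, 1)   # should match int f^2 + (f')^2 dx = 1.5 * sqrt(pi)
```

For a rapidly decaying function such as the Gaussian, the DFT approximation of the continuous Fourier transform is accurate to near machine precision, so both values agree with the closed forms to many digits.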
In the case when $s>1/2$, there are two classical results.
\begin{theorem}
[Continuity of multiplication] If $s>1/2$, if $f$ and $g$ belong to
$H^{s}( \mathbb{R}) $, then $fg$ belongs to $H^{s}( \mathbb{R}) $, and
the map $( f,g) \rightarrow fg$ from $H^{s}\times H^{s}$ to
$H^{s}$ is continuous.
\end{theorem}
Denote by $C_{b}^{n}(\mathbb{R})$ the space of $n$ times continuously differentiable
real-valued functions which are bounded together with all their $n$ first derivatives.
Let $C_{b0}^{n}(\mathbb{R})$ be the closed subspace of $C_{b}^{n}(\mathbb{R})$
consisting of functions which converge to $0$ at $\pm \infty$ together with
all their $n$ first derivatives. These are Banach spaces for the norm:
\[
\left\Vert f\right\Vert _{C_{b}^{n}}=\max_{0\leq k\leq n}\ \sup_{x}%
\ \left\vert f^{\left( k\right) }\left( x\right) \right\vert =\max_{0\leq
k\leq n}\left\Vert f^{\left( k\right) }\right\Vert _{C_{b}^{0}}.
\]
\begin{theorem} [Sobolev embedding]
If $s>n+1/2$ and if $f \in H^{s}(\mathbb{R}),$
then \ there is a function $g\ $in $C_{b0}^{n}( \mathbb{R}) $ which is
equal to $f$ almost everywhere. In addition, there is a constant $c_{s}$,
depending only on $s$, such that:
\[
\Vert g\Vert _{C_{b}^{n}}\leq c_{s} \Vert f\Vert _{H^{s}}.
\]
\end{theorem}
From now on we shall no longer distinguish between $f$ and $g$, that is, we
shall always take the continuous representative of any function in
$H^{s}( \mathbb{R}) $. As a consequence of the Sobolev embedding theorem,
if $s>1/2$, then any function $f$ in $H^{s}( \mathbb{R}) $ is continuous
and bounded on the real line
and converges to zero at $\pm \infty,$ so that its value is defined everywhere.
We define, for $s\in \mathbb{R},$ a continuous bilinear form on $H^{-s}(\mathbb{R}) \times H^{s}(\mathbb{R})$ by:
\begin{equation}
\sesq{f}{g}=\int_{-\infty}^{\infty} \overline{\left( \hat{f}(y)\right) }\text{ } \hat{g}(y)dy,
\label{sesqprod}
\end{equation}
where $ \overline{z}$ is the complex conjugate of $z.$ Schwarz inequality and (\ref{Hn})
give that
\begin{equation}
|\sesq{f}{g}| \leq \Vert f\Vert _{H^{-s}} \Vert g\Vert _{H^{s}},
\label{sesqprod 1}
\end{equation}
which indeed shows that the bilinear form in (\ref{sesqprod}) is continuous.
We note that formally the bilinear form (\ref{sesqprod}) can be written
\[
\sesq{f}{g}=\int_{-\infty}^{\infty} f(x) g(x)dx,
\]
where, if $s \geq 0,$ $f$ is in a space of distributions $H^{-s}(\mathbb{R})$ and
$g$ is in a space of ``test functions'' $H^{s}(\mathbb{R}).$
Any continuous linear form $g\rightarrow u\left( g\right) $ on $H^{s}(\mathbb{R})$
is, due to (\ref{Hn}), of
the form $u(g) =\sesq{f}{g}$ for some $f\in H^{-s}(\mathbb{R}),$ with
$\Vert f\Vert _{H^{-s}}=\Vert u\Vert _{(H^{s})^{'}}$, so
that henceforth we can identify the dual $( H^{s}(\mathbb{R}))^{'}$
of $H^{s}(\mathbb{R})$ with $H^{-s}(\mathbb{R}).$
In particular, if $s>1/2$ then $H^{s}(\mathbb{R}) \subset C_{b0}^{0}( \mathbb{R}),$
so $H^{-s}(\mathbb{R})$ contains all bounded Radon measures.
In the sequel, we will also be interested in functions defined only on the
half-line $[0,\infty[ \,.$ Let $s \geq 0.$ We define the space $H^{s}([0,\infty [ \,)$
to be the set of restrictions to $[0,\infty [ \,$ of functions in $H^{s}(\mathbb{R}).$
This is clearly a linear space. To turn it into a Hilbert space,
we have to use the following norm:
\begin{equation} \label{H norm}
\Vert f\Vert _{H^{s}( [0,\infty [ \,) }=\inf \left \{
\Vert g\Vert _{H^{s}( \mathbb{R}) }\ |\ g( x)
=f( x) \ \text{a.e. on }[0,\infty [ \, \right\}.
\end{equation}
This is a Hilbert space norm on $H^{s}([0,\infty [ \,) ,$ which is the
natural restriction of the norm on $H^{s}( \mathbb{R}) $. For instance, if
$f$ is a function in $H^{s}(\mathbb{R}) $ such that $f\left(
x\right) =0$ for $x\leq0$, then its restriction $f_{0}$ to $[0,\infty [ \,$
belongs to $H^{s}([0,\infty [ \,) $, and we have:%
\[
\left\Vert f_{0}\right\Vert _{H^{s}([0,\infty [ \,) }=\left\Vert
f\right\Vert _{H^{s}( \mathbb{R}) }.
\]
If $s=n$ is an integer, the norm on $H^{s}([0,\infty [ \,)$ turns
out to be equivalent to the following one:%
\[
|||f|||_{H^{s}}^{2}=\int_{0}^{\infty}\left[ f^{2}+\sum_{k=1}^{n}\left(
\frac{d^{k}f}{dx^{k}}\right) ^{2}\right] dx.
\]
To establish properties of translations in $H^{s}([0,\infty [ \,),$ we need
to know if there is a continuous linear embedding of $H^{s}([0,\infty [ \,)$ into
$H^{s}(\mathbb{R}),$ i.e. to know if the restriction operator has a continuous
right-inverse. Fortunately, as we are in a Hilbert space setting,
this problem is easy to solve.
Let $s \geq 0$ and let $H_{-}^{s}$ be the subset
of functions in $H^{s}(\mathbb{R})$ with support in $]-\infty,0 ]$,
so that $f\in H_{-}^{s}$ \ if and only if $f \in H^{s}(\mathbb{R})$ and $f( x) =0$
for all $x >0.$ $H_{-}^{s}$ is a closed subspace of $H^{s}(\mathbb{R}).$ Two functions
$f_{1}, f_{2} \in H^{s}(\mathbb{R})$ have the same restriction to $[0,\infty [ \,$
iff $f_{1}- f_{2} \in H_{-}^{s}.$ This means exactly that $H^{s}([0,\infty [ \,)$
is a quotient space:
$H^{s}([0,\infty [ \,)=H^{s}(\mathbb{R})/H_{-}^{s}.$
Introducing the notation $\oplus$ for the Hilbert space direct sum, we have the
following result, whose proof we omit since it is straightforward:
\begin{proposition} \label{prop Hs decomp}
For $s \geq 0$ we have: \\
i) $H^{s}(\mathbb{R})=H^{s}([0,\infty [ \,) \oplus H_{-}^{s}.$ \\
ii) Let $M$ be the orthogonal complement of $H_{-}^{s}$ in $H^{s}(\mathbb{R})$ w.r.t.
the scalar product in $H^{s}(\mathbb{R}),$ let $\kappa$ be the canonical projection
of $H^{s}(\mathbb{R})$ on $H^{s}([0,\infty [ \,)$ and let $\iota$ be the canonical
bijection of $H^{s}([0,\infty [ \,)$ onto $M.$ Then $\kappa$ is continuous, $\iota$
is a Hilbert space isomorphism, $\kappa \iota $ is the identity map on
$H^{s}([0,\infty [ \,)$ and $\iota \kappa $ is the orthogonal projection map in $H^{s}(\mathbb{R})$ on
$M.$
\end{proposition}
We note that $\iota$ is a continuous operator extending functions on $[0,\infty [ $
to functions on $\mathbb{R}$ and that
$\left\Vert f\right\Vert _{H^{s}([0,\infty [ \,)}
=\left\Vert \iota f \right\Vert _{H^{s}(\mathbb{R})}.$
The dual space of $H^{s}([0,\infty [ \,)$ can easily be characterized in terms
of distributions. For $s \geq 0,$ $H^{s}([0,\infty [ \,)=H^{s}(\mathbb{R})/H_{-}^{s},$
so
\begin{equation}
(H^{s}([0,\infty [ \,))'=\left\{ f\in H^{-s}(\mathbb{R}) \, | \, \sesq{f}{g}=0 \;\;\forall \,
g \in H_{-}^{s}\right\}.
\end{equation}
For $s \geq 0,$ we define $H^{-s}([0,\infty [ \,)$ to be the closed subspace
of all distributions in $H^{-s}(\mathbb{R})$ with support in $[0,\infty [ \, .$
It then follows that $(H^{s}([0,\infty [ \,))'$ can be identified with $H^{-s}([0,\infty [ \,).$
Since $(H^{s}([0,\infty [ \,))''=H^{s}([0,\infty [ \,),$ we obtain
\begin{equation} \label{H'}
(H^{s}([0,\infty [ \,))'=H^{-s}([0,\infty [ \,), \quad s \in \mathbb{R}.
\end{equation}
For any $s \in \mathbb{R},$ the constant function taking the value $1$ is not in
$H^{s}([0,\infty [ \,).$
Moreover, if $s > 1/2,$ then every function in $H^{s}([0,\infty [ \,)$ converges to zero at $\infty.$
For this reason, we will need a larger class of distributions containing the constant
functions. Let $s \in \mathbb{R}$ and let $f$ be a distribution with support
in $[0,\infty [$ such that it admits the decomposition $f=g+a,$ where
$g \in H^{s}([0,\infty [ \,)$ and $a \in \mathbb{R}.$ This decomposition of $f$
is then unique and the set of all such distributions is naturally given the Hilbert space
structure $H^{s}([0,\infty [ \,) \oplus \mathbb{R}.$ The norm of $f=g+a$ is then
given by
\[
\Vert f\Vert^{2}=\Vert g\Vert _{H^{s}([0,\infty [ \,)}^{2}+a^{2}.
\]
This unique decomposition property leads us to the following
\begin{definition} \label{Es}
For $s \in \mathbb{R},$ set $E^{s}( [0,\infty [ \,) =H^{s}([0,\infty [ \,) \oplus
\mathbb{R}$ with the corresponding Hilbert space norm.
If $f\in E^{s}( [0,\infty [ \,)$ and if
$g\in H^{s}( [0,\infty [ \,)$ and $a\in \mathbb{R}$ are related by the unique decomposition
$f =g+a,$ then the norm of $f$ is given by
\[
\left\Vert f\right\Vert _{E^{s}}^{2}=\left\Vert g\right\Vert _{H^{s}}^{2}+a^{2}.
\]
\end{definition}
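To make the decomposition $f=g+a$ concrete, here is a small numerical sketch (an illustration, not taken from the text), using $s=0,$ where the half-line norm is just the $L^{2}$ norm: for $g(x)=e^{-x}$ and $a=1$ one gets $\left\Vert f\right\Vert _{E^{0}}^{2}=\left\Vert g\right\Vert _{L^{2}([0,\infty [ \,)}^{2}+a^{2}=\tfrac12+1.$

```python
import numpy as np

# f = g + a with g in H^0([0,inf)) = L^2([0,inf)) and a in R (the case s = 0)
a = 1.0
g = lambda x: np.exp(-x)
f = lambda x: g(x) + a

x = np.linspace(0.0, 60.0, 600001)
dx = x[1] - x[0]

a_rec = f(x[-1])            # the constant part is recovered as the value at "infinity"
g_vals = f(x) - a_rec       # the H^0 part

g2 = g_vals**2
g_norm_sq = np.sum((g2[:-1] + g2[1:]) / 2) * dx    # trapezoid rule for int_0^inf g^2 = 1/2
e0_norm_sq = g_norm_sq + a_rec**2                  # ||f||_{E^0}^2 = ||g||_{L^2}^2 + a^2
```

The decomposition is unique precisely because the constant can be read off from the behaviour at infinity, while the $H^{s}$ part must vanish there.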
The dual $(E^{s}( [0,\infty [ \,))'$ of $E^{s}( [0,\infty [ \,)$ is identified
with $(H^{s}( [0,\infty [ \,))' \oplus \mathbb{R} \approx E^{-s}( [0,\infty [ \,)$
by extending the bi-linear form, defined in (\ref{sesqprod}), to
$E^{-s}( [0,\infty [ \,) \times E^{s}( [0,\infty [ \,):$
\begin{equation}
\sesq{F}{G}=ab + \sesq{f}{g},
\label{ext-sesqprod}
\end{equation}
where $F=a+f \in E^{-s}( [0,\infty [ \,),$ $G=b+g \in E^{s}( [0,\infty [ \,),$
$a,b \in \mathbb{R},$ $f \in H^{-s}( [0,\infty [ \,)$ and $g \in H^{s}( [0,\infty [ \,).$
For all the Sobolev spaces $H^{s}$ we have introduced, and also for the spaces $E^{s},$
there are two natural realizations of the dual space. Let us consider only the case
of $E^{s}( [0,\infty [ \,),$ the other being similar. One possibility,
the canonical one, is to identify $(E^{s}( [0,\infty [ \,))'$ with
$E^{s}( [0,\infty [ \,)$ by the scalar product in $E^{s}( [0,\infty [ \,).$
This gives the Riesz representation in (\ref{riesz}).
Another possibility is, as we have seen, to identify $(E^{s}( [0,\infty [ \,))'$
with $E^{-s}( [0,\infty [ \,),$ by the bi-linear form defined in (\ref{ext-sesqprod}).
There is a linear continuous map
$\sequiv : E^{s}( [0,\infty [ \,) \rightarrow (E^{s}( [0,\infty [ \,))'$
with continuous inverse, relating the two realizations. It is defined by:
\begin{equation}
(f,g)_{E^{s}( [0,\infty [ \,)}=\sesq{\sequiv f}{g}, \; \forall f,g \in E^{s}( [0,\infty [ \,).
\label{S equiv}
\end{equation}
Now, different realizations of the dual space lead to different realizations of adjoint
operators. Let $A$ be a closed and densely defined operator from a Hilbert space
$H$ to $E^{s}( [0,\infty [ \,).$
We have already defined in (\ref{adj gen 1}) its adjoint operator $A^{*}$ from $E^{s}( [0,\infty [ \,)$
to $H$ w.r.t. the duality defined by the scalar product. Let the dual $H'$ of $H$
be realized by $H_{1}$ and the continuous bi-linear form
$\sesq{\;}{\;}_{1}: H_{1} \times H \rightarrow \mathbb{R}.$
The adjoint $A',$ w.r.t. the duality realized by $\sesq{\;}{\;}_{1}$ and $\sesq{\;}{\;}$
is the operator from $E^{-s}( [0,\infty [ \,)$ to $H_{1}$ defined as follows:
the domain $\mathcal{D}(A^{'})$ consists of all $y \in E^{-s}( [0,\infty [ \,)$
for which the linear functional
\begin{equation} \label{adj gen 3}
x \mapsto \ \sesq{y}{Ax}
\end{equation}
is continuous on $\mathcal{D}(A).$ For $y \in \mathcal{D}(A^{'})$ we define
$A^{'}y$ by
\begin{equation} \label{adj gen 4}
\sesq{A^{'}y}{x}_{1}=\sesq{y}{Ax} \; \forall x \in \mathcal{D}(A).
\end{equation}
This defines $A^{'}y$ uniquely, since $\mathcal{D}(A)$ is dense in $H.$
We now study translation semi-groups in the different spaces we have introduced.
It follows directly from the definition (\ref{Hn}) of the norm in $H^{s}(\mathbb{R})$
and by dominated convergence that left translations define a strongly
continuous group of unitary operators $\tilde{\mathcal{L}}$ in $H^{s}(\mathbb{R})$
for $s \in \mathbb{R}$
(similarly as to the case of $\Phi_{1}$ in Example \ref{translation 1}):
\begin{equation} \label{lefttr 0}
(\tilde{\mathcal{L}}_{t}f)(x)=f(x+t), \; \forall f \in H^{s}(\mathbb{R}) \; \text{and} \; t \in \mathbb{R}.
\end{equation}
Since, for $s \geq 0,$ the closed subspace $H_{-}^{s}$ of $H^{s}(\mathbb{R})$
is invariant under the semi-group $\tilde{\mathcal{L}}_{t},$ $t \geq 0,$ it defines
a semi-group $\ltrans{}$ in $H^{s}([0,\infty [ \,).$ Defining $\ltrans{}$ also on
constants $a \in \mathbb{R}$ by $\ltrans{t}a=a$ we extend the semi-group $\ltrans{}$
to $E^{s}([0,\infty [ \,),$ $s \geq 0:$
\begin{equation}
( \ltrans{t}f) (x) =f(x+t), \; \forall f \in E^{s}([0,\infty [ \,) \; \text{and} \; t \geq 0.
\label{lefttr}
\end{equation}
\begin{proposition} \label{prop L}
If $s \geq 0,$ then $\ltrans{}$ is a strongly continuous contraction semi-group
on $E^{s}([0,\infty [ \,).$ Its
infinitesimal generator, denoted $\partial,$ has domain
$\mathcal{D}(\partial)=$ $E^{s+1}([0,\infty [ \,).$
If $f\in E^{s+1}([0,\infty [ \,)$ then $\partial f =f',$
where $f'$ is the derivative of $f.$
\end{proposition}
\begin{proof}
We first observe that, in the canonical decomposition
$E^{s}( [0,\infty [ \,) =H^{s}([0,\infty [ \,) \oplus \mathbb{R}$ of Definition \ref{Es},
$\ltrans{t}$ leaves the subspace $H^{s}([0,\infty [ \,)$ invariant
and acts trivially on $\mathbb{R}.$ It is therefore sufficient
to prove the statement with $E^{s}([0,\infty [ \,)$ replaced by $H^{s}([0,\infty [ \,).$
We use the notations of Proposition \ref{prop Hs decomp} and let $P=\iota \kappa$
be the orthogonal projection on $M.$
Since $\tilde{\mathcal{L}}_{t}H_{-}^{s} \subset H_{-}^{s},$
for $t \geq 0,$ it follows that $P \tilde{\mathcal{L}}_{t}(I-P)= 0.$ The group
composition law $\tilde{\mathcal{L}}_{t}\tilde{\mathcal{L}}_{u}=\tilde{\mathcal{L}}_{t+u},$
then gives for $t,u \geq 0:$
\[
(P\tilde{\mathcal{L}}_{t}P)(P\tilde{\mathcal{L}}_{u}P)=P\tilde{\mathcal{L}}_{t+u}P.
\]
So, $[0,\infty [ \, \ni t \mapsto P\tilde{\mathcal{L}}_{t}P$ is a semi-group of bounded
operators on $M.$ It is a strongly continuous contraction semi-group since this is the
case for $\tilde{\mathcal{L}}_{}$ and $\|P\|=1.$
We have that $\ltrans{t}=\kappa \tilde{\mathcal{L}}_{t} \iota,$ for $t \geq 0.$
Using that $\ltrans{t}=\kappa P \tilde{\mathcal{L}}_{t} P \iota$ it easily follows
from the semi-group property of $ P \tilde{\mathcal{L}}_{t} P$ that $\ltrans{}$
is a semi-group on $H^{s}([0,\infty [ \,).$ It is a strongly continuous contraction semi-group,
since this is the case for $P\tilde{\mathcal{L}}_{}P$ and since $\|\kappa\|=\|\iota\|=1.$
Let
$\partial$ be the infinitesimal generator of $\ltrans{}.$ By the definition of $\ltrans{}$
it follows that
$\mathcal{D}(\partial)=\{f \in H^{s}([0,\infty [ \,) \; | \; f' \in H^{s}([0,\infty [ \,)\}$
and $\partial f=f'$ for $f \in \mathcal{D}(\partial).$ But
$H^{s+1}([0,\infty [ \,)=\{f \in H^{s}([0,\infty [ \,) \; | \; f' \in H^{s}([0,\infty [ \,)\},$
which proves the proposition.
\end{proof}
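A quick numerical check of the proposition (an illustration with an arbitrary smooth test curve, not part of the proof): for $f$ in the domain of $\partial,$ the difference quotients $(\ltrans{t}f-f)/t$ converge to $f'$ as $t\downarrow 0.$

```python
import numpy as np

f  = lambda x: np.exp(-x) * np.sin(x)                # smooth and decaying: in H^s for every s
fp = lambda x: np.exp(-x) * (np.cos(x) - np.sin(x))  # its derivative

x = np.linspace(0.0, 10.0, 2001)
errors = []
for t in (1e-2, 1e-3, 1e-4):
    diff_quot = (f(x + t) - f(x)) / t                # ((L_t f - f)/t)(x) = (f(x+t) - f(x))/t
    errors.append(np.max(np.abs(diff_quot - fp(x))))
# errors shrink roughly linearly in t, consistent with the generator acting as d/dx
```

The sup-norm error of the difference quotient is of order $t\sup|f''|/2,$ so halving $t$ roughly halves the error; in the $H^{s}$ norm the same first-order convergence holds for $f$ in the domain $E^{s+1}([0,\infty [ \,).$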
\begin{example} \label{translation 3} \text{} \\ \normalfont
Let $\ltrans{t}' : E^{-s}([0,\infty [ \,) \rightarrow E^{-s}([0,\infty [ \,)$ be the adjoint
of $\ltrans{t},$ $t \geq 0$ in Proposition \ref{prop L},
w.r.t. duality defined by the bilinear form $\sesq{\;}{\;}.$
$\ltrans{}'$ is then a semi-group of right-translations on the space of distributions
$E^{-s}([0,\infty [ \,).$ Loosely speaking $(\ltrans{t}'f)(x)=f(x-t).$
Let $s \geq 1.$
Then the generator $\partial'$ has domain $E^{-s+1}([0,\infty [ \,)$ and
$-\partial'$ is the derivative of distributions, so
$(\partial'f)(x)=-df(x)/dx$ if $f$ is a differentiable function.
One is easily convinced that the expressions for $\ltrans{t}^{*}$ and $\partial^{*}$
are more complicated.
\end{example}
Some general references for this subsection are: \cite{Adams 03}, \cite{Calderon}, \cite{Horm}.
\subsection{Infinite-dimensional Brownian motion}
In this sub-section we consider a separable Hilbert space $E$ and an index-set
$\mathbb{I}$ with the cardinality equal to the dimension of $E.$
The space $E$ can be infinite-dimensional or finite-dimensional.
There is given a family $W^{i},$
$i\in\mathbb{I}$ of standard independent Brownian motions on a complete filtered
probability space $(\Omega,P,\mathcal{F},\mathcal{A}).$ The filtration
$\mathcal{A}=\{\mathcal{F}_{t}\}_{0\leq t \leq T},$ is generated by the
$W^{i},$ and $\mathcal{F}=\mathcal{F}_{T}.$
\begin{definition}
A standard cylindrical Brownian motion $W_{t},\ 0\leq t\leq T$, on $E$ is a
sequence $e_{i}W_{t}^{i} ,\ i\in\mathbb{I}$ of $E$-valued processes,
where the $e_{i}$ are the elements of an orthonormal basis of $E$ and the
$ W_{t}^{i},$ $i\in\mathbb{I},$ are
independent real-valued standard Brownian motions on a filtered probability space
$(\Omega,P,\mathcal{F},\mathcal{A}).$
\end{definition}
From now on, given a standard cylindrical Brownian motion %
$W,$ we shall
write informally $W_{t}=\sum_{i\in\mathbb{I}}W_{t}^{i}e_{i}$.
If $\mathbb{I}$ is finite, we have:%
\[
\Vert W_{t} \Vert ^{2}=\sum_{i\in
\mathbb{I}}\vert W_{t}^{i}\vert^{2} < \infty \; \; \text{a.s.}
\]
and $W_{t}$ is a stochastic process with values in $E$.
If $\mathbb{I}$ is infinite, then for every $t$ the right-hand side is the sum
of infinitely many i.i.d. positive random variables, which does not converge
in any reasonable way. In that case, the formula $W_{t}=\sum_{i\in\mathbb{I}%
}W_{t}^{i}e_{i}$ cannot be understood as an equality in $E$, and must be given
another meaning.
\begin{proposition}
If $W_{t}=\sum_{i\in\mathbb{I}}W_{t}^{i}e_{i}$ is a standard cylindrical
Brownian motion, then, for every $f\in E$ with $\left\Vert f\right\Vert =1$,
the real-valued stochastic process $W_{t}^{f}$ defined by
\begin{equation}
W_{t}^{f}=\sum_{i\in\mathbb{I}}\left( e_{i},f\right) W_{t}^{i} \label{13}%
\end{equation}
is a standard Brownian motion on the real line.
\end{proposition}
\begin{proof}
If $\mathbb{I}$ is finite, the result is obvious. Let us then consider the
case when $\mathbb{I}=\mathbb{N}$. We first have to check that the right-hand side is
well-defined. By Doob's inequality for martingales:
\begin{align*}
E\left[ \sup_{0\leq t\leq T}\left\vert \sum_{i=n}^{n+p}\left( e_{i}%
,f\right) W_{t}^{i}\right\vert ^{2}\right] & \leq4E\left[ \left\vert
\sum_{i=n}^{n+p}\left( e_{i},f\right) W_{T}^{i}\right\vert ^{2}\right] \\
& \leq4T\sum_{i=n}^{n+p}\left( e_{i},f\right) ^{2}\rightarrow0.
\end{align*}
This implies that the right-hand side of (\ref{13}) converges in probability
to a continuous process.
Since each finite sum is Gaussian, so is the limit, and the result follows.
\end{proof}
So, in the case when $\mathbb{I}$ is infinite, the r.h.s. of
$W_{t}=\sum_{i\in\mathbb{I}}W_{t}^{i}e_{i}$ makes no sense in $E$, but every projection
does. Equation (\ref{13}) can be rewritten as:%
\[
\forall f\in E,\ \ \ \left( W_{t},f\right) =\sum_{i\in\mathbb{I}}\left(
e_{i},f\right) W_{t}^{i}.
\]
We will now show that the stochastic integrals with respect to cylindrical
Brownian motion make sense, provided the integrand satisfies a strong
integrability condition.
Consider the space $\mathcal{HS}(E,F) $ of all Hilbert-Schmidt operators from $E$
into a Hilbert space $F.$ Let the space $\mathcal{L}^{2}\left( \mathcal{HS}(E,F) \right) $
consist of all progressively measurable processes $A$ with values in
the Hilbert space $\mathcal{HS}(E,F) $, such that:
\[
E\left[\int_{0}^{T}\left\Vert A_{t}\right\Vert _{\mathcal{HS}}^{2}dt\right]<\infty.
\]
Recall that we have, according to Definition \ref{H-S}:
\[
\left\Vert A_{t}\right\Vert _{\mathcal{HS}}^{2} %
=\sum_{n=0}^{\infty}\left\Vert A_{t}e_{n}\right\Vert ^{2}, %
\]
where $\left(e_{n}\right)_{n\in \mathbb{N}}$ is any orthonormal basis of $E.$
\begin{theorem} \label{stoch int}
The stochastic integral:
\[
\int_{0}^{T}A_{t}dW_{t}
\]
is well-defined for every process $A\in\mathcal{L}^{2}\left(
\mathcal{HS}(E,F) \right).$ It is a continuous martingale with values in $F$, and
we have the usual isometry:
\[
\left\Vert \int_{0}^{T}A_{t}dW_{t}\right\Vert _{L^{2}}^{2}
=\int_{0}^{T} E\left[ \left\Vert A_{t}\right\Vert _{\mathcal{HS}}^{2}\right]dt.
\]
\end{theorem}
In other words, the random variable $\int_{0}^{T}A_{t}dW_{t}$ has mean
$0$ and its variance is
$\sum_{n=0}^{\infty}\int_{0}^{T} E\left[ \left\Vert A_{t}e_{n}\right\Vert ^{2}\right]dt,$
the sum of the variances of the independent sources of Gaussian noise.
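The isometry can be checked by simulation in the simplest scalar case $E=F=\mathbb{R}$ (an illustration with an arbitrary deterministic integrand, not from the text): for $A_{t}=t$ one has $\int_{0}^{T}E[\Vert A_{t}\Vert^{2}]dt=T^{3}/3.$

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 500, 40000
dt = T / n_steps
t = np.arange(n_steps) * dt           # left endpoints of the partition
a = t                                 # deterministic integrand A_t = t

dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
I = dW @ a                            # sum_k A_{t_k} dW_k, the Ito sum for int_0^T A_t dW_t

mean, var = I.mean(), I.var()
# mean ~ 0 and var ~ int_0^1 t^2 dt = 1/3, as the isometry predicts
```

The left-endpoint (non-anticipating) evaluation of the integrand is essential: it is what makes the discrete sums martingales and the isometry exact in the limit.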
As usual, by localization the stochastic integral can be extended to a wider class of
processes. Denote by $\mathcal{L}_{loc}^{2}\left( \mathcal{HS}(E,F)\right) $ the
set of all progressively measurable processes $A$ with values in $\mathcal{HS}(E,F)$, such that:%
\[
P\left[ \int_{0}^{T}\left\Vert A_{t}\right\Vert _{\mathcal{HS}}^{2}
dt<\infty\right] =1.
\]
Then the stochastic integral defines a continuous local martingale.
Some general references for this subsection are:
\cite{DaPrato-Zabczyk}, \cite{Kall-Xiong}, \cite{Mikul-Rozo98},
\cite{Mikul-Rozo99}, \cite{Nual71}.
\section{The dynamics of bond prices}
\subsection{The non-parametric framework} \label{non-param frame}
From now on, and for the rest of the paper, we are given a finite time interval
of possible trading times $\mathbb{T}=[0, \timeh]$ and we are given a family $W^{i},$
$i\in\mathbb{I\ }$ of standard independent Brownian motions on a complete filtered
probability space $(\Omega,P,\mathcal{F},\mathcal{A}).$ The filtration
$\mathcal{A}=\{\mathcal{F}_{t}\}_{0\leq t \leq \timeh}$ is generated by the
$W^{i},$ and $\mathcal{F}=\mathcal{F}_{\timeh}.$ The index set $\mathbb{I}$
itself can be finite or infinite; in the latter case we take $\mathbb{I}=\mathbb{N}.$
Let $\ell^{2}(\mathbb{I})$ be
the Hilbert space of all real sequences $x=(x_{i})_{i \in \mathbb{I}},$ such that
$\|x\|_{\ell^{2}(\mathbb{I})}=(\sum_{i\in \mathbb{I}} (x_{i})^{2})^{1/2}<\infty$.
So, when $\mathbb{I}$ has a finite number $\numrand$ of elements, then
$\ell^{2}(\mathbb{I}) =\mathbb{R}^{\numrand}.$ Often we write just $\ell^{2}$
for $\ell^{2}(\mathbb{I}).$
Heath, Jarrow and Morton (henceforth HJM) were the first to study the term
structure of interest rates in a non-parametric framework. Their basic idea
(see \cite{HJM92}) consists of writing one equation for the price of every
zero-coupon at time $t$. Denoting by $\hat{B}_{t}(T)$ the price at time $t$ of
a zero-coupon bond maturing at time $T\geq t,$ the HJM equation has the following
form:
\begin{equation}
\hat{B}_{t}(T)=\hat{B}_{0}(T)+\int_{0}^{t}\hat{B}_{s}(T)a_{s}(T)ds+\int
_{0}^{t}\sum_{i\in\mathbb{I}}\hat{B}_{s}(T)v_{s}^{i}(T)dW_{s}^{i},\ \ 0\leq
t\leq T \label{a2}%
\end{equation}
There are infinitely many such equations, one for each maturity $T\geq t.$
The trend $a_{t}\left( T\right) $ and the volatilities $v_{t}^{i}(T)$ are
supposed to be progressively measurable processes, which means, for instance, that they
could be functions of all the $\hat{B}_{s}(S),$ $S\geq s$ and $s \leq t.$ In due course, we will
make further assumptions so as to ensure that equations such as (\ref{a2})
make mathematical sense.
Let us discount all prices to $t=0,$ by the spot interest rate $r_{t},$
which in terms of the zero-coupon bond price is given by
\begin{equation} \label{spot rate}
\spotrate{t}=-\frac{\partial \hat{B}_{t}(T)}{\partial T}\Big |_{T=t}.
\end{equation}
The discounted prices of zero-coupons are now:
\begin{equation} \label{discounted B}
B_{t}(T)=\hat{B}_{t}(T)\exp(-\int_{0}^{t}\spotrate{s}ds)
\end{equation}
and the equations (\ref{a2}) become:
\begin{equation} \label{a2'}
B_{t}(T)=B_{0}(T)+\int_{0}^{t}B_{s}(T)(a_{s}(T)-r_{s})ds+\int_{0}^{t}%
\sum_{i\in\mathbb{I}}B_{s}(T)v_{s}^{i}(T)dW_{s}^{i},\ \ 0\leq t\leq T
\end{equation}
and, again, there is one such equation for every maturity $T\geq t$. Note the
boundary condition $\hat{B}_{T}(T)=1,$ and hence, from (\ref{discounted B}):
\begin{equation} \label{52}
B_{t}(t)=\exp(-\int_{0}^{t}r_{s}ds) .
\end{equation}
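As a concrete check of (\ref{spot rate})--(\ref{52}) (a sketch using a hypothetical flat curve, not from the text): for $\hat{B}_{t}(T)=e^{-r(T-t)}$ with constant $r,$ formula (\ref{spot rate}) returns $\spotrate{t}=r,$ and the discounted price $B_{t}(T)=e^{-rT}$ does not depend on $t.$

```python
import numpy as np

r = 0.04                                       # hypothetical constant spot rate
B_hat = lambda t, T: np.exp(-r * (T - t))      # flat zero-coupon curve

# r_t = -dB_hat_t(T)/dT at T = t, by a forward difference
t, h = 1.0, 1e-6
spot = -(B_hat(t, t + h) - B_hat(t, t)) / h    # should be close to r

# discounted price B_t(T) = B_hat_t(T) * exp(-int_0^t r ds) = exp(-r T)
B = lambda t_, T: B_hat(t_, T) * np.exp(-r * t_)
```

In this deterministic example the discounted bond price is constant along each maturity, which is the degenerate ($v \equiv 0$, $a_{s}=r_{s}$) case of equation (\ref{a2'}); the boundary value $B_{t}(t)=e^{-rt}$ agrees with (\ref{52}).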
\subsection{The bond dynamics in the moving frame}
For every $x\geq0$, we denote by $\zcpx{t}(x)$ the price and by $\zcpxd{t}(x)$ the discounted
price at time $t$ of a zero-coupon maturing at time $t+x$. The stochastic
processes $B_{t}(T) $ and $p_{t}(x) $ are related
by:
\[
p_{t}(x)=B_{t}(t+x).
\]
In other words, as explained in the introduction, instead of dating events by their distance from a fixed
origin, defined to be $t=0$, we are dating them by their distance from today:
we are using a time frame which moves with the observer. The equation for
$p_{t}$ in the moving frame is easily obtained from (\ref{a2'}). For every
$x\geq0$, we have:
\begin{equation}
\begin{split}
p_{t}(x)=&p_{0}(t+x)+\int_{0}^{t}p_{s}(t-s+x)m_{s}(t-s+x)ds \\
&+\int_{0}^{t}
\sum_{i\in\mathbb{I}}p_{s}(t-s+x)\sigma_{s}^{i}(t-s+x)dW_{s}^{i}, \label{51}
\end{split}
\end{equation}
where
\begin{equation}
\drift{t}(x)=a(t,t+x)-\spotrate{t} \; \; \text{and} \; \; \vol{i}{t}(x)=v^{i}(t,t+x),
\label{drift-vol}
\end{equation}
for all $0 \leq t \leq \timeh$ and $x \geq 0.$
Here, again, the trends $t \mapsto m_{t}(x)$ and the volatilities $t \mapsto \sigma_{t}^{i}(x)$
are progressively measurable processes.
Instead of looking at (\ref{51}) as an infinite family of coupled equations, one for
each $x\geq0$, we shall interpret it as a single equation describing the dynamics
of an infinite-dimensional object, the curve $x \mapsto p_{t}(x), $
which will be seen as a vector $p_{t}$ in the Hilbert space
$E^{s}([0, \infty[\,),$ for some fixed $s>1/2$, chosen so that the functions $m_{t}$ and
$\sigma_{t}^{i}$ belong to $E^{s}([0, \infty[\,).$
Let $\ltrans{}$ be the semi-group of left translations on $E^{s}([0, \infty[\,)$
(see formula (\ref{lefttr}) and Proposition \ref{prop L}). From now on we shall just write
$E^{s}$ instead of $E^{s}([0, \infty[\,),$ when there is no risk of confusion.
The equations in
(\ref{51}) can be rewritten as one equation in $E^{s}$:%
\begin{equation} \label{dynam discount p}
\ p_{t}=\mathcal{L}_{t}p_{0}+\int_{0}^{t}(\mathcal{L}_{t-s}(p_{s}%
m_{s}))ds+\int_{0}^{t}\sum_{i\in\mathbb{I}}(\mathcal{L}_{t-s}(p_{s}\sigma
_{s}^{i}))dW_{s}^{i}.
\end{equation}
\begin{theorem} \label{mild}
Let $s> 1/2.$ Assume that $p_{0} \in E^{s}$ and assume that $m_{t}$
and the $\sigma_{t}^{i},$ $i\in\mathbb{I},$ are progressively measurable processes in
$E^{s}$ satisfying:%
\begin{equation}
\int_{0}^{\timeh}(\Vert m_{t}\Vert_{E^{s}}+\sum_{i\in\mathbb{I}}\Vert
\sigma_{t}^{i}\Vert_{E^{s}}^{2})dt<\infty\;\ \ \text{a.s.} \label{55}%
\end{equation}
Then equation (\ref{dynam discount p}) defines a unique process $p$ in
$E^{s}$ satisfying:
\begin{equation} \label{56}
\int_{0}^{\timeh}(\Vert p_{t}\Vert_{E^{s}}+\Vert p_{t}m_{t}\Vert_{E^{s}}%
+\sum_{i\in\mathbb{I}}\Vert p_{t}\sigma_{t}^{i}\Vert_{E^{s}}^{2}%
)dt<\infty\;\text{a.s.}
\end{equation}
The process $p$ has continuous trajectories in $E^{s}$ and is given explicitly by
\begin{equation} \label{bond dyn sol p}
p_{t}=\exp\left\{ \int_{0}^{t}\mathcal{L}_{t-s}\left((m_{s}-\frac{1}{2}\sum
_{i\in\mathbb{I}}(\sigma_{s}^{i})^{2})ds+\sum_{i\in\mathbb{I}}\sigma_{s}
^{i}dW_{s}^{i}\right) \right\} \mathcal{L}_{t}p_{0}.
\end{equation}
If $p_{0} \in H^{s},$ then the process $p$ takes its values in $H^{s}.$
If $p_{0} \in E^{s}$ satisfies $p_{0} \geq 0$ (resp. $p_{0} > 0$),
i.e. $p_{0}(x) \geq 0$ (resp. $p_{0}(x) > 0$) for all $x \geq 0,$
then so does $p_{t}$ for all $t \in \mathbb{T}.$
\end{theorem}
For a proof of this theorem see Lemma A.1 of \cite{I.E.-E.T bond th}, which is reproduced
in the appendix of this article (Lemma \ref{existence lemma}).
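For constant coefficients $m_{t} \equiv \mu$ and a single volatility $\sigma_{t}^{1} \equiv \sigma$ (constants belong to $E^{s}$), formula (\ref{bond dyn sol p}) reduces to $p_{t}(x)=e^{(\mu-\sigma^{2}/2)t+\sigma W_{t}}\,p_{0}(t+x).$ The sketch below (hypothetical parameter values and initial curve, not from the text) checks positivity and the identity $E[p_{t}(x)]=e^{\mu t}p_{0}(t+x)$ by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)
p0 = lambda x: np.exp(-0.03 * x)        # hypothetical initial discounted curve
mu, sigma, t = 0.02, 0.2, 1.0
x = np.linspace(0.0, 10.0, 101)

W_t = np.sqrt(t) * rng.standard_normal(20000)
# pathwise exponential factor from the explicit solution formula
factor = np.exp((mu - 0.5 * sigma**2) * t + sigma * W_t)
# (L_t p0)(x) = p0(t + x): the curve is translated, then multiplied pathwise
p_t = factor[:, None] * p0(t + x)[None, :]

positive = np.all(p_t > 0)              # positivity, as the theorem asserts
mean_curve = p_t.mean(axis=0)           # should be close to e^{mu t} p0(t + x)
```

Positivity is immediate from the exponential form of the solution, which is one way to read the last statement of the theorem.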
Note that equation (\ref{dynam discount p}) implies that $p_{0}$ is the value
of $p_{t}$ for $t=0$.
A word here about the choice of function spaces. Assuming that $p_{t}$ belongs
to $H^{s}$ for some $s>1/2$ is minimal: it is basically saying that the
zero-coupon prices depend continuously on time to maturity and go to zero at
infinity. This, however, is too strong a requirement for $m_{t}$ and the
$\sigma_{t}^{i}$: we cannot expect the trend and the volatilities to go to
zero when the time to maturity increases to infinity. This is why we are
assuming that $m_{t}$ and the $\sigma_{t}^{i}$ belong to $E^{s}$.
To simplify the mathematical formalism, and also to include interest rate models
with vanishing long term rates, we have allowed $p_{t} \in E^{s}.$
According to Theorem \ref{mild}, $p_{t}$ is in fact in $H^{s}$ if $p_{0} \in H^{s}.$
Condition (\ref{55}) implies that $\sum_{i\in\mathbb{I}}\Vert\sigma_{t}%
^{i}\Vert_{E^{s}}^{2}$ is finite for almost every
$(t,\omega) \in \mathbb{T} \times \Omega.$ This means, when $\mathbb{I}=\mathbb{N},$ that the
operator $\sigma_{t}$ from $\ell^{2}(\mathbb{I})$ to $E^{s}$ defined by:%
\begin{equation} \label{vol hs}
\sigma_{t}e_{i}=\sigma^{i}_{t},\ \ \ i\in\mathbb{I},
\end{equation}
where $e_{i}$ are the elements of the standard basis of $\ell^{2}(\mathbb{I}),$
is Hilbert-Schmidt a.e. $(t,\omega).$ We have
\[
\left\Vert \sigma_{t}\right\Vert _{\mathcal{HS}(\ell^{2},E^{s})}^{2}=\sum_{i\in\mathbb{I}%
}\Vert\sigma_{t}^{i}\Vert_{E^{s}}^{2}.
\]
We shall refer to $\sigma$ as the \emph{volatility operator} process. It takes its values in
$\mathcal{HS}(\ell^{2},E^{s})$ and when we say that it is progressively measurable,
it is meant that all the $\sigma^{i}$ are progressively measurable.
We can now, using the stochastic integral introduced in Theorem \ref{stoch int},
rewrite equation (\ref{dynam discount p}) in a more compact form in $E^{s},$ where $s> 1/2:$
\begin{equation} \label{dynam discount p 1}
\ p_{t}=\mathcal{L}_{t}p_{0}+\int_{0}^{t}\mathcal{L}_{t-s}(p_{s} m_{s})ds
+\int_{0}^{t}\mathcal{L}_{t-s}(p_{s}\sigma_{s})dW_{s}.
\end{equation}
This makes sense in $E^{s}.$ Indeed, the only difference with equation (\ref{dynam discount p})
is the last term on the r.h.s. When condition (\ref{55}) is satisfied, the
volatility operator $\sigma_{u},$ defined by (\ref{vol hs}), from $\ell^{2}$ to $E^{s},$
is Hilbert-Schmidt a.e. $(u,\omega).$ Since pointwise multiplication of functions in $E^{s}$
is a continuous operation for $s> 1/2,$ the linear operator $x \mapsto p_{u}\sigma_{u}x,$
from $\ell^{2}$ to $E^{s},$ is Hilbert-Schmidt a.e. $(u,\omega).$ Moreover, $\ltrans{v}$ is bounded for every $v \geq 0,$
so the integrand is a progressively measurable $\mathcal{HS}(\ell^{2},E^{s})$-valued process
satisfying the conditions of Theorem \ref{stoch int}.
A process $p$ with values in $E^{s}$ satisfying (\ref{dynam discount p 1})
(or equivalently (\ref{dynam discount p})) and (\ref{56})
will be called a \emph{mild solution} of the bond dynamics. %
Note that we are not worrying about the boundary condition (\ref{52}) at this
point, because it does not yet make mathematical sense: how do we define $r_{t}$?
This will be taken care of in the next section.
\subsection{Smoothness of the zero-coupon curve.}
Another way to proceed is to write (\ref{51}) in differentiated form. For fixed $x \geq 0,$
a formal calculation using It\^o's lemma, which can be rigorously justified, gives:
\begin{equation} \notag%
\begin{split}
dp_{t}(x)-&p_{t}\left( x\right) m_{t}\left( x\right) dt-\sum_{i\in\mathbb{I}}p_{t}(x)\sigma_{t}^{i}(x)dW_{t}^{i} \\
&=\bigg(\frac{\partial}{\partial t}p_{0}(t+x)
+\int_{0}^{t}\frac{\partial}{\partial t}\left( p_{s}(t-s+x)m_{s}(t-s+x)\right) \ ds \\
&+\int_{0}^{t}\frac{\partial
}{\partial t} \sum_{i\in\mathbb{I}}p_{s}(t-s+x)\sigma_{s}%
^{i}(t-s+x) dW_{s}^{i}\bigg) \, dt.
\end{split}
\end{equation}
In the expression on the r.h.s. we can replace $\partial/\partial t$ by $\partial/\partial x,$
since $p_{0}$ and the integrands on the r.h.s. are functions of $t+x.$ Differentiation w.r.t.
$x$ under the integral sign then gives:
\begin{equation} \notag%
\begin{split}
&dp_{t}(x)-p_{t}\left( x\right) m_{t}\left( x\right) dt-\sum_{i\in\mathbb{I}}p_{t}(x)\sigma_{t}^{i}(x)dW_{t}^{i} \\
&=\bigg(\frac{\partial}{\partial x} \Big( p_{0}(t+x)
+\int_{0}^{t} p_{s}(t-s+x)m_{s}(t-s+x) \ ds \\
&+\int_{0}^{t} \sum_{i\in\mathbb{I}}p_{s}(t-s+x)\sigma_{s}^{i}(t-s+x) dW_{s}^{i} \Big)\bigg) \, dt.
\end{split}
\end{equation}
The l.h.s. is equal to $((\partial/\partial x) p_{t}(x)) \, dt,$ according to (\ref{51}), so
\begin{equation} \label{diff form3}
dp_{t}(x)-p_{t}\left( x\right) m_{t}\left( x\right) dt-\sum_{i\in\mathbb{I}}p_{t}(x)\sigma_{t}^{i}(x)dW_{t}^{i}
=\big(\frac{\partial}{\partial x} p_{t}(x)\big) \, dt,
\end{equation}
for all $x \geq 0$ and $t \in \mathbb{T}.$
Introducing the infinitesimal generator $\partial$ of the semi-group
$\ltrans{}$ (see Proposition \ref{prop L}), this can be understood as an equation in $E^{s}:$
\begin{equation}
dp_{t}=(\partial p_{t}+p_{t} m_{t})dt+\sum_{i\in\mathbb{I}}p_{t}\sigma_{t}^{i}dW_{t}^{i} \label{SPDE pd}
\end{equation}
or equivalently: %
\begin{equation}
p_{t}=p_{0}+\int_{0}^{t}( \partial p_{s}+p_{s}m_{s})
ds+\int_{0}^{t} \sum_{i\in\mathbb{I}}p_{s}\sigma_{s}^{i}dW_{s}^{i}.
\label{SPDE p}
\end{equation}
Equation (\ref{dynam discount p}) is the integrated version of (\ref{SPDE p}),
w.r.t. the semi-group $\ltrans{}.$
The connection between formulas (\ref{SPDE p}) and (\ref{dynam discount p})
is similar to the \emph{variations of constants} formula for ODE's in finite dimension.
We now have to give some mathematical meaning to equation (\ref{SPDE p}). This
will require beefing up the existence conditions given in Theorem \ref{mild}.
The following corollary follows from applying Theorem \ref{mild} with $s+1$
instead of $s:$
\begin{corollary} \label{strong}
Let $s> 1/2.$ Assume that $p_{0} \in \mathcal{D}(\partial) = E^{s+1}$ and assume
that $m_{t}$ and the $\sigma_{t}^{i},$ $i\in\mathbb{I}$ are progressively measurable
processes with values in $E^{s+1}$ satisfying
\begin{equation}
\int_{0}^{\timeh}(\Vert m_{t}\Vert_{E^{s+1}}+\sum_{i\in\mathbb{I}}\Vert
\sigma_{t}^{i}\Vert_{E^{s+1}}^{2})dt<\infty\;\ \ \text{a.s} \label{54}.
\end{equation}
Then the mild solution $p$ of the bond dynamics, given by Theorem \ref{mild}, satisfies
the following condition:
\begin{equation} \label{strong cond}
p_{t}\in E^{s+1} \; \text{and} \;
\int_{0}^{\timeh}(\Vert p_{t}\Vert_{E^{s+1}}+\Vert p_{t}m_{t}%
\Vert_{E^{s}}+\sum_{i\in\mathbb{I}}\Vert p_{t} \sigma_{t}^{i}\Vert_{E^{s}}^{2}) \ dt<\infty\;\ \text{a.s.}%
\end{equation}
Equation (\ref{SPDE p}) holds for every $t.$ In addition $p$ has
continuous paths in $E^{s+1}$ and $p_{t} \in H^{s+1}$ if $p_{0} \in H^{s+1}.$
\end{corollary}
By definition, a solution of equation (\ref{dynam discount p}) is called a strong solution
of equation (\ref{SPDE p}) when condition (\ref{strong cond}) is satisfied.
Here we shall say that $p$ is a \emph{strong solution} of the bond dynamics.
As a consequence, in the situation of Corollary \ref{strong}, the term structure
$x \mapsto p_{t}(x)$ is
$C^{1}$ for every $t,$ and interest rates are well defined. The instantaneous forward
rate $\fwrate{t}(x)$ contracted at $t \in \mathbb{T}$ for time to maturity $x$ and
the spot rate $\spotrate{t}$ at time $t,$ for instance, are defined by:
\begin{equation} \label{intrest}
\fwrate{t}(x) =-\frac{\partial \log p_{t}(x)}{\partial x}=-\frac{(\partial p_{t})(x)}{p_{t}(x)}
\quad \text{and} \quad \spotrate{t}=\fwrate{t}(0) =-\frac{\left( \partial p_{t}\right) \left( 0\right) }{p_{t}\left(
0\right) }.
\end{equation}
By Corollary \ref{strong}, $p$ is a strong solution and the maps
$t\mapsto p_{t}$ and $t\mapsto\partial p_{t}$ are continuous from
$\mathbb{T}$ into $E^{s}$, and hence into $C^{0}(
[0,\infty [\, )$ endowed with the topology of uniform convergence.
So $p_{s}( 0) $ and $(\partial p_{s})\left(
0\right) $ converge to $p_{t}( 0) $ and $(\partial p_{t})\left( 0\right), $
when $s\rightarrow t.$ In other words, $r_{t}$ is a continuous function of $t,$
when $p_{t}( 0) > 0$ for all $t\in \mathbb{T}.$
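The definitions (\ref{intrest}) are easy to illustrate numerically. The following minimal Python sketch (outside the formal framework of these notes; the flat discount curve is hypothetical) recovers the forward rate curve from sampled zero-coupon prices by finite differences:

```python
import numpy as np

# Hypothetical flat discount curve p(x) = exp(-r0*x); any smooth, strictly
# positive sampled curve would do.
r0 = 0.03
x = np.linspace(0.0, 10.0, 1001)   # time-to-maturity grid
p = np.exp(-r0 * x)                # zero-coupon prices p_t(x)

# Forward rate f_t(x) = -(d/dx) log p_t(x), by central differences.
f = -np.gradient(np.log(p), x)

# Spot rate r_t = f_t(0).
r_spot = f[0]
```

For the flat curve $p(x)=e^{-r_{0}x}$ the computed forward rate is constant and equal to $r_{0},$ as expected.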
We are now able to make sense of the boundary condition (\ref{52}), which we rewrite
in terms of $p:$
\begin{equation}
\zcpxd{t}(0)=\exp (\int_{0}^{t} \frac{(\partial \zcpxd{s})(0)}{\zcpxd{s}(0)}ds),
\label{p bound cond}
\end{equation}
for every $ t \in \mathbb{T}.$
\begin{proposition} \label{prop. boundary cond}
Let $s> 1/2.$
Assume that $m_{t}$ and the $\sigma_{t}^{i}$ are progressively measurable processes with
values in $E^{s+1}$ satisfying (\ref{54}) and
\begin{equation}
m_{t}(0)=0,\;\sigma_{t}^{i}(0)=0\;\ \ \forall i\in\mathbb{I} \label{63}%
\end{equation}
and assume that $p_{0}$ satisfies
\begin{equation} \label{p0}
p_{0} \in E^{s+1}, \; \; p_{0}(0)=1, \; \; p_{0}(x) >0 \ \ \ \forall x \geq 0.
\end{equation}
Then the solution of
the bond dynamics, given by Corollary \ref{strong}, satisfies the boundary
condition (\ref{p bound cond}).
\end{proposition}
\begin{proof}
Since $m_{t}$ and the $\sigma_{t}^{i}$ take values in $E^{s+1}$, they are
continuous functions on $[0, \infty[ \, ,$ and condition (\ref{63}) makes sense.
As $p_{0} >0$ it follows from Theorem \ref{mild} that $p_{t} >0.$ We have
shown that, if $p_{t}$ is a strong and strictly positive solution of the bond dynamics,
then $r_{t}$ given by (\ref{intrest}) is a continuous function of $t$.
Inserting conditions (\ref{63}) into equation
(\ref{SPDE p}), we get:%
\begin{align*}
p_{t}( 0) & =p_{0}( 0) +\int_{0}^{t}( (\partial p_{s})( 0) +p_{s}( 0) m_{s}(
0) ) ds+\int_{0}^{t} \sum_{i\in\mathbb{I}}p_{s}( 0) \sigma_{s}^{i}( 0) dW_{s}^{i}\\
& =1+\int_{0}^{t}(\partial p_{s})( 0) ds
=1-\int_{0}^{t}r_{s}p_{s}( 0) ds.
\end{align*}
In other words, $\varphi(t) =p_{t}(0) $ must
satisfy the differential equation $\varphi^{\prime}(t)
=-r_{t}\varphi (t) $, with the initial condition $\varphi (0) =1$. The result follows.
\end{proof}
When we get to optimizing portfolios, we will need $L^{p}$ estimates on the
solutions of the bond dynamics. They are provided by the following result:
\begin{theorem}\label{Th price compatible strong cond.}
Let $q(t)=\zcpxd{t}/\ltrans{t}\zcpxd{0}$ and $\hat{q}(t)=\zcpx{t}/\ltrans{t}\zcpx{0}.$
If $\zcpxd{0},$ $\vol{}{}$ and $\drift{}$ in Proposition \ref{prop. boundary cond} %
also satisfy the following additional conditions:
\begin{equation}
E( (\int_{0}^{\timeh}\| \vol{}{t} \|^{2}_{\mathcal{HS}(\ell^{2},E^{s+1})}dt)^{a}
+ \exp(a \int_{0}^{\timeh}\| \vol{}{t} \|^{2}_{\mathcal{HS}(\ell^{2},E^{s})}dt)) < \infty,
\; \forall a \in [1,\infty[
\label{domain sigma i}
\end{equation}
and
\begin{equation}
E(( \int_{0}^{\timeh}\| \drift{t} \|_{E^{s+1}}dt)^{a}
+ \exp(a \int_{0}^{\timeh}\| \drift{t} \|_{E^{s}}dt)) < \infty,
\forall a \in [1,\infty[ \,,
\label{domain drift}
\end{equation}
then the solution $\zcpxd{}$ in %
Proposition \ref{prop. boundary cond} has the following property:
\begin{equation}
\zcpxd{}, \zcpx{}, q, \hat{q}, 1/q, 1/\hat{q} \in
L^{u}(\Omega, P, L^{\infty}(\mathbb{T},E^{s+1})), \forall u \in [1,\infty[ \,.
\end{equation}
\end{theorem}
\begin{proof}
We use the notation
\begin{equation}
\tilde{\mathcal{E}}_{t}(L)=\exp (\int_{0}^{t} \ltrans{t-s}
((\drift{s}-\frac{1}{2}\sum_{i \in \mathbb{I}}(\vol{i}{s})^{2})ds
+ \vol{}{s} d\wienerp{}{s}) ),
\label{exp marting}
\end{equation}
for
\begin{equation}
L_{t}=\int_{0}^{t}(\drift{s}ds + \vol{}{s} d\wienerp{}{s}), \quad \text{if} \; 0 \leq t \leq \timeh.
\label{A. notation 1'}
\end{equation}
Conditions $(i)-(iv)$ of Lemma \ref{Lp norms} are satisfied for $p.$ Estimate (\ref{Lp norms 3})
of Lemma \ref{Lp norms} then shows that
$ \zcpxd{} \in
L^{u}(\Omega, P, L^{\infty}(\mathbb{T},E^{s+1}))$ $ \forall u \in [1,\infty[ \,.$
By the explicit expression (\ref{bond dyn sol p}), $q=\tilde{\mathcal{E}}(L),$
so it follows from Lemma \ref{Lp norms} that the conclusion holds true also for $q.$
Let %
$N_{t}=\int_{0}^{t}((- \drift{s}+\sum_{i \in \mathbb{I}}(\vol{i}{s})^{2})ds
- \sum_{i \in \mathbb{I}}\vol{i}{s} d\wienerp{i}{s}).$
Then $1/q=\tilde{\mathcal{E}}(N).$
According to conditions (\ref{domain sigma i}), (\ref{domain drift}), the
conditions $(i)-(iv)$ of Lemma \ref{Lp norms}
(with $N$ instead of $L$) are satisfied.
We now apply estimate
(\ref{Lp norms 3}) %
to $1/q,$
which proves that $1/q \in L^{u}(\Omega, P, L^{\infty}(\mathbb{T}, E^{s+1})),$
for all $u \geq 1.$ %
To prove the cases of $\hat{q}^{\alpha},$ $\alpha=1$ or $\alpha=-1,$ we note that $q(t)=\hat{q}(t)\zcpxd{t}(0).$ %
Using that the case of
$q^{\alpha}$ is already proved and H\"older's inequality, it is enough to prove that
$g \in L^{u}(\Omega, P, L^{\infty}(\mathbb{T}, \mathbb{R})),$ where $g(t)=(\zcpxd{t}(0))^{-\alpha}.$
Since $\zcpxd{t}(0)=(\ltrans{t}\zcpxd{0})(0)(q(t))(0)$ $=\zcpxd{0}(t)(q(t))(0),$
it follows that
\[0\leq g(t) = (\zcpxd{0}(t))^{-\alpha} ((q(t))(0))^{-\alpha}.\]
By Sobolev embedding, $\zcpxd{0}$ is a continuous real valued function on $[0, \infty [$ and it is also strictly positive,
so the function $t \mapsto (\zcpxd{0}(t))^{-\alpha}$ is bounded on $\mathbb{T}.$ Once more by Sobolev embedding,
$((q(t))(0))^{-\alpha} \leq C \|(q(t))^{-\alpha}\|_{E^{s}}.$ The result
now follows, since we have already proved the case of $q^{\alpha}.$
The case of $\zcpx{}$ is so similar to the previous cases that we omit it.
\end{proof}
Under the hypotheses of Proposition \ref{prop. boundary cond}, %
$\zcpxd{t}(0)$ satisfies (\ref{p bound cond}), so it is the discount factor (\ref{52}).
It has nice properties, as follows from the second part of the proof of Theorem \ref{Th price compatible strong cond.}
\begin{corollary}\label{discount factor}
Under the hypotheses of Theorem \ref{Th price compatible strong cond.},
if $\alpha \in \mathbb{R},$ then the discount factor $\zcpxd{t}(0)$ satisfies
\[E(\sup_{t \in \mathbb{T}}(\zcpxd{t}(0))^{\alpha}) < \infty.\]
\end{corollary}
\begin{remark} \label{p remark}
It follows from Theorem \ref{Th price compatible strong cond.} that for all $t \in \mathbb{T},$
$\zcpxd{t}$ and $\zcpxd{0}$ have similar asymptotic behavior. In fact for some r.v. $A>0,$
$A^{-1} \zcpxd{0}(t+x) \leq \zcpxd{t}(x) \leq A \zcpxd{0}(t+x),$ for all $t \in \mathbb{T}$ and $x\geq 0,$ where
$A$ is independent of $x$ and $t$ and $A \in L^{u}(\Omega, P)$ for all $u \geq 1.$ \\
\end{remark}
In a different context, Hilbert spaces of
forward rate curves were considered in \cite{Bj-Sv01} and \cite{Filipovic}. %
The space $E^{s},$ with $s> 1/2$ sufficiently small, contains
the image of these spaces under the nonlinear map from forward rates to zero-coupon
prices. More precisely, it contains the image of the subsets of forward rate curves
$f$ with positive long term interest rate, i.e. $f(x) \geq 0$ for all $x$ sufficiently
large.
\section{Portfolio theory}
In this section $s>1/2,$ $E^{s}=E^{s}([0,\infty[ \,)$ and $\mathbb{T}=[0,\ \timeh],$
where $\timeh$ is the time horizon of the model. We also write $E$ for $E^{s}([0,\infty[ \,)$
and $E'$ for $E^{-s}([0,\infty[ \,).$
\subsection{Basic definitions.}
We recall that, by the bilinear form $\sesq{\;}{\;},$ the space $E^{-s}$ is identified
with the dual of $E^{s},$ that is, the space of continuous linear functionals on $E^{s}.$
It is important to note that, since
$s>1/2$, the space $E^{s}$ is contained in $C_{b}^{0}([0,\infty[ \,),$ the space of bounded continuous functions on $[0,\infty[ \,$, so
that $E^{-s}$ contains the dual of $C_{b}^{0}( [0,\infty[ \,),$
which is the space of bounded Radon measures on $[0,\infty[ \,$. In particular,
all Dirac masses $\delta_{x}$, for $x\geq0$, belong to $E^{-s}$.
\begin{definition} \label{port def}
A portfolio is a progressively measurable process on the time interval $\mathbb{T},$ with values in
$E^{-s}.$ If $\theta$ is a portfolio, then its discounted value at time $t \in \mathbb{T}$ is
\begin{equation}
\prtfpxd{t}(\theta)=\sesq{\theta_{t}}{ \zcpxd{t}}.
\label{wealth *}
\end{equation}
\end{definition}
The basic example is a portfolio of one zero-coupon:
\begin{example} \label{Ex ZC 1} \text{} \normalfont \\
Consider a portfolio containing exactly one zero-coupon bond
with maturity date $T,$ i.e. \textit{time of maturity} $T:$ \\
1) Let $T \geq \timeh$ and let $T$ be fixed.
The portfolio $\theta$ is then defined by
\begin{equation} \label{eq Ex ZC 1}
\theta_{t}=\delta_{T-t}, \; \forall t\leq \timeh.
\end{equation}
Since $T \geq \timeh,$ we have indeed that the support of the distribution $\theta_{t}$
is contained in $[0,\infty[ \,,$ so $\theta_{t} \in E^{-s}.$
With this definition, the value of the zero-coupon is:%
\[
<\delta_{T-t},p_{t}>\ =\ p_{t}\left( T-t\right)
\]
which is precisely what we had in mind. \\
2) Let $T < \timeh$ and let $T$ be fixed. In this case we note that the process in (\ref{eq Ex ZC 1}) does not
continue after time $T$: the zero-coupon is converted into cash. So the
buy-and-hold strategy is not possible for zero-coupon bonds, unless
the horizon $\timeh$ is less than the maturity $T.$ \\
3) Let $T=t+x,$ where $x \geq 0$ is a fixed \textit{time to maturity}. Then the portfolio
is defined by
\begin{equation} \label{eq Ex ZC 2}
\theta_{t}=\delta_{x},\text{ \ \ \ for }t\leq \timeh.
\end{equation}
\end{example}
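The pairing $<\delta_{T-t},p_{t}>\ =p_{t}(T-t)$ is also easy to realize numerically. The following Python sketch (illustrative only; the grid and the price curve are hypothetical) evaluates a Dirac-mass portfolio on a sampled price curve by linear interpolation:

```python
import numpy as np

# Hypothetical sampled zero-coupon curve at time t (illustrative values only).
x_grid = np.linspace(0.0, 30.0, 301)
p_t = np.exp(-0.02 * x_grid)       # p_t(x), strictly positive

def pair_with_dirac(x0, grid, curve):
    """Evaluate <delta_{x0}, p_t> = p_t(x0) by linear interpolation."""
    return np.interp(x0, grid, curve)

T, t = 10.0, 2.5
value = pair_with_dirac(T - t, x_grid, p_t)   # = p_t(T - t)
```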
We note that the higher we choose $s$, the more portfolios
can be incorporated into the model. For instance, if $s>3/2$, all curves in
$E^{s}$ are $C^{1}$, so that the derivative $\delta_{x}^{\prime}$ of the Dirac
mass belongs to $E^{-s}$. The value of $\delta_{T-t}^{\prime}$ is:%
\begin{equation}
<\delta_{T-t}^{\prime},p_{t}>\ =p_{t}^{\prime}(T-t)
=-\fwrate{t}(T-t) p_{t}(T-t), \label{65}
\end{equation}
where $p_{t}^{\prime}(x) =\partial p_{t}(x)/\partial x$ and where $\fwrate{t}(x),$
defined in (\ref{intrest}), is the instantaneous
forward rate with time to maturity $x,$ contracted at time $t.$
This also implies that the higher we choose $s$, the more interest
rate derivatives can be incorporated into the model.
If $s >1/2,$ then we can contract directly on the values of zero-coupon bond prices,
and if $s >3/2,$ then we can contract directly on the values of interest rates.
We next introduce the notion of \emph{self-financing} portfolio. We state a definition
that makes sense for mild solutions of the bond dynamics:
\begin{definition} \label{self-fin def}
A portfolio is called \emph{self-financing} if, for every $t \in \mathbb{T}$
\begin{equation} \label{self-fin prtf}
V_{t}(\theta) =V_{0}(\theta) +\int_{0}^{t} <\theta_{s}\,,\,p_{s}m_{s} \ ds+\sum_{i\in
\mathbb{I}}p_{s}\sigma_{s}^{i}dW_{s}^{i}>.
\end{equation}
\end{definition}
Given a strong solution $p$ of the bond dynamics, we have for a self-financing portfolio:
\begin{equation}
dV_{t}(\theta)=\ <\theta_{t}\,,\,dp_{t}-\partial p_{t} \ dt>. \label{b5}%
\end{equation}
Note that this is not the standard definition:\ this is because we are in the
moving frame. Changes in portfolio value are due to two causes: changes in
prices, as in the fixed frame, and also to changes in time to maturity.
For the right-hand side of (\ref{self-fin prtf}) to make mathematical sense and to
introduce later arbitrage free markets, we need a further definition.
\begin{definition} \label{def self-fin}
A portfolio $\theta$ is an admissible portfolio if $\| \theta \|_{\prtfs} < \infty,$
where
\[
\| \theta \|^{2}_{\prtfs}=
E\left[ (\int_{0}^{\timeh}|<\theta_{t}\,,\,p_{t}m_{t}>|dt)^{2}+\int
_{0}^{\timeh}\sum_{i\in\mathbb{I}}(<\theta_{t}\,,\,p_{t}\sigma_{t}^{i}%
>)^{2}dt\right].
\]
$\prtfs$ is the linear space of all admissible portfolios and $\sfprtfs$ the subspace
of self-financing portfolios.
\end{definition}
The discounted gains process $G,$ defined by
\begin{equation}
G(t,\theta)=\int_{0}^{t}(\sesq{\theta_{s}}{\zcpxd{s}\drift{s}}ds
+\sesq{\theta_{s}}{\zcpxd{s}\vol{}{s}d\wienerp{}{s}}) ,
\label{Gain * explicit}
\end{equation}
is well-defined for admissible portfolios:
\begin{proposition}\label{proposition; square integ gain*}
Assume that $\zcpxd{0},$ $\drift{}$ and $\vol{}{}$ are as in Proposition \ref{prop. boundary cond}.
If $\theta \in \prtfs,$
then $G( \cdot,\theta)$ is continuous a.s. and
$E(\sup_{t \in \mathbb{T}}(G( t,\theta))^{2}) < \infty.$
\end{proposition}
\begin{proof}
Let $\theta \in \prtfs$ and introduce $X=\sup_{t \in \mathbb{T}}|G(t,\theta)|,$ $Y(t)=\int_{0}^{t}\sesq{\theta_{s}}{\zcpxd{s}\drift{s}}ds$
and $Z(t)=\int_{0}^{t}\sesq{\theta_{s}}{\zcpxd{s}\vol{}{s} d\wienerp{}{s}}.$
Then $G(t,\theta)=Y(t)+Z(t),$ according to formula (\ref{Gain * explicit}).
Let $ \zcpxd{}$ be given by Proposition \ref{prop. boundary cond}, of which the
hypotheses are satisfied.
We shall give estimates for $Y$ and $Z.$ By the definition of $\prtfs:$
\begin{equation} \begin{split}
E((\sup_{t \in \mathbb{T}} (Y(t))^{2})
\leq E((\int_{0}^{\timeh} |\sesq{\theta_{s}}{\zcpxd{s}\drift{s}} | ds)^{2}) \leq \|\theta\|^{2}_{ \prtfs}.
\end{split} \label{wealth; self fin strat proof 1}
\end{equation}
By isometry we obtain
\begin{equation} \begin{split}
E(Z(t)^{2})
=&E(\int_{0}^{t}\sesq{\theta_{s}}{ \zcpxd{s} \sum_{i \in \mathbb{I}}\vol{i}{s} d\wienerp{i}{s}})^{2} \\
&=E(\int_{0}^{t}\sum_{i \in \mathbb{I}}(\sesq{\theta_{s}}{ \zcpxd{s} \vol{i}{s}})^{2}ds) %
\leq \|\theta\|^{2}_{ \prtfs}.
\end{split}
\label{wealth; self fin strat proof 2}
\end{equation}
Doob's $L^{2}$ inequality and inequality (\ref{wealth; self fin strat proof 2})
give $E(\sup_{t \in \mathbb{T}}Z(t)^{2}) \leq 4 \|\theta\|^{2}_{ \prtfs}.$
Inequality (\ref{wealth; self fin strat proof 1}) then gives $E(X^{2}) \leq 10 \|\theta\|^{2}_{ \prtfs},$
which proves the proposition.
\end{proof}
\begin{example} \label{Ex ZC 2} \text{} \normalfont \\
1)
The portfolio in 1) of Example \ref{Ex ZC 1} is self-financing and the portfolios
in 2) and 3) of Example \ref{Ex ZC 1} are not self-financing. \\
2) The interest rate portfolio in formula (\ref{65}) is self-financing.
\\
\end{example}
\subsection{Rollovers}
\begin{definition}
Let $S\geq0.$ An $S$-rollover is a self-financing portfolio $\theta$ of a number of zero-coupon bonds
with constant time to maturity $S$ and with initial price $V_{0}(\theta)=\zcpxd{0}(S).$
\end{definition}
It follows directly from the definition that an $S$-rollover has the same initial price
as a zero-coupon with maturity date $S.$ It also follows that, if $x_{t}$ is the number
of zero-coupon bonds in the portfolio at $t,$ then we must have:%
\[
\theta_{t}=x_{t}\delta_{S},
\]
where the real-valued process $x$ makes the portfolio self-financing.
\begin{proposition}
If $\theta_{t}$ is an $S$-rollover, then:
\begin{equation} \label{r-o eq}
x_{t}= \exp(\int_{0}^{t} \fwrate{s}(S) \ ds).
\end{equation}
\end{proposition}
\begin{proof}
The portfolio $\theta_{t}$ only contains zero-coupons with time to maturity
$S,$ so that $V_{t}(\theta) =x_{t}p_{t}(S).$ Assuming the process $x$ to be of
bounded variation it follows that:
\[
dV_{t}( \theta) =p_{t}( S) dx_{t}+x_{t}dp_{t}(S).
\]
Substituting the expression for $dp_{t}(S)$ this becomes:%
\begin{align*}
dV_{t}( \theta) & =p_{t}( S) \frac{dx_{t}}%
{dt}dt+x_{t}\partial_{x}p_{t}( S) dt+x_{t}p_{t}( S)
( m_{t}( S) dt+\sum_{i\in\mathbb{I}}\sigma_{t}^{i}(
S) dW_{t}^{i}) \\
& =( p_{t}( S) \frac{dx_{t}}{dt}+x_{t}\partial_{x}%
p_{t}( S) +x_{t}p_{t}( S) m_{t}( S)
) dt+x_{t}p_{t}( S) \sum_{i\in\mathbb{I}}\sigma_{t}%
^{i}( S) dW_{t}^{i}.
\end{align*}
According to (\ref{self-fin prtf}) the portfolio is then self-financing if and only if:%
\[
p_{t}( S) \frac{dx_{t}}{dt}+x_{t}(\partial p_{t})(S) =0.
\]
This means that:%
\[
\frac{1}{x_{t}}\frac{dx_{t}}{dt}=-\frac{1}{p_{t}( S) }%
\frac{\partial p_{t}(S)}{\partial S}=\fwrate{t}(S).
\]
and formula (\ref{r-o eq}) follows by integration. This proves the proposition,
since $x$ is then of bounded variation.
\end{proof}
\noindent In particular, if $S=0,$ then we get the usual bank account with spot rate
$r_{t}.$
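Formula (\ref{r-o eq}) is straightforward to discretize. The following Python sketch (illustrative only; the forward rate path is hypothetical, taken constant so that the closed form $x_{t}=e^{rt}$ is available for comparison) computes the number of zero-coupons in the rollover by the trapezoidal rule:

```python
import numpy as np

# Hypothetical path s -> f_s(S) of the forward rate at a fixed time to
# maturity S; constant here, so the closed form x_t = exp(r*t) is known.
r = 0.04
t_grid = np.linspace(0.0, 5.0, 501)
f_path = np.full_like(t_grid, r)

# x_t = exp(int_0^t f_s(S) ds), with the integral approximated by the
# trapezoidal rule.
increments = 0.5 * (f_path[1:] + f_path[:-1]) * np.diff(t_grid)
integral = np.concatenate(([0.0], np.cumsum(increments)))
x_t = np.exp(integral)
```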
Henceforth, we will denote by $q_{t}(S) $ the value (discounted
to $t=0$) at time $t$ of an $S$-rollover. In the preceding notation, $q_{t}(S)=V_{t}(\theta).$
Introducing the price curve of the roll-over at time $t,$
$q_{t}:[0,\infty [ \, \rightarrow \mathbb{R}$, we find that the price
dynamics of roll-overs is given by:
\begin{equation}
q_{t}=p_{0}+\int_{0}^{t}q_{s}m_{s}ds+\int_{0}^{t}q_{s}\sum_{i\in\mathbb{I}%
}\sigma_{s}^{i}dW_{s}^{i}. \label{SPDE q}%
\end{equation}
Note that, compared to the same formula for bond prices, the term in
$\partial$ has disappeared from the right-hand side.
An $S$-rollover is a bank account which needs advance notice to be cashed: if
notice is given at time $t$, the rollover will then pay $x_{t}$ units of
account at time $t+S.$ In other words, at time $t,$ when notice is given, the
rollover is exchanged for $q_{t}(S)/p_{t}(S)=x_{t}$ units of a unit
zero-coupon with time of maturity $t+S.$
As we noted earlier, zero-coupons do not in general allow buy-and-hold
strategies. However rollovers do: a constant portfolio of rollovers is always
self-financing. A general bond portfolio $\theta_{t}$ can be expressed in
terms of a portfolio of rollovers $\eta_{t}$ and vice versa.
\subsection{Absence of arbitrage opportunities.}
Let $p$ be a mild solution of the price dynamics. Suppose that $\theta_{t}$ is a self-financing portfolio such that, for almost
every $(t,\omega) \in \mathbb{T} \times \Omega,$ we have:%
\begin{equation}
\forall i\in\mathbb{I},\text{ \ }<\theta_{t}\,( \omega)
,\,p_{t}( \omega) \sigma_{t}^{i}( \omega) >=0.
\label{70}%
\end{equation}
(We note that $p_{t}(\omega) \in E^{s}$ is a function of time to maturity,
$x \mapsto p_{t}(\omega,x),$ and similarly
for $\theta_{t}$ etc.)
Then (\ref{self-fin prtf}) gives $dV_{t}(\theta)=<\theta_{t}\,,\,m_{t}p_{t}>dt$, so that
$\theta_{t}$ is risk-free. Since the spot rate is zero (after discounting
values to $t=0$), in an arbitrage free market it must follow that for almost
every $( t,\omega) $:
\begin{equation}
<\theta_{t}(\omega)\,,\,m_{t}(\omega)p_{t}(\omega)>=0. \label{71}
\end{equation}
Comparing (\ref{70}) and (\ref{71}), we find that $p_{t}(\omega)m_{t}(\omega)$
must belong to the closure of the linear span of
$\{p_{t}(\omega)\sigma_{t}^{i}(\omega)\,|\,i\in\mathbb{I}\}.$
In fact this follows rigorously using Lemma \ref{lm hedge eq}, proved independently of this subsection.
There are now two cases:
\begin{itemize}
\item $\mathbb{I}$ is finite. %
Then the linear span is finite-dimensional, and
it coincides with its closure. So there are numbers $\gamma_{t}^{i}(
\omega) ,i\in\mathbb{I}$ such that
\[ p_{t}(\omega)m_{t}(\omega)
=p_{t}(\omega)\sum_{i\in\mathbb{I}}\gamma_{t}^{i}(\omega)\sigma_{t}^{i}(\omega)
\;\; \text{ (finite sum)}.\]
Since $p_{t}( \omega) >0$
for almost every $( t,\omega) $, this leads to:%
\[
m_{t}(\omega)=\sum_{i\in\mathbb{I}}\gamma_{t}^{i}(\omega)\sigma_{t}^{i}%
(\omega)
\]
and since the processes $m$ and $\sigma^{i}$ are progressively measurable, the processes
$\gamma^{i}$ can be chosen progressively measurable as well. Note that the preceding equation holds in
$E^{s}$, and that it translates into a family of equations in $[0, \infty [ \,:$%
\[
m_{t}(\omega,x)=\sum_{i\in\mathbb{I}}\gamma_{t}^{i}(\omega)\sigma_{t}%
^{i}(\omega,x)\ \ \ \forall x\geq0
\]
or, as usual, omitting to mention the $\omega$ variable:
\[
m_{t}(x)=\sum_{i\in\mathbb{I}}\gamma_{t}^{i}\sigma_{t}^{i}(x)\ \ \ \forall
x\geq0.
\]
The $\gamma_{t}^{i}$ are the components of a market price of risk, and they do not depend on
the time to maturity $x.$ Using the volatility operator process $\sigma$ the last equality reads
\begin{equation} \label{mkt price 1}
m_{t}=\sigma_{t} \gamma_{t} \; \; \forall t \in \mathbb{T}
\end{equation}
and any $\gamma,$ progressively measurable with values in
$\ell^{2}(\mathbb{I}),$ satisfying this equation is called a market price of risk process.
\item $\mathbb{I}=\mathbb{N}.$ Then the linear span is not closed in general;
in fact, it is closed if and only if it is finite-dimensional. In that case,
we shall impose a stronger condition. To prove that the market is arbitrage-free,
we shall use that $m_{t}(\omega)$ is in the range of the volatility operator
$\sigma_{t}(\omega)$ which is a subset of the above closed linear span. So, once more
we impose that the condition (\ref{mkt price 1}) should be satisfied,
but for $\gamma$ with values in $\ell^{2}(\mathbb{I}).$ If the range of $\sigma_{t}(\omega)$
is infinite dimensional, then
this condition is indeed stronger, since $\sigma_{t}(\omega)$ is a.e. a compact operator.
\end{itemize}
In both cases, we also need that $\gamma$ satisfy some integrability condition
in $(\omega,t).$
This leads us to the following
\begin{definition} \label{market cond}
We shall say that the market is strongly arbitrage-free
if there exists a progressively measurable process $\gamma$ with values
in $\ell^{2}(\mathbb{I}),$
such that
\begin{equation}
m_{t}=\sigma_{t} \gamma_{t}, \; \;\forall t \in \mathbb{T}
\label{rel drift-sigma}%
\end{equation}
and
\begin{equation}
E\left[ \exp(a\int_{0}^{\timeh}\norm{\gamma_{t}}^{2}dt)\right] <\infty,\quad\forall a\geq0.
\label{gamma strong}%
\end{equation}
\end{definition}
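When $\mathbb{I}$ is finite, equation (\ref{rel drift-sigma}) is, at each fixed $(t,\omega),$ a finite linear system for $\gamma_{t}.$ The following Python sketch (a toy discretization with hypothetical data, not part of the model) recovers the market price of risk by least squares when the drift lies in the range of the volatility operator:

```python
import numpy as np

# Toy discretization: |I| = 3 factors, volatility curves sampled at 50
# maturities; all data hypothetical.
rng = np.random.default_rng(0)
sigma = rng.standard_normal((50, 3))       # columns are the sigma_t^i

gamma_true = np.array([0.2, -0.1, 0.05])   # market price of risk
m = sigma @ gamma_true                     # drift in the range of sigma

# Recover gamma by least squares; exact here, since m = sigma @ gamma_true.
gamma, *_ = np.linalg.lstsq(sigma, m, rcond=None)
```

When $m$ does not lie in the range of $\sigma,$ the least-squares residual is nonzero and no market price of risk exists, matching the discussion above.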
If the market is strongly arbitrage-free then, by the Girsanov theorem, a
martingale measure is given by $dQ=\xi_{\timeh}dP$, with:%
\begin{equation}
\xi_{t}=\exp\left( -\frac{1}{2}\int_{0}^{t} \norm{\gamma_{s}}^{2}ds-\int_{0}^{t}\sum_{i\in\mathbb{I}}\gamma_{s}^{i}dW_{s}
^{i}\right). \label{75}%
\end{equation}
The $\wienerq{i}{},$ $i\in \mathbb{I},$ where
\begin{equation} \label{W Q}
\tilde{W}_{t}^{i}=W_{t}^{i}+\int_{0}^{t}\gamma_{s}^{i}ds,
\end{equation}
are independent Wiener processes with respect to $Q.$ The expected
value of a random variable $X$ with respect to $Q$ is given by:
\[
E_{Q}[X]=E[\xi_{\timeh}X].
\]
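The density (\ref{75}) is an exponential martingale, so $E[\xi_{\timeh}]=1.$ The following Monte Carlo sketch in Python (a single factor with a hypothetical constant $\gamma$, purely illustrative) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, horizon = 20000, 100, 1.0
dt = horizon / n_steps
gamma = 0.3                                # hypothetical constant gamma

# Discretized Girsanov density: xi_T = exp(-0.5*gamma^2*T - gamma*W_T),
# with W_T built from independent Gaussian increments.
dW = rng.standard_normal((n_paths, n_steps)) * np.sqrt(dt)
xi_T = np.exp(-0.5 * gamma**2 * horizon - gamma * dW.sum(axis=1))

mean_xi = xi_T.mean()                      # should be close to 1
```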
Under a martingale measure, the discounted zero-coupon price process $p$
satisfies the equation
\begin{equation} \label{SPDE p Q 1}
\ p_{t}=\mathcal{L}_{t}p_{0}
+\int_{0}^{t}\mathcal{L}_{t-s}(p_{s}\sigma_{s})d\wienerq{}{s}
\end{equation}
and also the equation
\begin{equation}
p_{t}=p_{0}+\int_{0}^{t}\partial p_{s}ds+\int_{0}^{t}p_{s}\sigma_{s}\,d\wienerq{}{s}.
\label{SPDE p Q 2}%
\end{equation}
The discounted roll-over price process $q_{t}$ is given by:
\begin{equation}
q_{t}=p_{0}+\int_{0}^{t}q_{s} \sigma_{s}d\tilde{W}_{s}. \label{SPDE q Q}%
\end{equation}
\begin{lemma} \label{lm self-fin}
A portfolio $\theta$ is self-financing if and only if:
\begin{equation}
V_{t}(\theta)=V_{0}(\theta)+\int_{0}^{t}\sum_{i\in\mathbb{I}}<\theta
_{s}\,,\,p_{s}\sigma_{s}^{i}>d\tilde{W}_{s}^{i}. \label{self-fin}%
\end{equation}
\end{lemma}
We note that the integrand is in fact given by the adjoint of the operator
$b_{t}(\omega)= p_{t}(\omega)\sigma_{t}(\omega)$
from $\ell^{2}(\mathbb{I})$ to $E^{s}([0,\infty[ \,):$
\begin{equation} \label{self-fin1} %
(b_{t}(\omega)'\theta_{t})^{i}=<\theta_{t}\,,\,p_{t}\sigma_{t}^{i}>, \; \; \forall i \in \mathbb{I}.
\end{equation}
To see this, with $x_{t}^{i}(\omega)=<\theta_{t}\,,\,p_{t}\sigma_{t}^{i}>,$
rewrite it as follows: \\
for all $(t,\omega)$ and all $z \in \ell^{2}(\mathbb{I})$
\begin{align*}
\left( z,x_{t}(\omega)\right)_{\ell^{2}} & =\sum_{i\in \mathbb{I}}z^{i}<\theta_{t}\left(
\omega\right) ,\ p_{t}\left( \omega\right) \,\sigma_{t}^{i}\left(
\omega\right) >\\
& =<\theta_{t}\left( \omega\right) ,\ p_{t}\left( \omega\right)
\sum_{i\in \mathbb{I}}\sigma_{t}^{i}\left( \omega\right)z^{i} \,>\\
& =<\theta_{t}\left( \omega\right) ,\ b_{t}\left( \omega\right) z\,>
=( b_{t}\left( \omega\right)^{\prime}\theta_{t}\left( \omega\right)
,\ z )_{\ell^{2}}.
\end{align*}
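The adjoint relation (\ref{self-fin1}) can be checked in a finite-dimensional discretization, where $b_{t}(\omega)$ becomes a matrix and the pairings become Euclidean inner products. A minimal Python sketch with hypothetical data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_x, n_factors = 40, 5                 # maturity grid size, number of factors

b = rng.standard_normal((n_x, n_factors))  # stand-in for z -> p_t*sigma_t z
theta = rng.standard_normal(n_x)           # portfolio (element of the dual)
z = rng.standard_normal(n_factors)

# Adjoint relation: (z, b' theta)_{ell^2} = <theta, b z>.
lhs = z @ (b.T @ theta)
rhs = theta @ (b @ z)
```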
If the market is strongly arbitrage-free and if condition (\ref{domain sigma i})
of Theorem \ref{Th price compatible strong cond.} is satisfied, then also
condition (\ref{domain drift}) is satisfied and the
Theorem \ref{Th price compatible strong cond.} applies.
\section{Hedging of interest derivatives} \label{sect. hedging}
From now on, it will be a standing assumption that $p_{0}$ satisfies condition (\ref{p0}),
that $\vol{}{}$ satisfy conditions (\ref{63}) and (\ref{domain sigma i}) and that
the market is strongly arbitrage-free according to Definition \ref{market cond}.
Before we solve the optimal portfolio problem, we shall study the problem of
hedging a European interest rate derivative with payoff $X$ at maturity $\timeh.$
$X$ is said to be an attainable contingent claim or derivative if $V_{\timeh}(\theta)=X$
for some admissible self-financing portfolio $\theta.$
Here we are only interested in payoffs, relevant for the optimal portfolio problem
considered in these notes, i.e. $X \in L^{p}(\Omega,\mathcal{F},P)$ for every $p \geq 1$
(see Lemma \ref{X in Lp}).
We first introduce the hedging equation, the Malliavin derivative and the
Clark-Ocone representation formula, which permit the reader, if he wishes, to
proceed directly to the study of the optimization problem in the case of deterministic
$\sigma$ and $\gamma$ in \S \ref{determ case}.
Assume that $X\in L^{2}(\Omega,\mathcal{F},Q),$ where $Q$ is an equivalent martingale
measure given by (\ref{75}). Then, by the martingale
representation theorem, $X$ can be written as a stochastic integral:
\begin{equation} \label{mart decomp 1}
X=E_{Q}[X]+\int_{0}^{\timeh}\sum_{i\in\mathbb{I}}
x_{t}^{i}d\tilde{W}_{t}^{i},
\end{equation}
with:
\begin{equation} \label{mart decomp 2}
E_{Q}[\int_{0}^{\timeh} \norm{x_{t}}^{2}dt] <\infty.
\end{equation}
Comparing with equations (\ref{self-fin}) and (\ref{self-fin1}) for a self-financing portfolio, we
obtain the hedging equation
\begin{equation} \label{hedge eq 1}
b_{t}(\omega)'\theta_{t}(\omega)=x_{t}(\omega), \; \text{a.e.} \; (t,\omega),
\end{equation}
where the operator $b_{t}(\omega)=\zcpxd{t}(\omega)\vol{}{t}(\omega)$
from $\ell^{2}(\mathbb{I})$ to $E^{s}([0,\infty[ \,)$ was introduced in (\ref{self-fin1}).
Equivalently: for almost every $(t,\omega),$
\begin{equation*}
x_{t}^{i}(\omega)=\ <\theta_{t}\left( \omega\right) ,\ p_{t}\left(
\omega\right) \,\sigma_{t}^{i}\left( \omega\right) >,\;\forall
i\in\mathbb{I}\text{ }. %
\end{equation*}
We next introduce the Malliavin derivative (c.f. \cite{Nual71}), $D_{t}X,$ with respect
to $\wienerq{}{},$ at time $ t \in \mathbb{T}$ of certain
$\mathcal{F}=\mathcal{F}_{\timeh}$ measurable real random variables $X$ by: \\
\noindent
D1) $D_{t}X=0,$ if $X$ is a constant, \\
D2) $D_{t}X=h_{t},$ if $h \in L^{2}(\mathbb{T}, \ell^{2}(\mathbb{I}))$ and
$X=\int_{0}^{\timeh}\sum_{i\in\mathbb{I}}h_{t}^{i}d\tilde{W}_{t}^{i},$ \\
D3) $D_{t}(X Y)=X D_{t}Y+ Y D_{t} X.$ \\
\noindent
The algebra of such random variables is dense in $ L^{2}(\Omega,\mathcal{F},Q),$
which can be used to extend the definition to larger sets.
$D_{t}X$ takes its values in $\ell^{2}(\mathbb{I}).$
The partial derivative, with respect to $\wienerq{i}{},$ $D_{i,t}X,$ is the $i$-th component of
$D_{t}X.$
We will use the following
expression for the Malliavin derivative of an It\^o stochastic integral:
\begin{equation} \label{malliavin 1}
D_{t}\int_{0}^{\timeh}\sum_{i\in\mathbb{I}}x_{s}^{i}d\tilde{W}_{s}^{i}
=x_{t}+\int_{t}^{\timeh}\sum_{i\in\mathbb{I}}(D_{t}x_{s}^{i})d\tilde{W}_{s}^{i},
\end{equation}
when almost all the $x_{s}^{i}$ are Malliavin differentiable and sufficiently integrable.
In the case when $X$ is Malliavin differentiable, the
Clark-Ocone representation formula states that the integrand $x_{t}$ in
(\ref{mart decomp 1}) is given by
\begin{equation}
x_{t}=E_{Q}\left[ D_{t}X\ |\ \mathcal{F}_{t}\right].
\label{Clark-Ocone}%
\end{equation}
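As a simple worked check of (\ref{Clark-Ocone}) (an illustration, not part of the development): take $X=(\tilde{W}_{\timeh}^{i})^{2}$ for a fixed $i\in\mathbb{I}.$ Rules D2) and D3) give $D_{t}X=2\tilde{W}_{\timeh}^{i}e_{i},$ where $e_{i}$ denotes the $i$-th standard basis vector of $\ell^{2}(\mathbb{I}),$ so (\ref{Clark-Ocone}) yields
\[
x_{t}=E_{Q}[2\tilde{W}_{\timeh}^{i}\,|\,\mathcal{F}_{t}]\,e_{i}=2\tilde{W}_{t}^{i}e_{i}.
\]
This agrees with It\^o's formula, which gives
$(\tilde{W}_{\timeh}^{i})^{2}=\timeh+\int_{0}^{\timeh}2\tilde{W}_{t}^{i}\,d\tilde{W}_{t}^{i},$
in accordance with (\ref{mart decomp 1}) and $E_{Q}[X]=\timeh.$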
We now come back to the hedging equation (\ref{hedge eq 1}).
The fact that $\theta_{t}=\delta_{0}$ is a solution to the homogeneous %
equation (\ref{hedge eq 1}) permits us to construct self-financing solutions of
the inhomogeneous equation (\ref{hedge eq 1}) from solutions which are not self-financing:
\begin{lemma} \label{lm hedge eq}
If $\bar{\theta}$ is an admissible portfolio (not necessarily
self-financed) which satisfies (\ref{hedge eq 1}), then there is a unique
self-financing admissible portfolio $\theta_{t}$ such that the difference
$\theta_{t}-\bar{\theta}_{t}$ is risk-free. It is given by:
\begin{equation} \label{sol hedge eq}
\theta_{t} =a_{t}\delta_{0}+\bar{\theta}_{t},
\end{equation}
\begin{equation} \label{sol hedge eq1}
a_{t} =\frac{1}{p_{t}(0)}\left[ E_{Q}[X\,|\,\mathcal{F}_{t}]-V_{t}%
(\bar{\theta})\right].
\end{equation}
\end{lemma}
\begin{proof}
We here omit the argument $\omega.$ Since the portfolio $\theta_{t}-\bar{\theta}_{t}$
is risk-free, it must be concentrated at time to maturity $0$, so
formula (\ref{sol hedge eq}) holds by definition.
Substituting (\ref{sol hedge eq}) into equation (\ref{hedge eq 1}), and bearing in mind that
$\sigma_{t}^{i}(0)=0$:
\begin{align*}
((p_{t}\sigma_{t})'\theta_{t})^{i} & =\ <\theta_{t},\ p_{t}\,\sigma_{t}^{i}>\ =\ <a_{t}\delta_{0}+\bar{\theta}_{t},\ p_{t}\,\sigma_{t}^{i}>\\
& =\ a_{t}\ p_{t}(0)\ \sigma_{t}^{i}(0)+<\bar{\theta}_{t},\ p_{t}\,\sigma_{t}^{i}>
\ =\ <\bar{\theta}_{t},\ p_{t}\,\sigma_{t}^{i}>\\
& =x_{t}^{i}\ \ \ \forall i\in\mathbb{I}.
\end{align*}
So $\theta_{t}$ satisfies (\ref{hedge eq 1}). It is then a hedging portfolio of $X$
if $V_{t}(\theta)=E_{Q}[X\,|\,\mathcal{F}_{t}].$ Substituting again
(\ref{sol hedge eq}) and then (\ref{sol hedge eq1}), we get:
\[
V_{t}(\theta)=a_{t}V_{t}(\delta_{0})+V_{t}(\bar{\theta})
=a_{t}p_{t}(0)+V_{t}(\bar{\theta})
= E_{Q}[X\,|\,\mathcal{F}_{t}].
\]
If $\bar{\theta}$ is an admissible portfolio, then $\theta$ is also admissible,
since $\| \theta \|_{\prtfs}=\| \bar{\theta} \|_{\prtfs}.$
\end{proof}
By the lemma, the construction of a hedging portfolio for $X$ is reduced to
solving equation (\ref{hedge eq 1}) in $\theta_{t}( \omega )$
for every $\left( t,\omega\right),$ in such a way that $\theta \in \prtfs,$
i.e. $\theta$ is admissible. Any such solution $\theta$ of this equation constitutes
the risky part of the portfolio.
To solve equation (\ref{hedge eq 1}), for given $(t,\omega ),$ we have to know if
$x_{t}(\omega)$ is in the range of the operator $b_{t}(\omega)'.$
The closure of the range of $b_{t}(\omega)'$ is equal to
the orthogonal complement $(\mathcal{K}(b_{t}(\omega)))^{\perp}$
of the kernel $\mathcal{K}(b_{t}(\omega))$ of $b_{t}(\omega).$
Consider the case of a finite $\mathbb{I}$:
The range $\mathcal{R}((b_{t}(\omega))')$ is then closed, since it is finite dimensional.
The kernel $\mathcal{K}(b_{t}(\omega))$ is trivial
iff the $p_{t}\left( \omega\right) \,\sigma_{t}^{i}\left( \omega\right)$
are linearly independent.
So $(b_{t}(\omega))'$ is
surjective, and there is a (non-unique) solution $\theta_{t}( \omega )$ for every $x_{t}(\omega),$
iff the $p_{t}\left( \omega\right) \,\sigma_{t}^{i}\left( \omega\right)$
are linearly independent.
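To make the finite-dimensional picture concrete, here is a small numerical sketch (all matrices, dimensions and random data are hypothetical stand-ins, not objects of the model): the rows of $M$ play the role of the vectors $p_{t}\,\sigma_{t}^{i}$ sampled on a grid of maturities, their linear independence makes $(b_{t})'$ surjective, and the solution is non-unique.

```python
import numpy as np

# Finite-dimensional sketch (hypothetical data): portfolios live on a grid of
# N times to maturity, so the pairing <theta, p_t sigma_t^i> becomes a dot product,
# and (b_t)' is represented by the n x N matrix M whose row i stands in for p_t sigma_t^i.
rng = np.random.default_rng(0)
N, n = 50, 3                            # grid size, Card(I) = n
M = rng.standard_normal((n, N))         # row i ~ p_t sigma_t^i sampled on the grid
assert np.linalg.matrix_rank(M) == n    # rows linearly independent <=> trivial kernel

x = rng.standard_normal(n)              # target integrand x_t
theta = np.linalg.pinv(M) @ x           # minimum-norm solution of M @ theta = x
assert np.allclose(M @ theta, x)        # (b_t)' is surjective onto l^2(I)

# Non-uniqueness: adding any element of ker(M) gives another solution.
null_basis = np.linalg.svd(M)[2][n:].T  # columns span ker(M)
theta2 = theta + null_basis @ rng.standard_normal(N - n)
assert np.allclose(M @ theta2, x)
```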
Consider the case of an infinite $\mathbb{I}$:
The map $(b_{t}(\omega))'$ from $E^{-s}([0,\infty[ \,)$ to $\ell^{2}(\mathbb{I})$
is then never surjective. In fact, $b_{t}(\omega)$ is a Hilbert-Schmidt operator,
so it is compact. The adjoint is then also compact and since $\ell^{2}(\mathbb{I})$
is infinite dimensional, its range must be a proper subspace of $\ell^{2}(\mathbb{I}).$
This is the basic reason why there are always non-attainable contingent claims,
when $\mathbb{I}$ is infinite.
We have the following result
(see Th.4.1 and Th.4.2 of \cite{E.T Bond Completeness} for the case $\mathbb{I}=\mathbb{N}$):
\begin{theorem} \label{Th D0 non complete and approx complete}
Let $\derprod{}{0}=\cap_{p \geq 1} L^{p}(\Omega,P,\mathcal{F}).$ \\
$i)$ If $\mathbb{I}=\mathbb{N},$ then there exists $X \in \derprod{}{0}$
such that $\prtfpxd{\timeh}(\theta) \neq X$ for all $\theta \in \sfprtfs.$ \\
$ii)$ $\derprod{}{0}$ has a dense subspace of attainable contingent claims
if and only if the operator $\vol{}{t}(\omega)$ has
a trivial kernel a.e. $(t,\omega) \in \mathbb{T} \times \Omega.$
\end{theorem}
Statement $ii)$ says by definition that the bond market is approximately complete
(a notion introduced in \cite{Bj-Ka-Ru97} and \cite{Bj-Ma-Ka-Ru97})
if and only if $\vol{}{t}(\omega)$ has a trivial kernel a.e.
In the remainder of this section, we are interested in the hedging problem for
approximately complete markets, so
we only consider the solution of the hedging equation (\ref{hedge eq 1})
in the case when $\vol{}{t}(\omega)$ has a trivial kernel a.e.
$(t,\omega) \in \mathbb{T} \times \Omega.$
Consider now the case when $\mathbb{I}=\mathbb{N}$ is infinite and let $\ell^{2}= \ell^{2}(\mathbb{I}).$
To derive a condition under which (\ref{hedge eq 1}) has a solution
and to derive a closed formula for one of the solutions,
we rewrite the l.h.s. of (\ref{hedge eq 1}) using the notations
\begin{equation} \label{B and l}
l_{t}=\ltrans{t}\zcpxd{0}, \; B_{t}(\omega)=l_{t} \vol{}{t}(\omega)
\; \text{and } \; \eta_{t}(\omega)=\sequiv^{-1}(\zcpxd{t}(\omega)/l_{t})\theta_{t}(\omega).
\end{equation}
Then
\begin{equation*}
\begin{split}
(\vol{}{t}(\omega))'&\zcpxd{t}(\omega)\theta_{t}(\omega)
=(\vol{}{t}(\omega))'l_{t}(\zcpxd{t}(\omega)/l_{t})\theta_{t}(\omega)
=(l_{t} \vol{}{t}(\omega))'(\zcpxd{t}(\omega)/l_{t})\theta_{t}(\omega) \\
&=(l_{t} \vol{}{t}(\omega))^{*}\sequiv^{-1}(\zcpxd{t}(\omega)/l_{t})\theta_{t}(\omega)
=(B_{t}(\omega))^{*}\eta_{t}(\omega).
\end{split}
\end{equation*}
The linear operator $B_{t}(\omega)$ is given, since $\zcpxd{0}$
and $\vol{}{t}(\omega)$ are supposed given.
Applying Theorem \ref{Th price compatible strong cond.} to the factor $\zcpxd{}/l,$
it follows that equation (\ref{hedge eq 1}) is equivalent to finding a progressive $E^{s}$-valued process
$\eta$ satisfying
the equation
\begin{equation} \label{prtf eq 2}
(B_{t}(\omega))^{*}\eta_{t}(\omega)=x_{t}(\omega), \; \text{a.e. } (t,\omega) \in \mathbb{T} \times \Omega.
\end{equation}
We define the self-adjoint operator $A_{t}(\omega)$ in $\ell^{2}$ by
\begin{equation}
A_{t}(\omega)=(B_{t}(\omega))^{*}B_{t}(\omega).
\label{At}
\end{equation}
It is a fact of basic Hilbert space operator theory (cf. \cite{Kato66}) that
$\mathcal{R}((B_{t}(\omega))^{*})=\mathcal{R}((A_{t}(\omega))^{1/2}).$
The solvability of each one of equations (\ref{hedge eq 1}) and (\ref{prtf eq 2})
is therefore equivalent to the existence of a progressive $\ell^{2}$-valued process
$z$ satisfying
\begin{equation} \label{hedge eq l2}
(A_{t}(\omega))^{1/2}z_{t}(\omega)=x_{t}(\omega), \; \text{a.e. } (t,\omega) \in \mathbb{T} \times \Omega.
\end{equation}
The kernel $\mathcal{K}((A_{t}(\omega))^{1/2})$ is trivial since
$\mathcal{K}((A_{t}(\omega))^{1/2})=\mathcal{K}(A_{t}(\omega))=\mathcal{K}(B_{t}(\omega))$ $=\{0\}.$
Now, if $x_{t}(\omega) \in \mathcal{R}((B_{t}(\omega))^{*})$ then the unique solution of (\ref{hedge eq l2})
is $z_{t}(\omega)=((A_{t}(\omega))^{1/2})^{-1}x_{t}(\omega)$ and a solution of (\ref{prtf eq 2})
is given by
\begin{equation}
\eta_{t}(\omega)=S_{t}(\omega)(A_{t}(\omega))^{-1/2}x_{t}(\omega),
\label{prtf eq 3}
\end{equation}
where $S_{t}(\omega),$ the closure of the operator $B_{t}(\omega)(A_{t}(\omega))^{-1/2},$
is isometric (cf. \cite{Kato66}) from $\ell^{2}$ to $E^{s}.$
Let $a$ be as in (\ref{sol hedge eq1}) and %
\begin{equation}
\theta =a \delta_{0}+\bar{\theta} \; \text{and} \; \bar{\theta}_{t}=(l_{t}/\zcpxd{t}) \sequiv \eta_{t}. %
\label{prtf eq 4}
\end{equation}
Then $\theta $ is a hedging portfolio according to Lemma \ref{lm hedge eq}.
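The construction $\eta_{t}=S_{t}(A_{t})^{-1/2}x_{t}$ can be checked in finite dimensions, where $S_{t}(A_{t})^{-1/2}=B_{t}(A_{t})^{-1}$; the sketch below (with a hypothetical matrix $B$ standing in for $B_{t}(\omega)$) verifies both that $\eta$ solves $B^{*}\eta=x$ and that $S$ is isometric.

```python
import numpy as np

# Finite-dimensional stand-in (hypothetical sizes): B plays the role of B_t(omega),
# an injective map from l^2 (dim n) into the state space (dim N).
rng = np.random.default_rng(1)
n, N = 4, 60
B = rng.standard_normal((N, n))

A = B.T @ B                          # A_t = B_t^* B_t, symmetric positive definite
x = rng.standard_normal(n)

# eta = S A^{-1/2} x with S = B A^{-1/2}, i.e. eta = B A^{-1} x
eta = B @ np.linalg.solve(A, x)
assert np.allclose(B.T @ eta, x)     # eta solves the hedging equation B^* eta = x

# S = B A^{-1/2} is isometric: S^T S = A^{-1/2} B^T B A^{-1/2} = Id
w, V = np.linalg.eigh(A)
S = B @ (V @ np.diag(w ** -0.5) @ V.T)
assert np.allclose(S.T @ S, np.eye(n))
```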
In order to ensure that $x_{t}(\omega)$ of
(\ref{hedge eq 1}) is in the range of $(\dualvol{}{t}\zcpxd{t})(\omega),$
we introduce spaces $\ell^{s,2},$ of vectors decreasing faster (for $s>0$) than those
of $\ell^{2}.$ For $s \in \mathbb{R},$ let $\ell^{s,2}$ be the Hilbert space of real sequences
endowed with the norm
\begin{equation}
\|x\|_{\ell^{s,2}}=(\sum_{i \in \mathbb{N}}(1+i^{2})^{s}|x^{i}|^{2})^{1/2}.
\label{ls,2}
\end{equation}
Obviously $\ell^{2}=\ell^{0,2}$ and $\ell^{s',2} \subset \ell^{s,2},$ if $s' \geq s.$
Although $(A_{t}(\omega))^{-1/2}$ is an unbounded operator in $\ell^{2},$ its restriction
to $\ell^{s,2}$ can be a bounded operator for some sufficiently large $s>0,$ i.e.
$(A_{t}(\omega))^{-1/2}\ell^{s,2} \subset \ell^{2}.$ This is the idea of our assumption,
which will ensure hedgeability. However, a precise formulation of this assumption
must, as in the case of a finite number of Bm., take care of integrability properties in $(t,\omega).$
To consider also the case of a finite $\mathbb{I},$
we define, after obvious modifications, the operator
$A_{t}(\omega)$ in $\ell^{2}(\mathbb{I})$ by formula (\ref{At}). In this case
$A_{t}(\omega)$ obviously has a bounded inverse.
\begin{condition} \label{uniform cond sigma}
$i)$ If $Card(\mathbb{I}) < \infty,$ then there exists $k \in \derprod{}{0},$
such that for all $x \in \ell^{2}(\mathbb{I}):$
\begin{equation}
\|x\|_{\ell^{2}}
\leq k(\omega) \|(A_{t}(\omega))^{1/2}x \|_{\ell^{2}} \;
\text{a.e.} \; (t,\omega) \in \mathbb{T} \times \Omega.
\label{uniform cond sigma eq Rm}
\end{equation}
$ii)$ If $\mathbb{I}=\mathbb{N},$ then there exists $s >0$ and
$k \in \derprod{}{0},$ such that for all $x \in \ell^{2}(\mathbb{I}):$
\begin{equation}
\|x\|_{\ell^{2}}
\leq k(\omega) \|(A_{t}(\omega))^{1/2}x \|_{\ell^{s,2}} \; \text{a.e.} \; (t,\omega) \in \mathbb{T} \times \Omega.
\label{uniform cond sigma eq}
\end{equation}
\end{condition}
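In finite dimensions, Condition \ref{uniform cond sigma} $i)$ amounts to a lower bound on the smallest singular value of $B_{t}(\omega)$; a numerical sketch (with a hypothetical matrix $B$ standing in for $B_{t}(\omega)$):

```python
import numpy as np

# Finite-dimensional check (hypothetical B standing in for B_t(omega)):
# since ||A^{1/2} x||^2 = x.T @ A @ x = ||B x||^2, the bound
# ||x|| <= k ||A^{1/2} x|| holds for all x iff k >= 1/s_min(B).
rng = np.random.default_rng(4)
n, N = 3, 40
B = rng.standard_normal((N, n))

s_min = np.linalg.svd(B, compute_uv=False)[-1]
k = 1.0 / s_min

X = rng.standard_normal((n, 1000))          # random test vectors x
lhs = np.linalg.norm(X, axis=0)             # ||x||
rhs = k * np.linalg.norm(B @ X, axis=0)     # k ||A^{1/2} x|| = k ||B x||
assert np.all(lhs <= rhs + 1e-9)
```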
In the case of a finite number of Bm., Condition \ref{uniform cond sigma} $i)$ leads
to a complete market, and one can choose a hedging portfolio depending continuously on
the claim to be hedged. To state the result, let us introduce the notation
$\derprod{}{0}(F)=\cap_{p \geq 1} L^{p}(\Omega,P,\mathcal{F},F),$ where $F$ is a Banach space,
and $\derprod{}{0}=\derprod{}{0}(\mathbb{R}).$
\begin{theorem}[Finite number of random-sources, $Card(\mathbb{I}) < \infty$]
\label{th completeness R^m} %
\text{} \\
If $(i)$ of Condition \ref{uniform cond sigma} is satisfied and
if $X \in \derprod{}{0},$ then the portfolio given by equation (\ref{prtf eq 4})
satisfies $\theta \in \sfprtfs$ and $\prtfpxd{\timeh}(\theta)=X.$
Moreover the linear mapping
$\derprod{}{0} \ni X \mapsto \theta \ \in \prtfs \cap \derprod{}{0}(L^{2}(\mathbb{T},\dualE{})),$
is continuous.
\end{theorem}
\begin{proof}
We only outline the proof of the theorem.
Here $\ell^{2}=\ell^{2}(\mathbb{I})=\mathbb{R}^{\numrand}$ is finite dimensional. \\
Let $X \in \derprod{}{0}$ and let $x$ be given by (\ref{mart decomp 1}).
First one proves (see Lemma 3.1 of \cite{E.T Bond Completeness})
that
\begin{equation} \label{D 1}
\derprod{}{0}(F)=\cap_{p \geq 1} L^{p}(\Omega,Q,\mathcal{F},F).
\end{equation}
Applying the BDG inequalities to equation (\ref{mart decomp 1}) it follows that
\begin{equation} \label{proof R^m 1}
x \in \derprod{}{0}(L^{2}(\mathbb{T},\ell^{2})),
\end{equation}
where $x$ is progressively measurable. The definition of $\eta$ in (\ref{prtf eq 3})
and the condition (\ref{uniform cond sigma eq Rm}) give
\[
\|\eta_{t}(\omega)\|_{\ell^{2}}
\leq k(\omega) \|x_{t}(\omega) \|_{\ell^{2}}.
\]
Relation (\ref{proof R^m 1}) then leads to $\eta \in \derprod{}{0}(L^{2}(\mathbb{T},\E{})).$
Using the definition (\ref{prtf eq 4}) of $\bar{\theta}$ we then obtain
\begin{equation} \label{proof R^m 2}
\bar{\theta} \in \derprod{}{0}(L^{2}(\mathbb{T},\dualE{})).
\end{equation}
Since $\bar{\theta}$ satisfies equation (\ref{hedge eq 1})
by construction, and since formulas (\ref{proof R^m 1}) and (\ref{proof R^m 2})
show that $\bar{\theta}$ is admissible,
the hypotheses of Lemma \ref{lm hedge eq} are satisfied, so $\theta \in \sfprtfs.$
This shows that $\theta$ is a hedging portfolio of $X.$
All the linear maps $X \mapsto x \mapsto \eta \mapsto \theta$ are continuous
in the above spaces, which also proves the claimed continuity of the map $X \mapsto \theta.$
\end{proof}
The solution of the hedging problem, given by Theorem \ref{th completeness R^m},
is highly non-unique, since when $Card(\mathbb{I})=\numrand < \infty$
the kernel $\mathcal{K}((\dualvol{}{t}\zcpxd{t})(\omega))$ has infinite dimension.
For instance, there is a hedging portfolio $\hat{\vartheta}$ consisting of $\numrand+1$
rollovers at any time. %
To state the result in the case of an infinite number of Bm., we first introduce
spaces of contingent claims $\derprod{}{s},$ smaller than $\derprod{}{0}$ if $s>0$
and corresponding to that the integrand $x$ in (\ref{mart decomp 1}) takes values
in $\ell^{s,2}.$ More precisely, for $s>0$ let
\begin{equation} \label{def D_s}
\derprod{}{s} =\{X \in \derprod{}{0} \; | \; x \in \derprod{}{0}(L^{2}(\mathbb{T},\ell^{s,2}))
\; \text{where $x$ is given by (\ref{mart decomp 1})} \}.
\end{equation}
Condition \ref{uniform cond sigma} $ii)$ leads to a $\derprod{}{s}$\textit{-complete} market,
i.e. $\derprod{}{s}$ is a space of attainable contingent claims,
$\derprod{}{s}$ is a dense subspace of $\derprod{}{0}$ and $\derprod{}{s}$ is itself
a complete topological vector space. This concept gives a natural framework for
studying the existence and continuity of hedging portfolios.
We have (see Theorem 4.3 of \cite{E.T Bond Completeness}):
\begin{theorem}[Infinite number of random-sources $\mathbb{I}=\mathbb{N}$] \text{} \\
\label{th completeness l^2} %
If $(ii)$ of Condition \ref{uniform cond sigma} is satisfied
and if $X \in \derprod{}{s},$ where $s>0$ is given by Condition \ref{uniform cond sigma},
then the portfolio given by equation (\ref{prtf eq 4})
satisfies $\theta \in \sfprtfs$ and $\prtfpxd{\timeh}(\theta)=X.$
Moreover the linear map
$\derprod{}{s} \ni X \mapsto \theta \in \prtfs \cap \derprod{}{0}(L^{2}(\mathbb{T},\dualE{})),$
is continuous.
\end{theorem}
For the proof, which only uses elementary spectral properties of self-adjoint
operators and compact operators, the reader is referred to \cite{E.T Bond Completeness}.
A Malliavin-Clark-Ocone formalism was recently adapted in reference \cite{Carmona-Tehr}
to the construction of hedging portfolios in a Markovian context, with a volatility
operator that is Lipschitz continuous in the bond price. This guarantees that the
Malliavin derivative of the bond price is proportional to the volatility operator
(formula (30) of \cite{Carmona-Tehr}).
Hedging is then achieved for a restricted class of claims, namely European claims
that are Lipschitz continuous functions of the bond price at maturity.
References \cite{DeDonno Pratelli} and \cite{Pham 2003} study the hedging
problem in the weaker sense of approximate hedging, which in our context simply boils down
to the well-known existence of the integrand $x$ in the decomposition (\ref{mart decomp 1}).
\section{Optimal portfolio management}
We now consider an investor, characterized by a von Neumann-Morgenstern
utility function $U$, an initial wealth $v,$ and a horizon $\timeh$. The money is
invested in a market portfolio, and the investor seeks to maximize the
terminal (discounted) value $V_{\timeh}(\theta)$ of the portfolio. Transaction costs and
taxes are neglected. The optimal portfolio problem is then to find an
admissible self-financing portfolio $\hat{\theta}$ with $V_{0}(\hat{\theta})=v,$
such that:
\[
\text{($P_{0}$)}\left\{
\begin{array}
[c]{c}%
\sup E_{P}\left[ U\left( V_{\timeh}(\theta)\right) \right] =E_{P}[U(V_{\timeh}(\hat
{\theta}))]\\
V_{0}(\theta)=v\\
\theta\in\mathsf{P}_{sf}.
\end{array}
\right.
\]
We will follow the now classical two-step approach (cf. \cite{Kr-Scha}, \cite{Pliska86})
towards solving that problem.
If the portfolio is self-financing and is worth $v$ at time $0$, then, by the
martingale property:%
\[
E_{P}\left[ \xi_{\timeh}V_{\timeh}(\theta)\right] =v
\]
where the random variable $\xi_{\timeh},$ arising from Girsanov's theorem, was
introduced earlier in (\ref{75}). In general there can be several possible $\xi_{\timeh},$
one for each $\gamma$ satisfying the conditions of Definition \ref{market cond}.
The first step (optimization) consists of finding for given $\gamma,$ among %
$\mathcal{F}_{\timeh}$-measurable random variables $X$ such that $E_{P}\left[
\xi_{\timeh}X\right] =v$, the one(s) that maximize expected utility $E_{P}%
[U\left( X\right) ]$. This problem has in our setting a general solution
$\hat{X},$ given by Proposition \ref{exist unique X}. The second one (accessibility)
consists in hedging one of the contingent claims $\hat{X},$ obtained for the different
$\gamma,$ by a self-financing portfolio
$\hat{\theta}.$ This portfolio is then a solution of the optimal portfolio problem
($P_{0}$). By concavity, the final optimal wealth $V_{\timeh}(\hat{\theta})$ is unique.
\subsection{Optimization}
We consider,
for a given $\gamma$ satisfying the conditions of Definition \ref{market cond},
the optimization problem:%
\[
\left\{
\begin{array}
[c]{c}%
\sup E_{P}\left[ U\left( X\right) \right] \\
E_{P}\left[ \xi_{\timeh}X\right] =v\\
X\in L^{2}\left( \Omega,\mathcal{F}_{\timeh},P\right)
\end{array}
\right.
\]
We can rewrite it in a more geometric way, involving the scalar product
in $L^{2}\left( \Omega,\mathcal{F}_{\timeh},P\right) $:%
\[
\text{(P)}\left\{
\begin{array}
[c]{c}%
\sup\int_{\Omega}U\left( X\right) dP\\
\int_{\Omega}\xi_{\timeh}XdP=(\xi_{\timeh},X)_{L^{2}}=v \\
X\in L^{2}\left( \Omega,\mathcal{F}_{\timeh},P\right)
\end{array}
\right.
\]
Problem (P) consists of maximizing a concave function on a closed linear
subspace of $L^{2}$. Assume there is a maximizer $\hat{X}$. If the usual
theory of Lagrange multipliers applies, there will be some $\lambda\in \mathbb{R}$ such
that $\hat{X}$ actually optimizes the functional
\[
\int_{\Omega}\left[ U\left( X\right) -\lambda\xi_{\timeh}X\right] dP
\]
over all of $L^{2}$. Maximizing pointwise under the integral, and bearing in
mind that $U$ is concave, we are led to the equation:%
\begin{equation}
U^{\prime}\left( \hat{X}\left( \omega\right) \right) =\lambda\xi
_{\timeh}\left( \omega\right) \text{ \ }P\text{-a.e.}, \label{5}%
\end{equation}
which fully characterizes the solution $\hat{X}$. Unfortunately this program
cannot be carried through, for the function $E_{P}\left[ U\left( X\right)
\right]$ has no point of continuity in $L^{2}$ unless $U$ is bounded, so the
constraint qualification conditions do not hold for problem (P), cf. \cite{I.E.-R.T}.
We will
therefore proceed by a roundabout way:\ use (\ref{5}) to define $\hat{X}$, and
then prove that $\hat{X}$ is optimal for a suitable choice of $\lambda$. For
this, we need some conditions on $U$.
\begin{definition} \label{85}
The utility function $U$ will be called \emph{admissible} if it
satisfies the following properties:
\begin{enumerate}
\item $U:\mathbb{R}\rightarrow\left\{ -\infty\right\} \cup \mathbb{R}$ is concave and upper semi-continuous
\item there is some $a\in\left\{ -\infty\right\} \cup \ ]- \infty,0],$ such that $U\left(
x\right) =-\infty$ if $x<a$ and $U\left( x\right) >-\infty$ if $x>a$
\item $U$ is twice differentiable on the interval $A= \ ]a,\ \infty\lbrack$; set
$B=U^{\prime}\left( A\right) $
\item $\sup B=+\infty;$ $\inf B =0$ or $\inf B =- \infty.$
\item $U^{\prime}:A\rightarrow B$ is one-to-one, and there are some
positive constants $r, c_{1}, c_{2}$ and $c_{3}$
such that its inverse $I=\left[ U^{\prime}\right] ^{-1}$ satisfies the
estimate $\vert I( y)\vert + \vert y I'( y)\vert \leq c_{1}+c_{2}\left\vert
y\right\vert ^{r}+c_{3}\left\vert y\right\vert ^{-r}$ for $y\in B$.
\end{enumerate}
\end{definition}
It follows from these assumptions that $I$ is continuous and strictly
decreasing, with:
\begin{align*}
I\left( \lambda\right) & \rightarrow+\infty\text{ when }\lambda
\rightarrow\inf B\\
I\left( \lambda\right) & \rightarrow a\text{ when }\lambda\rightarrow
+\infty.
\end{align*}
We note that the estimate, in point $5)$ of Definition \ref{85}, is satisfied iff
there exists $C \geq 0$ such that
\[
\vert I( y)\vert + \vert y I'( y)\vert
\leq C \ ( |y|^{r}+ |y|^{-r}),
\]
for all $y\in B.$
All usual utility functions are admissible:
\begin{example} \label{U example} \text{} \\
i) Quadratic utility;
Set $U\left( x\right) =\mu x-\frac{1}{2} x^{2},$ $\mu \in \mathbb{R}.$ Then
$a=-\infty$, and $U^{\prime}\left( x\right) =\mu - x,$ so that $B=\mathbb{R}$ and
$I\left( y\right) = \mu -y.$ The estimate is satisfied with $r=1.$ \\
ii) Exponential utility; Set $U\left( x\right) =1- \frac{1}{\mu} \exp\left( -\mu x\right),$
$\mu > 0.$
Then $a=-\infty$, and $U^{\prime}\left( x\right) =\exp\left(
-\mu x\right) ,$ so that $B=]0,\ \infty [$ and $I\left( y\right)
=-\frac{1}{\mu}\ln\left( y\right) $. The estimate is satisfied for any $r>0.$ \\
iii) Power utility; Set $U\left( x\right) =\frac{1}{\mu}x^{\mu}$ for some
$\mu<1$ and $\mu \neq 0$ (note that $\mu$ may be negative). Then $a=0$, and $U^{\prime
}\left( x\right) =x^{\mu-1}$, so that $B=]0,\ \infty\lbrack$ and
$I\left( y\right) = y^{1/( \mu-1) }$. The
estimate is satisfied with $r=\frac{1}{1-\mu}.$ \\
iv) Logarithmic utility; Set $U\left( x\right) =\ln x$. Then $a=0$ and $U^{\prime
}\left( x\right) =\frac{1}{x}$, so that $B=]0,\ \infty\lbrack$ and $I\left(
y\right) =\frac{1}{y}$. The estimate is satisfied with $r=1.$
\end{example}
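The computations of the example can be verified numerically; the sketch below checks, for the power utility with $\mu=1/2$ (an arbitrary choice), that $I$ inverts $U^{\prime}$ and that the estimate of point $5)$ of Definition \ref{85} holds with $r=1/(1-\mu)$:

```python
import numpy as np

# Power utility U(x) = x**mu / mu with mu = 1/2 (an arbitrary admissible choice):
# check that I(y) = y**(1/(mu-1)) inverts U' and that the estimate of point 5
# of the admissibility definition holds with r = 1/(1-mu).
mu = 0.5
r = 1.0 / (1.0 - mu)

U_prime = lambda x: x ** (mu - 1.0)
I = lambda y: y ** (1.0 / (mu - 1.0))
I_prime = lambda y: (1.0 / (mu - 1.0)) * y ** (1.0 / (mu - 1.0) - 1.0)

y = np.linspace(0.01, 100.0, 10_000)
assert np.allclose(U_prime(I(y)), y)                 # I = (U')^{-1}

lhs = np.abs(I(y)) + np.abs(y * I_prime(y))
C = 1.0 + 1.0 / (1.0 - mu)                           # here C = 3
assert np.all(lhs <= C * (y**r + y**(-r)) + 1e-12)   # the admissibility estimate
```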
Take some $\lambda\in B$ and a $\gamma$ satisfying the conditions of
Definition \ref{market cond}, and define a random variable $X_{\lambda}$ by:%
\[
X_{\lambda}\left( \omega\right) =I\left( \lambda\xi_{\timeh}\left(
\omega\right) \right).
\]
$X_{\lambda}$ is $\mathcal{F}_{\timeh}$-measurable. In addition, we have:
\begin{lemma} \label{X in Lp}
$X_{\lambda}\in L^{p}\left( \Omega,\mathcal{F}_{\timeh},P\right) $ for every
$p \geq 1$.
\end{lemma}
\begin{proof}
Since $U$ is admissible, we know from condition 5 that, for some $r>0,$ we
have:%
\begin{align*}
\left\vert I\left( \lambda\xi_{\timeh}\right) \right\vert ^{p} & \leq\left(
c_{1}+c_{2}\left\vert \lambda\xi_{\timeh}\right\vert ^{r}+c_{3}\left\vert
\lambda\xi_{\timeh}\right\vert ^{-r}\right) ^{p}\\
& \leq k_{1}+k_{2}\left\vert \lambda\right\vert ^{pr}\left\vert \xi
_{\timeh}\right\vert ^{pr}+k_{3}\left\vert \lambda\right\vert ^{-pr}\left\vert
\xi_{\timeh}\right\vert ^{-pr}%
\end{align*}
and the right-hand side is integrable, for we know that $\xi_{\timeh}^{s} \in
L^{1}\left( \Omega,\mathcal{F}_{\timeh},P\right) $ for every $s \in \mathbb{R}.$
\end{proof}
\begin{lemma} Let $v \in A.$
There is a unique $\hat{\lambda}\in B$ %
such that
$E_{P}\left[X_{\hat{\lambda}}\xi_{\timeh}\right] =v.$
\end{lemma}
\begin{proof}
Consider the map $\varphi:B\rightarrow \mathbb{R}$ defined by $\varphi\left(
\lambda\right) =E_{P}\left[ X_{\lambda}\xi_{\timeh}\right] =E_{P}\left[
I\left( \lambda\xi_{\timeh}\right) \xi_{\timeh}\right] $. Since $\xi_{\timeh}$ $>0$
$P$-a.e., and $I$ is strictly decreasing, $\varphi$ is strictly decreasing. Using the
Lebesgue dominated convergence theorem, we find that it is continuous. Using
Fatou's lemma, we find that:
\begin{itemize}
\item $\varphi\left( \lambda\right) \rightarrow+\infty$ when $\lambda
\rightarrow\inf B$
\item $\limsup$ $\varphi\left( \lambda\right) \leq a$ when $\lambda
\rightarrow+\infty$
\end{itemize}
Since $v \in A,$ it follows that there is a unique $\hat{\lambda}$ such
that $\varphi\left( \hat{\lambda}\right) =v$.
\end{proof}
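The root-finding implicit in this proof can be illustrated numerically. In the sketch below (a toy model, all assumptions ours: $\xi_{\timeh}$ lognormal with unit mean, logarithmic utility so that $I(y)=1/y$), one has $\varphi(\lambda)=1/\lambda$ exactly, so the computed root must equal $1/v$:

```python
import numpy as np

# Toy model (assumptions: xi_T lognormal with unit mean, logarithmic utility so
# I(y) = 1/y).  Then phi(lambda) = E_P[I(lambda*xi)*xi] = 1/lambda exactly,
# and the unique root of phi(lambda) = v is lambda_hat = 1/v.
rng = np.random.default_rng(2)
xi = np.exp(rng.standard_normal(100_000) - 0.5)   # E[xi] = 1, xi > 0

I = lambda y: 1.0 / y
phi = lambda lam: np.mean(I(lam * xi) * xi)

v = 2.0
lo, hi = 1e-6, 1e6                 # phi is strictly decreasing on ]0, oo[
for _ in range(200):               # geometric bisection on phi(lam) = v
    mid = np.sqrt(lo * hi)
    if phi(mid) > v:
        lo = mid
    else:
        hi = mid
lam_hat = np.sqrt(lo * hi)
assert abs(lam_hat - 1.0 / v) < 1e-6   # matches the closed form lambda_hat = 1/v
```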
Denote $X_{\hat{\lambda}}$ by $\hat{X}$. We now conclude:
\begin{proposition} \label{exist unique X}
$\hat{X}$ is the unique solution of problem (P).
\end{proposition}
\begin{proof}
Let us show that $\hat{X}$ is indeed a solution of problem (P). Uniqueness
follows from the strict concavity of $U.$
We have shown that $\hat{X}$ is in $L^{2}$, and $E_{P}[ \hat{X}\xi_{\timeh}]=v,$
so $\hat{X}$ satisfies the constraints. Take another $X\in L^{2}$ such that
$E_{P}[ X\xi_{\timeh}] =v.$ Since $U$ is concave, we have:%
\[
U\left( X\left( \omega\right) \right) \leq U\left( \hat{X}\left(
\omega\right) \right) +(X\left( \omega\right) -\hat{X}\left(
\omega\right) )U^{\prime}\left( \hat{X}\left( \omega\right) \right)
\text{ \ \ }P\text{-a.e.}%
\]
By definition, $U^{\prime}\left( \hat{X}\left( \omega\right) \right)
=\hat{\lambda}\xi_{\timeh}\left( \omega\right) $. Substituting into the inequality and
integrating, we get:%
\[
\int_{\Omega}U\left( X\right) dP\leq\int_{\Omega}U\left( \hat{X}\right)
dP+\hat{\lambda}\int_{\Omega}(X-\hat{X})\xi_{\timeh}dP
\]
and the last term vanishes because it is just $\hat{\lambda}\left( v-v\right) $.
So $\hat{X}$ is indeed an optimizer, and the result follows.
\end{proof}
\subsection{Hedging}
Once the solution $\hat{X}$ of the optimization problem ($P$) is found, for a given $\gamma,$
the question
is whether it can be hedged by a self-financing portfolio $\hat{\theta},$ so
that $V_{\timeh}(\hat{\theta})=\hat{X}.$ We note that, if there exists such
$\hat{\theta} \in \sfprtfs,$ then it is a solution of ($P_{0}$).
In fact, let $\theta \in \sfprtfs$ and $V_{0}(\theta)=v$ and set $X=V_{\timeh}(\theta).$
It follows from ($P$) that
\[
E_{P}\left[U( V_{\timeh}(\theta))\right]=E_{P}\left[U(X)\right]
\leq E_{P}\left[U(\hat{X})\right]=E_{P}\left[U( V_{\timeh}(\hat{\theta}))\right],
\]
so $\hat{\theta}$ is a solution of ($P_{0}$).
\subsubsection{Deterministic case} \label{determ case}
In this paragraph, we shall use the general hedging results of
\S \ref{sect. hedging} to solve this problem, in the case when
$m$ and $\sigma$
are deterministic (i.e. they do not depend on $\omega$).
Under these conditions, there can be several $\gamma$ that satisfy the conditions
of Definition \ref{market cond}, and some $\gamma$ can even be non-deterministic.
However, since we have supposed that the market is strongly arbitrage free, so that equation
(\ref{rel drift-sigma}) has a solution, we can choose $\gamma$ to be the unique
solution with the property of being orthogonal in $\ell^{2}$ to the kernel of
the volatility operator. More precisely, we choose the unique $\gamma$ such that
\begin{equation} \label{gamma orth}%
(\gamma_{t},x)_{\ell^{2}}=0, \; \; \forall \ x \in \ell^{2}(\mathbb{I}) \; \; \text{s.t.} \; \; \sigma_{t}x=0.
\end{equation}
The $\gamma$ defined by this condition is deterministic. In the remainder of this
paragraph, $\gamma$ is given by (\ref{gamma orth}).
In that case, it follows from formula (\ref{75}) that $\xi_{\timeh}$ is Malliavin
differentiable. It follows from formula (\ref{malliavin 1}) that the partial
derivative with respect to $\wienerq{i}{}$ is given by:%
\[
D_{i,t}\xi_{\timeh}=-\gamma_{t}^{i}\xi_{\timeh}%
\]
and $\hat{X}=I\left( \hat{\lambda}\xi_{\timeh}\right) $ is Malliavin
differentiable as well, with:
\[
D_{i,t}\hat{X}=-\hat{\lambda}\gamma_{t}^{i}\xi_{\timeh}I^{\prime}(\hat{\lambda}\xi_{\timeh}).
\]
The Clark-Ocone formula now reads:%
\begin{align}
X & =E_{Q}[X\,|\,\mathcal{F}_{0}]+\sum_{i\in\mathbb{I}}\int_{0}^{\timeh}E_{Q}
\left[ D_{i,t}X\ |\ \mathcal{F}_{t}\right] d\tilde{W}_{t}^{i} \label{co1}\\
& =v-\hat{\lambda}\sum_{i\in\mathbb{I}}\int_{0}^{\timeh}\gamma_{t}^{i}
E_{Q}\left[\xi_{\timeh}I^{\prime}(\hat{\lambda}\xi_{\timeh})\ |\ \mathcal{F}_{t}\right]
d\tilde{W}_{t}^{i}\label{co2}%
\end{align}
We then write the equation (\ref{hedge eq 1}) for the hedging portfolio
$\hat{\theta},$ and we
substitute the Clark-Ocone formula for $x_{t}^{i}\left( \omega\right) $:%
\begin{equation} \label{76}
b_{t}(\omega)'\theta_{t}(\omega)=\ -\hat{\lambda
}E_{Q}\left[ \xi_{\timeh}I^{\prime}(\hat{\lambda}\xi_{\timeh}%
)\ |\ \mathcal{F}_{t}\right] \gamma_{t}.
\end{equation}
This equation has a solution iff $\gamma_{t}$ is in the range of $b_{t}(\omega)'.$
Since $\sigma$ is deterministic, this condition simplifies. In fact, let
$l_{t}$ and $B_{t}$ be given by (\ref{B and l}), which are both deterministic here,
and let $q(t,\omega)=\zcpxd{t}(\omega)/l_{t}.$ Then the expression (\ref{self-fin1}) of $b_{t}(\omega)'$
gives:
\[
(b_{t}(\omega)'\theta_{t}(\omega))^{i}=<\theta_{t}(\omega)\,,\,p_{t}(\omega)\sigma_{t}^{i}>
=<\theta_{t}(\omega)q(t,\omega)\,,\, l_{t} \sigma_{t}^{i}>
=(B_{t}'f_{t}(\omega))^{i},
\]
where $f_{t}(\omega) \in E^{-s}$ is given by $f_{t}(\omega)=q(t,\omega)\theta_{t}(\omega).$
So, equation (\ref{76}) has a solution iff $\gamma_{t}$ is in the range of $B_{t}'.$
This is always true when $\mathbb{I}$ is finite, since then the range of $B_{t}'$
is equal to the orthogonal complement of the kernel of $\sigma_{t}$
(we remember that $p_{t}(\omega,x) >0$ for $x\geq 0$). When $\mathbb{I}=\mathbb{N},$
then the range is only a strictly smaller dense subset.
We are led to the following condition:
\begin{definition} \label{cond. C}
We shall say that the market satisfies condition (C) if there exists a
deterministic portfolio $\theta_{t}^{0}$ which is admissible and satisfies
$B_{t}' \theta_{t}^{0}=\gamma_t,$ i.e.
\begin{equation}
<\theta_{t}^{0}\,,\left( \mathcal{L}_{t}p_{0}\right) \sigma_{t}%
^{i}>\ =\gamma_{t}^{i}, \label{77}%
\end{equation}
for each $i\in\mathbb{I}$ and $t$.
\end{definition}
Condition (C) is then equivalent to $\gamma_{t} \in \mathcal{R}(B_{t}'),$
the range of $B_{t}'.$ %
In the case when $\mathbb{I}$ is finite, condition (C) is always satisfied,
although there is never uniqueness in the choice of $\theta_{t}^{0}.$
It can easily be verified, with $n$ elements say,
by picking $n$ maturities $0<S_{1}<...<S_{n}$ and by
seeking $\theta_{t}^{0}$ as a linear combination of rollovers:\ $\theta
_{t}^{0}=\sum x_{t}^{i}\delta_{S_{i}}$. Condition (\ref{77}) then reduces to a
system of $n$ linear equations with $n$ unknowns which determines the
$x_{t}^{i}$.
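A numerical sketch of this reduction (the matrix of values of $(\mathcal{L}_{t}p_{0})\sigma_{t}^{i}$ at the chosen maturities is hypothetical random data, assumed invertible):

```python
import numpy as np

# Sketch of condition (C) with theta_t^0 = sum_j x_j * delta_{S_j}: pairing with
# delta_{S_j} evaluates at S_j, so (77) becomes the n x n system G @ x = gamma,
# where G[i, j] = ((L_t p_0) sigma_t^i)(S_j).  G and gamma are hypothetical data,
# G assumed invertible.
rng = np.random.default_rng(3)
n = 4
S = np.array([0.5, 1.0, 2.0, 5.0])      # chosen maturities 0 < S_1 < ... < S_n
G = rng.standard_normal((n, n))         # values g_i(S_j)
gamma = rng.standard_normal(n)

x = np.linalg.solve(G, gamma)           # weights of the n rollovers
assert np.allclose(G @ x, gamma)        # condition (77) holds at this t
```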
In the case when $\mathbb{I}=\mathbb{N}$, condition (C) may not be satisfied. We will
content ourselves with %
noting that the left-hand side of equation (\ref{77}) is
meaningful, since $\left( \mathcal{L}_{t}p_{0}\right) \sigma_{t}^{i}$
belongs to the space $E^{s}.$ %
If condition (C) is satisfied, equation (\ref{76}) becomes:%
\begin{equation*}
\begin{split}
<\theta_{t},\,\,p_{t}\sigma_{t}^{i}>\ & =-\hat{\lambda}E_{Q}[ \xi
_{\timeh}I^{\prime}(\hat{\lambda}\xi_{\timeh})\ |\ \mathcal{F}_{t}] <\theta
_{t}^{0},\frac{\,\mathcal{L}_{t}p_{0}}{p_{t}}p_{t}\sigma_{t}^{i}> \\
&=<-\hat{\lambda}E_{Q}[ \xi
_{\timeh}I^{\prime}(\hat{\lambda}\xi_{\timeh})\ |\ \mathcal{F}_{t}] \
\frac{\,\mathcal{L}_{t}p_{0}}{p_{t}}\ \theta_{t}^{0} \ , \ p_{t}\sigma_{t}^{i}>
\end{split}
\end{equation*}
and an obvious solution $\theta_{t}=\bar{\theta}_{t}$ (the risky part of the optimal portfolio) is
given by:%
\[
\bar{\theta}_{t}=-\hat{\lambda}E_{Q}[ \xi
_{\timeh}I^{\prime}(\hat{\lambda}\xi_{\timeh})\ |\ \mathcal{F}_{t}] \
\frac{\,\mathcal{L}_{t}p_{0}}{p_{t}}\ \theta_{t}^{0}.
\]
Applying Lemma \ref{lm hedge eq}, with $x$ defined by (\ref{sol hedge eq1}),
we obtain a hedging portfolio
$\hat{\theta}=x \delta_{0}+\bar{\theta}$ of $\hat{X},$ where $\bar{\theta}$ is as above, and:
\[
x_{t}=\frac{1}{p_{t}\left( 0\right) }\left( E_{Q}\left[ I\left(
\hat{\lambda}\xi_{\timeh}\right) \ |\ \mathcal{F}_{t}\right] -<\bar{\theta}%
_{t},p_{t}>\right).
\]
To sum up, in the case when the %
$m_{s}$ and the $\sigma_{s}^{i},i\in\mathbb{I}$, are deterministic,
with $\sigma_{t}^{i}\left( 0\right)=0,$
with condition $(C)$ and equation (\ref{rel drift-sigma}) satisfied,
an optimal admissible and self-financing
portfolio is given by
\begin{equation} \label{sum up}
\hat{\theta}_{t}=x_{t}\delta_{0}+\bar{\theta}_{t}, \; \; \text{where} \; \;
\bar{\theta}_{t}\ =y_{t}\ \frac{(\mathcal{L}_{t}p_{0})}{p_{t}} \ \theta_{t}^{0}
\end{equation}
and where the coefficients $x_{t}$ and $y_{t}$ are real-valued progressively measurable processes
given by
\begin{align}
y_{t} & = -E_{Q}[\hat{\lambda}\xi_{\timeh}I^{\prime}(\hat{\lambda}\xi
_{\timeh})\,|\,\mathcal{F}_{t}]\label{prtf explicit 1}\\
x_{t} & =(p_{t}(0))^{-1}\left( E_{Q}[I(\hat{\lambda}\xi_{\timeh}%
)\,|\,\mathcal{F}_{t}]-y_{t}<\theta_{t}^{0}\,,\,\mathcal{L}_{t}p_{0}>\right).
\label{prtf explicit 2}%
\end{align}
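As a consistency check of formulas (\ref{prtf explicit 1}) and (\ref{prtf explicit 2}), consider the logarithmic utility of Example \ref{U example}, for which $I(y)=1/y$ and $I^{\prime}(y)=-1/y^{2}.$ Then $\varphi(\lambda)=E_{P}\left[ I(\lambda\xi_{\timeh})\xi_{\timeh}\right] =1/\lambda,$ so $\hat{\lambda}=1/v,$ and
\[
y_{t}=E_{Q}\left[ \frac{\hat{\lambda}\xi_{\timeh}}{(\hat{\lambda}\xi_{\timeh})^{2}}
\ \Big|\ \mathcal{F}_{t}\right]
=v\,E_{Q}[\xi_{\timeh}^{-1}\,|\,\mathcal{F}_{t}]
=E_{Q}[I(\hat{\lambda}\xi_{\timeh})\,|\,\mathcal{F}_{t}],
\]
so that $x_{t}=(p_{t}(0))^{-1}\,y_{t}\left( 1-<\theta_{t}^{0}\,,\,\mathcal{L}_{t}p_{0}>\right).$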
This leads immediately to a mutual fund theorem: whatever the utility function
and the initial wealth, the optimal portfolio at time $t$ is a linear combination of the current
account $\delta_{0}$ and the portfolio $f \mapsto <\theta_{t}^{0}%
,\frac{\,\ltrans{t}p_{0}}{p_{t}}f>,$ i.e. the portfolio
$\frac{\,\ltrans{t}p_{0}}{p_{t}}\theta_{t}^{0}.$
This portfolio is in general not self-financed, so it cannot
be given the status of a \textit{market portfolio}. However, we can easily reformulate
the result with a self-financed portfolio. In fact, choose an admissible utility
function, with $a=0,$ according to Definition \ref{85}. For this utility function,
let $\Theta$ be the optimal portfolio given by (\ref{sum up}), with unit initial wealth.
Obviously
$\frac{\,\ltrans{t}p_{0}}{p_{t}}\theta_{t}^{0}$ is a linear combination of
$\delta_{0}$ and $\Theta_{t}.$ This gives us:
\begin{theorem}[Mutual fund theorem] \label{Th mutual fund}
The optimal portfolio $\Theta$ %
has the following properties: \\
\noindent i) $\Theta$ is an admissible self-financing portfolio,
with unit initial value, i.e. $\sesq{\Theta_{0}}{\zcpxd{0}}=1,$
and the value at each time $t \in \mathbb{T}$ is strictly positive, i.e. $\sesq{\Theta_{t}}{\zcpxd{t}}>0.$ \\
\noindent ii) For each utility function $U,$ admissible according to Definition \ref{85}
and each initial wealth $v \in \;]a,\infty [\,,$
there exist two real valued processes $c$ and $d$ such that if
$\hat{\theta}_{t} =c_{t}\delta_{0}+d_{t}\Theta_{t},$
then $\hat{\theta}$ is an optimal self-financing portfolio for $U,$
i.e. a solution of problem ($P_{0}$).
\end{theorem}
\subsubsection{Stochastic $m$ and $\sigma$}
We shall here concentrate on the case of an approximately complete market, which
is equivalent to the volatility operator being non-degenerate. In fact,
according to $iii)$ of Theorem \ref{Th D0 non complete and approx complete},
the market is approximately complete if and only if $\vol{}{t}(\omega)$ has
a trivial kernel a.e. $(t,\omega) \in \mathbb{T} \times \Omega.$ We recall that
the market price of risk process $\gamma$ is unique in this case.
In the case of a finite number of Bm. we obtain easily from Lemma \ref{X in Lp}
and Theorem \ref{th completeness R^m} the following result
(see Theorem 3.6 of \cite{I.E.-E.T bond th}):
\begin{theorem}\label{th opt port R^m}
Let $\mathbb{I}$ be a finite set, let $U$ be admissible in the sense of Definition %
\ref{85} %
and let $i)$ of Condition \ref{uniform cond sigma} %
be satisfied.
The problem ($P_{0}$) %
then has a solution $\hat{\theta}.$ One solution $\hat{\theta}=a \delta_{0}+\bar{\theta} \in \sfprtfs$
is given by (\ref{prtf eq 4}). %
\end{theorem}
In the case of an infinite number of Bm. we shall impose Malliavin differentiability
properties on the market price of risk $\gammapx{}{}.$ To this end we introduce
the space $\derprod{1}{s},$ for $s>0$ by
\begin{equation} \label{def D^{1}_{s}}
\derprod{1}{s} =\{X \in \derprod{}{0} \; | \; DX \in \derprod{}{0}(L^{2}(\mathbb{T},\ell^{s,2})) \}.
\end{equation}
We can now state a result in the case of an infinite number of Bm., quite analogous
to the case of a finite number of Bm. (see Theorem 4.5 of \cite{E.T Bond Completeness}):
\begin{theorem}\label{th opt port l^2}
Let $\mathbb{I}=\mathbb{N},$
let $U$ be admissible in the sense of Definition \ref{85}, %
let $ii)$ of Condition \ref{uniform cond sigma} %
be satisfied
and let $\ln(\xi_{\timeh}) \in \derprod{1}{s},$ where $s>0$ is given by $ii)$ of Condition \ref{uniform cond sigma}.
The problem ($P_{0}$) %
then has a solution $\hat{\theta}.$ One solution $\hat{\theta}=a \delta_{0}+\bar{\theta} \in \sfprtfs$
is given by (\ref{prtf eq 4}). %
\end{theorem}
\begin{proof}
We only consider the case of $U'>0,$ since the case where $U'(x)=0$ for some $x$ is similar.
Let the hypotheses of the theorem be satisfied. The portfolio $\hat{\theta}$ is a solution of
problem ($P_{0}$), if $\hat{\theta} \in \sfprtfs$ and if it hedges
$\hat{X}$ given by Proposition \ref{exist unique X}.
(See Corollary 3.4 of \cite{I.E.-E.T bond th}).
It is enough to verify that Theorem \ref{th completeness l^2} applies to
$\hat{X}=I (\hat{\lambda} \xi_{\timeh})$ for a certain given $\hat{\lambda} >0.$
$I $ is $C^{1},$ so
$\mder{t}\hat{X}=\lambda \xi_{\timeh}\varphi' (\lambda \xi_{\timeh})\mder{t}\ln(\xi_{\timeh}).$
Since $\ln(\xi_{\timeh}) \in \derprod{1}{s},$ this gives
$\|\mder{}\hat{X}\|_{L^{2}( \mathbb{T},\ell^{s,2})}
=|\lambda \xi_{\timeh}\varphi' (\lambda \xi_{\timeh})| \, \|\mder{}\ln(\xi_{\timeh})\|_{L^{2}( \mathbb{T},\ell^{s,2})}.$
The inequality in 5) of Definition \ref{85} gives %
$\|\mder{}\hat{X}\|_{L^{2}( \mathbb{T},\ell^{s,2})}
\leq C ((\lambda \xi_{\timeh})^{p}
+(\lambda \xi_{\timeh})^{-p}) \|\mder{}\ln(\xi_{\timeh})\|_{L^{2}( \mathbb{T},\ell^{s,2})},$
for some $p \geq 1.$
Condition (\ref{gamma strong}) of Definition \ref{market cond}
shows that $(\lambda \xi_{\timeh})^{p}+(\lambda \xi_{\timeh})^{-p} \in L^{q}(\Omega, P),$
for all $q \geq 1.$ By hypothesis
$\|\mder{}\ln(\xi_{\timeh})\|_{L^{2}( \mathbb{T},\ell^{s,2})} \in \derprod{}{0},$
so H\"older's inequality now gives that
$\|\mder{}\hat{X}\|_{L^{2}( \mathbb{T},\ell^{s,2})} \in \derprod{}{0},$
i.e. $\mder{}\hat{X} \in \derprod{}{0}(L^{2}( \mathbb{T},\ell^{s,2})).$
By Lemma \ref{X in Lp}, $\hat{X} \in \derprod{}{0}.$
It follows that $\hat{X} \in \derprod{1}{s}.$
We can now apply Theorem \ref{th completeness l^2}, which proves the existence of $\hat{\theta}.$
\end{proof}
\subsubsection{Examples} \label{exampl}
We now give some examples of optimal bond portfolios for logarithmic and
quadratic utility functions $U.$ Other examples can be found in \cite{I.E.-E.T bond th}.
First we assume the drift function $m_{t}$ and the volatility operator $\sigma_{t}$
to be deterministic.
We shall therefore suppose that the market satisfies condition
$(C),$ of Definition \ref{cond. C}, so the market price of risk $\gamma$
is deterministic and satisfies condition (\ref{77}).
We shall derive the optimal portfolio directly, going through
the steps leading to the general solution (\ref{sum up}).
Secondly we study the general case of stochastic drift function $m_{t}$ and volatility operator $\sigma_{t}$
for the logarithmic utility function.
The final optimal discounted wealth is $\hat{X}=I(\hat{\lambda}\xi_{\timeh})$. The
corresponding optimal discounted wealth process $Y$ is given by
$Y_{t}=E_{Q}[I(\hat{\lambda}\xi_{\timeh})\,|\,\mathcal{F}_{t}].$ The initial wealth
$Y_{0}=v$ determines $\hat{\lambda}$ by the equation
\begin{equation}
v=Y_{0}=E_{Q}[I(\hat{\lambda}\xi_{\timeh})]. \label{81}%
\end{equation}
We recall that $(p_{t})^{-1}\mathcal{L}_{t}p_{0}\in E^{s}$ a.s. and that
$p_{t}(0)>0$ a.s.
\paragraph{Logarithmic utility (deterministic $m$ and $\sigma$)}
Let
\begin{equation}
U(x)=\ln(x). \label{util-fnct-log}
\end{equation}
We have $I(x)=1/x,$ and $\hat{X}=( \hat{\lambda}\xi_{\timeh})^{-1},$
so that equation (\ref{81}) gives:
\[
v=E_{Q}[1/(\hat{\lambda}\xi_{\timeh})]=E_{P}[\xi_{\timeh}/(\hat{\lambda}\xi_{\timeh}%
)]=1/\hat{\lambda}.
\]
Then using the expression (\ref{75}) for $\xi_{t}$ and $\tilde{W}_{t}^{i}$ we have:%
\begin{equation}
\frac{1}{\xi_{t}}=\exp\left( -\frac{1}{2}\int_{0}^{t}\sum_{i\in\mathbb{I}%
}\left( \gamma_{s}^{i}\right) ^{2}ds+\int_{0}^{t}\sum_{i\in\mathbb{I}}%
\gamma_{s}^{i}d\tilde{W}_{s}^{i}\right). \label{82}%
\end{equation}
The right-hand side is a $Q$-martingale, hence so is $1/\xi_{t}$. It follows
that the optimal discounted wealth at $t$ is%
\[
Y_{t}=E_{Q}[I(\hat{\lambda}\xi_{\timeh})\,|\,\mathcal{F}_{t}]=\frac{1}{\hat
{\lambda}\xi_{t}}=\frac{v}{\xi_{t}}.
\]
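As a numerical sanity check (a sketch only, with an assumed constant scalar market price of risk $\gamma$ and toy parameter values of our choosing), the $Q$-martingale property of $1/\xi_{t}$ and the resulting identity $\hat{\lambda}=1/v$ can be verified by Monte Carlo simulation of (\ref{82}):

```python
import numpy as np

# Monte Carlo sketch with toy parameters (our assumptions): for a
# deterministic, constant scalar market price of risk gamma, formula (82)
# gives 1/xi_T = exp(-gamma^2 T / 2 + gamma * W~_T), a Q-martingale started
# at 1, so E_Q[1/xi_T] = 1 and (81) reduces to lambda_hat = 1/v.
rng = np.random.default_rng(0)
T, n_paths, gamma = 1.0, 400_000, 0.4
W_T = rng.normal(0.0, np.sqrt(T), n_paths)        # Q-Brownian motion at T
inv_xi_T = np.exp(-0.5 * gamma**2 * T + gamma * W_T)

print(inv_xi_T.mean())                            # close to 1
v = 2.0                                           # toy initial wealth
lam_hat = 1.0 / v                                 # from v = E_Q[1/(lam xi_T)]
print(lam_hat)                                    # 0.5
```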
Since $d(1/\xi_{t})=\sum_{i\in\mathbb{I}}(\gamma_{t}^{i}/\xi_{t})d\tilde{W}_{t}^{i}$
and $\hat{X} =Y_{\timeh},$
it then follows that: %
\begin{equation}
\hat{X} =v\left( 1+\sum_{i\in\mathbb{I}}\int_{0}^{\timeh}\gamma_{t}^{i}\frac{1}{\xi_{t}
}d\tilde{W}_{t}^{i}\right).
\end{equation}
The hedging equation (\ref{hedge eq 1}) and the above formula give:
\begin{equation} \label{log 1}
\forall i\in\mathbb{I},\ \ <\theta_{t}\left( \omega\right) ,\ p_{t}\left(
\omega\right) \,\sigma_{t}^{i}\left( \omega\right) >\ =\ \frac{v}{\xi
_{t}\left( \omega\right) }\gamma_{t}^{i}%
\end{equation}
By condition (C) we find a portfolio $\theta^{0}$ satisfying
$\gamma_{t}^{i}=\ <\theta_{t}^{0}\,,\left( \mathcal{L}_{t}p_{0}\right)\sigma_{t}^{i}>,$
so
\begin{equation} \label{log 1.1}
\gamma_{t}^{i}=\ <\left( \mathcal{L}_{t}p_{0}\right) \theta_{t}^{0}\,,\sigma_{t}^{i}>.
\end{equation}
Substituting this expression of $\gamma$ into (\ref{log 1}) we obtain:
\begin{equation} \label{log 2}
\forall i\in\mathbb{I},\ \ \
< p_{t}\left(\omega\right) \theta_{t}\left(\omega\right)
-\frac{v}{\xi_{t}(\omega)}(\mathcal{L}_{t}p_{0}) \ \theta_{t}^{0}
,\ \,\sigma_{t}^{i}\left( \omega\right) >\ = 0.
\end{equation}
One solution of this equation is obviously given by $\theta=\bar{\theta},$ where
\begin{equation} \label{log 3}
\bar{\theta}_{t}(\omega)
=y_{t}(\omega) \ \frac{(\mathcal{L}_{t}p_{0}) }{ p_{t}(\omega)} \ \theta_{t}^{0}, \; \;
y_{t}(\omega)= \frac{v}{\xi_{t}(\omega)}.
\end{equation}
The discounted value of $\bar{\theta}$ at time $t$ in state $\omega$ is then
\begin{equation} \label{log 4}
(V_{t}(\bar{\theta}))(\omega)=<\bar{\theta}_{t}\,, p_{t}>
=\frac{v}{\xi_{t}(\omega)} \ <\theta_{t}^{0}\,, \mathcal{L}_{t}p_{0}>.
\end{equation}
The optimal portfolio $\hat{\theta}$ is now obtained by using Lemma \ref{lm hedge eq}:
$\hat{\theta}_{t} =x_{t}\delta_{0}+\bar{\theta}_{t},$ where
\begin{equation} \label{log 5}
x_{t} =\frac{1}{p_{t}(0)}\frac{v}{\xi_{t}}\left(1-<\theta_{t}^{0}\,,\,\mathcal{L}_{t}p_{0}>\right).
\end{equation}
As it should, the discounted value of $\hat{\theta}$ is then
$V_{t}(\hat{\theta})=Y_{t}=v/\xi_{t}.$
We note the following useful property:
the ratio of the investment in bonds with time to maturity $S>0$ to the total
investment is deterministic. In fact this ratio is simply the price,
at $t=0,$ of a zero-coupon bond with time to maturity $S+t:$
\begin{equation} \label{log 6}
\frac{\bar{\theta}_{t}(S,\omega) \ p_{t}(S,\omega)}{(V_{t}(\bar{\theta}))(\omega)}
=p_{0}(S+t).
\end{equation}
\paragraph{Quadratic utility (deterministic $m$ and $\sigma$)}
Let the utility function be:
\[
U\left( x\right) =\mu x-\frac{1}{2}x^{2}%
\]
As in $i)$ of Example \ref{U example}, we find that
\[
I(y) =\mu -y.
\]
The final discounted optimal wealth is $\hat{X}=I(\hat{\lambda}\xi_{\timeh}),$ so
\[
\hat{X} =\mu -\hat{\lambda}\xi_{\timeh}.
\]
We determine $\hat{\lambda}$ by the condition:%
\begin{equation} \label{quadr 1}
v =E_{Q}\left[ \hat{X}\right] =E_{Q}\left[ \mu
-\hat{\lambda}\xi_{\timeh} \right]
=\mu -\hat{\lambda}E_{Q} \left[ \xi_{\timeh}\right] .
\end{equation}
Set
\[
Z_{t}=\exp{ \left( -\frac{1}{2}\int_{0}^{t}\sum_{i \in \mathbb{I}} (\gamma^{i}_{s})^{2}ds
-\int_{0}^{t}\sum_{i \in \mathbb{I}} \gamma^{i}_{s} d\wienerq{i}{s}\right)}.
\]
Then $Z$ is a martingale with respect to $Q$ and formula (\ref{W Q}) gives
\begin{equation} \label{quadr 2}
\xi_{t}=Z_{t} \exp{ \left(\int_{0}^{t}\sum_{i \in \mathbb{I}} (\gamma^{i}_{s})^{2}ds\right)}.
\end{equation}
We have, by substitution into (\ref{quadr 1}):
\[
v =\mu -\hat{\lambda}E_{Q} \left[ \xi_{\timeh}\right]
=\mu -\hat{\lambda}
\exp{ \left(\int_{0}^{\timeh}\sum_{i \in \mathbb{I}} (\gamma^{i}_{s})^{2}ds\right)}.
\]
This gives
\begin{equation} \label{quadr 4}
\hat{\lambda}
=\left( \mu -v \right)
\exp{ \left( - \int_{0}^{\timeh}\sum_{i \in \mathbb{I}} (\gamma^{i}_{s})^{2}ds\right)}.
\end{equation}
It now follows from (\ref{quadr 2}) that
\begin{equation} \label{quadr 5}
\hat{X} =\mu -\hat{\lambda}\xi_{\timeh}
=\mu + \left( v -\mu \right) \ Z_{\timeh}
\end{equation}
and the optimal discounted wealth at $t$ is%
\[
Y_{t}=E_{Q}[I(\hat{\lambda}\xi_{\timeh})\,|\,\mathcal{F}_{t}]
=\mu + \left( v -\mu \right) \ Z_{t}.
\]
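The budget identity behind (\ref{quadr 4}), $E_{Q}[\mu-\hat{\lambda}\xi_{\timeh}]=v,$ can likewise be checked by Monte Carlo; the sketch below assumes a single Brownian motion with constant $\gamma$ and toy values of $\mu$ and $v$ (our choice, not values from the text):

```python
import numpy as np

# Monte Carlo sketch (single Brownian motion, constant gamma; mu, v are toy
# values of our choosing): with lambda_hat from (quadr 4), the budget
# constraint E_Q[mu - lambda_hat * xi_T] = v must hold.
rng = np.random.default_rng(1)
T, n_paths = 1.0, 400_000
gamma, mu, v = 0.3, 5.0, 2.0
W_T = rng.normal(0.0, np.sqrt(T), n_paths)        # Q-Brownian motion at T
Z_T = np.exp(-0.5 * gamma**2 * T - gamma * W_T)   # Q-martingale, Z_0 = 1
xi_T = Z_T * np.exp(gamma**2 * T)                 # (quadr 2)

lam_hat = (mu - v) * np.exp(-gamma**2 * T)        # (quadr 4)
X_hat = mu - lam_hat * xi_T                       # optimal terminal wealth
print(X_hat.mean())                               # close to v = 2.0
```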
Since $d Z_{t}=- Z_{t} \sum_{i \in \mathbb{I}} \gamma^{i}_{t} d\wienerq{i}{t},$
we have that
\begin{equation*} %
\hat{X}
=v - \left( v -\mu \right) \
\int_{0}^{\timeh} \sum_{i \in \mathbb{I}}Z_{t} \gamma^{i}_{t} d\wienerq{i}{t}
=v +
\int_{0}^{\timeh} \sum_{i \in \mathbb{I}} (\mu -Y_{t}) \gamma^{i}_{t} d\wienerq{i}{t},
\end{equation*}
so the hedging equation reads (see (\ref{hedge eq 1})):
\begin{equation} \label{quadr 7}
\forall i\in\mathbb{I},\ \
<\theta_{t}\left( \omega\right) ,\ p_{t}\left(\omega\right) \,\sigma_{t}^{i}\left( \omega\right) >\ =\
\left(\mu -Y_{t} (\omega) \right) \ \gamma^{i}_{t}.
\end{equation}
As usual, condition (C) gives a portfolio $\theta^{0}$ satisfying
$\gamma_{t}^{i}=\ <\theta_{t}^{0}\,,\left( \mathcal{L}_{t}p_{0}\right)\sigma_{t}^{i}>,$
which together with (\ref{quadr 7}) gives:
\begin{equation*} %
\forall i\in\mathbb{I},\ \ \
< p_{t}\left(\omega\right) \theta_{t}\left(\omega\right)
+ \left( Y_{t}(\omega) -\mu \right) \ (\mathcal{L}_{t}p_{0}) \ \theta_{t}^{0}
,\ \,\sigma_{t}^{i}\left( \omega\right) >\ = 0.
\end{equation*}
One solution of this equation is $\theta=\bar{\theta},$ where
\begin{equation*} %
\bar{\theta}_{t}(\omega)=y_{t}(\omega) \
\frac{(\mathcal{L}_{t}p_{0}) }{ p_{t}(\omega)} \ \theta_{t}^{0}, \; \;
y_{t}(\omega)= \mu -Y_{t}(\omega) .
\end{equation*}
$\bar{\theta}$ gives the risky part of the optimal portfolio.
Applying Lemma \ref{lm hedge eq} we obtain the optimal portfolio
$\hat{\theta}_{t} =x_{t}\delta_{0}+\bar{\theta}_{t},$ where
\begin{equation} \label{quadr 9}
x_{t}= (\zcpxd{t}(0))^{-1} (Y_{t}-(\mu -Y_{t})\sesq{\theta_{t}^{0}}{\ltrans{t}\zcpx{0}}).
\end{equation}
\paragraph{Logarithmic utility (stochastic $m$ and $\sigma$)}
We assume that the conditions of Definition \ref{market cond} are satisfied.
We choose $\gamma_t(\omega)$ to be orthogonal to the kernel of
$\vol{}{t}(\omega),$ a.e. $(t,\omega).$ This $\gamma$ satisfies the conditions of
Definition \ref{market cond}. Formulas (\ref{util-fnct-log})--(\ref{log 1}) then
still hold true. As in the discussion preceding the condition $(C),$
of Definition \ref{cond. C} it follows that $\gamma_t(\omega)$ is a.s. in the closure
of the range of $B_{t}'(\omega).$ Therefore, in this example,
the natural generalization of the condition $(C)$
to the stochastic case is simply to impose the same condition (\ref{77})
of Definition \ref{cond. C} to be satisfied with a stochastic portfolio $\theta^{0} \in \prtfs.$
Formulas (\ref{log 1.1})--(\ref{log 6}) are then also true statements and it
follows using Theorem \ref{Th price compatible strong cond.} that $\hat{\theta} \in \sfprtfs.$
In particular the ratio of the investment in bonds with time to maturity $S>0$ to the total
investment is deterministic.
\subsection{The H-J-B approach}
When $m_{t}$ and $\sigma_{t}^{i}$ are given functions
$m_{t}(p_{t})$ and $\sigma_{t}^{i}(p_{t})$ of the price $p_{t},$ for every
$t,$ then the optimal portfolio problem ($P_{0}$) can be considered within a
Hamilton-Jacobi-Bellman approach. In this subsection we illustrate this
approach, without being rigorous and we suppose that the utility function $U$
satisfies the conditions of Definition \ref{85}. For notational simplicity
we exclude the price argument in $m_{t}$ and $\sigma_{t}^{i}.$
The optimal value function, here denoted by $F,$ then depends only on the time
$t,$ the value of the discounted wealth $w$ and the discounted price
function $f\in E^{s}$ of Zero-Coupons at time $t:$
\[
F(t,w,f)=\sup\{E[U(V_{\timeh}(\theta))\;|\;V_{t}(\theta)=w,\;p_{t}=f] \;|\;\theta
\in\mathsf{P}_{sf}\}.
\]
The derivative $DG(f;g)$ of a function $E^{s} \ni f \mapsto G(f)$ in the direction
$g \in E^{s}$ is defined as usual by
\[
DG(f;g)=\lim_{\epsilon \rightarrow 0} \frac{G(f+\epsilon g)-G(f)}{\epsilon}.
\]
Suppose that $G$ is $C^{2}.$
Writing $DG(f)$ for the map $g \mapsto DG(f;g)$ and $D^{2}G(f)$ for the map
$(g_{1}, g_{2}) \mapsto D^{2}G(f;g_{1}, g_{2}),$ we have that %
$DG(f)$ is a linear continuous form on $E^{s}$ and $D^{2}G(f)$
is a bi-linear continuous form.
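The directional derivative can be approximated numerically on a grid discretization of $E^{s}$; the following sketch uses a toy quadratic functional $G(f)=\int f(s)^{2}\,ds$ (an assumption made only for illustration), whose exact derivative is $DG(f;g)=2\int f(s)g(s)\,ds,$ and compares it with a central finite difference:

```python
import numpy as np

# Finite-difference sketch of the directional derivative on a grid
# discretization of E^s. The functional G(f) = int f(s)^2 ds is a toy
# example (our assumption); its exact derivative is DG(f; g) = 2 int f g ds.
s = np.linspace(0.0, 30.0, 3001)       # times to maturity, toy grid
ds = s[1] - s[0]
G = lambda f: ds * np.sum(f**2)        # simple Riemann-sum quadrature

f = np.exp(-0.05 * s)                  # toy discounted price curve in E^s
g = np.sin(s) * np.exp(-0.1 * s)       # direction g in E^s (toy)

eps = 1e-6
DG_fd = (G(f + eps * g) - G(f - eps * g)) / (2.0 * eps)  # central difference
DG_exact = 2.0 * ds * np.sum(f * g)
print(DG_fd, DG_exact)                 # the two agree (G is quadratic in f)
```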
Let us first consider the case of a volatility operator $\vol{}{}$ with
trivial kernel, i.e. for every strictly positive price (function) $f \in E^{s},$
the kernel of the linear map $\vol{}{t}: \ell^2(\mathbb{I}) \rightarrow E^{s}$
is trivial a.s. According to Definition \ref{market cond} there is then a
unique market price of risk process $\gamma.$ Define the Hamiltonian $H(t,w,f,x)$ by: %
\begin{equation}
\begin{split}
&H(t,w,f,x)= \sum_{i \in \mathbb{I}} x^{i}(t,w,f)\gamma^{i}_{t} \frac{\partial F}{\partial w} \, (t,w,f)
+DF(t,w,f; \partial f +\sum_{i \in \mathbb{I}} \gamma^{i}_{t} \vol{i}{t}f) \\
& + \sum_{i \in \mathbb{I}} \bigl( \frac{1}{2}(x^{i}(t,w,f))^{2}\frac{\partial^{2} F}{\partial w^{2}} \, (t,w,f) %
+x^{i}(t,w,f) \frac{\partial}{\partial w} DF(t,w,f; \vol{i}{t}f) \\
&+\frac{1}{2} D^{2}F(t,w,f; \vol{i}{t}f, \vol{i}{t}f) \bigr).
\end{split}
\label{HJB hamiltonian eq inf dim}
\end{equation}
In that formula, $x=\left( x^{i}\right) _{i\in\mathbb{I}} \in \ell^{2}$ is the control,
which is related to the optimal terminal wealth by formula
(\ref{mart decomp 1}). A control $x$ is called \emph{admissible} if
\begin{equation}
x^{i}(t,V_{t}(\theta),p_{t})=<\theta_{t}\,,\,p_{t}\sigma_{t}^{i}>
\label{HJB control x}
\end{equation}
for all $\theta \in\mathsf{P}_{sf}$. In other words, $x^{i}$ can be
interpreted as the value invested in the $i$-th source of noise. Using
the Ito formula, one derives the (formal) HJB equation:
\begin{equation}
\frac{\partial F}{\partial t}\,(t,w,f)+\sup_{x} H(t,w,f,x)=0,
\label{HJB eq inf dim}%
\end{equation}
with the boundary condition
\begin{equation}
F(\timeh,w)=U(w).
\label{HJB bound cond inf dim}
\end{equation}
The optimal control $\hat{x},$ solution of the optimization problem
\[
\sup_{x} H(t,w,f,x),
\]
is given by
\begin{equation} \label{opt contr}
\hat{x}^{i}(t,w,f)=-\left( \frac{\partial^{2}F}{\partial w^{2}}\right)
^{-1}\left( \gamma_{t}^{i}\frac{\partial F}{\partial w}
+(D\frac{\partial F}{\partial w})(t,w,f; \sigma_{t}^{i}f)\right) ,\;i\in\mathbb{I}.
\end{equation}
Now, substitution of $H(t,w,f,\hat{x}(t,w,f))$ into equation (\ref{HJB eq inf dim}) gives:%
\begin{equation} \label{HJB eq 2}
\begin{split}
& \frac{\partial^{2}F}{\partial w^{2}}(t,w,f)
\Big(\frac{\partial F}{\partial t}(t,w,f)
+DF(t,w,f;\partial f+m_{t}f) \\ &+\frac{1}{2}\sum_{i\in\mathbb{I}}
D^{2}F(t,w,f;\sigma_{t}^{i}f,\sigma_{t}^{i}f) \Big)
=\frac{1}{2}\sum_{i\in\mathbb{I}}\left(\gamma_{t}^{i}\frac{\partial F}{\partial
w}+(D\frac{\partial F}{\partial w})(t,w,f;\sigma_{t}^{i}f)\right)^{2}.
\end{split}
\end{equation}
Once the solution $F$ of (\ref{HJB eq 2}),
with boundary condition (\ref{HJB bound cond inf dim}), is found, the optimal control $\hat{x}$
is given by (\ref{opt contr}).
Any optimal portfolio $\hat{\theta}$ is then a solution of the equation:%
\[
\hat{x}^{i}(t,V_{t}(\hat{\theta}),p_{t})=<\hat{\theta}_{t}\,,\,p_{t}\sigma_{t}^{i}>, \; \;
\forall \ \ i\in\mathbb{I}, \ \ t \in \mathbb{T}. %
\]
Next we consider the case of a volatility operator, which does not necessarily
have a trivial kernel. Once more we define the Hamiltonian $H(t,w,f,x,\gamma)$ by
formula (\ref{HJB hamiltonian eq inf dim}), which now also depends on the control
$\gamma,$ an $\ell^2(\mathbb{I})$-valued function of $(t,w,f).$ A control $(x,\gamma)$
is admissible if condition (\ref{HJB control x}) is satisfied and if the conditions
of Definition \ref{market cond} are satisfied, so writing out the price argument
$f \in E^s$ in $m_{t}$ and $\sigma_{t}^{i}:$
\begin{equation}
m_{t}(f)=\sigma_{t}(f) \gamma_{t}(w,f).
\label{HJB control gamma}
\end{equation}
The optimal control $\hat{\gamma}$ is determined by conditions (\ref{HJB control x})
and (\ref{HJB control gamma}). This can be seen as follows. Let $\gamma^{\perp}(f)$
be the unique solution of (\ref{HJB control gamma}) such that $\gamma^{\perp}(f)$
is in the orthogonal complement $(\mathcal{K}(\sigma_{t}(f)))^{\perp}$
of the kernel $\mathcal{K}(\sigma_{t}(f)),$ let
$\hat{\alpha} =\hat{\gamma} - \gamma^{\perp}$ and let
$P_t(f)$ be the orthogonal projection on $\mathcal{K}(\sigma_{t}(f)).$
Condition (\ref{HJB control x})
implies that $\hat{x} \in (\mathcal{K}(\sigma_{t}(f)))^{\perp}.$
According to (\ref{opt contr}), this can only be satisfied if
\begin{equation} \label{opt contr gamma}
\hat{\gamma} = \gamma^{\perp} +\hat{\alpha}
\;\; \text{and} \;\;
\hat{\alpha}_t(w,f) \frac{\partial F}{\partial w}= P_t(f) \nu_t(w,f),
\end{equation}
where $\nu_t^i (w,f)=(D\frac{\partial F}{\partial w})(t,w,f; \sigma_{t}^{i}f).$
So in the general case of a volatility operator, which does not necessarily
have a trivial kernel, the H-J-B approach leads to the equation (\ref{HJB eq 2}),
with $\gamma$ replaced by $\hat{\gamma}$ defined by formula (\ref{opt contr gamma}).
In the case when $m_{t}$ and $\sigma_{t}^{i}$ are independent of $p_{t},$
the $\hat{x}^{i}$ are independent of $f,$ $\gamma = \gamma^{\perp}$
and the above equations simplify:%
\[
\frac{\partial F}{\partial t}\frac{\partial^{2}F}{\partial w^{2}}=\frac{1}%
{2}\left( \sum_{i\in\mathbb{I}}\left(\gamma_{t}^{i}\right)^{2}\right)
\left(\frac{\partial F}{\partial w}\right)^{2},
\]
with the boundary condition
\[
F(\timeh,w)=U(w),\;w\in\mathbb{R}.
\]
Each self-financing portfolio $\hat{\theta}\in\mathsf{P}_{sf},$ such that
\[
<\hat{\theta}_{t}\,,\,p_{t}\sigma_{t}^{i}>=-\gamma_{t}^{i}\left(
\frac{\partial F}{\partial w}\right) \left( \frac{\partial^{2}F}{\partial
w^{2}}\right) ^{-1},\; \; \forall \ i\in\mathbb{I}, \ t \in \mathbb{T},
\]
where $w=V_{t}(\hat{\theta}),$ is then a solution of problem ($P_{0}$).
The solutions in the examples in \S \ref{exampl}, as well as the general solution
(\ref{sum up}) for deterministic $m$ and $\sigma,$ are easily obtained by solving
these equations.
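For instance, for the logarithmic utility $U(w)=\ln(w)$ one can verify symbolically (a sketch assuming a single Brownian motion with constant market price of risk $\gamma$) that $F(t,w)=\ln(w)+\frac{1}{2}\gamma^{2}(\timeh-t)$ solves the simplified equation with the correct boundary value, and that the implied investment in the noise source, $-\gamma\,(\partial F/\partial w)(\partial^{2}F/\partial w^{2})^{-1}=\gamma w,$ agrees with the logarithmic example of \S \ref{exampl}:

```python
import sympy as sp

# Symbolic verification (sketch; one Brownian motion with constant gamma,
# an assumption made for brevity): F(t, w) = ln(w) + gamma^2 (T - t)/2
# solves F_t F_ww = (1/2) gamma^2 F_w^2 with boundary value F(T, w) = ln(w).
t, T, w, gamma = sp.symbols('t T w gamma', positive=True)
F = sp.log(w) + sp.Rational(1, 2) * gamma**2 * (T - t)

Ft, Fw = sp.diff(F, t), sp.diff(F, w)
Fww = sp.diff(F, w, 2)

pde_residual = sp.simplify(Ft * Fww - sp.Rational(1, 2) * gamma**2 * Fw**2)
boundary = sp.simplify(F.subs(t, T) - sp.log(w))
invested = sp.simplify(-gamma * Fw / Fww)   # value bet on the noise source

print(pde_residual, boundary, invested)
```

The residual and boundary terms both simplify to zero, and the invested amount simplifies to $\gamma w$, i.e. the wealth times the market price of risk, as in the deterministic logarithmic example.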
\section{Introduction} \label{sec:introduction}
Mid infrared (MIR) light (2 - 15 $\upmu$m) is of importance in a wide range of technological applications. Free space telecommunication \cite{su201810}, LIDAR \cite{weibring2003versatile}, environmental monitoring \cite{fix2016upconversion}, medicine and biology \cite{evans2007chemically,bellisola2012infrared,potter2001imaging,miller2013ftir} are only a few of the many fields where MIR optics plays a role. In particular, gas sensing exploits the strong absorption bands in the MIR \cite{popa2019towards} to remarkably enhance the sensitivity of absorption spectroscopy measurements \cite{petersen2014mid,vainio2016mid,ghorbani2017real}. Despite the great interest in developing MIR applications, these are still hindered by immature MIR optical devices. Quantum optics offers new solutions to mitigate such limitations. Sub-poissonian light can be used to beat the shot noise limit \cite{brida2010experimental,whittaker2017absorption}. Entangled photons have been used to demonstrate new imaging and spectroscopy techniques able to overcome detection technology limitations, namely ghost imaging \cite{pittman1995optical,morris2015imaging} or undetected photon measurement \cite{lemos2014quantum,kalashnikov2016infrared,vergyris2020two}. To enable quantum enhanced MIR metrology leveraging these quantum based measurement strategies, a source of single or entangled photons beyond 2 $\upmu$m is required. Up to now, these techniques have been investigated only with bulky, alignment-sensitive and expensive instrumentation, based on free space nonlinear crystals \cite{kalashnikov2016infrared,prabhakar2020two}. To develop feasible, robust and affordable quantum technologies, miniaturization and cost effectiveness are crucial. Such requirements can be met by means of integrated photonics. 
In particular, silicon photonics integrated circuits are characterized by a mature CMOS (complementary metal oxide semiconductor) fabrication technology, which allows for robust, stable, low-power and efficient light manipulation at the chip scale \cite{lockwood2010silicon}. On-chip MIR quantum measurements would enable efficient and cost effective sensors, boosting the development of MIR and quantum technologies.
Recently, an on-chip silicon-on-insulator (SOI) source of MIR pairs has been reported \cite{rosenfeld2020mid}. However, in that work a MIR pump is used, and both the paired photons are beyond 2 $\upmu$m, thus requiring specific MIR technologies for both the pump and the detection. More recently, we demonstrated that inter-modal spontaneous four wave mixing (SFWM) can be used in silicon waveguides to generate correlated pairs with one photon in the near infrared (NIR) and the other in the MIR by using a standard C-band pump \cite{signorini2018intermodal,signorini2019silicon}. However, we never detected the MIR correlated photon. Instead we inferred its existence by measuring the high energy photon in the pair.\\
In this work, we demonstrate a SOI waveguide source of heralded MIR single photons based on inter-modal SFWM, performing the MIR detection by means of an upconversion system \cite{mancinelli2017mid}. The herald photon lies in the NIR, where it can be efficiently detected with traditional InGaAs single photon avalanche photodiodes (SPADs). Moreover, the photons are generated in discrete bands, thus removing the need for narrow band filters to select the operating wavelengths of signal and idler. As a result, the heralding efficiency is increased with respect to traditional intra-modal SFWM, as witnessed by the measured intrinsic heralding efficiency $\eta_I = 59(5) \, \%$. The large detuning of the generated photons is also beneficial for the pump and Raman noise rejection, which can be easily achieved with broadband filters. The pump is a standard 1550 nm pulsed laser. Therefore, we do not require MIR technologies to operate a source beyond 2 $\upmu$m. We assessed the single photon behaviour of the source by measuring a heralded $g^{(2)}_h(0)$ of 0.23(8). We monitored the idler-signal coincidences, reporting a maximum coincidence to accidental ratio of 40.4(9), exceeding the performance of current integrated sources of MIR heralded photons \cite{rosenfeld2020mid}.\\ The paper is organized as follows:
in section \ref{sec:setup} we describe the chip design and the experimental setup. In section \ref{sec:data_anal} our approach to data analysis is extensively described. In section \ref{sec:results} the results relative to the source characterization are reported. Section \ref{sec:conclusions} concludes the paper.
\begin{figure*}[t]
\includegraphics[width=\textwidth]{chip_black_last_low.png}
\caption{\label{fig:setup} a) Simulated intensity profiles of the TE0 and TE1 spatial modes in the multimode waveguide. b) Experimental setup. For the pump (green) we used a pulsed laser at 1550.3 nm (40 ps pulse width, 80 MHz repetition rate), which, after passing through a band pass filter (F) and a polarization controller (PC), is coupled to the chip via a tapered lensed fiber. The chip schematic is shown in the bottom part. On the chip, after a 3-dB directional coupler (DC), half of the pump remains on the TE0, while the other half is converted to the TE1 via an asymmetric directional coupler (ADC1) (92$\%$ efficiency). In this way, the pump reaches the multimode waveguide (MMWG) equally split on the TE0 and TE1 modes. In the MMWG, the inter-modal SFWM process generates the idler (blue) and signal (red) photons in the TE0 and TE1 modes respectively. The signal is then converted to the TE0 via another asymmetric directional coupler (ADC2). In this way, idler and signal can be easily separated on chip. Idler and signal are then out-coupled from the chip via two tapered lensed fibers. Pump residual and Raman noise are rejected from the idler beam by means of a short pass filter (SP) with a cut-off wavelength of 1335 nm. The idler is then detected via an InGaAs SPAD (ID Quantique IDQ210), triggered by the pump, with a gate width of 1.90 ns. The signal, after being out-coupled from the chip, is polarization rotated through a free space half-wave plate $\left( \lambda / 2 \right)$ and upconverted to the visible through an upconverter system (UC). The UC includes a long pass filter with a cut-on wavelength of 1900 nm, which rejects the C-band pump. Note that the UC introduces noise photons collinear with the upconverted signal and centered at the same wavelength. A bandpass filter (BP) is used to filter away part of this noise, without filtering the upconverted signal (purple). 
Then, the signal photons are analyzed by means of a Hanbury Brown and Twiss (HBT) interferometer. The HBT interferometer is composed of a 50/50 beam splitter (BS) with two visible silicon SPADs (Excelitas SPCM-AQRH-12) monitoring the BS reflection and transmission ports. The visible SPADs are used in free-running mode. A time tagging unit (Swabian Time Tagger 20) is used to monitor individual singles and coincidences between the three detectors.
}
\end{figure*}
\section{Chip design and experimental setup}\label{sec:setup}
Conventional intra-modal SFWM involves only one waveguide mode in the conversion of two input pump photons into an idler photon and a signal photon. On the contrary, inter-modal SFWM leverages the different chromatic dispersions of different optical spatial modes of a photonic waveguide to achieve phase matching \cite{signorini2018intermodal}. Different modal combinations are possible, depending on the waveguide cross-section, which also determines the generated signal and idler wavelengths. In this work, we use the transverse electric (TE) fundamental (TE0) and first (TE1) waveguide modes in a rib SOI waveguide. The waveguide has a width of 1.95 $\upmu$m and a height of 0.190 $\upmu$m over a 0.3 $\upmu$m thick slab. The waveguide length is 1.5 cm. The waveguide and the slab are in silicon, while the top and bottom claddings are in silica. The simulated intensity profiles of the TE0 and TE1 modes are shown in Fig. \ref{fig:setup}a.
The inter-modal combination used in our work involves the pump on both the TE0 and TE1, the idler on the TE0 and the signal on the TE1. A distinctive advantage of inter-modal SFWM is the generation of the signal and idler photons on different waveguide modes. In this way, idler and signal can be easily separated with high efficiency through an on-chip mode converter.
The experimental setup is detailed in Fig. \ref{fig:setup}b.
The upconverter (UC) consists of a continuous wave (CW) Nd:YVO$_4$ laser cavity, where an intra-cavity periodically poled lithium niobate (PPLN) crystal allows for sum-frequency generation (SFG) between the intra-cavity laser field (1064 nm) and the input MIR photons. We used a PPLN from HC Photonics with a length of 25 mm, tuned in temperature to upconvert the MIR signal at 2015 nm to the visible at 696 nm. The UC is the same as that of Mancinelli et al. \cite{mancinelli2017mid}, though tuned to the wavelengths of interest here. The transfer function of the UC is reported in Fig. \ref{fig:profiles}b, showing a full width at half maximum (FWHM) of $1.15 \, \pm \, 0.12$ nm.
We used a pump pulsed laser centered at $1550.30 \, \pm \, 0.05$ nm with 40 ps pulse width and 80 MHz repetition rate. The generated idler spectrum is reported in Fig. \ref{fig:profiles}. We measured a discrete band centered at 1259.7 $\pm$ 0.5 nm, with a FWHM of 2.0 $\pm$ 0.3 nm. The measured FWHM of the idler is compatible with the simulated one of 1.81 nm, as shown in Fig. \ref{fig:profiles}a. According to the energy conservation, the signal is generated at 2015.2 $\pm$ 1.5 nm. From the measured idler bandwidth we estimated a FWHM of 5.1 $\pm$ 0.8 nm for the signal. Therefore, the UC filters the signal photons according to the spectrum shown in Fig. \ref{fig:profiles}b.
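The quoted wavelengths are mutually consistent with energy conservation; the short check below uses only the pump and idler values reported above (the 1064 nm line is the intra-cavity laser of the UC):

```python
# Consistency check of the quoted wavelengths (all values taken from the text).
lam_p = 1550.30                   # pump wavelength, nm
lam_i = 1259.7                    # measured idler wavelength, nm

# SFWM energy conservation: 2/lam_p = 1/lam_s + 1/lam_i
lam_s = 1.0 / (2.0 / lam_p - 1.0 / lam_i)
print(round(lam_s, 1))            # 2015.2 nm, as stated

# Idler-to-signal bandwidth conversion: |d lam_s| = |d lam_i| (lam_s/lam_i)^2
fwhm_s = 2.0 * (lam_s / lam_i) ** 2
print(round(fwhm_s, 1))           # 5.1 nm

# Upconversion SFG: 1/lam_up = 1/1064 + 1/lam_s
lam_up = 1.0 / (1.0 / 1064.0 + 1.0 / lam_s)
print(round(lam_up, 1))           # about 696 nm, the visible output
```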
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{profiles_new.eps
\caption{\label{fig:profiles} a) Measured intensity spectrum of the idler beam. The fit has been made with a gaussian function, showing a FWHM of $2.87 \, \pm \, 0.07$ nm. This measurement is affected by the transfer function of the monochromator used to perform the measurement, which enlarges the actual bandwidth of the generation. We simulated the idler spectrum considering also the widening due to the monochromator (orange dashed line). To evaluate the actual bandwidth of the idler (2.0 $\pm$ 0.3 nm) we deconvolved the response function of the monochromator. b) Measured spectral response of the upconverter. The response has been fitted by a squared sinc function, as expected for a sum frequency generation process. The FWHM is $1.15 \, \pm \, 0.12$ nm. }
\end{figure}
\section{Data analysis}\label{sec:data_anal}
With SFWM, the detection probabilities per pulse for the idler ($p_i$), signal ($p_s$) and coincidences ($p_{si}$) are quadratic in the pump power $P$, while the accidentals ($p_{acc}$) follow from their product. In the limit of low transmission efficiencies for the signal and idler \cite{harada2009frequency}, they can be written as
\begin{subequations}\label{eq:probabilities}
\begin{equation} \label{prima_pi}
p_i = \xi P^2 \eta_i + d_i,
\end{equation}
\begin{equation}\label{prima_ps}
p_s = \xi P^2 \eta_s + d_s,
\end{equation}
\begin{equation}\label{prima_pcc}
p_{si} = \xi P^2 \eta_i \eta_s,
\end{equation}
\begin{equation}\label{prima_pacc}
p_{acc} = p_i p_s,
\end{equation}
\end{subequations}
where $\xi$ is the generation probability per pulse per squared unit power \cite{rosenfeld2020mid}, $\eta_i, \, \eta_s$ are the total transmission efficiencies for the idler and signal channels (from generation to detection), $d_i, \, d_s$ are the dark count probabilities per pulse for the idler and signal respectively. Eq. \eqref{prima_pcc} refers to net coincidences, thus without accidentals. In eqs. \eqref{eq:probabilities}, noise photons coming from the pump residual and Raman scattering, typically linear with the pump power, have not been considered, being negligible in our experimental setup. Singles and coincidence rates can be calculated by multiplying the probabilities in eqs. \eqref{eq:probabilities} by the repetition rate $R_p$ of the pump laser.
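The detection model of eqs. \eqref{eq:probabilities} turns into count rates once multiplied by $R_p$; the following Python sketch illustrates this, with all parameter values chosen for illustration only (they are not the fitted ones).

```python
def sfwm_rates(xi, P, eta_i, eta_s, d_i, d_s, R_p):
    """Singles, net-coincidence and accidental rates, per the quadratic
    SFWM model: p = xi * P^2 * eta (+ dark counts), times R_p."""
    p_i = xi * P**2 * eta_i + d_i        # idler detection prob. per pulse
    p_s = xi * P**2 * eta_s + d_s        # signal detection prob. per pulse
    p_si = xi * P**2 * eta_i * eta_s     # net coincidence prob. per pulse
    p_acc = p_i * p_s                    # accidental coincidence prob.
    return {k: v * R_p for k, v in
            {"idler": p_i, "signal": p_s, "coinc": p_si, "acc": p_acc}.items()}

# Illustrative numbers only (not the measured ones):
rates = sfwm_rates(xi=0.72, P=0.3, eta_i=1e-3, eta_s=1e-4,
                   d_i=7.75e-6, d_s=2e-6, R_p=80e6)
```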
Together with SFWM other nonlinear phenomena take place in the waveguide. Two photon absorption (TPA), cross two photon absorption (XTPA) and free carrier absorption (FCA) have to be modelled properly in order to recover the actual generation and transmission efficiency of the pairs. TPA, XTPA and FCA play an important role in increasing the losses in the waveguide for both the pump and the generated photons. As a result, the detection probabilities are no longer quadratic with the input pump power \cite{boyd2019nonlinear}. A further effect is the nonlinearity of the idler detector. To model the linear and nonlinear losses affecting pump, signal and idler photons, we solved the differential equations for the pulse propagation involving TPA, FCA and propagation losses, assuming that the pump power is equally split on the TE0 and TE1 modes \cite{borghi2017nonlinear}. According to this modeling we can rewrite eqs. \eqref{eq:probabilities} as
\begin{subequations} \label{p_average}
\begin{equation} \label{pcc_average}
p_{si} \simeq \xi \Bar{P}_p^2 \Bar{\eta}_i \Bar{\eta}_s \eta_{ND} \equiv \bar{p}_{si},
\end{equation}
\begin{equation} \label{pi_average}
p_i \simeq \left( \xi \Bar{P}_p^2 \Bar{\eta}_i + d_i \right) \eta_{ND} \equiv \bar{p}_{i},
\end{equation}
\begin{equation} \label{ps_average}
p_s \simeq \xi \Bar{P}_p^2 \Bar{\eta}_s + d_s \equiv \bar{p}_{s},
\end{equation}
\begin{equation}\label{pacc_average}
p_{acc} \simeq \bar{p}_i \bar{p}_s \equiv \bar{p}_{acc},
\end{equation}
\end{subequations}
where
\begin{subequations}
\begin{equation}
\Bar{P}_p = \sqrt{\frac{1}{L} \int_0^L P_p^2(z) dz},
\end{equation}
\begin{equation}
\bar{\eta}_j = \bar{\eta}_j^{on} \eta_j^{off},
\end{equation}
\begin{equation}\label{etajbar}
\Bar{\eta}^{on}_j = \frac{1}{L} \int_0^L \eta^{on}_j(z) dz,
\end{equation}
\end{subequations}
where $j=i,s$, $L$ is the waveguide length, $P_p(z)$ is the on-chip pump power along the waveguide, $\eta^{on}_j(z)$ is the transmission efficiency for a photon generated at $z$ along the waveguide accounting only for the linear and nonlinear on-chip losses, $\eta^{off}_j$ is the transmission efficiency accounting only for the losses occurring off chip (fiber-chip coupling, filtering) and $\eta_{ND}$ models the nonlinear response of the idler detector.
Details about the derivation of eqs. \eqref{p_average} are reported in Supplementary material.
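The averages above can be evaluated numerically; the sketch below does so for a purely linear loss $\alpha$ (an assumed value, for illustration only), whereas the full model also includes the $z$-dependent TPA, XTPA and FCA contributions.

```python
import numpy as np

def _trapz(f, z):
    """Simple trapezoid rule (avoids NumPy version differences)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z)))

def effective_pump_power(P_of_z, L, n=2001):
    """bar{P}_p = sqrt((1/L) * int_0^L P_p(z)^2 dz)."""
    z = np.linspace(0.0, L, n)
    return float(np.sqrt(_trapz(P_of_z(z) ** 2, z) / L))

def average_on_chip_efficiency(eta_of_z, L, n=2001):
    """bar{eta}^on = (1/L) * int_0^L eta^on(z) dz."""
    z = np.linspace(0.0, L, n)
    return _trapz(eta_of_z(z), z) / L

# Illustration with purely linear loss alpha (assumed values):
alpha, L = 0.6, 1.5                                    # 1/cm, cm
Pbar = effective_pump_power(lambda z: np.exp(-alpha * z / 2.0), L)
# A photon generated at z sees only the remaining length L - z:
eta_on = average_on_chip_efficiency(lambda z: np.exp(-alpha * (L - z)), L)
```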
\section{Results} \label{sec:results}
\subsection{Generation probability and heralding efficiency}\label{brightness}
To monitor the coincidences between signal and idler, we used a start-and-stop detection system, using the idler as the start trigger and the signal as the stop detection \cite{signorini2020chip}. Coincidences are evaluated within a coincidence window $\Delta t_c$. Note that while for the idler channel the detection rates (both signal and dark counts) are fixed by the detection gate width of the idler detector (1.90 ns), for the signal channel the rates depend on the coincidence window used in post processing. Therefore, with $R_{dc,i} = 620 \, \textrm{cps}$ and $R_{dc,s} = 2150 \, \textrm{cps}$ the dark count rates at the idler and signal detectors respectively,
\begin{equation}
d_i = R_{dc,i}/R_p = 7.75\times 10^{-6},
\end{equation}
while
\begin{equation}
d_s = 1 - \textrm{e}^{-R_{dc,s} \Delta t_c},
\end{equation}
considering a Poisson distribution for the signal noise (SPAD dark counts and UC noise).
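The two noise probabilities can be computed directly from the quoted rates; a minimal Python sketch:

```python
import math

def dark_count_probs(R_dc_i, R_dc_s, R_p, dt_c):
    """Per-pulse noise probabilities: d_i for the gated idler detector and
    d_s for the signal channel, assuming Poissonian noise statistics."""
    d_i = R_dc_i / R_p
    d_s = 1.0 - math.exp(-R_dc_s * dt_c)
    return d_i, d_s

# Values quoted above: 620 cps, 2150 cps, 80 MHz repetition rate, 1.1 ns window.
d_i, d_s = dark_count_probs(620.0, 2150.0, 80e6, 1.1e-9)
print(d_i)  # 7.75e-06
```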
In order to fit the measured rates and retrieve the generation probability $\xi$, we can reduce eqs. \eqref{p_average} to
\begin{subequations}\label{y}
\begin{equation}
y_i = \frac{\bar{p}_i - \eta_{ND} \, d_i}{\bar{\eta}_i^{on} \eta_{ND} } = \xi \bar{P}_p^2 \eta_i^{off} = a_i \bar{P}_p^2,
\end{equation}
\begin{equation}
y_s = \frac{\bar{p}_s - d_s}{ \bar{\eta}_s^{on} } = \xi \bar{P}_p^2 \eta_s^{off} = a_s \bar{P}_p^2,
\end{equation}
\begin{equation}
y_{si} = \frac{\bar{p}_{si}}{\bar{\eta}_i^{on} \eta_{ND} \bar{\eta}_s^{on}} = \xi \bar{P}_p^2 \eta_i^{off} \eta_s^{off} = a_{si} \bar{P}_p^2,
\end{equation}
\end{subequations}
with $a_i = \xi \eta_i^{off}$, $a_s = \xi \eta_s^{off}$, $a_{si} = \xi \eta_i^{off} \eta_s^{off}$. $y_i$, $y_s$, $y_{si}$ can be calculated from the measured singles, coincidence and noise rates, from the simulated $\bar{\eta}^{on}_j$ and from the measured $\eta_{ND}$ (see Supplementary material). Modeling the nonlinear losses exactly is a nontrivial task, since the nonlinear parameters vary strongly with the fabrication process and the geometry used. Therefore, we fit $y_i$, $y_s$, $y_{si}$ for an input power $<$ 0.5 W (i.e. $\bar{P}_p < 0.4$ W), where the nonlinear losses are not dominant. We use $f(x) = a x^2 + b$ as the fitting function, retrieving $a_i$, $a_s$ and $a_{si}$. In this way, we can evaluate $\xi$ (in units of W$^{-2}$ of peak power) and the off-chip transmissions, resulting in
\begin{subequations} \label{fit_results}
\begin{equation}
\xi = \frac{a_i \, a_s}{a_{si}} = \left( 0.72 \pm 0.10 \right) W^{-2},
\end{equation}
\begin{equation}
\eta_i^{off} = \frac{a_{si}}{a_s} = \left( 2.81 \pm 0.17 \right)\times 10^{-3},
\end{equation}
\begin{equation}
\eta_s^{off} = \frac{a_{si}}{a_i} = \left( 3.97 \pm 0.20 \right)\times 10^{-4},
\end{equation}
\end{subequations}
where we used $\Delta t_c = 1.1$ ns (3$\sigma$ bin width) and the uncertainties are evaluated at 1 standard deviation of the fitting coefficients. Details about the nonlinear parameters and propagation losses used in the model are reported in Supplementary materials. From these results we calculate the intrinsic heralding efficiency $\eta_I$ as \cite{signorini2020chip}
\begin{equation}
\eta_I = \frac{R^{net}_{si}}{\left( R_i-R_{dc,i}\right) \, \eta_s^{off}} = 59 \pm 5 \, \%,
\end{equation}
where $R^{net}_{si}$ is the measured net coincidence rate and $R_i$ is the measured idler rate. By normalizing for the signal channel losses, $\eta_I$ allows one to compare different sources solely on the basis of their intrinsic properties, independently of the setup used. Our high value stems from the low on-chip signal losses and the moderate filtering losses incurred to select the signal wavelength. The heralding efficiency can be further improved by optimizing the matching between the signal and UC bandwidths.
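The inversion leading to eqs. \eqref{fit_results} is a simple algebraic step; the sketch below applies it to fit coefficients chosen to be consistent with the quoted results (illustrative numbers, not the actual fit output).

```python
def unfold_fit_coefficients(a_i, a_s, a_si):
    """Invert a_i = xi*eta_i_off, a_s = xi*eta_s_off,
    a_si = xi*eta_i_off*eta_s_off for xi and the off-chip transmissions."""
    xi = a_i * a_s / a_si
    eta_i_off = a_si / a_s
    eta_s_off = a_si / a_i
    return xi, eta_i_off, eta_s_off

# Coefficients consistent with the quoted results (illustration only):
xi, eta_i_off, eta_s_off = unfold_fit_coefficients(
    a_i=2.02e-3, a_s=2.86e-4, a_si=8.03e-7)
```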
\subsection{Coincidence to accidental ratio}
To quantify the efficiency of coincidence detection, the coincidence-to-accidental ratio (CAR) is used. The CAR is analogous to a signal-to-noise ratio, comparing the rate of true coincidences with the accidental one. True coincidences come from the simultaneous detection of a signal and an idler belonging to the same pair. Coincidences between signals and idlers belonging to different pairs, or coincidences with noise photons or dark counts, contribute to the accidentals \cite{signorini2020chip,harada2009frequency}. The measurement of the CAR is carried out with the start-stop coincidence detection described in sec. \ref{brightness}. We used the setup in Fig. \ref{fig:setup}b with a single visible SPAD at the output of the UC, after removing the beam splitter; in fact, the CAR does not involve the intra-beam correlations. As shown in Fig. \ref{fig:car_meas}, the coincidences occur at a temporal delay $\delta t$ = 0 ns. The other peaks, spaced by the laser repetition period, are due to accidentals. Note that the zero-delay peak also includes accidental coincidences.
Therefore, the CAR is evaluated as
\begin{equation}
\textrm{CAR} = \frac{\textrm{coincidence counts}}{\textrm{accidental counts}} = \frac{N^{raw}_{si} - N_{acc}}{N_{acc}},
\end{equation}
with $N^{raw}_{si}$ the total coincidence counts falling in the zero-delay bin and $N_{acc}$ the accidental counts, evaluated as the average over all the accidental peaks. The true coincidences, also called net coincidences, are calculated as $N_{si}^{net} = N_{si}^{raw}-N_{acc}$. Depending on the $\Delta t_c$ used, the ratio between coincidences and accidentals in the individual bin changes, changing the CAR. In Fig. \ref{fig:car} we report the measured CAR and the corresponding net coincidences as a function of the on-chip peak pump power. Note that the peak power in the plot is the power at the input of the multimode waveguide after fiber-chip coupling losses; it is not $\bar{P}_p$. We report the results for coincidence windows of 1.1 ns and 2 ns. With the 1.1 ns window the CAR is higher, with a maximum of 40.4(9) at 115 mW. At this power the rate of net coincidences is 0.316(3) cps. The net coincidences are almost the same for the two windows, demonstrating that with the larger coincidence window we are mainly introducing noise rather than signal. The CAR and net coincidences have also been simulated starting from the parameters calculated in sec. \ref{brightness} and sec. \ref{sec:data_anal}. They are reported as solid lines in the figure and are calculated as \cite{harada2009frequency}
\begin{subequations}
\begin{equation}
\textrm{CAR} = \frac{\bar{p}_{si}}{\bar{p}_i \, \bar{p}_s} = \frac{\xi \bar{P}_p^2 \bar{\eta}_i \bar{\eta}_s}{\left(\xi \bar{P}_p^2 \bar{\eta}_i + d_i\right) \, \left(\xi \bar{P}_p^2 \bar{\eta}_s + d_s\right)},
\end{equation}
\begin{equation}
N_{si}^{net} = \xi \bar{P}_p^2 \bar{\eta}_i \bar{\eta}_s \eta_{ND} R_p.
\end{equation}
\end{subequations}
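The simulated CAR above can be sketched numerically; note that $\eta_{ND}$ cancels between $\bar{p}_{si}$ and $\bar{p}_i$. The parameter values below are illustrative only (not the fitted ones); they reproduce the qualitative rise-peak-fall behaviour of the CAR with pump power.

```python
def simulated_car(xi, Pbar, eta_i, eta_s, d_i, d_s):
    """CAR = p_si / (p_i * p_s), following the model above (eta_ND cancels)."""
    p_si = xi * Pbar**2 * eta_i * eta_s
    p_i = xi * Pbar**2 * eta_i + d_i
    p_s = xi * Pbar**2 * eta_s + d_s
    return p_si / (p_i * p_s)

# Illustrative parameters: the CAR rises at low power, peaks where pair
# generation overtakes the dark counts, then falls roughly as 1/P^2.
params = dict(xi=0.72, eta_i=1e-3, eta_s=1e-4, d_i=7.75e-6, d_s=2.4e-6)
car_low, car_mid, car_high = (simulated_car(Pbar=P, **params)
                              for P in (0.01, 0.1, 1.0))
```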
Simulated and experimental values of CAR are in agreement in the whole range of pump power used. This agreement demonstrates that the main effects and phenomena involved in the generation process have been properly considered and modelled. The net coincidence rates are in agreement at low power, while at higher power the nonlinear losses have been overestimated. A perfect agreement would require a precise knowledge of all the nonlinear parameters of the material.\\
The larger CAR here measured with respect to other works \cite{rosenfeld2020mid} demonstrates that the overall system, considering both the generation and detection stages, is competitive with respect to solutions already demonstrated on the silicon platform.
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{car_meas.eps}
\caption{\label{fig:car_meas}
Two-fold coincidences as a function of the delay $\delta t$ between idler (start) and signal (stop) detections. We collect the events with a coincidence window of 0.05 ns (blue). In post processing, we use a larger coincidence window, here 1.1 ns (orange), in order to take into account the majority of the coincidence events. The coincidence peak is the highest one, placed at $\delta t = 0$ ns. The laser repetition period is clearly visible from the accidental peaks. In the inset, we focus on the zero-delay bin, comparing the coincidence peak shape with the post processing coincidence window.
}
\end{figure}
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{CAR_double_new.eps}
\caption{\label{fig:car}
Measured CAR (circles) and net coincidence rates (triangles) with $\Delta t_c = 1.1$ ns (orange) and $\Delta t_c = 2$ ns (blue). The data are reported versus the on-chip peak pump power. The experimental points are compared with the simulated values for both the CAR (solid lines) and the net coincidence rates (dashed lines). With $\Delta t_c = 1.1$ ns the CAR is remarkably higher with respect to the 2 ns bin, with only a limited reduction in the coincidence rate. The better performance obtained with the smaller $\Delta t_c$ is due to the lower noise integrated within the coincidence bin.
}
\end{figure}
\begin{table*}[h!]
\renewcommand*{\arraystretch}{1.4}
\caption{Comparison with state of the art MIR heralded sources.}\label{T:sources}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\textbf{Platform} & \textbf{Process} & \textbf{Generation probability} & \textbf{CAR}& \textbf{CAR} & $\mathbf{g^{(2)}_h(0)}$& $\eta_I$ &\textbf{Reference}\\
& & (W$^{-2}$) & \textbf{max} & @ $N^{net}_{si} \sim$ 1 Hz & & ($\%$)& \\
\hline
Mg:PPLN & SPDC & - & 180 $\pm$ 50 & - & - & - &\cite{prabhakar2020two} \\
SOI & intra-modal SFWM & 0.28 &25.7 $\pm$ 1.1 & 25.7 $\pm$ 1.1 & - & 5 &\cite{rosenfeld2020mid} \\
SOI & inter-modal SFWM & 0.72 $\pm$ 0.10 & 40.4 $\pm$ 0.9 & 27.9 $\pm$ 0.5 & 0.23 $\pm$ 0.08 & 59 $\pm$ 5 & This work \\
\hline
\end{tabular}
\end{table*}
\subsection{Heralded g$^{(2)}_h$}
To assess the single photon nature of the emission, we measured the heralded $g^{(2)}$, which we indicate as $g^{(2)}_h$. Using the setup in Fig. \ref{fig:setup}b, we tuned the delays so that the signal detection on one visible SPAD coincides with the idler detection on the InGaAs SPAD. The coincidence between these two detectors, with a coincidence window $\Delta t_c =$ 2 ns, was used as the start trigger, while the detection from the remaining visible SPAD, which we call the ``delayed signal'', was used as the stop trigger. In this way, we monitored the three-fold coincidences as a function of the delay $\delta t$ between the start and stop events. At the same time, we measured the two-fold coincidences between the idler and the delayed signal. We used a coincidence window of 2 ns to monitor the three-fold coincidences. The $g^{(2)}_h$ can be written as \cite{signorini2020chip}
\begin{equation} \label{g2h}
g^{(2)}_h(\delta t) = \frac{N_{12i}(\delta t)}{N_{1i}(0) N_{2i}(\delta t)} N_i,
\end{equation}
where $1,2,i$ label respectively the first signal detector, the second signal detector (that is the delayed signal) and the idler detector. $N_{12i}$ corresponds to the three-fold coincidence counts, $N_{1i}$ and $N_{2i}$ are the two-fold coincidence counts between the idler and the signal detectors, and $N_i$ corresponds to the idler counts. We can normalize eq. \eqref{g2h} by $N_i$ and $N_{1i}(0)$, such that
\begin{equation} \label{g2hn}
g^{(2)}_h(\delta t) = \frac{N_{12i}(\delta t)}{\langle N_{12i}(\delta t \neq 0) \rangle} \frac{\langle N_{2i} (\delta t \neq 0) \rangle}{N_{2i} (\delta t)},
\end{equation}
with $\langle N_{12i}(\delta t \neq 0) \rangle$ and $\langle N_{2i}(\delta t \neq 0) \rangle$ the averages of the three-fold and two-fold coincidence counts for $\delta t$ different from zero. If the emission is truly at the single photon level, $g^{(2)}_h(0)$ should be lower than 0.5 \cite{signorini2020chip}. The measured $g^{(2)}_h(0)$ as a function of the on-chip peak pump power is reported in Fig. \ref{fig:g2h}. For an input power of 0.33 W we measured $g^{(2)}_h(0) = 0.23(8)$, demonstrating the single photon regime of the source. The corresponding $g^{(2)}_h(\delta t)$, calculated as in eq. \eqref{g2hn}, is reported in the inset of Fig. \ref{fig:g2h}. We discarded the bins neighbouring the zero-delay bin, which are affected by spurious coincidences due to photon emission from triggered silicon SPADs \cite{kurtsiefer2001breakdown}.
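Eq. \eqref{g2hn} can be evaluated directly from the delay histograms; the sketch below (toy counts, not measured data) also allows excluding the side bins spoiled by SPAD breakdown flashes.

```python
def g2h_zero_delay(N_12i, N_2i, exclude=()):
    """Normalised heralded g2 at zero delay: the zero-delay three-fold
    counts are compared with the average over the side peaks, times the
    inverse ratio for the idler/delayed-signal two-folds.  `exclude`
    lists side-peak delays to drop (e.g. SPAD afterglow bins)."""
    side = [d for d in N_12i if d != 0 and d not in exclude]
    avg_12i = sum(N_12i[d] for d in side) / len(side)
    avg_2i = sum(N_2i[d] for d in side) / len(side)
    return (N_12i[0] / avg_12i) * (avg_2i / N_2i[0])

# Toy histograms keyed by delay bin (units of the laser period);
# the values are illustrative counts, not measured data.
g2h0_value = g2h_zero_delay(
    N_12i={0: 5, -2: 20, -1: 18, 1: 22, 2: 20},
    N_2i={0: 100, -2: 100, -1: 100, 1: 100, 2: 100},
    exclude=(-1, 1))
```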
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{g2h_2ns_fillPlot_new.eps}
\caption{\label{fig:g2h} Comparison between the measured (blue points) and simulated (light blue area) $g_h^{(2)}(0)$ as a function of the on-chip peak power. The inset reports the measured $g^{(2)}_h(\delta t)$ at an on-chip peak power of 0.33 W. The bins adjacent to the zero-delay one have been removed due to photons emitted by the triggered SPADs.
}
\end{figure}
To verify the goodness of the modeling introduced in sec. \ref{sec:data_anal}, we used the values of $\xi$, $\bar{P}_p$, $\bar{\eta}_i$ and $\bar{\eta}_s$ calculated in sec. \ref{brightness} to simulate the expected $g_h^{(2)}(0)$. Considering the general formula for the heralded second order coherence, we can write
\begin{equation}
g_{h}^{(2)}(0) = \frac{\bar{p}_{12i} \bar{p}_i}{\bar{p}_{1i} \bar{p}_{2i}},
\end{equation}
where $\bar{p}_{12i}$ is the probability per pulse of having a three-fold coincidence. To model the experimental results, we have to consider all the possible coincidence events that may involve signal and/or noise detections. By considering all the possible events leading to a three-fold coincidence (see Supplementary Material), we can rewrite $\bar{p}_{12i}$ as
\begin{align} \label{p12i}
\bar{p}_{12i} =
& \sum_{n=2}^\infty n^2 (n-1) \wp(n) \, \bar{\eta}_1 \bar{\eta}_2 \bar{\eta}_i \eta_{ND}\\
+ & \sum_{n=1}^\infty n^2 \wp(n) \, (\bar{\eta}_1 d_2 + d_1 \bar{\eta}_2) \bar{\eta}_i \eta_{ND}\\
+ & \frac{1}{2} \sum_{n=2}^\infty n(n-1) \wp(n) \, \bar{\eta}_1 \bar{\eta}_2 d_i \eta_{ND}\\
+ & \sum_{n=1}^\infty n \wp(n) \, \bar{\eta}_1 d_2 d_i \eta_{ND}\\
+ & \sum_{n=1}^\infty n \wp(n) \, d_1 \bar{\eta}_2 d_i \eta_{ND}\\
+ & \sum_{n=1}^\infty n \wp(n) d_1 d_2 \bar{\eta}_i \eta_{ND}\\
+ & \, d_1 d_2 d_i \eta_{ND},
\end{align}
with $\wp(n)$ the photon number distribution. In eq. \eqref{p12i}, $\bar{\eta}_i$ is as in eq. \eqref{etajbar}, while $\bar{\eta}_1$ and $\bar{\eta}_2$ must also account for the effect of the beam splitter; thus, according to eq. \eqref{etajbar}, they can be written as
\begin{subequations}\label{eta12}
\begin{equation}
\bar{\eta}_1 = \bar{\eta}_s T^2_{BS} \eta_{BS},
\end{equation}
\begin{equation}
\bar{\eta}_2 = \bar{\eta}_s R^2_{BS} \eta_{BS},
\end{equation}
\end{subequations}
with $T_{BS}$ and $R_{BS}$ the transmission and reflection coefficients of the beam splitter, $T^2_{BS} + R^2_{BS} = 1$, and $\eta_{BS}$ modeling the losses of the beam splitter. In our case, $T^2_{BS} = R^2_{BS} = 0.5$ and $\eta_{BS} = 1$. In eqs. \eqref{eta12} we are assuming the same detection efficiency for the two visible SPADs. Considering all the events leading to a two-fold coincidence, we can rewrite $\bar{p}_{1i}$ and $\bar{p}_{2i}$ as
\begin{align}\label{pki}
\bar{p}_{ki} = & \sum_{n=1}^\infty n^2 \wp(n) \bar{\eta}_k \bar{\eta}_i \eta_{ND} \\
+ & \sum^\infty_{n=1} n \wp(n) \left( \bar{\eta}_k d_i + d_k \bar{\eta}_i \right) \eta_{ND}\\
+ & \, d_k d_i \eta_{ND},
\end{align}
with $k = 1,2$. Note that in eq. \eqref{p12i} and eq. \eqref{pki} we neglect events with more than one photon reaching the same detector, since such events are unlikely given the transmission efficiencies involved (i.e. $\bar{\eta}_i$, $\bar{\eta}_1$ and $\bar{\eta}_2$ are all $\ll$1). We also neglect events where photon detections and dark count detections occur simultaneously on the same detector.
The photon number distribution of a squeezed source ranges between a Poissonian (infinite-mode emission) and a thermal (single-mode emission) distribution \cite{takesue2010effects,signorini2020chip}. We solved eq. \eqref{p12i} and eq. \eqref{pki} for the Poissonian emission,
\begin{equation} \label{ppoisson}
\wp(n) = \frac{\mu^n}{n!} \textrm{e}^{ -\mu},
\end{equation}
and for the thermal emission,
\begin{equation}\label{pthermal}
\wp(n) = \frac{\mu^n}{(1+\mu)^{n+1}},
\end{equation}
where $\mu$ is the average number of pairs per pulse. Eqs. \eqref{ppoisson} and \eqref{pthermal} define a lower and an upper boundary for $g^{(2)}_{h,sim}$. In computing $g^{(2)}_{h}$ we calculated $\mu$ as $\mu = \xi \bar{P}_p^2$ and we measured the noise affecting the three channels: $d_i$ is the same as in the CAR measurements, $d_1 = 2.30 \times 10^{-6}$ and $d_2 = 2.32 \times 10^{-6}$. We simulated an area for the expected value of $g^{(2)}_h(0)$, upper bounded by the thermal case and lower bounded by the Poissonian case. The simulation is reported in Fig. \ref{fig:g2h}. The measured $g^{(2)}_h$ is compatible with the simulated values, confirming the reliability of the modeling. We stress that in this case we are not performing a fit of the measured $g^{(2)}_h$, and that the experiment and the simulation are completely independent. The experimental points in Fig. \ref{fig:g2h} are closer to the upper bound than to the lower one, suggesting an emission statistics closer to the thermal one. This is compatible with the unheralded $g^{(2)}$ of the signal beam \cite{signorini2020chip}, reported in Fig. \ref{fig:g2} as a function of the pump power. The unheralded $g^{(2)}$ is 1.67(2) at a power of 1.08 W, compatible with the simulated value of 1.66 (dashed line) calculated from the simulated joint spectral intensity (JSI) \cite{signorini2020chip,borghi2020phase}. The measured $g^{(2)}$ demonstrates that the source is closer to a thermal emission, justifying the experimental $g^{(2)}_h$. In Fig. \ref{fig:g2} we also report the simulated values for a source whose statistics lies between the thermal (upper bound) and the Poissonian (lower bound) one. At low powers the dark counts dominate, and in both cases the $g^{(2)}$ goes to 1. At high powers, the $g^{(2)}$ asymptotically increases to its actual value. This explains the power dependent behaviour of the experimental data.
Further details about the measurement and simulation of $g^{(2)}$ are reported in Supplementary materials.
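The truncated sums of eqs. \eqref{p12i} and \eqref{pki} are straightforward to evaluate numerically for both distributions; the sketch below (noiseless, small-$\mu$ illustration with placeholder efficiencies) reproduces the expected ordering, with the thermal bound above the Poissonian one. Since $\eta_{ND}$ appears once in each probability, it cancels and is omitted.

```python
import math

def poisson(n, mu):
    return mu**n / math.factorial(n) * math.exp(-mu)

def thermal(n, mu):
    return mu**n / (1.0 + mu)**(n + 1)

def g2h0(wp, mu, eta1, eta2, eta_i, d1, d2, d_i, nmax=40):
    """g2_h(0) = p12i * p_i / (p1i * p2i), with p12i and pki evaluated
    from truncated sums over the photon number distribution wp."""
    p = [wp(n, mu) for n in range(nmax + 1)]
    s1 = sum(n * p[n] for n in range(1, nmax + 1))              # <n>
    s2 = sum(n * n * p[n] for n in range(1, nmax + 1))          # <n^2>
    s21 = sum(n * n * (n - 1) * p[n] for n in range(2, nmax + 1))
    s11 = sum(n * (n - 1) * p[n] for n in range(2, nmax + 1))
    p12i = (s21 * eta1 * eta2 * eta_i
            + s2 * (eta1 * d2 + d1 * eta2) * eta_i
            + 0.5 * s11 * eta1 * eta2 * d_i
            + s1 * (eta1 * d2 + d1 * eta2) * d_i
            + s1 * d1 * d2 * eta_i
            + d1 * d2 * d_i)
    def pki(etak, dk):
        return s2 * etak * eta_i + s1 * (etak * d_i + dk * eta_i) + dk * d_i
    p_i = s1 * eta_i + d_i
    return p12i * p_i / (pki(eta1, d1) * pki(eta2, d2))

# Noiseless, small-mu illustration (efficiencies cancel in this limit):
common = dict(mu=0.01, eta1=1e-4, eta2=1e-4, eta_i=1e-3,
              d1=0.0, d2=0.0, d_i=0.0)
g_poisson = g2h0(poisson, **common)   # lower bound
g_thermal = g2h0(thermal, **common)   # upper bound
```

In this noiseless limit the sums reduce to the factorial moments of the two distributions, giving $g^{(2)}_h(0) = \mu(\mu+2)/(1+\mu)^2$ for the Poissonian case and $\mu(6\mu+4)/(1+2\mu)^2$ for the thermal one, so the thermal bound is about twice the Poissonian one at small $\mu$.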
\begin{figure}[h!]
\includegraphics[width=\columnwidth]{g2_merge_new.eps}
\caption{\label{fig:g2} The measured unheralded $g^{(2)}(0)$ (orange dots) is reported as a function of the on-chip peak power. The inset reports the simulated JSI, from which we calculated the expected $g^{(2)}$ (dashed black line), which is compatible with the experiment. The measured points fall within the simulated values (light orange area), upper bounded by a source with thermal emission statistics and lower bounded by a source with Poissonian emission statistics (constant $g^{(2)} = 1$).
}
\end{figure}
\section{Conclusions}\label{sec:conclusions}
In this work, we demonstrated a heralded single photon source beyond 2 $\upmu$m based on inter-modal SFWM on a silicon chip. This source has two main peculiarities: the discrete band generation and the large detuning between the signal and idler photons. The discrete band generation removes the need for tight filtering to select the idler and signal wavelengths, and the generated photons experience a higher transmission with respect to standard continuous band sources, as witnessed by the high experimental $\eta_I = 59(5) \, \%$. The large detuning has two advantages: on one side, it enables an easier rejection of the pump and of the nonlinear noise; on the other side, it allows the generation of the herald photon in the NIR, benefiting from an efficient detection technology. As a further advantage, this heralded source based on inter-modal SFWM requires a common C-band pump laser, which is easier to integrate and operate on a silicon chip.
We performed a complete characterization of the source. We demonstrated the sub-Poissonian statistics of the source by measuring $g^{(2)}_h(0) = 0.23(8)$. We characterized the CAR, finding a maximum value of 40.4(9), and the generation probability per pulse, with a measured value of 0.72(10) W$^{-2}$. These performances are competitive with other reported silicon sources of MIR photons (Table \ref{T:sources}), demonstrating the promising perspectives of inter-modal SFWM for bright and efficient sources of correlated photons beyond 2 $\upmu$m. The source can be significantly improved by reducing the propagation losses and optimizing the matching between the signal and upconverter bandwidths. With this work we demonstrate a new approach to MIR quantum photonics, providing a high quality source of quantum light beyond 2 $\upmu$m without the need for MIR technologies. This result paves the way towards low cost, efficient and integrated solutions for quantum photonics beyond 2 $\upmu$m, offering new opportunities to the developing field of MIR photonics.
\section*{SUPPLEMENTARY MATERIAL}
See supplementary material for further details about the experimental setup, the measurements and the theoretical calculations.
\begin{acknowledgments}
This work was partially supported by grants from Q@TN provided by the Provincia Autonoma di Trento. The authors acknowledge HC Photonics, which fabricated the PPLN crystals used for the upconversion system. S.S. wants to thank Dr. Massimo Borghi, for fruitful discussions and precious suggestions, and Mr. Davide Rizzotti for his careful revision of the manuscript.
\end{acknowledgments}
\section*{DATA AVAILABILITY}
The data that support the findings of this study are available from the corresponding author
upon reasonable request.
\section*{REFERENCES}
\section{Introduction}\label{sec_intro}
The automated extraction of roads applies to a multitude of long-term efforts: improving access to health services, urban planning, and improving social and economic welfare. This is particularly true in developing countries that have limited resources for manually intensive labeling and are under-represented in current mapping. Updated maps are also crucial for such time sensitive efforts as determining communities in greatest need of aid, effective positioning of logistics hubs, evacuation planning, and rapid response to acute crises.
Existing data collection methods such as manual road labeling or aggregation of mobile GPS tracks are currently insufficient to properly capture either underserved regions (due to infrequent data collection), or the dynamic changes inherent to road networks in rapidly changing environments. For example, in many regions of the world OpenStreetMap (OSM) \cite{OpenStreetMap} road networks are remarkably complete. Yet, in developing nations OSM labels are often missing metadata tags (such as speed limit or number of lanes), or are poorly registered with overhead imagery (i.e., labels are offset from the coordinate system of the imagery), see Figure \ref{fig:osm_goof}. An active community works hard to keep the road network up to date, but such tasks can be challenging and time consuming in the face of large scale disasters.
For example, following Hurricane Maria, it took the Humanitarian OpenStreetMap Team (HOT) over two months to fully map Puerto Rico \cite{osm_maria}.
Furthermore, in large-scale disaster response scenarios, pre-existing datasets such as population density and even geographic topology may no longer be accurate, preventing responders from leveraging this data to jump start mapping efforts.
\begin{comment}
Current approaches to road labeling are often manually intensive. In the commercial realm, projects such as Bing Maps and Google Maps have been very successful in developing road networks from overhead imagery, though such processes are still labor intensive, and proprietary. On the open source side, OpenStreetMap (OSM) is an extensive data set built and curated by a community of mappers.
\end{comment}
\begin{figure}
\vspace{-5pt}
\centering
\includegraphics[width=0.95\linewidth]{osm_goofs.jpg}
\caption{\textbf{Potential issues with OSM data.} Left: OSM roads (orange) overlaid on Khartoum imagery; the east-west road in the center is erroneously unlabeled. Right: OSM roads (orange) and SpaceNet buildings (yellow); in some cases road labels are misaligned and intersect buildings.}
\label{fig:osm_goof}
\vspace{-15pt}
\end{figure}
The frequent revisits of satellite imaging constellations may accelerate existing efforts to quickly update road network and routing information.
Of particular utility is estimating the time it takes to travel various routes in order to minimize response times in various scenarios;
unfortunately existing algorithms based upon remote sensing imagery cannot provide such estimates.
A fully automated approach to road network extraction and travel time estimation from satellite imagery therefore warrants investigation, and is explored in the following sections.
In Section \ref{sec:existing} we discuss related work, while
Section \ref{sec:algo} details our graph extraction algorithm that infers a road network with semantic features directly from imagery.
In Section \ref{sec:data} we discuss the datasets used and our method for assigning road speed estimates based on road geometry and metadata tags.
Section \ref{sec:metrics} discusses the need for modified metrics to measure our semantic graph, and
Section \ref{sec:experiments} covers our experiments to extract road networks from multiple datasets.
Finally in Sections \ref{sec:discussion} and \ref{sec:conclusion} we discuss our findings and conclusions.
\section{Related Work}\label{sec:existing}
Extracting road pixels in small image chips from aerial imagery has a rich history (e.g.
\cite{zhang2017},
\cite{mattyus16}
\cite{wang2016},
\cite{zhang2017},
\cite{sironi14},
\cite{mnihroads}).
These algorithms typically use a segmentation + post-processing approach combined with lower resolution imagery (resolution $\geq 1$ meter), and OpenStreetMap labels.
Some more recent efforts (e.g. \cite{dlinknet}) have utilized higher resolution imagery (0.5 meter) with pixel-based labels \cite{deepglobe}.
Extracting road networks directly has also garnered increasing academic interest as of late.
\cite{stoica04}
attempted road extraction via a Gibbs point process, while
\cite{wegner13}
showed some success with road network extraction with a conditional random field model.
\cite{chai13}
used junction-point processes to recover line networks in both roads and retinal images, while
\cite{turet13}
extracted road networks by representing image data as a graph of potential paths.
\cite{mattyus15}
extracted road centerlines and widths via OSM and a Markov random field process, and
\cite{mosinska18}
used a topology-aware loss function to extract road networks from aerial features as well as cell membranes in microscopy.
Of greatest interest for this work are a trio of recent papers that improved upon previous techniques.
DeepRoadMapper \cite{deeproadmapper} used segmentation followed by $A^{*}$ search, applied to the not-yet-released TorontoCity Dataset.
The RoadTracer paper \cite{roadtracer}
utilized an interesting approach that used OSM labels to directly extract road networks from imagery without intermediate steps such as segmentation. While this approach is compelling, according to the authors it ``struggled in areas where roads were close together'' \cite{fbastani_roads} and underperforms other techniques such as segmentation + post-processing
when applied to higher resolution
data with dense labels.
\cite{Batra_2019_CVPR} used a connectivity task termed Orientation Learning combined with a stacked convolutional module and a SoftIOU loss function to effectively utilize the mutual information between orientation learning and segmentation tasks to extract road networks from satellite imagery, noting improved performance over \cite{roadtracer}.
Given that \cite{roadtracer} noted superior performance to \cite{deeproadmapper} (as well as previous methods), and \cite{Batra_2019_CVPR} claimed improved performance over both \cite{roadtracer} and \cite{deeproadmapper}, we compare our results to RoadTracer \cite{roadtracer} and Orientation Learning \cite{Batra_2019_CVPR}.
We build upon CRESI v1 \cite{cresi} that scaled up narrow-field road network extraction methods. In this work we focus primarily on developing methodologies to infer road speeds and travel times, but also improve the segmentation, gap mitigation, and graph curation steps of \cite{cresi}, as well as improve inference speed.
\section{Road Network Extraction Algorithm}\label{sec:algo}
Our approach is to combine novel segmentation approaches, improved post-processing techniques for road vector simplification, and road speed extraction using both vector and raster data.
Our greatest contribution is the inference of road speed and travel time for each road vector, a task that has not been attempted in any of the related works described in Section \ref{sec:existing}.
We utilize satellite imagery and geo-coded road centerline labels (see Section \ref{sec:data} for details on datasets) to build training datasets for our models.
We create training masks from road centerline labels assuming a mask halfwidth of 2 meters for each edge.
One could scale the training mask width with the full width of the road (e.g.~a four-lane road would receive a wider mask than a two-lane road), but since the end goal is road centerline vector extraction, we use the same training mask width for all roadways.
Too wide a buffer inhibits the ability to identify the exact centerline of the road, while too thin a buffer reduces the robustness of the model to noise and variance in label quality; we find that a 2 meter buffer provides the best tradeoff between these two extremes.
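A minimal sketch of burning one centerline segment into a binary training mask with the 2 meter halfwidth, assuming 0.3 m/pixel imagery; the function name and distance-thresholding approach are ours for illustration (a production pipeline might rasterize buffered geometries instead):

```python
import numpy as np

GSD = 0.3          # assumed ground sample distance (meters per pixel)
HALFWIDTH_M = 2.0  # road centerline buffer halfwidth from the text

def rasterize_segment(h, w, p0, p1, halfwidth_m=HALFWIDTH_M, gsd=GSD):
    """Burn a centerline segment (p0, p1 in (x, y) pixel coords) into an
    h x w binary mask by thresholding pixel-to-segment distance."""
    ys, xs = np.mgrid[0:h, 0:w]
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = ((xs - p0[0]) * d[0] + (ys - p0[1]) * d[1]) / max(d @ d, 1e-9)
    t = np.clip(t, 0.0, 1.0)
    dist = np.hypot(xs - (p0[0] + t * d[0]), ys - (p0[1] + t * d[1]))
    return (dist * gsd <= halfwidth_m).astype(np.uint8)
```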
We have two goals: extract the road network over large areas, and assess travel time along each roadway.
In order to assess travel time we assign a speed limit to each roadway
based on metadata tags such as road type, number of lanes, and surface construction.
We assign a maximum safe traversal speed of 10 - 65 mph to each segment based on the road metadata tags. For example, a paved one-lane residential road has a speed limit of 25 mph, a three-lane paved motorway can be traversed at 65 mph, while a one-lane dirt cart track has a traversal speed of 15 mph.
See Appendix A for further details.
This approach is tailored to disaster response scenarios, where safe navigation speeds likely supersede government-defined speed limits.
We therefore prefer estimates based on road metadata over government-defined speed limits, which may be unavailable or inconsistent in many areas.
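The mapping from metadata tags to traversal speed can be sketched as a simple lookup. The full table lives in Appendix A, so the base speeds, per-lane increment, and surface penalty below are illustrative assumptions chosen only to reproduce the three examples quoted above:

```python
# Hypothetical base speeds (mph) for a one-lane paved road of each type;
# the authoritative values are in Appendix A of the paper.
BASE_SPEED = {
    "residential": 25,
    "motorway": 55,
    "track": 25,
}

def traversal_speed(road_type, num_lanes=1, surface="paved"):
    """Estimate a safe traversal speed (mph), clipped to 10-65 mph."""
    speed = BASE_SPEED.get(road_type, 20)   # default for unknown types
    speed += 5 * (num_lanes - 1)            # wider roads are faster
    if surface != "paved":
        speed -= 10                         # dirt/gravel penalty
    return max(10, min(65, speed))
```

For example, a three-lane paved motorway reaches the 65 mph cap, while a one-lane dirt track drops to 15 mph.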
\subsection{Multi-Class Segmentation}\label{sec:seg_mc}
We create multi-channel training masks by binning the road labels into a 7-layer stack, with channel 0 detailing speeds between 1-10 mph, channel 1 between 11-20 mph, etc. (see Figure
\ref{fig:train_masks}).
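The binning of road labels into the 7-layer stack can be sketched as follows; this is a minimal numpy illustration (function names are ours, not from released code):

```python
import numpy as np

def speed_channel(speed_mph, n_channels=7):
    """Map a speed to its 10 mph bin: 1-10 mph -> 0, 11-20 mph -> 1, ..."""
    return min(int((speed_mph - 1) // 10), n_channels - 1)

def multiclass_mask(binary_mask, speed_mph, n_channels=7):
    """Place a binary road mask into the channel for its speed bin."""
    stack = np.zeros((n_channels,) + binary_mask.shape, dtype=np.uint8)
    stack[speed_channel(speed_mph)] = binary_mask
    return stack
```

A 35 mph road therefore lands in channel 3 (31-40 mph), consistent with the speed-estimation procedure described later.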
We train a segmentation model inspired by the winning SpaceNet 3 algorithm \cite{albu}, and use
a ResNet34 \cite{resnet} encoder with a U-Net \cite{unet} inspired decoder. We include skip connections at every layer of the network, and use an Adam optimizer.
We explore various loss functions, including binary cross entropy, Dice, and focal loss \cite{focal_loss}, and find the best performance with the following custom loss function, using $\alpha_{mc} = 0.75$:
\begin{equation}\label{eqn:c}
\mathcal{L} = \alpha_{mc}\mathcal{L}_{\text{focal}} + (1-\alpha_{mc})\mathcal{L}_{\text{dice}}
\end{equation}
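Equation \ref{eqn:c} can be sketched in numpy as follows; training actually runs in a deep learning framework, so this illustrative version operates directly on predicted probability arrays (focal loss with $\gamma = 2$ is an assumption, as is the Dice smoothing term):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Focal loss: cross entropy down-weighted for easy examples."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * (1 - p) ** gamma * np.log(p)
                    + (1 - y) * p ** gamma * np.log(1 - p))

def dice_loss(p, y, eps=1e-7):
    """Soft Dice loss: 1 - overlap / total mass."""
    return 1.0 - (2 * np.sum(p * y) + eps) / (np.sum(p) + np.sum(y) + eps)

def combined_loss(p, y, alpha_mc=0.75):
    """alpha * focal + (1 - alpha) * dice, as in Equation (1)."""
    return alpha_mc * focal_loss(p, y) + (1 - alpha_mc) * dice_loss(p, y)
```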
\begin{figure}[]
\vspace{-5pt}
\begin{center}
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{cc}
\vspace{-6pt}
\subfloat [\textbf{Input Image}] {\includegraphics[width=0.48\linewidth]{RGB-PanSharpen_AOI_2_Vegas_img47.jpg}} &
\subfloat [\textbf{Binary Training Mask}] {\includegraphics[width=0.48\linewidth]{RGB-PanSharpen_AOI_2_Vegas_img47_mask_black.jpg}} \\
\subfloat [\textbf{Continuous Training Mask}] {\includegraphics[width=0.48\linewidth]{RGB-PanSharpen_AOI_2_Vegas_img47_mask.jpg}} &
\subfloat [\textbf{Multi-class Training Mask}] {\includegraphics[width=0.48\linewidth]{mc_train_mask1p5.jpg}} \\
\end{tabular}
\caption{\textbf{Training data.} (a) Input image. (b) Typical binary road training mask (not used in this study).
(c) Continuous training mask, whiter denotes higher speeds.
(d) Multi-class mask showing individual speed channels: red = 21-30 mph, green = 31-40 mph, blue = 41-50 mph.}
\label{fig:train_masks}
\end{center}
\vspace{-12pt}
\end{figure}
\subsection{Continuous Mask Segmentation}
A second segmentation method renders continuous training masks from the road speed labels. Rather than the typical binary mask, we linearly scale the mask pixel value
with speed limit,
assuming a maximum speed of 65 mph
(see Figure \ref{fig:train_masks}).
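This linear scaling (and its inversion at inference time) can be sketched as follows; the function names are ours for illustration:

```python
import numpy as np

MAX_SPEED = 65.0  # mph, the assumed maximum speed

def continuous_mask(binary_mask, speed_mph):
    """Scale road pixels linearly with speed limit: 65 mph -> 1.0."""
    return binary_mask.astype(np.float32) * min(speed_mph, MAX_SPEED) / MAX_SPEED

def mask_to_speed(pixel_value):
    """Invert the scaling to recover a speed estimate from a pixel."""
    return pixel_value * MAX_SPEED
```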
We use a similar network architecture to the previous section (ResNet34 encoder with a U-Net inspired decoder), though we use a loss function that utilizes cross entropy (CE) rather than focal loss ($\alpha_c = 0.75$):
\begin{equation}
\label{eqn:c2}
\mathcal{L} = \alpha_c\mathcal{L}_{\text{CE}} + (1-\alpha_c)\mathcal{L}_{\text{dice}}
\end{equation}
\subsection{Graph Extraction Procedure}
The output of the segmentation mask step detailed above is subsequently refined into road vectors.
We begin by smoothing the output mask with a Gaussian kernel of 2 meters. %
This mask is then refined using opening and closing techniques with a similar kernel size of 2 meters,
as well as removing small object artifacts or holes with an area less than 30 square meters.
From this refined mask
we create a skeleton (e.g.~scikit-image skeletonize \cite{skimage}).
This skeleton is rendered into a graph structure with a version of
the {\it sknw} package \cite{sknw} modified to work on very large images.
This process is detailed in Figure \ref{fig:baseline}.
The graph created by this process contains length information for each edge, but no other metadata.
To close small gaps and remove spurious connections not already corrected by the mask refinement procedures,
we remove disconnected subgraphs with an integrated path length of less than
a certain length (6 meters for small image chips, and 80 meters for city-scale images).
We also follow \cite{albu} and remove terminal vertices that lie on an edge less than 3 meters in length,
and connect terminal vertices if the distance to the nearest non-connected node is less than 6 meters.
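The subgraph- and spur-removal steps can be sketched with NetworkX; this is a simplified version using the small-chip thresholds above (6 meter subgraphs, 3 meter spurs), and it omits the final terminal-vertex connection step:

```python
import networkx as nx

def clean_graph(G, min_sub_len=6.0, min_spur_len=3.0):
    """Drop small disconnected subgraphs and short terminal spurs.
    Edges are assumed to carry a 'length' attribute in meters."""
    G = G.copy()
    # remove disconnected subgraphs with small integrated path length
    for comp in list(nx.connected_components(G)):
        sub = G.subgraph(comp)
        total = sum(d["length"] for _, _, d in sub.edges(data=True))
        if total < min_sub_len:
            G.remove_nodes_from(comp)
    # remove terminal vertices that lie on a very short edge (spurs)
    for n in [n for n in G.nodes if G.degree(n) == 1]:
        if n not in G:
            continue  # may already be gone if its neighbor was removed
        _, _, d = list(G.edges(n, data=True))[0]
        if d["length"] < min_spur_len:
            G.remove_node(n)
    return G
```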
\begin{figure}[]
\vspace{-5pt}
\centering
\includegraphics[width=0.95\linewidth]{baseline.jpg}
\caption{\textbf{Graph extraction procedure.} Left: raw mask output.
Left center: refined mask. Right center: mask skeleton.
Right: graph structure.
}
\label{fig:baseline}
\vspace{-8pt}
\end{figure}
\subsection{Speed Estimation Procedure}\label{sec:speed_ex}
We estimate travel time for a given road edge by leveraging the speed information encapsulated in the prediction mask.
The majority of edges in the graph are composed of multiple segments; accordingly, we attempt to estimate the speed of each segment in order to determine the mean speed of the edge.
This is accomplished by analyzing the prediction mask at segment midpoints: for each segment in the edge, we extract a small $8\times8$ pixel patch from the prediction mask at the location of the segment midpoint. The speed of the patch is estimated by filtering out low probability values (likely background) and averaging the remaining pixels (see Figure \ref{fig:speed_comp}).
In the multi-class case, if the majority of the high confidence pixels in the prediction mask patch belong to channel 3 (corresponding to 31-40 mph), we assign the speed at that patch to be 35 mph.
For the continuous case, the inferred speed is directly proportional to the mean pixel value.
\begin{figure}[t]
\vspace{-1pt}
\centering
\includegraphics[width=0.98\linewidth]{speed_fig1.jpg}
\caption{\textbf{Speed estimation procedure.}
Left: Sample multi-class prediction mask; the speed ($r$) of an individual patch (red square) can be inferred by measuring the signal from each channel.
Right: Computed road graph; travel time ($\Delta t$) is given by speed ($r$) and segment length ($\Delta l$).}
\label{fig:speed_comp}
\vspace{-8pt}
\end{figure}
The travel time for each edge is, in theory, the line integral of inverse speed along the edge. But given that each roadway edge is presumed to have a constant speed limit, we refrain from computing this integral along the graph edge. Instead, we estimate the speed limit of the entire edge as the mean of the speeds at each segment midpoint. Travel time is then calculated as edge length divided by mean speed.
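The patch-based speed estimate and the edge travel time computation can be sketched as follows; this is a simplified version of the procedure, and the confidence threshold is an illustrative assumption:

```python
import numpy as np

def patch_speed(patch, conf_thresh=0.3):
    """Infer a speed (mph) from a patch of the multi-class mask.
    patch: array of shape (n_channels, 8, 8) of class probabilities."""
    conf = patch.max(axis=0)
    keep = conf > conf_thresh            # drop likely-background pixels
    if not keep.any():
        return None
    votes = patch.argmax(axis=0)[keep]   # winning channel per pixel
    channel = np.bincount(votes).argmax()
    return 10 * channel + 5              # bin midpoint, e.g. ch 3 -> 35 mph

def edge_travel_time(length_miles, midpoint_speeds_mph):
    """Travel time (hours) = edge length / mean segment speed."""
    return length_miles / np.mean(midpoint_speeds_mph)
```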
\subsection{Scaling to Large Images}\label{sec:CRESIv2}
The process detailed above works well for small input images, yet fails for large images due to saturation of GPU memory. For example, even for a relatively simple architecture such as U-Net \cite{unet}, typical GPU hardware (an NVIDIA Titan X GPU with 12 GB of memory) will saturate for images greater than $\sim2000 \times 2000$ pixels in extent at reasonable batch sizes ($> 4$).
In this section we describe a straightforward methodology for scaling up the algorithm to larger images.
We call this approach City-Scale Road Extraction from Satellite Imagery v2 (CRESIv2).
The essence of this approach is to combine the approach of Sections \ref{sec:seg_mc} - \ref{sec:speed_ex} with the
Broad Area Satellite Imagery Semantic Segmentation (BASISS) \cite{basiss} methodology. BASISS returns a road pixel mask for an arbitrarily large test image (see Figure \ref{fig:SIMRDWN_training}), which we then leverage into an arbitrarily large graph.
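The tiling and stitching logic can be sketched as follows; this is a simplified, single-model version operating on a single-channel mask, with overlapping window predictions averaged (window size and stride are illustrative):

```python
import numpy as np

def stitch_predict(img, predict, win=1300, stride=1000):
    """Run `predict` (window -> mask of identical HxW) over sliding
    windows of a large image and average predictions where they overlap."""
    h, w = img.shape[:2]
    acc = np.zeros((h, w), dtype=np.float64)
    cnt = np.zeros((h, w), dtype=np.float64)
    ys = list(range(0, max(h - win, 0) + 1, stride))
    xs = list(range(0, max(w - win, 0) + 1, stride))
    # make sure the bottom and right edges are covered
    if ys[-1] + win < h:
        ys.append(h - win)
    if xs[-1] + win < w:
        xs.append(w - win)
    for y in ys:
        for x in xs:
            window = img[y:y + win, x:x + win]
            acc[y:y + win, x:x + win] += predict(window)
            cnt[y:y + win, x:x + win] += 1
    return acc / np.maximum(cnt, 1)  # normalized road mask
```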
\begin{figure}[]
\centering
\includegraphics[width=0.98\linewidth]{basiss_test.jpg}
\caption{\textbf{Large image segmentation.}
BASISS process of segmenting an arbitrarily large satellite image \cite{basiss}.}
\label{fig:SIMRDWN_training}
\vspace{-6pt}
\end{figure}
The final algorithm is given by Table \ref{tab:algo2}.
The output of the CRESIv2 algorithm is a {NetworkX} \cite{networkx} graph structure, with full access to the many algorithms included in this package.
\begin{table}[]
\caption{CRESIv2 Inference Algorithm}
\vspace{-3pt}
\label{tab:algo2}
\small
\centering
\begin{tabular}{ll}
\toprule
Step & Description \\
\toprule
1 & Split large test image into smaller windows \\
2 & Apply multi-class segmentation model to each window \\
$\,\,2_{\rm b}$ & *\,\,Apply remaining (3) cross-validation models \\
$\,\,2_{\rm c}$ & *\,\,For each window, merge the 4 predictions \\
3 & Stitch together the total normalized road mask \\
4 & Clean road mask with opening, closing, smoothing \\
5 & Skeletonize flattened road mask \\
6 & Extract graph from skeleton \\
7 & Remove spurious edges and close small gaps in graph \\
8 & Estimate local speed limit at midpoint of each segment \\
9 & Assign travel time to each edge from aggregate speed \\
\bottomrule
& * Optional
\end{tabular}
\vspace{-5pt}
\end{table}
\section{Datasets}\label{sec:data}
Many existing publicly available labeled overhead or satellite imagery datasets tend to be relatively small, or labeled with lower fidelity than desired for foundational mapping.
For example, the International Society for Photogrammetry and Remote Sensing (ISPRS) semantic labeling benchmark \cite{isprs_sem}
dataset contains high quality 2D semantic labels over two cities in Germany;
imagery is obtained via an aerial platform and is 3 or 4 channel and 5-10 cm in resolution, though it covers only 4.8 km$^2$. The TorontoCity Dataset \cite{torontocity} contains high resolution 5-10 cm aerial 4-channel imagery, and $\sim700$ km$^2$ of coverage; buildings and roads are labeled at high fidelity (among other items), but the data has yet to be publicly released. The Massachusetts Roads Dataset \cite{MnihThesis} contains 3-channel imagery at 1 meter resolution, and $2600$ km$^2$ of coverage; the imagery and labels are publicly available, though labels are scraped from OpenStreetMap and not independently collected or validated.
The large dataset size, higher 0.3 m resolution, and hand-labeled and quality controlled labels of SpaceNet \cite{spacenet} provide
an opportunity for algorithm improvement. In addition to road centerlines, the SpaceNet dataset contains metadata tags for each roadway including: number of lanes, road type (e.g.~ motorway, residential, etc), road surface type (paved, unpaved), and bridgeway (true/false).
\subsection{SpaceNet Data}
Our primary dataset accordingly consists of the SpaceNet 3 WorldView-3 DigitalGlobe satellite imagery (30 cm/pixel) and attendant road centerline labels. Imagery covers 3000 square kilometers, and over 8000 km of roads are labeled \cite{spacenet}.
Training images and labels are tiled into $1300 \times 1300$ pixel
($\approx160,000 \, \rm{m^2}$)
chips (see Figure \ref{fig:sn_data1}).
\begin{figure}[]
\vspace{-5pt}
\centering
\includegraphics[width=0.95\linewidth]{sn_data1.jpg}
\caption{\textbf{SpaceNet training chip.} Left: SpaceNet GeoJSON road label. Right: $400 \times 400$ meter image overlaid with road centerline labels (orange).}
\label{fig:sn_data1}
\vspace{-10pt}
\end{figure}
To test the city-scale nature of our algorithm, we extract large test images from all four of the SpaceNet cities with road labels: Las Vegas, Khartoum, Paris, and Shanghai.
As the labeled SpaceNet test regions are non-contiguous and irregularly shaped,
we define rectangular subregions of the images where labels do exist within the entirety of the region.
These test regions total 608 km$^2$, with a total road length of 9065 km. See Appendix B for further details.
\subsection{Google / OSM Dataset}
We also evaluate performance with the satellite imagery corpus used by \cite{roadtracer}. This dataset consists of
Google satellite imagery at 60 cm/pixel over 40 cities, 25 for training and 15 for testing. Vector labels are scraped from
OSM, and we use these labels to build training masks according to the procedures described above. Due to the high variability in OSM road
metadata density and quality, we refrain from inferring road speed from this dataset, and instead leave this for future work.
\section{Evaluation Metrics}\label{sec:metrics}
Historically, pixel-based metrics (such as IOU or F1 score) have been used to assess the quality of road proposals, though such metrics are suboptimal for a number of reasons (see \cite{spacenet} for further discussion). Accordingly, we use the graph-theoretic Average Path Length Similarity (APLS) and map topology (TOPO) \cite{topo_metric} metrics designed to measure the similarity between ground truth and proposal graphs.
\subsection{APLS Metric}
To measure the difference between ground truth and proposal graphs, the
APLS \cite{spacenet}
metric sums the differences in optimal path lengths between nodes in the ground truth graph $G$ and the proposal graph $G'$,
with missing paths in the graph assigned a score of 0.
The APLS metric scales from 0 (poor) to 1 (perfect).
Missing nodes of high centrality will be penalized much more heavily by the APLS metric than missing nodes of low centrality.
The definition of shortest path can be user defined; the natural first step is to consider geographic distance as the measure of path length (APLS$_{\rm{length}}$), but any edge weights can be selected. Therefore, if we assign a travel time estimate to each graph edge we can use the APLS$_{\rm{time}}$ metric to measure differences in travel times between ground truth and proposal graphs.
For large area testing, evaluation takes place with the APLS metric adapted for large images: no midpoints along edges and a maximum of 500 random control nodes.
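A simplified APLS computation can be sketched with NetworkX; the real metric symmetrizes over both graphs, injects midpoints, and snaps control nodes between graphs, whereas this sketch assumes shared node IDs and scores one direction only:

```python
import networkx as nx

def apls(G_gt, G_prop, weight="length"):
    """Simplified APLS: mean over node pairs of
    1 - min(1, |L - L'| / L), with missing nodes/paths scored 0.
    Set weight='travel_time' (if present on edges) for APLS_time."""
    scores = []
    nodes = list(G_gt.nodes)
    for i, s in enumerate(nodes):
        for t in nodes[i + 1:]:
            try:
                L = nx.shortest_path_length(G_gt, s, t, weight=weight)
            except nx.NetworkXNoPath:
                continue                  # no ground-truth path: skip pair
            if s not in G_prop or t not in G_prop:
                scores.append(0.0)        # missing node scores 0
                continue
            try:
                Lp = nx.shortest_path_length(G_prop, s, t, weight=weight)
                scores.append(1.0 - min(1.0, abs(L - Lp) / L))
            except nx.NetworkXNoPath:
                scores.append(0.0)        # missing path scores 0
    return sum(scores) / len(scores) if scores else 0.0
```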
\subsection{TOPO Metric}
The TOPO metric \cite{topo_metric} is an alternative metric for computing road graph similarity. TOPO compares the nodes that can be reached within a small local vicinity of a number of seed nodes, categorizing proposal nodes as true positives, false positives, or false negatives depending on whether they fall within a buffer region (referred to as the ``hole size''). By design, this metric evaluates local subgraphs in a small subregion ($\sim 300$ meters in extent), and relies upon physical geometry. Connections between greatly disparate points ($>300$ meters apart) are not measured, and the reliance upon physical geometry means that travel time estimates cannot be compared.
\section{Experiments}\label{sec:experiments}
We train CRESIv2 models on both the SpaceNet and Google/OSM datasets. For the SpaceNet models, we use the 2780 images/labels in the SpaceNet 3 training dataset. The Google/OSM models are trained with the 25 training cities in \cite{roadtracer}. All segmentation models use a road centerline halfwidth of 2 meters, and withhold 25\% of the training data for validation purposes. Training occurs for 30 epochs. Optionally, one can create an ensemble of 4 folds (i.e. the 4 possible unique combinations of 75\% train and 25\% validate) to train 4 different models. This approach may increase model robustness, at the cost of increased compute time.
As inference speed is a priority, all results shown below use a single model, rather than the ensemble approach.
For the Google / OSM data, we train a segmentation model as in Section \ref{sec:seg_mc}, though with only a single class since we forego speed estimates with this dataset.
\subsection{SpaceNet Test Corpus Results}
We compute both APLS and TOPO performance for the $400 \times 400$ meter image chips in the SpaceNet test corpus, utilizing an APLS buffer and TOPO hole size of 4 meters (implying proposal road centerlines must be within 4 meters of ground truth), see Table \ref{tab:f1_snchips}. An example result is shown in Figure \ref{fig:sn3_chips_comparo}.
Reported errors ($\pm 1 \sigma$) reflect the relatively high variance of performance among the various test scenes in the four SpaceNet cities.
Table \ref{tab:f1_snchips} indicates that the continuous mask model struggles to accurately reproduce road speeds, due in part to the model's propensity to predict high pixel values for high confidence regions, thereby skewing speed estimates. In the remainder of the paper, we only consider the multi-class model.
Table \ref{tab:f1_snchips} also demonstrates that for the multi-class model the APLS score is still 0.58 when using travel time as the weight, which is only $13\%$ lower than when weighting with geometric distance.
\begin{table}[h]
\caption{Performance on SpaceNet Test Chips}
\vspace{-3pt}
\label{tab:f1_snchips}
\small
\centering
\begin{tabular}{llll}
\toprule
Model & TOPO & APLS$_{\rm{length}}$ & APLS$_{\rm{time}}$ \\
\toprule
Multi-Class & $0.53\pm0.23$ & $0.68\pm0.21$ & $0.58\pm0.21$ \\
Continuous & $0.52\pm0.25$ & $0.68\pm0.22$ & $0.39\pm0.18$ \\
\bottomrule
\end{tabular}
\vspace{-2pt}
\end{table}
\begin{figure}[]
\vspace{-5pt}
\begin{center}
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{cc}
\vspace{-5pt}
\subfloat [\textbf{Ground truth mask}] {\includegraphics[width=0.48\linewidth]{vegas_715_gt.jpg}} &
\subfloat [\textbf{Predicted mask}] {\includegraphics[width=0.48\linewidth]{vegas_715_p.jpg}} \\
\subfloat [\textbf{Ground truth network}] {\includegraphics[width=0.48\linewidth]{vegas_715_gt_map.jpg}} &
\subfloat [\textbf{Predicted network}] {\includegraphics[width=0.48\linewidth]{vegas_715_p_map.jpg}} \\
\end{tabular}
\caption{\textbf{Algorithm performance on SpaceNet.} (a) Ground truth and (b) predicted multi-class masks: red = 21-30 mph, green = 31-40 mph, blue = 41-50 mph, yellow = 51-60 mph.
(c) Ground truth and (d) predicted graphs overlaid on the SpaceNet test chip; edge widths are proportional to speed limit. The scores for this proposal are APLS$_{\rm{length}}=0.80$ and APLS$_{\rm{time}}=0.64$.
}
\label{fig:sn3_chips_comparo}
\end{center}
\vspace{-8pt}
\end{figure}
\subsection{Comparison of SpaceNet to OSM}\label{sec:osm}
As a means of comparison between OSM and SpaceNet labels, we use our algorithm to train two models on SpaceNet imagery. One model uses ground truth segmentation masks rendered from OSM labels, while the other model uses ground truth masks rendered from SpaceNet labels. Table \ref{tab:osm_vs_sn} displays APLS scores computed over
a subset of the SpaceNet test chips,
and demonstrates that the model trained and tested on SpaceNet labels is far superior to the other combinations, with a $\approx 60 - 100\%$ improvement in APLS$_{\rm{length}}$ score. This is likely due in part to the more uniform labeling schema and validation procedures adopted by the SpaceNet labeling team, as well as the superior registration of labels to imagery in SpaceNet data. The poor performance of the SpaceNet-trained OSM-tested model is likely due to a combination of: different labeling density between the two datasets,
and differing projections of labels onto imagery for SpaceNet and OSM data. Figure \ref{fig:sn_osm_ex} and Appendix C illustrate the difference between predictions returned by the OSM and SpaceNet models.
\begin{table}[h]
\caption{OSM and SpaceNet Performance}
\vspace{-3pt}
\label{tab:osm_vs_sn}
\small
\centering
\begin{tabular}{llllll}
\toprule
Training Labels & Test Labels & APLS$_{\rm{length}}$ \\
\toprule
OSM & OSM & 0.47 \\
OSM & SpaceNet & 0.46 \\
SpaceNet & OSM & 0.39 \\
SpaceNet & SpaceNet & 0.77 \\
\bottomrule
\end{tabular}
\vspace{-5pt}
\end{table}
\begin{figure}[]
\vspace{-5pt}
\centering
\includegraphics[width=0.9\linewidth]{osm_sn_c_1057.jpg}
\caption{\textbf{SpaceNet compared to OSM.} Road predictions (yellow) and ground truth SpaceNet labels (blue) for a Las Vegas image chip.
SpaceNet model predictions (left) score APLS$_{\rm{length}} = 0.94$,
while OSM model predictions (right) struggle in this scene with significant offset and missing roads, yielding APLS$_{\rm{length}} = 0.29$.}
\label{fig:sn_osm_ex}
\vspace{-8pt}
\end{figure}
\subsection{Ablation Study}\label{sec:ablation}
In order to assess the relative importance of various improvements to our baseline algorithm, we perform ablation studies on the final algorithm. For evaluation purposes we utilize the same subset of test chips as in Section \ref{sec:osm},
and the APLS$_{\rm{length}}$ metric.
Table \ref{tab:ablation} demonstrates that advanced post-processing significantly improves scores. Using a more complex architecture also improves the final prediction. Applying four folds improves scores very slightly, though at the cost of significantly increased algorithm runtime. Given the minimal improvement afforded by the ensemble step, all reported results use only a single model.
\begin{table}[]
\caption{Road Network Ablation Study}
\vspace{-3pt}
\label{tab:ablation}
\small
\centering
\begin{tabular}{llr}
\toprule
& Description & APLS \\
\toprule
1 & Extract graph directly from simple U-Net model & 0.56 \\
2 & Apply opening, closing, smoothing processes & 0.66 \\
3 & Close larger gaps using edge direction and length & 0.72 \\
4 & Use ResNet34 + U-Net architecture & 0.77 \\
5 & Use 4 fold ensemble & 0.78 \\
\bottomrule
\end{tabular}
\vspace{-8pt}
\end{table}
\subsection{Large Area SpaceNet Results}\label{sec:CRESI_results}
We apply the CRESIv2 algorithm described in Table \ref{tab:algo2} to the
large area SpaceNet test set covering 608 km$^2$.
Evaluation takes place with the APLS metric adapted for large images (no midpoints along edges and a maximum of 500 random control nodes), along with the TOPO metric, using a buffer size (for APLS) or hole size (for TOPO) of 4 meters.
We report scores in Table \ref{tab:test_perf} as the mean and standard deviation over the test regions in each city.
Table \ref{tab:test_perf} reveals an overall $\approx4\%$ decrease in APLS score when using speed versus length as edge weights.
This is somewhat less than the 13\% decrease noted in Table \ref{tab:f1_snchips}, due primarily to reduced edge effects in the larger test regions.
Table \ref{tab:test_perf} indicates a large variance in scores across cities; locales like Las Vegas, with wide paved roads framed by sidewalks, are much easier than Khartoum, which has a multitude of narrow dirt roads and little color differentiation between roadway and background.
Figure \ref{fig:res_vegas0_khartoum2} and Appendix D
display the graph output for various urban environments.
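At its core, APLS compares shortest-path lengths between corresponding control nodes in the ground-truth and proposal graphs. The sketch below shows only that core computation on toy graphs; it omits the node snapping and the symmetric ground-truth/proposal passes of the full metric:

```python
# Simplified APLS-style scoring: penalize the relative difference in
# shortest-path length between ground truth (gt) and proposal (prop) graphs.
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src; graph is {node: {nbr: weight}}."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def apls_like(gt, prop, pairs):
    """1 - mean(min(1, |L_gt - L_prop| / L_gt)) over control-node pairs."""
    diffs = []
    for s, t in pairs:
        l_gt = dijkstra(gt, s).get(t, float("inf"))
        l_prop = dijkstra(prop, s).get(t, float("inf"))
        if l_prop == float("inf"):       # route missing entirely: max penalty
            diffs.append(1.0)
        else:
            diffs.append(min(1.0, abs(l_gt - l_prop) / l_gt))
    return 1.0 - sum(diffs) / len(diffs)
```

Dropping a single short-cut edge lengthens a route and directly costs score for every affected node pair, which is why missing small connections is punished heavily.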
\begin{table}[h]
\caption{SpaceNet Large Area Performance}
\vspace{-3pt}
\label{tab:test_perf}
\small
\centering
\begin{tabular}{llll}
\toprule
Test Region & TOPO & APLS$_{\rm{length}}$ & APLS$_{\rm{time}}$ \\
\midrule
Khartoum & $0.53 \pm 0.09$ & $ 0.64 \pm 0.10$ & $0.61 \pm 0.05$ \\
Las Vegas & $0.63 \pm 0.02$ & $ 0.81 \pm 0.04$ & $0.79 \pm 0.02$ \\
Paris & $0.43 \pm 0.01$ & $ 0.66 \pm 0.04$ & $0.65 \pm 0.02$ \\
Shanghai & $0.45 \pm 0.03$ & $ 0.55 \pm 0.13$ & $0.51 \pm 0.11$ \\
\midrule
Total & $0.51 \pm 0.02$ & $ 0.67 \pm 0.04$ & $0.64 \pm 0.03$ \\
\bottomrule
\end{tabular}
\vspace{-1pt}
\end{table}
\begin{comment}
\begin{figure}[]
\vspace{-6pt}
\begin{center}
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{c}
\vspace{-6pt}
\subfloat {\includegraphics[width=0.97\linewidth]{khartoum_2_edges.jpg}} \\
\vspace{-5pt}
\subfloat {\includegraphics[width=0.97\linewidth]{paris_0_edges.jpg}} \\
\end{tabular}
\caption{\textbf{CRESIv2 outputs.} Top: Road predictions (yellow) overlaid on a portion of the Khartoum$\_2$ test region with a high percentage of dirt roads. Bottom: Road predictions overlaid on a portion of the Paris$\_0$ test region in atypical (dark) lighting conditions and $19^{\circ}$ off-nadir.}
\label{fig:tricky}
\end{center}
\vspace{-5pt}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure
\begin{center}
\includegraphics[width=0.97\linewidth]{shanghai_0_shp.jpg}
\end{center}
\caption{Output of CRESIv2 inference as applied to the Shanghai$\_0$ test region.
}
\label{fig:res_r0}
\end{figure}
\end{comment}
\begin{figure}[]
\vspace{-5pt}
\begin{center}
\centering
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{cc}
\vspace{-5pt}
\subfloat {\includegraphics[width=0.96\linewidth]{vegas02.jpg}}\\
\subfloat {\includegraphics[width=0.96\linewidth]{khartoum02_clip.jpg}}\\
\end{tabular}
\vspace{-3pt}
\caption{\textbf{CRESIv2 road speed.} Output of CRESIv2 inference as applied to the SpaceNet large area test dataset.
Predicted roads are colored by inferred speed limit, from yellow (20 mph) to red (65 mph). Ground truth labels are shown in gray.
{Top:} Las Vegas: APLS$_{\rm{length}} = 0.85$ and APLS$_{\rm{time}} = 0.82$.
{Bottom:} A smaller region of Khartoum: APLS$_{\rm{length}} = 0.71$ and APLS$_{\rm{time}} = 0.67$.
}
\label{fig:res_vegas0_khartoum2}
\end{center}
\vspace{-15pt}
\end{figure}
\begin{comment}
\begin{figure
\vspace{-5pt}
\centering
\includegraphics[width=0.97\linewidth]{vegas02.jpg}
\caption{\textbf{CRESIv2 road speed.} Output of CRESIv2 inference as applied to a portion of the SpaceNet Las Vegas large area test region. The APLS$_{\rm{length}}$ score for this prediction is 0.85, and the APLS$_{\rm{time}}$ score is 0.82. Predicted roads are colored by inferred speed limit, from yellow (20 mph) to red (65 mph). Ground truth labels are shown in gray.}
\label{fig:res_vegas0}
\vspace{-5pt}
\end{figure}
\begin{figure
\vspace{-5pt}
\centering
\includegraphics[width=0.97\linewidth]{khartoum02.jpg}
\caption{\textbf{CRESIv2 road speed.} Output of CRESIv2 inference as applied to a more difficult portion of the SpaceNet large area test dataset, this time in Khartoum. The APLS$_{\rm{length}}$ score for this prediction is 0.71, and the APLS$_{\rm{time}}$ score is 0.67. Predicted roads are colored by inferred speed limit, from yellow (20 mph) to red (65 mph). Ground truth labels are shown in gray.}
\label{fig:res_khartoum2}
\vspace{-5pt}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure
\begin{center}
\includegraphics[width=0.97\linewidth]{AOI_5_Khartoum_MUL-PanSharpen_Cloud_RGB_ox_partial.jpg}
\end{center}
\caption{Output of CRESIv2 inference for a large image strip over Khartoum. The area of this region is 274 km$^2$, with 4000 km of roads, which far exceeds the available labels for Khartoum.}
\label{fig:res_khartoum0}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure
\begin{center}
\includegraphics[width=0.95\linewidth]{paris_1_ox_route_r00.jpg}
\end{center}
\caption{Output of CRESIv2 inference overlaid on a subset of the Paris$\_$0 test region. Prediction and edges nodes are in blue, and in red we also overlay the optimal route between two nodes of interest.
}
\label{fig:res_r1}
\end{figure}
\end{comment}
\subsection{Google / OSM Results}
Applying our methodology to 60 cm Google imagery with OSM labels achieves state-of-the-art results.
For the same 4 m APLS buffer
used above, we achieve a score of
APLS$_{\rm{length}} = 0.53 \pm 0.11$.
This score is consistent with the results of Table \ref{tab:osm_vs_sn}, and compares favorably to previous methods (see Table \ref{tab:comparo} and Figure \ref{fig:nyc}).
\begin{comment}
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{boston_ox_plot.jpg}
\caption{\textbf{Inference on 60 cm imagery.} Prediction for the Boston test region of the Google / OSM dataset.
The APLS$_{\rm{length}}$ score for this region is 0.53. }
\label{fig:boston}
\vspace{-6pt}
\end{figure}
\end{comment}
\subsection{Comparison to Previous Work}
Table \ref{tab:comparo} demonstrates that CRESIv2 improves upon existing methods for road extraction,
both on the $400 \times 400$ m SpaceNet image chips at 30 cm resolution, as well as 60 cm Google satellite imagery with OSM labels.
To allow a direct comparison in Table \ref{tab:comparo}, we report TOPO scores with the 15 m hole size used in \cite{roadtracer}.
A qualitative comparison is shown in Figure \ref{fig:nyc} and Appendices E and F,
illustrating that our method is more complete and misses fewer small roadways and intersections than previous methods.
\begin{table}[h]
\caption{Performance Comparison}
\vspace{-3pt}
\label{tab:comparo}
\small
\centering
\begin{tabular}{lcc}
\toprule
Algorithm & Google / OSM & SpaceNet \\
& (TOPO) & (APLS$_{\rm{length}}$) \\
\midrule
DeepRoadMapper \cite{deeproadmapper} & 0.37 & 0.51\footnotemark[1] \\
RoadTracer \cite{roadtracer} & 0.43 & 0.58\footnotemark[1] \\
OrientationLearning \cite{Batra_2019_CVPR} & - & 0.64 \\
CRESIv2 (Ours) & \bf{0.53} & \bf{0.67} \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\item[1] from Table 4 of \cite{Batra_2019_CVPR}
\end{tablenotes}
\end{table}
\captionsetup[subfigure]{labelformat=empty}
\begin{figure}[]
\vspace{-5pt}
\begin{center}
\setlength{\tabcolsep}{0.3em}
\begin{tabular}{cc}
\vspace{-5pt}
\captionsetup[subfigure]{labelformat=empty}
\subfloat {\includegraphics[width=0.49\linewidth]{new_york_roadtracer_highres.jpg}} &
\subfloat {\includegraphics[width=0.49\linewidth]{new_york_ox_plot_clip.jpg}} \\
\subfloat [\textbf{RoadTracer}] {\includegraphics[width=0.49\linewidth]{pittsburgh_roadtracer_highres.jpg}} &
\subfloat [\textbf{CRESIv2}] {\includegraphics[width=0.49\linewidth]{pittsburgh_ox_plot.jpg}} \\
\end{tabular}
\caption{\textbf{New York City (top) and Pittsburgh (bottom) Performance.} (Left) RoadTracer prediction \cite{roadtracer_ims}. (Right) Our CRESIv2 prediction over the same area.
}
\label{fig:nyc}
\end{center}
\vspace{-15pt}
\end{figure}
\section{Discussion}\label{sec:discussion}
CRESIv2 improves upon previous methods in extracting road topology from satellite imagery.
The reasons for our 5\% improvement over the Orientation Learning method applied to SpaceNet data
are difficult to pinpoint exactly, but our custom dice + focal loss function (vs the SoftIOU loss of \cite{Batra_2019_CVPR}) is a key difference. The enhanced ability of CRESIv2 to disentangle areas of dense road networks accounts for most of the 23\% improvement over the RoadTracer method applied to Google satellite imagery + OSM labels.
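A minimal sketch of such a combined loss is below; the component weights and focusing parameter are illustrative defaults, not necessarily the values used in training:

```python
# Sketch of a dice + focal segmentation loss on probability maps.
# (alpha, beta, gamma below are illustrative, not the trained values.)
import numpy as np

def dice_loss(p, g, eps=1e-7):
    """Soft Dice loss: 1 - 2|P.G| / (|P| + |G|)."""
    inter = (p * g).sum()
    return 1.0 - (2.0 * inter + eps) / (p.sum() + g.sum() + eps)

def focal_loss(p, g, gamma=2.0, eps=1e-7):
    """Binary focal loss; gamma down-weights easy, well-classified pixels."""
    p = np.clip(p, eps, 1.0 - eps)
    pt = np.where(g == 1, p, 1.0 - p)    # probability assigned to true class
    return float((-((1.0 - pt) ** gamma) * np.log(pt)).mean())

def combined_loss(p, g, alpha=1.0, beta=1.0):
    return alpha * dice_loss(p, g) + beta * focal_loss(p, g)
```

The Dice term directly targets overlap of the thin road class, while the focal term keeps the abundant easy background pixels from dominating the gradient.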
We also introduce the ability to extract route speeds and travel times. Routing based on time shows only a $3-13\%$ decrease from distance-based routing,
indicating that true optimized routing is possible with this approach. The aggregate score of APLS$_{\rm{time}} = 0.64$ implies that travel time estimates will be within $\approx \frac{1}{3}$ of the ground truth.
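A toy illustration of why the distinction matters (values hypothetical, not drawn from the test set): converting inferred speed limits into travel-time edge weights can flip which of two routes is optimal.

```python
# Edge weights for routing: distance (km) vs. travel time (hours).
# Toy numbers -- a short dirt road vs. a longer highway detour.
MPH_TO_KMH = 1.609344

def travel_time_h(length_km, speed_mph):
    """Travel time in hours for an edge of given length and speed limit."""
    return length_km / (speed_mph * MPH_TO_KMH)

short_slow = travel_time_h(1.0, 20)   # 1.0 km at 20 mph (dirt road)
long_fast = travel_time_h(1.5, 65)    # 1.5 km at 65 mph (highway)
# Distance-optimal route: the 1.0 km road; time-optimal route: the highway.
```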
As with most approaches to road vector extraction, complex intersections are a challenge with CRESIv2.
While we attempt to connect gaps based on road heading and proximity, overpasses and onramps remain difficult (Figure \ref{fig:res_shang0}).
\begin{figure
\vspace{-3pt}
\centering
\includegraphics[width=0.77\linewidth]{shanghai_intersect_clip2.jpg}
\caption{\textbf{CRESIv2 challenges}.
While the pixel-based score of this Shanghai prediction is high, correctly connecting roadways in complex intersections remains elusive.
}
\label{fig:res_shang0}
\vspace{-8pt}
\end{figure}
\begin{comment}
The total TOPO score of 0.56
compares favorably with the RoadTracer implementation, which reports an F1 score of $\approx 0.43$
for a larger (less restrictive) TOPO hole size of 10 meter (versus our 4 meter hole size),
or a TOPO F1 score of $\approx 0.37$ for the DeepRoadMapper implementation (Figure 8 of \cite{roadtracer}).
The RoadTracer work used OSM labels and 0.6 meter resolution aerial imagery. To perform a more direct comparison to this work, we degrade imagery to 0.6 meter resolution and train a new model; we also adopt a TOPO hole size of 10 meters to compare directly with the RoadTracer TOPO scores. With the model trained (and tested) on 0.6 meter data we observe a decrease of
$12\%$ in the APLS$_{\rm{length}}$ score, to $0.61 \pm 0.20$.
The TOPO score actually rises slightly to $0.58 \pm 0.21$
due to the less stringent hole size. This TOPO score represents a
$35\%$
improvement over the RoadTracer implementation, though we caveat that testing and training is still on different cities for CRESIv2 and RoadTracer, and SpaceNet labels were shown in Section \ref{sec:osm} to provide a significant improvement over OSM labels (which RoadTracer uses).
\end{comment}
CRESIv2 has not been fully optimized for speed, but even so, inference runs at a rate of $280 \,{\rm km}^2 / \, {\rm hour}$ on a machine with a single Titan X GPU.
At this speed, a 4-GPU cluster could map the entire $9100 \, {\rm km}^2$ area of Puerto Rico in $\approx 8$ hours, a significant improvement over the two months required by human labelers \cite{osm_maria}.
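The Puerto Rico figure follows directly from the quoted per-GPU rate:

```python
# Back-of-envelope check of the mapping-time claim, using rates from the text.
area_km2 = 9100            # approximate land area of Puerto Rico
rate_km2_per_hr_gpu = 280  # measured CRESIv2 inference rate per GPU
gpus = 4
hours = area_km2 / (rate_km2_per_hr_gpu * gpus)  # ~ 8.1 hours
```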
\section{Conclusion}\label{sec:conclusion}
Optimized routing is crucial to a number of challenges, from humanitarian to military. Satellite imagery may aid greatly in determining efficient routes, particularly in cases involving natural disasters or other dynamic events where the high revisit rate of satellites may be able to provide updates far more quickly than terrestrial methods.
In this paper we demonstrated methods to extract city-scale road networks directly from remote sensing images of arbitrary size, regardless of GPU memory constraints.
This is accomplished via a multi-step algorithm that segments small image chips, extracts a graph skeleton, refines nodes and edges, stitches chipped predictions together, extracts the underlying road network graph structure, and infers speed limit / travel time properties for each roadway.
Our code is publicly available at \texttt{github.com/CosmiQ/cresi}.
Applied to SpaceNet data, we observe a 5\% improvement over published methods, and when using OSM data our method provides a significant (+23\%) improvement over existing methods.
Over a diverse test
set that includes atypical lighting conditions, off-nadir observation angles, and locales with a multitude of dirt roads,
we achieve a total score of
APLS$_{\rm length}= 0.67$, and nearly equivalent performance when optimizing for travel time: APLS$_{\rm time} = 0.64$.
Inference speed is a brisk $\geq280 \, {\rm km} ^2 \, / \, \rm{hour} / \rm{GPU}$.
While automated road network extraction is by no means a solved problem, the CRESIv2 algorithm
demonstrates that true time-optimized routing is possible,
with potential benefits to applications such as disaster response where rapid map updates are critical to success.
\vspace{5pt}
\begin{footnotesize}
\noindent
{\bf Acknowledgments}
We thank Nick Weir, Jake Shermeyer, and Ryan Lewis for their insight, assistance, and feedback.
\end{footnotesize}
{\small
\bibliographystyle{ieee}
\section{Introduction}
We are interested in the behavior of a moving interface ${\Gamma}$ in a random medium, where ${\Gamma}$ is a graph,
i.e. defined as
\begin{equation}\label{cdl.interface}
\Gamma(t):=\{ (x,y)\in {\mathbb{R}}^{2}: y=u(x,t)\}
\end{equation}
and
the function $u$ evolves according to the following equation:
\begin{align}
&\frac{\partial u}{\partial t}= u_{xx}(x,t) + f(x,u(x,t)) +F \quad \text{ in } \quad {\mathbb{R}}\times {\mathbb{R}}^{+}, \label{cdl.eq.interface}\\
&u(x,0)=0\label{cdl.eq.initial}
\end{align}
where $f \in C^1({\mathbb{R}}^{2}\times {\Omega})$ is a random field representing the random medium
and will be defined more precisely later on. Note that $f$ is not restricted to be either positive or negative.
$F$ is a positive constant called the ``driving field''.
The objective is to prove that the solution of \eqref{cdl.eq.interface}-\eqref{cdl.eq.initial} does not get pinned,
i.e. does not converge to a nonnegative stationary solution if
$F$ is above a critical value $F_c$. To this end, we will show that nonnegative stationary solutions on bounded
intervals $[-N, N]$ with Dirichlet boundary conditions get large with high probability as $N\to \infty.$
The main contribution of this paper is to show that a {\em finite} $F$ is sufficient to keep the graph moving, even if
it will have to pass through regions where $f(x,u,\omega)\ll -1,$ provided the probability of finding such a region is
small. As $f$ can become arbitrarily big, one cannot find a deterministic subsolution that keeps moving,
and instead probabilistic arguments are needed.
The interest in the model stems from
the theoretical analysis of the effective behavior on large scales of
models for interface evolution
specified at a microscopic scale, which is at the heart of many
problems in physics and material science. Of particular interest is the influence of material heterogeneities, which are generally assumed to be random. Mathematically, this leads to studying the limit of evolution equations with rapidly varying random coefficients. In the case of dissipative equations,
on which we focus here, the randomness leads to new and interesting effects absent in the case of periodic coefficients, e.g. pinning and de-pinning for obstacles with a strength that cannot be bounded uniformly. If the strong obstacles
are sufficiently rare, then the interaction through the Laplacian helps the graph overcome them, although the
total forcing $f(x,u)+F$ remains negative near the obstacle.
One example we have in mind as
motivation are driven elastic systems, for a review of the research
in physics and its possible applications we refer to \cite{BN04}.
For a survey of front evolutions in random media, with evolution
laws different from the ones considered here, see e.g. the recent
monograph \cite{Xin}.
The model (\ref{cdl.eq.interface}) is obviously a gradient flow
for a random energy. In fact, it approximates a more geometric interface
evolution law:
Indeed, if the hypersurface $\Sigma$ is the boundary of the set
$A_\Sigma$ then we can define for
any bounded $D\subseteq {\mathbb{R}}^{2}$ the energy
$$
F(\Sigma| D):=H^{1}(\Sigma \cap D)+\int_{D\cap A_\Sigma} f(X,\omega)dX
$$where $X\in {\mathbb{R}}^{2}$ and $H^{1}$ denotes the
$1$-dimensional Hausdorff measure.
Requiring that the first variation of
that functional (with respect to inner variations, i.e. deforming the interface
with the flow of a smooth vector field) is proportional to the
normal velocity of the interface leads
to forced mean curvature flow,
$$
V=\kappa+f(X),
$$where $\kappa$ denotes the mean curvature of the interface (trace
of the second fundamental form) and the scalar $V$ is the velocity of
the interface in the direction of the inner normal. This geometric evolution law leads to nonlinear degenerate parabolic
equations, hence questions concerning the large-scale behaviour of solutions are related to homogenising such
equations with periodic or random coefficients.
This is an active field of research (see e.g. \cite{CSW}, \cite{LS}) but many difficult problems
remain open. Here we consider a modified evolution law:
If we suppose that the interface is a graph and is ``flat''
(no overhangs, small gradients),
then we can consider a semi-linear equation as in
(\ref{cdl.eq.interface})
as a heuristic approximation of the evolution by forced mean curvature flow.
This model, here called random obstacle model (ROM) because of the precise nature of the random nonlinearity
$f(x,u,\omega)$ used in this paper, is a special case of a class of
equations sometimes called the quenched Edwards-Wilkinson model
which, for some choices of the random nonlinearity,
is used in physics as a model for
overdamped interface evolution in
a random environment when
``overhangs'' can be neglected.
For further comments on physical properties and justifications
of the model we refer to \cite{BN04}. In particular, one expects that solutions move with a deterministic effective (large-scale) velocity
for $F$ larger than a critical forcing $F_*.$ For $F$ slightly larger than $F_*,$
the relation between the effective velocity and $F-F_*$ is expected to be a power law.
(See also \cite{DY} for the periodic case.)
While there are important differences between the forced mean curvature flow and the semi-linear model (e.g. forced
mean curvature flow can "wrap around" strong obstacles),
we expect that the techniques we will develop
when studying (\ref{cdl.eq.interface}) will prove helpful
in investigating more general models for interface evolution. This strategy was successful in the periodic case,
where first the semi-linear case was solved (\cite{DY}) and then the results could be extended to graphs
evolving by forced mean curvature flow (\cite{DKY}).
One more reason why such models are of mathematical interest is the relation with ``singular'' homogenization problems,
i.e. problems where the ${\epsilon}$-equation is of second order (possibly degenerate) and the homogenized equation
of first order. Note that the effective
velocity $c(\eta)$ of an interface evolving with average slope
$\eta$ can be found by considering
$$
\frac{\partial u}{\partial t}= u_{xx}(x,t) + f(x,\eta \cdot x+u(x,t)) +F,
$$ i.e. this can be seen as the ``cell problem'' for
$$
\frac{\partial v(y,\tau,\omega)}{\partial\tau}={ {\epsilon}} v_{yy}(y,\tau,\omega)+f({{\epsilon}^{-1}}y,
{{\epsilon}^{-1} }v (y,\tau,\omega),\omega)+F
$$ with $\tau={\epsilon} t,\ y={\epsilon} x.$
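To make the scaling explicit, note (a routine check, written in our own notation): if $u$ solves the cell problem above and $w(x,t):=\eta x+u(x,t),$ then $w_t=w_{xx}+f(x,w)+F.$ Setting
$$
v(y,\tau,\omega):=\epsilon\, w(\epsilon^{-1}y,\epsilon^{-1}\tau,\omega),
$$
one computes $v_\tau=w_t,$ $\epsilon v_{yy}=w_{xx}$ and $f(\epsilon^{-1}y,\epsilon^{-1}v)=f(x,w),$ so that $v$ indeed satisfies the ${\epsilon}$-equation above.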
The paper is organised as follows.
In Section 2 we define the random obstacle model precisely and state our main results.
In Section 3, we introduce an auxiliary model which is more suitable
for explicit estimates and whose solutions can be related
to solutions of the original equation \eqref{cdl.eq.interface}
by the comparison principle for parabolic equation.
This auxiliary problem has the property that any of its stationary solutions $u$ solves $u_{xx}=-F$ away from the obstacles and is convex on the obstacles.
This fact allows us to define a discretisation, using the fact that
each solution is determined by its values when entering and leaving an obstacle.
This yields a discretised path $\bar v^\delta:\ {\mathbb{Z}}\to \delta{\mathbb{Z}}$ characterizing each stationary solution.
In Section 4, we estimate the discrete Laplacian of $\bar v^\delta(i)$ against the obstacles that sit above and below
$i\in {\mathbb{Z}}$ and are approached by the path, i.e. $\Delta_d\bar v^\delta(i)+\bar F\le C\,l_{i,\bar v^\delta(i)}(\omega),$ where $\bar F$ is a constant which can be chosen arbitrarily large.
A technical problem is posed by the fact that the path may pass more
than one obstacle above the same integer.
In Section 5 we estimate the probability of a discrete path being ``compatible'' with the random environment.
This probability can be estimated against
an auxiliary random measure on paths:
\begin{eqnarray*}
{\mathbb{P}}\left(\left\{
\omega:\ u(\omega)\ {\rm compatible \ with\ } \bar v^\delta (i)\right\}\right)
&\le& C^{2N}{{\mathbb{P}}}\left(\{ \Delta_d \bar v^\delta(i) \}_{i=-N+1}^{N-1}\right),
\\
{\mathbb{P}}\left( \{\Delta_d \bar v^\delta(i) \}_{i=-N+1}^{N-1}\right)&:=&Z^{-1}
e^{-\lambda\sum_{i=-N+1}^{N-1} |\Delta_{d} \bar v^\delta (i)+\bar F |},
\end{eqnarray*}
where $Z$ is a normalization (corresponding to the partition function in statistical mechanics).
In Section 6 we conclude that the probability that a nonnegative solution of the Dirichlet problem crosses
$KN-K|x|$ is ${\mathcal O}(e^{-CN}).$
The key observation is that for such a path $N^{-1}\sum_{i=-N+1}^{N-1}\left(\Delta_{d}\bar v^{\delta}(i)+\bar F\right)$ must be large,
which is very unlikely under the auxiliary (product) probability measure.
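The standard Chernoff bound quantifies ``very unlikely'' here: if $(X_i)$ are i.i.d. nonnegative random variables with $M(s):={\mathbb{E}}\,e^{sX_1}<\infty$ for some $s>0$ (as holds for exponential tails), then by Markov's inequality applied to $e^{s\sum_i X_i},$
$$
{\mathbb{P}}\Big(\sum_{i=-N+1}^{N-1} X_i\ge K N\Big)\le e^{-sKN}\,M(s)^{2N-1},
$$
which decays exponentially in $N$ as soon as $K>2s^{-1}\log M(s);$ in our setting $X_i$ plays the role of $|\Delta_d\bar v^\delta(i)+\bar F|$ under the auxiliary product measure.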
Finally, we show by invoking the comparison principle for semi-linear parabolic equations
that these results for large $N$ imply non-existence of global nonnegative
stationary solutions. This implies that for a solution $u$
of (\ref{cdl.eq.interface}), (\ref{cdl.eq.initial}) and
all $x\in {\mathbb{R}}$ it holds that $\lim_{t\to\infty}u(t,x,\omega)=+\infty$ almost
surely in $\omega,$ i.e. the interface cannot be stopped by the obstacles.
{\bf Acknowledgements} The second named author would like to thank Enza Orlandi and Michael Scheutzow
for helpful discussions.
The authors acknowledge gratefully the hospitality of the Max Planck Institute for Mathematics in the Sciences Leipzig.
\section{Results and Definitions}~
\subsection{The random field $f$}
Here, the field $f$ is negative on ``obstacles'' in ${\mathbb{R}}^2$ which are random in strength, but positioned
on a lattice. More precisely, we make the following assumption:
\begin{definition}[Obstacles]\label{Obstacles}\hfill
\begin{enumerate}
\item Let ${\mathbb{Z}}^*:={\mathbb{Z}}+1/2.$ We assume that the obstacles lie on a lattice ${\mathcal{L}}:={\mathbb{Z}}\times{\mathbb{Z}}^*$ where for convenience $(b_{i,j})_{i\in {\mathbb{Z}},\,j\in{\mathbb{Z}}^*}$ denotes the nodes of this lattice, i.e. $b_{i,j}:=(i,j)$.
\item Let $\delta\ll1/2$ and define $Q_\delta(0,0):=[-\delta,\delta]^2,$ and $Q_{\delta}(i,j):=Q_\delta(0,0)+b_{i,j}.$
Then the obstacles, i.e. regions where $f<0$ is possible, are given by the $Q_\delta(i,j)$; see also Figure 1.
\end{enumerate}
\end{definition}
\begin{figure}
\begin{center}
\input{grille.pstex_t}
\caption{The obstacles}
\label{fig1}
\end{center}
\end{figure}
In order to obtain existence and regularity of the solutions, the nonlinearity $f(x,y)$ should be sufficiently
regular, hence in order to define $f$ we have to smooth out the obstacles.
\begin{definition}[Random field]
Let $\phi \in C^{\infty}_c({\mathbb{R}}^2)$ be a nonnegative function
whose support is contained in the cube $Q_\delta(0,0)$.
Let $(l(i,j)({\omega}))_{(i,j)\in {\mathbb{Z}}\times{\mathbb{Z}}^*}$ be a family of independent, identically distributed exponential
random variables, i.e. there exists $\lambda_0>0$ such that for $r\ge 0$
$$ {\mathbb{P}}\{l(i,j)({\omega})>r\}=e^{-\lambda_0 r}.$$
Let $\Sigma$ be the set of the obstacles, i.e.
$\Sigma := \bigcup_{(i,j) \in {\mathbb{Z}}\times {\mathbb{Z}}^*} \Big(Q_\delta(b_{i,j})\Big),$
then the field $f$ is defined the following way:
$$
f(x,s)=g(x,s)-\sum_{(i,j)\in {\mathbb{Z}}\times{\mathbb{Z}}^*} l(i,j)\phi((x,s)-b_{i,j})
$$
where $g$ is a non-negative function chosen so that the field has mean zero in a suitable sense:
\begin{align*}
&g\ge 0 \quad \text{in}\quad {\mathbb{R}}^{2}\\
&\lim_{L\to\infty} (2L)^2\int_{[-L,L]^2}f(x,s)\,dxds =0
\end{align*}
\end{definition}
\begin{remark}
\begin{enumerate}
\item
As ${\mathbb{E}} (l(i,j))=\frac{1}{\lambda_0},$ the law of large numbers implies that a possible choice of
$g$ is $$g(x,s)=\sum_{(i,j)\in {\mathbb{Z}}\times{\mathbb{Z}}^*} \frac{1}{\lambda_0}\phi((x,s)-b_{i,j}) .$$
\item The results on non-existence of nonnegative stationary solutions hold for any
i.i.d. random variables $l(i,j)$ such that there exists $\lambda_0>0$ with
$$
{\mathbb{P}}\{l(i,j)({\omega})>r\}\le e^{-\lambda_0 r}.$$
\item As we are only interested in the combined effect of $f(x,s)$ and the constant forcing $F,$
the mean zero property of the random nonlinearity is just a normalisation.
\item In our analysis, the
shape of the obstacles ($\mathrm{supp}(\phi)$)
plays no role, and the results hold
as well if we consider a random field such as
$$f=g(x,s)-\sum_{(i,j)\in {\mathbb{Z}}\times{\mathbb{Z}}^*} l(i,j)\phi_{i,j}(x,s) $$
where the $\phi_{i,j}$ are uniformly bounded smooth functions
such that $\mathrm{supp}(\phi_{i,j})\subset Q_{\delta}(b_{i,j})$.
\end{enumerate}
\end{remark}
\subsection{Results}~
We consider the stationary version of \eqref{cdl.eq.interface} with Dirichlet boundary conditions:
\begin{align}
& u_{xx} + f(x,u,{\omega}) +F=0 \quad \text{ in } \quad [-N+\delta ,N-\delta] \label{cdl.eq.stat}\\
&u(-N+\delta)=u(N-\delta)=0 \label{cdl.eq.dirichlet}
\end{align}
\begin{theo}\label{mainthm}
Let $u({\omega})$ solve (\ref{cdl.eq.stat}, \ref{cdl.eq.dirichlet}). Then there exist $F_0>0,$ $C$ and $K$ such that
for $F>F_0$ and $N$ sufficiently large
$$
{\mathbb{P}}\left( \{{\omega}|\, u(x,{\omega})\ge (K(N-1)-K|x|)_+\ {\rm on\ }[-N+\delta ,N-\delta] \}\right)\ge 1- C e^{-\frac{N}{C}},
$$where $a_+$ denotes the positive part of a real number $a.$
\end{theo}
\begin{cor}Let $F>F_0,$ with $F_0$ as in Theorem \ref{mainthm}.
{\hfill }
\begin{enumerate}
\item
There is almost surely no global nonnegative stationary solution of
(\ref{cdl.eq.interface}).
\item Let $u$ solve (\ref{cdl.eq.interface}),
(\ref{cdl.eq.initial}). Then
$$\lim_{t\to\infty}u(t,x,\omega)=+\infty\quad {\rm for\ all\ }x\in {\mathbb{R}}$$
holds with probability one.
\end{enumerate}
\end{cor}
\section{Blocked path and auxiliary problem}
In this section we define an auxiliary problem that we will use throughout this paper.
We will denote by $\chi_B$ the characteristic function of the set $B.$
\begin{definition}[Auxiliary field]\label{auxfield}
Let
\begin{align*}
&A:= {\mathbb{R}}^2\setminus \{\bigcup_{i\in {\mathbb{Z}}}(i-\delta,i+\delta)\times{\mathbb{R}} \}\\
&A_{\epsilon}:= {\mathbb{R}}^2\setminus \{\bigcup_{i\in {\mathbb{Z}}}(i-\delta-{\epsilon},i+\delta+{\epsilon})\times{\mathbb{R}} \}
\end{align*}
and define
$$\widetilde f(x,s):=-\sum_{(i,j)\in {\mathbb{Z}}^*\times{\mathbb{Z}}^*} l(i,j)\phi((x,s)-b_{i,j}).$$
\end{definition}
Let us now consider the following auxiliary problem
\begin{align}
&\frac{\partial v}{\partial t}\,=\,v_{xx} + \widetilde f(x,v(t,x)) +F\chi^{\epsilon}_A(x) \label{cdl.eq.approx}\\
&v(0,x) \,=\,0,
\end{align}
where $\chi_A^{{\epsilon}}$ is a smooth function such that $ \chi_{A_{\epsilon}}\le\chi_A^{{\epsilon}}\le \chi_A$, and ${\epsilon}$ is a small parameter which will be fixed later on.
The new random field $\widetilde g:= \widetilde f+F\chi^{\epsilon}_A$ is visualized in Figure 2.
Note that it is differentiable in $x$ and $s.$
\begin{figure}
\begin{center}
\input{grille4.pstex_t}
\caption{Mapping of the obstacles for the auxiliary problem}
\end{center}
\end{figure}
Observe that, as the obstacles are negative, $\widetilde f+F\chi^{\epsilon}_A \le f+F.$ Therefore the comparison principle
for the parabolic equation (see section \ref{existence}) implies that solutions of the auxiliary problem
remain below solutions of the original problem. Hence existence of a nonnegative stationary solution for
the original problem implies existence of one for the auxiliary problem. By contraposition, nonexistence
for the auxiliary problem implies nonexistence for the original problem.
Stationary sub/supersolutions can be constructed as piecewise quadratic functions.
For any $F$ we can construct the graph of such a solution
(also called ``paths'' to emphasize the analogy with a stochastic process).
\begin{definition}[blocked path] \label{blocked} A
graph $(x,v(x))$ is called a blocked path if and only if
$v \in C^1_{loc}({\mathbb{R}}),$ and
\begin{align}
&v_{xx}=-F\chi_A^{\epsilon}(x) \qquad \text{ in } \qquad (i+\delta ,i+1-\delta),\label{cdl.eq.outside.strip}\\
&v_{xx}= \sum_{j\in {\mathbb{Z}}^*} l(i,j)({\omega})\phi_{i,j}(x,v(x)) \qquad \text{ in } \qquad (i-\delta , i+\delta)\label{cdl.eq.inside.strip}.
\end{align}
where $\phi_{i,j}(x,s):=\phi((x,s)-b_{i,j})$.
\end{definition}
Observe that the path for $x\in (i+\delta ,i+1-\delta)$ is uniquely determined by the boundary values
$v(i+\delta)$ and $v(i+1-\delta),$ because it solves a {\em linear} elliptic equation there.
But note that, for a given realisation of the random field, there may be more
than one blocked path, as equations like $u_{xx}=f(x,u)$ do not have unique solutions without further conditions on
the nonlinearity.
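This can be made explicit: on any interval $(a,b)$ where $\chi_A^{\epsilon}\equiv 1,$ equation \eqref{cdl.eq.outside.strip} reads $v_{xx}=-F,$ and its unique solution with prescribed boundary values is the parabola
$$
v(x)=\frac{F}{2}\,(x-a)(b-x)+v(a)\,\frac{b-x}{b-a}+v(b)\,\frac{x-a}{b-a},
$$
so the arc between two consecutive obstacle strips is determined by its two endpoint values.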
\begin{remark}
From Definition \ref{blocked} we see that $v$ is a convex function on $(i-\delta , i+\delta)$; since a convex function lies above its tangent line, it follows that
$$v(i+\delta)\ge v(i-\delta)+2\delta v'(i-\delta). $$
\end{remark}
Let us now define some discrete quantities that we will use throughout the paper.
\begin{definition}\label{hat_v}
Let $\hat v(i)$ and $\bar v^{\delta}[i]$ be defined as follows:
$$\hat v(i):=v(i-\delta)+2\delta v_{x}(i-\delta),$$
$$\bar v^{\delta}[i]:=\delta \left\lceil\delta^{-1}\hat v(i)-\frac{1}{2} \right\rceil=\inf\{j\in \delta{\mathbb{Z}}\, |\,j\ge \hat v(i)-\frac{\delta}{2} \}
\in\delta{\mathbb{Z}}.$$
\end{definition}
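As a quick sanity check of the rounding in Definition \ref{hat_v}, consider the illustrative values $\delta=0.1$ and $\hat v(i)=0.27$ (chosen here only for concreteness):

```latex
% Example: \delta = 0.1, \hat v(i) = 0.27 (illustrative values).
% The infimum runs over j \in 0.1\,\mathbb{Z} with j \ge 0.27 - 0.05 = 0.22,
% so the smallest admissible lattice value is j = 0.3. Equivalently,
\bar v^{\delta}[i]
  = \delta\left\lceil \delta^{-1}\hat v(i) - \tfrac{1}{2} \right\rceil
  = 0.1\,\lceil 2.7 - 0.5 \rceil
  = 0.1\cdot 3
  = 0.3 .
```

In words, $\bar v^{\delta}[i]$ is $\hat v(i)$ rounded to the nearest point of $\delta{\mathbb{Z}}$ (rounding down at exact midpoints).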
We will need the following Lemma.
\begin{lem}\label{comparisonlemma}
Let $v$ be as in Definition \ref{blocked} and
$\hat v,\ \bar v^\delta$ be as in Definition \ref{hat_v}. Denote by $\bar w^\delta$ the
piecewise linear interpolation of $\bar v^\delta,$ and by $w$ the piecewise linear interpolation
of $\hat v.$ Then
$v+\delta/2\ge \bar w^\delta,$ and $v\ge w.$
\end{lem}
\begin{proof}
First, note that convexity of $v$ in $[i-\delta,i+\delta]$ implies
that $\hat v(i)\le v(i+\delta).$
Let $$I_i:=(i-1+\delta,i+\delta)$$ and let the auxiliary function
$\hat w$ be the solution of
\begin{eqnarray*}
&&\Delta \hat w=-F1_{[i-1+\delta,i-\delta]}\quad {\rm on\ }I_i\\
&&\hat w(i-1+\delta)=v(i-1+\delta),\quad \hat w(i+\delta)=\hat v(i).
\end{eqnarray*}
This function is $C^{1}$ on its domain and solves the ODE
$\hat w_{xx}=-F$ on $(i-1+\delta, i-\delta).$ (Here $x$ is considered as ``time''.) Suppose
$\hat w(i-\delta)>v(i-\delta).$ Then $\hat w_{x}(i-\delta)<
v_{x}(i-\delta),$ and integrating the ODE backwards in $x$ we obtain
$\hat w(i-1+\delta)>v(i-1+\delta),$ a contradiction. Assuming
$\hat w(i-\delta)<v(i-\delta)$ we obtain a contradiction in a similar
way,
and we conclude $\hat w(i-\delta)=v(i-\delta).$ This implies
$\hat w=v$ on $[i-1+\delta,i-\delta]$ and (by convexity
of $v$ on $[i-\delta,i+\delta]$) $\hat w\le v$ on
$[i-\delta,i+\delta].$
Now consider
\begin{eqnarray*}
&&\Delta w=0\quad {\rm on\ }I_i\\
&&w(i-1+\delta)=\hat v(i-1),\quad w(i+\delta)=\hat v(i)
\end{eqnarray*}
Clearly $w$ is the piecewise linear interpolation of
$\hat v.$
As $\Delta \hat w \le \Delta w$ and $w\le \hat w$ on $\partial I_i,$
the comparison principle for the Laplace
operator
implies $\hat w\ge w,$ so
$v\ge \hat w\ge w.$ The conclusion for $\bar w^\delta$ follows immediately.
\end{proof}
\subsection{Existence and uniqueness for parabolic equations}\label{existence}
\begin{lem}
There exists a global classical solution of the parabolic Cauchy problems (\ref{cdl.eq.interface}),
and (\ref{cdl.eq.approx}) with initial conditions which are uniformly bounded and locally
$C^2.$
The solutions are unique. If $0\le v_0\le u_0,$ $v$ solves (\ref{cdl.eq.approx}) with initial condition
$v_0,$ $u$ solves (\ref{cdl.eq.interface}) with initial condition $u_0,$ then $v \le u.$
\end{lem}
\dem{Proof:} For $M\in {\mathbb{N}},$ replace $l(i,j)(\omega)$ by $l^M(i,j):=M\wedge l(i,j)$, where $a\wedge b:=\min\{a,b\}$. The corresponding fields
$f^M,\ \widetilde f^M$ are uniformly bounded and uniformly Lipschitz in $s.$ Therefore we can apply the Banach
fixed point theorem in $L^\infty$ in order to obtain a local in time solution, which,
by local parabolic regularity, is classical. It can be extended as the nonlinearity is uniformly
bounded. Hence a global solution $u^M(x,t)$ exists. Note that by the comparison principle $u^{M}$ is a positive, nonincreasing function of $M$, i.e. $u^M\ge u^N>0$ for $N>M$, so $u(x,t):=\lim_{M\to\infty}u^M(x,t)$ exists.
Applying regularity locally (where the obstacles are bounded), we obtain that the limit is a classical solution.
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
\section{\textit{A priori} estimates on $\hat v(i)$ and $\bar v^{\delta}[i]$}
In this section, we establish some \textit{a priori} estimates on $\hat v(i)$ and $\bar v^{\delta}[i]$.
First we show a lemma which allows us to estimate the discrete Laplacian of $\hat v$ at $i$ (which involves
$i,$ $i+1$ and $i-1$) by a quantity that depends only on the obstacles above $i.$
\begin{lem}\label{cdl.lem.esti1}
Let $\hat v(i)$ be defined as in the previous section, and define the discrete Laplacian as
$$\Delta_{d}\hat v(i):= \hat v(i+1)-2\hat v(i)+\hat v(i-1) =\big(\hat v(i+1)-\hat v(i)\big)-\big(\hat v(i) -\hat v(i-1)\big).$$
Then
$$
-2\delta[v_{x}(i-1+\delta)-v_{x}(i-1-\delta)]\le \Delta_{d}\hat v(i)+
\hat F\le (1+2\delta)[v_{x}(i+\delta)-v_{x}(i-\delta)],$$
where $F(1-2(\delta+{\epsilon}))\le\hat F \le (1-2\delta)F$ for the ${\epsilon}>0$ in Def. \ref{auxfield}.
\end{lem}
Note that our discretization, using the tangents, implies that the discrete Laplacian does not
necessarily satisfy the same lower bound as the Laplacian of the original path.
\dem{Proof:} {\bf Step one: Upper bound}\\
As a preparation, let us recall some formulas satisfied by $v$.\\
Since $v$ satisfies \eqref{cdl.eq.outside.strip}, we have for all $i\in {\mathbb{Z}}$
\begin{align}
&v_{x}(i+1-\delta)-v_{x}(i+\delta)=-F\int_{i+\delta}^{i+1-\delta}\chi_A^{{\epsilon}}(x)dx \label{cdl.eq.esti1}\\
&v(i+1-\delta)-v(i+\delta)=(1-2\delta)v_{x}(i+\delta)-F\int_{i+\delta}^{i+1-\delta}\left(\int_{i+\delta}^{s}\chi_A^{{\epsilon}}(x)dx\right)\,ds \label{cdl.eq.esti2}
\end{align}
Let us define $$\hat F:=F\int_{i+\delta}^{i+1-\delta}\chi_A^{{\epsilon}}(x)dx. $$
Observe that since $\chi_A^{{\epsilon}}(x+p)=\chi_A^{{\epsilon}}(x)$ for all integer $p$, $\hat F$ is independent of $i\in {\mathbb{Z}}$. Moreover
$$F(1-2(\delta+{\epsilon}))\le \hat F\le (1-2\delta)F$$ since $\chi_{A_{{\epsilon}}}(x)\le \chi_A^{{\epsilon}}(x)\le\chi_A(x) $.
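The two-sided bound on $\hat F$ follows from a one-line computation, assuming (as the notation suggests) that $\chi_A\equiv 1$ on $(i+\delta,i+1-\delta)$ and that $A_{\epsilon}$ shrinks this interval by ${\epsilon}$ on each side:

```latex
% Upper bound: \chi_A^{\epsilon} \le \chi_A = 1 on (i+\delta, i+1-\delta),
% an interval of length 1-2\delta, so
\hat F = F\int_{i+\delta}^{i+1-\delta}\chi_A^{\epsilon}(x)\,dx
       \le F\,(1-2\delta).
% Lower bound: \chi_A^{\epsilon} \ge \chi_{A_{\epsilon}} = 1 on
% (i+\delta+\epsilon,\, i+1-\delta-\epsilon), of length 1-2\delta-2\epsilon, so
\hat F \ge F\,(1-2(\delta+\epsilon)).
```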
Using now \eqref{cdl.eq.esti1}, the definition of $\hat v(i+1)$ and \eqref{cdl.eq.esti2} we see that
\begin{align*}
\hat v(i+1)&=v(i+\delta)+v_{x}(i+\delta)-F\int_{i+\delta}^{i+1-\delta}\left(\int_{i+\delta}^{s}\chi_A^{{\epsilon}}(x)dx\right)\,ds+2\delta (v_{x}(i+1-\delta)-v_{x}(i+\delta))\\
&=v(i+\delta)+v_{x}(i+\delta)-F\int_{i+\delta}^{i+1-\delta}\left(\int_{i+\delta}^{s}\chi_A^{{\epsilon}}(x)dx\right)\,ds-2\delta \hat F.
\end{align*}
Therefore,
\begin{equation}\label{cdl.eq.esti3}
\hat v(i+1)-\hat v(i)= v(i+\delta)+v_{x}(i+\delta)-F\int_{i+\delta}^{i+1-\delta}\left(\int_{i+\delta}^{s}\chi_A^{{\epsilon}}(x)dx\right)\,ds-2\delta \hat F -\hat v(i).
\end{equation}
Observe that since $\chi_A^{{\epsilon}}(x+p)=\chi_A^{{\epsilon}}(x)$ for all integer $p$ we have
$$F\int_{i+\delta}^{i+1-\delta}\left(\int_{i+\delta}^{s}\chi_A^{{\epsilon}}(x)dx\right)\,ds+2\delta \hat F =F \int_{i-1+\delta}^{i-\delta}\left(\int_{i-1+\delta}^{s}\chi_A^{{\epsilon}}(x)dx\right)\,ds+2\delta \hat F.$$
Hence, from the definition of the discrete Laplacian and using \eqref{cdl.eq.esti3} it follows that
\begin{equation}
\Delta_d \hat v(i)= v(i+\delta)+v_{x}(i+\delta)-\hat v(i) - v(i-1+\delta)-v_{x}(i-1+\delta)+\hat v(i-1)\label{cdl.eq.esti4}
\end{equation}
Using now the definition of $\hat v(i)$ and the convexity of $v$ in $(i-\delta,i+\delta)$ for all $i\in{\mathbb{Z}}$ we see that
\begin{align*}
&v(i+\delta)+v_{x}(i+\delta)-\hat v(i) \le v_{x}(i+\delta)+ 2\delta (v_{x}(i+\delta)-v_{x}(i-\delta))\\
&- v(i-1+\delta)+\hat v(i-1) \le 0.
\end{align*}
Hence,
$$
\Delta_d \hat v(i)\le (1+2\delta)v_{x}(i+\delta)-2\delta v_{x}(i-\delta)-v_{x}(i-1+\delta).$$
Using now \eqref{cdl.eq.esti1} it follows that
$$
\Delta_d \hat v(i)\le (1+2\delta)(v_{x}(i+\delta)- v_{x}(i-\delta)) -\hat F.$$
{\bf Step two: Lower bound}
From formula \eqref{cdl.eq.esti4} we have
\begin{equation}
\Delta_d \hat v(i)=v(i+\delta) -\hat v(i) +v_{x}(i+\delta)-v(i-1+\delta) +\hat v(i-1) -v_{x}(i-1+\delta)
\end{equation}
Since $v$ is convex in $(i-\delta,i+\delta)$, we have $v(i+\delta) -\hat v(i) \ge 0$ and $v_{x}(i+\delta)\ge v_{x}(i-\delta)$. Therefore we have
\begin{equation}
\Delta_d \hat v(i)\ge v_{x}(i-\delta)-v_{x}(i-1+\delta)-v(i-1+\delta) +\hat v(i-1).
\end{equation}
Using now \eqref{cdl.eq.esti1}, the convexity of $v$ in $(i-1-\delta,i-1+\delta)$ and the definition of $\hat v(i-1)$ it follows that
\begin{equation}
\Delta_d \hat v(i)\ge -\hat F- 2\delta [v_{x}(i-1+\delta) -v_{x}(i-1-\delta)].
\end{equation}
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
Now we proceed to estimate the change of the discrete gradient
$$k(i):=v_{x}(i+\delta)-v_{x}(i-\delta)$$ in terms of the obstacle strengths above $i.$
Observe that always $k\ge 0$ by convexity.
If the gradients are very steep, the path will pass through several obstacles above the interval
$[i-\delta,i+\delta].$ The number of obstacles passed and the time spent in each
of them (i.e. the Lebesgue measure of its image under the inverse mapping)
can be estimated in terms of $v'(i-\delta)$ and $v'(i+\delta).$
\begin{lem}\label{cdl.lem.esti3}
Let $v$ be a blocked path, let $i\in{\mathbb{Z}},$ and assume that $k(i)>0$.
Set $M:=\max\{|v_{x}(i-\delta)|,|v_{x}(i+\delta)|\}.$ Then we have
$$k(i)\le \frac{18\delta}{M}\sum_{\hat v(i)-4\delta M\le j\le \hat v(i)+4\delta M}l(i,j)$$
\end{lem}
\dem{Proof:}
Step 1:
As $v$ is convex on $[i-\delta,i+\delta],$ the gradient is monotone, hence $|v_x(x)|\le M$
for all $x\in I(i):=[i-\delta,i+\delta].$ As a consequence, we have on $I(i)$
$$
v(i)-\delta M\le v(x)\le v(i)+ \delta M.
$$ Since $|\hat v(i)-v(i-\delta)| \le 2\delta M$ and $|v(i)-v(i-\delta)|\le \delta M,$ we obtain
$$
|v(x)-\hat v(i)|\le 4\delta M\quad {\rm on}\ [i-\delta,i+\delta].
$$
Step 2.
Define the time spent by the path in the $j$-th obstacle above $i$ as
$$S_j:=\big|\{x:\ v(x)\in [j-\delta,j+\delta] \}\big|,
$$
where $|A|$ denotes the Lebesgue measure of the set $A$ and $j\in {\mathbb{Z}}^*=1/2+{\mathbb{Z}}.$ Note that by convexity
$v_x$ changes sign at most once, hence each $S_j$ is the union of at most two intervals; moreover,
$S_j=\emptyset$ if $|j-\hat v(i)|>4\delta M.$
Hence, since for $x\in I(i)$ we have $v_{xx}(x) \le l(i,j)$ on obstacle $j$ and $v_{xx}(x)=0$ elsewhere,
$$
v_x(i+\delta)-v_x(i-\delta)\le \sum_{\hat v(i)-4\delta M\le j\le \hat v(i)+4\delta M}l(i,j)S_j,
$$ where the sum runs over $j\in {\mathbb{Z}}^*.$
Step 3. Note that $k\le 2M.$
As the gradient is monotone on $I(i),$ there exists a $\hat \tau$ such that
$|v_x(\hat \tau)|=M-k/3$ and $|v_x(x)|\ge M-k/3\ge M/3\ge 0$ on $\hat I(i),$ where
$$
\hat I(i)=\left\{
\begin{array}{ll} [\hat \tau,i+\delta] & {\rm if}\ M=|v_x(i+\delta)| \\
{} [i-\delta,\hat \tau] & {\rm if}\ M=|v_x(i-\delta)|. \end{array}
\right.
$$
As the gradient does not change sign on $\hat I (i),$ the sets $\hat S_j:=\{x\in \hat I(i):\ v(x)\in [j-\delta,j+\delta]\}$ are intervals.
Moreover,
$$|\hat S_j|\le \frac{2\delta}{M/3}=\frac{6\delta}{M}
$$ as $|v_x|\ge M/3$ on $\hat I(i).$
Hence
$$
\frac{k}{3}=M-|v_x(\hat \tau)|\le \sum_{\hat v(i)-4\delta M\le j\le \hat v(i)+4\delta M}l(i,j)|\hat S_j|\le \frac{6\delta}{M}
\sum_{\hat v(i)-4\delta M\le j\le \hat v(i)+4\delta M}l(i,j)
$$
and the result follows.
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
\begin{remark}\label{cdl.rem.m(i)}
Note that if $k(i)\ge 1$, then the corresponding $M(i)\ge \frac{1}{2}$. Indeed, by definition of $M(i)$ and $k(i)$ we have $2M(i)\ge |v_{x}(i-\delta)|+|v_{x}(i+\delta)|\ge v_{x}(i+\delta) -v_{x}(i-\delta)=k(i)\ge 1,$
i.e. $M(i)\ge 1/2.$
\end{remark}
Combining now Lemmas \ref{cdl.lem.esti1} and \ref{cdl.lem.esti3} we deduce the following estimates, which
allow us to bound the discrete Laplacian of the blocked path $(\bar v^{\delta}[j])_{j\in [-N,N]\cap{\mathbb{Z}}}$ at a site $i$ by a normalized sum of random variables.
\begin{lem}\label{cdl.lem.esti5}
Let $v$ be a blocked path.
Then for all $i \in [-N+\delta,N-\delta]\cap {\mathbb{Z}}$ there exist $M(i), M(i-1)\ge\frac{1}{2}$ such that the following holds:
\begin{align*}
\Delta_d \bar v^{\delta}[i] +\bar F &\le(1+2\delta)\left[\frac{360\delta^2}{2\delta(4 M(i)+\frac{1}{2})}\sum_{\bar v^{\delta}[i]-\delta(4 M(i)+\frac{1}{2})\le j\le \bar v^{\delta}[i] +\delta(4 M(i)+\frac{1}{2})}l(i,j)({\omega})\right],\\
\Delta_d \bar v^{\delta}[i] +\bar F &\ge -2\delta\left[\frac{360\delta^2}{2\delta(4 M(i-1)+\frac{1}{2})}\sum_{\bar v^{\delta}[i-1]-\delta(4 M(i-1)+\frac{1}{2})\le j\le \bar v^{\delta}[i-1] +\delta(4 M(i-1)+\frac{1}{2})}l(i-1,j)({\omega})\right] ,
\end{align*}
where $\bar F:=\hat F -(1+2\delta)$.
\end{lem}
\dem{Proof:} Let us first start with the proof of the upper bound.
Observe first that
$$ \hat v(i)-\frac{\delta}{2}\le \bar v^{\delta}[i]\le \hat v(i)+\frac{\delta}{2},$$
which implies that
$$ \Delta_d \hat v(i) -2\delta \le \Delta_d \bar v^{\delta}[i]\le \Delta_d \hat v(i) +2\delta.$$
Therefore using Lemma \ref{cdl.lem.esti1} we have
\begin{equation}\label{cdl.eq.esti-laplaced3}
\Delta_d \bar v^{\delta}[i]\le (1+2\delta)k(i) -\hat F +2\delta,
\end{equation}
where $k(i)=v_{x}(i+\delta)-v_{x}(i-\delta)\ge 0$.
By Lemma \ref{cdl.lem.esti3} and Remark \ref{cdl.rem.m(i)}, for $k(i)\ge 1$ there exists $M(i)\ge \frac{1}{2}$ so that
$$k(i)\le\frac{18\delta}{M(i)}\sum_{\hat v(i)-4\delta M(i)\le j\le \hat v(i) +4\delta M(i)}l(i,j)({\omega}).$$
So we easily see that
\begin{equation}\label{cdl.eq.esti-laplaced4}
k(i)\le\frac{18\delta^2(4M(i)+\frac{1}{2})}{M(i)(4M(i)+\frac{1}{2})\delta}\sum_{\bar v^{\delta}[i]-(4 M(i)+\frac{1}{2})\delta\le j\le \bar v^{\delta}[i] +(4 M(i)+\frac{1}{2})\delta}l(i,j)({\omega}).
\end{equation}
Therefore, since $M(i)\ge\frac{1}{2}$ we have
\begin{equation}\label{cdl.eq.esti-laplaced5}
k(i)\le\frac{180\delta^2}{(4M(i)+\frac{1}{2})\delta}\sum_{\bar v^{\delta}[i]-(4 M(i)+\frac{1}{2})\delta\le j\le \bar v^{\delta}[i] +(4 M(i)+\frac{1}{2})\delta}l(i,j)({\omega}).
\end{equation}
Hence, for all $k(i)\ge 0$, we have
$$k(i)\le 1+ \frac{180\delta^2}{(4M(i)+\frac{1}{2})\delta}\sum_{\bar v^{\delta}[i]-(4 M(i)+\frac{1}{2})\delta\le j\le \bar v^{\delta}[i] +(4 M(i)+\frac{1}{2})\delta}l(i,j)({\omega}).$$
and the estimate follows.
The lower bound is treated in a similar way.
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
\section{Probabilistic Estimates}
We first recall a standard fact about the Laplace transform of independent exponential random variables, and of random variables whose tail distribution is bounded by an exponential.
\begin{lem}\label{cdl.lem.prob1}
\begin{enumerate}
\item
Let $\{X_i\}_{i\in {\mathbb{N}}}$ be independent identically distributed
random variables such that for a parameter $\lambda_0$
and a constant $C>0$
\begin{equation}\label{cdl.eq.expbound}
{\mathbb P}[X_0>r]\le Ce^{-\lambda_0 r}.
\end{equation}
Then we have for any $\lambda<\lambda_0$ and $L\in {\mathbb{N}},$ $L\ge 2,$
\begin{eqnarray}\label{cdl.eq.L=1}
{\mathbb{E}}\left[ e^{\lambda X_1}\right]&\le&C\frac{\lambda_0}{\lambda_0-\lambda}\\ \label{cdl.eq.Lnot1}
{\mathbb{E}}\left[ e^{\lambda \sum_{i=1}^L X_i}\right]&\le&C^L\left(\frac{\lambda_0}{\lambda_0-\lambda}\right)^L\\
\label{cdl.eq.Lge1}
{\mathbb{E}}\left[ e^{\lambda \left(\frac{1}{L}\sum_{i=1}^L X_i\right)}\right]&\le&C^Le^{\frac{4\ln 4}{3}\frac{\lambda}{\lambda_0}}
\quad {\rm \ for\ } L\ge 2 ,\ \lambda\in (2/3\lambda_0,\lambda_0)
\end{eqnarray}
\item Let $\{X_i\}_{i\in {\mathbb{N}}}$ be independent exponential random variables with parameter $\lambda_0>0.$
Then (\ref{cdl.eq.L=1})-(\ref{cdl.eq.Lnot1}) hold as equalities with $C=1,$ while
(\ref{cdl.eq.Lge1}) holds as inequality with $C=1.$
\end{enumerate}
\end{lem}
\dem{Proof:} We first show 2.
The first equality is standard, the second follows by using independence. For the third,
note that by concavity of $\ln(1-x)$ on $[0,3/4]$ the graph lies above the chord through its endpoints, i.e.
$$\ln(1-x)\ge \frac{4}{3}\ln\left(\frac{1}{4}\right)x=-\frac{4\ln 4}{3}\,x\ {\rm for}\ x\in \left[0,\frac{3}{4}\right].$$ Using independence and this concavity estimate
with $x=\lambda/(\lambda_0 L)\in[0,3/4]$ we obtain
$$
{\mathbb{E}}\left[ e^{\lambda \frac{1}{L}\sum_{i=1}^L X_i}\right]=\left(\frac{\lambda_0}{\lambda_0-\frac{\lambda}{L}}\right)^L
=e^{-L\ln\left(1-\frac{\lambda}{\lambda_0L}\right)}\le e^{\frac{4\ln 4}{3}\frac{\lambda}{\lambda_0}}.
$$
In order to show 1., it is sufficient to prove the first inequality; the others then follow as in the previous case.
For (\ref{cdl.eq.L=1}) note that the expectation of a random variable is the Riemann-Stieltjes integral with the
distribution function as integrator. Now integrate by parts and use that the integrand $e^{\lambda x}$
is monotone.
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
\begin{remark} \label{cdl.rem.esti.expectation}
Observe that the above estimate on the Laplace transform of the average $\frac{1}{L}\sum_{i=1}^L X_i$ is independent of $L$.
\end{remark}
Let us define $\widetilde S_M$ by $$\widetilde S_M({\omega})(i,j):= \sum_{-M\le j-l\le M}l(i,l).$$
Then we have the following Corollary:
\begin{cor}\label{corprob}
For any discrete function $j(i):{\mathbb{Z}} \to{\mathbb{Z}},$ the random variables $\{\widetilde S_M({\omega})(i,j(i))\}_{i\in {\mathbb{Z}}}$
are independent and identically distributed. Moreover,
there exist constants $C,\hat \lambda$ which depend only on $\lambda_0$ such that
$$
{\mathbb{P}}\left( \widetilde S_M({\omega})(i,j(i))>r\right)\le e^{C-\hat\lambda r}
$$
\end{cor}
\dem{Proof:} The first assertion is obvious. The second
is a consequence of (\ref{cdl.eq.L=1}) and (\ref{cdl.eq.Lge1}) and the exponential Chebyshev inequality
with a parameter $\lambda\in (2/3\lambda_0,\lambda_0).$
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
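For completeness, the exponential Chebyshev step in the preceding proof reads as follows (a standard computation; the constants $C,\hat\lambda$ obtained this way depend only on $\lambda_0$):

```latex
% Markov's inequality applied to e^{\lambda \widetilde S_M}, for any fixed
% \lambda \in (2\lambda_0/3, \lambda_0):
{\mathbb{P}}\left( \widetilde S_M({\omega})(i,j(i))>r\right)
  \le e^{-\lambda r}\,{\mathbb{E}}\left[ e^{\lambda \widetilde S_M({\omega})(i,j(i))}\right]
  \le e^{C-\lambda r},
% where the expectation is bounded by the moment estimates of
% Lemma \ref{cdl.lem.prob1}, applied to the finitely many i.i.d. terms of the sum.
```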
Let us now estimate the probability that a blocked path with boundary
conditions on $[-N,N]$
is compatible with the $l(i,j)$.
\begin{definition}[blocked Dirichlet path]\label{BlockedDirichlet}
\noindent
Let $v(-N+\delta)=v(N-\delta)=0.$
\noindent
Moreover, let $v$ solve
(\ref{cdl.eq.outside.strip}) for $-N\le i\le N-1,$ and let
$v$ solve (\ref{cdl.eq.inside.strip}) for $-N+1\le i \le N-1.$
\noindent
Extend $v$ to $[-N-\delta,N+\delta]$ by
$$v(x)=v'(-N+\delta)(x+N-\delta) {\quad \rm on \quad}
[-N-\delta,-N+\delta]$$ and
$$v(x)=v'(N-\delta)(x-N+\delta){\quad \rm on \quad}
[N-\delta,N+\delta].$$
\end{definition}
\begin{remark}\label{remdiscrete}
\begin{enumerate}
\item
Note that this path solves (\ref{cdl.eq.inside.strip})
for $-N\le i \le N$ if we set $l(i,j)=0$ for $i=-N$ or $i=N.$
\item
If $v\ge0$ on $[-N+\delta,N-\delta],$ then $$0\ge
v(x)\ge -2\delta FN\quad {\rm for}\quad
x\in [-N-\delta,-N+\delta]\cup[N-\delta,N+\delta].$$
\end{enumerate}
\end{remark}
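The constant $2\delta FN$ in the last item comes from a bound on the boundary gradient; a sketch of the estimate (using that the obstacle terms in \eqref{cdl.eq.inside.strip} are nonnegative, so $v_{xx}\ge -F$ everywhere, together with $v(\pm(N-\delta))=0$):

```latex
% Writing s := v'(-N+\delta), the bound v_{xx} \ge -F gives
% v'(x) \ge s - F(x+N-\delta) on [-N+\delta, N-\delta], hence
0 = v(N-\delta)-v(-N+\delta) = \int_{-N+\delta}^{N-\delta} v'(x)\,dx
  \ge s\,(2N-2\delta) - \frac{F}{2}\,(2N-2\delta)^2 ,
% which yields s \le F(N-\delta) \le FN; the symmetric bound holds at the
% right endpoint. On the linear extensions of length 2\delta this gives
% |v| \le 2\delta\, FN.
```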
\begin{definition}
Let $\bar v^\delta:\ [-N,N]\cap {\mathbb{Z}}\to \delta{\mathbb{Z}}$
be a discrete path. We call the path {\em compatible} with a random
obstacle configuration if there exists a (not necessarily unique)
path as in Definition \ref{BlockedDirichlet} which is mapped to
$\bar v^\delta$
under the discretization defined in Def. \ref{hat_v}.
\end{definition}
Note that the discrete path is fixed. Whether it is compatible or not depends on the configuration
of the random field.
\begin{lem}\label{cdl.lem.prob3}
Let $({\Omega},{\mathcal{F}},{\mathcal{P}})$ be a probability space and let
$l(i,j)({\omega})$ be i.i.d. exponential random variables
with parameter $ \lambda_0>0$ and let $\bar v^{\delta}$ be a discrete
path
with fixed boundary conditions
$$\bar v^{\delta}(-N)=0,\ \bar v^{\delta}(N)=b\ {\rm for\ some\ }
b\in [-FN,FN].$$
Then there exist constants
$\hat C(\delta,\lambda_0),\ \lambda_1(\delta,\lambda_0)$ independent
of
$b$
such that we have for $F$ sufficiently large
$$
{\mathbb{P}}[\bar v^\delta\ {\rm compatible},\ \bar v^\delta(N)=b]
:={\mathbb{P}}_b[\bar v^\delta\ {\rm compatible}]\le e^{N \hat C}
e^{-\lambda_1\sum_{-N+1}^{N-1} |\Delta_d\bar v^\delta(i)+\bar F|},
$$ with $\bar F$ as in Lemma \ref{cdl.lem.esti5}.
\end{lem}
The previous estimate bounds the probability of the random obstacle configurations such that a {\em fixed}
discrete path is compatible with the random environment. In order to prove that the probability that
{\em there exists} some compatible nonnegative path is small,
we would
have to sum over all possible paths,
each weighted with the right hand side of the previous estimate.
It is complicated to bound these sums, because the number of possible discrete paths grows faster than
exponentially in
$N.$ Fortunately, most of them are extremely unlikely to be compatible. In order to quantify this,
we define an auxiliary probability measure on discrete paths.
\begin{definition}\label{Ptilde}
\begin{eqnarray*}
\widetilde {\mathbb{P}}_b[\Delta_d\bar v^\delta]&:=&
\frac{1}{Z^{2N-1}}e^{-\lambda_1\sum_{-N+1}^{N-1} |\Delta_d\bar v^\delta(i)+\bar F|},
\\
Z&:=&\sum_{k=-\infty}^\infty e^{-\lambda_1|\delta k+\bar F|}
\end{eqnarray*}
\end{definition}
The normalisation constant is obtained by summing over all possible discrete paths for fixed boundary
conditions. This is equivalent to summing over all discrete Laplacians. Note that $Z$ is bounded from above
and below by constants independent of $F.$
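That $Z$ is bounded above and below by constants independent of $F$ can be seen directly; a sketch (nearest lattice point for the lower bound, a geometric series for the upper bound):

```latex
% Lower bound: choose k^* \in {\mathbb{Z}} minimizing |\delta k + \bar F|;
% then |\delta k^* + \bar F| \le \delta/2, so
Z \ge e^{-\lambda_1 \delta/2}.
% Upper bound: for each m \ge 0 at most two integers k satisfy
% m\delta \le |\delta k + \bar F| < (m+1)\delta, hence
Z \le 2\sum_{m=0}^{\infty} e^{-\lambda_1 \delta m}
  = \frac{2}{1-e^{-\lambda_1\delta}} .
```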
Note that the law of the positive and the negative part
of $\Delta_d\bar v^\delta(i)+\bar F$
under $\widetilde P$ is that of (discretized) independent exponential
random variables.
In particular, probabilities of sums of the discrete Laplacians have certain exponential moments and can
be estimated by large deviation techniques.
\begin{cor}\label{cdl.cor.auxmeas}
With $\widetilde P$ as in Def. \ref{Ptilde}, there exists $N_0(\lambda_0,\delta)$ such that
$$
{\mathbb{P}}_b[\bar v^\delta\ {\rm compatible}]\le e^{\widetilde CN}
\widetilde{\mathbb{P}}[\Delta_d\bar v^\delta ]
$$
for $N>N_0.$
\end{cor}
\dem{Proof of Corollary \ref{cdl.cor.auxmeas}:} We suppose that Lemma \ref{cdl.lem.prob3} holds. Then
\begin{eqnarray*}
{\mathbb{P}}_b[\bar v^\delta\ {\rm compatible}]&\le& e^{N \hat C}
e^{-\lambda_1\sum_{-N+1}^{N-1} |\Delta_d\bar v^\delta(i)+\bar F|}
\\ &=& e^{N \hat C}\left(Z^2\right)^{N}Z^{-1}
\frac{1}{Z^{2N-1}}e^{-\lambda_1\sum_{-N+1}^{N-1} |\Delta_d\bar v^\delta(i)+\bar F|}\le
e^{N \widetilde C}\widetilde{\mathbb{P}}[\Delta_d\bar v^\delta ]
\end{eqnarray*} for $N$ sufficiently large. Here we can choose e.g.
$$
\widetilde C=2\hat C +2\ln(Z).
$$
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
\dem{Proof of Lemma \ref{cdl.lem.prob3}:}
In order to simplify notation we write
$$
S_{\bar v^\delta}(\omega)(i):= \widetilde S_{M(\bar v^\delta)}({\omega})(i,\bar v^\delta(i)).
$$
We write the absolute value as sum of positive and negative part.
By Lemma \ref{cdl.lem.esti5} we get that there exist
universal positive constants $C_0$
such that the fixed discrete path $\bar v^\delta$
is compatible only if
\begin{eqnarray*}&&\omega\in\left(\bigcap_{i=-N+1}^{N-2}
\left(A_{\bar v^\delta,+}(i)\cap A_{\bar v^\delta,-}(i)\right)\right)\cap A_{\bar v^\delta,+}(N-1)\cap A_{\bar v^\delta,-}(-N+1)\\
A_{\bar v^\delta,+}(i)&:=&\left\{\omega:\
C_0\left(\Delta_d\bar v^\delta(i)+\bar F\right)_+
\le S_{\bar v^\delta}(\omega)(i)\right\}\\
A_{\bar v^\delta,-}(i)&:=&\left\{\omega:\
C_0\left(\Delta_d\bar v^\delta(i+1)+\bar F\right)_-
\le S_{\bar v^\delta}(\omega)(i)\right\}\ \\
B_{\bar v^\delta}(i)&:=&
\left(A_{\bar v^\delta,+}(i)\cap A_{\bar v^\delta,-}(i)\right) .
\end{eqnarray*}
Note that
$$
B_{\bar v^\delta}(i)\subseteq \left\{
S_{\bar v^\delta}(i)\ge
\frac{C_0}{2}\left(\Delta_d\bar v^\delta (i+1)+\bar F \right)_-
+ \frac{C_0}{2}\left(\Delta_d\bar v^\delta(i)+\bar F \right)_+\right\}
$$
and we estimate with the help of Corollary \ref{corprob}
for $i\in \{-N+1,\ldots ,N-2\}$
$$
{\mathbb{P}}(B_{\bar v^\delta}(i))\le e^{\hat C-\frac{\widehat\lambda_1 \delta}{C_0}\left(\left(
\Delta_d\bar v^\delta(i)+\bar F \right)_++\left(
\Delta_d\bar v^\delta(i+1)+\bar F \right)_-\right)}$$
for constants $\hat C$ and $\widehat\lambda_1$ depending only on $\lambda_0$
but not on $F.$
Moreover,
for $i=N-1$ we obtain
$$
{\mathbb{P}}(A_{\bar v^\delta,+}(N-1))\le e^{\hat C-\frac{\widehat\lambda_1
\delta}{C_0}
\left(
\Delta_d\bar v^\delta(N-1)+\bar F
\right) _+}
$$
and for $i=-N+1$ we obtain
$$
{\mathbb{P}}(A_{\bar v^\delta,-}(-N+1))\le e^{\hat C-\frac{\widehat\lambda_1
\delta}{C_0}
\left(
\Delta_d\bar v^\delta(-N+1)+\bar F
\right) _-}
$$
The events $B_{\bar v^\delta}(i)$
are independent for different
$i,$ hence
\begin{eqnarray*}
{\mathbb{P}}_b[\bar v^\delta\ {\rm compatible}]&\le& {\mathbb{P}}(A_{\bar v^\delta,-}(-N+1))
{\mathbb{P}}(A_{\bar v^\delta,+}(N-1)) \prod\limits_{i=-N+1}^{N-2}
{\mathbb{P}}(B_{\bar v^\delta}(i))
\\ &\le&e^{N \hat C}
e^{-\frac{\widehat \lambda_1 \delta}{C_0}\sum_{-N+1}^{N-1} |\Delta_d\bar v^\delta(i)+\bar F|}.
\end{eqnarray*}
The claim follows now by choosing $\lambda_1=\frac{\widehat\lambda_1 \delta}{C_0}.$
\vskip 0.2 pt \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad $\square$
\begin{remark}Note that the 1-1-correspondence between second derivatives and paths with
Dirichlet boundary conditions allows us to express each path uniquely through its discrete
Laplacians and thus estimate its probability with the help of the previous lemma.
\end{remark}
As a consequence, discrete Laplacians which are on average much larger than $-F$ are extremely
unlikely. We will show that nonnegative paths that cross the ``triangle'' $KN-K|x|$ require such unlikely values
of the discrete Laplacian.
\section{Final Argumentation}
\subsection{Some formulas on discrete path and Comparison of two paths}
In this section, we recall some well known formulas for discrete
paths and their discrete derivatives.
The proofs are straightforward computations and therefore omitted.
Let us first recall some basic formulas satisfied by a discrete path $z$ defined in ${\mathbb{Z}}\times {\mathbb{R}}$.
\begin{lem} \label{cdl.lem.formula}
Let us denote
${\nabla^{l}} z[\ell+1]:=z[\ell+1]-z[\ell]$ and ${\nabla^{r}} z[\ell+1]:=z[\ell+1]-z[\ell+2]$. Then for $\ell\in {\mathbb{Z}}$ we have
\begin{itemize}
\item[(i)] \begin{align*}
& {\nabla^{l}} z[\ell+1]=\Delta_d z[\ell] +{\nabla^{l}} z[\ell]=\sum_{i=1}^{\ell}\Delta_d z[i] +{\nabla^{l}} z[1]\\
& {\nabla^{l}} z[\ell+1]=\Delta_d z[\ell] +{\nabla^{l}} z[\ell]=\sum_{i=k}^{\ell}\Delta_d z[i] +{\nabla^{l}} z[k].\end{align*}
\item[(ii)] \begin{align*}
&z[\ell+1]-z[0]=\sum_{i=1}^{\ell}\sum_{j=1}^{i}\Delta_d z[j] +(\ell+1){\nabla^{l}} z[1].\\
&z[\ell+1]-z[k]=\sum_{i=k+1}^{\ell+1}(z[i]-z[i-1])=\sum_{i=k+1}^{\ell}\sum_{j=k+1}^{i}\Delta_d z[j] +(\ell+1-k){\nabla^{l}} z[k+1].
\end{align*}
\item[(iii)] \begin{align*}
&{\nabla^{r}} z[0]=\Delta_d z[1] +{\nabla^{r}} z[1]=\sum_{i=1}^{\ell }\Delta_d z[i] +{\nabla^{r}} z[\ell ],\\
& {\nabla^{r}} z[k]=\Delta_d z[k+1] +{\nabla^{r}} z[k+1]=\sum_{i=k+1}^{\ell}\Delta_d z[i] +{\nabla^{r}} z[\ell].
\end{align*}
\item[(iv)] \begin{align*}
&z[0]-z[\ell+1]=\sum_{i=0}^{\ell-1}\sum_{j=i+1}^{\ell}\Delta_d z[j] +(\ell+1){\nabla^{r}} z[\ell]\\
&z[k]-z[\ell+1]=\sum_{i=k}^{\ell} (z[i]-z[i+1])=\sum_{i=k}^{\ell-1}\sum_{j=i+1}^{\ell}\Delta_d z[j] +(\ell+1-k){\nabla^{r}} z[\ell].
\end{align*}
\item[(v)] $$ {\nabla^{l}} z[\ell+1]=-{\nabla^{r}} z[\ell]$$
\end{itemize}
\end{lem}
\bigskip
Let us now define what we mean by ``crossing.''
\begin{definition}
Let $z_1$ and $z_2$ be two given paths in ${\mathbb{Z}}\times {\mathbb{R}}$. We say that $z_1$ crosses $z_2$ if and only if there exists $i\in {\mathbb{Z}}$ such that $z_1[i]\ge z_2[i] $ and $z_1[i+1]\le z_2[i+1]$.
\end{definition}
We will apply this to the discrete path $\bar v^\delta$ and the triangle $z_K(i):=NK-K|i|.$
%
\subsection{Proof of Theorem \ref{mainthm}}
First we state a trivial fact for discrete sums.
\begin{lem}
Let $a_j$ be nonnegative numbers. Then
\begin{equation}\label{averaging}
\sum_{i=1}^N\sum_{j=i}^Na_j=\sum_{j=1}^N j\, a_j\le N \sum_{j=1}^N a_j.
\end{equation}
\end{lem}
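To illustrate (\ref{averaging}), take $N=3$:

```latex
% N = 3: expanding the double sum, each a_j is counted once for every i \le j,
\sum_{i=1}^{3}\sum_{j=i}^{3}a_j
  = (a_1+a_2+a_3)+(a_2+a_3)+a_3
  = a_1+2a_2+3a_3
  = \sum_{j=1}^{3} j\,a_j
  \le 3\,(a_1+a_2+a_3).
```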
We will show that paths that remain nonnegative but cross the triangle $z_K$ require values
of the average discrete Laplacian which are very unlikely under $\widetilde P.$ In order to do so, we
distinguish cases: either the path is above the triangle near one of the two endpoints of the interval
$[-N,N]$ and crosses in the interior, or it crosses at $N$ or $-N.$ In both cases, this implies information on the
gradient. Note that the nonnegativity of the original subsolution does
not imply the nonnegativity of
the discretized path, but only that the discretized path is larger than $-\delta FN,$ $\delta$ times
the minimal possible gradient. In particular, it implies that the
terminal value
$b$ of the discretized path is in $[-\delta FN,0].$
{\bf Notation:} As only discrete paths appear in the following estimates, we will write $v[i]$ for $\bar v^\delta[i]$
in order to simplify notation.
{\bf First case:} $-\nabla^r v[-N]\le K.$ Then by Lemma \ref{cdl.lem.formula}
$$
v[0]-v[-N]=\sum_{i=-N+1}^{-1}\sum_{j=-N+1}^{i}\Delta_d v[j] -N{\nabla^{r}} v[-N].$$
Since $v[-N]=0$ and rewriting the double sum the right way, it follows that
$$
-FN\le v[0]\le NK+\sum_{i=-N+1}^{-1}(-i)(\Delta_d v[i]).
$$
After adding and subtracting $\bar F$ in each term in the summation
$$
-FN\le NK+\sum_{i=-N+1}^{-1}(-i)(\Delta_d v[i]+\bar F) -\bar F \frac{N(N-1)}{2},
$$
so, invoking (\ref{averaging}), it follows that
$$
\bar F\frac{N(N-1)}{2}-(F+K)N\le 2(N-1)\sum_{i=-N+1}^{N-1}(\Delta_d v[i]+\bar F)_+.
$$
By definition of $\bar F$, we have $$ \bar F\ge F(1-2(\delta+{\epsilon}))-(1+2\delta).$$
Therefore for ${\epsilon}$ small, say $ {\epsilon} \le \delta$, and $F$ such that $F\ge 2\frac{1+2\delta}{1-8\delta}$ we achieve $$\bar F\ge \frac{F}{2}.$$
Whence
$$
F\frac{N(N-1)}{4}-(F+K)N\le 2(N-1)\sum_{i=-N+1}^{N-1}(\Delta_d v[i]+\bar F)_+.
$$
This implies that
for $N$ large and $K$ fixed
$$
\frac{1}{2(N-1)}\sum_{i=-N+1}^{N-1}(\Delta_d v[i]+\bar F)_+ \ge \frac{1-2\delta}{8}F.
$$
As the $(\Delta_d v[i]+\bar F)_+$ are independent random variables under the auxiliary
probability measure $\widetilde {\mathbb{P}}$ defined in Def. \ref{Ptilde} which have exponential moments
bounded as in (\ref{cdl.eq.L=1}), we can derive an upper bound
for the large deviations principle:
(For the basic form of the large deviations principle needed, see
e.g \cite{Grimmett} Ch. 5.11)
Let
$$
{\mathcal I}(F)=\frac{F}{\mu}-1+\ln\left(\frac{\mu}{F}\right),
$$where $\mu:=\lambda_0^{-1}$ with $\lambda_0$ as in Lemma \ref{cdl.lem.prob1}. (I.e. for exponential random variables $\mu$ is the expectation of $(\Delta_d v[i]+\bar F)_+$ under $\widetilde {\mathbb{P}}.$ Note that $\mu$
is decreasing in $\lambda_0.$)
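The rate function ${\mathcal I}$ is the Legendre transform of the logarithmic moment generating function of an exponential variable; a short derivation, for $x>\mu$:

```latex
% For X exponential with mean \mu = 1/\lambda_0 we have
% \ln {\mathbb{E}}[e^{\lambda X}] = \ln\frac{\lambda_0}{\lambda_0-\lambda}
% for \lambda < \lambda_0, hence
{\mathcal I}(x) = \sup_{\lambda<\lambda_0}\left\{\lambda x
   - \ln\frac{\lambda_0}{\lambda_0-\lambda}\right\}.
% The supremum is attained at \lambda^* = \lambda_0 - 1/x, which yields
{\mathcal I}(x) = \lambda_0 x - 1 - \ln(\lambda_0 x)
   = \frac{x}{\mu} - 1 + \ln\left(\frac{\mu}{x}\right).
```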
Then, by the large deviations principle,
for any $\eta>0$ there exists $N_0\in {\mathbb{N}}$ such that for all $N\ge N_0$
$$
\widetilde {\mathbb{P}}\left( \frac{1}{2(N-1)}\sum_{i=-N+1}^{N-1}(\Delta_d v[i]+\bar F)_+
\ge \frac{(1-2\delta)}{8}F\right)\le e^{-N\left(C+{\mathcal I}\left(\frac{(1-2\delta)F}{8}\right)-\eta\right)}.
$$where $C$ is the constant in the bound (\ref{cdl.eq.L=1}). ($C=1$ for exponential random variables.)
Now choose
$F$ sufficiently large such that
$$
e^{\widetilde C + C-{\mathcal I}\left(\frac{(1-2\delta)F}{8}\right)}<1,
$$
where the constants are defined in Lemma \ref{cdl.lem.prob3}.
Then there exists a constant $C_3$ depending on
$\lambda_0$ and $\delta$ such that for $N$ sufficiently large
$$
{\mathbb{P}}({\rm case\ 1})\le e^{-C_3N}.
$$
The case $\nabla^lv[N]\ge -K$ is done in a similar way.
{\bf Second case:}
$-\nabla^r v[-N]>K, \nabla^lv[N]<-K.$ This implies that the path has to cross the triangle
inside the interval $[-N,N].$
Suppose the path crosses $z_K$ on $[-N,0];$ the other
case follows by symmetry. Then there exists $N_1, \, -N<N_1<0,$ such that $-\nabla ^rv[N_1]\le K$ and
$v[N_1]\le KN.$ Then by Lemma \ref{cdl.lem.formula}
$$
v[N]-v[N_1]=\sum_{i=N_1+1}^{N-1}\sum_{j=N_1+1}^{i}\Delta_d v[j] -(N-N_1){\nabla^{r}} v[N_1],
$$
so
$$
-FN\le v[N]\le 2KN+KN+\sum_{i=N_1+1}^{N-1}\sum_{j=N_1+1}^{i}(\Delta_d v[j]+\bar F) -\bar F \frac{(N-N_1)(N-N_1-1)}{2},
$$
which implies
$$
\bar F \frac{N(N-1)}{2}-(F+3K)N\le \sum_{i=N_1+1}^{N-1}\sum_{j=N_1+1}^{i}(\Delta_d v[j]+\bar F)
\le 2(N-1)\sum_{i=-N+1}^{N-1}(\Delta_d v[i]+\bar F)_+,
$$ i.e. for $N$ sufficiently large
$$
\frac{1}{2(N-1)}\sum_{i=-N+1}^{N-1}(\Delta_d v[i]+\bar F)_+ \ge \frac{(1-2\delta)F}{4}.
$$
Now we can repeat the probabilistic argument from the first case.
Finally, we sum over all possible values of the terminal
condition
$b.$ The number of these values grows linearly in $N,$ hence using the exponential decay
of the probabilities we obtain
that there exists
$C_4(\delta,\lambda_0)$ and $F_0(\delta,\lambda_0)$ such that for
$F>F_0$
$$
{\mathbb{P}}\big(\omega:\ \bar v^\delta
\mbox{\ compatible\ and\ }
\bar v^\delta\ {\rm crosses}\ z_K \big)\le e^{-C_4N}.
$$
Now we conclude with Lemma \ref{comparisonlemma}.
\hfill $\square$
\subsection{Proof of Corollary}
Define $v^N$ as the solution of the initial-boundary value problem
\begin{eqnarray*}
\frac{\partial v^N}{\partial t}&=& v^N_{xx}(x,t) + \widetilde f(x,v^N(x,t)) +F \quad \text{ in } \quad (-N+\delta,N-\delta), \\
v^N(-N,t)&=&v^N(N,t)=0\\
v^N(x,0)&=&0,
\end{eqnarray*}
and let $u(x,t)$ solve (\ref{cdl.eq.interface}). The comparison principle for parabolic equations implies that
$v^N(x,t)\le u(x,t)$ for $x\in [-N-\delta,N+\delta],\ t>0.$
Moreover, $v^N(x,t)\nearrow v^N_{\rm stat}(x)$ as $t \to \infty ,$ where
$v_{\rm stat}^N(x)$ is a stationary solution of the Dirichlet problem.
Note that $\partial_tv^N(x,t)\ge 0$ as
$\partial_tv^N(x,0)\ge 0,$ and the time derivative $w:=\partial_tv^N$ solves
$$
\partial_t w=\Delta w+V(x)w,
$$where the potential $V(x)=\frac{\partial \widetilde f}{\partial u}(x,v^N(x,t))$
is bounded on compact subsets of ${\mathbb{R}}.$ (Note that $\omega$ is a
fixed
parameter here. $v^N\le FN^2\,1_{[-N,N]},$ so only obstacles within
$[-N,N]\times [0,FN^2]$ can occur, and these are bounded for $\omega$ fixed.)
Now a linear parabolic PDE with sufficiently regular potential
$V(x)$
and nonnegative initial condition remains nonnegative: $\widetilde w=
e^{-t\|V\|_\infty}w$ solves
$$
\partial_t \widetilde w=\Delta \widetilde w+\widetilde V(x)\widetilde w,\quad \widetilde V:=V-\|V\|_\infty\le 0
$$ with initial condition $\widetilde w\ge 0.$ So the classical parabolic
comparison principle (\cite{Nirenberg}) implies $\widetilde w \ge 0.$
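The positivity mechanism used here is easy to check numerically. The following is a minimal sketch (not part of the proof) that discretizes $\partial_t w=\Delta w+V(x)w$ with an explicit scheme; the potential $\cos(3x)$, the grid, and the time step are illustrative choices, with the step small enough that the update is a nonnegative combination of nonnegative values, so a nonnegative initial condition stays nonnegative.

```python
import numpy as np

# Explicit finite differences for  w_t = w_xx + V(x) w  on [-1,1] with
# Dirichlet boundary conditions.  V is a hypothetical bounded potential;
# dt is chosen so that 1 - 2*dt/dx^2 + dt*min(V) >= 0, hence each update
# is a nonnegative combination of nonnegative values.
n = 201
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
V = np.cos(3.0 * x)                      # |V| <= 1, bounded on compacts
dt = 0.4 * dx**2
w = np.maximum(0.0, 1.0 - 4.0 * x**2)    # nonnegative initial datum

for _ in range(2000):
    lap = np.zeros_like(w)
    lap[1:-1] = (w[2:] - 2.0 * w[1:-1] + w[:-2]) / dx**2
    w = w + dt * (lap + V * w)
    w[0] = w[-1] = 0.0

assert w.min() >= 0.0                    # nonnegativity is preserved
```

The same conclusion follows analytically from the rescaling $\widetilde w=e^{-t\|V\|_\infty}w$ used above.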
By Thm. \ref{mainthm} and the first Borel-Cantelli Lemma (see e.g.
\cite{Grimmett}),
$${\mathbb{P}}\left(\omega: v_{\rm stat}^N(0)\le
KN\ {\rm for\ infinitely\ many\ }N\right)=0,$$ so there exist almost surely
arbitrarily large $N$ such that
$$
\liminf_{t\to \infty} u(0,t)\ge \lim_{t\to\infty} v^N(0,t)=v^N_{\rm
stat}(0)\ge KN,
$$which implies
\begin{equation}\label{nopin}
\liminf_{t\to \infty} u(0,t,\omega)=+\infty
\end{equation} with probability 1.
By the comparison principle, this contradicts the existence
of a global nonnegative stationary solution.
Moreover, by arguments as in Lemma \ref{comparisonlemma},
\eqref{nopin}
holds for $x\in [-1,1].$
As the distribution of the obstacles is invariant
under translations in the $x$-direction, \eqref{nopin} holds
for all $x\in {\mathbb{R}}.$
\section{Introduction}
This work is concerned with the Cauchy problem
\begin{equation}\label{eq:original hyperbolic system}
\partial_t u+\mathcal Lu:=\partial_t u+A\partial_xu+Bu=0,\qquad u(0,x)=u_0(x).
\end{equation}
Although we will also discuss properties of the system \eqref{eq:original hyperbolic system} in its own right,
we primarily regard the system \eqref{eq:original hyperbolic system} as a linearization of the nonlinear hyperbolic system
\begin{equation}\label{nonlinear1}
\partial_t u+\partial_{x} F(u) + G(u) = 0
\end{equation}
at a constant stationary state $\bar u$ satisfying $G(\bar u)=0$ (see \citep{liu87,serre07} and subsequent works).
In addition, linear systems fitting in the class \eqref{eq:original hyperbolic system} emerge as models for velocity jump processes such as the Goldstein--Kac model \citep{goldstein51,kac74} (a generalization was introduced in \citep{mascia16}) and in other fields of application.
On the other hand, decay estimates for the system \eqref{eq:original hyperbolic system} have been available for many years. In \citep{shizuta85} it is proved that the $L^2$-norm of the solution $u$ to \eqref{eq:original hyperbolic system} is bounded by the sum of two terms: the first, in terms of the $L^2$-norm of the initial datum of $u$, decays exponentially, while the second, in terms of the $L^q$-norm of the initial datum of $u$ for $q\in[1,\infty]$, decays at the rate $(1/q-1/2)/2$. In that work, the matrices $A$ and $B$ are symmetric and satisfy the Kawashima--Shizuta condition: {\it if $z$ is an eigenvector of $A$, then $z$ does not belong to $\ker B$}. This condition is required for designing a compensating matrix that captures the dissipation of the system \eqref{eq:original hyperbolic system} over the degenerate kernel space of $B$, since the symmetric structure alone is not enough to guarantee the decay. The result is improved in \citep{bianchini07}: if \eqref{eq:original hyperbolic system} has a convex entropy, satisfies the Kawashima--Shizuta condition, and $B$ can be written in the block-diagonal form $\textrm{\normalfont diag}\,(O_{m\times m},D)$, where $O_{m\times m}$ is the $m\times m$ null matrix and $D\in M_{n-m}(\mathbb R)$ is positive definite, then, considering the parabolic equation obtained by applying the Chapman--Enskog expansion to the system \eqref{eq:original hyperbolic system}, the $L^p$-norm of the difference between the solution $u$ and the solution $U$ to the parabolic equation decays $1/2$ faster than the rate $(1-1/p)/2$ in terms of the $L^1\cap L^2$-norm of the initial datum of $u$. More recently, \citep{ueda12} generalized \citep{shizuta85} to non-symmetric matrices $B$ under appropriate conditions.
More detailed descriptions of the asymptotic behavior have been provided for specific classes of equations by {\it $L^p$-$L^q$ estimates}, e.g.\ the $L^p$-$L^q$ estimate for the Cauchy problem for the damped wave equation
\begin{equation}\label{dampedwave}
\partial_{tt} u+\partial_t u-\Delta u=0.
\end{equation}
It shows that the time-asymptotic profile of the solution $u$ to \eqref{dampedwave} includes the solution to a heat equation and the solution to a wave equation, and, when measuring the initial datum of $u$ in $L^q$, the $L^p$ distance between $u$ and this profile decays $\varepsilon>0$ faster than the rate $\alpha(p,q):=d(1/q-1/p)/2$, where $d$ is the spatial dimension (see \citep{marcati03,hosono04}).
We are not aware of any results on such kind of estimate for the general system \eqref{eq:original hyperbolic system} with general exponents $p$ and $q$. We start with the following assumptions on the matrices $A$ and $B$.
{\it {\bf Condition A.} {\sl [Hyperbolicity]} The matrix $A$ is diagonalizable with real eigenvalues.}
{\it {\bf Condition B.} {\sl [Partial dissipativity]} $0$ is a semi-simple eigenvalue of $B$ with algebraic multiplicity $m\geq 1$
and the spectrum $\sigma(B)$ of $B$ can be decomposed as $\{0\}\cup\sigma_{0}$
with $\sigma_0\subset\{\lambda\in\mathbb C\,:\,\textrm{\normalfont Re}\,\lambda>0\}$.}
Let $P_0^{(0)}$ be the unique eigenprojection associated with the eigenvalue $0$ of $B$; then the reduced system is given by
\begin{equation}\label{eq:reduced system}
\partial_tw+C\partial_xw\approx 0,
\end{equation}
where $w:=P_0^{(0)}u$ and $C:=P_0^{(0)}AP_0^{(0)}$. One assumes
{\it {\bf Condition C.} {\sl [Reduced hyperbolicity]} The matrix $C$ is diagonalizable with real eigenvalues considered in $\ker(B)$.}
On the other hand, the requisite condition for the decay of the solution to the system \eqref{eq:original hyperbolic system}, which is related to the well-known Kawashima--Shizuta condition, is that the eigenvalues $\lambda(i\xi)$ of the operator $E(i\xi):=-(B+i\xi A)$ satisfy
{\it {\bf Condition D.} {\sl [Uniform dissipativity]} There is $\theta>0$ such that
\begin{equation}\label{eq:regularity}
\textrm{\normalfont Re}\,(\lambda(i\xi))\le -\dfrac{\theta |\xi|^{2}}{1+|\xi|^2}, \qquad \textrm{ for } \xi\ne 0.
\end{equation}}
In this framework, we show that under the assumptions {\bf A}, {\bf B}, {\bf C} and {\bf D}, the time-asymptotic profile of the solution to the system \eqref{eq:original hyperbolic system} is the superposition of diffusion waves and exponentially decaying waves.
The diffusion waves are constructed as follows. Let $\Gamma_0$ be an oriented closed curve, contained in the resolvent set $\rho(B)$, enclosing the eigenvalue $0$ but none of the nonzero eigenvalues of $B$; one sets
\begin{equation}\label{eq:reduced resolvent coefficient}
S_0^{(0)}:=\dfrac{1}{2\pi i}\int_{\Gamma_0}z^{-1}(B-zI)^{-1}\,dz.
\end{equation}
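The Cauchy integral \eqref{eq:reduced resolvent coefficient} is also easy to evaluate numerically. The sketch below (an illustration, with $B$ an arbitrary symmetric test matrix having a semi-simple eigenvalue $0$) parametrizes the contour as $z(\theta)=re^{i\theta}$, which turns the integrand $z^{-1}(B-zI)^{-1}\,dz/(2\pi i)$ into $(B-z(\theta)I)^{-1}\,d\theta/(2\pi)$; for such a $B$, $S_0^{(0)}$ coincides with the Moore--Penrose pseudoinverse of $B$.

```python
import numpy as np

# Trapezoidal evaluation of S_0^(0) = (1/(2*pi*i)) \oint z^{-1}(B - zI)^{-1} dz
# on the circle |z| = r.  Illustrative B: symmetric, eigenvalue 0 with kernel
# spanned by (1,1), second eigenvalue 2, so any 0 < r < 2 encloses only 0.
B = np.array([[1.0, -1.0], [-1.0, 1.0]])
r, M = 1.0, 200
theta = 2.0 * np.pi * np.arange(M) / M
S0 = np.zeros((2, 2), dtype=complex)
for t in theta:
    S0 += np.linalg.inv(B - r * np.exp(1j * t) * np.eye(2))
S0 /= M                                  # average of (B - z(theta) I)^{-1}

# For symmetric B with semi-simple eigenvalue 0, the reduced resolvent
# coefficient equals the Moore--Penrose pseudoinverse of B.
assert np.allclose(S0, np.linalg.pinv(B), atol=1e-10)
```

The trapezoidal rule converges spectrally here because the integrand is analytic and periodic on the contour.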
On the other hand, setting
\begin{equation}\label{eq:P01}
P_0^{(1)}:=-P_0^{(0)}AS_0^{(0)}-S_0^{(0)}AP_0^{(0)},
\end{equation}
one defines
\begin{equation}\label{eq:matrix D}
D:=-\bigl(P_0^{(1)}BP_0^{(1)}+P_0^{(0)}AP_0^{(1)}+P_0^{(1)}AP_0^{(0)}\bigr).
\end{equation}
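The formulas \eqref{eq:P01} and \eqref{eq:matrix D} can be assembled explicitly for a small example. The sketch below uses the Goldstein--Kac matrices $A=\textrm{\normalfont diag}\,(1,-1)$ and $B=\mu\bigl(\begin{smallmatrix}1&-1\\-1&1\end{smallmatrix}\bigr)$ (the value $\mu=0.7$ is an arbitrary test choice, not taken from the text) and recovers the expected effective diffusion coefficient $1/(2\mu)$ on $\ker(B)$.

```python
import numpy as np

# Goldstein--Kac matrices: speeds +-1 and switching rate mu (mu = 0.7 is an
# arbitrary test value).
mu = 0.7
A = np.diag([1.0, -1.0])
B = mu * np.array([[1.0, -1.0], [-1.0, 1.0]])

P0 = 0.5 * np.ones((2, 2))             # eigenprojection onto ker(B) = span(1,1)
S0 = np.linalg.pinv(B)                 # reduced resolvent (B symmetric, 0 semi-simple)
P01 = -(P0 @ A @ S0 + S0 @ A @ P0)     # formula (eq:P01)
C = P0 @ A @ P0                        # reduced convection matrix
D = -(P01 @ B @ P01 + P0 @ A @ P01 + P01 @ A @ P0)   # formula (eq:matrix D)

v = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit vector spanning ker(B)
d = v @ (P0 @ D @ P0) @ v                 # eigenvalue of P0 D P0 on ker(B)

assert np.allclose(C, 0.0)                # no drift for this symmetric model
assert np.isclose(d, 1.0 / (2.0 * mu))    # effective diffusion 1/(2 mu)
```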
Then, we consider the Cauchy problem for $U$ in $\textrm{\normalfont ran}\bigl(P_j^{(0)}\bigr)$:
\begin{equation}\label{eq:parabolic system}
\partial_t U+c_j\partial_x U-P_j^{(0)}D \partial_{xx}U=0,\qquad U(0,x)=P_j^{(0)}u_0(x),
\end{equation}
where, for $j=1,\dots,h$, $c_j$ is the $j$-th element of the spectrum of $C$ considered in $\ker(B)$ and $P_j^{(0)}$ is the eigenprojection associated with it; here $h\le m$ is the cardinality of the spectrum of $C$ considered in $\ker(B)$ and $m$ is the algebraic multiplicity of the eigenvalue $0$ of $B$. Thus, one can choose $U:=\sum_{j=1}^{h}U_j$ where $U_j$ is the solution to the system \eqref{eq:parabolic system} for $j=1,\dots,h$.
On the other hand, the coefficients $P_0^{(0)}$ and $S_0^{(0)}$ can be computed by the formula
\begin{equation}
P_0^{(0)}=\mathbb P_{m-1}(B)\qquad \textrm{and}\qquad S_0^{(0)}=\mathbb S_{m-1}(B),
\end{equation}
where the matrix-valued functions $\mathbb P$ and $\mathbb S$ are introduced in \eqref{func:P} and \eqref{func:S} in the appendix section. Moreover, let $\alpha>\max\{|\lambda|:\lambda\in\sigma(C)\}$ and let $C':=C+\alpha P_0^{(0)}$; then $C'$ has $h$ distinct nonzero eigenvalues denoted by $c_j'$ with algebraic multiplicities $m_j\ge 1$ for $j\in\{1,\dots,h\}$. Thus, $P_j^{(0)}$ can be computed by the formula
\begin{equation}
P_j^{(0)}=\mathbb P_{m_j-1}(C'-c_j'I),
\end{equation}
for $j\in\{1,\dots,h\}$. Note that the shift from $C$ to $C'$ is required, since we consider only the eigenvalues of $C$ restricted to $\ker(B)$.
The exponentially decaying waves are constructed as follows. Since $A$ is diagonalizable, let $Q\in M_n(\mathbb R)$ be an invertible matrix diagonalizing $A$. Then, one sets
\begin{equation}\label{eq:A and B bar}
\bar A:=\textrm{\normalfont diag}\,(a_1,\dots,a_n),\qquad \bar B:=Q^{-1}BQ,
\end{equation}
where $a_j\in\mathbb R$ for $j=1,\dots,n$ are the repeated eigenvalues of $A$. Define a partition $\{\mathcal S_j:j=1,\dots,s\}$ of $\{1,\dots,n\}$, for some $s\le n$, such that $h,k\in\mathcal S_j$ if $a_h=a_k$; it is easy to see that $s$ is the cardinality of the spectrum of $\bar A$. On the other hand, we also define the matrix
\begin{equation}\label{eq:the eigenprojection associated with alphaj}
\bigl(\Pi_j^{(0)}\bigr)_{hk}:=\begin{cases} 1 &\textrm{if } h=k\in\mathcal S_j,\\
0 &\textrm{otherwise,} \end{cases}
\end{equation}
for $h,k=1,\dots,n$. Then, we consider the Cauchy problem for $V\in\textrm{\normalfont ran}\bigl(\Pi_j^{(0)}\bigr)$:
\begin{equation}\label{eq:hyperbolic system}
\partial_t V+\alpha_j\partial_xV+\Pi_j^{(0)}\bar B V=0,\qquad V(0,x)=\Pi_j^{(0)}Q^{-1}u_0(x),
\end{equation}
where $\alpha_j=a_h$ if $h\in \mathcal S_j$ for $j=1,\dots,s$. Thus, we can choose $V:=Q\sum_{j=1}^sV_j$ where $V_j$ is the solution to the system \eqref{eq:hyperbolic system} for $j=1,\dots, s$.
\begin{theorem}[$L^p$-$L^q$]\label{theo:standard type}
If $u$ is the solution to the system \eqref{eq:original hyperbolic system} with the initial datum $u_0\in L^q(\mathbb R)$, the conditions {\bf A}, {\bf B}, {\bf C} and {\bf D} imply that, for $1\le q\le p\le \infty$ and $t\ge 1$, there are constants $C:=C(p,q)>0$ and $\delta>0$ such that
\begin{equation}\label{est:ellepq standard type}
\|u-U-V\|_{L^p}\le Ct^{-\frac12\bigl(\frac1q-\frac1p\bigr)-\frac12}\|u_0\|_{L^q},
\end{equation}
where
\begin{equation}\label{est:parabolic solution-hyperbolic solution}
\|U\|_{L^p}\le Ct^{-\frac12\bigl(\frac1q-\frac1p\bigr)}\|u_0\|_{L^q}\qquad \textrm{and}\qquad \|V\|_{L^2}\le Ce^{-\delta t}\|u_0\|_{L^2}.
\end{equation}
\end{theorem}
Going back to the $L^p$-$L^q$ decay estimate in \citep{marcati03}, if the initial condition for the Cauchy problem for \eqref{dampedwave} is given by $(u,\partial_t u)|_{t=0}=(u_0,u_1)$, then the following estimate holds
\begin{equation}\label{mnLpLq}
\bigl\| u-U-e^{-t/2}V \bigr\|_{L^p}\le C\,t^{-\frac12(\frac1q-\frac1p)-1}\bigl\|u_0+u_1\bigr\|_{L^q}, \qquad \forall t\ge 1,
\end{equation}
where respectively, $U$ and $V$ are the solutions to the Cauchy problems
\begin{equation*}
\left\{\begin{aligned}
&\partial_t U-\partial_{xx} U=0,\\
&U(x,0)=u_0(x)+u_1(x),
\end{aligned}\right.
\quad\textrm{and}\quad
\left\{\begin{aligned}
&\partial_{tt}V-\partial_{xx}V=0,\\
&V(x,0)=u_0(x),\quad \partial_tV(x,0)=0.
\end{aligned}\right.
\end{equation*}
Comparing \eqref{est:ellepq standard type} with \eqref{mnLpLq}, we recognize a difference of 1/2 in the decay rates. The better decay, which is valid for the linear damped wave equation, is a consequence of an additional property, namely the invariance with respect to the transformation $x\mapsto -x$. Indeed, in terms of the Goldstein--Kac system, such symmetry implies that the eigenvalue curves of $E(i\xi)=-(B+i\xi A)$ which pass through $0$ can be expanded as $\lambda(i\xi):=-d_0\xi^2+\mathcal O(|\xi|^4)$ for some $d_0>0$ as $|\xi|\to 0$, and the fact that the error terms are $\mathcal O(|\xi|^4)$ guarantees the gain of $1$ instead of $1/2$ in the decay rate, where $1/2$ holds for general cases where the error terms are $\mathcal O(|\xi|^3)$.
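This structure is easy to observe numerically. The brief sketch below (with the Goldstein--Kac matrices and the switching rate normalized to $\mu=1$, an illustrative choice) checks that the swap matrix anticommutes with $A$ and commutes with $B$, the algebraic form of the $x\mapsto -x$ invariance, and that the eigenvalue branch of $E(i\xi)$ through $0$ agrees with $-\xi^2/(2\mu)-\xi^4/(8\mu^3)$ up to $\mathcal O(|\xi|^6)$, i.e.\ contains only even powers.

```python
import numpy as np

# Goldstein--Kac: A = diag(1,-1), B = mu*[[1,-1],[-1,1]]; the swap matrix S
# anticommutes with A and commutes with B (the x -> -x invariance).
mu = 1.0
A = np.diag([1.0, -1.0])
B = mu * np.array([[1.0, -1.0], [-1.0, 1.0]])
S = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.allclose(A @ S, -S @ A) and np.allclose(B @ S, S @ B)

for xi in (0.1, 0.05):
    lam = np.linalg.eigvals(-(B + 1j * xi * A))
    lam0 = lam[np.argmax(lam.real)]        # the branch through 0
    # expansion with even powers only: -xi^2/(2 mu) - xi^4/(8 mu^3) + O(xi^6)
    approx = -xi**2 / (2.0 * mu) - xi**4 / (8.0 * mu**3)
    assert abs(lam0 - approx) < xi**6
```

Here the two eigenvalue curves are $\lambda_\pm(i\xi)=-\mu\pm\sqrt{\mu^2-\xi^2}$, so the branch through $0$ is exactly even in $\xi$.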
Thus, we are also interested in systems fitting in the class \eqref{eq:original hyperbolic system} that have an analogous property, namely
{\bf Condition S.} {\sl [Symmetry]} There is an invertible symmetric matrix $S$ such that
\begin{equation*}
AS=-SA\qquad\textrm{and}\qquad BS=SB.
\end{equation*}
When the above assumption holds, if $u:=u(x,t)$ is a solution to \eqref{eq:original hyperbolic system}, then $v(x,t):=Su(-x,t)$ is a solution to the same system as well.
Let us consider a stronger assumption than the condition {\bf C} on the reduced system, namely
{\it {\bf Condition C'.} {\sl [Reduced strict hyperbolicity]} The matrix $C$ is diagonalizable with $m$ real distinct eigenvalues considered in $\ker(B)$.}
Let $U=\sum_{j=1}^mU_j$ where $U_j$ is the solution to \eqref{eq:parabolic system} with the initial datum given by
\begin{equation}\label{eq:initial datum of parabolic system}
U_j(0,x):=\bigl(P_j^{(0)}+P_j^{(1)}\partial_x\bigr)u_0(x),
\end{equation}
where $P_j^{(0)}$ is already introduced and $P_j^{(1)}$ is as follows for $j\in\{1,\dots,m\}$.
Let $\Gamma_j$ be an oriented closed curve, contained in the resolvent set $\rho(C')$, enclosing the eigenvalue $c_j'$ but none of the other eigenvalues of $C'$, for $j\in\{1,\dots,m\}$. One sets
\begin{equation}
S_j^{(0)}:=\dfrac{1}{2\pi i}\int_{\Gamma_j}z^{-1}(C'-zI)^{-1}\,dz,
\end{equation}
and then, $P_j^{(1)}$ can be computed by
\begin{equation}\label{eq:Pj1}
P_j^{(1)}:=P_j^{(0)}DS_j^{(0)}+S_j^{(0)}DP_j^{(0)},
\end{equation}
for all $j\in\{1,\dots,m\}$. Similarly to before, $S_j^{(0)}$ can be computed by
\begin{equation}
S_j^{(0)}=\mathbb S_{m_j-1}(C'-c_j'I),
\end{equation}
since $c_j'$ is simple for $j\in\{1,\dots,m\}$.
Let $V$ be the same as before, one has
\begin{theorem}[Increased decay rate]\label{theo:standard type symmetry}
With the same hypotheses as in Theorem \ref{theo:standard type}, if the condition {\bf C} is substituted by the condition {\bf C'} and if the condition {\bf S} holds, then, for $1\le q\le p\le \infty$, there is a constant $C:=C(p,q)>0$ such that
\begin{equation}\label{est:ellepq symmetry}
\bigl\|u-U-V\bigr\|_{L^p}\le C\,t^{-\frac12\bigl(\frac1q-\frac1p\bigr)-1}
\bigl\|u_0\bigr\|_{L^q},\qquad\forall t\ge1,
\end{equation}
where
\begin{equation}
\|U\|_{L^p}\le Ct^{-\frac12\bigl(\frac1q-\frac1p\bigr)}\|u_0\|_{L^q}\qquad \textrm{and}\qquad \|V\|_{L^2}\le Ce^{-\delta t}\|u_0\|_{L^2}.
\end{equation}
\end{theorem}
Once the condition {\bf C'} is relaxed to {\bf C}, the decay rate in the estimate \eqref{est:ellepq standard type} does not increase in general, since the condition {\bf S} cannot prevent the eigenvalues of $E$ which converge to $0$ from exhibiting nonzero terms $(i\xi)^{3+\alpha}$, $\alpha\in [0,1)$, in their expansions, and thus it does not permit the gain of $1$ in the decay rate.
The paper is organized as follows. In order to study the behavior of the solution to the system \eqref{eq:original hyperbolic system}, we introduce the asymptotic expansion of the operator $E(i\xi)=-(B+i\xi A)$ in Section \ref{sec:asymptotic expansion}. Then, Section \ref{sec:fundamental solution} and Section \ref{sec:multiplier estimates} are devoted to the {\it a priori} estimates of the solution to the system \eqref{eq:original hyperbolic system}. Moreover, the symmetry property of the system \eqref{eq:original hyperbolic system} is discussed in Section \ref{sec:symmetry}. Then, we prove the main theorems in Section \ref{sec:proofs of main theorems}. Finally, the appendix collects some useful facts from the perturbation theory for linear operators in finite-dimensional spaces, together with a tool for computing the eigenprojections.
\subsection*{Notations and Definitions}
Given a matrix operator $A$, we denote $\ker(A)$, $\textrm{\normalfont ran}(A)$, $\rho(A)$ and $\sigma(A)$ the kernel, the range, the resolvent set and the spectrum of $A$ respectively.
On the other hand, we say that $\lambda\in\mathbb C$ is an eigenvalue of $A$ considered in a domain $\mathcal D$ if there is $u\in \mathcal D$ such that $u\ne O_{n\times 1}$ and $Au=\lambda u$.
For $x\in \mathbb C$ small enough, if $A(x)=A^{(0)}+\mathcal O(|x|)$ and $\lambda(x)\in\sigma(A(x))$ satisfies $\lambda(x)\to \lambda^{(0)}$ as $|x|\to 0$, where $\lambda^{(0)}\in\sigma(A^{(0)})$, then the set of all such eigenvalues of $A(x)$ is called the $\lambda^{(0)}$-group. Moreover, $P$ is called the total projection of a group if $P$ is the sum of the eigenprojections associated with the eigenvalues belonging to that group.
Let $T:\mathbb R\to \mathcal B$ where $\mathcal B$ is a Banach space with some suitable norm $|\cdot|_{\mathcal B}$. Define the $L^p(\mathbb R,\mathcal B)$-norm of $T$ as follows.
\begin{equation*}
\|T\|_{L^p}:=\left(\int_{-\infty}^{+\infty}|T(x)|^p_{\mathcal B}\,dx\right)^{1/p},\qquad 1\le p<\infty,
\end{equation*}
and
\begin{equation*}
\|T\|_{L^\infty}:=\textrm{ess\,sup}_{-\infty<x<+\infty} |T(x)|_{\mathcal B}.
\end{equation*}
From here, we use the notation $|\cdot|$ instead of $|\cdot|_{\mathcal B}$ to indicate the norm associated with $\mathcal B$.
Let $m$ be a tempered distribution; $m$ is called a Fourier multiplier on $L^p$, for $1\le p\le \infty$, if
\begin{equation*}
\sup_{\|f\|_{L^p}=1}\|\mathcal F^{-1}(m)*f\|_{L^p}<+\infty.
\end{equation*}
The $M_p$ space, for $1\le p\le \infty$, is the space of Fourier multipliers endowed with the norm
\begin{equation*}
\|m\|_{M_p}=\sup_{\|f\|_{L^p}=1}\|\mathcal F^{-1}(m)*f\|_{L^p}.
\end{equation*}
\section{Asymptotic expansions}\label{sec:asymptotic expansion}
We study the asymptotic expansions of the eigenvalues of the operator $E(i\xi)=-(B+i\xi A)$ by dividing the frequency domain $\xi\in \mathbb R$ into the low-frequency regime $|\xi |\to 0$, the intermediate-frequency regime where $|\xi|$ stays away from $0$ and $+\infty$, and the high-frequency regime $|\xi|\to +\infty$.
Primarily, we consider the low-frequency case. Since in general the eigenvalues of $E$ converge to the eigenvalues of $-B$ as $|\xi|\to 0$, by the condition {\bf B} the eigenvalues of $E$ are divided into two groups, one of which contains the eigenvalues of $E$ converging to $0$ as $|\xi|\to 0$. Thus, we will study these two groups separately in the low-frequency case. We also recall the matrices $C$ and $D$ in \eqref{eq:reduced system} and \eqref{eq:matrix D} respectively.
\begin{proposition}[Low frequency 1]\label{prop:low frequency 1}
Let $h\in \mathbb Z^+$ be the cardinality of the spectrum of the matrix $C$ considered in $\ker(B)$. If the condition {\bf C} holds, then, for $j\in\{1,\dots,h\}$, there is $h_j\in \mathbb Z^+$, less than or equal to the algebraic multiplicity of the $j$-th eigenvalue of $C$ considered in $\ker(B)$, such that there are $h_j$ groups of the eigenvalues of $E$ and the approximation of the elements of the $\ell$-th group has the form
\begin{equation}\label{eq:expansion of lambdajell}
\lambda_{j\ell}(i\xi)=-ic_j\xi-d_{j\ell} \xi^2+{\scriptstyle\mathcal O}(|\xi|^2),\qquad |\xi|\to 0,
\end{equation}
where $c_j\in\sigma(C)$ considered in $\ker(B)$ and $d_{j\ell}\in\sigma\bigl(P_j^{(0)}DP_j^{(0)}\bigr)$ considered in $\ker(C-c_jI)$ for $\ell=1,\dots,h_j$ with $P_j^{(0)}$ the eigenprojection associated with $c_j$. In particular, if the condition {\bf D} holds, then
\begin{equation}\label{eq:real part of djell}
\textrm{\normalfont Re}\,(d_{j\ell})\ge \theta>0, \qquad \textrm{ for } \ell=1,\dots,h_j \textrm{ and } j=1,\dots, h.
\end{equation}
Moreover, the total projection associated with the $\ell$-th group is then approximated by
\begin{equation}\label{eq:expansion of Pjell}
P_{j\ell}(i\xi)=P_{j\ell}^{(0)}+\mathcal O(|\xi|),\qquad |\xi|\to 0,
\end{equation}
where $P_{j\ell}^{(0)}$ is the eigenprojection associated with $d_{j\ell}$ considered in $\ker(C-c_jI)$ for $\ell\in\{1,\dots,h_j\}$ and $j\in\{1,\dots,h\}$.
\end{proposition}
\begin{proof}
This proof deals with the $0$-group of $E$, {\it i.e.} the group containing the eigenvalues of $E$ converging to $0$ as $|\xi|\to 0$. On the other hand, we can consider $T(\zeta):=B+\zeta A$ where $\zeta=i\xi$ instead of $E$ in order to apply Proposition \ref{prop:subprojections} and Proposition \ref{prop:construction of subprojections} since $E=-T$. The proof then consists of three approximation steps with reduction steps interlacing them.
{\bf Step 0:} It is obvious that the approximation of the elements of the $0$-group of $T$ has the form
\begin{equation*}
\lambda_0(\zeta)={\scriptstyle\mathcal O}(1),\qquad |\zeta|\to 0.
\end{equation*}
On the other hand, by Proposition \ref{prop:construction of subprojections}, the total projection associated with this group is approximated by
\begin{equation*}
P_0(\zeta)=P_0^{(0)}+\mathcal O(|\zeta|),\qquad |\zeta|\to 0,
\end{equation*}
where $P_0^{(0)}$ is the eigenprojection associated with the eigenvalue $0$ of $B$.
In particular, we can perform a more accurate expansion of $P_0$. Indeed, we have
\begin{equation}\label{eq:expansion of P0}
P_0(\zeta)=P_0^{(0)}+\zeta P_0^{(1)}+\mathcal O(|\zeta|^2),\qquad |\zeta|\to 0,
\end{equation}
where $P_0^{(1)}$ can be computed by the formula \eqref{eq:P01}. We prove the formula \eqref{eq:P01} in brief. As $|\zeta|\to 0$, for $z$ in any compact set $\Gamma$ contained in the resolvent set $\rho(B)$ of $B$, we have the uniformly convergent expansion
\begin{equation}\label{eq:expansion of the resolvent of T}
(T(\zeta)-zI)^{-1}= (B-zI)^{-1}-\zeta (B-zI)^{-1}A(B-zI)^{-1}+\mathcal O(|\zeta|^2),\qquad |\zeta|\to 0,
\end{equation}
and we also have the expansion about $0$ of the resolvent
\begin{equation}\label{eq:expansion of the resolvent of B}
(B-zI)^{-1}=-\dfrac{1}{z}P_0^{(0)}-\sum_{h=1}^{+\infty}\dfrac{\bigl(N_0^{(0)}\bigr)^{h}}{z^{h+1}}+\sum_{h=0}^{+\infty}\bigl(S_0^{(0)}\bigr)^{h+1}z^{h},
\end{equation}
where $P_0^{(0)},N_0^{(0)}$ and $S_0^{(0)}$ are the eigenprojection, the nilpotent matrix and the reduced resolvent coefficient associated with the eigenvalue $0$ of $B$ respectively. On the other hand, the formula for $S_0^{(0)}$ is introduced in \eqref{eq:reduced resolvent coefficient}. The expansions \eqref{eq:expansion of the resolvent of T} and \eqref{eq:expansion of the resolvent of B} follow easily from the properties of the resolvent (see \citep{kato}). Moreover, the total projection $P_0$ deduced from Proposition \ref{prop:construction of subprojections} can be written as the Cauchy integral
\begin{equation*}
P_0(\zeta)=-\dfrac{1}{2\pi i}\int_{\Gamma_0}(T(\zeta)-zI)^{-1}\,dz,
\end{equation*}
where $\Gamma_0$ is an oriented closed curve, contained in the resolvent set $\rho(B)$, enclosing $0$ but none of the other eigenvalues of $B$. Since $\Gamma_0$ is a compact subset of $\rho(B)$, one can insert \eqref{eq:expansion of the resolvent of T} and \eqref{eq:expansion of the resolvent of B} into the integral formula for $P_0$, and \eqref{eq:expansion of P0} follows by computing the residue.
{\bf Reduction step:} From Proposition \ref{prop:subprojections} and Proposition \ref{prop:construction of subprojections}, one also has
\begin{equation*}
\mathbb C^n=\textrm{\normalfont ran}(P_0)\oplus \textrm{\normalfont ran}(I-P_0),\qquad T=TP_0+T(I-P_0).
\end{equation*}
Thus, the study of the $0$-group of $T$ considered in $\mathbb C^n$ is reduced to the study of the eigenvalues of $TP_0$ considered in $\textrm{\normalfont ran}(P_0)$.
{\bf Step 1:} Under the condition {\bf B}, the eigenvalue $0$ of $B$ is semi-simple, {\it i.e.} $BP_0^{(0)}=O_{n\times n}$ and $\textrm{\normalfont ran}(P_0^{(0)})=\ker(B)$. Thus, based on the expansion \eqref{eq:expansion of P0} of $P_0$ and the fact that $TP_0=P_0TP_0$, one has
\begin{equation*}
\begin{aligned}
T(\zeta)P_0(\zeta)&=\bigl(P_0^{(0)}+\zeta P_0^{(1)}+\mathcal O(|\zeta|^2)\bigr)(B+\zeta A)\bigl(P_0^{(0)}+\zeta P_0^{(1)}+\mathcal O(|\zeta|^2)\bigr)\\
&=\zeta \bigl( C-\zeta D+\mathcal O(|\zeta|^2)\bigr),\qquad |\zeta|\to 0,
\end{aligned}
\end{equation*}
where $C$ is in \eqref{eq:reduced system} and $D$ is in \eqref{eq:matrix D}.
It follows that $\lambda\in \sigma(TP_0)$ considered in $\textrm{\normalfont ran}(P_0)$ if and only if $\tilde \lambda :=\zeta^{-1}\lambda$ is an eigenvalue of $T_0(\zeta):=C-\zeta D+\mathcal O(|\zeta|^2)$ considered in $\textrm{\normalfont ran}(P_0)$. Therefore, the problem reduces to the eigenvalue problem for $T_0$ considered in the domain $\textrm{\normalfont ran}(P_0)$, and one can apply again Proposition \ref{prop:construction of subprojections}.
Let $c_j$ be the $j$-th element of $\sigma(C)$ considered in $\ker(B)=\textrm{\normalfont ran}(P_0^{(0)})$ for $j\in\{1,\dots,h\}$, then by Proposition \ref{prop:construction of subprojections}, $\tilde \lambda\in\sigma(T_0)$ considered in $\textrm{\normalfont ran}(P_0)$ if and only if $\tilde \lambda \to c_j$ as $|\zeta|\to 0$ for some $j\in\{1,\dots,h\}$. Thus, $\lambda\in\sigma(TP_0)$ considered in $\textrm{\normalfont ran}(P_0)$ if and only if $\zeta^{-1}\lambda\to c_j$ as $|\zeta|\to 0$ for some $j\in\{1,\dots,h\}$. One concludes that the eigenvalues of $TP_0$ considered in $\textrm{\normalfont ran}(P_0)$ are characterized by $c_j$ for $j=1,\dots,h$ and thus they are divided into $h$ groups such that the approximation of the elements of the $j$-th group with respect to $c_j$ has the form
\begin{equation*}
\lambda_j(\zeta)=c_j\zeta+{\scriptstyle\mathcal O}(|\zeta|),\qquad |\zeta|\to 0.
\end{equation*}
On the other hand, by Proposition \ref{prop:construction of subprojections}, the total projection associated with this group is approximated by
\begin{equation}\label{eq:expansion of Pj}
P_j(\zeta)=P_j^{(0)}+\mathcal O(|\zeta|),\qquad |\zeta|\to 0,
\end{equation}
where $P_j^{(0)}$ is the eigenprojection associated with $c_j$ considered in $\ker(B)$ for $j=1,\dots,h$.
{\bf Reduction step:} By Proposition \ref{prop:construction of subprojections}, $T_0$ commutes with $P_j$ for all $j=1,\dots,h$ and one has
\begin{equation*}
\textrm{\normalfont ran}(P_0)=\bigoplus_{j=1}^h\textrm{\normalfont ran}(P_j),\qquad T_0=\sum_{j=1}^h(T_0P_j).
\end{equation*}
The study of the eigenvalues of $T_0$ considered in $\textrm{\normalfont ran}(P_0)$ is then reduced to the study of the eigenvalues of $T_0P_j$ considered in $\textrm{\normalfont ran}(P_j)$ for $j=1,\dots,h$.
{\bf Final step:} Under the condition {\bf C}, for $j\in\{1,\dots,h\}$, the eigenvalue $c_j$ of $C$ is semi-simple, {\it i.e.} $(C-c_jI)P_j^{(0)}=O_{n\times n}$ and $\textrm{\normalfont ran}(P_j^{(0)})=\ker(C-c_jI)$. Thus, based on the expansion \eqref{eq:expansion of Pj} of $P_j$ and the fact that $T_0P_j=P_jT_0P_j$, one has
\begin{equation*}
\begin{aligned}
(T_0(\zeta)-c_jI)P_j(\zeta)&=\bigl(P_j^{(0)}+\mathcal O(|\zeta|)\bigr)\bigl(C-c_jI-\zeta D+\mathcal O(|\zeta|^2)\bigr)\bigl(P_j^{(0)}+\mathcal O(|\zeta|)\bigr)\\
&=\zeta \bigl( -D_j+\mathcal O(|\zeta|)\bigr),\qquad |\zeta|\to 0,
\end{aligned}
\end{equation*}
where $D_j:=P_j^{(0)}DP_j^{(0)}$. It follows that $\lambda\in \sigma(T_0P_j)$ considered in $\textrm{\normalfont ran}(P_j)$ if and only if $\tilde \lambda :=\zeta^{-1}(\lambda-c_j)$ is an eigenvalue of $T_j(\zeta):=-D_j+\mathcal O(|\zeta|)$ considered in $\textrm{\normalfont ran}(P_j)$. Therefore, the problem reduces to the eigenvalue problem for $T_j$ considered in the domain $\textrm{\normalfont ran}(P_j)$, and one can apply again Proposition \ref{prop:construction of subprojections}.
For $j\in\{1,\dots,h\}$, let $h_j$ be the cardinality of the spectrum of $D_j$ considered in $\ker(C-c_jI)=\textrm{\normalfont ran}(P_j^{(0)})$ and let $d_{j\ell}$ be the $\ell$-th element of the spectrum for $\ell=1,\dots,h_j$. Then by Proposition \ref{prop:construction of subprojections}, $\tilde \lambda\in\sigma(T_j)$ considered in $\textrm{\normalfont ran}(P_j)$ if and only if $\tilde \lambda \to -d_{j\ell}$ as $|\zeta|\to 0$ for some $\ell\in\{1,\dots,h_j\}$. Thus, $\lambda\in\sigma(T_0P_j)$ considered in $\textrm{\normalfont ran}(P_j)$ if and only if $\zeta^{-1}(\lambda-c_j)\to -d_{j\ell}$ as $|\zeta|\to 0$ for some $\ell\in\{1,\dots,h_j\}$. One concludes that the eigenvalues of $T_0P_j$ considered in $\textrm{\normalfont ran}(P_j)$ are characterized by $d_{j\ell}$ for $\ell=1,\dots,h_j$ and thus they are divided into $h_j$ groups such that the approximation of the elements of the $\ell$-th group with respect to $d_{j\ell}$ has the form
\begin{equation*}
\lambda_{j\ell}(\zeta)=c_j\zeta-d_{j\ell}\zeta^2+{\scriptstyle\mathcal O}(|\zeta|^2),\qquad |\zeta|\to 0.
\end{equation*}
On the other hand, by Proposition \ref{prop:construction of subprojections}, the total projection associated with this group is approximated by
\begin{equation}
P_{j\ell}(\zeta)=P_{j\ell}^{(0)}+\mathcal O(|\zeta|),\qquad |\zeta|\to 0,
\end{equation}
where $P_{j\ell}^{(0)}$ is the eigenprojection associated with $d_{j\ell}$ considered in $\ker(C-c_jI)$ for $\ell=1,\dots,h_j$.
We then deduce the above approximations for $E(i\xi)=-T(i\xi)$: multiplying $\lambda_{j\ell}(i\xi)$ by $-1$ yields \eqref{eq:expansion of lambdajell}, while \eqref{eq:expansion of Pjell} coincides with $P_{j\ell}(i\xi)$ for each $j\in\{1,\dots,h\}$ and $\ell=1,\dots,h_j$.
Finally, we prove the estimate \eqref{eq:real part of djell}. For $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$, since $\lambda_{j\ell}$ in \eqref{eq:expansion of lambdajell} can be seen as an eigenvalue of $E$ and since $c_j$ is real by the condition {\bf C}, if the condition {\bf D} holds, then for $|\xi|$ small, one has
\begin{equation*}
\textrm{\normalfont Re}\,(\lambda_{j\ell}(i\xi))=-\textrm{\normalfont Re}(d_{j\ell})|\xi|^2+\textrm{\normalfont Re}\,({\scriptstyle\mathcal O}(|\xi|^2))\le -\dfrac{\theta|\xi|^2}{1+|\xi|^2}.
\end{equation*}
Passing to the limit as $|\xi|\to 0$, one obtains the desired estimate. The proof is done.
\end{proof}
\begin{remark}\label{rem:low frequency 1}
As a consequence, for $|\xi|$ small, in $\textrm{\normalfont ran}(P_{j\ell})$, the operator $E$ has the representation
\begin{equation}
E_{j\ell}(i\xi)=(-ic_j \xi-d_{j\ell}\xi^2)I-\xi^2N_{j\ell}^{(0)}+\mathcal O(|\xi|^3),
\end{equation}
where $N_{j\ell}^{(0)}$ is the nilpotent matrix associated with the eigenvalue $d_{j\ell}$ of $P_j^{(0)}DP_j^{(0)}$
considered in $\ker(C-c_jI)$ for $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$.
\end{remark}
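Proposition \ref{prop:low frequency 1} can be tested on a toy example. The sketch below uses the illustrative matrices $A$ symmetric tridiagonal (hence diagonalizable with real eigenvalues) and $B=\textrm{\normalfont diag}\,(0,1,1)$, not taken from the text; here $\ker(B)$ is spanned by $e_1$, the reduced convection gives $c_1=0$, the formulas \eqref{eq:P01} and \eqref{eq:matrix D} give $d_{11}=1$, and the eigenvalue of $E(i\xi)$ in the $0$-group indeed behaves like $-\xi^2$.

```python
import numpy as np

# Illustrative 3x3 example: A symmetric (hyperbolic), B = diag(0,1,1) with
# semi-simple eigenvalue 0 whose kernel is spanned by e_1.
A = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
B = np.diag([0.0, 1.0, 1.0])

P0 = np.diag([1.0, 0.0, 0.0])          # eigenprojection onto ker(B)
S0 = np.diag([0.0, 1.0, 1.0])          # reduced resolvent of B at 0
P01 = -(P0 @ A @ S0 + S0 @ A @ P0)     # formula (eq:P01)
C = P0 @ A @ P0                        # reduced convection: here C = 0, c_1 = 0
D = -(P01 @ B @ P01 + P0 @ A @ P01 + P01 @ A @ P0)   # formula (eq:matrix D)
d = (P0 @ D @ P0)[0, 0]                # eigenvalue d_{11} of P0 D P0 on ker(B)
assert np.allclose(C, 0.0) and np.isclose(d, 1.0)

for xi in (0.05, 0.02):
    lam = np.linalg.eigvals(-(B + 1j * xi * A))
    lam0 = lam[np.argmax(lam.real)]    # the 0-group eigenvalue
    # Proposition: lam0 = -i c_1 xi - d_{11} xi^2 + o(|xi|^2)
    assert abs(lam0 - (-d * xi**2)) < 5.0 * xi**3
```

Note that the Kawashima--Shizuta condition holds here: no eigenvector of this $A$ lies in $\ker(B)$.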
\begin{proposition}[Low frequency 2]\label{prop:low frequency 2}
Let $k\in\mathbb Z^+$ be the number of the nonzero distinct eigenvalues of $B$. If the condition {\bf B} holds, then there are $k$ groups of the eigenvalues of $E$ such that the approximation of the elements of the $j$-th group has the form
\begin{equation}\label{eq:expansion of etaj}
\eta_j(i\xi)=-e_j+{\scriptstyle \mathcal O}(1),\qquad |\xi|\to 0,
\end{equation}
where $e_j\in\sigma(B)$ with $\textrm{\normalfont Re}\,(e_j)>0$ for all $j=1,\dots,k$.
Moreover, the total projection associated with the $j$-th group is then approximated by
\begin{equation}\label{eq:expansion of Fj}
F_j(i\xi)=F_j^{(0)}+\mathcal O\bigl(|\xi|\bigr),\qquad |\xi|\to 0.
\end{equation}
\end{proposition}
\begin{proof}
Similarly to the proof of Proposition \ref{prop:low frequency 1}, we consider the operator $T(\zeta)=B+\zeta A$ where $\zeta=i\xi$. In this case, however, we study the eigenvalues of $T$ that converge to the nonzero eigenvalues of $B$ as $|\zeta|\to 0$. Let $e_j$ be the $j$-th nonzero element of the spectrum of $B$ for $j=1,\dots,k$. Then, by Proposition \ref{prop:construction of subprojections}, any $\eta\in\sigma(T)$ that does not converge to $0$ satisfies $\eta\to e_j$ for some $j\in\{1,\dots,k\}$. Hence, the approximation of these eigenvalues of $T$ is
\begin{equation*}
\eta_j(\zeta)=e_j+{\scriptstyle\mathcal O}(1),\qquad |\zeta|\to 0,
\end{equation*}
and also from Proposition \ref{prop:construction of subprojections}, the total projection associated with this group is approximated by
\begin{equation}
F_j(\zeta)=F_j^{(0)}+\mathcal O(|\zeta|),\qquad |\zeta|\to 0,
\end{equation}
where $F_j^{(0)}$ is the eigenprojection associated with $e_j$ for $j=1,\dots,k$. In particular, $\textrm{\normalfont Re}\,(e_j)>0$ due to the condition {\bf B}.
Finally, since $E(i\xi)=-T(i\xi)$, we obtain \eqref{eq:expansion of etaj} by multiplying $\eta_j(i\xi)$ by $-1$, and \eqref{eq:expansion of Fj} coincides with $F_j(i\xi)$ for all $j=1,\dots,k$.
\end{proof}
\begin{remark}\label{rem:low frequency 2}
As a consequence, for $|\xi|$ small, in $\textrm{\normalfont ran}(F_j)$, the operator $E$ has the representation
\begin{equation}
E_j(i\xi)=-e_jI-M_j^{(0)}+\mathcal O(|\xi|),
\end{equation}
where $M_j^{(0)}$ is the nilpotent matrix associated with the eigenvalue $e_j$ of $B$ for $j\in\{1,\dots,k\}$.
\end{remark}
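Analogously, Proposition \ref{prop:low frequency 2} can be illustrated on the same hypothetical $2\times 2$ example used above (an arbitrary choice, not from the text): here $B$ has the single nonzero eigenvalue $e_1=1$, and the branch of $\sigma(E(i\xi))$ that does not vanish at $\xi=0$ stays close to $-e_1=-1$ for small $|\xi|$.

```python
import numpy as np

# Same hypothetical example: B has eigenvalues {0, 1}, so e_1 = 1.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])

xi = 1e-3
eigs = np.linalg.eigvals(-(B + 1j * xi * A))
# The branch of largest modulus is the one converging to -e_1 = -1.
eta = eigs[np.argmax(np.abs(eigs))]
```

For this example the deviation $|\eta(i\xi)+e_1|$ is of order $\xi^2$, well within the ${\scriptstyle\mathcal O}(1)$ remainder of \eqref{eq:expansion of etaj}.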
The intermediate-frequency case is obtained as follows.
\begin{proposition}[Intermediate frequency]\label{prop:intermediate frequency}
In the compact domain $\varepsilon\le |\xi|\le R$, there is only a finite number of exceptional points at which the eigenprojections and the nilpotent parts associated with the eigenvalues of $E$ may have poles, even though the eigenvalues remain continuous there.
On the other hand, in every simple domain excluding the exceptional points, the operator $E$ has $r$ (independent of $\xi$) distinct holomorphic eigenvalues, denoted by $\nu_j$, of constant algebraic multiplicity, together with the holomorphic eigenprojections and nilpotent parts, denoted by $\Psi_j$ and $\Xi_j$ respectively, associated with them for $j\in\{1,\dots, r\}$.
If the condition {\bf D} holds, then $\textrm{\normalfont Re}\,(\nu)<0$ for any $\nu\in\sigma(E)$ in the domain $\varepsilon\le |\xi|\le R$.
\end{proposition}
\begin{proof}
See \citep{kato}.
\end{proof}
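The behaviour at an exceptional point can be seen on a minimal hypothetical example (again an aside, not part of the text): for $T(\xi)=\begin{pmatrix}0&\xi\\1&0\end{pmatrix}$ the eigenvalues $\pm\sqrt{\xi}$ are continuous at $\xi=0$, while the eigenprojections blow up there, which is exactly the pole behaviour allowed by the proposition.

```python
import numpy as np

def proj_norm(xi):
    """Norm of the eigenprojection P1 = (T - mu2*I)/(mu1 - mu2)
    for T = [[0, xi], [1, 0]], whose eigenvalues are +-sqrt(xi)."""
    T = np.array([[0.0, xi], [1.0, 0.0]])
    mu1, mu2 = np.sqrt(xi), -np.sqrt(xi)
    P1 = (T - mu2 * np.eye(2)) / (mu1 - mu2)
    return np.linalg.norm(P1)

# The eigenvalues +-sqrt(xi) tend to 0 continuously as xi -> 0+, but
# the eigenprojection norm grows like xi**(-1/2) near xi = 0.
n_far, n_near = proj_norm(1e-2), proj_norm(1e-6)
```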
For the high frequency, in order to analyze the eigenvalues of $E(i\xi)=-(B+i\xi A)$, one can analyze the eigenvalues of the operator $\bar E(i\xi):=Q^{-1}E(i\xi)Q=(-i\xi)(\bar A+(i\xi)^{-1} \bar B)$ where $\bar A$ and $\bar B$ are already introduced in \eqref{eq:A and B bar}.
\begin{proposition}[High frequency]\label{prop:high frequency}
Let $s\in \mathbb Z^+$ be the cardinality of the spectrum of the matrix $\bar A$. If the condition {\bf A} holds, then, for $j\in\{1,\dots,s\}$, there is $s_j\in \mathbb Z^+$, less than or equal to the algebraic multiplicity of the $j$-th eigenvalue of $\bar A$, such that there are $s_j$ groups of the eigenvalues of $\bar E$, and the approximation of the elements of the $\ell$-th group has the form
\begin{equation}\label{eq:expansion of mujell}
\mu_{j\ell}(i\xi)=-i\alpha_j\xi-\beta_{j\ell}+{\scriptstyle\mathcal O}(|\xi|^{-1}),\qquad |\xi|\to +\infty,
\end{equation}
where $\alpha_j\in\sigma(\bar A)$ considered in $\mathbb C^n$ and $\beta_{j\ell}\in\sigma\bigl(\Pi_j^{(0)}\bar B\Pi_j^{(0)}\bigr)$ considered in $\ker(\bar A-\alpha_jI)$ for $\ell=1,\dots,s_j$, where $\Pi_j^{(0)}$, defined in \eqref{eq:the eigenprojection associated with alphaj}, is the eigenprojection associated with $\alpha_j$. In particular, if the condition {\bf D} holds, then
\begin{equation}\label{eq:real part of betajell}
\textrm{\normalfont Re}\,(\beta_{j\ell})\ge \theta>0, \qquad \textrm{ for } \ell=1,\dots,s_j \textrm{ and } j=1,\dots, s.
\end{equation}
Moreover, the total projection associated with the $\ell$-th group is then approximated by
\begin{equation}\label{eq:expansion of Pijell}
\Pi_{j\ell}(i\xi)=\Pi_{j\ell}^{(0)}+\mathcal O(|\xi|^{-1}),\qquad |\xi|\to +\infty,
\end{equation}
where $\Pi_{j\ell}^{(0)}$ is the eigenprojection associated with $\beta_{j\ell}$ considered in $\ker(\bar A-\alpha_jI)$ for $\ell\in\{1,\dots,s_j\}$ and $j\in\{1,\dots,s\}$.
\end{proposition}
\begin{proof}
Similarly to before, we first consider $T(\zeta):=\bar A+\zeta \bar B$ where $\zeta=(i\xi)^{-1}$ in order to apply Proposition \ref{prop:subprojections} and Proposition \ref{prop:construction of subprojections}, since $|\zeta|\to 0$ as $|\xi|\to +\infty$. The proof then consists of two steps of approximation with one reduction step between them.
{\bf First step:} The eigenvalues of $T$ are divided into several groups characterized by $\alpha_j\in\sigma(\bar A)$ for $j=1,\dots,s$. Moreover, for $j\in\{1,\dots,s\}$, the approximation for the elements of the $\alpha_j$-group is
\begin{equation*}
\mu_j(\zeta)=\alpha_j+{\scriptstyle\mathcal O}(1),\qquad |\zeta|\to 0,
\end{equation*}
and the total projection associated with this group is then approximated by
\begin{equation}\label{eq:expansion of Pij}
\Pi_j(\zeta)=\Pi_j^{(0)}+\mathcal O(|\zeta|),\qquad |\zeta|\to 0,
\end{equation}
where $\Pi_j^{(0)}$ is the eigenprojection associated with the eigenvalue $\alpha_j$ of $\bar A$. In particular, $\Pi_j^{(0)}$ is exactly the same as \eqref{eq:the eigenprojection associated with alphaj} since the eigenprojection $\Pi_j^{(0)}$ can be computed explicitly by the Cauchy integral
\begin{equation*}
\Pi_j^{(0)}=-\dfrac{1}{2\pi i}\int_{\Gamma_j}(\bar A-zI)^{-1}\,dz=-\dfrac{1}{2\pi i}\int_{\Gamma_j}\textrm{\normalfont diag}\,(a_1-z,\dots,a_n-z)^{-1}\,dz,
\end{equation*}
where $\Gamma_j$ is an oriented closed curve in the resolvent set $\rho(\bar A)$ enclosing $\alpha_j$ but none of the other eigenvalues of $\bar A$. Hence, one obtains that
\begin{equation*}
(\Pi_j^{(0)})_{hk}=\begin{cases} 1&\textrm{if } h=k,a_h=\alpha_j,\\
0&\textrm{otherwise},
\end{cases}=\begin{cases} 1 &\textrm{if } h=k\in\mathcal S_j,\\
0&\textrm{otherwise}, \end{cases}
\end{equation*}
for all $h,k=1,\dots,n$, due to the definition of $\mathcal S_j$, the $j$-th element of the partition $\{\mathcal S_j:j=1,\dots,s\}$ of $\{1,\dots,n\}$.
{\bf Reduction step:} By Proposition \ref{prop:construction of subprojections}, $T$ commutes with $\Pi_j$ for all $j=1,\dots,s$ and one has
\begin{equation*}
\mathbb C^n=\bigoplus_{j=1}^s\textrm{\normalfont ran}(\Pi_j),\qquad T=\sum_{j=1}^s(T\Pi_j).
\end{equation*}
It implies that the study of the eigenvalues of $T$ considered in $\mathbb C^n$ is reduced to the study of the eigenvalues of $T\Pi_j$ considered in $\textrm{\normalfont ran}(\Pi_j)$ for $j=1,\dots,s$.
{\bf Final step:} Under the condition {\bf A}, the eigenvalues of $\bar A$ are semi-simple, {\it i.e.}, $\bar A\Pi_j^{(0)}=\alpha_j \Pi_j^{(0)}$ and $\textrm{\normalfont ran}(\Pi_j^{(0)})=\ker(\bar A-\alpha_jI)$ for all $j=1,\dots,s$. Thus, based on the expansion \eqref{eq:expansion of Pij} of $\Pi_j$ and the fact that $T\Pi_j=\Pi_jT\Pi_j$, one has
\begin{equation*}
\begin{aligned}
(T(\zeta)-\alpha_jI)\Pi_j(\zeta)&=\bigl(\Pi_j^{(0)}+\mathcal O(|\zeta|)\bigr)(\bar A-\alpha_jI+\zeta \bar B)\bigl(\Pi_j^{(0)}+\mathcal O(|\zeta|)\bigr)\\
&=\zeta \bigl( \Pi_j^{(0)}\bar B\Pi_j^{(0)}+\mathcal O(|\zeta|)\bigr),\qquad |\zeta|\to 0,
\end{aligned}
\end{equation*}
for $j=1,\dots,s$. It follows that $\mu\in \sigma(T\Pi_j)$ considered in $\textrm{\normalfont ran}(\Pi_j)$ if and only if $\tilde \mu :=\zeta^{-1}(\mu-\alpha_j)$ is an eigenvalue of $T_j(\zeta):=\Pi_j^{(0)}\bar B\Pi_j^{(0)}+\mathcal O(|\zeta|)$ considered in $\textrm{\normalfont ran}(\Pi_j)$ for $j=1,\dots,s$. Therefore, the problem reduces to the eigenvalue problem of $T_j$ considered in the domain $\textrm{\normalfont ran}(\Pi_j)$ for $j=1,\dots,s$, and one can again apply Proposition \ref{prop:construction of subprojections}.
For $j\in\{1,\dots,s\}$, let $s_j$ be the cardinality of the spectrum of $\Pi_j^{(0)}\bar B\Pi_j^{(0)}$ considered in $\ker(\bar A-\alpha_jI)=\textrm{\normalfont ran}(\Pi_j^{(0)})$ and let $\beta_{j\ell}$ be the $\ell$-th element of the spectrum for $\ell=1,\dots,s_j$. Then, by Proposition \ref{prop:construction of subprojections}, $\tilde\mu\in\sigma(T_j)$ considered in $\textrm{\normalfont ran}(\Pi_j)$ if and only if $\tilde\mu\to \beta_{j\ell}$ as $|\zeta|\to 0$ for some $\ell\in\{1,\dots,s_j\}$. Thus, $\mu\in \sigma(T\Pi_j)$ considered in $\textrm{\normalfont ran}(\Pi_j)$ if and only if $\zeta^{-1}(\mu-\alpha_j)\to \beta_{j\ell}$ as $|\zeta|\to 0$ for some $\ell\in\{1,\dots,s_j\}$. It implies that the eigenvalues of $T\Pi_j$ considered in $\textrm{\normalfont ran}(\Pi_j)$ are characterized by $\beta_{j\ell}$ such that the approximation of the elements of the $\ell$-th group with respect to $\beta_{j\ell}$ is
\begin{equation*}
\mu_{j\ell}(\zeta)=\alpha_j+\beta_{j\ell}\zeta+{\scriptstyle\mathcal O}(|\zeta|),\qquad |\zeta|\to 0,
\end{equation*}
and also by Proposition \ref{prop:construction of subprojections} that the total projection associated with this group is approximated by
\begin{equation*}
\Pi_{j\ell}(\zeta)=\Pi_{j\ell}^{(0)}+\mathcal O(|\zeta|),\qquad |\zeta|\to 0,
\end{equation*}
where $\Pi_{j\ell}^{(0)}$ is the eigenprojection associated with $\beta_{j\ell}$ considered in $\ker(\bar A-\alpha_jI)$ for $\ell=1,\dots,s_j$.
We then deduce the corresponding approximations for $\bar E(i\xi)=(-i\xi)T\bigl((i\xi)^{-1}\bigr)$ from the above steps: multiplying $\mu_{j\ell}\bigl((i\xi)^{-1}\bigr)$ by $(-i\xi)$ yields \eqref{eq:expansion of mujell}, and \eqref{eq:expansion of Pijell} coincides with $\Pi_{j\ell}\bigl((i\xi)^{-1}\bigr)$ for each $j\in\{1,\dots,s\}$ and $\ell=1,\dots,s_j$.
Finally, we prove the estimate \eqref{eq:real part of betajell}. For $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$, since $\mu_{j\ell}$ in \eqref{eq:expansion of mujell} can be seen as an eigenvalue of $\bar E$ and thus of $E=Q\bar EQ^{-1}$ and since $\alpha_j$ is real by the condition {\bf A}, if the condition {\bf D} holds, then for $|\xi|$ large, one has
\begin{equation*}
\textrm{\normalfont Re}\,(\mu_{j\ell}(i\xi))=-\textrm{\normalfont Re}(\beta_{j\ell})+\textrm{\normalfont Re}\,({\scriptstyle\mathcal O}(1))\le -\dfrac{\theta|\xi|^2}{1+|\xi|^2}.
\end{equation*}
Passing to the limit as $|\xi|\to +\infty$, one obtains the desired estimate. This completes the proof.
\end{proof}
\begin{remark}\label{rem:high frequency}
As a consequence, for $|\xi|$ large, in $\textrm{\normalfont ran}(\Pi_{j\ell})$, the operator $E$ has the representation
\begin{equation}
E_{j\ell}(i\xi)=(-i\alpha_j \xi-\beta_{j\ell})I-\Theta_{j\ell}^{(0)}+\mathcal O(|\xi|^{-1}),
\end{equation}
where $\Theta_{j\ell}^{(0)}$ is the nilpotent matrix associated with the eigenvalue $\beta_{j\ell}$ of $\Pi_j^{(0)}\bar B\Pi_j^{(0)}$ considered in $\ker(\bar A-\alpha_jI)$ for $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$.
\end{remark}
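The high-frequency expansion can likewise be checked numerically. In the following illustrative sketch (a hypothetical diagonal $\bar A$ and symmetric $\bar B$, not taken from the text), each $\alpha_j$ is simple, so $\beta_j$ reduces to the diagonal entry $\bar B_{jj}$, and the real parts of the eigenvalues of $-(\bar B+i\xi\bar A)$ approach $-1$ and $-2$ as $|\xi|\to+\infty$, as predicted by \eqref{eq:expansion of mujell}.

```python
import numpy as np

# Hypothetical data: Abar diagonal with simple eigenvalues, Bbar symmetric.
Abar = np.diag([1.0, -1.0])
Bbar = np.array([[1.0, 0.3], [0.3, 2.0]])

xi = 1e2
eigs = np.linalg.eigvals(-(Bbar + 1j * xi * Abar))

# Since each alpha_j is simple, beta_j = Bbar[j, j]; the real parts should
# approach -1 and -2 up to a small correction as xi grows.
real_parts = np.sort(eigs.real)
```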
\section{Fundamental solution}\label{sec:fundamental solution}
The aim of this section is to establish estimates for the fundamental solution to \eqref{eq:original hyperbolic system} in the frequency space. Let us consider the fundamental system
\begin{equation}\label{eq:fundamental system}
\partial_t\hat G-E\hat G=0,\qquad \hat G|_{t=0}=I,
\end{equation}
where $E=E(i\xi)=-(B+i\xi A)$ with $\xi\in\mathbb R$.
One sets the following kernel
\begin{equation}\label{eq:parabolic kernel}
\hat K(\xi,t):=\sum_{j,\ell=1}^{h,h_j}e^{(-ic_j\xi-d_{j\ell}\xi^2)t}e^{-N_{j\ell}^{(0)}\xi^2t}P_{j\ell}^{(0)},
\end{equation}
and the kernel
\begin{equation}\label{eq:exponential decay kernel}
\hat V(\xi,t):=Q\sum_{j,\ell=1}^{s,s_j}e^{(-i\alpha_j\xi-\beta_{j\ell})t}e^{-\Theta_{j\ell}^{(0)}t}\Pi_{j\ell}^{(0)}Q^{-1},
\end{equation}
where the coefficients are introduced in the previous section.
Moreover, we state two lemmas that will be used throughout this section.
\begin{lemma}\label{lem:nilpotent}
If $X$ is a constant complex nilpotent matrix, then for all $\varepsilon'>0$, there exists $C=C(\varepsilon')>0$ such that
\begin{equation*}
\bigl| e^{c X+Y}-e^{cX}\bigr|\le Ce^{\varepsilon' |c|+C|Y|}|Y|
\end{equation*}
and
\begin{equation*}
\bigl| e^{c X+Y}-e^{cX}-e^{cX}Y\bigr|\le Ce^{\varepsilon' |c|+C|Y|}|Y|^2
\end{equation*}
for every complex constant $c:=c(t)$ and matrix $Y:=Y(t)$ for $t>0$.
\end{lemma}
\begin{proof}
The proof is based on the existence of a basis of $\mathbb C^n$ in which $|X|\le \varepsilon'$ for any fixed $\varepsilon'>0$; the constant $C(\varepsilon')$ can then be chosen as the product of the norm of the change-of-basis matrix and the norm of its inverse, for any matrix norm. The second inequality follows from the fact that the first-order derivative $\textrm{\normalfont d}_{\exp}$ at $X$ of the map $X\to e^X$ is $e^X$, and thus one has
\begin{equation*}
\bigl| e^{X+Y}-e^X-e^XY\bigr|\le C|Y|^2\sup_{s\in[0,1]}\left|\textrm{\normalfont d}_{\exp}^2(X+sY)\right|\le C|Y|^2e^{|X|+|Y|}
\end{equation*}
where $\textrm{\normalfont d}_{\exp}^2$ is the second-order derivative of $X\to e^{X}$. Thus, under a change of basis, one obtains the desired estimate. One can find a detailed proof in \citep{bianchini07}.
\end{proof}
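The change-of-basis trick underlying the proof is elementary but instructive: conjugating a nilpotent matrix by $\mathrm{diag}(1,\varepsilon')$ scales its norm down to $\varepsilon'$. A minimal sketch on a hypothetical $2\times 2$ Jordan block:

```python
import numpy as np

# Jordan block: X is nilpotent with spectral norm 1 in the standard basis.
X = np.array([[0.0, 1.0], [0.0, 0.0]])

eps = 1e-3
D = np.diag([1.0, eps])               # change of basis
X_small = np.linalg.inv(D) @ X @ D    # = [[0, eps], [0, 0]]

norm_before = np.linalg.norm(X, 2)
norm_after = np.linalg.norm(X_small, 2)
```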
\begin{lemma}\label{lem:main estimates}
For $0<\varepsilon<R<+\infty$, if $a,b\ge 0$, $c,d>0$ and $r\in[1,\infty]$, then there exists $C:=C(r)>0$ such that, for all $t\ge 1$, one has
\begin{equation}
\bigl\| |\cdot|^at^be^{-c|\cdot|^{d}t}\bigr\|_{L^{r}}\le Ct^{-\frac{1}{d}\frac{1}{r}+b-\frac{a}{d}}.
\end{equation}
\end{lemma}
\begin{proof}
By a change of variables.
\end{proof}
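The change of variables is simply $\xi\mapsto t^{-1/d}\xi$. As an illustrative check (with hypothetical parameters $a=1$, $b=0$, $c=1$, $d=2$, $r=2$), the norm should scale like $t^{-\frac1d\frac1r+b-\frac ad}=t^{-3/4}$:

```python
import numpy as np

def lr_norm(t, a=1.0, c=1.0, d=2.0, r=2.0):
    """L^r norm in xi of |xi|**a * exp(-c*|xi|**d * t), by a Riemann sum."""
    xi = np.linspace(-10.0, 10.0, 200001)
    f = np.abs(xi)**a * np.exp(-c * np.abs(xi)**d * t)
    return (np.sum(f**r) * (xi[1] - xi[0]))**(1.0 / r)

# Lemma: norm(t) <= C * t**(-1/(d*r) + b - a/d) = C * t**(-3/4) here (b = 0),
# so the ratio norm(4)/norm(1) should equal 4**(-3/4).
ratio = lr_norm(4.0) / lr_norm(1.0)
predicted = 4.0**(-0.75)
```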
\begin{proposition}[Fundamental solution estimates]\label{prop:fundamental solution standard type}
For $0<\varepsilon<R<+\infty$, $r\in[1,\infty]$ and $t\ge 1$, there exist positive constants $C:=C(r)$ and $\delta$ such that if the conditions {\bf A}, {\bf B}, {\bf C} and {\bf D} are satisfied, then the following hold.
\noindent
1. For $|\xi|<\varepsilon$, one has
\begin{equation}\label{est:low}
\bigl\| \hat G-\hat K\bigr\|_{L^{r}} \le Ct^{-\frac12\frac{1}{r}-\frac12},\qquad \bigl\|\hat V\bigr\|_{L^{r}}\le Ce^{-\delta t}.
\end{equation}
\noindent
2. For $\varepsilon\le |\xi|\le R$, one has
\begin{equation}\label{est:intermediate}
\bigl\|\hat G\bigr\|_{L^{r}},\bigl\|\hat K\bigr\|_{L^{r}},\bigl\|\hat V\bigr\|_{L^{r}}\le Ce^{-\delta t}.
\end{equation}
\noindent
3. For $|\xi|>R$, one has
\begin{equation}\label{est:high 1}
\bigl\|\hat G-\hat V\bigr\|_{L^{r}}\le Ce^{-\delta t} \quad \textrm{for} \quad r>1, \qquad \bigl\|\hat K\bigr\|_{L^{r}}\le Ce^{-\delta t}.\end{equation}
Moreover, we also have
\begin{equation}\label{est:high 2}
\bigl\| \mathcal F^{-1}(\hat G-\hat V)\bigr\|_{L^\infty}\le Ce^{-\delta t}.
\end{equation}
\end{proposition}
\begin{proof}
For $|\xi|<\varepsilon$, by Remark \ref{rem:low frequency 1} and Remark \ref{rem:low frequency 2}, the solution $\hat G$ to the system \eqref{eq:fundamental system} is given by $\hat G=\hat G_1+\hat G_2$ where
\begin{equation}\label{eq:G1 small}
\hat G_1(\xi,t):=\sum_{j,\ell=1}^{h,h_j}e^{(-ic_j \xi-d_{j\ell}\xi^2)t}e^{-N_{j\ell}^{(0)}\xi^2t+\mathcal O(|\xi|^3)t}\bigl(P_{j\ell}^{(0)}+\mathcal O(|\xi|)\bigr),
\end{equation}
and
\begin{equation}\label{eq:G2 small}
\hat G_2(\xi,t):=\sum_{j=1}^ke^{-e_jt}e^{-M_j^{(0)}t+\mathcal O(|\xi|)t}\bigl(F_j^{(0)}+\mathcal O(|\xi|)\bigr).
\end{equation}
It follows that $\hat G-\hat K=\hat G_1-\hat K+\hat G_2=I_1+I_2+J$ where $J=\hat G_2$ and
\begin{equation}\label{eq:I_1 small}
I_1:=\sum_{j,\ell=1}^{h,h_j}e^{(-ic_j \xi-d_{j\ell}\xi^2)t}\left(e^{-N_{j\ell}^{(0)}\xi^2t+\mathcal O(|\xi|^3)t}-e^{-N_{j\ell}^{(0)}\xi^2t}\right)P_{j\ell}^{(0)}
\end{equation}
and
\begin{equation}\label{eq:I_2 small}
I_2:=\sum_{j,\ell=1}^{h,h_j}e^{(-ic_j \xi-d_{j\ell}\xi^2)t}e^{-N_{j\ell}^{(0)}\xi^2t+\mathcal O(|\xi|^3)t}\mathcal O(|\xi|).
\end{equation}
Firstly, we estimate $I_1$ for $|\xi|<\varepsilon$ small enough by taking the matrix norm on both sides of \eqref{eq:I_1 small}. Since $c_j\in\mathbb R$ for all $j\in\{1,\dots,h\}$, one has
\begin{equation*}
\bigl| I_1\bigr|\le C\sum_{j,\ell=1}^{h,h_j}e^{-\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}\left|e^{-N_{j\ell}^{(0)}\xi^2t+\mathcal O(|\xi|^3)t}-e^{-N_{j\ell}^{(0)}\xi^2t}\right|.
\end{equation*}
On the other hand, from Proposition \ref{prop:low frequency 1}, $\textrm{\normalfont Re}\,(d_{j\ell})\ge \theta>0$ and $N_{j\ell}^{(0)}$ is a nilpotent matrix for all $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$. Thus, by choosing $\varepsilon'=\frac14\textrm{\normalfont Re}\,(d_{j\ell})$ for each $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$, from Lemma \ref{lem:nilpotent}, we have
\begin{equation*}
\begin{aligned}
\bigl| I_1\bigr|&\le C\sum_{j,\ell=1}^{h,h_j}e^{-\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}e^{\frac14\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t+C|\xi|^3t}|\xi|^3t\\
&\le C\sum_{j,\ell=1}^{h,h_j}e^{-\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}e^{\frac14\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t+C\varepsilon |\xi|^2t}|\xi|^3t\le Ce^{-\frac12\theta|\xi|^2t}|\xi|^3t.
\end{aligned}
\end{equation*}
Hence, by applying Lemma \ref{lem:main estimates}, we have
\begin{equation}\label{est:I_1 small}
\|I_1\|_{L^{r}}\le Ct^{-\frac12\frac{1}{r}-\frac12},\quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1.
\end{equation}
Similarly, we also have the estimate for $I_2$ with $|\xi|<\varepsilon$ small enough. Indeed, from \eqref{eq:I_2 small}, one has
\begin{equation*}
\begin{aligned}
|I_2|&\le C\sum_{j,\ell=1}^{h,h_j}e^{-\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}e^{\bigl|N_{j\ell}^{(0)}\bigr||\xi|^2t+C|\xi|^3t}|\xi|\\
&\le C\sum_{j,\ell=1}^{h,h_j}e^{-\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}e^{C\varepsilon|\xi|^2t}|\xi|\le Ce^{-\frac12\theta |\xi|^2t}|\xi|
\end{aligned}
\end{equation*}
since one can assume that $\bigl|N_{j\ell}^{(0)}\bigr|$ is small for all $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$ based on the fact that they are nilpotent matrices. Hence, by applying Lemma \ref{lem:main estimates}, we have
\begin{equation}\label{est:I_2 small}
\|I_2\|_{L^{r}}\le Ct^{-\frac12\frac{1}{r}-\frac12},\quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1.
\end{equation}
We now estimate $J$. From \eqref{eq:G2 small}, we have
\begin{equation*}
|J|\le C\sum_{j=1}^ke^{-\textrm{\normalfont Re}\,(e_j)t}e^{\bigl| M_j^{(0)}\bigr|t+C|\xi|t}(1+|\xi|).
\end{equation*}
Then, by Proposition \ref{prop:low frequency 2}, $\textrm{\normalfont Re}\,(e_j)>0$ and $M_j^{(0)}$ is a nilpotent matrix for $j\in\{1,\dots,k\}$, so we can assume that $\bigl| M_j^{(0)}\bigr|$ is small; thus, since $|\xi|<\varepsilon$ is small enough, we obtain
\begin{equation}\label{est:J small}
\|J\|_{L^r}\le C\sum_{j=1}^ke^{-\textrm{\normalfont Re}\,(e_j)t}e^{C\varepsilon t}\bigl(1+\varepsilon\bigr)\le Ce^{-\delta t}
\end{equation}
for some $\delta >0$.
Therefore, from \eqref{est:I_1 small}, \eqref{est:I_2 small} and \eqref{est:J small}, one obtains for $|\xi|<\varepsilon$ that
\begin{equation*}
\bigl\|\hat G-\hat K\bigr\|_{L^{r}}\le \|I_1\|_{L^{r}}+\|I_2\|_{L^{r}}+\|J\|_{L^{r}}\le Ct^{-\frac12\frac{1}{r}-\frac12},\quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1.
\end{equation*}
We now estimate $\hat V$ in \eqref{eq:exponential decay kernel} for $|\xi|<\varepsilon$. Since $\alpha_j\in\mathbb R$ for all $j\in\{1,\dots,s\}$, one has
\begin{equation*}
\bigl|\hat V(\xi,t)\bigr|\le C\sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\bigl|\Theta_{j\ell}^{(0)}\bigr|t}.
\end{equation*}
Thus, by Proposition \ref{prop:high frequency}, since $\textrm{\normalfont Re}\,(\beta_{j\ell})\ge \theta>0$ and $\Theta_{j\ell}^{(0)}$ is a nilpotent matrix for all $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$, one obtains
\begin{equation*}
\bigl\|\hat V\bigr\|_{L^{r}}\le Ce^{-\frac12\theta t},\quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1,
\end{equation*}
since we can assume that $\bigl|\Theta_{j\ell}^{(0)}\bigr|$ is small for all $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$ similarly to before.
In the compact domain $\varepsilon \le |\xi|\le R$, there are exceptional points where the eigenprojections and the nilpotent parts associated with the eigenvalues of $E(i\xi)=-(B+i\xi A)$ may not be defined, even though the eigenvalues remain continuous there. However, as stated in Proposition \ref{prop:intermediate frequency}, the number of these exceptional points in $\varepsilon\le |\xi|\le R$ is always finite; hence, upon integrating, for $\hat G=e^{Et}$ and for some $\delta>0$, the condition {\bf D} still yields
\begin{equation*}
\bigl\|\hat G\bigr\|_{L^{r}}\le \left\| e^{-\frac{\theta|\cdot|^2}{1+|\cdot|^2}t}\right\|_{L^{r}}\le Ce^{-\delta t},\quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1.
\end{equation*}
For $\hat K$ in \eqref{eq:parabolic kernel}, similarly to the small frequency, for $\varepsilon\le |\xi|\le R$, one has
\begin{equation}\label{est:Kintermediate}
\bigl|\hat K(\xi,t)\bigr|\le C\sum_{j,\ell=1}^{h,h_j}e^{-\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}e^{\bigl|N_{j\ell}^{(0)}\bigr||\xi|^2t}\le Ce^{-\frac12\theta \varepsilon^2t}
\end{equation}
since $\textrm{\normalfont Re}\,(d_{j\ell})\ge \theta$ and $N_{j\ell}^{(0)}$ is a nilpotent matrix for all $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$. Thus, for $\varepsilon\le |\xi|\le R$, one obtains
\begin{equation*}
\bigl\|\hat K\bigr\|_{L^{r}}\le Ce^{-\frac12\theta\varepsilon^2t},\quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1.
\end{equation*}
For $\hat V$ in \eqref{eq:exponential decay kernel}, similarly to the small frequency, for $\varepsilon\le |\xi|\le R$, one has
\begin{equation}\label{est:Vintermediate}
\bigl|\hat V(\xi,t)\bigr|\le C\sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\bigl|\Theta_{j\ell}^{(0)}\bigr|t}\le Ce^{-\frac12\theta t}
\end{equation}
since $\textrm{\normalfont Re}\,(\beta_{j\ell})\ge \theta$ and $\Theta_{j\ell}^{(0)}$ is a nilpotent matrix for all $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$. Hence, for $\varepsilon\le |\xi|\le R$, we have
\begin{equation*}
\bigl\|\hat V\bigr\|_{L^{r}}\le Ce^{-\frac12\theta t},\quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1.
\end{equation*}
Finally, we study the case $|\xi|>R$. By Remark \ref{rem:high frequency}, the solution $\hat G$ to the system \eqref{eq:fundamental system} is given by
\begin{equation}\label{eq:G large}
\hat G(\xi,t):=\sum_{j,\ell=1}^{s,s_j}e^{(-i\alpha_j\xi-\beta_{j\ell})t}e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}\bigl(\Pi_{j\ell}^{(0)}+\mathcal O(|\xi|^{-1})\bigr).
\end{equation}
Then, we have $\hat G-\hat V=I+J$ where
\begin{equation}\label{eq:I large}
I:=\sum_{j,\ell=1}^{s,s_j}e^{(-i\alpha_j\xi-\beta_{j\ell})t}\left( e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}-e^{-\Theta_{j\ell}^{(0)}t}\right)\Pi_{j\ell}^{(0)}
\end{equation}
and
\begin{equation}\label{eq:J large}
J:=\sum_{j,\ell=1}^{s,s_j}e^{(-i\alpha_j\xi-\beta_{j\ell})t}e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}\mathcal O(|\xi|^{-1}).
\end{equation}
We first estimate $I$ and then $J$. Since $\alpha_j\in\mathbb R$ for all $j\in\{1,\dots,s\}$, we have
\begin{equation*}
|I|\le C\sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}\left| e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}-e^{-\Theta_{j\ell}^{(0)}t}\right|.
\end{equation*}
On the other hand, from Proposition \ref{prop:high frequency}, $\textrm{\normalfont Re}\,(\beta_{j\ell})\ge \theta>0$ and $\Theta_{j\ell}^{(0)}$ is a nilpotent matrix for all $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$. Choosing $\varepsilon'=\frac14\textrm{\normalfont Re}\,(\beta_{j\ell})$ and applying Lemma \ref{lem:nilpotent}, for $|\xi|>R$ large enough, we obtain
\begin{equation*}
\begin{aligned}
|I|&\le C\sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\frac14\textrm{\normalfont Re}\,(\beta_{j\ell})t+C|\xi|^{-1}t}|\xi|^{-1}t\\
&\le C\sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\frac14\textrm{\normalfont Re}\,(\beta_{j\ell})t+CR^{-1}t}|\xi|^{-1}t\le Ce^{-\frac12\theta t}|\xi|^{-1}t.
\end{aligned}
\end{equation*}
Thus, for $|\xi|>R$ large enough, one has
\begin{equation}\label{est:I large}
\|I\|_{L^{r}}\le Ce^{-\frac14\theta t}, \quad \textrm{for } r\in (1,\infty] \textrm{ and } t\ge 1.
\end{equation}
Similarly, we estimate $J$ for $|\xi|>R$ large enough. From \eqref{eq:J large}, one has
\begin{equation*}
\begin{aligned}
|J|&\le \sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\bigl|\Theta_{j\ell}^{(0)}\bigr|t+C|\xi|^{-1}t}|\xi|^{-1}\\
&\le \sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{CR^{-1}t}|\xi|^{-1}\le Ce^{-\frac12\theta t}|\xi|^{-1}
\end{aligned}
\end{equation*}
since one can assume that $\bigl|\Theta_{j\ell}^{(0)}\bigr|$ is small for all $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$. Thus, for $|\xi|>R$ large enough, one has
\begin{equation}\label{est:J large}
\|J\|_{L^{r}}\le Ce^{-\frac12\theta t}, \quad \textrm{for } r\in (1,\infty] \textrm{ and } t\ge 1.
\end{equation}
Therefore, from \eqref{est:I large} and \eqref{est:J large}, there is a constant $\delta>0$ such that
\begin{equation*}
\bigl\| \hat G-\hat V\bigr\|_{L^{r}}\le \|I\|_{L^{r}}+\|J\|_{L^{r}}\le Ce^{-\delta t},\quad \textrm{for } r\in (1,\infty] \textrm{ and } t\ge 1.
\end{equation*}
On the other hand, we estimate $\hat K$ in \eqref{eq:parabolic kernel} for $|\xi|>R$. We have
\begin{equation}\label{est:K large}
\begin{aligned}
|\hat K(\xi,t)|&\le C\sum_{j,\ell=1}^{h,h_j}e^{-\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}e^{\bigl|N_{j\ell}^{(0)}\bigr||\xi|^2t}\\
&\le Ce^{-\frac12\theta R^2t}\sum_{j,\ell=1}^{h,h_j}e^{-\frac12\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}e^{\frac14\textrm{\normalfont Re}\,(d_{j\ell})|\xi|^2t}\le Ce^{-\frac12\theta R^2t}e^{-\frac14\theta|\xi|^2t}
\end{aligned}
\end{equation}
since $\textrm{\normalfont Re}\,(d_{j\ell})\ge \theta>0$ and, $N_{j\ell}^{(0)}$ being a nilpotent matrix, one can assume that $\bigl|N_{j\ell}^{(0)}\bigr|$ is small enough for all $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$ by Proposition \ref{prop:low frequency 1}. Thus, for $|\xi|>R$, we have
\begin{equation*}
\|\hat K\|_{L^{r}}\le Ce^{-\frac12\theta R^2t}t^{-\frac12}\le Ce^{-\frac14\theta R^2t}, \quad \textrm{for } r\in [1,\infty] \textrm{ and } t\ge 1.
\end{equation*}
We now estimate the $L^{\infty}$-norm of $\mathcal F^{-1}(\hat G-\hat V)=\mathcal F^{-1}(I)+\mathcal F^{-1}(J)$ for $|\xi|>R$ large enough, where $I$ and $J$ are given in \eqref{eq:I large} and \eqref{eq:J large} respectively. First, from \eqref{eq:I large} and by applying the Taylor expansion to the map $X\to e^X$, we have $I=I_1+I_2+I_3$ where
\begin{equation}\label{eq:I_1 large}
I_1:=t\sum_{j,\ell=1}^{s,s_j}\dfrac{e^{-i\alpha_j\xi t}}{i\xi} e^{-\beta_{j\ell}t}e^{-\Theta_{j\ell}^{(0)}t}M\Pi_{j\ell}^{(0)},
\end{equation}
where $M$ is the coefficient in $\mathcal O(|\xi|^{-1})$ associated with $(i\xi)^{-1}$, and
\begin{equation}\label{eq:I_2 large}
I_2:=t\sum_{j,\ell=1}^{s,s_j}e^{(-i\alpha_j\xi -\beta_{j\ell})t}e^{-\Theta_{j\ell}^{(0)}t}\mathcal O(|\xi|^{-2})\Pi_{j\ell}^{(0)},
\end{equation}
and
\begin{equation}\label{eq:I_3 large}
I_3:=\sum_{j,\ell=1}^{s,s_j}e^{(-i\alpha_j\xi-\beta_{j\ell})t}\left( e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}-e^{-\Theta_{j\ell}^{(0)}t}-e^{-\Theta_{j\ell}^{(0)}t}\mathcal O(|\xi|^{-1})t\right)\Pi_{j\ell}^{(0)}.
\end{equation}
We first estimate for $\mathcal F^{-1}(I_1)=\sum_{j=1}^s\mathcal F_{j}^{-1}(I_1)$ where
\begin{equation*}
\mathcal F_{j}^{-1}(I_1):=t\sum_{\ell=1}^{s_j}\mathcal F^{-1}\left(\dfrac{e^{-i\alpha_j\xi t}}{i\xi}\right) e^{-\beta_{j\ell}t}e^{-\Theta_{j\ell}^{(0)}t}M\Pi_{j\ell}^{(0)},
\end{equation*}
for $j\in\{1,\dots,s\}$.
For each $j\in\{1,\dots,s\}$, one has
\begin{equation}\label{est:I_1 large}
\begin{aligned}
\bigl|\mathcal F_j^{-1}(I_1)(x,t)\bigr|&\le Ct\sum_{\ell=1}^{s_j}\left|\int_{-\infty}^{-R}+\int_R^{+\infty}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi\right|e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\bigl|\Theta_{j\ell}^{(0)}\bigr|t}\\
&\le Ct\sum_{\ell=1}^{s_j}\left|2\int_R^{+\infty}\dfrac{\sin((x-\alpha_jt)\xi)}{\xi}\,d\xi\right|e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\bigl|\Theta_{j\ell}^{(0)}\bigr|t}\\
&\le Cte^{-\frac12\theta t}|x-\alpha_jt|
\end{aligned}
\end{equation}
since $\textrm{\normalfont Re}\,(\beta_{j\ell})\ge \theta>0$ and, $\Theta_{j\ell}^{(0)}$ being a nilpotent matrix, its norm can be chosen small enough for all $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$ by Proposition \ref{prop:high frequency}. Hence, if $|x|\le Ct$, where $C$ is a positive constant, then, for all $j\in\{1,\dots,s\}$, we have
\begin{equation*}
\bigl\|\mathcal F_j^{-1}(I_1)\bigr\|_{L^{\infty}}\le Ct^2e^{-\frac12\theta t}\le Ce^{-\frac14\theta t}.
\end{equation*}
We now estimate $\mathcal F_j^{-1}(I_1)$ in the case where $|x|>Ct$ with $C$ large enough, for $j\in\{1,\dots,s\}$. Note that in this case we have
\begin{equation}\label{est:alphaj=0}
e^{x\alpha_j}\le e^{|x||\alpha_j|}\le e^{\frac{|x|^2}{t}|\alpha_j||x|^{-1}t}\le e^{\varepsilon\frac{|x|^2}{t}}
\end{equation}
where $\varepsilon$ is small enough.
One has
\begin{equation}\label{eq:I_1^a |x| large}
\bigl|\mathcal F_j^{-1}(I_1)(x,t)\bigr|\le Ct\sum_{\ell=1}^{s_j}\left|\int_{-\infty}^{-R}+\int_R^{+\infty}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi\right|e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\bigl|\Theta_{j\ell}^{(0)}\bigr|t}.
\end{equation}
We estimate for the integral
\begin{equation}\label{eq:H}
H:=\int_{-\infty}^{-R}+\int_R^{+\infty}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi=\lim_{K\to +\infty}\int_{-K}^{-R}+\int_R^{K}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi=H_1+H_2.
\end{equation}
Since the integrand is holomorphic, we can estimate $H_2$ by considering $\xi=\zeta+i\eta\in\mathbb C$ and by deforming the path $\{(\zeta,0):\zeta\textrm{ from }R \textrm{ to } K\}$ into the path $\gamma:=\gamma_1\cup \gamma_2\cup\gamma_3$ in the complex plane, where
\begin{equation}
\gamma_1:=\left\{(\zeta,\eta):\zeta=R,\eta\textrm{ from }0 \textrm{ to } \frac xt\right\},
\end{equation}
\begin{equation}
\gamma_2:=\left\{(\zeta,\eta):\zeta\textrm{ from } R \textrm{ to } K,\eta=\frac xt\right\}
\end{equation}
and
\begin{equation}
\gamma_3:=\left\{(\zeta,\eta):\zeta=K,\eta\textrm{ from }\frac xt \textrm{ to } 0\right\}.
\end{equation}
Then, by parameterizing $\gamma_1(s)=R+i\frac{x}{t}s$ for $s\in[0,1]$, since $|x|> Ct$, we have
\begin{equation}\label{est:gamma1}
\begin{aligned}
\left|\lim_{K\to +\infty} \int_{\gamma_1}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi\right |&=\left| \int_{0}^{1}\dfrac{e^{i(x-\alpha_jt)R+x\alpha_js-\frac{|x|^2}{t}s}}{R+i\frac{x}{t}s}\,\dfrac{x}{t}ds\right|\\
&\le \dfrac{C}{R}\int_{0}^{1}\left(\dfrac{|x|}{t}+\dfrac{|x|^2}{t^2}\right)e^{\varepsilon\frac{|x|^2}{t}s}e^{-\frac{|x|^2}{t}s}\,ds\\
&\le \dfrac{C}{R}\left(\dfrac{1}{|x|}+\dfrac{1}{t}\right)\bigl(1-e^{-\frac{|x|^2}{2t}}\bigr)\le \dfrac{C}{R}t^{-1}.
\end{aligned}
\end{equation}
On the other hand, note that
\begin{equation}
\dfrac{1}{-\eta+i\zeta}=\dfrac{1}{i\zeta}-\eta\left(\dfrac{1}{\zeta^2+\eta^2}+\dfrac{1}{i\zeta}\dfrac{\eta}{\zeta^2+\eta^2}\right).
\end{equation}
Thus, since $|x|>Ct$, we have
\begin{equation}\label{est:gamma2}
\begin{aligned}
\left|\lim_{K\to +\infty} \int_{\gamma_2}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi\right |&=\left| \int_{R}^{+\infty}\dfrac{e^{ix\zeta-i\alpha_j\zeta t-\frac{|x|^2}{t}+x\alpha_j}}{-\frac xt+i\zeta}\,d\zeta\right|\\
&\le e^{-\frac{|x|^2}{2t}}\left|\int_{R}^{+\infty}e^{ix\zeta}\left(\dfrac{1}{i\zeta}-\dfrac xt \left(\dfrac{1}{\zeta^2+\frac {|x|^2}{t^2}}+\dfrac{1}{i\zeta}\dfrac{\frac xt}{\zeta^2+\frac {|x|^2}{t^2}}\right)\right)\,d\zeta\right|\\
&\le Ce^{-\frac{|x|^2}{2t}}\left(\left|\int_{R}^{+\infty}\dfrac{e^{ix\zeta}}{i\zeta}\,d\zeta\right|+\left(\dfrac{|x|}{t}+\dfrac{|x|^2}{t^2}\right)\int_{R}^{+\infty}\dfrac{1}{\zeta^2}\,d\zeta\right)\\
&\le Ce^{-\frac{|x|^2}{2t}}\dfrac{|x|^2}{2t}\left(\dfrac{t}{|x|}+\dfrac{1}{|x|}+\dfrac{1}{t}\right)\le Ce^{-\delta t}.
\end{aligned}
\end{equation}
Similarly, considering $\gamma_3(s)=K+i\frac{x}{t}(1-s)$ for $s\in[0,1]$, we have
\begin{equation}\label{est:gamma31}
\left|\lim_{K\to +\infty} \int_{\gamma_3}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi\right |= \left|\lim_{K\to +\infty} \int_{0}^{1}\dfrac{e^{i(x-\alpha_jt)K+x\alpha_j(1-s)-\frac{|x|^2}{t}(1-s)}}{K+i\frac{x}{t}(1-s)}\,\dfrac{x}{t}ds\right|.
\end{equation}
On the other hand, for a fixed $K$, we have
\begin{equation}\label{est:gamma32}
\begin{aligned}
\left| \int_{0}^{1}\dfrac{e^{i(x-\alpha_jt)K+x\alpha_j(1-s)-\frac{|x|^2}{t}(1-s)}}{K+i\frac{x}{t}(1-s)}\,\dfrac{x}{t}ds\right|&\le \dfrac{C}{K}\int_{0}^{1}\left(\dfrac{|x|}{t}+\dfrac{|x|^2}{t^2}\right)e^{-\frac{|x|^2}{2t}(1-s)}\,ds\\
&=\dfrac{C}{K}\left(\dfrac{1}{|x|}+\dfrac{1}{t}\right)e^{-\frac{|x|^2}{2t}}\bigl(e^{\frac{|x|^2}{2t}}-1\bigr)\le \dfrac{C}{K}t^{-1}.
\end{aligned}
\end{equation}
One deduces that
\begin{equation}\label{est:gamma33}
\lim_{K\to +\infty} \int_{0}^{1}\dfrac{e^{i(x-\alpha_jt)K+x\alpha_j(1-s)-\frac{|x|^2}{t}(1-s)}}{K+i\frac{x}{t}(1-s)}\,\dfrac{x}{t}ds=0.
\end{equation}
Hence, it follows that
\begin{equation}\label{est:gamma34}
\left|\lim_{K\to +\infty} \int_{\gamma_3}\dfrac{e^{i(x-\alpha_jt)\xi}}{i\xi}\,d\xi\right |=0.
\end{equation}
Finally, one can estimate $H_1$ similarly, replacing $R$ and $K$ by $-R$ and $-K$ respectively. Therefore, from \eqref{eq:I_1^a |x| large}, \eqref{eq:H}, \eqref{est:gamma1}, \eqref{est:gamma2} and \eqref{est:gamma34}, one obtains
\begin{equation*}
\bigl\| \mathcal F_j^{-1}(I_1)\bigr\|_{L^{\infty}}\le Ce^{-\frac12\theta t}
\end{equation*}
for $|x|>Ct$ with $C$ large enough, since $\textrm{\normalfont Re}\,(\beta_{j\ell})\ge \theta>0$ and $\Theta_{j\ell}^{(0)}$ is a nilpotent matrix for all $j\in\{1,\dots,s\}$ and $\ell\in\{1,\dots,s_j\}$ by Proposition \ref{prop:high frequency}.
Therefore,
\begin{equation*}
\bigl\| \mathcal F^{-1}(I_1)\bigr\|_{L^{\infty}}\le C\sum_{j=1}^s\bigl\| \mathcal F_j^{-1}(I_1)\bigr\|_{L^{\infty}} \le Ce^{-\frac12\theta t}.
\end{equation*}
We now estimate $\mathcal F^{-1}(I_2)$ and $\mathcal F^{-1}(I_3)$, where $I_2$ and $I_3$ are given in \eqref{eq:I_2 large} and \eqref{eq:I_3 large} respectively. Since $\mathcal F^{-1}:L^1\to L^\infty$ is bounded, one has
\begin{equation}
\bigl\| \mathcal F^{-1}(I_{2,3})\bigr\|_{L^{\infty}}\le C\bigl\| I_{2,3}\bigr\|_{L^1}.
\end{equation}
Hence, we only need to estimate $I_2$ and $I_3$ in $L^1$.
From \eqref{eq:I_2 large}, we have
\begin{equation*}
|I_2|\le C\sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\bigl|\Theta_{j\ell}^{(0)}\bigr|t}|\xi|^{-2}t.
\end{equation*}
Thus, we obtain
\begin{equation*}
\bigl\|I_2\bigr\|_{L^1}\le Ce^{-\frac12\theta t}, \qquad\textrm{for }t\ge 1.
\end{equation*}
From \eqref{eq:I_3 large}, we have
\begin{equation*}
|I_3|\le \sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}\left| e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}-e^{-\Theta_{j\ell}^{(0)}t}-e^{-\Theta_{j\ell}^{(0)}t}\mathcal O(|\xi|^{-1})t\right|.
\end{equation*}
Then, by Lemma \ref{lem:nilpotent}, we obtain
\begin{equation*}
|I_3|\le \sum_{j,\ell=1}^{s,s_j}e^{-\textrm{\normalfont Re}\,(\beta_{j\ell})t}e^{\varepsilon' t+C|\xi|^{-1}t}|\xi|^{-2}t^2,
\end{equation*}
where $\varepsilon'$ is small enough. Therefore, since $|\xi|$ is large enough, we have
\begin{equation*}
\bigl\|I_3\bigr\|_{L^1}\le Ce^{-\frac12\theta t},\qquad \textrm{for }t\ge 1.
\end{equation*}
Thus, we deduce
\begin{equation*}
\bigl\| \mathcal F^{-1}(I)\bigr\|_{L^{\infty}}\le \bigl\| \mathcal F^{-1}(I_1)\bigr\|_{L^{\infty}}+\bigl\| \mathcal F^{-1}(I_2)\bigr\|_{L^{\infty}}+\bigl\| \mathcal F^{-1}(I_3)\bigr\|_{L^{\infty}}\le Ce^{-\frac12\theta t}, \quad \textrm{for }t\ge 1.
\end{equation*}
We now estimate $\mathcal F^{-1}(J)$ where $J$ is given by \eqref{eq:J large}. From \eqref{eq:J large}, one has $J=J_1+J_2$ where
\begin{equation}
J_1:=\sum_{j,\ell=1}^{s,s_j}\dfrac{e^{-i\alpha_j\xi t}}{i\xi}e^{-\beta_{j\ell}t}e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}M,
\end{equation}
with $M$ the coefficient associated with the term $(i\xi)^{-1}$ in $\mathcal O(|\xi|^{-1})$, and
\begin{equation*}
J_2:=\sum_{j,\ell=1}^{s,s_j}e^{(-i\alpha_j\xi-\beta_{j\ell})t}e^{-\Theta_{j\ell}^{(0)}t+\mathcal O(|\xi|^{-1})t}\mathcal O(|\xi|^{-2}).
\end{equation*}
Then, $\mathcal F^{-1}(J_1)$ can be estimated in the same way as $\mathcal F^{-1}(I_1)$, and $\mathcal F^{-1}(J_2)$ in the same way as $\mathcal F^{-1}(I_2)$ and $\mathcal F^{-1}(I_3)$. Thus, we deduce
\begin{equation*}
\bigl\| \mathcal F^{-1}(J)\bigr\|_{L^{\infty}}\le \bigl\| \mathcal F^{-1}(J_1)\bigr\|_{L^{\infty}}+\bigl\| \mathcal F^{-1}(J_2)\bigr\|_{L^{\infty}}\le Ce^{-\frac12\theta t}, \qquad \textrm{for }t\ge 1.
\end{equation*}
Therefore, we conclude
\begin{equation*}
\bigl\| \mathcal F^{-1}(\hat G-\hat V)\bigr\|_{L^{\infty}}\le \bigl\| \mathcal F^{-1}(I)\bigr\|_{L^{\infty}}+\bigl\| \mathcal F^{-1}(J)\bigr\|_{L^{\infty}} \le Ce^{-\frac12\theta t}
\end{equation*}
for $t\ge 1$ and the proof is done.
\end{proof}
\section{Multiplier estimates}\label{sec:multiplier estimates}
This section provides some useful Fourier multiplier estimates. We first recall the Young inequality.
\begin{lemma}[Young's inequality]
For all $(p,q,r)\in[1,\infty]^3$ such that $\frac1q-\frac1p=1-\frac1r$ and $(f,g)\in L^r\times L^q$, we have $f*g\in L^p$
and $\|f*g\|_{L^p}\le\|f\|_{L^r}\|g\|_{L^q}$.
\end{lemma}
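Throughout this section, only the case $r=1$ and $p=q$ of this inequality is used: for $f\in L^1$ and $g\in L^p$,
\begin{equation*}
\|f*g\|_{L^p}\le \|f\|_{L^1}\|g\|_{L^p},\qquad p\in[1,\infty],
\end{equation*}
which bounds the multiplier norm $\|m\|_{M_p}=\sup_{\|f\|_{L^p}=1}\|\mathcal F^{-1}(m)*f\|_{L^p}$ by $\|\mathcal F^{-1}(m)\|_{L^1}$.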
We then obtain the following.
\subsection{Case $|x|\le Ct$}
Let $\chi_1$ and $\chi_3$ be cutoff functions supported on $[-\varepsilon,\varepsilon]$ and $(-\infty,-R]\cup[R,+\infty)$ respectively, for $\varepsilon$ small and $R$ large, such that $|\chi_{1,3}|\le 1$. Setting $\chi_2:=1-\chi_1-\chi_3$, we introduce the multipliers
\begin{equation*}
m_j:=\chi_j(\hat G-\hat K-\hat V),\qquad j=1,2,3.
\end{equation*}
The following holds.
\begin{proposition}\label{prop:multiplier 1}
For $r\in[1,\infty]$, $m_j\in M_r$ with
\begin{equation}
\|m_j\|_{M_r}\le Ct^{-\frac12},\qquad j=1,2,3\textrm{ and }t\ge 1.
\end{equation}
\begin{proof}
We begin with $m_1$. For $|\xi|\le \varepsilon$, we have $\hat G-\hat K=I_1+I_2+J$ where $I_1,I_2$ are in \eqref{eq:I_1 small}, \eqref{eq:I_2 small} respectively and $J=\hat G_2$ as in \eqref{eq:G2 small}. We then have
\begin{equation*}
\mathcal F^{-1}(\chi_1I_1)(x,t)=\sum_{j,\ell=1}^{h,h_j}\int_{-\varepsilon}^{\varepsilon}\chi_1(\xi)e^{i(x-c_jt)\xi-d_{j\ell}\xi^2t}\left(e^{-N_{j\ell}^{(0)}\xi^2t+\mathcal O(|\xi|^3)t}-e^{-N_{j\ell}^{(0)}\xi^2t}\right)P_{j\ell}^{(0)} d\xi.
\end{equation*}
For $j\in\{1,\dots,h\}$ and $\ell\in\{1,\dots,h_j\}$, let $z=e^{i\phi/2}\xi$ where $\phi=\textrm{\normalfont arg}\,(d_{j\ell})\in (-\pi/2,\pi/2)$ since $\textrm{\normalfont Re}\,(d_{j\ell})>0$, one obtains
\begin{align*}
\mathcal F^{-1}(\chi_1I_1)(x,t)&=\sum_{j,\ell=1}^{h,h_j}\int_{\gamma}\chi_1(e^{-i\phi/2}z)e^{i(x-c_jt)e^{-i\phi/2}z-|d_{j\ell}|z^2t}\\
&\hskip2cm\cdot\left(e^{-N_{j\ell}^{(0)}e^{-i\phi}z^2t+\mathcal O(|e^{-i\phi/2}z|^3)t}-e^{-N_{j\ell}^{(0)}e^{-i\phi}z^2t}\right)P_{j\ell}^{(0)} e^{-i\phi/2}dz,
\end{align*}
where $\gamma:=\{z\in\mathbb C:z=e^{i\phi/2}\xi,\xi\in[-\varepsilon,\varepsilon]\}$. Then, we estimate each summand by letting $\eta:=\min\bigl\{\frac{|x-c_jt|}{2|d_{j\ell}|t},\frac{\varepsilon}{2}\bigr\}$. Since the integrand is holomorphic, we can change the path of the integral from $\gamma$ to $\tilde\gamma:=\gamma_1\cup \gamma_2\cup\gamma_3$ in the complex plane, where
\begin{equation}\label{gamma1}
\gamma_1:=\left\{-\varepsilon e^{i\phi/2}+i\textrm{\normalfont sgn}(x-c_jt)\eta e^{-i\phi/2}s:s\in[0,1]\right\},
\end{equation}
\begin{equation}\label{gamma2}
\gamma_2:=\left\{ \zeta e^{i\phi/2}+i\textrm{\normalfont sgn}(x-c_jt)\eta e^{-i\phi/2}:\zeta\in[-\varepsilon,\varepsilon]\right\}
\end{equation}
and
\begin{equation}\label{gamma3}
\gamma_3:=\left\{\varepsilon e^{i\phi/2}+i\textrm{\normalfont sgn}(x-c_jt)\eta e^{-i\phi/2}(1-s):s\in[0,1]\right\}.
\end{equation}
On the other hand, we have
\begin{equation}
\left|e^{i(x-c_jt)e^{-i\phi/2}z-|d_{j\ell}|z^2t}\right|=e^{-(x-c_jt)\bigl(\cos(\phi/2)\textrm{\normalfont Im}\,z-\sin(\phi/2)\textrm{\normalfont Re}\,z\bigr)}e^{-|d_{j\ell}|(\textrm{\normalfont Re}\,z-\textrm{\normalfont Im}\,z)(\textrm{\normalfont Re}\,z+\textrm{\normalfont Im}\,z)t}.
\end{equation}
Moreover, $|\chi_1|\le 1$ and, similarly to before, by Lemma \ref{lem:nilpotent}, since $N_{j\ell}^{(0)}$ is nilpotent and $|z|=|\xi|\le \varepsilon$ is small, we have
\begin{equation}
\begin{aligned}
\left|e^{-N_{j\ell}^{(0)}e^{-i\phi}z^2t+\mathcal O(|e^{-i\phi/2}z|^3)t}-e^{-N_{j\ell}^{(0)}e^{-i\phi}z^2t}\right|&\le C|z|^3te^{\varepsilon'|z|^2t+C|z|^3t}\\
&\le C(|\textrm{\normalfont Re}\,z|+|\textrm{\normalfont Im}\,z|)^3te^{\varepsilon''(|\textrm{\normalfont Re}\,z|+|\textrm{\normalfont Im}\,z|)^2t},
\end{aligned}
\end{equation}
where $\varepsilon',\varepsilon''$ can be chosen as small as one needs.
Thus, for $z\in\gamma_1$, we have
\begin{equation*}
\begin{aligned}
\textrm{\normalfont Re}\,z&=-\varepsilon\cos(\phi/2)+\textrm{\normalfont sgn}(x-c_jt)\eta\sin(\phi/2)s,\\
\textrm{\normalfont Im}\,z&=-\varepsilon \sin(\phi/2) +\textrm{\normalfont sgn}(x-c_jt)\eta\cos(\phi/2)s.
\end{aligned}
\end{equation*}
Then, since $\cos(\phi)>0$, $\eta^2s^2\le \varepsilon^2/2$ for $s\in[0,1]$ and $|z|\le \varepsilon$, one has, for some $\delta>0$,
\begin{equation}\label{est:gamma1small}
\left|\int_{\gamma_1}\right|\le C\int_0^1e^{-|x-c_jt|\eta \cos (\phi)s}e^{-|d_{j\ell}|\cos(\phi)(\varepsilon^2-\eta^2s^2)t}e^{\varepsilon''\varepsilon^2 t}\varepsilon^3tds\le Ce^{-\delta t}.
\end{equation}
For $z\in \gamma_2$, we have
\begin{equation*}
\begin{aligned}
\textrm{\normalfont Re}\,z&=\zeta\cos(\phi/2)+\textrm{\normalfont sgn}(x-c_jt)\eta\sin(\phi/2),\\
\textrm{\normalfont Im}\,z&=\zeta \sin(\phi/2) +\textrm{\normalfont sgn}(x-c_jt)\eta\cos(\phi/2).
\end{aligned}
\end{equation*}
Hence, one has
\begin{equation}
\left|\int_{\gamma_2}\right|\le C\int_{-\varepsilon}^{\varepsilon}e^{-|x-c_jt|\eta \cos (\phi)}e^{-|d_{j\ell}|\cos(\phi)(\zeta^2-\eta^2)t}e^{\varepsilon''(\zeta^2+2|\zeta||\eta|+|\eta|^2) t}(|\zeta|+|\eta|)^3td\zeta.
\end{equation}
If $\eta=\frac{|x-c_jt|}{2|d_{j\ell}|t}$, then, since $|\zeta|\le \varepsilon$ is small and $\varepsilon''$ is small enough, for some $c>0$ we have
\begin{equation}\label{est:gamma2asmall}
\begin{aligned}
\left|\int_{\gamma_2}\right|&\le C\int_{-\varepsilon}^{\varepsilon}e^{-\frac{|x-c_jt|^2}{|d_{j\ell}|t} \cos (\phi)}e^{\frac{|x-c_jt|^2}{2|d_{j\ell}|t}\cos(\phi)}e^{-|d_{j\ell}|\cos(\phi)\zeta^2t}e^{\varepsilon''\zeta^2t+\varepsilon''|\zeta|\frac{|x-c_jt|}{|d_{j\ell}|t}+\varepsilon''\frac{|x-c_jt|^2}{4|d_{j\ell}|^2t}}\\
&\hskip3cm\cdot \left(|\zeta|^3t+3|\zeta|^2\dfrac{|x-c_jt|}{2|d_{j\ell}|}+3|\zeta|\dfrac{|x-c_jt|^2}{4|d_{j\ell}|^2t}+\dfrac{|x-c_jt|^3}{8|d_{j\ell}|^3t^2}\right)d\zeta\\
&\le C \sum_{k=0}^3e^{-\frac{|x-c_jt|^2}{8|d_{j\ell}|t} \cos (\phi)}\left(\dfrac{|x-c_jt|}{\sqrt{t}}\right)^{k}\int_{-\varepsilon}^{\varepsilon}e^{-\frac12|d_{j\ell}|\cos(\phi)\zeta^2t}|\zeta|^{3-k}t^{1-\frac k2}d\zeta\\
&\le Ct^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}}.
\end{aligned}
\end{equation}
If $\eta=\varepsilon/2$, then $|x-c_jt|\ge \varepsilon|d_{j\ell}|t$ by the definition of $\eta$ and we have
\begin{equation}\label{est:gamma2bsmall}
\begin{aligned}
\left|\int_{\gamma_2}\right|&\le C\int_{-\varepsilon}^{\varepsilon}e^{-|x-c_jt|\eta \cos (\phi)}e^{-|d_{j\ell}|\cos(\phi)(\zeta^2-\eta^2)t}e^{\varepsilon''(\zeta^2+2|\zeta||\eta|+|\eta|^2) t}(|\zeta|+|\eta|)^3td\zeta\\
&\le Ce^{-\varepsilon^2|d_{j\ell}|\cos (\phi)t}e^{\frac14\varepsilon^2|d_{j\ell}|\cos(\phi)t}e^{\varepsilon''\varepsilon^2 t}\int_{-\varepsilon}^{\varepsilon}e^{-|d_{j\ell}|\cos(\phi)\zeta^2t}\left(|\zeta|+\dfrac{\varepsilon}{2}\right)^3td\zeta\le Ce^{-\delta t},
\end{aligned}
\end{equation}
for some $\delta>0$ since $\varepsilon''$ can be chosen small enough.
For $z\in\gamma_3$, we have
\begin{equation*}
\begin{aligned}
\textrm{\normalfont Re}\,z&=\varepsilon\cos(\phi/2)+\textrm{\normalfont sgn}(x-c_jt)\eta\sin(\phi/2)(1-s),\\
\textrm{\normalfont Im}\,z&=\varepsilon \sin(\phi/2) +\textrm{\normalfont sgn}(x-c_jt)\eta\cos(\phi/2)(1-s).
\end{aligned}
\end{equation*}
Thus, similarly to $\gamma_1$, for some $\delta>0$, one has
\begin{equation}\label{est:gamma3small}
\left|\int_{\gamma_3}\right|\le C\int_0^1e^{-|x-c_jt|\eta \cos (\phi)(1-s)}e^{-|d_{j\ell}|\cos(\phi)(\varepsilon^2-\eta^2(1-s)^2)t}e^{\varepsilon''\varepsilon^2 t}\varepsilon^3tds\le Ce^{-\delta t}.
\end{equation}
Therefore, from \eqref{est:gamma1small}, \eqref{est:gamma2asmall}, \eqref{est:gamma2bsmall}, \eqref{est:gamma3small} and the fact that $e^{-\delta t}\le Ct^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}}$ since $|x|\le Ct$ and $t\ge 1$, we obtain
\begin{equation*}
\left|\mathcal F^{-1}(\chi_1I_1)(x,t)\right|\le Ct^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}},\qquad t\ge 1.
\end{equation*}
In the same way, for $\mathcal F^{-1}(\chi_1I_2)$ and $\mathcal F^{-1}(\chi_1J)$, we also have
\begin{equation*}
\left|\mathcal F^{-1}(\chi_1(I_2+J))(x,t)\right|\le Ct^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}},\qquad t\ge 1.
\end{equation*}
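The passage from these pointwise Gaussian bounds to the $t^{-\frac12}$ rate relies on the elementary computation
\begin{equation*}
\int_{\mathbb R}t^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}}\,dx=t^{-1}\sqrt{\pi c|d_{j\ell}|t}\le Ct^{-\frac12},\qquad t\ge 1.
\end{equation*}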
Hence, for $r\in[1,\infty]$, by the Young inequality and since $m_1=\chi_1(I_1+I_2+J)$, it follows that
\begin{equation*}
\|m_1\|_{M_r}=\sup_{\|f\|_{L^r}=1}\bigl\|\mathcal F^{-1}(m_1)*f\bigr\|_{L^r}\le \bigl\|\mathcal F^{-1}(m_1)\bigr\|_{L^1}\le Ct^{-\frac12},\qquad t\ge 1.
\end{equation*}
We consider $m_2$. Since $|\chi_2(\xi)|,|e^{ix\xi }|\le 1$ for $\xi\in \mathbb R$, we have
\begin{equation}
\bigl|\mathcal F^{-1}(m_2)(x,t)\bigr|\le \int_{\varepsilon\le |\xi|\le R}\bigl(|\hat G(\xi,t)|+|\hat K(\xi,t)|+|\hat V(\xi,t)|\bigr)d\xi\le Ce^{-\delta t}
\end{equation}
for some $\delta>0$ due to \eqref{est:Kintermediate}, \eqref{est:Vintermediate} and the fact that $|\hat G(\xi,t)|\le e^{-\frac{\theta|\xi|^2}{1+|\xi|^2}t}$ on $\varepsilon\le |\xi|\le R$ for $\theta>0$.
Thus, since $|x|\le Ct$ and $t\ge 1$, one has
\begin{equation*}
\left|\mathcal F^{-1}(m_2)(x,t)\right|\le Ct^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}},\qquad t\ge 1.
\end{equation*}
Hence, for $r\in[1,\infty]$, by the Young inequality, it follows that
\begin{equation*}
\|m_2\|_{M_r}=\sup_{\|f\|_{L^r}=1}\bigl\|\mathcal F^{-1}(m_2)*f\bigr\|_{L^r}\le \bigl\|\mathcal F^{-1}(m_2)\bigr\|_{L^1}\le Ct^{-\frac12},\qquad t\ge 1.
\end{equation*}
Finally, we consider $m_3$. Based on the decomposition $\hat G-\hat V=I+J$, where $I$ is defined as $I_1$ in \eqref{eq:I_1 large} and $J$ is the remainder, from \eqref{est:I_1 large} and the same treatment for $J$, we obtain for $|x|\le Ct$ that
\begin{equation*}
\left|\mathcal F^{-1}(\chi_3(\hat G-\hat V))(x,t)\right|\le C\sum_{j=1}^ste^{-\delta t}|x-\alpha_jt|+Ce^{-\delta t}\le Ct^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}},\qquad t\ge 1
\end{equation*}
for some $\delta>0$.
Moreover, from \eqref{est:K large}, there is a $\theta>0$ such that
\begin{equation*}
\left|\mathcal F^{-1}(\chi_3\hat K)(x,t)\right|\le e^{-\frac12\theta R^2t}\int_{|\xi|\ge R}e^{-\frac14\theta |\xi|^2t}d\xi\le Ce^{-\delta t}\le Ct^{-1}e^{-\frac{|x-c_jt|^2}{c|d_{j\ell}|t}},\qquad t\ge 1
\end{equation*}
for some $\delta>0$.
Hence, for $r\in[1,\infty]$, by the Young inequality and $m_3=\chi_3(\hat G-\hat V-\hat K)$, it follows that
\begin{equation*}
\|m_3\|_{M_r}=\sup_{\|f\|_{L^r}=1}\bigl\|\mathcal F^{-1}(m_3)*f\bigr\|_{L^r}\le \bigl\|\mathcal F^{-1}(m_3)\bigr\|_{L^1}\le Ct^{-\frac12},\qquad t\ge 1.
\end{equation*}
We finish the proof.
\end{proof}
\end{proposition}
\subsection{Case $|x|>Ct$}
We introduce the multipliers
\begin{equation*}
m^1:=\hat G-\hat V,\qquad \textrm{and} \qquad m^2:=\hat K.
\end{equation*}
The following holds.
\begin{proposition}\label{prop:multiplier 2}
For $r\in[1,\infty]$, we have $m^j\in M_r$ with
\begin{equation}
\|m^j\|_{M_r}\le Ct^{-\frac12},\qquad j=1,2\textrm{ and }t\ge 1.
\end{equation}
\begin{proof}
We estimate the $L^1$-norm of $\mathcal F^{-1}(m^1)$. We have
\begin{equation}\label{multiplier:1 upper}
\mathcal F^{-1}(m^1)(x,t)=\lim_{R\to +\infty}\int_{-R}^{R}e^{ix\xi}\bigl(\hat G(\xi,t)-\hat V(\xi,t)\bigr)d\xi.
\end{equation}
On the other hand, note that the solution $\hat G$ to \eqref{eq:fundamental system} is written as $\hat G(\xi,t)=e^{E(i\xi)t}$, and thus $\hat G$ is an entire function on the complex plane since $E(i\xi)=-(B+i\xi A)$. Moreover, due to the formula for $\hat V$ in \eqref{eq:exponential decay kernel}, $\hat V$ is also holomorphic on the complex plane. Thus, considering $\xi=\zeta+i\eta\in\mathbb C$, one can change the path of the integral in \eqref{multiplier:1 upper} from $\{(\zeta,0):\zeta\textrm{ from }-R \textrm{ to } R\}$ to the path $\gamma:=\gamma_1\cup \gamma_2\cup\gamma_3$ in the complex plane, where
\begin{equation}
\gamma_1:=\left\{(\zeta,\eta):\zeta=-R,\eta\textrm{ from }0 \textrm{ to } \frac xt\right\},
\end{equation}
\begin{equation}
\gamma_2:=\left\{(\zeta,\eta):\zeta\textrm{ from } -R \textrm{ to } R,\eta=\frac xt\right\}
\end{equation}
and
\begin{equation}
\gamma_3:=\left\{(\zeta,\eta):\zeta=R,\eta\textrm{ from }\frac xt \textrm{ to } 0\right\}.
\end{equation}
Furthermore, since $R$ and $|x|/t$ are large, along these curves the solution $\hat G$ has the representation \eqref{eq:G large} of the high frequency case. Therefore, by the same computation as in \eqref{est:gamma1}-\eqref{est:gamma34} and letting $R\to +\infty$, we obtain
\begin{equation}
\bigl| \mathcal F^{-1}(m^1)(x,t)\bigr|\le Ce^{-\frac{|x|^2}{ct}}\le Ct^{-1}e^{-\frac{|x|^2}{2c t}}
\end{equation}
for some $c,C>0$ since $e^{-\frac{|x|^2}{2ct}}\le e^{-C^2t}\le t^{-1}$ due to the fact that $|x|>Ct$ with $C$ large enough. Hence, we obtain
\begin{equation}
\bigl\| \mathcal F^{-1}(m^1) \bigr\|_{L^1}\le Ct^{-\frac12}.
\end{equation}
Thus, by the Young inequality, for $r\in[1,\infty]$, we have
\begin{equation}
\bigl\| m^1\bigr\|_{M_r}=\sup_{\|f\|_{L^r}=1}\bigl\|\mathcal F^{-1}(m^1)*f \bigr\|_{L^r}\le \bigl\|\mathcal F^{-1}(m^1) \bigr\|_{L^1}\le Ct^{-\frac12}.
\end{equation}
The estimate for $m^2$ is similar and the proof is done.
\end{proof}
\end{proposition}
\section{Symmetry}\label{sec:symmetry}
We discuss the conditions {\bf C'} and {\bf S}, under which the decay rate of the solution to the system \eqref{eq:original hyperbolic system} increases. Recall the matrices $C$ and $D$ given in \eqref{eq:reduced system} and \eqref{eq:matrix D} respectively.
\begin{lemma}\label{lem:analytic eigenvalue}
If the condition {\bf C'} holds, then there are $m$ distinct eigenvalues of $E(i\xi)=-(B+i\xi A)$ converging to $0$ as $|\xi|\to 0$, and they can be expanded analytically, where $m=\dim\ker(B)$. The expansion of the $j$-th eigenvalue has the form
\begin{equation}\label{eq:eigenvalues of E}
\lambda_j(i\xi)=-ic_j\xi-d_j\xi^2+\mathcal O(|\xi|^3),\qquad |\xi|\to 0,
\end{equation}
where $c_j\in\sigma(C)$ considered in $\ker(B)$ and $d_{j}\in\sigma\bigl(P_j^{(0)}DP_j^{(0)}\bigr)$ considered in $\ker(C-c_jI)$ with $P_j^{(0)}$ the eigenprojection associated with $c_j$ for $j\in\{1,\dots,m\}$.
\end{lemma}
\begin{proof}
This is just a consequence of Proposition \ref{prop:low frequency 1}. Indeed, from \eqref{eq:expansion of lambdajell}, the approximation of the eigenvalues of $E$ converging to $0$ as $|\xi|\to 0$ is
\begin{equation*}
\lambda_{j\ell}(i\xi)=-ic_j\xi-d_{j\ell} \xi^2+{\scriptstyle\mathcal O}(|\xi|^2),\qquad |\xi|\to 0,
\end{equation*}
where $c_j\in\sigma(C)$ considered in $\ker(B)$ and $d_{j\ell}\in\sigma\bigl(P_j^{(0)}DP_j^{(0)}\bigr)$ considered in $\ker(C-c_jI)$ for $\ell=1,\dots,h_j$ with $P_j^{(0)}$ the eigenprojection associated with $c_j$ for $j\in\{1,\dots,h\}$. Noting that $h_j$ is the cardinality of the spectrum of $P_j^{(0)}DP_j^{(0)}$ considered in $\ker(C-c_jI)$ for $j\in\{1,\dots,h\}$ and $h$ is the cardinality of the spectrum of $C$ considered in $\ker(B)$.
On the other hand, since the condition {\bf C'} holds, $h=m$ where $m=\dim\ker(B)$. Moreover, also by the condition {\bf C'}, one deduces that $c_j$ is simple for all $j\in\{1,\dots,m\}$. Thus, $\dim\ker(C-c_jI)=1$ and therefore $h_j=1$ for $j\in\{1,\dots,m\}$. It implies that there is only one $d_j:=d_{j1}\in \sigma\bigl(P_j^{(0)}DP_j^{(0)}\bigr)$ considered in $\ker(C-c_jI)$ for each $j\in\{1,\dots,m\}$. Moreover, $d_j$ is also simple, and thus one can continue the reduction process as in the proof of Proposition \ref{prop:low frequency 1}. Furthermore, since the coefficients in the expansion of $\lambda_j$ are simple (as $c_j$ is simple), there is no splitting in the expansion of the eigenvalues $\lambda_j$, {\it i.e.} the eigenvalues $\lambda_j$ can be expanded analytically for $j\in\{1,\dots,m\}$, and the proof is done.
\end{proof}
Let $p(\lambda,\kappa):=\det (E(\kappa)-\lambda I)$ be the {\it dispersion polynomial} associated with $E(\kappa)=-(B+\kappa A)$, where $\lambda,\kappa\in\mathbb C$.
\begin{lemma}\label{lem:symmetry}
If the condition {\bf S} holds, then $p(\lambda,-\kappa)=p(\lambda,\kappa)$
for any $\lambda,\kappa\in\mathbb{C}$.
\end{lemma}
\begin{proof}
For $q(\lambda,\kappa):=p(\lambda,-\kappa)$, there holds $p(\lambda,\kappa)=0$
if and only if $q(\lambda,\kappa)=0$. Indeed, a couple $(\lambda,\kappa)$
satisfies $p(\lambda,\kappa)=0$ if and only if there exists a nonzero
vector $u$ such that
\begin{equation*}
\bigl(\lambda\,I+\kappa A +B\bigr)u=0.
\end{equation*}
In such a case, setting $v=S^{-1}u$, there also holds
\begin{equation*}
\begin{aligned}
0 & =S^{-1}\bigl(\lambda\,I+ \kappa A +B\bigr)Sv=S^{-1}\bigl(\lambda\,S+\kappa A S+BS\bigr)v\\
& =S^{-1}\bigl(\lambda\,S-\kappa SA+SB\bigr)v=\bigl(\lambda\,I- \kappa A +B\bigr)v.
\end{aligned}
\end{equation*}
Hence, $q(\lambda,\kappa)=0$. The other implication can be proved
in the same way.
In fact, the computation above shows more: the matrices $\lambda\,I+\kappa A+B$ and $\lambda\,I-\kappa A+B$ are similar via $S$ for all $\lambda,\kappa\in\mathbb{C}$, hence their determinants coincide. Since $p(\lambda,\kappa)=(-1)^n\det\bigl(\lambda\,I+\kappa A+B\bigr)$ and $q(\lambda,\kappa)=(-1)^n\det\bigl(\lambda\,I-\kappa A+B\bigr)$, we conclude that $p\equiv q$.
\end{proof}
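As a minimal illustration (taking the condition {\bf S} to mean, as in the computation above, that there is an invertible matrix $S$ with $SA=-AS$ and $SB=BS$), consider
\begin{equation*}
A=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad B=\begin{pmatrix}0&0\\0&1\end{pmatrix},\qquad S=\begin{pmatrix}1&0\\0&-1\end{pmatrix}.
\end{equation*}
Then $SAS^{-1}=-A$, $SBS^{-1}=B$, and indeed $p(\lambda,\kappa)=\det\bigl(E(\kappa)-\lambda I\bigr)=\lambda(\lambda+1)-\kappa^2$ is even in $\kappa$.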
\begin{corollary}\label{cor:symmetry 0 group}
If the conditions {\bf C'} and {\bf S} hold, then there are $m$ distinct analytic eigenvalues of $E$ converging to $0$ as $|\xi|\to 0$, and the $j$-th eigenvalue has the approximation
\begin{equation}
\lambda_j(i\xi)=-ic_j\xi-d_j\xi^2+\mathcal O(|\xi|^4),\qquad |\xi|\to 0,
\end{equation}
where $c_j\in\sigma(C)$ considered in $\ker(B)$ and $d_{j}\in\sigma\bigl(P_j^{(0)}DP_j^{(0)}\bigr)$ considered in $\ker(C-c_jI)$ with $P_j^{(0)}$ the eigenprojection associated with $c_j$ for $j\in\{1,\dots,m\}$.
\end{corollary}
\begin{proof}
From the proof of Lemma \ref{lem:analytic eigenvalue}, since $d_j$ is simple for all $j\in\{1,\dots,m\}$, one can continue the reduction process in the proof of Proposition \ref{prop:low frequency 1} and thus the formula \eqref{eq:eigenvalues of E} can be refined as
\begin{equation*}
\lambda_j(i\xi)=-ic_j\xi-d_j\xi^2-e_j(i\xi)^3+\mathcal O(|\xi|^4),\qquad |\xi|\to 0,
\end{equation*}
where $e_j\in\sigma(M_j)$ considered in $\ker\bigl(P_j^{(0)}DP_j^{(0)}-d_jI\bigr)$ for some suitable matrix $M_j$ for $j\in\{1,\dots,m\}$.
By recalling the proof of Proposition \ref{prop:low frequency 1} and applying Lemma \ref{lem:analytic eigenvalue} once more, substituting $-i\xi$ for $i\xi$, there are $m$ distinct analytic eigenvalues of $E(-i\xi)$ converging to $0$ as $|\xi|\to 0$ such that
\begin{equation}\label{eq:eigenvalues of E'}
\lambda_j(-i\xi)=-i(-c_j)\xi-d_j\xi^2-(-e_j)(i\xi)^3+\mathcal O(|\xi|^4),\qquad |\xi|\to 0,
\end{equation}
where $c_j,d_j$ and $e_j$ are as introduced above.
On the other hand, since $\sigma(E(i\xi))\equiv\sigma(E(-i\xi))$ due to Lemma \ref{lem:symmetry}, one deduces that $\sigma(M_j)$ contains both $e_j$ and $-e_j$. Moreover, since $\dim\ker\bigl(P_j^{(0)}DP_j^{(0)}-d_jI\bigr)=1$, one concludes that $e_j=-e_j=0$. The proof is done.
\end{proof}
\begin{remark}
The nilpotent parts associated with $\lambda_j$ for $j\in\{1,\dots,m\}$ are zero since these eigenvalues are distinct and simple.
Moreover, for each $j\in\{1,\dots,m\}$, the total projection associated with $\lambda_j$ is itself the eigenprojection associated with $\lambda_j$ and has the expansion \eqref{eq:expansion of Pj} with $\zeta=i\xi$, {\it i.e.} we have
\begin{equation}
P_j(i\xi)=P_j^{(0)}+i\xi P_j^{(1)}+\mathcal O(|\xi|^2),\qquad |\xi|\to 0,
\end{equation}
where $P_j^{(1)}$ can be computed by the formula \eqref{eq:Pj1} for $j\in\{1,\dots,m\}$. This is based on the fact that there is no splitting after the second step of the reduction process and the formula of $P_j^{(1)}$ is proved similarly to the proof of the formula \eqref{eq:P01} in the proof of Proposition \ref{prop:low frequency 1}.
\end{remark}
One sets the kernel
\begin{equation}
\hat K^*(\xi,t):=\sum_{j=1}^me^{(-ic_j\xi-d_j\xi^2)t}\bigl(P_j^{(0)}+i\xi P_j^{(1)}\bigr).
\end{equation}
Then, the first estimate in \eqref{est:low} of Proposition \ref{prop:fundamental solution standard type} can be refined as follows.
\begin{proposition}\label{prop:low refined}
If the conditions {\bf C'} and {\bf S} hold, then for $r\in[1,\infty]$ one has
\begin{equation}\label{est:low refined}
\bigl\|\hat G-\hat K^*\bigr\|_{L^{r}}\le Ct^{-\frac12\frac{1}{r}-1},
\end{equation}
for $|\xi|<\varepsilon$ small enough and $t\ge 1$.
\end{proposition}
\begin{proof}
For $|\xi|<\varepsilon$, by Remark \ref{rem:low frequency 1}, Remark \ref{rem:low frequency 2} and Corollary \ref{cor:symmetry 0 group}, the solution to the system \eqref{eq:fundamental system} is given by $\hat G=\hat G_1+\hat G_2$ where $\hat G_2$ is given by \eqref{eq:G2 small} and
\begin{equation*}
\hat G_1(\xi,t)=\sum_{j=1}^me^{(-ic_j\xi-d_j\xi^2)t+\mathcal O(|\xi|^4)t}\bigl(P_j^{(0)}+i\xi P_j^{(1)}+\mathcal O(|\xi|^2)\bigr).
\end{equation*}
Thus, similarly to the proof of the first estimate in \eqref{est:low}, we have $\hat G-\hat K^*=I_1+I_2+J$ where $J=\hat G_2$ and
\begin{align}
\label{eq:I_1 refined} I_1&:=\sum_{j=1}^me^{(-ic_j\xi-d_j\xi^2)t}\left(e^{\mathcal O(|\xi|^4)t}-1\right)\bigl(P_j^{(0)}+i\xi P_j^{(1)}\bigr),\\
\label{eq:I_2 refined} I_2&:=\sum_{j=1}^me^{(-ic_j\xi-d_j\xi^2)t+\mathcal O(|\xi|^4)t}\mathcal O(|\xi|^2).
\end{align}
Hence, similarly to before, there is a constant $c>0$ such that
\begin{equation*}
|I_1|\le Ce^{-c|\xi|^2t}|\xi|^4t \qquad\textrm{and}\qquad |I_2|\le Ce^{-c|\xi|^2t}|\xi|^2.
\end{equation*}
Thus, together with \eqref{est:J small}, it implies that
\begin{equation*}
\bigl\|\hat G-\hat K^*\bigr\|_{L^{r}}\le \bigl\|I_1\bigr\|_{L^{r}}+\bigl\|I_2\bigr\|_{L^{r}}+\bigl\|J\bigr\|_{L^{r}}\le Ct^{-\frac12\frac{1}{r}-1},
\end{equation*}
for $|\xi|<\varepsilon$, $t\ge 1$ and $r\in[1,\infty]$. We finish the proof.
\end{proof}
Similarly, by recalling the multipliers $m_j$ for $j=1,2,3$ and $m^j$ for $j=1,2$ with $\hat K$ replaced by $\hat K^*$, we can also refine Proposition \ref{prop:multiplier 1} for $|x|\le Ct$ and Proposition \ref{prop:multiplier 2} for $|x|>Ct$.
\begin{proposition}[$|x|\le Ct$]\label{prop:multiplier 1 refined}
For $r\in[1,\infty]$, $m_j\in M_r$ with
\begin{equation}
\|m_j\|_{M_r}\le Ct^{-1},\qquad j=1,2,3\textrm{ and }t\ge 1.
\end{equation}
\end{proposition}
\begin{proof}
Similarly to the proof of Proposition \ref{prop:multiplier 1}, we only need to consider $\mathcal F^{-1}(\chi_1I_1)$ and $\mathcal F^{-1}(\chi_1I_2)$ on $\gamma_2$, where $I_1,\,I_2$ are now given by \eqref{eq:I_1 refined}, \eqref{eq:I_2 refined} respectively and $\gamma_2$ is the same as \eqref{gamma2}. The other terms are bounded by $e^{-\delta t}$ for some $\delta >0$, and thus, since $|x|\le Ct$, they are dominated by $t^{-\frac32}e^{-\frac{|x-c_jt|^2}{c|d_j|t}}$ for some $c>0$ and $t\ge 1$.
Hence, since $\left| e^{\mathcal O(|e^{-i\phi/2}z|^4)t}-1\right|\le C(|\textrm{\normalfont Re}\,z|+|\textrm{\normalfont Im}\,z|)^4te^{\varepsilon(|\textrm{\normalfont Re}\,z|+|\textrm{\normalfont Im}\,z|)^2t}$ for $z=e^{i\phi/2}\xi$, where $\xi\in[-\varepsilon,\varepsilon]$ and $\phi=\textrm{\normalfont arg}(d_j)\in (-\pi/2,\pi/2)$ for $j\in\{1,\dots,m\}$, we have, on $\gamma_2$,
\begin{equation*}
\begin{aligned}
\left|\mathcal F^{-1}(\chi_1I_1)\right|&\le C\sum_{j=1}^m\int_{-\varepsilon}^{\varepsilon}e^{-|x-c_jt|\eta \cos (\phi)}e^{-|d_j|\cos(\phi)(\zeta^2-\eta^2)t}e^{\varepsilon(\zeta+|\eta|)^2 t}(|\zeta|+|\eta|)^4td\zeta\\
&\le C\sum_{j=1}^m \sum_{k=0}^4e^{-\frac{|x-c_jt|^2}{c|d_j|t} \cos (\phi)}\left(\dfrac{|x-c_jt|}{\sqrt{t}}\right)^{k}\int_{-\varepsilon}^{\varepsilon}e^{-\frac12|d_j|\cos(\phi)\zeta^2t}|\zeta|^{4-k}t^{1-\frac k2}d\zeta\\
&\le C\sum_{j=1}^mt^{-\frac32}e^{-\frac{|x-c_jt|^2}{c'|d_j|t}}
\end{aligned}
\end{equation*}
for some $c,c'>0$ and $t\ge 1$. The estimate for $\mathcal F^{-1}(\chi_1I_2)$ is similar.
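Here, taking the $L^1$-norm of the Gaussian bound produces the improved $t^{-1}$ rate:
\begin{equation*}
\int_{\mathbb R}t^{-\frac32}e^{-\frac{|x-c_jt|^2}{c'|d_j|t}}\,dx=t^{-\frac32}\sqrt{\pi c'|d_j|t}\le Ct^{-1},\qquad t\ge 1.
\end{equation*}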
Therefore, taking the $L^1$-norm in $x$ variable and using the Young inequality, we finish the proof.
\end{proof}
\begin{proposition}[$|x|>Ct$]\label{prop:multiplier 2 refined}
For $r\in[1,\infty]$, $m^j\in M_r$ with
\begin{equation}
\|m^j\|_{M_r}\le Ct^{-1},\qquad j=1,2\textrm{ and }t\ge 1.
\end{equation}
\end{proposition}
\begin{proof}
The proof is the same as that of Proposition \ref{prop:multiplier 2}, using the fact that $e^{-\frac{|x|^2}{t}}$ is dominated by $t^{-\frac32}e^{-\frac{|x|^2}{2t}}$ since $|x|>Ct$ and $t\ge 1$.
\end{proof}
\section{Proof of main results}\label{sec:proofs of main theorems}
We first recall the following well-known inequality.
\begin{lemma}[Interpolation inequality]
Let $(p_j,q_j)_{j\in\{0,1\}}$ be two elements
of $[1,\infty]^2$. Consider a linear operator $T$ which continuously
maps $L^{p_j}$ into $L^{q_j}$ for $j\in\{0,1\}$.
For any $\theta\in[0,1]$, if
\begin{equation*}
\left(\dfrac{1}{p_{\theta}},\dfrac{1}{q_{\theta}}\right):=(1-\theta)\left(\dfrac{1}{p_0},
\dfrac{1}{q_0}\right)+\theta\left(\dfrac{1}{p_1},\dfrac{1}{q_1}\right),
\end{equation*}
then $T$ continuously maps $L^{p_{\theta}}$ into $L^{q_{\theta}}$
and $\|T\|_{\mathcal{L}(L^{p_{\theta}};L^{q_{\theta}})}\le\|T\|_{\mathcal{L}(L^{p_0};L^{q_0})}^{1-\theta}\|T\|_{\mathcal{L}(L^{p_1};L^{q_1})}^{\theta}$.
\end{lemma}
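For instance, with $(p_0,q_0)=(1,\infty)$ and $(p_1,q_1)=(r,r)$, interpolating an $L^1\to L^\infty$ bound $Ct^{-1}$ with an $L^r\to L^r$ bound $Ct^{-\frac12}$ gives
\begin{equation*}
\|T\|_{\mathcal{L}(L^{p_{\theta}};L^{q_{\theta}})}\le \bigl(Ct^{-1}\bigr)^{1-\theta}\bigl(Ct^{-\frac12}\bigr)^{\theta}=Ct^{-\frac12-\frac12\bigl(\frac{1}{p_{\theta}}-\frac{1}{q_{\theta}}\bigr)},
\end{equation*}
since $\frac{1}{p_{\theta}}-\frac{1}{q_{\theta}}=1-\theta$ in this case. This is the mechanism used in the proofs below.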
We then give detailed proofs of Theorem \ref{theo:standard type} and Theorem \ref{theo:standard type symmetry}.
\begin{proof}[Proof of Theorem \ref{theo:standard type}]
Let $u$ be the solution to \eqref{eq:original hyperbolic system}. Recall that $U=\sum_{j=1}^hU_j$, where $U_j$ is the solution to \eqref{eq:parabolic system} for $j\in\{1,\dots,h\}$, and $V=Q\sum_{j=1}^s V_j$, where $V_j$ is the solution to \eqref{eq:hyperbolic system} for $j\in\{1,\dots,s\}$. Then, we have
\begin{align*}
u-U-V=\mathcal{F}^{-1}\bigl(\hat G-\hat K-\hat V\bigr)*u_{0}.
\end{align*}
On the other hand, denoting by $\chi_A$ the characteristic function of a set $A$, we have
\begin{equation*}
\begin{aligned}
\mathcal F^{-1}\bigl(\hat G-\hat K-\hat V\bigr)&=\mathcal F^{-1}\bigl[\bigl(\hat G-\hat K-\hat V\bigr) \bigl(\chi_{[0,\varepsilon)}+\chi_{[\varepsilon,R]}+\chi_{(R,\infty)}\bigr)(|\xi|)\bigr]\\
&=\mathcal F^{-1}\bigl[\bigl(\hat G-\hat K-\hat V\bigr)\bigl(\chi_{[0,\varepsilon)}+\chi_{[\varepsilon,R]}\bigr)(|\xi|)\bigr]\\
&\hskip.25cm+\mathcal F^{-1}\bigl[\bigl(\hat G-\hat V\bigr)\chi_{(R,\infty)}(|\xi|)\bigr]-\mathcal F^{-1}\bigl[\hat K\chi_{(R,\infty)}(|\xi|)\bigr].
\end{aligned}
\end{equation*}
Thus, since $\mathcal F^{-1}$ maps $L^1$ into $L^\infty$, we have
\begin{equation*}
\begin{aligned}
\bigl\|\mathcal F^{-1}\bigl(\hat G-\hat K-\hat V\bigr)\bigr\|_{L^{\infty}}
&\le C\left[ \bigl\|\bigl(\hat G-\hat K-\hat V\bigr)\bigl(\chi_{[0,\varepsilon)}+\chi_{[\varepsilon,R]}\bigr)\bigr\|_{L^1}\right. \\
&\left.+\bigl\|\hat K\chi_{(R,\infty)}\bigr\|_{L^1}\right]+\bigl\| \mathcal F^{-1}\bigl[\bigl(\hat G-\hat V\bigr)\chi_{(R,\infty)}(|\xi|)\bigr]\bigr\|_{L^{\infty}}.
\end{aligned}
\end{equation*}
Hence, by the estimates \eqref{est:low}, \eqref{est:intermediate}, \eqref{est:high 1} and \eqref{est:high 2} in Proposition \ref{prop:fundamental solution standard type}, we obtain
\begin{equation*}
\bigl\| u-U-V\bigr\|_{L^\infty}\le Ct^{-1}\|u_0\|_{L^1}, \qquad t\ge 1.
\end{equation*}
Furthermore, from Proposition \ref{prop:multiplier 1} and Proposition \ref{prop:multiplier 2}, for all $r\in[1,\infty]$, we also have
\begin{equation*}
\bigl\| u-U-V\bigr\|_{L^r}\le Ct^{-\frac12}\|u_0\|_{L^r}, \qquad t\ge 1.
\end{equation*}
Therefore, by the interpolation inequality, we obtain the desired results.
The proof of \eqref{est:parabolic solution-hyperbolic solution} is similar, which finishes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{theo:standard type symmetry}]
The proof is similar to that of Theorem \ref{theo:standard type}, now using Propositions \ref{prop:low refined}, \ref{prop:multiplier 1 refined} and \ref{prop:multiplier 2 refined}. The proof is done.
\end{proof}
\section*{Appendix}
\subsection*{Eigenprojection computation}
In this subsection, we introduce a useful tool for computing the eigenprojection associated with a semi-simple eigenvalue of a matrix, based on the determinant of the matrix and its minors. We start with some definitions:
\noindent
-- a {\it set of indices} $\mathcal{I}$ is a set $\mathcal{I}=\{k_1<\dots < k_\ell\}$ with $k_p\in\{1,\dots,n\}$ for any $p$;\\
-- an {\it index-transformation} $\chi$ is an injective map from a set of indices $\mathcal{I}$ to $\{1,\dots,n\}$.\par
Then, we introduce some additional notations:\\
-- given two matrices $A, B\in M_n(\mathbb R)$, a set of indices $\mathcal{I}$ and
an index-transformation $\chi$ defined on $\mathcal{I}$, we denote by $\Phi(A,B;\mathcal{I},\chi)$ the matrix obtained by
substituting, for each $i\in\mathcal I$, the $i$-th column of $A$ by the $\chi(i)$-th column of $B$;\\
-- given sets of indices $\mathcal I,\mathcal K,\mathcal L$ satisfying $\mathcal K\subseteq \mathcal I$ and $|\mathcal K|=|\mathcal L|$, where $|\mathcal K|$ and $|\mathcal L|$ are the cardinalities of $\mathcal K$ and $\mathcal L$ respectively, the map $\chi_{\mathcal K\to \mathcal L}$ is the injective map from $\mathcal I$ to $\{1,\dots,n\}$ defined by $\chi_{\mathcal K\to \mathcal L}(k)=k$ if $k\not\in \mathcal K$, while for each $k\in\mathcal K$ there is a unique $\ell \in\mathcal L$ such that $\chi_{\mathcal K\to \mathcal L}(k)=\ell$.
We then set $[A]^k$ the $n\times n$ matrix with components defined by
\begin{equation*}
[A]^k_{ij}:=\sum \det\Phi( A,I; \mathcal{I},\chi_{i\to j}),
\qquad i,j\in\{1,\dots,n\},
\end{equation*}
where the sum is taken over the sets of indices $\mathcal {I}$ containing $i$ and with cardinality $|\mathcal{I}|=k+1$. If $k=0$, then $[A]^0=\textrm{\normalfont adj}\,(A)$. Hence, the above notation can be seen as an extended version of the adjoint matrix of $A$.
One sets
\begin{align}
\label{func:P} \mathbb{P}_k(A)&:=\dfrac{(k+1)[A]^{k}}{\textrm{\normalfont Tr}\, [A]^{k}},\\
\label{func:S} \mathbb{S}_k(A)&:=\dfrac{(k+1)(k+2)[A]^{k+1}\textrm{\normalfont Tr}\,[A]^{k}-(k+1)^2[A]^{k}\textrm{\normalfont Tr}\, [A]^{k+1}}{(k+2)\bigl(\textrm{\normalfont Tr}\,[A]^k\bigr)^2}.
\end{align}
Let $\Gamma\subset \rho(A)$ be an oriented closed curve enclosing $0$ but no other eigenvalue of $A$, where $\rho(A)$ is the resolvent set of $A$; one defines
\begin{equation}\label{eq:eigenprojections}
P:=-\dfrac{1}{2\pi i}\int_{\Gamma}(A-zI)^{-1}\,dz\qquad \textrm{and}\qquad S:=\dfrac{1}{2\pi i}\int_{\Gamma}z^{-1}(A-zI)^{-1}\,dz.
\end{equation}
The matrix $P\in M_n(\mathbb R)$ is called the eigenprojection associated with $0$ and the matrix $S\in M_n(\mathbb R)$ is called the reduced resolvent coefficient associated with $0$.
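The contour integrals defining $P$ and $S$ can be checked numerically. The sketch below (our own illustration, not part of the argument) approximates both integrals by the trapezoidal rule on the circle $|z|=1$ for a $2\times 2$ matrix with spectrum $\{0,2\}$; for this example the exact values of $P$ and $S$, and the identity $AS=I-P$ valid for a semi-simple zero eigenvalue, can be verified by hand.

```python
import numpy as np

def contour_P_S(A, radius=1.0, N=400):
    """Trapezoidal-rule approximations of the two contour integrals
    P = -(1/(2 pi i)) * integral of (A - zI)^{-1} dz  and
    S =  (1/(2 pi i)) * integral of z^{-1} (A - zI)^{-1} dz
    over the circle |z| = radius, assumed to enclose 0 and no other eigenvalue."""
    n = A.shape[0]
    P = np.zeros((n, n), dtype=complex)
    S = np.zeros((n, n), dtype=complex)
    for k in range(N):
        z = radius * np.exp(2j * np.pi * k / N)
        dz = 2j * np.pi * z / N          # z'(theta) * dtheta on the circle
        R = np.linalg.inv(A - z * np.eye(n))
        P += -R * dz
        S += (R / z) * dz
    return P / (2j * np.pi), S / (2j * np.pi)

# spectrum {0, 2}; the circle |z| = 1 encloses only the eigenvalue 0
A = np.array([[0.0, 1.0], [0.0, 2.0]])
P, S = contour_P_S(A)
# here P = [[1, -1/2], [0, 0]], S = [[0, 1/4], [0, 1/2]], and A S = I - P
```

Since the integrand is periodic and analytic on the circle, the trapezoidal rule converges exponentially fast in $N$, so $N=400$ already reproduces $P$ and $S$ to near machine precision.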
\begin{proposition}\label{prop:eigenprojection computation}
Let $A\in M_n(\mathbb R)$ and assume that $0 \in\sigma(A)$ is semi-simple with algebraic multiplicity $m\ge 1$. Then, the eigenprojection and the reduced resolvent coefficient associated with $0$ are given by
\begin{equation}\label{eq:eigenprojection formula}
P=\mathbb P_{m-1}(A)\qquad \textrm{and}\qquad S=\mathbb S_{m-1}(A).
\end{equation}
\end{proposition}
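The formula $P=\mathbb P_{m-1}(A)$ lends itself to a direct numerical test. The sketch below (an illustration of ours; the helper names are hypothetical) computes $[A]^{m-1}$ through its characterization as the coefficient of $x^{m-1}$ in $\textrm{adj}(A+xI)$ (cf.\ Corollary \ref{adjoint}), recovering the polynomial entries by exact interpolation, and compares $\mathbb P_{m-1}(A)$ with the spectral projection obtained from an explicit diagonalization.

```python
import numpy as np

def adjugate(M):
    return np.linalg.det(M) * np.linalg.inv(M)      # valid for invertible M

def bracket(A, h):
    """[A]^h = coefficient of x^h in adj(A + xI).  Entries of adj(A + xI)
    are polynomials of degree <= n-1, so interpolating through n sample
    points recovers them exactly."""
    n = A.shape[0]
    xs = 1.0 + np.arange(n)                         # points where A + xI is invertible
    vals = np.array([adjugate(A + x * np.eye(n)) for x in xs])
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            coeffs = np.polyfit(xs, vals[:, i, j], n - 1)   # highest power first
            out[i, j] = coeffs[n - 1 - h]
    return out

def P_formula(A, m):
    """P = (m [A]^{m-1}) / Tr [A]^{m-1} when 0 is semi-simple of multiplicity m."""
    B = bracket(A, m - 1)
    return m * B / np.trace(B)

# 0 is a semi-simple eigenvalue of multiplicity m = 2 for this A:
V = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
A = V @ np.diag([0.0, 0.0, 1.0]) @ np.linalg.inv(V)
P = P_formula(A, 2)
P_true = V @ np.diag([1.0, 1.0, 0.0]) @ np.linalg.inv(V)
```

For this $A$ one finds $P=P_{\rm true}$, $P^2=P$ and $AP=O$, as expected for a semi-simple zero eigenvalue.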
Before going to the proof of Proposition \ref{prop:eigenprojection computation}, we introduce the following results.
\begin{lemma}\label{mainlemma} Let $f(x):=\det\Phi(A+xB,C;\mathcal{I},\chi_{\mathcal{K}\to \mathcal{L}} )$ for $x\in\mathbb{C}$. By setting
\begin{equation*}
\mathcal{J}:=\bigl\{h \notin \mathcal{I}:\exists j \in \chi_{\mathcal K\to \mathcal L}(\mathcal I), c_j
= b_h\in \textrm{\normalfont col}(B) \bigr\}
\end{equation*}
where $\textrm{\normalfont col}(B)$ denotes the set of columns of $B$, for any nonnegative integer $m$ satisfying $m\le n-|\mathcal I|$, the following holds
\begin{equation*}\label{dfm}
f^{(m)}(x)=m!\sum\det \Phi(\Phi(A+xB,C;\mathcal{I},
\chi_{\mathcal{K}\to \mathcal{L}}),B;\mathcal{M},\chi_{\mathcal{M}\to \mathcal M})
\end{equation*}
where the sum is made on the set of indices $\mathcal {M}$ with cardinality $|\mathcal{M}|=m$
and $\mathcal{M}\cap( \mathcal{I}\cup\mathcal{J})=\emptyset$.
\end{lemma}
\begin{proof} By definition, we have
\begin{equation*}
f(x)=\bigwedge_{h=1}^{n}f_h(x)\qquad \textrm{where}\qquad f_h(x):=\begin{cases}(a_h+b_hx) & \textrm{if } h\notin \mathcal I,\\ c_{\chi_{\mathcal K\to \mathcal L}(h)}&\textrm{if } h\in \mathcal I.
\end{cases}
\end{equation*}
Hence, the derivative of order $m\in\mathbb N$ of $f$ satisfies
\begin{equation*}\label{derivative of order m}
f^{(m)}(x)=\sum \bigwedge_{h=1}^{n}f_h^{(s_h)}(x)\,\, \textrm{where }\,f_h^{(s_h)}(x):=\begin{cases}(a_h+b_h x)^{(s_h)} & \textrm{if } h \notin \mathcal I,\\ c_{\chi_{\mathcal K\to \mathcal L}(h)}^{(s_h )} &\textrm{if } h\in \mathcal I
\end{cases}
\end{equation*}
where $s_h:=s_h^1+\dots+s_h^m\in\mathbb N$ with $s_h^\ell \in\{0,1\}$ for all $\ell\in\{1,\dots,m\}$ and if we denote by $S\in\{1,0\}^{n\times m}$ the matrix defined by $S_{h\ell}:=s_h^\ell$, the sum is made on the set
\begin{equation*}
\mathcal S:=\left\{ S : s_1+\dots+s_n=m\right\}.
\end{equation*}
It means that $f^{(m)}(x)$ is the sum of a finite number of determinants, where the determinants are generated by the elements $S$ of $\mathcal S$ and they are given by $D_S:=\bigwedge_{h=1}^{n}f_h^{(s_h)}(x)$.
Moreover, for any matrix $S\in\mathcal S$, if $s_h\ge 2$ for some
$h\notin \mathcal I$, then $(a_h+b_hx)^{(s_h)}=O_{n\times 1}$, and if $s_h \ge 1$ for some $h \in\mathcal I$, then $c_{\chi_{\mathcal K\to \mathcal L}(h)}^{(s_h)}=O_{n\times 1}$; thus, the determinants related to these cases vanish. Hence, due to the condition $s_1+\dots+s_n=m$ where $m\le n-|\mathcal I|$, we can introduce a partition of $\mathcal S$ whose elements, denoted by $\mathcal S_{\mathcal M}$, are associated with index-sets $\mathcal M:=\{h_1,\dots,h_m\}\subset \{1,\dots,n\}\backslash \mathcal I$ and are given by
\begin{equation*}
\mathcal S_{\mathcal M}:=\left\{S: s_h=\delta_{\mathcal M}(h) \textrm{ for all } h=1,\dots,n\right\}
\end{equation*}
where
\begin{equation*}\label{condition K}
\delta_{\mathcal M}(h):=\left\{\begin{matrix} 1&\textrm{ if } h \in\mathcal M,\\
0& \textrm{ if }h \notin\mathcal M.\end{matrix}\right.
\end{equation*}
In particular, for any $\mathcal M$, if $S$ and $S'$ belong to $\mathcal S_{\mathcal M}$, one has $D_S=D_{S'}$ since $s_h=\delta_{\mathcal M}(h)=s'_h$ for all $h \in\{1,\dots,n\}$ where $s_h$ and $s'_h$ are the sum of the elements of the $h$-th rows of the matrices $S$ and $S'$ respectively. On the other hand, we have
\begin{equation*}
\begin{aligned}
D_S&=\bigwedge_{h=1}^{n}f_h^{(s_h)}(x)\qquad\textrm{where}\qquad f_h^{(s_h)}(x):=\begin{cases}b_h & \textrm{if } h \in \mathcal M,\\
(a_h+b_h x) &\textrm{if } h\notin \mathcal M\cup \mathcal I, \\ c_{\chi_{\mathcal K\to \mathcal L}(h)} &\textrm{if } h\in \mathcal I
\end{cases}\\
&= \det \Phi\bigl(\Phi\bigl(A+xB,C;\mathcal I,\chi_{\mathcal K\to \mathcal L}\bigr),B;\mathcal M,\chi_{\mathcal M\to \mathcal M}\bigr).
\end{aligned}
\end{equation*}
Moreover, let $\sigma$ be a permutation of the set $\{e_1,\dots,e_m\}$, where $e_\ell$ is the $\ell$-th row of the identity matrix $I_{m\times m}$. Then, by definition, the rows of any matrix $S\in\mathcal S_{\mathcal M}$ for $\mathcal M=\{h_1,\dots,h_m\}$ must be of the form
\begin{equation*}
\begin{pmatrix} S_{h_\ell 1} &\dots&S_{h_\ell m} \end{pmatrix}=\begin{cases} \sigma(e_\ell) &\textrm{ if } \ell \in\{1,\dots,m\},\\
O_{1\times m} &\textrm{ if } \ell\in\{m+1,\dots,n\}.\end{cases}
\end{equation*}
Therefore, since $D_S=D_{S'}$ for $S,S'\in\mathcal S_{\mathcal M}$ and since there are $m!$ such permutations $\sigma$, we obtain
\begin{equation*}
\begin{aligned}
f^{(m)}(x)&=\sum_{\mathcal M}\sum_{S\in\mathcal S_{\mathcal M}}D_S\\
&=m!\sum_{\mathcal M}\det \Phi\bigl(\Phi\bigl(A+xB,C;\mathcal I,\chi_{\mathcal K\to \mathcal L}\bigr),B;\mathcal M,\chi_{\mathcal M\to \mathcal M}\bigr).
\end{aligned}
\end{equation*}
Assume now that some $\mathcal M$ satisfies $\mathcal M\cap \mathcal J\ne \emptyset$; then there exists $h \in\mathcal M$ such that $h \notin \mathcal I$ and $b_h=c_j$ for some $j\in\chi_{\mathcal K\to \mathcal L}(\mathcal I)$. Hence, the determinants $D_S$ generated by $S\in\mathcal S_{\mathcal M}$ contain the column
\begin{equation*}
f_h^{(s_h)}(x)=b_h=c_j.
\end{equation*}
Furthermore, since $j\in\chi_{\mathcal K\to \mathcal L}(\mathcal I)$, there is $i\in\mathcal I$ such that $\chi_{\mathcal K\to \mathcal L}(i)=j$. Thus, the determinants $D_S$ also contain the column
\begin{equation*}
f_i^{(s_i)}(x)=c_{\chi_{\mathcal K\to \mathcal L}(i)}=c_j
\end{equation*}
as well. Since $h \notin\mathcal I$ and $i\in\mathcal I$, at least two columns coincide and the determinants $D_S$ accordingly vanish. Therefore, we can restrict the sum to those $\mathcal M$ with $\mathcal M\cap \mathcal J=\emptyset$. The proof is done.
\end{proof}
\begin{corollary}\label{adjoint}
Let $h\in\{0,\dots,n-1\}$ and $A\in M_n(\mathbb R)$, for $x\in \mathbb C$, the following hold
\begin{equation*}
[A]^h=(h!)^{-1}(\textrm{\normalfont adj}(A+xI))^{(h)}\big|_{x=0}, \qquad\textrm{\normalfont Tr}\,[A]^h=(h!)^{-1}(\det(A+xI))^{(h+1)}\big|_{x=0}
\end{equation*}
where $\textrm{\normalfont adj}(A+xI)$ is the adjoint matrix of $A+xI$.
\end{corollary}
\begin{proof}
By the definition of the adjoint matrix, the elements of the matrix are the minors of the matrix $A+xI$. On the other hand, one defines
\begin{equation*}
M_{ji}(x):=\det \Phi(A+xI,I;\{i\},\chi_{i\to j}), \qquad i,j\in\{1,\dots,n\}.
\end{equation*}
Thus, by the definition of the minor and the definition of $\Phi$, we have $(\textrm{\normalfont adj}(A+xI))_{ij}=M_{ji}(x)$ for all $i,j\in\{1,\dots,n\}$. Moreover, by Lemma \ref{mainlemma}, the following holds
\begin{equation*}
\begin{aligned}
M_{ji}^{(h)}(0)&=h!\sum_{\mathcal{H}\not\ni i,j}\det\Phi(\Phi(A,I;\{i\},\chi_{i\to j}),I;\mathcal H,\chi_{\mathcal H\to \mathcal H})\\
&=h!\sum_{\mathcal H\cup \{i\}}\det\Phi(A,I;\mathcal H \cup \{i\},\chi_{i\to j})=h![A]^h_{ij}
\end{aligned}
\end{equation*}
where $\mathcal H$ has the cardinality $|\mathcal H|=h$ for $h\in\{0,\dots,n-1\}$. This proves the first equality in the statement.
By the definition of $\chi_{\mathcal M \to \mathcal N}:\mathcal I\to \{1,\dots,n\}$ for any set of indices $\mathcal I$ containing $\mathcal M$, it follows that $\chi_{i\to i}\equiv \chi_{\mathcal I\to \mathcal I}$ for any $\mathcal I$ containing $i$. Thus, by the definition of $[A]^h$ for $h\in\{0,\dots,n-1\}$, we have
\begin{equation*}
\begin{aligned}
\textrm{\normalfont Tr}\,[A]^{h}&=\sum_{i=1}^n\sum_{\mathcal I\ni i}\det \Phi(A,I;\mathcal I,\chi_{i\to i})\\
&=\sum_{\mathcal I\ni 1}\det \Phi(A,I;\mathcal I,\chi_{\mathcal I \to \mathcal I})+\dots+\sum_{\mathcal I\ni n}\det \Phi(A,I;\mathcal I,\chi_{\mathcal I\to \mathcal I})
\end{aligned}
\end{equation*}
where $\mathcal I$ has the cardinality $|\mathcal I|=h+1$.
Moreover, any fixed set of indices $\mathcal I$ with $|\mathcal I|=h+1$ appears in $h+1$ terms on the right-hand side of the above formula for $\textrm{\normalfont Tr}\,[A]^{h}$, since each $i\in\mathcal I$ belongs to $\{1,\dots,n\}$. Thus, for any fixed $\mathcal I$, we can collect the $h+1$ identical quantities and we have
\begin{equation*}
\textrm{\normalfont Tr}\,[A]^{h}=(h+1)\sum_{\mathcal I}\det \Phi(A,I;\mathcal I,\chi_{\mathcal I \to \mathcal I})
\end{equation*}
where $\mathcal I$ has the cardinality $|\mathcal I|=h+1$.
Furthermore, by Lemma \ref{mainlemma}, one has
\begin{equation*}
(\det(A+xI))^{(h+1)}\big|_{x=0}=(h+1)!\sum_{\mathcal I}\det \Phi(A,I;\mathcal I,\chi_{\mathcal I \to \mathcal I}).
\end{equation*}
The proof is done.
\end{proof}
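The $h=0$ case of the trace identity in Corollary \ref{adjoint} is Jacobi's formula $\frac{d}{dx}\det(A+xI)=\textrm{\normalfont Tr}\,\textrm{adj}(A+xI)$, which can be tested numerically. The snippet below (our own sanity check, with a random test matrix) compares a central difference of the determinant with the trace of the adjugate at a generic point.

```python
import numpy as np

def adjugate(M):
    return np.linalg.det(M) * np.linalg.inv(M)      # valid for invertible M

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
I = np.eye(4)

# Jacobi's formula: d/dx det(A + xI) = Tr adj(A + xI), tested by a central
# difference at a generic point x0.
x0, eps = 0.3, 1e-6
lhs = (np.linalg.det(A + (x0 + eps) * I)
       - np.linalg.det(A + (x0 - eps) * I)) / (2 * eps)
rhs = np.trace(adjugate(A + x0 * I))
```

The higher derivatives of this identity at $x=0$ are exactly the relation $\textrm{\normalfont Tr}\,[A]^h=(h!)^{-1}(\det(A+xI))^{(h+1)}\big|_{x=0}$ used in the corollary.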
We can now give a proof for Proposition \ref{prop:eigenprojection computation}.
\begin{proof}[Proof of Proposition \ref{prop:eigenprojection computation}]
By definition, the resolvent of the matrix $A$ is given by
\begin{equation*}
R(z):=(A-zI)^{-1}=\dfrac{\textrm{\normalfont adj}(A-zI)}{\det(A-zI)}.
\end{equation*}
For $z$ small, the resolvent can be expanded as
\begin{equation*}
R(z)=\dfrac{1}{z^m}\dfrac{\sum_{h=0}^{n-1}(-1)^h(h!)^{-1}(\textrm{\normalfont adj}(A+xI))^{(h)}\big|_{x=0}z^h}{\sum_{h=m}^n(-1)^h(h!)^{-1}(\det(A+xI))^{(h)}\big|_{x=0}z^{h-m}}.
\end{equation*}
Thus, Corollary \ref{adjoint} implies that
\begin{equation*}
R(z)=\dfrac{1}{z^m}\dfrac{\sum_{h=0}^{n-1}(-1)^h[A]^hz^h}{\sum_{h=m}^n(-1)^hh^{-1}\bigl(\textrm{\normalfont Tr}\,[A]^{h-1}\bigr)z^{h-m}}.
\end{equation*}
On the other hand, by using the Laurent expansion of $R(z)$ (see \citep{kato}), we also have
\begin{equation*}
R(z)=-\sum_{h=1}^{+\infty}z^{-h-1}N^h-z^{-1}P+\sum_{h=0}^{+\infty}z^hS^{h+1},
\end{equation*}
where $P,S$ are in \eqref{eq:eigenprojections} and $N=AP$ is the nilpotent matrix associated with the eigenvalue $0$ of $A$.
Then, equating the two expressions for $R(z)$, we obtain the formulas. We finish the proof.
\end{proof}
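For a semi-simple eigenvalue the nilpotent part vanishes, so near $z=0$ the Laurent expansion reduces to $R(z)=-z^{-1}P+S+zS^2+\dots$. The sketch below (an illustration of ours with a specific $2\times 2$ matrix; the displayed $P$ and $S$ are its eigenprojection and reduced resolvent, computed by hand) checks the truncated expansion numerically at a small $z$.

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 2.0]])
# eigenprojection and reduced resolvent for the semi-simple eigenvalue 0,
# computed by hand for this particular A:
P = np.array([[1.0, -0.5], [0.0, 0.0]])
S = np.array([[0.0, 0.25], [0.0, 0.5]])

z = 1e-4
R = np.linalg.inv(A - z * np.eye(2))
approx = -P / z + S          # Laurent series with N = 0, truncated at order z^0
```

The truncation error is of size $|z|\,\|S^2\|$, so the agreement improves linearly as $z\to 0$.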
\subsection*{Perturbation theory for linear operators}
In this subsection, we collect some results from the perturbation theory for linear operators in finite-dimensional spaces that are used in this paper, and we sketch their proofs. The interested reader may consult \citep{kato} for more details.
\begin{proposition}\label{prop:subprojections}
Assume that $T$ is a matrix operator considered in a domain $\mathcal D:=\textrm{\normalfont ran}(P)$ where $P$ is a matrix operator. Let $(P_j)$ for $j=1,\dots,k$ be a sequence of matrix operators such that
\begin{equation}\label{eq:subprojections}
P_j^2=P_j,\quad P_jP_{j'}=O \textrm{ for } j\ne j', \quad P=\sum_{j=1}^kP_j \quad \textrm{and}\quad \textrm{\normalfont ran}(P)=\bigoplus_{j=1}^k\textrm{\normalfont ran}(P_j).
\end{equation}
If $T$ commutes with $P_j$ for $j=1,\dots,k$, then one has
\begin{equation}\label{eq:commuting with operator}
TP_j=P_jT=P_jTP_j\qquad\textrm{and}\qquad TP=PT=PTP=\sum_{j=1}^k(TP_j).
\end{equation}
Moreover, $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P)$ if and only if there is $j_0\in\{1,\dots,k\}$ such that $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P_{j_0})$.
\end{proposition}
\begin{proof}
For $j=1,\dots,k$, since $P_j^2=P_j$ by \eqref{eq:subprojections}, one has $P_jTP_j=TP_j^2=TP_j=P_jT$ if $T$ commutes with $P_j$.
Also from \eqref{eq:subprojections}, one has $P=\sum_{j=1}^kP_j$ and thus, we have
\begin{equation*}
TP=\sum_{j=1}^k(TP_j)=\sum_{j=1}^kP_jT=PT.
\end{equation*}
We now prove that $P$ is a projection. Indeed, since $P_jP_{j'}=O$ for $j\ne j'$ and $P_j^2=P_j$, we have
\begin{equation*}
P^2=\left(\sum_{j=1}^kP_j\right)^2=\sum_{j,j'=1}^kP_jP_{j'}=\sum_{j=1}^kP_j=P.
\end{equation*}
Hence, we have $PTP=P^2T=PT=TP=\sum_{j=1}^k(TP_j)$.
Assume that there is $u\in\textrm{\normalfont ran}(P)$ such that $u\ne O_{n\times 1}$ and $Tu=\lambda u$. Then, $u=Pu$ and one has $TPu=\lambda Pu$. Moreover, since $PP_j=\sum_{j'=1}^k(P_{j'}P_j)=P_j=\sum_{j'=1}^k(P_jP_{j'})=P_jP$ and $TP_j=P_jT$, one obtains
\begin{equation*}
TP_ju=T(P_jP)u=(P_jT)Pu=\lambda P_jPu=\lambda P_ju.
\end{equation*}
On the other hand, since the direct sum $\sum_{j=1}^k(P_ju)=Pu=u\ne O_{n\times 1}$, there is at least one $j_0\in\{1,\dots,k\}$ such that $P_{j_0}u\ne O_{n\times 1}$. Thus, setting $v:=P_{j_0}u\in\textrm{\normalfont ran}(P_{j_0})$, we have $v\ne O_{n\times 1}$ and $Tv=\lambda v$.
Conversely, if $v\in\textrm{\normalfont ran}(P_{j_0})$ for some $j_0\in\{1,\dots,k\}$ satisfies $v\ne O_{n\times 1}$ and $Tv=\lambda v$, then $v\in\textrm{\normalfont ran}(P)$ since $\textrm{\normalfont ran}(P_{j_0})\subset \textrm{\normalfont ran}(P)$ by \eqref{eq:subprojections}, and we finish the proof.
\end{proof}
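The algebra in \eqref{eq:subprojections} and \eqref{eq:commuting with operator} can be illustrated with explicit spectral projections. The snippet below (a toy example of ours) builds a diagonalizable $T$ with spectrum $\{1,3\}$ in a non-diagonal basis and verifies the stated identities, with $P=P_1+P_2$ equal to the identity since the two projections decompose all of $\mathbb C^3$.

```python
import numpy as np

# T diagonalizable with spectrum {1, 3}, written in a non-diagonal basis;
# P1, P2 are the spectral projections of the two eigenvalues.
V = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
Vinv = np.linalg.inv(V)
T = V @ np.diag([1.0, 1.0, 3.0]) @ Vinv
P1 = V @ np.diag([1.0, 1.0, 0.0]) @ Vinv
P2 = V @ np.diag([0.0, 0.0, 1.0]) @ Vinv
P = P1 + P2                  # here ran(P) = C^3, i.e. P = I
```

One checks numerically that $P_j^2=P_j$, $P_1P_2=O$, $TP_j=P_jT=P_jTP_j$, exactly the hypotheses and conclusions of the proposition.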
\begin{proposition}\label{prop:construction of subprojections}
For $x\in\mathbb C$ small enough, let $T(x)=T^{(0)}+\mathcal O(|x|)$ where $T^{(0)}$ is a matrix and $T$ is considered in the domain $\mathcal D:=\textrm{\normalfont ran}(P)$ where $P(x)=P^{(0)}+\mathcal O(|x|)$. Assume that there are $k\le n$ distinct eigenvalues $\lambda_j^{(0)}$ of $T^{(0)}$ considered in $\textrm{\normalfont ran}(P^{(0)})$ where $j=1,\dots,k$. Then, there is a unique sequence $(P_j)$ satisfying \eqref{eq:subprojections} and \eqref{eq:commuting with operator} such that $P_j(x)=P_j^{(0)}+\mathcal O(|x|)$ where $P_j^{(0)}$ is the eigenprojection associated with $\lambda_j^{(0)}$ where $j=1,\dots,k$.
In particular, for any $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P)$, $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P_j)$ if and only if $\lambda(x)\to \lambda_j^{(0)}$ as $|x|\to 0$ for $j\in\{1,\dots,k\}$.
\end{proposition}
Before going to the proof of Proposition \ref{prop:construction of subprojections}, we need the following lemma. For a matrix operator $T$ depending on $x\in\mathbb{C}$, let its resolvent be
\begin{equation}
R(x,z):=(T(x)-zI)^{-1},\qquad z\in \rho(T).
\end{equation}
\begin{lemma}\label{lem:resolvent}
The resolvent of the matrix operator $T(x):=T^{(0)}+\mathcal O(|x|)$ is holomorphic in any neighborhood of $(x,y)\in\mathbb C^2$ such that $y\in \rho\bigl(T^{(0)}\bigr)$. Moreover, if $\Gamma$ is a compact subset of $\rho(T^{(0)})$, then $R(x,y)$ is a convergent series as $|x|\to 0$ uniformly in $y\in\Gamma$ and thus one has the expansion
\begin{equation}\label{eq:expansion of resolvent}
R(x,y)=R^{(0)}(y)+\mathcal O(|x|), \qquad |x|\to 0,
\end{equation}
where $R^{(0)}(y):=(T^{(0)}-yI)^{-1}$.
As a consequence, no eigenvalue of $T$ is contained in $\Gamma$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lem:resolvent}]
For $z\in\rho(T)$ and $y\in \rho\bigl(T^{(0)}\bigr)$, we have
\begin{equation*}
\begin{aligned}
T(x)-zI&=\bigl(T^{(0)}-yI\bigr)-\left((z-y)I-\bigl(T(x)-T^{(0)}\bigr)\right)\\
&=\left(1-\left((z-y)I-\bigl(T(x)-T^{(0)}\bigr)\right)\bigl(T^{(0)}-yI\bigr)^{-1}\right)\bigl(T^{(0)}-yI\bigr).
\end{aligned}
\end{equation*}
Thus, taking the inverse and since $T(x)-T^{(0)}=\mathcal O(|x|)$ for $x$ small, we obtain
\begin{equation*}
R(x,z)=R^{(0)}(y)\left(1-\left((z-y)I-\mathcal O(|x|)\right)R^{(0)}(y)\right)^{-1}.
\end{equation*}
Furthermore, for any matrix norm $\|\cdot\|$, we also have
\begin{equation*}
\left\|\left((z-y)I-\mathcal O(|x|)\right)R^{(0)}(y)\right\|\le \left(|z-y|+C|x|\right)\bigl\|R^{(0)}(y)\bigr\|<1
\end{equation*}
for $x$ and $z-y$ small enough. This implies that $R(x,z)$ can be expanded as a convergent series and is holomorphic in any neighborhood of $(x,y)$.
On the other hand, for $x$ small and $y\in\rho\bigl(T^{(0)}\bigr)$, one has
\begin{equation*}
T(x)-yI=\bigl(T^{(0)}-yI\bigr)+\mathcal O(|x|)=\bigl(I+\mathcal O(|x|)\bigl(T^{(0)}-yI\bigr)^{-1}\bigr)\bigl(T^{(0)}-yI\bigr).
\end{equation*}
Thus, one deduces
\begin{equation*}
R(x,y)=R^{(0)}(y)\bigl(I+\mathcal O(|x|)R^{(0)}(y)\bigr)^{-1}=R^{(0)}(y)\bigl(I+\mathcal O(|x|)\bigr)=R^{(0)}(y)+\mathcal O(|x|).
\end{equation*}
On the other hand, one notes that $R(x,y)$ is expressed in terms of $R^{(0)}(y)$. Since $\Gamma$ is a compact subset of $\rho\bigl(T^{(0)}\bigr)$, the norm $\big\|\mathcal O(|x|)R^{(0)}(y)\bigr\|$ can be made smaller than $1$ uniformly for all $y\in\Gamma$. As a consequence, since $R(x,y)$ exists for all $x$ small and $y\in \Gamma$, no eigenvalue of $T$ belongs to $\Gamma$.
\end{proof}
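The expansion \eqref{eq:expansion of resolvent} says that $\|R(x,y)-R^{(0)}(y)\|/|x|$ stays bounded as $|x|\to 0$ for fixed $y\in\rho\bigl(T^{(0)}\bigr)$. The sketch below (a numerical check of ours, with an arbitrary perturbation direction $B$) confirms that this ratio stabilizes, consistently with a first-order term of size $\|R^{(0)}BR^{(0)}\|$.

```python
import numpy as np

T0 = np.array([[0.0, 1.0], [0.0, 2.0]])
B = np.array([[0.3, -0.1], [0.2, 0.4]])    # arbitrary perturbation direction
y = 1.0 + 0.5j                             # a point in the resolvent set of T0
I = np.eye(2)
R0 = np.linalg.inv(T0 - y * I)

ratios = []
for xv in [1e-2, 1e-3, 1e-4]:
    Rx = np.linalg.inv(T0 + xv * B - y * I)
    ratios.append(np.linalg.norm(Rx - R0) / xv)
# the ratios stabilize near ||R0 B R0||, consistent with R(x,y) = R0(y) + O(|x|)
```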
We are now going back to the proof of Proposition \ref{prop:construction of subprojections}.
\begin{proof}[Proof of Proposition \ref{prop:construction of subprojections}]
We begin with the following observations. Let $\lambda\in \sigma(T)$ considered in $\mathbb C^n$; then $\lambda$ must be a root of the dispersion polynomial $p:=\det(T-\lambda I)$, which is an analytic function of $x\in \mathbb C$ since $T$ is analytic in $x\in \mathbb C$. Moreover, it is known that $\lambda$ is continuous and converges to an eigenvalue of $T^{(0)}$ as $|x|\to 0$ since $T(x)=T^{(0)}+\mathcal O(|x|)$ as $|x|\to 0$. Thus, one can write
\begin{equation}\label{eq:formula of eigenvalue of T}
\lambda(x):=\lambda^{(0)}+{\scriptstyle\mathcal O}(1),\qquad |x|\to 0,
\end{equation}
where $\lambda^{(0)}\in\sigma\bigl(T^{(0)}\bigr)$ considered in $\mathbb C^n$ is the limit of $\lambda$ as $|x|\to 0$. In particular, due to the formula \eqref{eq:formula of eigenvalue of T}, the eigenvectors $u\in\mathbb C^{n}$ associated with $\lambda$ can be chosen such that
\begin{equation}\label{eq:formula of eigenvector of lambda}
u(x):=u^{(0)}+{\scriptstyle\mathcal O}(1),\qquad |x|\to 0,
\end{equation}
where $u^{(0)}\in\mathbb C^n$ are the eigenvectors associated with $\lambda^{(0)}$. It follows that $u\in \textrm{\normalfont ran}(P)$ if and only if $u^{(0)}\in\textrm{\normalfont ran}\bigl(P^{(0)}\bigr)$. Indeed, one has
\begin{equation*}
Pu=\bigl(P^{(0)}+\mathcal O(|x|)\bigr)\bigl(u^{(0)}+{\scriptstyle\mathcal O}(1)\bigr)=P^{(0)}u^{(0)}+{\scriptstyle\mathcal O}(1)
\end{equation*}
and thus $Pu=u$ if and only if $P^{(0)}u^{(0)}=u^{(0)}$. It implies that $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P)$ if and only if $\lambda^{(0)}\in\sigma\bigl(T^{(0)}\bigr)$ considered in $\textrm{\normalfont ran}\bigl(P^{(0)}\bigr)$. Therefore, if $\lambda_j^{(0)}$ for $j=1,\dots,k$ are the $k$ distinct eigenvalues of $T^{(0)}$ considered in $\textrm{\normalfont ran}\bigl(P^{(0)}\bigr)$, then the above argument and the expansion \eqref{eq:formula of eigenvalue of T} show that any eigenvalue $\lambda$ of $T$ considered in the domain $\mathcal D=\textrm{\normalfont ran}(P)$ converges to an eigenvalue $\lambda_j^{(0)}$ of $T^{(0)}$ considered in $\textrm{\normalfont ran}\bigl(P^{(0)}\bigr)$ for some $j\in\{1,\dots,k\}$ as $|x|\to 0$. In particular, for each $j\in\{1,\dots,k\}$, the set of all eigenvalues $\lambda$ of $T$ considered in $\mathcal D$ such that $\lambda\to \lambda_j^{(0)}$ as $|x|\to 0$ is called the $\lambda_j^{(0)}$-group of $T$. For convenience, we write formally
\begin{equation}\label{eq:subgroups 1}
\sigma(T)\textrm{ considered in } \mathcal D =\bigcup_{j=1}^k\bigl(\lambda_j^{(0)}\textrm{-group}\bigr),
\end{equation}
where
\begin{equation}\label{eq:subgroups 2}
\lambda_j^{(0)}\textrm{-group}:=\bigl\{\lambda\in \sigma(T)\textrm{ considered in } \mathcal D :\lambda\to \lambda_j^{(0)} \textrm{ as }|x|\to 0\bigr\}.
\end{equation}
We are going to prove the unique existence of a sequence $(P_j)$, $j=1,\dots,k$, satisfying \eqref{eq:subprojections} and \eqref{eq:commuting with operator}. First of all, we consider the domain $\mathcal D=\mathbb C^n$, {\it i.e.} $P=I$ is the identity matrix; since $P(x)=P^{(0)}+\mathcal O(|x|)$ as $|x|\to 0$, $P^{(0)}=I$ as well. Hence, the eigenvalues $\lambda$ of $T$ and $\lambda_{j}^{(0)}$ of $T^{(0)}$ in this case are considered in $\mathbb C^n$. Let $\lambda\in\sigma(T)$ and let $\Gamma_\lambda$ be a closed curve enclosing $\lambda$ but no other eigenvalue of $T$ in the complex plane; since $\lambda$ is a singularity of the resolvent $R(z)=(T-zI)^{-1}$ of $T$, the Cauchy integral
\begin{equation}
P_\lambda(x):=-\dfrac{1}{2\pi i}\int_{\Gamma_\lambda}R(x,z)\,dz
\end{equation}
is exactly the eigenprojection associated with $\lambda$. The matrix operator $N_\lambda:=(T-\lambda I)P_\lambda$
is then the nilpotent part associated with $\lambda$. Moreover, $TP_\lambda=\lambda P_\lambda+N_\lambda=P_\lambda T$. Nonetheless, the resolvent $R(x,z)$ for $x$ small and $z\in\rho(T)$ cannot be expanded explicitly in general, except when $z$ belongs to a compact set contained in $\rho\bigl(T^{(0)}\bigr)$, as provided by Lemma \ref{lem:resolvent}.
Based on that, for $j\in\{1,\dots,k\}$, let $\Gamma_j$ be a closed curve in $\rho\bigl(T^{(0)}\bigr)$ enclosing the eigenvalue $\lambda_j^{(0)}$ but no other eigenvalue of $T^{(0)}$. Then, by Lemma \ref{lem:resolvent}, no eigenvalue of $T$ lies on $\Gamma_j$ and therefore, for $x$ small enough, the interior domain bounded by $\Gamma_j$ encloses exactly those eigenvalues of $T$ such that $\lambda\to \lambda_j^{(0)}$ as $|x|\to 0$, {\it i.e.} the $\lambda_j^{(0)}$-group is contained in this domain while the other groups of $T$ are not. Hence, for every $\lambda$ in the $\lambda_j^{(0)}$-group, one can choose $\Gamma_\lambda$ strictly contained in the domain bounded by $\Gamma_j$ and one has
\begin{equation*}
P_j(x):=-\dfrac{1}{2\pi i}\int_{\Gamma_j}R(x,y)\,dy=\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}\left(-\dfrac{1}{2\pi i}\int_{\Gamma_\lambda}R(x,z)\,dz\right)=\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}P_\lambda.
\end{equation*}
Hence, $P_j$ is called the {\it total projection} associated with the $\lambda_j^{(0)}$-group of $T$.
The sequence of total projections $P_j$, $j\in\{1,\dots,k\}$, satisfies the properties \eqref{eq:subprojections}. Indeed, since $P_\lambda$ is an eigenprojection for each $\lambda$ in the $\lambda_j^{(0)}$-group, one has
\begin{equation*}
P_j^2=\left(\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}P_\lambda\right)^2=\sum_{\lambda,\lambda'\in \lambda_j^{(0)}\textrm{-group}}P_\lambda P_{\lambda'}=\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}P_\lambda=P_j,
\end{equation*}
and for $j\ne j'$, since $\lambda_j^{(0)}\ne \lambda_{j'}^{(0)}$, one has
\begin{equation*}
P_jP_{j'}=\left(\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}P_\lambda\right)\left(\sum_{\lambda'\in \lambda_{j'}^{(0)}\textrm{-group}}P_{\lambda'}\right)=\sum_{\substack{ \lambda\in \lambda_j^{(0)}\textrm{-group} \\ \lambda'\in \lambda_{j'}^{(0)}\textrm{-group} }}P_\lambda P_{\lambda'}=O
\end{equation*}
since these two groups are distinct. Moreover, we have $\mathbb C^n=\bigoplus_{\lambda\in \sigma(T)}\textrm{\normalfont ran}(P_\lambda)$ and $I=\sum_{\lambda \in \sigma(T)}P_\lambda$ and thus from \eqref{eq:subgroups 1} and \eqref{eq:subgroups 2}, one deduces
\begin{equation*}
I=\sum_{j=1}^k\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}P_\lambda=\sum_{j=1}^kP_j \quad \textrm{and}\quad \mathbb C^n=\bigoplus_{j=1}^k\bigoplus_{\lambda\in \lambda_j^{(0)}\textrm{-group}}\textrm{\normalfont ran}(P_\lambda)=\bigoplus_{j=1}^k\textrm{\normalfont ran}(P_j).
\end{equation*}
Then, the property \eqref{eq:commuting with operator} holds if one proves that $T$ commutes with $P_j$ for all $j\in\{1,\dots,k\}$, due to Proposition \ref{prop:subprojections}. In fact, for all $\lambda\in \sigma(T)$, since $TP_\lambda=P_\lambda T$, one obtains
\begin{equation*}
TP_j=\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}(TP_\lambda)=\sum_{\lambda\in \lambda_j^{(0)}\textrm{-group}}(P_\lambda T)=P_jT
\end{equation*}
for all $j\in\{1,\dots,k\}$.
On the other hand, from \eqref{eq:expansion of resolvent}, one has
\begin{equation*}
P_j(x):=-\dfrac{1}{2\pi i}\int_{\Gamma_j}R(x,y)\,dy=-\dfrac{1}{2\pi i}\int_{\Gamma_j}R^{(0)}(y)\,dy+\mathcal O(|x|),\qquad |x|\to 0,
\end{equation*}
where $R^{(0)}(y)=\bigl(T^{(0)}-yI\bigr)^{-1}$ for $y\in\rho\bigl(T^{(0)}\bigr)$. Then, it is easy to see that $R^{(0)}$ is the resolvent of the matrix $T^{(0)}$ and thus by the definition of $\Gamma_j$, it implies that
\begin{equation*}
P_j(x)=P_j^{(0)}+\mathcal O(|x|),\qquad |x|\to 0,
\end{equation*}
where $P_j^{(0)}$ is the eigenprojection associated with $\lambda_j^{(0)}$ since it is known that $P_j^{(0)}=-\dfrac{1}{2\pi i}\int_{\Gamma_j}R^{(0)}(y)\,dy$.
We have thus constructed the desired sequence $(P_j)$, $j=1,\dots,k$, in the case $\mathcal D=\mathbb C^n$. For the case $\mathcal D=\textrm{\normalfont ran}(P)$, it is enough to define the unique eigenprojection $\tilde P_j$ associated with the domain $\textrm{\normalfont ran}(P_j)\cap \textrm{\normalfont ran}(P)$, where $P_j$ is constructed as before for each $j\in\{1,\dots,k\}$. We denote $\tilde P_j$ again by $P_j$ for $j=1,\dots,k$.
Finally, we prove that for any $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P)$, $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P_j)$ if and only if $\lambda(x)\to \lambda_j^{(0)}$ as $|x|\to 0$ for $j\in\{1,\dots,k\}$. Indeed, for each $j\in\{1,\dots,k\}$, since $P_j(x)=P_j^{(0)}+\mathcal O(|x|)$ as $|x|\to 0$, arguing as at the beginning of this proof, one shows that $\lambda\in\sigma(T)$ considered in $\textrm{\normalfont ran}(P_j)$ if and only if $\lambda^{(0)}\in\sigma\bigl(T^{(0)}\bigr)$ considered in $\textrm{\normalfont ran}\bigl(P_j^{(0)}\bigr)$, where $\lambda^{(0)}$ is the limit of $\lambda$ as $|x|\to 0$. On the other hand, $\textrm{\normalfont ran}\bigl(P_j^{(0)}\bigr)\subset \textrm{\normalfont ran}\bigl(P^{(0)}\bigr)$ due to the fact that $\textrm{\normalfont ran}(P_j)\subset \textrm{\normalfont ran}(P)$. Thus, $\lambda^{(0)}\in\sigma\bigl(T^{(0)}\bigr)$ considered in $\textrm{\normalfont ran}\bigl(P^{(0)}\bigr)$, which is the defining property of the eigenvalues $\lambda_j^{(0)}$ for $j\in\{1,\dots,k\}$. Hence, there is a unique $j_0\in\{1,\dots, k\}$ such that $\lambda^{(0)}=\lambda_{j_0}^{(0)}$ since the $\lambda_j^{(0)}$, $j\in\{1,\dots,k\}$, are distinct. Nonetheless, since $\lambda^{(0)}$ is considered in $\textrm{\normalfont ran}\bigl(P_j^{(0)}\bigr)$, one obtains $j_0=j$ since $P_j^{(0)}P_{j_0}^{(0)}\ne O$ if and only if $j=j_0$. The proof is done.
\end{proof}
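The continuity $P_j(x)=P_j^{(0)}+\mathcal O(|x|)$ of the total projections can also be observed numerically. The sketch below (an illustration of ours, using a trapezoidal-rule contour integral) computes the total projection of the $0$-group of $T^{(0)}+xB$ over the fixed circle $\Gamma_1=\{|z|=1\}$ and checks that the distance to $P_1^{(0)}$ decays linearly in $x$.

```python
import numpy as np

def total_projection(T, center, radius, N=400):
    """-(1/(2 pi i)) * contour integral of (T - zI)^{-1} dz over the circle
    |z - center| = radius, by the trapezoidal rule."""
    n = T.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k in range(N):
        z = center + radius * np.exp(2j * np.pi * k / N)
        dz = 2j * np.pi * (z - center) / N
        P += -np.linalg.inv(T - z * np.eye(n)) * dz
    return P / (2j * np.pi)

T0 = np.diag([0.0, 0.0, 2.0])
B = np.array([[0.1, 0.2, 0.0], [0.3, -0.1, 0.1], [0.0, 0.2, 0.3]])
P0 = np.diag([1.0, 1.0, 0.0])       # total projection of the 0-group of T0

errs = []
for xv in [1e-1, 1e-2, 1e-3]:
    Px = total_projection(T0 + xv * B, 0.0, 1.0).real
    errs.append(np.linalg.norm(Px - P0))
# errs decays linearly in x: P_1(x) = P_1^{(0)} + O(|x|)
```

For $x$ in this range the $0$-group eigenvalues of $T^{(0)}+xB$ stay well inside $\Gamma_1$ while the third eigenvalue stays near $2$, outside, so the contour is admissible for each $x$.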
\section*{Acknowledgments} The second author's research is supported by the doctoral grant of Gran Sasso Science Institute 2014--2017.
\bibliographystyle{elsarticle-num-sort}
Conformal field theories (CFTs) are basic mathematical structures in the study of a wide range of physical phenomena. A plethora of different CFTs
arises when one studies physical systems in various dimensions and in various contexts. CFTs are also a fruitful ground to study general mathematical properties of quantum physics. An interesting question is to understand whether one can classify all possible CFTs and their properties: in other words, what is the ``space'' of all CFTs and does it have a natural mathematical structure?
One can approach this question in different ways. One such approach, based on assuming some properties of a theory of interest and fully exploiting constraints following from conformal symmetry to deduce additional properties, is called the {\it conformal bootstrap}. This approach focuses on a single given CFT and has received a lot of attention in recent years \cite{Hartman:2022zik,Poland:2022qrs}. One of the basic properties defining a CFT (and a quantum field theory (QFT) in more generality) is the notion of symmetry. We can think of symmetry as a collection of topological extended operators a given theory admits \cite{Gaiotto:2014kfa}. This turns out to be a very rich structure associating mathematically a higher fusion category to a given theory. For an elementary introduction to categories see {\it e.g.} \cite{MR1712872}. The higher fusion category generalizes the familiar notion of a group one associates to a $0$-form symmetry. The generalized structure incorporates anomalies, higher form symmetries, higher group structures, and non-invertible topological defects into a single rich structure. Understanding and fully exploiting such structures also has received a lot of attention recently (see {\it e.g.} \cite{Cordova:2022ruw,Freed:2022qnc}). Studying symmetries of a theory, in principle, also focuses on properties of a given theory of interest. However, the symmetry structure of a theory obtained as a deformation of another one is constrained by the latter through the ideas of matching anomalies following the seminal work of `t Hooft \cite{tHooft:1979rat}. (See {\it e.g.} \cite{Komargodski:2020mxz, Cordova:2019jqi} for more modern applications of these ideas.)
In this note, we want to consider a connection between deformations and higher fusion categories of topological defects. In particular, we will discuss how one can understand more mathematically the set of all CFTs, either in a given dimension or in any dimension, as a category. More precisely, we will discuss a $2$-category structure of the set of CFTs.\footnote{Categorical structure of the space of CFTs was discussed before, see {\it e.g.} \cite{Gaiotto:2015aoa}. There, the morphisms are taken to be interfaces of two CFTs. One might try to relate the picture of RG domain walls of \cite{Gaiotto:2012np,Dimofte:2013lba} to the one we present here (we thank C.~Beem and Y.~Tachikawa for stressing this to us).
Moreover, TQFTs (and CFTs) themselves can be defined as functors between various categories \cite{MR2079383,MR1001453,Kontsevich:2021dmb,Dedushenko:2022zwd}. Categorical language to organize various conjectures about class ${\cal S}$ \cite{Gaiotto:2009hg,Gaiotto:2009we} theories were discussed in \cite{Tachikawa:2017byo}. } For an extensive introduction to 2-categories see {\it e.g.}~\cite{Gray1974-qy, Johnson2020-oa}.\footnote{A rigorous definition of 2-categories with monoidal structure can be found in \cite{Kapranov-book, KAPRANOV1994241}. See also \cite{Ahmadi2020-lq, BAEZ1996196}.} The $2$-category will have objects given by the CFTs, $1$-morphisms related to deformations taking one CFT to another, and finally $2$-morphisms related to $0$-form symmetry. Once this is done, one can, in principle, add to the discussion the higher form symmetries, and more generally higher categorical structures, though we will refrain from doing that explicitly here. The category structure of the set of CFTs would, in principle, impose certain mathematical relations on the space of QFTs. {\it The main motivation for this note is to rewrite known facts about CFTs in the categorical language with the hope that this will lead eventually to deeper insights into the structure of the set of all CFTs.}
Before we begin let us stress the general philosophy of the construction. We will start with the set of CFTs and will study relations between these given by various deformations. Importantly, we will not study {\it all} deformations of a given CFT. A general deformation can lead to a variety of behaviors at low energy (IR): for example, one can obtain free vector fields in the IR. The study of general structure of RG flows is an interesting problem, see {\it e.g.} \cite{Gukov:2016tnp}: we will only consider deformations which take a CFT to a CFT.
The outline of the note is as follows. We will first discuss the $1$- and $2$-category structure of set of CFTs in given dimension $D$. Then we will discuss the categorical structure of the set of CFTs in any dimension. Finally, we will make several comments and translate some of the physical statements and conjectures about the space of all possible CFTs to this categorical language.
\section{Category of CFTs}
Let us define the set ${\cal C}^{(D)}_0$ to be the set of all CFTs in $D$ space-time dimensions. By a CFT we mean a unitary quantum field theory which has conformal symmetry.\footnote{We will not distinguish here between relative and absolute CFTs
\cite{Freed:2012bs}. In principle we can also consider the CFTs to be defined with the corresponding symmetry TFT \cite{Freed:2012bs,Gaiotto:2020iye,Apruzzi:2021nmk,Freed:2022qnc}. Moreover, one in principle could also consider {\it direct sums} of theories \cite{Sharpe:2022ene} ({\it e.g.} these naturally seem to arise in some compactification scenarios \cite{Gadde:2015wta}): we will not explicitly consider these here.} The action of the conformal symmetry on various structures in the theory can be nontrivial (faithful) if the theory is a proper CFT, or can be non-faithful in the degenerate, TQFT, case.
One can further specialize in various ways. For example, we can consider only CFTs with a particular amount of supersymmetry, or CFTs residing on the same conformal manifold. We will do so later on, but for now we shall keep the discussion more generic.
\begin{figure}
\includegraphics[scale=0.25]{cat.pdf}
\caption{ \label{cat}The category of CFTs and the morphisms as sequences of deformations.}
\end{figure}
We define a $1$-category ${\cal C}^{(D)}$ so that its objects are given by ${\cal C}^{(D)}_0$. A morphism connecting two objects $X,\, Y\in {\cal C}^{(D)}_0$, $f:X\to Y$, corresponds to a field theoretic operation on CFT $X$ which results in CFT $Y$ in the {\it IR}. We will refer to the UV theory as the {\it source} and the IR CFT as the {\it target} of the deformation. An example is deforming by a local operator ${\cal O}$. That is, the correlation functions of the source and the target CFT are related schematically as,
\begin{eqnarray}
\langle \,\dots\, \rangle_Y = \langle \,\dots\, e^{i\lambda\, \int d^Dx \,\cal O}\rangle_X\,,
\end{eqnarray} where on the right, we take the proper low energy limit of the correlators. If $X\neq Y$ the deformation will be either relevant or exactly marginal by definition.
If $X=Y$ we can have nontrivial morphisms which correspond to irrelevant deformations. The parameter $\lambda$ is the coupling. Note that in the case of exactly marginal deformations, different values of the coupling correspond to different morphisms with different target objects. In the case of relevant and irrelevant deformations the precise magnitude of $\lambda$ is inessential (as long as it is of definite sign and is small enough), as it only sets an RG scale while we are interested solely in the CFT endpoints of the flow. Another type of morphism that we will consider is taking a CFT and gauging part of its (generalized) global symmetry.\footnote{The gauging, again, might depend on inessential continuous coupling constants or on consequential discrete parameters, such as the level of a Chern-Simons term in $3d$. We can also gauge discrete sub-groups of the global symmetry though we will focus the discussion on continuous ones.}
Finally, one, in principle, can also consider deforming a CFT by turning on a vacuum expectation value (VEV) for an operator, but we will only discuss operator and gauging deformations in what follows. Let us refer to these deformations as being the {\it basic} ones. We will soon define the morphisms more rigorously.
We immediately deduce that the above definition of morphisms has to be extended for the structure to define a category. The issue is completeness under composition of morphisms. Given objects $X,\, Y$, and $Z$ corresponding to CFTs, if we have two morphisms,
\begin{eqnarray}
f:X\to Y\,,\qquad g:Y\to Z\,,
\end{eqnarray} what is the morphism $g\circ f:X\to Z$ corresponding to the composition of the two? Naively we might be tempted to construct this morphism by searching for an appropriate deformation of one of the two types discussed above: deforming the CFT $X$, say, by an operator or gauging, leading to CFT $Z$. However, these are not enough to cover all the possibilities. Imagine that we go from $X$ to $Y$ using one of the deformations above ($f$) and the global symmetry of $Y$ is larger than that of $X$: some of the symmetry emerges in the IR. Then, as deformation $g$, we gauge a subgroup of this {\it emergent} symmetry. We cannot perform this operation directly on $X$ without first performing $f$, flowing to the IR and then gauging. Thus, in order for the structure we discuss to be a well defined category we need also to consider deformations of $X$ which are defined by any sequence of the basic deformations. See Figure \ref{cat}.
Let us be more precise and define the following. We consider, for concreteness, the set of morphisms between two CFTs corresponding to operator deformations. Since the source theory is a CFT, we can classify all Lorentz scalar deformations by their scaling dimensions. Let us then
consider the collection of all deformations of given scaling dimension $\Delta$, ${\cal O}_\alpha$.\footnote{One can generalize this discussion by turning on deformations of different scaling dimensions.} These deformations are in a (possibly reducible) representation of the $0$-form global symmetry group $G^{(X)}_0$ of the source CFT $X$. Namely, given a group element $g\in G^{(X)}_0$ we have $g\cdot {\cal O}_\alpha={\cal O}_{g(\alpha)}$, meaning that acting on the deformation of given scaling dimension, we obtain another deformation of the same scaling dimension.
Some morphisms, the basic ones, thus correspond to operators ${\cal O}_\alpha$. This is not the most general deformation:
A general morphism is defined by a sequence of basic deformations, an ordered tuple $$\{{\cal O}_1,\,{\cal O}_2,\,\cdots , {\cal O}_n\}\equiv {\cal O}_n\circ \cdots \circ {\cal O}_2\circ {\cal O}_1\,,$$
such that we first flow with ${\cal O}_1$, then we deform the IR CFT with ${\cal O}_2$, and so on.
Although we defined the above with operator deformation we can also consider gauging a subgroup of $G^{(X)}_0$ as one of the deformations.
The order of the deformations might matter. In particular, in some cases the sequence of deformations only makes sense in a particular order.
For example, we might want to gauge a symmetry which only emerges after we deform the source theory.
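The bookkeeping above is simple enough to sketch concretely. The following toy Python model (all labels, such as {\tt gauge\_emergent\_G}, are purely illustrative assumptions, not part of any physics library) represents a $1$-morphism as an ordered tuple of basic deformations between labeled CFTs, with composition given by concatenation of the sequences:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Morphism:
    """A 1-morphism: an ordered sequence of basic deformations
    (operator deformations or gaugings) between labeled CFTs."""
    source: str
    target: str
    deformations: tuple  # applied left to right

def compose(g, f):
    """g . f : first flow with f, then deform the resulting IR CFT with g.
    Defined only when the target of f matches the source of g."""
    assert f.target == g.source, "morphisms are not composable"
    return Morphism(f.source, g.target, f.deformations + g.deformations)

f = Morphism("X", "Y", ("O1",))
# gauging a symmetry that only emerges in Y cannot be applied directly to X:
g = Morphism("Y", "Z", ("gauge_emergent_G", "O2"))
gf = compose(g, f)
assert gf == Morphism("X", "Z", ("O1", "gauge_emergent_G", "O2"))
```

Note that the composite is stored as the full ordered sequence rather than being reduced to a single basic deformation, mirroring the discussion above.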
\section{Higher category structure}
We can think of the group elements of $G^{(X)}_0$, or more precisely certain equivalence classes to be defined soon, as generating $2$-morphisms connecting different $1$-morphisms.
The sequence of the CFTs appearing in the definition of a morphism has the following sequence of $0$-form symmetries,
$$\{G^{(X_1)}_0,\,G^{(X_2)}_0,\,\cdots\}\,.$$ We recall that the deformations can be operator deformations or gauging of symmetries. The symmetries in the sequence do not have to be subgroups (or quotients) of previous ones as some global symmetry can emerge in the IR. Moreover, even if part of the global symmetry of the source CFT is unbroken by the deformation, it can act trivially on the target CFT. Given two different morphisms $f=\{{\cal O}_1, \ldots, {\cal O}_n\}:X_1 \rightarrow \cdots \rightarrow X_{n+1}$ and $f'=\{{\cal O}'_1, \ldots, {\cal O}'_n\}:X_1 \rightarrow \cdots \rightarrow X_{n+1}$ corresponding to the same sequence of CFTs, we might be able to define a $2$-morphism between them, denoted by $\alpha : f \Rightarrow f'$
in the following manner.
A $2$-morphism is an ordered set of
pairs, each consisting of a group element and a source operator (see Figure \ref{2cat}),
\begin{eqnarray}
\label{eq: 2-morph}
\alpha \equiv \{ (g_1, {\cal O}_1),\,(g_2, {\cal O}_2),\,\dots,\,(g_n, {\cal O}_n)\}\,,
\end{eqnarray}
such that,
\begin{eqnarray}
g_i \in G_0^{(X_i)} \; \text{ and } \; {\cal O}'_i = g_i \cdot {\cal O}_i \; \text{ for each } i=1, \dots, n \,,
\end{eqnarray}
{\it i.e.}~in each pair the group element transforms a deformation in $f$ into the corresponding one in $f'$. This definition ensures that every $2$-morphism uniquely specifies the source and target 1-morphisms, as required by the axioms of $2$-categories. Note that we do not assume that any two $1$-morphisms connecting the same objects are related by a $2$-morphism: the corresponding deformations might not be related by an action of the $0$-form symmetry.
%
As each deformation ${\cal O}_i$ might preserve some subgroup $H_i\subset G^{(X_i)}_0$,\footnote{The subgroup $H_i= {\rm Stab}_{G_0^{(X_i)}}({\cal O}_i)$ preserves a deformation $({\cal O}_i)_\alpha$ if $({\cal O}_i)_{h(\alpha)}=({\cal O}_i)_\alpha$ for every $h\in H_i$.}
to be more precise we should replace each group element $g_i$ by the left coset $g_i H_i$, {\it i.e.} $\alpha = \{(g_1 H_1, {\cal O}_1)\,, \ldots, (g_n H_n, {\cal O}_n) \} $. Note that the identity $2$-morphism $Id_f$ on $f$ is just $Id_f= \{(H_1, {\cal O}_1),\,\ldots,\,(H_n, {\cal O}_n)\}$\footnote{Here, we do the following identification. If for some $i$, we have $X_i=X_{i+1}$ and ${\cal O}_i=id_{X_i}$ {\it i.e.} {\it no deformation}, then $H_i= G_0^{(X_i)}$, and we remove the $i$th entry from the sequence of $1$- and $2$-morphisms. This physically means we do nothing at step $i$.}.
In order not to clutter notations, we will keep labeling the equivalence classes by representative group elements.
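As a toy illustration of how group elements act on deformation sequences and compose vertically, consider the following Python sketch, in which each $0$-form symmetry group is modeled by the cyclic group of order $N$ acting on deformation labels by shifting an index (the choice of cyclic group and all names are our illustrative assumptions):

```python
# Toy model: each G_0^{(X_i)} is the cyclic group Z_N acting on N
# deformations O_0, ..., O_{N-1} by shifting the index.
N = 5  # illustrative group order

def act(g, op_index):
    """Action of a group element g (an integer mod N) on a deformation."""
    return (op_index + g) % N

def vertical(beta, alpha):
    """Vertical composition: entrywise multiplication g_i -> h_i g_i."""
    assert len(alpha) == len(beta)
    return [(h + g) % N for h, g in zip(beta, alpha)]

f = [0, 2, 1]        # a 1-morphism: indices of the basic deformations
alpha = [1, 3, 2]    # a 2-morphism taking f to f'
f_prime = [act(g, o) for g, o in zip(alpha, f)]
beta = [2, 1, 4]     # a 2-morphism taking f' to f''
f_pp = [act(g, o) for g, o in zip(beta, f_prime)]

# Composing vertically first and then acting agrees with acting twice:
assert [act(g, o) for g, o in zip(vertical(beta, alpha), f)] == f_pp
```

In this abelian toy model the stabilizer subgroups are trivial, so no coset bookkeeping is needed; in general one would track the cosets $g_i H_i$ as above.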
\begin{figure}
\includegraphics[scale=0.2,trim={5px 200px 5px 200px},clip]{2cat.pdf}
\caption{ \label{2cat}The $2$-morphism as a sequence of group elements.}
\end{figure}
The definition of 2-category requires the presence of two different compositions (vertical and horizontal) for $2$-morphisms, satisfying a constraint called {\it interchange law}. In our construction, they are defined as follows.
Given two $2$-morphisms $\alpha_1$ and $\alpha_2$
\begin{equation}
\begin{aligned}
\alpha_1 &=\{(g_1H_1, {\cal O}_1),\ldots,\, (g_n H_n, {\cal O}_n)\,\}\,, \\
\alpha_2 &=\{(h_1 K_1, {\cal O}'_1),\ldots,\, (h_n K_n, {\cal O}'_n)\,\}\,,
\end{aligned}
\end{equation}
the vertical composition is given naturally using the group multiplication,\footnote{Note that, as a necessary condition for the vertical composition of $\alpha_1$ and $\alpha_2$ to be defined, we need that ${\cal O}'_i = g_i \cdot {\cal O}_i$ for some $g_i \in G_0^{(X_i)}$ for each $i$ in the sequence. In particular, these exist only if the sequence of CFTs in both $2$-morphisms is identical.}
\begin{equation}
\begin{aligned}
\alpha_2 \bullet \alpha_1 & = \{(h_1 K_1 g_1 H_1, {\cal O}_1),\,\ldots,\, (h_n K_n g_n H_n, {\cal O}_n)\} \\
& =\{(h_1 g_1 H_1, {\cal O}_1),\,\ldots,\, (h_n g_n H_n, {\cal O}_n)\}\,,
\end{aligned}
\end{equation}
where the last line follows since we have the relation $K_i = g_i H_i g_{i}^{-1} $ between the stabilizer subgroups of the deformations.\footnote{Due to this, we can drop the stabilizer subgroups from our notation, but we write them explicitly whenever they are required.} The horizontal composition of two $2$-morphisms $\alpha_1$ and $\beta_1$ is naturally defined as follows. Let
\begin{equation}
\begin{aligned}
\alpha_1 &=\{(g_1 H_1, {\cal O}_1),\ldots,\, (g_n H_n, {\cal O}_n)\,\}\,, \\
\beta_1 &=\{(k_1 L_1, {\cal U}_1),\ldots,\, (k_m L_m, {\cal U}_m)\,\}\,,
\end{aligned}
\end{equation}
the horizontal composition then is\footnote{Note that the horizontal composition of $\alpha_1$ and $\beta_1$ requires the target of $\{{\cal O}_i\}_{i=1}^n: X_1\rightarrow X_{n+1}$ to match the source of $\{{\cal U}_\ell\}_{\ell=1}^m:X'_1 \rightarrow X'_{m+1}$, {\it i.e.} $X_{n+1}=X'_1$.}
\begin{equation}
\begin{aligned}
\beta_1 \circ \alpha_1 = \; & \{(g_1, {\cal O}_1),\, \ldots, \, (g_n, {\cal O}_n), \\
& \qquad\qquad \, (k_1, {\cal U}_1),\,\ldots,\, (k_m, {\cal U}_m)\}
\end{aligned}
\end{equation}
as we concatenate the two sequences of deformations. %
This implies that any $2$-morphism of the form \eqref{eq: 2-morph} can be expressed as the horizontal composition of a sequence of {\it basic} $2$-morphisms $\alpha_i = (g_i, {\cal O}_i)$, which act on basic deformations. %
%
Let us now verify that the interchange law is satisfied (see Figure \ref{inter}). First,\footnote{Here the $2$-morphism $\beta_2$ is defined as the sequence $ \{ (p_1,{\cal U'}_1),\, \ldots, \, (p_m, {\cal U'}_m) \}$.}
\begin{eqnarray}
&& (\beta_2 \bullet \beta_1) \circ (\alpha_2 \bullet \alpha_1) = \\
&& \qquad = \{(p_1 k_1, {\cal U}_1),\,\ldots,\, (p_m k_m, {\cal U}_m)\} \circ \nonumber\\
&& \qquad\qquad\quad \, \circ \, \{(h_1 g_1 , {\cal O}_1),\,\ldots,\, (h_n g_n , {\cal O}_n)\} \nonumber \\
&& \qquad = \{(h_1 g_1, {\cal O}_1),\, \ldots, (h_n g_n, {\cal O}_n),\, \nonumber \\
&& \qquad\qquad\qquad \, (p_1 k_1, {\cal U}_1),\, \ldots, \, (p_m k_m, {\cal U}_m)\}\,. \nonumber
\end{eqnarray}
Whereas,
\begin{eqnarray}
&& \hspace{-1em} (\beta_2\circ \alpha_2) \bullet (\beta_1\circ \alpha_1) =\\
&& = \{(h_1, {\cal O'}_1),\, \ldots, \, (h_n, {\cal O'}_n),\, (p_1, {\cal U'}_1),\, \ldots,\, (p_m, {\cal U'}_m)\} \bullet \nonumber \\
&& \quad\quad \bullet \, \{(g_1, {\cal O}_1),\, \ldots, \, (g_n, {\cal O}_n), (k_1, {\cal U}_1),\, \ldots, \, (k_m, {\cal U}_m)\}\nonumber\\
&& = \{(h_1 g_1, {\cal O}_1),\, \ldots, (h_n g_n, {\cal O}_n),\, \nonumber \\
&& \qquad\qquad \, (p_1 k_1, {\cal U}_1),\, \ldots, \, (p_m k_m, {\cal U}_m)\}\,. \nonumber
\end{eqnarray}
We thus see that,
\begin{equation}
(\beta_2 \bullet \beta_1) \circ (\alpha_2 \bullet \alpha_1) =
(\beta_2\circ \alpha_2) \bullet (\beta_1\circ \alpha_1)\,,
\end{equation}
and the interchange law holds true. The category of CFTs together with the action of the $0$-form symmetry thus forms a strict $2$-category structure.
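The interchange computation above can also be checked mechanically in the same toy model, with $2$-morphisms represented as lists of cyclic-group elements (one per basic deformation; the deformation labels are left implicit, and all names and the group order are illustrative assumptions). Vertical composition multiplies group elements entrywise and horizontal composition concatenates sequences, so the two orders of composition agree:

```python
# Toy check of the interchange law: 2-morphisms are lists of Z_N elements.
N = 7  # order of the toy symmetry group -- an illustrative choice

def vert(b, a):
    """Vertical composition: entrywise group multiplication in Z_N."""
    assert len(a) == len(b)
    return [(x + y) % N for x, y in zip(b, a)]

def horiz(b, a):
    """Horizontal composition: concatenate (first a's entries, then b's)."""
    return a + b

a1, a2 = [1, 2, 3], [4, 5, 6]          # act on a length-3 deformation sequence
b1, b2 = [0, 1, 2, 3], [6, 5, 4, 3]    # act on a length-4 deformation sequence

lhs = horiz(vert(b2, b1), vert(a2, a1))
rhs = vert(horiz(b2, a2), horiz(b1, a1))
assert lhs == rhs  # (b2 . b1) o (a2 . a1) == (b2 o a2) . (b1 o a1)
```

The equality holds because vertical composition never mixes entries belonging to different basic deformations, exactly as in the general argument above.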
\begin{figure}
\includegraphics[scale=0.2,trim={5px 200px 5px 200px},clip]{inter.pdf}
\caption{ \label{inter}The interchange law.}
\end{figure}
Note that gauging can be incorporated in the same structure. We take a subgroup of $G^{(X)}_0$ for a CFT $X$ and gauge it. This breaks $G^{(X)}_0$ to the centralizer of the gauge subgroup and further removes anomalous abelian factors. To gauge a symmetry we choose an embedding of the gauge group in $G^{(X)}_0$. We thus can discuss different equivalent ways to do so which are related by an action of the global symmetry, leading to $2$-morphisms. Similarly, we can discuss irrelevant deformations, which are morphisms from an object to itself, and exactly marginal deformations, which take us between different objects with no flow involved. We will discuss the latter next.
\section{Conformal manifold $2$-category}
Let us consider the special case of exactly marginal deformations.
Most of the concrete examples of theories with exactly marginal operators involve supersymmetric CFTs,\footnote{See {\it e.g.} \cite{Leigh:1995ep,Green:2010da,Komargodski:2020ved,Razamat:2020pra,Gomis:2015yaa,Baggio:2017mas,Baggio:2017aww,Niarchos:2021iax,Perlmutter:2020buo} for some results in the supersymmetric case.} though special degenerate cases of theories without supersymmetry are known to exist. See {\it e.g.} the discussion in \cite{Bashmakov:2017rko}. %
Exactly marginal deformations of supersymmetric theories parametrize what is called the conformal manifold ${\cal M}_c$.
Here no RG flow is triggered on ${\cal M}_c$: the values of the couplings are essential as they determine the target CFT and have a geometrical meaning as (local) coordinates on the conformal manifold. %
As a result, our procedure of defining $1$-morphisms with a sequence of flows is better interpreted as a concatenation of infinitesimal exactly marginal deformations. Geometrically, each such concatenation consists of a series of small consecutive deviations from the source CFT point along the conformal manifold: it forms a path in ${\cal M}_c$. Therefore, we conclude that $1$-morphisms between CFTs on the same conformal manifold are paths between the corresponding points, such that different paths correspond to different morphisms.\footnote{
Note that here it is somewhat natural to identify homotopically equivalent paths. In that case, every morphism becomes invertible, with the inverse given by the oppositely oriented path. This gives the category of the CFTs residing on the same conformal manifold the structure of the {\it path groupoid} \, ${\mathbb P}_1({\cal M}_c)$ of the conformal manifold
(if we identify homotopically equivalent deformations). As we will soon discuss, in some cases one can associate more than one 1-morphism to a given path. We can consider the skeletal category of the conformal manifold groupoid. If we identify morphisms corresponding to the same path this is given by the homotopy group of the conformal manifold. Moreover, in the skeletal category of ${\cal C}^{(D)}$ theories residing on the same conformal manifolds will be identified as objects.}
Turning to $2$-morphisms, we should look at the global symmetry. On generic points of the conformal manifold, it is described by the group $G_{{\cal M}_c}$. However, there might be special loci within ${\cal M}_c$ where the symmetry gets enhanced to a bigger group $G_\text{locus} \supset G_{{\cal M}_c}$.
By definition, all the exactly marginal deformations preserve $G_{{\cal M}_c}$, but on an enhancement locus, where the symmetry is larger, various deformations might be again related to each other by the action of the enhanced global symmetry group. This gives us the $2$-category structure as before. Here however, each $2$-morphism is parametrized by a sequence of (equivalence classes of) group elements associated to loci where the two paths in question intersect a locus with enhanced global symmetry.\footnote{To be precise, here the $2$-morphisms are parametrized by pairs, where the first element belongs to the appropriate coset and the second element is the corresponding source deformation.} Therefore, the source and target $1$-morphisms might correspond to the same path or different paths which intersect at least at the same loci of enhanced symmetry (see Figure \ref{confm}).
The loci with enhanced symmetry behave differently from generic points because these loci have additional marginal operators (which are marginally irrelevant in supersymmetric theories) \cite{Green:2010da}. Thus, turning on generic marginal deformations, we do generate a flow which ends up on the same conformal manifold.
The $2$-category structure is intimately related with the symmetry structure of the conformal manifold.
\begin{figure}
\includegraphics[scale=0.2]{confm.pdf}
\caption{ \label{confm}A conformal manifold. On the left we depict morphisms corresponding to paths. On the right, the shaded blue and orange lines denote loci with enhanced symmetry. The two $1$-morphisms (black and green paths) are related by a $2$-morphism $\alpha=\{(g_1, {\cal O}_1), (g_2, {\cal O}_2), (g_3, {\cal O}_3)\}$, defined via the intersection of the paths (corresponding to the $1$-morphisms) and the enhancement loci. In general, when leaving one such locus we have multiple choices of how to break the symmetry, all related to each other by the action of elements of the enhanced symmetry group. These group elements $g_i$ determine the structure of $\alpha$.}
\end{figure}
\section{Monoidal structure}
The category ${\cal C}^{(D)}$ we have built admits a natural monoidal structure, {\it i.e.}~a tensor product. Given $X,Y\in {\cal C}^{(D)}_0$, we obtain a new object $Z \equiv X \otimes Y \in {\cal C}^{(D)}_0$ by taking the tensor product of Hilbert spaces and (extended) operator algebras
of the theories $X$ and $Y$. This produces the decoupled sum of the original degrees of freedom and thus it forms a consistent theory. If $X$ and $Y$ both admit a Lagrangian description, the path integral of $Z$ is nothing but the product of the path integrals of $X$ and $Y$ (the action of $Z$ is simply the sum $S_Z = S_X + S_Y$), therefore all correlation functions factorize. The {\it unit object} $\mathds{1}$ is given by the empty theory, with no dynamical fields and vanishing action, such that its product with any other object leaves the latter invariant. Under these assumptions, the tensor product is manifestly associative.
In our construction, fields of particular spin, namely the scalar field and the fermions of various types in a given dimension, are {\it elementary} objects in the sense that they are not tensor products of other objects. Theories with a Lagrangian description are constructed by tensoring these elementary objects and then applying morphisms. The free matter fields are not necessarily the only elementary objects (see Section \ref{sec: discussion} for more details). Note that spin one fields in our discussion play a different role compared to other spins as these are associated to morphisms.
The tensor product admits a natural lift to a tensor product of $1$- and $2$-morphisms, which makes it into a $2$-functor on the $2$-category ${\cal C}^{(D)}$. Namely, given two source CFTs $X$ and $Y$ and the sequences of deformations $f : X \to X'$ and $\textsl{g} : Y \to Y'$, we can define the 1-morphism from $X\otimes Y$ to $X' \otimes Y'$, which we will denote by $f \otimes \textsl{g}$, as the combination of the two sequences of deformations. This operation preserves compositions and identity 1-morphisms. Similarly, given the deformations $f, f' : X \to X'$ and $\textsl{g}, \textsl{g}' : Y \to Y'$, such that there exist $\alpha : f \Rightarrow f'$ and $\beta : \textsl{g} \Rightarrow \textsl{g}'$ defined as above, we can build the $2$-morphism $\alpha \otimes \beta : f \otimes \textsl{g} \Rightarrow f' \otimes \textsl{g}'$ by combining the sequences describing $\alpha$ and $\beta$. Once again, this operation preserves compositions and identity $2$-morphisms. The structure that we obtain is that of a strict monoidal $2$-category $({\cal C}^{(D)}, \otimes)$.
\begin{figure}
\centering
\includegraphics[scale=0.13,trim={5px 5px 5px 5px},clip]{3cat.pdf}
\caption{Illustration of the interchange laws. We have a single object $*$ in the $3$-category and the $1$-morphisms are the CFTs.
Each axis corresponds to a different morphism: the $x$ axis is the tensor product $\otimes$, the $y$ axis is the composition $\circ$ of the category ${\cal C}^{(D)}$, while the $z$-axis is the vertical product $\bullet$ of the $2$-morphisms of ${\cal C}^{(D)}$.}
\label{fig:3cat}
\end{figure}
A natural way of understanding the structure of ${\cal C}^{(D)}$ is to view it as a (strict) 3-category, say ${\cal D}^{(D)}$, with a single object denoted by $*$ \cite{MR1712872}. Here the $1$-morphisms of ${\cal D}^{(D)}$ are the objects of ${\cal C}^{(D)}$, viewed abstractly as endomorphisms of $*$. Thus, the $2$- and $3$-morphisms of ${\cal D}^{(D)}$ are nothing but the $1$- and $2$-morphisms of ${\cal C}^{(D)}$ respectively.
The three compositions on ${\cal D}^{(D)}$ are then constructed as follows (see Figure \ref{fig:3cat}). The vertical composition of $3$-morphisms of ${\cal D}^{(D)}$ coincides with the $\bullet$ of ${\cal C}^{(D)}$. Next, the functor describing the horizontal composition of the $2$-morphisms of ${\cal D}^{(D)}$, and its lift to $3$-morphisms, is the horizontal composition $\circ$ of ${\cal C}^{(D)}$. Finally, the 2-functor providing the composition of $1$-morphisms of ${\cal D}^{(D)}$, together with its lift to $2$- and $3$-morphisms, is the tensor product $\otimes$.
The axioms of a $3$-category require that these three different structures are compatible with each other and satisfy
some properties, the interchange laws:
\begin{eqnarray}\label{intlaws}
&&\left(\delta\bullet\gamma\right)\circ\left(\beta\bullet\alpha\right) =\left(\delta\circ\beta\right)\bullet\left(\gamma\circ\alpha\right)\,,\nonumber\\
&&\left(\sigma\otimes\beta\right)\bullet\left(\rho\otimes\alpha\right) =\left(\sigma\bullet\rho\right)\otimes\left(\beta\bullet\alpha\right)\,,\\
&&\left(\mu\otimes\gamma\right)\circ\left(\rho\otimes\alpha\right) =\left(\mu\circ\rho\right)\otimes\left(\gamma\circ\alpha\right)\,.\nonumber
\end{eqnarray}
These can be read from the three planes in Figure \ref{fig:3cat}.
The first interchange law is the one we already checked above for $2$-morphisms
in $\mathcal{C}^{(D)}$. To prove the others, we use the definitions for $1$-morphisms
and $2$-morphisms in $\mathcal{C}^{(D)}$. We denote,
\begin{eqnarray}
&&f \,: X_1\to Y_1\, = \left\{ \mathcal{O}_{1},\dots,\mathcal{O}_{n}\right\} \,,\\
&&\;\textsl{g} \,: X_2\to Y_2\, =\left\{ \mathcal{U}_{1},\dots,\mathcal{U}_{m}\right\}\,.\nonumber
\end{eqnarray}
Then, we write their tensor product as
\begin{eqnarray}
&&f\otimes \textsl{g}\,: X_1\otimes X_2\to Y_1\otimes Y_2\, =\\
&&\qquad\qquad\qquad \left\{ \mathcal{O}_{1},\cdots,\mathcal{O}_{n},\mathcal{U}_{1},\cdots,\mathcal{U}_{m}\right\}\,. \nonumber
\end{eqnarray} Note that here the ordering between the sets of $\mathcal{O}$'s and $\mathcal{U}$'s does not matter. Only the relative ordering between different ${\cal O}$'s (resp.~${\cal U}$'s) does. Therefore, the product is commutative: $f\otimes \textsl{g}=\textsl{g}\otimes f$.
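The statement that only the relative ordering within each tensor factor matters can be phrased as a small check (a toy sketch with illustrative labels; deformations of the two decoupled factors simply commute past each other):

```python
def tensor(f, g):
    """Tensor product of 1-morphisms: concatenate deformation sequences."""
    return f + g

def equivalent(s1, s2, factor_of):
    """Two sequences define the same tensor-product morphism iff the
    relative ordering within each decoupled factor agrees."""
    factors = {factor_of[d] for d in s1} | {factor_of[d] for d in s2}
    return all([d for d in s1 if factor_of[d] == x] ==
               [d for d in s2 if factor_of[d] == x] for x in factors)

f = ["O1", "O2"]          # deformations of theory X
g = ["U1", "U2", "U3"]    # deformations of theory Y
which = {"O1": "X", "O2": "X", "U1": "Y", "U2": "Y", "U3": "Y"}

assert equivalent(tensor(f, g), tensor(g, f), which)  # f tensor g = g tensor f
assert not equivalent(["O2", "O1"], f, which)         # order within X matters
```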
Next we employ the following notation for $2$-morphisms:
\begin{eqnarray}
\alpha=\left\{ \left(g_{1}^{(\alpha)},\mathcal{O}_{1}\right),...,\left(g_{n}^{(\alpha)},\mathcal{O}_{n}\right)\right\} \, ,
\end{eqnarray}
so that the tensor product of $2$-morphisms reads,\footnote{Here we consider the tensor product of the groups acting on the two tensored components. Note that the global symmetry of the tensor product of two theories might be bigger than the tensor product of the symmetries of the two components. An example is tensoring a collection of, say, $2$ complex scalar fields. However, for the purpose of checking the interchange laws this enhancement is not essential.}
\begin{eqnarray}
&&\rho\otimes\alpha=\left\{ \left(\mathds{1}\otimes g_{1}^{(\rho)},\mathcal{U}_{1}^{\textsl{g}_1}\right),\dots,\right.\\
&&\left.\left(\mathds{1}\otimes g_{m}^{(\rho)},\mathcal{U}_{m}^{\textsl{g}_1}\right), \left(g_{1}^{(\alpha)}\otimes \mathds{1},\mathcal{O}^{f_1}_{1}\right),\dots,\left(g_{n}^{(\alpha)}\otimes \mathds{1},\mathcal{O}_{n}^{f_1}\right)\right\} \,. \nonumber
\end{eqnarray}
Using the definitions of compositions we then can prove the interchange laws \eqref{intlaws}. For example,
\begin{widetext}
\begin{eqnarray}
&&\left(\sigma\otimes\beta\right) \bullet\left(\rho\otimes\alpha\right)=
\left\{ \left(\mathds{1}\otimes g_{1}^{(\sigma)},\mathcal{U}_{1}^{\textsl{g}_2}\right),\cdots ,\left(\mathds{1}\otimes g_{m}^{(\sigma)},\mathcal{U}_{m}^{\textsl{g}_2}\right), \left(g_{1}^{(\beta)}\otimes \mathds{1},\mathcal{O}_{1}^{f_2}\right),\cdots ,\left(g_{n}^{(\beta)}\otimes \mathds{1},\mathcal{O}_{n}^{f_2}\right)\right\}
\bullet\\&& \quad \bullet \left\{ \left(\mathds{1}\otimes g_{1}^{(\rho)},\mathcal{U}_{1}^{\textsl{g}_1}\right),\dots ,\left(\mathds{1}\otimes g_{m}^{(\rho)},\mathcal{U}_{m}^{\textsl{g}_1}\right), \left(g_{1}^{(\alpha)}\otimes \mathds{1},\mathcal{O}_{1}^{f_1}\right),\dots ,\left(g_{n}^{(\alpha)}\otimes \mathds{1},\mathcal{O}_{n}^{f_1}\right)\right\}\nonumber\\&&
= \left\{ \left( \mathds{1}\otimes g_{1}^{(\sigma)}g_{1}^{(\rho)},\mathcal{U}_{1}^{\textsl{g}_1}\right),\dots ,\left(\mathds{1}\otimes g_{m}^{(\sigma)}g_{m}^{(\rho)},\mathcal{U}_{m}^{\textsl{g}_1}\right), \left(g_{1}^{(\beta)}g_{1}^{(\alpha)}\otimes \mathds{1},\mathcal{O}_{1}^{f_1}\right),\dots,\left(g_{n}^{(\beta)}g_{n}^{(\alpha)}\otimes \mathds{1}, \mathcal{O}_{n}^{f_1}\right)\right\} \nonumber\\&&
= \left(\left\{ \left(g_{1}^{(\sigma)},\mathcal{U}_{1}^{\textsl{g}_2}\right),\dots ,\left(g_{m}^{(\sigma)},\mathcal{U}_{m}^{\textsl{g}_2}\right)\right\} \bullet\left\{ \left(g_{1}^{(\rho)},\mathcal{U}_{1}^{\textsl{g}_1}\right),\dots,\left(g_{m}^{(\rho)},\mathcal{U}_{m}^{\textsl{g}_1}\right)\right\} \right) \otimes \nonumber\\&& \quad
\left(\left\{ \left(g_{1}^{(\beta)},\mathcal{O}_{1}^{f_2}\right),\dots ,\left(g_{n}^{(\beta)},\mathcal{O}_{n}^{f_2}\right)\right\} \bullet\left\{ \left(g_{1}^{(\alpha)},\mathcal{O}_{1}^{f_1}\right),\dots ,\left(g_{n}^{(\alpha)},\mathcal{O}_{n}^{f_1}\right)\right\}\right)\nonumber\\&&
= \left(\sigma\bullet\rho\right)\otimes\left(\beta\bullet\alpha\right)\,.\nonumber
\end{eqnarray}
\end{widetext}
In this equation, the superscripts of ${\cal O}$ and ${\cal U}$ denote the $1$-morphisms to which the basic deformations belong.
Also, as usual, $\mathds{1}$ here denotes the identity element of a corresponding group.
\section{Category of CFTs in any dimension}
Let us next define the category of all CFTs ${\cal C}$. The set of objects in ${\cal C}$ is given by ${\cal C}_0=\cup_{D=0}^\infty {\cal C}^{(D)}_0$.
The set of morphisms includes the morphisms of ${\cal C}^{(D)}$,
\begin{equation}
\cup_{D=0}^\infty {\cal C}^{(D)}_1\subset {\cal C}_1\,,
\end{equation} but also contains additional morphisms connecting objects in ${\cal C}^{(D)}_0$ with different values of $D$. Physically, we define these additional morphisms as follows. Given $X \in {\cal C}^{(D)}_0$ we can place it on a manifold $M^{(D')} \times m^{(D-D')}$, where $m^{(D-D')}$ is a compact space. We can also turn on background gauge fields supported on $m^{(D-D')}$, $\{{\bf {\cal A}}\}$. Then we can discuss the low energy physics of this construction. The resulting physics might be describable by a $D'$-dimensional CFT $Y \in {\cal C}^{(D')}_0$. If that is the case, we define a morphism $X \to Y$, which is parametrized by the compactification geometry $\{m^{(D-D')},\, {\bf {\cal A}}\}$ and the source CFT. We will refer to these as {\it across-dimensions} morphisms.
As was the case with {\it in-dimension} morphisms, we can naturally compose various across-dimension morphisms with each other and also across-dimension morphisms with in-dimension morphisms. A general morphism is thus defined by an ordered sequence of deformations, some of which are in-dimension deformations and some are across-dimension compactifications.
Next, the compactification deformation might preserve the $0$-form symmetry of the source CFT or it might break it, say by the choice of the background gauge fields. Different compactifications preserving the same symmetry of the source CFT thus again might be related by the action of the $0$-form symmetry group of the source CFT. This provides $2$-morphisms between compactification $1$-morphisms. More generally, a morphism defined by a sequence of deformations can be related by a sequence (of equivalence classes) of group elements as before. This provides a $2$-category structure on ${\cal C}$. Note that, abstractly, this structure does not distinguish in-dimension and across-dimension morphisms and treats them uniformly.
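As a purely illustrative sketch (with invented names, not part of any formal construction in the text), one can model the skeleton of this structure in code: objects are CFT labels tagged by spacetime dimension, $1$-morphisms are ordered sequences of deformation labels, and composition concatenates the sequences in the specified order:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CFT:
    name: str
    dim: int  # spacetime dimension D

@dataclass(frozen=True)
class Morphism:
    src: CFT
    tgt: CFT
    steps: tuple  # ordered sequence of deformation labels

def compose(g, f):
    # composition g . f: first apply f, then g; defined only when
    # the target of f matches the source of g
    assert f.tgt == g.src, "morphisms not composable"
    return Morphism(f.src, g.tgt, f.steps + g.steps)

# hypothetical objects: a 6d source and two 4d theories
X = CFT("X", 6)
Y = CFT("Y", 4)   # e.g. obtained from X by compactification
Z = CFT("Z", 4)

f = Morphism(X, Y, (("compactify", "T^2 with flux"),))  # across-dimension
g = Morphism(Y, Z, (("superpotential", "O_5"),))        # in-dimension

h = compose(g, f)
print(h.src.name, "->", h.tgt.name, "via", len(h.steps), "steps")
```

Note that the data structure treats in-dimension and across-dimension steps uniformly, mirroring the remark above that the abstract $2$-category structure does not distinguish them.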
The $0$-form symmetry of the lower dimensional target theory can have several higher dimensional origins. First, it can be the $0$-form symmetry of the source CFT. Second, it can come from the choice of boundaries of the compactification geometries. Finally, it can come from a higher form symmetry of the source CFT, with the corresponding topological operators wrapping the compactification surface.
In addition, some of the $0$-form symmetry of the target CFT might be emergent, as usual. Finally, non-topological operators, both local and non-local, can give rise to local operators in lower dimensions which we can associate to morphisms; see {\it e.g.}~\cite{babuip} and Appendix E of \cite{Kim:2017toz}.
To define a monoidal structure, we need to extend the set of objects to incorporate coupled CFTs of different dimensionalities. One can naturally consider a product of two CFTs of different dimensionalities $D$ and $D'<D$ by choosing some $D'$ dimensional hyperplane in $D$ dimensional space and placing the $D'$-dimensional CFT on it. The bulk CFT and the lower dimensional one are not coupled. Next, one can consider coupling the two CFTs in various ways, {\it e.g.}~by gauging symmetries or coupling operators. If the resulting theory is conformally invariant, we can add it to the set of objects. Note that tensoring more than two objects requires a more rigorous definition: {\it e.g.}~one can define tensored objects of the same dimensionality to share the same spacetime, and objects of lower dimensionality to live on submanifolds of the objects of higher dimensionality.
The resulting category we will denote by $\widetilde {\cal C}$. The category $\widetilde {\cal C}$ can be thought of as the category of CFTs in an arbitrary number of dimensions in the presence of arbitrary conformal defects.
\section{Smaller categories}
One can discuss various ways to simplify or constrain the category of all CFTs. One such way has already been discussed: we can consider the set of theories residing on the same conformal manifold. This restriction retains the structure of a $2$-category but does not have a natural tensor product.
Another way to obtain a more constrained class of theories is to consider CFTs with at least some amount of supersymmetry.
For example, one can define a category ${\cal C}^{(D=4|{\cal N}=1)}$ such that ${\cal C}^{(D=4|{\cal N}=1)}_0$ is the subset of ${\cal C}^{(D=4)}_0$ corresponding to theories which have at least $D=4$ ${\cal N}=1$ superconformal symmetry (are SCFTs). To define morphisms between the different theories here we need to be more careful. For example, if we want the deformations to preserve supersymmetry explicitly, we need to turn on several deformations at once: the scalar potentials and Yukawa terms following from a superpotential. Second, gauging a symmetry while preserving ${\cal N}=1$ supersymmetry corresponds to adding not just vector fields but also gaugino fermions. In our general definition this gauging thus corresponds to tensoring with free fields, gauging, and turning on potentials. One can define morphisms using these supersymmetric notions and hence consider the category of supersymmetric theories with morphisms being supersymmetric deformations.\footnote{Note that with the supersymmetric definition of morphisms we can have relevant deformations starting and ending on the same object. A simple example is the ${\cal N}=2$ duality in $D=3$ between a single chiral superfield and $U(1)$ gauge theory with a single charge $+1$ chiral superfield \cite{Dimofte:2011ju}.} This construction can be generalized to a category of supersymmetric CFTs in any dimension. Here one would insist on compactifications between dimensions that preserve some amount of supersymmetry. This implies that one has in general to consider {\it twisting} along with the compactification: that is, turning on nontrivial background fields for the R-symmetry. The $2$-category structure works in the same way as before.
\begin{figure}
\includegraphics[scale=0.18]{SU2flows.pdf}
\caption{ \label{SU2flows} An example of a network of flows. The source theory $A$ is a collection of thirty-one chiral superfields (complex scalars and Weyl fermions). The various morphisms and other objects are discussed in the text. The lines $\{{\cal O}_1,\,{\cal O}_2,\,{\cal O}_3\}$ and $\{{\cal O}_2,\,{\cal O}_3\}$ correspond to morphisms associated to deformations which can only be defined as a sequence and cannot be thought of directly as deformations of the source theory. }
\end{figure}
\section{Some examples}
\noindent{{\it Example I}: }\;\; Let us discuss an example of objects and morphisms in a category of supersymmetric CFTs: all of the morphisms and objects will preserve some amount of supersymmetry. We could phrase the sequence of flows in the non-supersymmetric category, but this would be more cumbersome. Consider as the source CFT $A$ the collection of $15+16=31$ ${\cal N}=1$ chiral superfields in four dimensions. These comprise complex scalars and Weyl fermions. Next we turn on a deformation $\{{\cal O}_1\}$ which corresponds to splitting the fields into $2\times 8+15$ and gauging an $SU(2)$ subgroup of the $U(31)$ global (non R) symmetry of $A$. Here we consider the symmetries preserving the structure of supermultiplets. The fields form $15$ singlets and $8$ fundamentals. This is a relevant deformation which takes us to CFT $B$. We can then, for example, take two of the eight doublets, form from them a mesonic operator, and deform $B$ by turning on a superpotential for this operator. This is a relevant deformation $\{{\cal O}_4\}$. In the IR this flows to CFT $E$. Note that from the point of view of $A$ we turned on a mass term, and thus the theory $E$ is $SU(2)$ SQCD with $N_f=3$ together with $15$ additional free chiral fields \cite{Seiberg:1994pq}. This SQCD flows in the IR to $15$ free fields, and thus CFT $E$ is a collection of $15+15=30$ free fields. We can consider this deformation directly at $A$, and then we label it as $\{ {\cal O}_1, \, {\cal O}_4\}$. Alternatively, we could first turn on the mass term, which would lead to a free CFT of $12+15=27$ fields in the IR, $H$, and then gauge the $SU(2)$ group with now six fundamental fields, leading again to $E$. Let us now consider starting from $B$. This theory has $SU(8)\times U(15)$ global symmetry and $28$ operators in the ${\bf 28}$ of $SU(8)$, which can be thought of as the mesons and baryons of $A$ after gauging. 
Let us consider an $SU(2)\times SU(6)\times U(1)$ subgroup of $SU(8)$ under which ${\bf 28}\to ({\bf 1},{\bf 15})\oplus({\bf 2},{\bf 6})\oplus ({\bf 1},{\bf 1})$. We couple the $({\bf 1},{\bf 15})$ to the fifteen free fields through a superpotential and denote this deformation by $\{{\cal O}_2\}$. This is a relevant deformation leading to CFT $C$.
The theory $C$ has (conjecturally) an emergent symmetry $SU(2)\times SU(6)\to E_6$ \cite{Razamat:2017wsk}. Note that, had we performed the deformation $\{{\cal O}_2\}$ directly on $A$, we would have obtained a cubic superpotential in the free theory, which is an irrelevant deformation leading us back to $A$. Hence the deformation $\{{\cal O}_1,\,{\cal O}_2\}$ represents an example of a {\it dangerously irrelevant} deformation on $A$ when intended as turning on both ${\cal O}_1$ and ${\cal O}_2$ simultaneously: however, as we stressed above, the deformations are considered to be taken one by one in a sequence of specified order. Here, $$\{{\cal O}_1,\,{\cal O}_2\} \neq \{{\cal O}_2,\,{\cal O}_1\}\sim\{{\cal O}_1\}.$$ The equivalence on the right just means that the target CFTs are the same but we will distinguish the two morphisms.
Next,
the $E_6$ global symmetry has $SU(3)^3$ maximal subgroup with one of the three $SU(3)$ factors emerging in the IR.
We can then consider various deformations making use of the emergent symmetry. For example, we can compactify the theory on a circle to three dimensions and gauge a diagonal combination of the three $SU(3)$s in the IR, turning on a Chern-Simons term at some level: we denote this deformation by $\{{\cal O}_3\}$, which is by itself a concatenation of two deformations (compactification and gauging).
This leads to CFT $D$. Note that we can consider the deformation $\{{\cal O}_1,\,{\cal O}_2,\, {\cal O}_3\}$ starting from $A$ and leading to $D$.
However, this deformation cannot be defined field theoretically in $A$, as we gauge an emergent symmetry, and it only makes sense as a sequence of deformations. Finally, we can start from an SCFT in six dimensions---the rank one E-string theory \cite{Ganor:1996mu,Seiberg:1996vs,Witten:1996qb,Morrison:1996pp}---which we denote by $F$.
This theory has $E_8$ global symmetry. We can then deform it by placing it on a torus with a flux breaking $E_8$ to $E_6\times U(1)$. We denote this deformation as $\{({\cal C}_{g=1},\, {\cal F})\}$. The theory will flow to a four dimensional CFT, $G$. A relevant superpotential deformation of $G$, denoted by $\{{\cal O}_5\}$ leads again to $D$. See \cite{Razamat:2017wsk,Razamat:2022gpm} for details.
We have discussed here some flows starting from $A$ and $F$: the resulting objects and morphisms are part of a much larger categorical structure and we only used the above as an illustration. Note that at each step we had a choice of a given subgroup to define the deformation. Different choices lead to equivalent theories in the IR, and thus the relevant deformations are related by $2$-morphisms defined by mapping one choice into the other one.
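The flows of this example can be recorded as a labeled directed graph; the sketch below (with schematic edge labels of our own choosing) checks that $E$ and $D$ are reachable from $A$, and that $D$ is also reachable from the six dimensional theory $F$:

```python
# Directed graph of the flows in Example I (labels are schematic).
edges = {
    ("A", "B"): "O1",             # gauge SU(2)
    ("B", "E"): "O4",             # meson superpotential (mass in A)
    ("A", "H"): "mass",           # mass term first
    ("H", "E"): "gauge SU(2)",    # then gauging, six fundamentals
    ("B", "C"): "O2",             # couple (1,15) to free fields
    ("C", "D"): "O3",             # compactify + gauge emergent SU(3)
    ("F", "G"): "(C_{g=1}, F)",   # E-string on a torus with flux
    ("G", "D"): "O5",             # superpotential deformation
}

def targets(node):
    return sorted(t for (s, t) in edges if s == node)

def reachable(src):
    # depth-first search over composable morphism sequences
    seen, stack = set(), [src]
    while stack:
        for t in targets(stack.pop()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

# both routes A -> B -> E and A -> H -> E end on the same object E
assert "E" in reachable("A") and "D" in reachable("A")
assert "D" in reachable("F")
print(sorted(reachable("A")))  # -> ['B', 'C', 'D', 'E', 'H']
```

This is of course only the skeleton of the $2$-categorical structure: it records which morphisms exist, not the $2$-morphisms relating equivalent sequences of deformations.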
\
\begin{figure}
\includegraphics[scale=0.3]{N4Mc.pdf}
\caption{ \label{N4Mc} Conformal manifold of ${\cal N}=4$ $SU(N>2)$ SYM. The manifold has three complex dimensions. Along one of the dimensions the supersymmetry is ${\cal N}=4$ and the global symmetry in ${\cal N}=1$ language is $SU(3)$. Along two of the dimensions the supersymmetry is ${\cal N}=1$ and the symmetry is generically $U(1)^2$, while on a general locus the supersymmetry is ${\cal N}=1$ and there is no continuous global symmetry.}
\end{figure}
\noindent{{\it Example II}: }\;\; Next, let us consider the conformal manifold of ${\cal N}=4$ SYM with $SU(N)$ gauge group. For $SU(N>2)$ the conformal manifold has three complex dimensions. Along one of the complex directions the supersymmetry is ${\cal N}=4$. Viewing this theory as an ${\cal N}=1$ SCFT, the global symmetry along this direction is $SU(3)$. Along the two additional complex directions the supersymmetry is broken to ${\cal N}=1$. One of these directions preserves a $U(1)^2$ subgroup of $SU(3)$, while on a generic locus of the conformal manifold the continuous global symmetry is completely broken and one only has R-symmetry and supersymmetry \cite{Leigh:1995ep}. General $1$-morphisms between two CFTs on the manifold correspond to continuous paths. If a path passes through a locus of enhanced symmetry, one has a choice of embedding for the deformation that breaks the enhanced global symmetry group once the path leaves the enhanced locus. If we have
two paths between the same pair of points on ${\cal M}_c$ which pass through the same loci of enhanced symmetry, we can define a $2$-morphism between them. The $2$-morphism is parametrized by a sequence of (equivalence classes of) group elements of the enhanced symmetry transforming the choice of one deformation into the other.
\section{Discussion and Comments}
\label{sec: discussion}
In this note, we have discussed a categorical language to organize our thinking of the space of CFTs.
This discussion fits the general framework of higher form/higher group/categorical symmetries. The layer we are trying to add corresponds to deformations of a CFT. The higher category of generalized symmetries
associated to a given theory acts on various operators in that theory. In particular, some of these operators can be used to deform a given CFT to a new CFT. These deformations are $1$-morphisms of a category, while the
symmetries provide a higher categorical structure. Another type of morphism is given by gauging some (generalized) symmetries. In particular, we have discussed in detail the application of this idea to operator deformations and gauging of $0$-form symmetries.\footnote{
The deformations can be thought of as space-time filling objects. For example, the operator deformations are terms in the action. These deformations are not topological in the usual sense and thus do not correspond to what is often called $(-1)$-form symmetries \cite{Cordova:2019uob,Vandermeulen:2022edk}.
However, as we are interested only in the fixed points, the fine details of the values of relevant and irrelevant couplings are inessential and one can view this as a topological property.}
There are various
ways in which the discussion can be extended. For example, we can consider gauging higher form/group symmetries \cite{Tachikawa:2017gyf}. The gauging of such symmetries does not lead to an RG flow but does change the spectrum of operators of different dimensionalities the theory has, and thus leads to a different CFT. Moreover, one can also consider gauging global symmetries of various forms on submanifolds of various codimensions \cite{Roumpedakis:2022aik}.
As our main motivation to develop the categorical language is to discuss various conjectures and questions regarding the space of all CFTs (with the hope that such a reformulation will eventually lead to deeper insights),
let us list some of the questions/conjectures.\footnote{See also \cite{razamatstrings22}.}
\begin{itemize}
\item {\it Is there a morphism in ${\cal C}^{(D)}_1$ to any given CFT from an object corresponding to a free theory in $D\leq 4$?} Remember that a free CFT is a tensor product of some number of free scalars and free fermions. This question amounts to wondering whether any CFT has a Lagrangian construction in a given number of dimensions. Note that by Lagrangian construction here we include sequences of deformations.
We can phrase this as asking whether one can define a set of {\it elementary objects} (which might not be unique) such that: (i) it includes free matter theories (ii) all the other theories are obtained from it by tensor products and deformations; and whether this set of elementary theories is strictly larger than the set of free theories. This question can be refined in various ways.
\item {\it Is there a supersymmetric morphism to any given SCFT from an object corresponding to a free theory in $D\leq 4$?} This question might be refined by demanding the deformations and collections of free fields to be also supersymmetric.
\item{\it Is there a morphism in ${\cal C}_1$ to any given CFT starting from an object corresponding to a CFT in $D=6$?} Here we wonder whether any CFT in lower dimensions can be obtained as a compactification, and possibly subsequent deformation, of a six dimensional CFT.
\item{\it Is any $D\leq 4$ (S)CFT obtained from a six dimensional CFT also a target of morphisms from free CFTs?} That is, whether all compactifications are across-dimensions dual to lower dimensional field theoretic constructions.
\item {\it What are the nontrivial objects with no outgoing morphisms which are not TQFTs?} Such theories are sometimes called dead-end CFTs \cite{Nakayama:2015bwa,Frenkel:1988xz}.
\item Studying the structure of theory space led in the past to various explicit quantitative results. An example is the relation between compactifications of $6d$ CFTs on surfaces and supersymmetric partition functions \cite{Pestun:2016zxk} of lower dimensional theories.
Here, the supersymmetric partition functions can either be labeled by the target lower dimensional CFT, {\it e.g.}~$Z[T_{4d}]$, and then are typically hard to compute, or by the across-dimensions morphisms, {\it e.g.}~$Z[T_{4d}]=Z[\left(m^{(2)},\{{\cal A}\}\right),T_{6d}]$, and then are often easier to derive. See {\it e.g.}~\cite{Alday:2009aq,Gadde:2009kb}.
\end{itemize}
\
\noindent{\bf Acknowledgments}:~
We are grateful to Chris Beem, Dan Freed, Zohar Komargodski, Elli Pomoni, Sakura Schafer-Nameki, Yuji Tachikawa, and Amos Yarom for insightful discussions and comments.
This research is supported in part by Israel Science Foundation under grant no. 2289/18 and grant no. 2159/22, by the I-CORE Program of the Planning and Budgeting Committee, by Grant No. I-1515-303./2019 from the GIF, the German-Israeli Foundation for Scientific Research and Development, and by BSF grant no. 2018204. SSR is grateful to the Aspen Center for Physics for hospitality during the initial stages of the project and to the Simons Center for Geometry and Physics.